Table of Contents

Entropy, Volume 20, Issue 4 (April 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Cover Story: What are the conceptual foundations of thermodynamics? Mathematicians have explored this question [...]
Displaying articles 1-96

Editorial


Open Access Editorial: Transfer Entropy
Entropy 2018, 20(4), 288; doi:10.3390/e20040288
Received: 12 April 2018 / Revised: 12 April 2018 / Accepted: 13 April 2018 / Published: 16 April 2018
PDF Full-text (181 KB) | HTML Full-text | XML Full-text
Abstract
Statistical relationships among the variables of a complex system reveal a lot about its physical behavior [...] Full article
(This article belongs to the Special Issue Transfer Entropy)
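As a concrete illustration of the editorial's topic, the sketch below estimates transfer entropy for discrete time series with a simple plug-in (histogram) estimator and history length one. The function names and the binary toy example are illustrative assumptions, not material from the editorial.

```python
# Minimal plug-in estimator of transfer entropy TE(X -> Y) for discrete
# (e.g., binary) time series, with history length 1. Illustrative only.
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """TE(X -> Y) in bits, using empirical (plug-in) probabilities."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_prev, x_prev)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_prev, x_prev)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_next, y_prev)
    singles_y = Counter(y[:-1])                     # y_prev
    n = len(x) - 1
    te = 0.0
    for (y_next, y_prev, x_prev), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y_prev, x_prev)]          # p(y_next | y_prev, x_prev)
        p_cond_self = pairs_yy[(y_next, y_prev)] / singles_y[y_prev]  # p(y_next | y_prev)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# Toy example: y copies x with a one-step lag, so TE(X -> Y) should be positive.
random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y))   # close to 1 bit
print(transfer_entropy(y, x))   # close to 0 bits
```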
Open Access Editorial: Information Decomposition of Target Effects from Multi-Source Interactions: Perspectives on Previous, Current and Future Work
Entropy 2018, 20(4), 307; doi:10.3390/e20040307
Received: 19 April 2018 / Revised: 19 April 2018 / Accepted: 19 April 2018 / Published: 23 April 2018
PDF Full-text (386 KB) | HTML Full-text | XML Full-text
Abstract
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches, and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field. Full article
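For readers new to the decomposition described above, here is a minimal sketch of the original Williams-Beer two-source PID (redundancy via Imin) on a discrete joint distribution. The helper names and the XOR example are my own illustration; they are not code from the editorial or from the special issue papers.

```python
# Two-source Williams-Beer PID (I_min redundancy) on a discrete joint
# distribution p[s1, s2, t], values in bits. Illustrative sketch only.
import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits from a 2-D joint probability table."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

def pid_imin(p):
    """Return (redundancy, unique_1, unique_2, synergy) for p[s1, s2, t]."""
    p_t = p.sum(axis=(0, 1))

    def specific(sum_axis):
        p_st = p.sum(axis=sum_axis)                # joint of one source and T
        p_s = p_st.sum(axis=1, keepdims=True)
        spec = np.zeros_like(p_t)
        for t in range(len(p_t)):
            if p_t[t] == 0:
                continue
            p_s_given_t = p_st[:, t] / p_t[t]
            nz = p_s_given_t > 0
            p_t_given_s = p_st[nz, t] / p_s[nz, 0]
            spec[t] = np.sum(p_s_given_t[nz] * np.log2(p_t_given_s / p_t[t]))
        return spec

    spec1, spec2 = specific(1), specific(0)        # specific info from S1 and S2
    redundancy = float(np.sum(p_t * np.minimum(spec1, spec2)))
    i1 = mutual_info(p.sum(axis=1))                # I(S1;T)
    i2 = mutual_info(p.sum(axis=0))                # I(S2;T)
    i12 = mutual_info(p.reshape(-1, p.shape[2]))   # I((S1,S2);T)
    return redundancy, i1 - redundancy, i2 - redundancy, i12 - i1 - i2 + redundancy

# XOR target: all information is synergistic, so the PID should be ~(0, 0, 0, 1) bit.
p = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p[s1, s2, s1 ^ s2] = 0.25
print(pid_imin(p))
```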

Research


Open Access Article: A Feature Extraction Method Using Improved Multi-Scale Entropy for Rolling Bearing Fault Diagnosis
Entropy 2018, 20(4), 212; doi:10.3390/e20040212
Received: 6 February 2018 / Revised: 16 March 2018 / Accepted: 19 March 2018 / Published: 21 March 2018
PDF Full-text (5400 KB) | HTML Full-text | XML Full-text
Abstract
A feature extraction method named improved multi-scale entropy (IMSE) is proposed for rolling bearing fault diagnosis. The method, which is based on the Pythagorean theorem and a similarity criterion, overcomes the information leakage that occurs when calculating the similarity of machinery systems. Features extracted from bearings under different conditions using IMSE are identified by a support vector machine (SVM) classifier. Experimental results show that the proposed method can extract the status information of the bearing. Compared with the multi-scale entropy (MSE) and sample entropy (SE) methods, the identification accuracy of the features extracted by IMSE is also improved. Full article
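For context, a minimal sketch of the baseline the article improves on, standard multi-scale entropy (coarse-graining followed by sample entropy), is given below. The parameter choices (m = 2, r = 0.15 of the signal's standard deviation) are common defaults and are not taken from the article.

```python
# Standard multi-scale entropy (MSE): coarse-grain the signal at each scale,
# then compute sample entropy of the coarse-grained series. Sketch only.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        # Chebyshev distance between all pairs of templates.
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templates)     # exclude self-matches

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()       # fix the tolerance from the original series
    out = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)   # coarse-graining
        out.append(sample_entropy(coarse, m, r))
    return out

rng = np.random.default_rng(0)
print(multiscale_entropy(rng.normal(size=1000), max_scale=3))
```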

Open Access Article: Viewpoint-Driven Simplification of Plant and Tree Foliage
Entropy 2018, 20(4), 213; doi:10.3390/e20040213
Received: 17 January 2018 / Revised: 16 March 2018 / Accepted: 16 March 2018 / Published: 21 March 2018
PDF Full-text (40946 KB) | HTML Full-text | XML Full-text
Abstract
Plants and trees are an essential part of outdoor scenes. They are represented by such a vast number of polygons that performing real-time visualization is still a problem in spite of the advantages of the hardware. Some methods have appeared to solve this drawback based on point- or image-based rendering. However, geometry representation is required in some interactive applications. This work presents a simplification method that deals with the geometry of the foliage, reducing the number of primitives that represent these objects and making their interactive visualization possible. It is based on an image-based simplification that establishes an order of leaf pruning and reduces the complexity of the canopies of trees and plants. The proposed simplification method is viewpoint-driven and uses the mutual information in order to choose the leaf to prune. Moreover, this simplification method avoids the pruned appearance of the tree that is usually produced when a foliage representation is formed by a reduced number of leaves. The error introduced every time a leaf is pruned is compensated for if the size of the nearest leaf is altered to preserve the leafy appearance of the foliage. Results demonstrate the good quality and time performance of the presented work. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)

Open Access Article: Comprehensive Evaluation of Coal-Fired Power Units Using Grey Relational Analysis and a Hybrid Entropy-Based Weighting Method
Entropy 2018, 20(4), 215; doi:10.3390/e20040215
Received: 4 March 2018 / Revised: 19 March 2018 / Accepted: 20 March 2018 / Published: 23 March 2018
PDF Full-text (1983 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, coal-fired power plants have contributed the largest share of power generation in China. The challenges of energy conservation and emission reduction for coal-fired power plants are growing rapidly owing to the rising proportion of renewable energy generation in total power generation. Energy saving power generation dispatch (ESPGD), based on power-unit sorting technology, is a promising approach to meet this challenge. Therefore, it is crucial to establish a reasonable and feasible multi-index comprehensive evaluation (MICE) framework for assessing the performance of coal-fired power units connected to the power grid. In this paper, a hierarchical multiple-criteria evaluation system was established. Besides the typical economic and environmental indices, the evaluation system also considers operational flexibility and power quality indices. A hybrid comprehensive evaluation model was proposed to assess unit operational performance. The model integrates grey relational analysis (GRA) with the analytic hierarchy process (AHP) and a novel entropy-based method (abbreviated as BECC), which incorporates the bootstrap method and the correlation coefficient (CC) into the entropy principle to obtain the objective weights of the indices. A case study on seven typical 600-megawatt coal-fired power units was then carried out to illustrate the proposed evaluation model, and a weight sensitivity analysis was developed in addition. The results of the case study show that unit 4 has power generating priority over the other units, and unit 2 ranks last, with the lowest grey relational degree. The weight sensitivity analysis shows that the environmental factor has the largest sensitivity coefficient. The validation analysis of the developed BECC weighting method shows that it is feasible for the MICE model and stable, with negligible uncertainty caused by the stochastic factor in the bootstrapping process. The detailed analysis of the results reveals that it is feasible to rank power units with the proposed evaluation model. Furthermore, it is beneficial to synthesize the updated multiple criteria in optimizing the power generating priority of coal-fired power units. Full article
(This article belongs to the Section Thermodynamics)

Open Access Article: Generalized Pesin-Like Identity and Scaling Relations at the Chaos Threshold of the Rössler System
Entropy 2018, 20(4), 216; doi:10.3390/e20040216
Received: 16 February 2018 / Revised: 15 March 2018 / Accepted: 20 March 2018 / Published: 23 March 2018
PDF Full-text (3305 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, using the Poincaré section of the flow we numerically verify a generalization of a Pesin-like identity at the chaos threshold of the Rössler system, which is one of the most popular three-dimensional continuous systems. As Poincaré section points of the flow show similar behavior to that of the logistic map, for the Rössler system we also investigate the relationships with respect to important properties of nonlinear dynamics, such as correlation length, fractal dimension, and the Lyapunov exponent in the vicinity of the chaos threshold. Full article
(This article belongs to the Special Issue Nonadditive Entropies and Complex Systems)

Open Access Article: Quantifying Tolerance of a Nonlocal Multi-Qudit State to Any Local Noise
Entropy 2018, 20(4), 217; doi:10.3390/e20040217
Received: 31 January 2018 / Revised: 4 March 2018 / Accepted: 21 March 2018 / Published: 23 March 2018
PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
We present a general approach for quantifying tolerance of a nonlocal N-partite state to any local noise under different classes of quantum correlation scenarios with arbitrary numbers of settings and outcomes at each site. This allows us to derive new precise bounds in d and N on noise tolerances for: (i) an arbitrary nonlocal N-qudit state; (ii) the N-qudit Greenberger–Horne–Zeilinger (GHZ) state; (iii) the N-qubit W state and the N-qubit Dicke states, and to analyse asymptotics of these precise bounds for large N and d . Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
Open Access Article: Partition Function and Configurational Entropy in Non-Equilibrium States: A New Theoretical Model
Entropy 2018, 20(4), 218; doi:10.3390/e20040218
Received: 18 January 2018 / Revised: 16 March 2018 / Accepted: 22 March 2018 / Published: 23 March 2018
PDF Full-text (5457 KB) | HTML Full-text | XML Full-text
Abstract
A new model of non-equilibrium thermodynamic states has been investigated on the basis of the fact that all thermodynamic variables can be derived from partition functions. We have thus attempted to define partition functions for non-equilibrium conditions by introducing the concept of pseudo-temperature distributions. These pseudo-temperatures are configurational in origin and distinct from kinetic (phonon) temperatures because they refer to the particular fragments of the system with specific energies. This definition allows thermodynamic states to be described either for equilibrium or non-equilibrium conditions. In addition, a new formulation of an extended canonical partition function, internal energy and entropy are derived from this new temperature definition. With this new model, computational experiments are performed on simple non-interacting systems to investigate cooling and two distinct relaxational effects in terms of the time profiles of the partition function, internal energy and configurational entropy. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)

Open Access Article: Robust Covariance Estimators Based on Information Divergences and Riemannian Manifold
Entropy 2018, 20(4), 219; doi:10.3390/e20040219
Received: 25 February 2018 / Revised: 12 March 2018 / Accepted: 12 March 2018 / Published: 23 March 2018
Cited by 1 | PDF Full-text (1286 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a class of covariance estimators based on information divergences in heterogeneous environments. In particular, the problem of covariance estimation is reformulated on the Riemannian manifold of Hermitian positive-definite (HPD) matrices. The means associated with information divergences are derived and used as the estimators. Without resorting to the complete knowledge of the probability distribution of the sample data, the geometry of the Riemannian manifold of HPD matrices is considered in mean estimators. Moreover, the robustness of mean estimators is analyzed using the influence function. Simulation results indicate the robustness and superiority of an adaptive normalized matched filter with our proposed estimators compared with the existing alternatives. Full article
(This article belongs to the Section Information Theory)
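As one concrete illustration of averaging on the manifold of Hermitian positive-definite matrices, the sketch below computes a log-Euclidean mean of sample covariance matrices. This is a generic manifold-aware mean given for illustration only; it is not one of the divergence-based estimators derived in the paper.

```python
# Log-Euclidean mean of Hermitian positive-definite (HPD) matrices:
# average in the matrix-logarithm domain, then map back with expm.
import numpy as np
from scipy.linalg import logm, expm

def log_euclidean_mean(matrices):
    """Mean of HPD matrices computed in the matrix-logarithm domain."""
    logs = [logm(m) for m in matrices]
    return expm(sum(logs) / len(logs))

rng = np.random.default_rng(1)

def random_hpd(n=3):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return a @ a.conj().T + n * np.eye(n)     # Hermitian and well-conditioned

samples = [random_hpd() for _ in range(20)]
m_le = log_euclidean_mean(samples)
print(np.allclose(m_le, m_le.conj().T))       # the mean stays Hermitian
print(np.linalg.eigvalsh(m_le).min() > 0)     # and positive-definite
```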

Open Access Article: Logarithmic Sobolev Inequality and Exponential Convergence of a Markovian Semigroup in the Zygmund Space
Entropy 2018, 20(4), 220; doi:10.3390/e20040220
Received: 29 December 2017 / Revised: 19 March 2018 / Accepted: 19 March 2018 / Published: 23 March 2018
PDF Full-text (313 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the exponential convergence of a Markovian semigroup in the Zygmund space under the assumption of logarithmic Sobolev inequality. We show that the convergence rate is greater than the logarithmic Sobolev constant. To do this, we use the notion of entropy. We also give an example of a Laguerre operator. We determine the spectrum in the Orlicz space and discuss the relation between the logarithmic Sobolev constant and the spectral gap. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)

Open Access Article: Information Dynamics of a Nonlinear Stochastic Nanopore System
Entropy 2018, 20(4), 221; doi:10.3390/e20040221
Received: 21 February 2018 / Revised: 19 March 2018 / Accepted: 21 March 2018 / Published: 23 March 2018
PDF Full-text (1678 KB) | HTML Full-text | XML Full-text
Abstract
Nanopores have become a subject of interest in the scientific community due to their potential uses in nanometer-scale laboratory and research applications, including infectious disease diagnostics and DNA sequencing. Additionally, they display behavioral similarity to molecular and cellular scale physiological processes. Recent advances in information theory have made it possible to probe the information dynamics of nonlinear stochastic dynamical systems, such as autonomously fluctuating nanopore systems, which has enhanced our understanding of the physical systems they model. We present the results of local (LER) and specific entropy rate (SER) computations from a simulation study of an autonomously fluctuating nanopore system. We learn that both metrics show increases that correspond to fluctuations in the nanopore current, indicating fundamental changes in information generation surrounding these fluctuations. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)

Open Access Article: Combining Generalized Renewal Processes with Non-Extensive Entropy-Based q-Distributions for Reliability Applications
Entropy 2018, 20(4), 223; doi:10.3390/e20040223
Received: 22 January 2018 / Revised: 12 March 2018 / Accepted: 24 March 2018 / Published: 25 March 2018
PDF Full-text (1420 KB) | HTML Full-text | XML Full-text
Abstract
The Generalized Renewal Process (GRP) is a probabilistic model for repairable systems that can represent the usual states of a system after a repair: as new, as old, or in a condition between new and old. It is often coupled with the Weibull distribution, widely used in the reliability context. In this paper, we develop novel GRP models based on probability distributions that stem from the Tsallis’ non-extensive entropy, namely the q-Exponential and the q-Weibull distributions. The q-Exponential and Weibull distributions can model decreasing, constant or increasing failure intensity functions. However, the power law behavior of the q-Exponential probability density function for specific parameter values is an advantage over the Weibull distribution when adjusting data containing extreme values. The q-Weibull probability distribution, in turn, can also fit data with bathtub-shaped or unimodal failure intensities in addition to the behaviors already mentioned. Therefore, the q-Exponential-GRP is an alternative for the Weibull-GRP model and the q-Weibull-GRP generalizes both. The method of maximum likelihood is used for their parameters’ estimation by means of a particle swarm optimization algorithm, and Monte Carlo simulations are performed for the sake of validation. The proposed models and algorithms are applied to examples involving reliability-related data of complex systems and the obtained results suggest GRP plus q-distributions are promising techniques for the analyses of repairable systems. Full article
(This article belongs to the Special Issue Entropy for Characterization of Uncertainty in Risk and Reliability)
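A small sketch of the q-exponential distribution that underlies the q-Exponential-GRP model is given below: its density and inverse-CDF sampling for 1 < q < 2. The parameter values are arbitrary illustrations, and the GRP machinery itself is not reproduced.

```python
# q-exponential distribution (Tsallis statistics), 1 < q < 2, support x >= 0.
import numpy as np

def q_exp_pdf(x, q, lam):
    """pdf: f(x) = (2-q)*lam*[1 + (q-1)*lam*x]^(-1/(q-1))."""
    return (2 - q) * lam * (1 + (q - 1) * lam * x) ** (-1.0 / (q - 1))

def q_exp_sample(size, q, lam, rng=None):
    """Inverse-CDF sampling from F(x) = 1 - [1 + (q-1)*lam*x]^(-(2-q)/(q-1))."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(size=size)
    return ((1 - u) ** (-(q - 1) / (2 - q)) - 1) / ((q - 1) * lam)

q, lam = 1.4, 0.5
xs = q_exp_sample(100_000, q, lam, np.random.default_rng(0))
# Sanity check: empirical CDF vs. analytic CDF at one point.
x0 = 2.0
print((xs <= x0).mean())
print(1 - (1 + (q - 1) * lam * x0) ** (-(2 - q) / (q - 1)))
```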

Open Access Article: Information Thermodynamics of the Cell Signal Transduction as a Szilard Engine
Entropy 2018, 20(4), 224; doi:10.3390/e20040224
Received: 27 February 2018 / Revised: 20 March 2018 / Accepted: 22 March 2018 / Published: 26 March 2018
PDF Full-text (3202 KB) | HTML Full-text | XML Full-text
Abstract
A cell signaling system is in a non-equilibrium state, and it includes multistep biochemical signaling cascades (BSCs), which involve phosphorylation of signaling molecules, such as mitogen-activated protein kinase (MAPK) pathways. In this study, the author considered signal transduction description using information thermodynamic theory. The ideal BSCs can be considered one type of the Szilard engine, and the presumed feedback controller, Maxwell’s demon, can extract the work during signal transduction. In this model, the mutual entropy and chemical potential of the signal molecules can be redefined by the extracted chemical work in a mechanicochemical model, Szilard engine, of BSC. In conclusion, signal transduction is computable using the information thermodynamic method. Full article
(This article belongs to the Section Information Theory)

Open Access Article: A Royal Road to Quantum Theory (or Thereabouts)
Entropy 2018, 20(4), 227; doi:10.3390/e20040227
Received: 15 January 2018 / Revised: 14 March 2018 / Accepted: 19 March 2018 / Published: 26 March 2018
PDF Full-text (373 KB) | HTML Full-text | XML Full-text
Abstract
This paper fails to derive quantum mechanics from a few simple postulates. However, it gets very close, and does so without much exertion. More precisely, I obtain a representation of finite-dimensional probabilistic systems in terms of Euclidean Jordan algebras, in a strikingly easy way, from simple assumptions. This provides a framework within which real, complex and quaternionic QM can play happily together and allows some (but not too much) room for more exotic alternatives. (This is a leisurely summary, based on recent lectures, of material from the papers arXiv:1206:2897 and arXiv:1507.06278, the latter joint work with Howard Barnum and Matthew Graydon. Some further ideas are also explored, developing the connection between conjugate systems and the possibility of forming stable measurement records and making connections between this approach and the categorical approach to quantum theory.) Full article
(This article belongs to the Special Issue Quantum Information and Foundations)

Open Access Article: An Economy Viewed as a Far-from-Equilibrium System from the Perspective of Algorithmic Information Theory
Entropy 2018, 20(4), 228; doi:10.3390/e20040228
Received: 8 February 2018 / Revised: 13 March 2018 / Accepted: 24 March 2018 / Published: 27 March 2018
PDF Full-text (309 KB) | HTML Full-text | XML Full-text
Abstract
This paper, using Algorithmic Information Theory (AIT), argues that once energy resources are considered, an economy, like an ecology, requires continuous energy to be sustained in a homeostatic state away from the decayed state of its (local) thermodynamic equilibrium. AIT identifies how economic actions and natural laws create an ordered economy through what is seen as computations on a real world Universal Turing Machine (UTM) that can be simulated to within a constant on a laboratory UTM. The shortest, appropriately coded, programme to do this defines the system’s information or algorithmic entropy. The computational behaviour of many generations of primitive economic agents can create a more ordered and advanced economy, able to be specified by a relatively short algorithm. The approach allows information flows to be tracked in real-world computational processes where instructions carried in stored energy create order while ejecting disorder. Selection processes implement the Maximum Power Principle while the economy trends towards Maximum Entropy Production, as tools amplify human labour and interconnections create energy efficiency. The approach provides insights into how an advanced economy is a more ordered economy, and tools to investigate the concerns of the Bioeconomists over long term economic survival. Full article
(This article belongs to the Section Information Theory)

Open Access Article: On the Coherence of Probabilistic Relational Formalisms
Entropy 2018, 20(4), 229; doi:10.3390/e20040229
Received: 22 February 2018 / Revised: 23 March 2018 / Accepted: 24 March 2018 / Published: 27 March 2018
PDF Full-text (454 KB) | HTML Full-text | XML Full-text
Abstract
There are several formalisms that enhance Bayesian networks by including relations amongst individuals as modeling primitives. For instance, Probabilistic Relational Models (PRMs) use diagrams and relational databases to represent repetitive Bayesian networks, while Relational Bayesian Networks (RBNs) employ first-order probability formulas with the same purpose. We examine the coherence checking problem for those formalisms; that is, the problem of guaranteeing that any grounding of a well-formed set of sentences does produce a valid Bayesian network. This is a novel version of de Finetti’s problem of coherence checking for probabilistic assessments. We show how to reduce the coherence checking problem in relational Bayesian networks to a validity problem in first-order logic augmented with a transitive closure operator and how to combine this logic-based approach with faster, but incomplete algorithms. Full article
(This article belongs to the Special Issue Foundations of Statistics)

Open Access Article: An Approach for the Generation of an Nth-Order Chaotic System with Hyperbolic Sine
Entropy 2018, 20(4), 230; doi:10.3390/e20040230
Received: 12 January 2018 / Revised: 23 March 2018 / Accepted: 24 March 2018 / Published: 27 March 2018
PDF Full-text (1791 KB) | HTML Full-text | XML Full-text
Abstract
Chaotic systems with hyperbolic sine nonlinearity have attracted the attention of researchers in the last two years. This paper introduces a new approach for generating a class of simple chaotic systems with hyperbolic sine. With nth-order ordinary differential equations (ODEs), any desirable order of chaotic systems with hyperbolic sine nonlinearity can be easily constructed. Fourth-order, fifth-order, and tenth-order chaotic systems are taken as examples to verify the theory. To achieve simplicity of the electrical circuit, two back-to-back diodes represent hyperbolic sine nonlinearity without any multiplier or subcircuits. Thus, these systems can achieve both physical simplicity and analytic complexity at the same time. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
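To illustrate the basic construction, the sketch below rewrites a third-order ODE with a hyperbolic-sine nonlinearity as a first-order system and integrates it numerically. The specific form and coefficients are placeholders chosen to run stably; they are not the chaotic systems or parameter sets proposed in the paper, where suitable parameters are constructed explicitly.

```python
# Generic nth-order ODE with a sinh nonlinearity, written as a first-order
# system and integrated with scipy. Placeholder coefficients, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

def jerk_sinh(t, state, a=1.5, b=1.0):
    # x''' = -a*x'' - x' - b*sinh(x), rewritten with state = (x, x', x'')
    x, dx, ddx = state
    return [dx, ddx, -a * ddx - dx - b * np.sinh(x)]

sol = solve_ivp(jerk_sinh, (0.0, 100.0), [0.1, 0.0, 0.0], max_step=0.05)
print(sol.t.size, float(np.abs(sol.y[0]).max()))   # number of steps, max |x|
```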

Open Access Article: The Definition of Entropy for Quantum Unstable Systems: A View-Point Based on the Properties of Gamow States
Entropy 2018, 20(4), 231; doi:10.3390/e20040231
Received: 23 November 2017 / Revised: 16 February 2018 / Accepted: 1 March 2018 / Published: 28 March 2018
PDF Full-text (259 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we review the concept of entropy in connection with the description of quantum unstable systems. We revise the conventional definition of entropy due to Boltzmann and extend it so as to include the presence of complex-energy states. After introducing a generalized basis of states which includes resonances, and working with amplitudes instead of probabilities, we found an expression for the entropy which exhibits real and imaginary components. We discuss the meaning of the imaginary part of the entropy on the basis of the similarities existing between thermal and time evolutions. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open Access Article: Energy Consumption of Air-Separation Adsorption Methods
Entropy 2018, 20(4), 232; doi:10.3390/e20040232
Received: 23 February 2018 / Revised: 23 March 2018 / Accepted: 27 March 2018 / Published: 28 March 2018
PDF Full-text (8441 KB) | HTML Full-text | XML Full-text
Abstract
Adsorption technology is currently one of the most popular methods of air separation. At relatively low energy expenditure, it allows oxygen to be obtained with sufficient purity for oxyfuel, metallurgy or medical applications. The adsorption process depends on several factors such as pressure, temperature, the concentration of the adsorbed element in the gas phase, or the surface area of the phase boundary. The paper shows the calculation of the minimum energy needed for oxygen separation, taking into account the advantages and disadvantages of the adsorption methods. The article shows how many times higher the energy consumption of a real oxygen-separation plant is than the theoretical energy consumption, and indicates which components of the adsorption installation can be further improved. The paper is supported by research conducted on an oxygen-separation installation at a semi-technical scale. Full article

Open Access Article: A Mathematical Realization of Entropy through Neutron Slowing Down
Entropy 2018, 20(4), 233; doi:10.3390/e20040233
Received: 23 February 2018 / Revised: 16 March 2018 / Accepted: 20 March 2018 / Published: 28 March 2018
PDF Full-text (12300 KB) | HTML Full-text | XML Full-text
Abstract
The slowing down equation for elastic scattering of neutrons in an infinite homogeneous medium is solved analytically by decomposing the neutron energy spectrum into collision intervals. Since scattering physically smooths energy distributions by redistributing neutron energy uniformly, it is informative to observe how mathematics accommodates the scattering process, which increases entropy through disorder. Full article
(This article belongs to the Special Issue News Trends in Statistical Physics of Complex Systems)

Open Access Article: The Second Law of Thermodynamics as a Force Law
Entropy 2018, 20(4), 234; doi:10.3390/e20040234
Received: 8 February 2018 / Revised: 22 March 2018 / Accepted: 27 March 2018 / Published: 28 March 2018
PDF Full-text (758 KB) | HTML Full-text | XML Full-text
Abstract
The second law of thermodynamics states the increase of entropy, ΔS > 0, for real processes from state A to state B at constant energy, from chemistry over biological life and engines to cosmic events. The connection of entropy to information, phase-space, and heat is helpful but does not immediately convince observers of the validity and basis of the second law. This gave grounds for finding a rigorous, but more easily acceptable reformulation. Here, we show using statistical mechanics that this principle is equivalent to a force law f > 0 in systems where mass centers and forces can be identified. The sign of this net force, the average mean force along a path from A to B, determines the direction of the process. The force law applies to a wide range of processes from machines to chemical reactions. The explanation of irreversibility by a driving force appears more plausible than the traditional formulation as it emphasizes the cause instead of the effect of motions. Full article
(This article belongs to the Section Statistical Mechanics)

Open Access Feature Paper Article: Dynamical Pattern Representation of Cardiovascular Couplings Evoked by Head-up Tilt Test
Entropy 2018, 20(4), 235; doi:10.3390/e20040235
Received: 14 February 2018 / Revised: 23 March 2018 / Accepted: 23 March 2018 / Published: 28 March 2018
PDF Full-text (5726 KB) | HTML Full-text | XML Full-text
Abstract
Shannon entropy (ShE) is a recognised tool for the quantization of the temporal organization of time series. Transfer entropy (TE) provides insight into the dependence between coupled systems. Here, signals are analysed that were produced by the cardiovascular system when a healthy human underwent a provocation test using the head-up tilt (HUT) protocol. The information provided by ShE and TE is evaluated from two aspects: that of the algorithmic stability and that of the recognised physiology of the cardiovascular response to the HUT test. To address both of these aspects, two types of symbolization of three-element subsequent values of a signal are considered: one, well established in heart rate research, referring to the variability in a signal, and a novel one, revealing primarily the dynamical trends. The interpretation of ShE shows a strong dependence on the method that was used in signal pre-processing. In particular, results obtained from normalized signals turn out to be less conclusive than results obtained from non-normalized signals. Systematic investigations based on surrogate data tests are employed to discriminate between genuine properties—in particular inter-system coupling—and random, incidental fluctuations. These properties appear to determine the occurrence of a high percentage of zero values of TE, which strongly limits the reliability of the couplings measured. Nevertheless, supported by statistical corroboration, we identify distinct timings when: (i) evoking cardiac impact on the vascular system, and (ii) evoking vascular impact on the cardiac system, within both the principal sub-systems of the baroreflex loop. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics II)
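The general idea of symbolizing three-element windows of a signal and taking the Shannon entropy of the symbol distribution can be sketched as follows; here the symbols are ordinal (rank) patterns, which only approximates the two symbolization schemes described in the article.

```python
# Shannon entropy of three-element symbol patterns extracted from a signal.
import numpy as np
from collections import Counter

def pattern_entropy(signal, word_length=3):
    signal = np.asarray(signal, dtype=float)
    symbols = []
    for i in range(len(signal) - word_length + 1):
        window = signal[i:i + word_length]
        symbols.append(tuple(np.argsort(window)))   # rank pattern of the window
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
print(pattern_entropy(rng.normal(size=5000)))             # near log2(6) for white noise
print(pattern_entropy(np.sin(np.linspace(0, 50, 5000))))  # much lower for a regular signal
```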

Open Access Article: An Adaptive Learning Based Network Selection Approach for 5G Dynamic Environments
Entropy 2018, 20(4), 236; doi:10.3390/e20040236
Received: 8 January 2018 / Revised: 7 March 2018 / Accepted: 24 March 2018 / Published: 29 March 2018
PDF Full-text (1161 KB) | HTML Full-text | XML Full-text
Abstract
Networks will continue to become increasingly heterogeneous as we move toward 5G. Meanwhile, the intelligent programming of the core network makes the available radio resources changeable rather than static. In such a dynamic and heterogeneous network environment, helping terminal users select optimal networks to access is challenging. Prior implementations of network selection are usually applicable to environments with static radio resources, and they cannot handle the unpredictable dynamics of 5G network environments. To this end, this paper considers both the fluctuation of radio resources and the variation of user demand. We model the access network selection scenario as a multiagent coordination problem, in which a group of rational terminal users compete to maximize their benefits with incomplete information about the environment (no prior knowledge of network resources and other users’ choices). Then, an adaptive learning based strategy is proposed, which enables users to adaptively adjust their selections in response to the gradually or abruptly changing environment. The system is experimentally shown to converge to a Nash equilibrium, which also turns out to be both Pareto optimal and socially optimal. Extensive simulation results show that our approach achieves significantly better performance compared with two learning and non-learning based approaches in terms of load balancing, user payoff and the overall bandwidth utilization efficiency. In addition, the system shows good robustness in the presence of non-compliant terminal users. Full article

Open Access Article: Axiomatic Information Thermodynamics
Entropy 2018, 20(4), 237; doi:10.3390/e20040237
Received: 1 February 2018 / Revised: 23 March 2018 / Accepted: 26 March 2018 / Published: 29 March 2018
PDF Full-text (652 KB) | HTML Full-text | XML Full-text
Abstract
We present an axiomatic framework for thermodynamics that incorporates information as a fundamental concept. The axioms describe both ordinary thermodynamic processes and those in which information is acquired, used and erased, as in the operation of Maxwell’s demon. This system, similar to previous axiomatic systems for thermodynamics, supports the construction of conserved quantities and an entropy function governing state changes. Here, however, the entropy exhibits both information and thermodynamic aspects. Although our axioms are not based upon probabilistic concepts, a natural and highly useful concept of probability emerges from the entropy function itself. Our abstract system has many models, including both classical and quantum examples. Full article

Open Access Article: Probabilistic Teleportation of Arbitrary Two-Qubit Quantum State via Non-Symmetric Quantum Channel
Entropy 2018, 20(4), 238; doi:10.3390/e20040238
Received: 17 December 2017 / Revised: 18 March 2018 / Accepted: 28 March 2018 / Published: 29 March 2018
PDF Full-text (1045 KB) | HTML Full-text | XML Full-text
Abstract
Quantum teleportation has significant meaning in quantum information. In particular, entangled states can also be used for perfectly teleporting the quantum state with some probability. This is more practical and efficient in practice. In this paper, we propose schemes to use non-symmetric quantum channel combinations for probabilistic teleportation of an arbitrary two-qubit quantum state from sender to receiver. The non-symmetric quantum channel is composed of a two-qubit partially entangled state and a three-qubit partially entangled state, where partially entangled Greenberger–Horne–Zeilinger (GHZ) state and W state are considered, respectively. All schemes are presented in detail and the unitary operations required are given in concise formulas. Methods are provided for reducing classical communication cost and combining operations to simplify the manipulation. Moreover, our schemes are flexible and applicable in different situations. Full article

Open Access Article: 2D Tsallis Entropy for Image Segmentation Based on Modified Chaotic Bat Algorithm
Entropy 2018, 20(4), 239; doi:10.3390/e20040239
Received: 11 March 2018 / Revised: 27 March 2018 / Accepted: 28 March 2018 / Published: 30 March 2018
PDF Full-text (1819 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Image segmentation is a significant step in image analysis and computer vision. Many entropy based approaches have been presented on this topic; among them, Tsallis entropy is one of the best performing methods. However, 1D Tsallis entropy does not make use of the spatial correlation information within the neighborhood, so results might be ruined by noise. Therefore, 2D Tsallis entropy is proposed to solve the problem, and results are compared with 1D Fisher, 1D maximum entropy, 1D cross entropy, 1D Tsallis entropy, fuzzy entropy, 2D Fisher, 2D maximum entropy and 2D cross entropy. On the other hand, due to the huge computational cost, meta-heuristic algorithms like the genetic algorithm (GA), particle swarm optimization (PSO), the ant colony optimization algorithm (ACO) and the differential evolution algorithm (DE) are used to accelerate the 2D Tsallis entropy thresholding method. In this paper, considering 2D Tsallis entropy as a constrained optimization problem, the optimal thresholds are acquired by maximizing the objective function using a modified chaotic bat algorithm (MCBA). The proposed algorithm has been tested on some actual and infrared images. The results are compared with those of PSO, GA, ACO and DE and demonstrate that the proposed method outperforms the other approaches involved in the paper and is a feasible and effective option for image segmentation. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
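As background for the method above, the sketch below performs plain 1D Tsallis entropy thresholding of a grayscale histogram by exhaustive search; the 2D variant and the modified chaotic bat algorithm in the paper extend and accelerate this idea. The entropic index q and the synthetic test image are illustrative choices, not values from the article.

```python
# 1-D Tsallis entropy thresholding of a grayscale histogram (exhaustive search).
import numpy as np

def tsallis_threshold(image, q=0.8):
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t + 1].sum(), p[t + 1:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - np.sum((p[:t + 1] / pa) ** q)) / (q - 1)   # Tsallis entropy of class A
        sb = (1 - np.sum((p[t + 1:] / pb) ** q)) / (q - 1)   # Tsallis entropy of class B
        score = sa + sb + (1 - q) * sa * sb                  # pseudo-additivity rule
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Two-mode synthetic image: dark background plus a bright square.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, size=(128, 128))
img[40:90, 40:90] += 120
img = np.clip(img, 0, 255).astype(np.uint8)
print(tsallis_threshold(img))    # a threshold between the dark and bright modes
```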

Open Access Article: Exact Partial Information Decompositions for Gaussian Systems Based on Dependency Constraints
Entropy 2018, 20(4), 240; doi:10.3390/e20040240
Received: 9 March 2018 / Revised: 26 March 2018 / Accepted: 27 March 2018 / Published: 30 March 2018
PDF Full-text (1311 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The Partial Information Decomposition, introduced by Williams P. L. et al. (2010), provides a theoretical framework to characterize and quantify the structure of multivariate information sharing. A new method (Idep) has recently been proposed by James R. G. et al. (2017) for computing a two-predictor partial information decomposition over discrete spaces. A lattice of maximum entropy probability models is constructed based on marginal dependency constraints, and the unique information that a particular predictor has about the target is defined as the minimum increase in joint predictor-target mutual information when that particular predictor-target marginal dependency is constrained. Here, we apply the Idep approach to Gaussian systems, for which the marginally constrained maximum entropy models are Gaussian graphical models. Closed form solutions for the Idep PID are derived for both univariate and multivariate Gaussian systems. Numerical and graphical illustrations are provided, together with practical and theoretical comparisons of the Idep PID with the minimum mutual information partial information decomposition (Immi), which was discussed by Barrett A. B. (2015). The results obtained using Idep appear to be more intuitive than those given with other methods, such as Immi, in which the redundant and unique information components are constrained to depend only on the predictor-target marginal distributions. In particular, it is proved that the Immi method generally produces larger estimates of redundancy and synergy than does the Idep method. In discussion of the practical examples, the PIDs are complemented by the use of tests of deviance for the comparison of Gaussian graphical models. Full article
(This article belongs to the Section Information Theory)
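For comparison with the Idep results discussed above, the sketch below computes the simpler minimum-mutual-information (Immi) PID for a Gaussian system with two univariate predictors and one target, directly from a covariance matrix (in nats). The example covariance is an arbitrary illustration; the paper's Idep decomposition is not reproduced here.

```python
# Immi PID for a trivariate Gaussian: redundancy = min(I(S1;T), I(S2;T)).
import numpy as np

def gaussian_mi(cov, x_idx, y_idx):
    """I(X;Y) in nats for jointly Gaussian blocks of a covariance matrix."""
    idx = list(x_idx) + list(y_idx)
    c_xy = cov[np.ix_(idx, idx)]
    c_x = cov[np.ix_(list(x_idx), list(x_idx))]
    c_y = cov[np.ix_(list(y_idx), list(y_idx))]
    return 0.5 * np.log(np.linalg.det(c_x) * np.linalg.det(c_y) / np.linalg.det(c_xy))

def pid_mmi(cov, s1=0, s2=1, t=2):
    i1 = gaussian_mi(cov, [s1], [t])
    i2 = gaussian_mi(cov, [s2], [t])
    i12 = gaussian_mi(cov, [s1, s2], [t])
    red = min(i1, i2)
    return {"redundancy": red, "unique_1": i1 - red,
            "unique_2": i2 - red, "synergy": i12 - i1 - i2 + red}

# Arbitrary positive-definite covariance for (S1, S2, T).
cov = np.array([[1.0, 0.3, 0.6],
                [0.3, 1.0, 0.5],
                [0.6, 0.5, 1.0]])
print(pid_mmi(cov))
```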

Open Access Article: Dynamic Model for a Uniform Microwave-Assisted Continuous Flow Process of Ethyl Acetate Production
Entropy 2018, 20(4), 241; doi:10.3390/e20040241
Received: 4 February 2018 / Revised: 17 March 2018 / Accepted: 30 March 2018 / Published: 2 April 2018
PDF Full-text (52206 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this work is to present a model of a reaction tube with cross structures in order to improve ethyl acetate production and microwave heating uniformity. A commercial finite element software, COMSOL Multiphysics 4.3a (Newton, MA, USA), is used to build the proposed model for a BJ-22 rectangular waveguide system. Maxwell’s equations, the heat conduction equation, reaction kinetics equation and Navier-Stokes equation are combined to describe the continuous flow process. The electric field intensity, the temperature, the concentration of water, the coefficient of variation (COV) and the mean temperature at different initial velocities are compared to obtain the best flow rate. Four different initial velocities are employed to discuss the effect of flow velocity on the heating uniformity and heating efficiency. The point temperatures are measured by optical fibers to verify the simulated results. The results show the electric field intensity distributions at different initial velocities have little difference, which means the initial velocity will have the decisive influence on the heating process. At lower velocity, the COV will be smaller, which means better heating uniformity. Meanwhile, the distance between each cross structure has great influence on the heating uniformity and heating efficiency, while the angle has little. The proposed model can be applied to large-scale production of microwave-assisted ethyl acetate production. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)

Open Access Article: Entropy Generation Analysis of Laminar Flows of Water-Based Nanofluids in Horizontal Minitubes under Constant Heat Flux Conditions
Entropy 2018, 20(4), 242; doi:10.3390/e20040242
Received: 12 February 2018 / Revised: 9 March 2018 / Accepted: 21 March 2018 / Published: 2 April 2018
PDF Full-text (4868 KB) | HTML Full-text | XML Full-text
Abstract
During the last decade, second law analysis via entropy generation has become important in terms of entropy generation minimization (EGM), thermal engineering system design, irreversibility, and energy saving. In this study, heat transfer and entropy generation characteristics of flows of multi-walled carbon nanotube-based nanofluids were investigated in horizontal minitubes with outer and inner diameters of ~1067 and ~889 µm, respectively. Carbon nanotubes (CNTs) with outer diameter of 10–20 nm and length of 1–2 µm were used for nanofluid preparation, and water was considered as the base fluid. The entropy generation based on the experimental data, a significant parameter in thermal design system, was examined for CNTs/water nanofluids. The change in the entropy generation was only seen at low mass fractions (0.25 wt.% and 0.5 wt.%). Moreover, to have more insight on the entropy generation of nanofluids based on the experimental data, a further analysis was performed on Al2O3 and TiO2 nanoparticles/water nanofluids from the experimental database of the previous study of the authors. The corresponding results disclosed a remarkable increase in the entropy generation rate when Al2O3 and TiO2 nanoparticles were added to the base fluid. Full article

Open Access Article: Compression of a Deep Competitive Network Based on Mutual Information for Underwater Acoustic Targets Recognition
Entropy 2018, 20(4), 243; doi:10.3390/e20040243
Received: 24 January 2018 / Revised: 20 March 2018 / Accepted: 27 March 2018 / Published: 2 April 2018
PDF Full-text (4462 KB) | HTML Full-text | XML Full-text
Abstract
The accuracy of underwater acoustic targets recognition via limited ship radiated noise can be improved by a deep neural network trained with a large number of unlabeled samples. However, redundant features learned by the deep neural network have negative effects on recognition accuracy and efficiency. A compressed deep competitive network is proposed to learn and extract features from ship radiated noise. The core idea of the algorithm includes: (1) Competitive learning: By integrating competitive learning into the restricted Boltzmann machine learning algorithm, the hidden units could share the weights in each predefined group; (2) Network pruning: The pruning based on mutual information is deployed to remove the redundant parameters and further compress the network. Experiments based on real ship radiated noise show that the network can increase recognition accuracy with fewer informative features. The compressed deep competitive network can achieve a classification accuracy of 89.1%, which is 5.3% higher than the deep competitive network and 13.1% higher than the state-of-the-art signal processing feature extraction methods. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

Open Access Article: Entropy-Based Video Steganalysis of Motion Vectors
Entropy 2018, 20(4), 244; doi:10.3390/e20040244
Received: 7 February 2018 / Revised: 29 March 2018 / Accepted: 30 March 2018 / Published: 2 April 2018
PDF Full-text (8000 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new method is proposed for motion vector steganalysis using the entropy value and its combination with the features of the optimized motion vector. In this method, the entropy of blocks is calculated to determine their texture and the precision of their motion vectors. Then, by using a fuzzy cluster, the blocks are clustered into the blocks with high and low texture, while the membership function of each block to a high texture class indicates the texture of that block. These membership functions are used to weight the effective features that are extracted by reconstructing the motion estimation equations. Characteristics of the results indicate that the use of entropy and the irregularity of each block increases the precision of the final video classification into cover and stego classes. Full article
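The first stage described in the abstract, using block entropy as a texture measure, can be sketched as follows. The block size and the synthetic frame are illustrative assumptions, not parameters from the paper.

```python
# Shannon entropy of pixel blocks as a simple texture measure for a frame.
import numpy as np

def block_entropy(frame, block=8):
    h, w = frame.shape
    ent = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = frame[i * block:(i + 1) * block, j * block:(j + 1) * block]
            counts = np.bincount(patch.ravel(), minlength=256)
            p = counts[counts > 0] / patch.size
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

rng = np.random.default_rng(0)
frame = np.full((64, 64), 128, dtype=np.uint8)                         # flat, low-texture half
frame[:, 32:] = rng.integers(0, 256, size=(64, 32), dtype=np.uint8)    # noisy, high-texture half
ent = block_entropy(frame)
print(ent[:, :4].mean(), ent[:, 4:].mean())   # low vs. high texture block entropies
```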

Open Access Article: Multi-Graph Multi-Label Learning Based on Entropy
Entropy 2018, 20(4), 245; doi:10.3390/e20040245
Received: 25 March 2018 / Revised: 30 March 2018 / Accepted: 30 March 2018 / Published: 2 April 2018
PDF Full-text (2979 KB) | HTML Full-text | XML Full-text
Abstract
Recently, Multi-Graph Learning was proposed as an extension of Multi-Instance Learning and has achieved some successes. However, to the best of our knowledge, there is currently no study working on Multi-Graph Multi-Label Learning, where each object is represented as a bag containing a number of graphs and each bag is marked with multiple class labels. It is an interesting problem existing in many applications, such as image classification, medicinal analysis and so on. In this paper, we propose an innovative algorithm to address the problem. Firstly, it uses more precise structures, multiple graphs, instead of instances to represent an image, so that the classification accuracy can be improved. Then, it uses multiple labels as the output to eliminate the semantic ambiguity of the image. Furthermore, it calculates the entropy to mine the informative subgraphs instead of just mining the frequent subgraphs, which enables selecting more accurate features for the classification. Lastly, since the current algorithms cannot directly deal with graph structures, we degenerate the Multi-Graph Multi-Label Learning into Multi-Instance Multi-Label Learning in order to solve it by MIML-ELM (Improving Multi-Instance Multi-Label Learning by Extreme Learning Machine). The performance study shows that our algorithm outperforms the competitors in terms of both effectiveness and efficiency. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
Open AccessArticle Diffusion Maximum Correntropy Criterion Based Robust Spectrum Sensing in Non-Gaussian Noise Environments
Entropy 2018, 20(4), 246; doi:10.3390/e20040246
Received: 21 March 2018 / Revised: 29 March 2018 / Accepted: 30 March 2018 / Published: 3 April 2018
PDF Full-text (1555 KB) | HTML Full-text | XML Full-text
Abstract
Spectrum sensing is the most important task in cognitive radio (CR). In this paper, a new robust distributed spectrum sensing approach, called diffusion maximum correntropy criterion (DMCC)-based robust spectrum sensing, is proposed for CR in the presence of non-Gaussian noise or impulsive noise.
[...] Read more.
Spectrum sensing is the most important task in cognitive radio (CR). In this paper, a new robust distributed spectrum sensing approach, called diffusion maximum correntropy criterion (DMCC)-based robust spectrum sensing, is proposed for CR in the presence of non-Gaussian noise or impulsive noise. The proposed distributed scheme, which does not need any central processing unit, is characterized by an adaptive diffusion model. The maximum correntropy criterion, which is insensitive to impulsive interference, is introduced to deal with the effect of non-Gaussian noise. Simulation results show that the DMCC-based spectrum sensing algorithm is highly robust to non-Gaussian noise. It is also observed that the new method displays a considerably better detection performance than its predecessor (i.e., diffusion least mean square (DLMS)) in impulsive noise. Moreover, the mean and variance convergence analyses of the proposed algorithm are also carried out. Full article
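For readers unfamiliar with the criterion named above, the sketch below computes the sample correntropy between a signal and its estimate with a Gaussian kernel; maximizing this quantity, rather than minimizing squared error, is what makes the adaptation insensitive to impulsive outliers. The kernel width is an assumed illustrative value.

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy V(x, y) with a Gaussian kernel of width sigma.

    Large errors (x - y) are strongly down-weighted by the kernel, which is
    why maximizing correntropy is robust to impulsive (non-Gaussian) noise,
    unlike the least-mean-square criterion.
    """
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))
```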
Open AccessArticle On the Contact Geometry and the Poisson Geometry of the Ideal Gas
Entropy 2018, 20(4), 247; doi:10.3390/e20040247
Received: 13 February 2018 / Revised: 31 March 2018 / Accepted: 2 April 2018 / Published: 3 April 2018
PDF Full-text (248 KB) | HTML Full-text | XML Full-text
Abstract
We elaborate on existing notions of contact geometry and Poisson geometry as applied to the classical ideal gas. Specifically, we observe that it is possible to describe its dynamics using a 3-dimensional contact submanifold of the standard 5-dimensional contact manifold used in the
[...] Read more.
We elaborate on existing notions of contact geometry and Poisson geometry as applied to the classical ideal gas. Specifically, we observe that it is possible to describe its dynamics using a 3-dimensional contact submanifold of the standard 5-dimensional contact manifold used in the literature. This reflects the fact that the internal energy of the ideal gas depends exclusively on its temperature. We also present a Poisson algebra of thermodynamic operators for a quantum-like description of the classical ideal gas. The central element of this Poisson algebra is proportional to Boltzmann’s constant. A Hilbert space of states is identified and a system of wave equations governing the wavefunction is found. Expectation values for the operators representing pressure, volume and temperature are found to satisfy the classical equations of state. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics II)
Open AccessArticle Hedging for the Regime-Switching Price Model Based on Non-Extensive Statistical Mechanics
Entropy 2018, 20(4), 248; doi:10.3390/e20040248
Received: 13 March 2018 / Revised: 1 April 2018 / Accepted: 3 April 2018 / Published: 3 April 2018
PDF Full-text (740 KB) | HTML Full-text | XML Full-text
Abstract
To describe the movement of asset prices accurately, we employ the non-extensive statistical mechanics and the semi-Markov process to establish an asset price model. The model can depict the peak and fat tail characteristics of returns and the regime-switching phenomenon of macroeconomic system.
[...] Read more.
To describe the movement of asset prices accurately, we employ the non-extensive statistical mechanics and the semi-Markov process to establish an asset price model. The model can depict the peak and fat tail characteristics of returns and the regime-switching phenomenon of macroeconomic system. Moreover, we use the risk-minimizing method to study the hedging problem of contingent claims and obtain the explicit solutions of the optimal hedging strategies. Full article
(This article belongs to the Special Issue Nonadditive Entropies and Complex Systems)
Open AccessArticle Simulation Study on the Application of the Generalized Entropy Concept in Artificial Neural Networks
Entropy 2018, 20(4), 249; doi:10.3390/e20040249
Received: 25 January 2018 / Revised: 23 March 2018 / Accepted: 30 March 2018 / Published: 3 April 2018
PDF Full-text (5782 KB) | HTML Full-text | XML Full-text
Abstract
Artificial neural networks are currently one of the most commonly used classifiers and over the recent years they have been successfully used in many practical applications, including banking and finance, health and medicine, engineering and manufacturing. A large number of error functions have
[...] Read more.
Artificial neural networks are currently one of the most commonly used classifiers and over the recent years they have been successfully used in many practical applications, including banking and finance, health and medicine, engineering and manufacturing. A large number of error functions have been proposed in the literature to achieve a better predictive power. However, only a few works employ Tsallis statistics, although the method itself has been successfully applied in other machine learning techniques. This paper examines the q-generalized function based on Tsallis statistics as an alternative error measure in neural networks. In order to validate different performance aspects of the proposed function and to identify its strengths and weaknesses, an extensive simulation was prepared based on an artificial benchmarking dataset. The results indicate that the Tsallis entropy error function can be successfully introduced in neural networks, yielding satisfactory results and handling class imbalance, noise in the data, and the use of non-informative predictors. Full article
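As a hedged illustration of how a Tsallis-type error measure can replace the usual cross-entropy, the sketch below uses the q-logarithm ln_q(x) = (x^(1-q) - 1)/(1 - q), which recovers the natural logarithm as q tends to 1; the exact q-generalized error function studied in the paper may differ in form and normalization.

```python
import numpy as np

def q_log(x, q):
    """q-logarithm from Tsallis statistics; tends to ln(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def q_cross_entropy(y_true, y_pred, q=1.5, eps=1e-12):
    """Binary cross-entropy with the logarithm replaced by the q-logarithm."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * q_log(y_pred, q)
                    + (1.0 - y_true) * q_log(1.0 - y_pred, q))
```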
Open AccessArticle Fluid-Fluid Interfaces of Multi-Component Mixtures in Local Equilibrium
Entropy 2018, 20(4), 250; doi:10.3390/e20040250
Received: 18 February 2018 / Revised: 20 March 2018 / Accepted: 3 April 2018 / Published: 4 April 2018
PDF Full-text (258 KB) | HTML Full-text | XML Full-text
Abstract
We derive in a new way that the intensive properties of a fluid-fluid Gibbs interface are independent of the location of the dividing surface. When the system is out of global equilibrium, this finding is not trivial: In a one-component fluid, it can
[...] Read more.
We derive in a new way that the intensive properties of a fluid-fluid Gibbs interface are independent of the location of the dividing surface. When the system is out of global equilibrium, this finding is not trivial: In a one-component fluid, it can be used to obtain the interface temperature from the surface tension. In other words, the surface equation of state can serve as a thermometer for the liquid-vapor interface in a one-component fluid. In a multi-component fluid, one needs the surface tension and the relative adsorptions to obtain the interface temperature and chemical potentials. A consistent set of thermodynamic properties of multi-component surfaces is presented. They can be used to construct fluid-fluid boundary conditions during transport. These boundary conditions have a bearing on all thermodynamic modeling of transport related to phase transitions. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Open AccessArticle A Novel Fractional-Order Chaotic Phase Synchronization Model for Visual Selection and Shifting
Entropy 2018, 20(4), 251; doi:10.3390/e20040251
Received: 3 March 2018 / Revised: 1 April 2018 / Accepted: 2 April 2018 / Published: 4 April 2018
PDF Full-text (13877 KB) | HTML Full-text | XML Full-text
Abstract
Visual information processing is one of the fields of cognitive informatics. In this paper, a two-layer fractional-order chaotic network, which can simulate the mechanism of visual selection and shifting, is established. Unlike other object selection models, the proposed model introduces control units to
[...] Read more.
Visual information processing is one of the fields of cognitive informatics. In this paper, a two-layer fractional-order chaotic network, which can simulate the mechanism of visual selection and shifting, is established. Unlike other object selection models, the proposed model introduces control units to select objects. The first chaotic network layer of the model is used to implement image segmentation. A control layer is added as the second layer, consisting of a central neuron, which controls object selection and shifting. To implement visual selection and shifting, a strategy is proposed in which different subnets, corresponding to the objects in the first layer, synchronize with the central neuron at different times. The central unit, acting as the central nervous system, synchronizes with different subnets (hybrid systems), implementing the mechanism of visual selection and shifting in the human system. The proposed model corresponds better with the human visual system than the typical model of visual information encoding and transmission and provides new possibilities for further analysis of the mechanisms of the human cognitive system. The validity of the proposed model is verified by experiments using artificial and natural images. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Open AccessArticle Modeling of the Atomic Diffusion Coefficient in Nanostructured Materials
Entropy 2018, 20(4), 252; doi:10.3390/e20040252
Received: 12 February 2018 / Revised: 23 March 2018 / Accepted: 3 April 2018 / Published: 5 April 2018
PDF Full-text (1430 KB) | HTML Full-text | XML Full-text
Abstract
A formula has been established, which is based on the size-dependence of a metal’s melting point, to elucidate the atomic diffusion coefficient of nanostructured materials by considering the role of grain-boundary energy. When grain size is decreased, a decrease in the atomic diffusion
[...] Read more.
A formula has been established, which is based on the size-dependence of a metal’s melting point, to elucidate the atomic diffusion coefficient of nanostructured materials by considering the role of grain-boundary energy. When grain size is decreased, a decrease in the atomic diffusion activation energy and an increase in the corresponding diffusion coefficient can be observed. Interestingly, variations in the atomic diffusion activation energy of nanostructured materials are small relative to those of nanoparticles, depending on the magnitude of the grain-boundary energy. Our theoretical prediction is in accord with computer simulations and experimental results for the metals described. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
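A minimal sketch of the kind of relation the abstract describes, under the assumption that the diffusion activation energy scales with a size-dependent melting temperature and then obeys the Arrhenius law; the simple 1 - d0/d melting-point model and all parameter names are illustrative assumptions, not the paper's formula.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def melting_ratio(d, d0):
    """Illustrative size-dependent melting-point ratio Tm(d)/Tm(bulk)."""
    return 1.0 - d0 / d

def diffusion_coefficient(D0, Q_bulk, T, d, d0):
    """Arrhenius diffusion coefficient with a size-scaled activation energy.

    Assumes Q(d) ~ Q_bulk * Tm(d)/Tm(bulk): smaller grains give a lower
    activation energy and hence a larger diffusion coefficient.
    """
    Q = Q_bulk * melting_ratio(d, d0)
    return D0 * np.exp(-Q / (R * T))
```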
Open AccessArticle Thermodynamically Constrained Averaging Theory: Principles, Model Hierarchies, and Deviation Kinetic Energy Extensions
Entropy 2018, 20(4), 253; doi:10.3390/e20040253
Received: 4 January 2018 / Revised: 22 March 2018 / Accepted: 3 April 2018 / Published: 5 April 2018
PDF Full-text (531 KB) | HTML Full-text | XML Full-text
Abstract
The thermodynamically constrained averaging theory (TCAT) is a comprehensive theory used to formulate hierarchies of multiphase, multiscale models that are closed based upon the second law of thermodynamics. The rate of entropy production is posed in terms of the product of fluxes and
[...] Read more.
The thermodynamically constrained averaging theory (TCAT) is a comprehensive theory used to formulate hierarchies of multiphase, multiscale models that are closed based upon the second law of thermodynamics. The rate of entropy production is posed in terms of the product of fluxes and forces of dissipative processes. The attractive features of TCAT include consistency across disparate length scales; thermodynamic consistency across scales; the inclusion of interfaces and common curves as well as phases; the development of kinematic equations to provide closure relations for geometric extent measures; and a structured approach to model building. The elements of the TCAT approach are shown; the ways in which each of these attractive features emerge from the TCAT approach are illustrated; and a review of the hierarchies of models that have been formulated is provided. Because the TCAT approach is mathematically involved, we illustrate how this approach can be applied by leveraging existing components of the theory that can be applied to a wide range of applications. This can result in a substantial reduction in formulation effort compared to a complete derivation while yielding identical results. Lastly, we note the previous neglect of the deviation kinetic energy, which is not important in slow porous media flows, formulate the required equations to extend the theory, and comment on applications for which the new components would be especially useful. This work should serve to make TCAT more accessible for applications, thereby enabling higher fidelity models for applications such as turbulent multiphase flows. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Open AccessArticle Multiple Sclerosis Identification Based on Fractional Fourier Entropy and a Modified Jaya Algorithm
Entropy 2018, 20(4), 254; doi:10.3390/e20040254
Received: 30 January 2018 / Revised: 29 March 2018 / Accepted: 3 April 2018 / Published: 5 April 2018
PDF Full-text (24608 KB) | HTML Full-text | XML Full-text
Abstract
Aim: Currently, identifying multiple sclerosis (MS) by human experts may come across the problem of “normal-appearing white matter”, which causes a low sensitivity. Methods: In this study, we presented a computer vision-based approach to identify MS in an automatic way.
[...] Read more.
Aim: Currently, identifying multiple sclerosis (MS) by human experts may come across the problem of “normal-appearing white matter”, which causes a low sensitivity. Methods: In this study, we presented a computer vision-based approach to identify MS in an automatic way. The proposed method first extracted the fractional Fourier entropy map from a specified brain image. Afterwards, it sent the features to a multilayer perceptron trained by a proposed improved parameter-free Jaya algorithm. We used cost-sensitive learning to handle the imbalanced data problem. Results: The 10 × 10-fold cross validation showed our method yielded a sensitivity of 97.40 ± 0.60%, a specificity of 97.39 ± 0.65%, and an accuracy of 97.39 ± 0.59%. Conclusions: Experiments validated that the proposed improved Jaya performs better than the plain Jaya algorithm and other recent bioinspired algorithms in terms of classification performance and training speed. In addition, our method is superior to four state-of-the-art MS identification approaches. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
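For context, the sketch below shows one iteration of the standard (plain) Jaya algorithm, which moves each candidate toward the current best solution and away from the worst without any algorithm-specific tuning parameters; the paper's improved variant modifies this baseline, and the objective function f is an assumed placeholder.

```python
import numpy as np

def jaya_step(population, f):
    """One iteration of the plain Jaya algorithm for minimizing f.

    population: array (n_candidates, n_dims). Each candidate moves toward
    the best solution and away from the worst; better moves are kept.
    """
    fitness = np.apply_along_axis(f, 1, population)
    best = population[np.argmin(fitness)]
    worst = population[np.argmax(fitness)]
    r1 = np.random.rand(*population.shape)
    r2 = np.random.rand(*population.shape)
    trial = (population
             + r1 * (best - np.abs(population))
             - r2 * (worst - np.abs(population)))
    trial_fitness = np.apply_along_axis(f, 1, trial)
    improved = trial_fitness < fitness           # greedy selection
    population[improved] = trial[improved]
    return population
```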
Open AccessArticle Does Income Diversification Benefit the Sustainable Development of Chinese Listed Banks? Analysis Based on Entropy and the Herfindahl–Hirschman Index
Entropy 2018, 20(4), 255; doi:10.3390/e20040255
Received: 26 February 2018 / Revised: 30 March 2018 / Accepted: 4 April 2018 / Published: 6 April 2018
PDF Full-text (10522 KB) | HTML Full-text | XML Full-text
Abstract
We collected data pertaining to Chinese listed commercial banks from 2008 to 2016 and found that the competition between banks is becoming increasingly fierce. Commercial banks have actively carried out diversification strategies for greater returns, and the financial reports show that profits are
[...] Read more.
We collected data pertaining to Chinese listed commercial banks from 2008 to 2016 and found that the competition between banks is becoming increasingly fierce. Commercial banks have actively carried out diversification strategies for greater returns, and the financial reports show that profits are increasingly coming from the non-interest income benefits of diversification strategies. However, diversification comes with risk. We built a panel threshold model and investigated the effect of income diversification on a bank’s profitability and risk. Diversification was first measured by the Herfindahl–Hirschman index (HHI), and the results show that a nonlinear relationship between diversification and profitability or risk does exist. We introduced an entropy-based index to test the robustness of our model and found that a statistically significant threshold effect exists in both of our models. We believe the combination of the entropy index (ENTI) and the HHI enables more efficient study of the relationship between diversification and profitability or risk. Bankers and their customers have become increasingly interested in income diversification, and they value risk as well. We suggest that banks of different sizes should adopt the corresponding diversification strategies to achieve sustainable development. Full article
(This article belongs to the Section Statistical Mechanics)
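The two diversification measures named in the abstract can be computed directly from a bank's income shares, as in the short sketch below; the variable names and the example shares are illustrative only.

```python
import numpy as np

def diversification_indices(income):
    """HHI and entropy index (ENTI) from a bank's income components.

    income: non-negative income components (e.g., interest, fees, trading).
    A lower HHI and a higher entropy both indicate a more diversified
    income structure.
    """
    s = np.asarray(income, dtype=float)
    s = s / s.sum()                                # income shares
    hhi = np.sum(s**2)                             # Herfindahl-Hirschman index
    enti = -np.sum(s[s > 0] * np.log(s[s > 0]))    # Shannon-type entropy index
    return hhi, enti

# Example: 70% interest income, 30% non-interest income
print(diversification_indices([0.7, 0.3]))
```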
Open AccessArticle Information Geometry for Radar Target Detection with Total Jensen–Bregman Divergence
Entropy 2018, 20(4), 256; doi:10.3390/e20040256
Received: 16 February 2018 / Revised: 27 March 2018 / Accepted: 6 April 2018 / Published: 6 April 2018
PDF Full-text (1133 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a radar target detection algorithm based on information geometry. In particular, the correlation of sample data is modeled as a Hermitian positive-definite (HPD) matrix. Moreover, a class of total Jensen–Bregman divergences, including the total Jensen square loss, the total Jensen
[...] Read more.
This paper proposes a radar target detection algorithm based on information geometry. In particular, the correlation of sample data is modeled as a Hermitian positive-definite (HPD) matrix. Moreover, a class of total Jensen–Bregman divergences, including the total Jensen square loss, the total Jensen log-determinant divergence, and the total Jensen von Neumann divergence, are proposed to be used as the distance-like function on the space of HPD matrices. On the basis of these divergences, definitions of their corresponding median matrices are given. Finally, a decision rule for target detection is made by comparing the total Jensen-Bregman divergence between the median of the reference cells and the matrix of the cell under test with a given threshold. The performance analysis on both simulated and real radar data confirms the superiority of the proposed detection method over its conventional counterparts and existing methods. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Sparse Power-Law Network Model for Reliable Statistical Predictions Based on Sampled Data
Entropy 2018, 20(4), 257; doi:10.3390/e20040257
Received: 2 March 2018 / Revised: 4 April 2018 / Accepted: 5 April 2018 / Published: 7 April 2018
PDF Full-text (681 KB) | HTML Full-text | XML Full-text
Abstract
A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not
[...] Read more.
A projective network model is a model that enables predictions to be made based on a subsample of the network data, with the predictions remaining unchanged if a larger sample is taken into consideration. An exchangeable model is a model that does not depend on the order in which nodes are sampled. Despite the large variety of non-equilibrium (growing) and equilibrium (static) sparse complex network models widely used in network science, how to reconcile sparseness (constant average degree) with the desired statistical properties of projectivity and exchangeability is currently an outstanding scientific problem. Here we propose a network process with hidden variables which is projective and can generate sparse power-law networks. Despite the model not being exchangeable, it can be closely related to exchangeable uncorrelated networks, as indicated by its information-theoretic characterization and its network entropy. The use of the proposed network process as a null model is tested here on real data, indicating that the model offers a promising avenue for statistical network modelling. Full article
(This article belongs to the Special Issue Graph and Network Entropies)
Open AccessArticle Information Geometry for Covariance Estimation in Heterogeneous Clutter with Total Bregman Divergence
Entropy 2018, 20(4), 258; doi:10.3390/e20040258
Received: 31 January 2018 / Revised: 22 March 2018 / Accepted: 6 April 2018 / Published: 8 April 2018
PDF Full-text (1088 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a covariance matrix estimation method based on information geometry in a heterogeneous clutter. In particular, the problem of covariance estimation is reformulated as the computation of geometric median for covariance matrices estimated by the secondary data set. A new class
[...] Read more.
This paper presents a covariance matrix estimation method based on information geometry in a heterogeneous clutter. In particular, the problem of covariance estimation is reformulated as the computation of geometric median for covariance matrices estimated by the secondary data set. A new class of total Bregman divergence is presented on the Riemannian manifold of Hermitian positive-definite (HPD) matrices, which is the foundation of information geometry. On the basis of this divergence, total Bregman divergence medians are derived and used instead of the sample covariance matrix (SCM) of the secondary data. Unlike the SCM, which relies on the statistical characteristics of the sample data, our proposed estimators take the geometric structure of the matrix space into account, so the performance can be improved in a heterogeneous clutter. At the analysis stage, numerical results are given to validate the detection performance of an adaptive normalized matched filter with our estimator compared with existing alternatives. Full article
(This article belongs to the Special Issue Radar and Information Theory)
Open AccessArticle An Efficient Computational Technique for Fractal Vehicular Traffic Flow
Entropy 2018, 20(4), 259; doi:10.3390/e20040259
Received: 13 February 2018 / Revised: 23 March 2018 / Accepted: 3 April 2018 / Published: 9 April 2018
PDF Full-text (266 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we examine a fractal vehicular traffic flow problem. The partial differential equations describing a fractal vehicular traffic flow are solved with the aid of the local fractional homotopy perturbation Sumudu transform scheme and the local fractional reduced differential transform method.
[...] Read more.
In this work, we examine a fractal vehicular traffic flow problem. The partial differential equations describing a fractal vehicular traffic flow are solved with the aid of the local fractional homotopy perturbation Sumudu transform scheme and the local fractional reduced differential transform method. Some illustrative examples are taken to demonstrate the success of the suggested techniques. The results derived with the aid of the suggested schemes reveal that the present schemes are very efficient for obtaining the non-differentiable solution to the fractal vehicular traffic flow problem. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open AccessArticle Shannon Entropy of Binary Wavelet Packet Subbands and Its Application in Bearing Fault Extraction
Entropy 2018, 20(4), 260; doi:10.3390/e20040260
Received: 28 February 2018 / Revised: 4 April 2018 / Accepted: 4 April 2018 / Published: 9 April 2018
PDF Full-text (10707 KB) | HTML Full-text | XML Full-text
Abstract
The fast spectrum kurtosis (FSK) algorithm can adaptively identify and select the resonant frequency band and extract the fault feature via the envelope demodulation method. However, the FSK method has some limitations due to its susceptibility to noise and random knocks. To overcome
[...] Read more.
The fast spectrum kurtosis (FSK) algorithm can adaptively identify and select the resonant frequency band and extract the fault feature via the envelope demodulation method. However, the FSK method has some limitations due to its susceptibility to noise and random knocks. To overcome this shortcoming, a new method is proposed in this paper. Firstly, we use the binary wavelet packet transform (BWPT) instead of the finite impulse response (FIR) filter bank as the frequency band segmentation method. Following this, the Shannon entropy of each frequency band is calculated. The appropriate center frequency and bandwidth are chosen for filtering by using the inverse of the Shannon entropy as the index. Finally, the envelope spectrum of the filtered signal is analyzed and the fault feature information is obtained from the envelope spectrum. Through simulation and experimental verification, we found that Shannon entropy is, to some extent, better than kurtosis as a frequency-selective index, and that the Shannon entropy of the binary wavelet packet transform method is more accurate for fault feature extraction. Full article
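As an illustration of the band-selection step, the sketch below decomposes a vibration signal with a wavelet packet transform (via PyWavelets), computes the Shannon entropy of each terminal subband, and returns the subband with the smallest entropy, i.e., the largest inverse entropy, for subsequent envelope analysis. The wavelet name and decomposition level are assumptions rather than the paper's settings.

```python
import numpy as np
import pywt

def shannon_entropy(coeffs):
    """Shannon entropy of a subband's normalized coefficient energy."""
    energy = np.asarray(coeffs)**2
    p = energy / energy.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def select_subband(signal, wavelet='db4', level=3):
    """Path of the wavelet-packet node with the smallest Shannon entropy."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode='symmetric', maxlevel=level)
    nodes = wp.get_level(level, order='freq')
    entropies = {node.path: shannon_entropy(node.data) for node in nodes}
    return min(entropies, key=entropies.get)   # candidate resonant band
```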
Open AccessArticle A Novel Entropy-Based Centrality Approach for Identifying Vital Nodes in Weighted Networks
Entropy 2018, 20(4), 261; doi:10.3390/e20040261
Received: 20 March 2018 / Revised: 30 March 2018 / Accepted: 7 April 2018 / Published: 9 April 2018
PDF Full-text (4232 KB) | HTML Full-text | XML Full-text
Abstract
Measuring centrality has recently attracted increasing attention, with algorithms ranging from those that simply calculate the number of immediate neighbors and the shortest paths to those that are complicated iterative refinement processes and objective dynamical approaches. Indeed, vital nodes identification allows us to
[...] Read more.
Measuring centrality has recently attracted increasing attention, with algorithms ranging from those that simply calculate the number of immediate neighbors and the shortest paths to those that are complicated iterative refinement processes and objective dynamical approaches. Indeed, vital nodes identification allows us to understand the roles that different nodes play in the structure of a network. However, quantifying centrality in complex networks with various topological structures is not an easy task. In this paper, we introduce a novel definition of entropy-based centrality, which is applicable to weighted directed networks. By design, the total power of a node is divided into two parts, including its local power and its indirect power. The local power can be obtained by integrating the structural entropy, which reveals the communication activity and popularity of each node, and the interaction frequency entropy, which indicates its accessibility. In addition, the process of influence propagation can be captured by the two-hop subnetworks, resulting in the indirect power. In order to evaluate the performance of the entropy-based centrality, we use four weighted real-world networks with various instance sizes, degree distributions, and densities: adolescent health, Bible, United States (US) airports, and Hep-th. Extensive analytical results demonstrate that the entropy-based centrality outperforms degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Full article
Open AccessArticle On a Robust MaxEnt Process Regression Model with Sample-Selection
Entropy 2018, 20(4), 262; doi:10.3390/e20040262
Received: 2 February 2018 / Revised: 3 April 2018 / Accepted: 7 April 2018 / Published: 9 April 2018
PDF Full-text (437 KB) | HTML Full-text | XML Full-text
Abstract
In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function
[...] Read more.
In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance. Full article
Open AccessArticle Multichannel Signals Reconstruction Based on Tunable Q-Factor Wavelet Transform-Morphological Component Analysis and Sparse Bayesian Iteration for Rotating Machines
Entropy 2018, 20(4), 263; doi:10.3390/e20040263
Received: 12 March 2018 / Revised: 3 April 2018 / Accepted: 3 April 2018 / Published: 10 April 2018
PDF Full-text (17753 KB) | HTML Full-text | XML Full-text
Abstract
High-speed remote transmission and large-capacity data storage are difficult issues in signals acquisition of rotating machines condition monitoring. To address these concerns, a novel multichannel signals reconstruction approach based on tunable Q-factor wavelet transform-morphological component analysis (TQWT-MCA) and sparse Bayesian iteration algorithm
[...] Read more.
High-speed remote transmission and large-capacity data storage are difficult issues in signals acquisition of rotating machines condition monitoring. To address these concerns, a novel multichannel signals reconstruction approach based on tunable Q-factor wavelet transform-morphological component analysis (TQWT-MCA) and sparse Bayesian iteration algorithm combined with a step-impulse dictionary is proposed under the framework of compressed sensing (CS). To begin with, to prevent the loss of periodical impulses and to effectively separate them from external noise and additive interference components, the TQWT-MCA method is introduced to divide the raw vibration signal into a low-resonance component (LRC, i.e., periodical impulses) and a high-resonance component (HRC); thus, the periodical impulses are preserved effectively. Then, according to the amplitude range of the generated LRC, the step-impulse dictionary atom is designed to match the physical structure of the periodical impulses. Furthermore, the periodical impulses and the HRC are reconstructed by the sparse Bayesian iteration combined with the step-impulse dictionary, and the final reconstructed raw signals are obtained by adding the LRC and HRC; the fidelity of the final reconstructed signals is tested by the envelope spectrum and error analysis. In this work, the proposed algorithm is applied to a simulated signal and engineering multichannel signals of a gearbox with multiple faults. Experimental results demonstrate that the proposed approach significantly improves the reconstruction accuracy compared with state-of-the-art methods such as non-convex Lq (q = 0.5) regularization, spatiotemporal sparse Bayesian learning (SSBL) and the L1-norm. Additionally, the processing time is reduced, i.e., the speed of storage and transmission increases dramatically. More importantly, the fault characteristics of the gearbox with multiple faults are detected and preserved: the bearing outer race fault frequency at 170.7 Hz and its harmonics at 341.3 Hz, the ball fault frequency at 7.344 Hz and its harmonics at 15.0 Hz, and the gear fault frequency at 23.36 Hz and its harmonics at 47.42 Hz are identified in the envelope spectrum. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open AccessArticle Transductive Feature Selection Using Clustering-Based Sample Entropy for Temperature Prediction in Weather Forecasting
Entropy 2018, 20(4), 264; doi:10.3390/e20040264
Received: 27 February 2018 / Revised: 30 March 2018 / Accepted: 7 April 2018 / Published: 10 April 2018
PDF Full-text (1755 KB) | HTML Full-text | XML Full-text
Abstract
Entropy measures have been a major interest of researchers to measure the information content of a dynamical system. One of the well-known methodologies is sample entropy, which is a model-free approach and can be deployed to measure the information transfer in time series.
[...] Read more.
Entropy measures have been a major interest of researchers to measure the information content of a dynamical system. One of the well-known methodologies is sample entropy, which is a model-free approach and can be deployed to measure the information transfer in time series. Sample entropy is based on the conditional entropy where a major concern is the number of past delays in the conditional term. In this study, we deploy a lag-specific conditional entropy to identify the informative past values. Moreover, considering the seasonality structure of data, we propose a clustering-based sample entropy to exploit the temporal information. Clustering-based sample entropy is based on the sample entropy definition while considering the clustering information of the training data and the membership of the test point to the clusters. In this study, we utilize the proposed method for transductive feature selection in black-box weather forecasting and conduct the experiments on minimum and maximum temperature prediction in Brussels for 1–6 days ahead. The results reveal that considering the local structure of the data can improve the feature selection performance. In addition, despite the large reduction in the number of features, the performance is competitive with the case of using all features. Full article
(This article belongs to the Section Information Theory)
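For reference, the sketch below implements a plain version of the sample entropy that the proposed clustering-based variant builds on; the embedding dimension and tolerance are the usual defaults and are assumptions here, and the template counting follows a simple, common convention rather than the paper's exact definition.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D time series (simple variant).

    Counts pairs of templates of length m and m + 1 whose Chebyshev
    distance is at most r, and returns -ln(A / B).
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)        # matches of length m
    a = count_matches(m + 1)    # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```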
Open AccessArticle Exergy Analysis and Human Body Thermal Comfort Conditions: Evaluation of Different Body Compositions
Entropy 2018, 20(4), 265; doi:10.3390/e20040265
Received: 20 February 2018 / Revised: 5 April 2018 / Accepted: 8 April 2018 / Published: 10 April 2018
PDF Full-text (1000 KB) | HTML Full-text | XML Full-text
Abstract
This article focuses on studying the effects of muscle and fat percentages on the exergy behavior of the human body under several environmental conditions. The main objective is to relate the thermal comfort indicators with exergy rates, resulting in a Second Law perspective
[...] Read more.
This article focuses on studying the effects of muscle and fat percentages on the exergy behavior of the human body under several environmental conditions. The main objective is to relate the thermal comfort indicators with exergy rates, resulting in a Second Law perspective to evaluate the thermal environment. A phenomenological model of the human body with four layers (core, muscle, fat and skin) is proposed. The choice of a simplified model is justified by the ease of varying the amount of mass in each tissue without knowing how it is distributed around the body. After validation, the model was subjected to a set of environmental conditions and body compositions. The results obtained indicate that area normalization (watts per square meter) may be used as a safe generalization for the exergy transfer to the environment. Moreover, the destroyed exergy itself is sufficient to evaluate the thermal sensation when the model is subjected to environmental temperatures lower than that considered for the thermal neutrality condition (and, in this text, thermal comfort). Nevertheless, for environments with temperatures higher than that calculated for thermal neutrality, the combination of destroyed exergy and the rate of exergy transferred to the environment should be used to properly evaluate thermal comfort. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Nonclassicality by Local Gaussian Unitary Operations for Gaussian States
Entropy 2018, 20(4), 266; doi:10.3390/e20040266
Received: 19 January 2018 / Revised: 16 March 2018 / Accepted: 6 April 2018 / Published: 11 April 2018
PDF Full-text (1276 KB) | HTML Full-text | XML Full-text
Abstract
A measure of nonclassicality N in terms of local Gaussian unitary operations for bipartite Gaussian states is introduced. N is a faithful quantum correlation measure for Gaussian states, as product states have no such correlation and every non-product Gaussian state contains it.
[...] Read more.
A measure of nonclassicality N in terms of local Gaussian unitary operations for bipartite Gaussian states is introduced. N is a faithful quantum correlation measure for Gaussian states, as product states have no such correlation and every non-product Gaussian state contains it. For any bipartite Gaussian state ρAB, we always have 0 ≤ N(ρAB) < 1, where the upper bound 1 is sharp. An explicit formula of N for (1+1)-mode Gaussian states and an estimate of N for (n+m)-mode Gaussian states are presented. A criterion of entanglement is established in terms of this correlation. The quantum correlation N is also compared with entanglement, Gaussian discord and Gaussian geometric discord. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Nash Bargaining Game-Theoretic Framework for Power Control in Distributed Multiple-Radar Architecture Underlying Wireless Communication System
Entropy 2018, 20(4), 267; doi:10.3390/e20040267
Received: 31 January 2018 / Revised: 29 March 2018 / Accepted: 6 April 2018 / Published: 11 April 2018
PDF Full-text (1010 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a novel Nash bargaining solution (NBS)-based cooperative game-theoretic framework for power control in a distributed multiple-radar architecture underlying a wireless communication system. Our primary objective is to minimize the total power consumption of the distributed multiple-radar system (DMRS) with the
[...] Read more.
This paper presents a novel Nash bargaining solution (NBS)-based cooperative game-theoretic framework for power control in a distributed multiple-radar architecture underlying a wireless communication system. Our primary objective is to minimize the total power consumption of the distributed multiple-radar system (DMRS) with the protection of the wireless communication user’s transmission, while guaranteeing each radar’s target detection requirement. A unified cooperative game-theoretic framework is proposed for the optimization problem, where interference power constraints (IPCs) are imposed to protect the communication user’s transmission, and a minimum signal-to-interference-plus-noise ratio (SINR) requirement is employed to provide reliable target detection for each radar. The existence, uniqueness and fairness of the NBS to this cooperative game are proven. An iterative Nash bargaining power control algorithm with low computational complexity and fast convergence is developed and is shown to converge to a Pareto-optimal equilibrium for the cooperative game model. Numerical simulations and analyses are further presented to highlight the advantages and verify the efficiency of our proposed cooperative game algorithm. It is demonstrated that the distributed algorithm is effective for power control and can protect the communication system with limited implementation overhead. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
Open AccessArticle Distance Entropy Cartography Characterises Centrality in Complex Networks
Entropy 2018, 20(4), 268; doi:10.3390/e20040268
Received: 28 February 2018 / Revised: 4 April 2018 / Accepted: 5 April 2018 / Published: 11 April 2018
PDF Full-text (811 KB) | HTML Full-text | XML Full-text
Abstract
We introduce distance entropy as a measure of homogeneity in the distribution of path lengths between a given node and its neighbours in a complex network. Distance entropy defines a new centrality measure whose properties are investigated for a variety of synthetic network
[...] Read more.
We introduce distance entropy as a measure of homogeneity in the distribution of path lengths between a given node and its neighbours in a complex network. Distance entropy defines a new centrality measure whose properties are investigated for a variety of synthetic network models. By coupling distance entropy information with closeness centrality, we introduce a network cartography which allows one to reduce the degeneracy of ranking based on closeness alone. We apply this methodology to the empirical multiplex lexical network encoding the linguistic relationships known to English-speaking toddlers. We show that the distance entropy cartography better predicts how children learn words compared to closeness centrality. Our results highlight the importance of distance entropy for gaining insights from distance patterns in complex networks. Full article
(This article belongs to the Special Issue Graph and Network Entropies)
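A minimal sketch of one way to compute such a distance-based entropy with networkx: take the distribution of shortest-path lengths from a node to all other reachable nodes and evaluate its Shannon entropy, so that homogeneous distance profiles score high. The paper's exact normalization and the set of neighbours it considers may differ.

```python
import math
from collections import Counter
import networkx as nx

def distance_entropy(G, node):
    """Shannon entropy of the shortest-path-length distribution from node."""
    lengths = dict(nx.single_source_shortest_path_length(G, node))
    lengths.pop(node, None)                      # ignore the node itself
    counts = Counter(lengths.values())
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Example on a small path graph
G = nx.path_graph(6)
print({v: round(distance_entropy(G, v), 3) for v in G})
```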
Open AccessArticle A Decentralized Receiver in Gaussian Interference
Entropy 2018, 20(4), 269; doi:10.3390/e20040269
Received: 1 February 2018 / Revised: 6 April 2018 / Accepted: 9 April 2018 / Published: 11 April 2018
PDF Full-text (411 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Bounds are developed on the maximum communications rate between a transmitter and a fusion node aided by a cluster of distributed receivers with limited resources for cooperation, all in the presence of an additive Gaussian interferer. The receivers cannot communicate with one another
[...] Read more.
Bounds are developed on the maximum communications rate between a transmitter and a fusion node aided by a cluster of distributed receivers with limited resources for cooperation, all in the presence of an additive Gaussian interferer. The receivers cannot communicate with one another and can only convey processed versions of their observations to the fusion center through a Local Array Network (LAN) with limited total throughput. The effectiveness of each bound’s approach for mitigating a strong interferer is assessed over a wide range of channels. It is seen that, if resources are shared effectively, even a simple quantize-and-forward strategy can mitigate an interferer 20 dB stronger than the signal in a diverse range of spatially Ricean channels. Monte-Carlo experiments for the bounds reveal that, while achievable rates are stable when varying the receiver’s observed scattered-path to line-of-sight signal power, the receivers must adapt how they share resources in response to this change. The bounds analyzed are proven to be achievable and are seen to be tight with capacity when LAN resources are either ample or limited. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Non-Hermitian Operator Modelling of Basic Cancer Cell Dynamics
Entropy 2018, 20(4), 270; doi:10.3390/e20040270
Received: 22 March 2018 / Revised: 31 March 2018 / Accepted: 8 April 2018 / Published: 11 April 2018
PDF Full-text (1261 KB) | HTML Full-text | XML Full-text
Abstract
We propose a dynamical system of tumor cell proliferation based on operatorial methods. The approach we propose is quantum-like: we use ladder and number operators to describe healthy and tumor cell birth and death, and the evolution is ruled by a non-Hermitian
[...] Read more.
We propose a dynamical system of tumor cell proliferation based on operatorial methods. The approach we propose is quantum-like: we use ladder and number operators to describe healthy and tumor cell birth and death, and the evolution is ruled by a non-Hermitian Hamiltonian which includes, in a non-reversible way, the basic biological mechanisms we consider for the system. We show that this approach is rather efficient in describing some processes of the cells. We further add a medical treatment, described by adding a suitable term to the Hamiltonian, which controls and limits the growth of tumor cells, and we propose an optimal approach to stop, and reverse, this growth. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
Open AccessArticle BROJA-2PID: A Robust Estimator for Bivariate Partial Information Decomposition
Entropy 2018, 20(4), 271; doi:10.3390/e20040271
Received: 22 February 2018 / Revised: 27 March 2018 / Accepted: 9 April 2018 / Published: 11 April 2018
PDF Full-text (754 KB) | HTML Full-text | XML Full-text
Abstract
Makkeh, Theis, and Vicente found that the Cone Programming model is the most robust for computing the Bertschinger et al. partial information decomposition (BROJA PID) measure. We developed production-quality, robust software that computes the BROJA PID measure based on the Cone Programming model.
[...] Read more.
Makkeh, Theis, and Vicente found that the Cone Programming model is the most robust for computing the Bertschinger et al. partial information decomposition (BROJA PID) measure. We developed production-quality, robust software that computes the BROJA PID measure based on the Cone Programming model. In this paper, we prove the important property of strong duality for the Cone Program and prove an equivalence between the Cone Program and the original convex problem. Then, we describe our software in detail, explain how to use it, and perform some experiments comparing it to other estimators. Finally, we show that the software can be extended to compute some quantities of a trivariate PID measure. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle R-Norm Entropy and R-Norm Divergence in Fuzzy Probability Spaces
Entropy 2018, 20(4), 272; doi:10.3390/e20040272
Received: 12 March 2018 / Revised: 4 April 2018 / Accepted: 9 April 2018 / Published: 11 April 2018
PDF Full-text (352 KB) | HTML Full-text | XML Full-text
Abstract
In the presented article, we define the R-norm entropy and the conditional R-norm entropy of partitions of a given fuzzy probability space and study the properties of the suggested entropy measures. In addition, we introduce the concept of R-norm divergence
[...] Read more.
In the presented article, we define the R-norm entropy and the conditional R-norm entropy of partitions of a given fuzzy probability space and study the properties of the suggested entropy measures. In addition, we introduce the concept of R-norm divergence of fuzzy P-measures and we derive fundamental properties of this quantity. Specifically, it is shown that the Shannon entropy and the conditional Shannon entropy of fuzzy partitions can be derived from the R-norm entropy and conditional R-norm entropy of fuzzy partitions, respectively, as the limiting cases for R going to 1; the Kullback–Leibler divergence of fuzzy P-measures may be inferred from the R-norm divergence of fuzzy P-measures as the limiting case for R going to 1. We also provide numerical examples that illustrate the results. Full article
(This article belongs to the Section Information Theory)
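For a crisp (non-fuzzy) probability distribution, the R-norm entropy referred to above has the classical closed form implemented below; the fuzzy-partition version studied in the paper generalizes this quantity. The numerical check that it approaches the Shannon entropy as R tends to 1 mirrors the limiting behaviour stated in the abstract.

```python
import numpy as np

def r_norm_entropy(p, R):
    """Classical R-norm entropy H_R(P) = R/(R-1) * (1 - (sum p_i^R)^(1/R)),
    defined for R > 0, R != 1; it tends to the Shannon entropy as R -> 1."""
    p = np.asarray(p, dtype=float)
    return R / (R - 1.0) * (1.0 - np.sum(p**R)**(1.0 / R))

p = np.array([0.5, 0.25, 0.25])
shannon = -np.sum(p * np.log(p))
print(r_norm_entropy(p, 1.001), shannon)   # nearly equal for R close to 1
```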
Open AccessArticle KL Divergence-Based Fuzzy Cluster Ensemble for Image Segmentation
Entropy 2018, 20(4), 273; doi:10.3390/e20040273
Received: 31 January 2018 / Revised: 13 March 2018 / Accepted: 28 March 2018 / Published: 12 April 2018
PDF Full-text (1967 KB) | HTML Full-text | XML Full-text
Abstract
Ensemble clustering combines different basic partitions of a dataset into a more stable and robust one. Thus, cluster ensemble plays a significant role in applications like image segmentation. However, existing ensemble methods have a few demerits, including the lack of diversity of basic
[...] Read more.
Ensemble clustering combines different basic partitions of a dataset into a more stable and robust one. Thus, cluster ensemble plays a significant role in applications like image segmentation. However, existing ensemble methods have a few demerits, including the lack of diversity of basic partitions and the low accuracy caused by data noise. In this paper, to overcome these difficulties, we propose an efficient fuzzy cluster ensemble method based on the Kullback–Leibler divergence or, simply, the KL divergence. The data are first classified with distinct fuzzy clustering methods. Then, the soft clustering results are aggregated by a fuzzy KL divergence-based objective function. Moreover, for image segmentation problems, we utilize the local spatial information in the cluster ensemble algorithm to suppress the effect of noise. Experimental results reveal that the proposed methods outperform many other methods in synthetic and real image-segmentation problems. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
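The aggregation objective mentioned above rests on the KL divergence between soft cluster memberships; a minimal sketch of that building block is given below, with names and the example memberships chosen purely for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two membership distributions."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q))

# Divergence of one pixel's fuzzy memberships from a consensus assignment
pixel_membership = [0.7, 0.2, 0.1]
consensus = [0.6, 0.3, 0.1]
print(kl_divergence(pixel_membership, consensus))
```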
Open AccessArticle Polynomial-Time Algorithm for Learning Optimal BFS-Consistent Dynamic Bayesian Networks
Entropy 2018, 20(4), 274; doi:10.3390/e20040274
Received: 22 March 2018 / Revised: 5 April 2018 / Accepted: 10 April 2018 / Published: 12 April 2018
PDF Full-text (1150 KB) | HTML Full-text | XML Full-text
Abstract
Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that
[...] Read more.
Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm for learning optimal DBNs consistent with a breadth-first search (BFS) order, named bcDBN. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach increases exponentially, in the number of variables, the search space of the state-of-the-art tDBN algorithm. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well throughout different experiments. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open AccessArticle Effect of Solidification on Microstructure and Properties of FeCoNi(AlSi)0.2 High-Entropy Alloy Under Strong Static Magnetic Field
Entropy 2018, 20(4), 275; doi:10.3390/e20040275
Received: 19 March 2018 / Revised: 30 March 2018 / Accepted: 30 March 2018 / Published: 12 April 2018
PDF Full-text (21209 KB) | HTML Full-text | XML Full-text
Abstract
Strong static magnetic field (SSMF) is a unique way to regulate the microstructure and improve the properties of materials. FeCoNi(AlSi)0.2 belongs to a novel class of soft magnetic materials (SMMs) designed based on high-entropy alloy (HEA) concepts. In this study, a strong
[...] Read more.
Strong static magnetic field (SSMF) is a unique way to regulate the microstructure and improve the properties of materials. FeCoNi(AlSi)0.2 belongs to a novel class of soft magnetic materials (SMMs) designed based on high-entropy alloy (HEA) concepts. In this study, a strong static magnetic field is introduced to tune the microstructure and the mechanical, electrical and magnetic properties of the FeCoNi(AlSi)0.2 high-entropy alloy. Results indicate that, with increasing magnetic field intensity, the Vickers hardness and the saturation magnetization (Ms) first increase, reach their maxima at 5 T, and then decrease, while the yield strength, the residual magnetization (Mr) and the coercivity (Hc) follow the opposite trend. The resistivity values (ρ) are found to be enhanced by increasing magnetic field intensity. The main effects of the magnetic field are interpreted in terms of microstructure evolution (phase species and volume fraction), atomic-level structure, and defects (vacancy and dislocation density). Full article
Open AccessArticle Admissible Consensus for Descriptor Multi-Agent Systems with Exogenous Disturbances
Entropy 2018, 20(4), 276; doi:10.3390/e20040276
Received: 23 March 2018 / Revised: 10 April 2018 / Accepted: 10 April 2018 / Published: 12 April 2018
PDF Full-text (603 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we study the admissible consensus for descriptor multi-agent systems (MASs) with exogenous disturbances that are generated by some linear systems. The topology among agents is represented by a directed graph. For solving the admissible consensus problem, the exogenous disturbance observer
[...] Read more.
In this paper, we study the admissible consensus for descriptor multi-agent systems (MASs) with exogenous disturbances that are generated by some linear systems. The topology among agents is represented by a directed graph. For solving the admissible consensus problem, the exogenous disturbance observer and distributed control protocol are proposed. With the help of graph theory and the generalized Riccati equation, some conditions for admissible consensus of descriptor MASs with exogenous disturbances are obtained. Finally, we provide a numerical simulation to illustrate the results obtained above. Full article
(This article belongs to the Section Complexity)
Open AccessArticle Exact Expressions of Spin-Spin Correlation Functions of the Two-Dimensional Rectangular Ising Model on a Finite Lattice
Entropy 2018, 20(4), 277; doi:10.3390/e20040277
Received: 8 February 2018 / Revised: 29 March 2018 / Accepted: 10 April 2018 / Published: 12 April 2018
PDF Full-text (404 KB) | HTML Full-text | XML Full-text
Abstract
We employ the spinor analysis method to evaluate exact expressions of spin-spin correlation functions of the two-dimensional rectangular Ising model on a finite lattice; a special procedure enables us to actually carry out the calculation. We first present some exact expressions of correlation
[...] Read more.
We employ the spinor analysis method to evaluate exact expressions of spin-spin correlation functions of the two-dimensional rectangular Ising model on a finite lattice; a special procedure enables us to actually carry out the calculation. We first present some exact expressions of correlation functions of the model with periodic-periodic boundary conditions on a finite lattice. The corresponding forms in the thermodynamic limit are presented, which show the short-range order. Then, we present the exact expression of the correlation function of the farthest pair of spins in a column of the model with periodic-free boundary conditions on a finite lattice. Again, the corresponding form in the thermodynamic limit is discussed, from which the long-range order clearly emerges as the temperature decreases. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Open AccessArticle Contextuality Analysis of the Double Slit Experiment (with a Glimpse into Three Slits)
Entropy 2018, 20(4), 278; doi:10.3390/e20040278
Received: 31 January 2018 / Revised: 26 March 2018 / Accepted: 9 April 2018 / Published: 12 April 2018
PDF Full-text (1195 KB) | HTML Full-text | XML Full-text
Abstract
The Contextuality-by-Default theory is illustrated on contextuality analysis of the idealized double-slit experiment. The experiment is described by a system of contextually labeled binary random variables each of which answers the question: Has the particle hit the detector, having passed through a given
[...] Read more.
The Contextuality-by-Default theory is illustrated by a contextuality analysis of the idealized double-slit experiment. The experiment is described by a system of contextually labeled binary random variables, each of which answers the question: Has the particle hit the detector, having passed through a given slit (left or right) in a given state (open or closed)? This system of random variables is a cyclic system of rank 4, formally the same as the system describing the Einstein–Podolsky–Rosen–Bell paradigm with signaling. Unlike the latter, however, the system describing the double-slit experiment is always noncontextual, i.e., the context-dependence in it is entirely explainable in terms of direct influences of contexts (closed-open arrangements of the slits) upon the marginal distributions of the random variables involved. The analysis presented is entirely within the framework of abstract classical probability theory (with contextually labeled random variables). The only physical constraint used in the analysis is that a particle cannot pass through a closed slit. The noncontextuality of the double-slit system does not generalize to systems describing experiments with more than two slits: in an abstract triple-slit system, almost any set of observable detection probabilities is compatible with both a contextual scenario and a noncontextual scenario of the particle passing through various combinations of open and closed slits (although the issue of physical realizability of these scenarios remains open). Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
Open AccessArticle Relation between Self-Organization and Wear Mechanisms of Diamond Films
Entropy 2018, 20(4), 279; doi:10.3390/e20040279
Received: 1 February 2018 / Revised: 5 April 2018 / Accepted: 10 April 2018 / Published: 13 April 2018
PDF Full-text (10538 KB) | HTML Full-text | XML Full-text
Abstract
The study deals with tribological properties of diamond films that were tested under reciprocal sliding conditions against Si3N4 balls. Adhesive and abrasive wear are explained in terms of nonequilibrium thermodynamic model of friction and wear. Surface roughness alteration and film
[...] Read more.
The study deals with the tribological properties of diamond films tested under reciprocal sliding conditions against Si3N4 balls. Adhesive and abrasive wear are explained in terms of a nonequilibrium thermodynamic model of friction and wear. Surface roughness alteration and film deformation induce instabilities in the tribological system; therefore, self-organization can occur. Instabilities can lead to an increase of the real contact area between the ball and the film, resulting in seizure between the sliding counterparts (the degenerative case of self-organization). The material cannot withstand the stress and collapses under the high friction forces, so this regime of sliding corresponds to adhesive wear. In contrast, a decrease of the real contact area leads to a decrease of the coefficient of friction (constructive self-organization). However, it results in a contact pressure increase on the tops of asperities within the contact zone, followed by material collapse, i.e., abrasive wear. These wear mechanisms should be distinguished from the self-lubricating properties of diamond due to the formation of a carbonaceous layer. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
Open AccessArticle Towards Experiments to Test Violation of the Original Bell Inequality
Entropy 2018, 20(4), 280; doi:10.3390/e20040280
Received: 16 February 2018 / Revised: 29 March 2018 / Accepted: 11 April 2018 / Published: 13 April 2018
PDF Full-text (257 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to attract the attention of experimenters to the original Bell (OB) inequality that was shadowed by the common consideration of the Clauser–Horne–Shimony–Holt (CHSH) inequality. There are two reasons to test the OB inequality and not the CHSH
[...] Read more.
The aim of this paper is to attract the attention of experimenters to the original Bell (OB) inequality, which has been shadowed by the common consideration of the Clauser–Horne–Shimony–Holt (CHSH) inequality. There are two reasons to test the OB inequality rather than the CHSH inequality. First of all, the OB inequality is a straightforward consequence of the Einstein–Podolsky–Rosen (EPR) argumentation. In addition, only this inequality is directly related to the EPR–Bohr debate. The second distinguishing feature of the OB inequality was emphasized by Itamar Pitowsky. He pointed out that the OB inequality provides a higher degree of violation of classicality than the CHSH inequality. For the CHSH inequality, the ratio of the quantum (Tsirelson) bound $Q_{\mathrm{CHSH}} = 2\sqrt{2}$ to the classical bound $C_{\mathrm{CHSH}} = 2$, i.e., $F_{\mathrm{CHSH}} = Q_{\mathrm{CHSH}}/C_{\mathrm{CHSH}} = \sqrt{2}$, is less than the corresponding ratio for the OB inequality, $Q_{\mathrm{OB}} = 3/2$ to $C_{\mathrm{OB}} = 1$, i.e., $F_{\mathrm{OB}} = Q_{\mathrm{OB}}/C_{\mathrm{OB}} = 3/2$. Thus, by violating the OB inequality, it is possible to approach a higher degree of deviation from classicality. The main problem is that the OB inequality is derived under the assumption of perfect (anti-)correlations. However, the last few years have been characterized by an amazing development of quantum technologies. Nowadays, there exist sources producing, with very high probability, pairs of photons in the singlet state. Moreover, the efficiency of photon detectors has improved tremendously. In any event, one can start by proceeding with the fair sampling assumption. Another possibility is to use the scheme of the Hensen et al. experiment for entangled electrons, where the detection efficiency is very high. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
Open AccessArticle Cross Mean Annual Runoff Pseudo-Elasticity of Entropy for Quaternary Catchments of the Upper Vaal Catchment in South Africa
Entropy 2018, 20(4), 281; doi:10.3390/e20040281
Received: 2 November 2017 / Revised: 21 December 2017 / Accepted: 21 December 2017 / Published: 13 April 2018
PDF Full-text (4549 KB) | HTML Full-text | XML Full-text
Abstract
This study focuses preliminarily on the intra-tertiary catchment (TC) assessment of cross MAR pseudo-elasticity of entropy, which determines the impact of changes in MAR for a quaternary catchment (QC) on the entropy of another (other) QC(s). The TCs of the Upper Vaal catchment
[...] Read more.
This study focuses preliminarily on the intra-tertiary catchment (TC) assessment of cross mean annual runoff (MAR) pseudo-elasticity of entropy, which determines the impact of changes in the MAR of one quaternary catchment (QC) on the entropy of another (other) QC(s). The TCs of the Upper Vaal catchment were used for this assessment, together with the surface water resources (WR) of South Africa 1990 (WR90), 2005 (WR2005) and 2012 (WR2012) data sets. The TCs are grouped into three secondary catchments, i.e., downstream of Vaal Dam, upstream of Vaal Dam and Wilge. It is revealed that there are linkages in terms of MAR between QCs, which can be complements (negative cross elasticity) or substitutes (positive cross elasticity). It is shown that cross MAR pseudo-elasticity can be translated into correlation strength between QC pairs, i.e., high cross elasticity (low catchment resilience) and low cross elasticity (high catchment resilience). Implicitly, catchment resilience is shown to be associated with the risk of vulnerability (or sustainability level) of water resources, in terms of MAR, which is generally low (or high). In addition, for each TC, the dominance (of complements or substitutes) and the globally highest cross MAR elasticity are determined. The overall average cross MAR elasticity of QCs for each TC was shown to be in the zone of tolerable entropy, hence the zone of functioning resilience. This suggests that water resources remained fairly sustainable in the TCs that form the secondary catchments of the Upper Vaal. The cross MAR pseudo-elasticity concept could be further extended to an intra-secondary catchment assessment. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
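The abstract above does not spell out the elasticity formula, so the following is only an assumed illustration using the standard arc-elasticity form, with $H_i$ the entropy of QC $i$ and $\mathrm{MAR}_j$ the mean annual runoff of QC $j$; the sign convention matches the complement/substitute reading given above.

```latex
% Hypothetical cross MAR pseudo-elasticity of entropy (arc-elasticity form):
e_{ij} = \frac{\Delta H_i / H_i}{\Delta \mathrm{MAR}_j / \mathrm{MAR}_j},
\qquad e_{ij} < 0 \ \text{(complements)}, \qquad e_{ij} > 0 \ \text{(substitutes)}.
```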
Open AccessArticle A Symmetric Plaintext-Related Color Image Encryption System Based on Bit Permutation
Entropy 2018, 20(4), 282; doi:10.3390/e20040282
Received: 23 February 2018 / Revised: 10 April 2018 / Accepted: 11 April 2018 / Published: 13 April 2018
PDF Full-text (9174 KB) | HTML Full-text | XML Full-text
Abstract
Recently, a variety of chaos-based image encryption algorithms adopting the traditional permutation-diffusion structure have been suggested. Most of these algorithms cannot efficiently resist the powerful chosen-plaintext and chosen-ciphertext attacks because of their low sensitivity to the plain image. This paper presents a symmetric color image encryption
[...] Read more.
Recently, a variety of chaos-based image encryption algorithms adopting the traditional permutation-diffusion structure have been suggested. Most of these algorithms cannot efficiently resist the powerful chosen-plaintext and chosen-ciphertext attacks because of their low sensitivity to the plain image. This paper presents a symmetric color image encryption system based on a plaintext-related random access bit-permutation mechanism (PRRABPM). In the proposed scheme, a new random access bit-permutation mechanism is used to shuffle the 3D bit matrix transformed from an original color image, making the RGB components of the color image interact with each other. Furthermore, the key streams used in the random access bit-permutation operation depend strongly on the plain image in an ingenious way. Therefore, the encryption system is sensitive to tiny differences in the key and the original image, which means that it can efficiently resist chosen-plaintext and chosen-ciphertext attacks. In the diffusion stage, the previous encrypted pixel is used to encrypt the current pixel. The simulation results show that, even though the permutation-diffusion operation in our encryption scheme is performed only once, the proposed algorithm has favorable security performance. Considering real-time applications, the encryption speed can be further improved. Full article
(This article belongs to the Section Complexity)
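To make the plaintext-related permutation idea concrete, here is a minimal Python sketch (not the authors' exact PRRABPM): the shuffle order for the 3D bit matrix is seeded by a SHA-256 digest of the plain image, so a one-bit change in the plaintext produces a completely different permutation. The hash-based key stream and the single global permutation are illustrative assumptions.

```python
import hashlib
import numpy as np

def plaintext_related_bit_permutation(img):
    """Shuffle the bit planes of an RGB image with a plaintext-dependent key stream.

    img: uint8 array of shape (H, W, 3). Returns (permuted image bytes, permutation).
    """
    bits = np.unpackbits(img, axis=-1)           # (H, W, 24) 3D bit matrix
    flat = bits.reshape(-1)
    # Key stream seeded by a digest of the plain image -> plaintext sensitivity.
    seed = int.from_bytes(hashlib.sha256(img.tobytes()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    perm = rng.permutation(flat.size)            # random access order over all bits
    shuffled = flat[perm].reshape(bits.shape)
    return np.packbits(shuffled, axis=-1), perm  # back to (H, W, 3) cipher bytes

# Decryption would regenerate `perm` from a shared secret derived from the plain image
# (or transmit it securely) and apply the inverse permutation via np.argsort(perm).
```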
Open AccessArticle TRSWA-BP Neural Network for Dynamic Wind Power Forecasting Based on Entropy Evaluation
Entropy 2018, 20(4), 283; doi:10.3390/e20040283
Received: 13 March 2018 / Revised: 5 April 2018 / Accepted: 10 April 2018 / Published: 13 April 2018
PDF Full-text (2322 KB) | HTML Full-text | XML Full-text
Abstract
The performance evaluation of wind power forecasting under commercial operating circumstances is critical to a wide range of decision-making situations, yet difficult because of its stochastic nature. This paper first introduces a novel TRSWA-BP neural network, whose learning process is based on
[...] Read more.
The performance evaluation of wind power forecasting under commercial operating circumstances is critical to a wide range of decision-making situations, yet difficult because of its stochastic nature. This paper first introduces a novel TRSWA-BP neural network, whose learning process is based on an efficient tabu, real-coded, small-world optimization algorithm (TRSWA). In order to deal with the strong volatility and stochastic behavior of the wind power sequence, three forecasting models of the TRSWA-BP are presented, which are combined with EMD (empirical mode decomposition), PSR (phase space reconstruction), and EMD-based PSR. The error sequences of the above methods are then proved to have non-Gaussian properties, and a novel criterion of normalized Renyi’s quadratic entropy (NRQE) is proposed to evaluate their dynamic prediction accuracy. Finally, illustrative predictions on the next 1, 4, 6, and 24 h time-scales are examined with historical wind power data under different evaluations. From the results, we observe not only that the proposed models effectively revise the error due to the fluctuation and multi-fractal property of wind power, but also that the NRQE remains a feasible assessment of the stochastic prediction error. Full article
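The abstract does not define NRQE explicitly, so the sketch below is an assumption: it uses the standard Parzen-window estimator of Rényi's quadratic entropy of the forecast-error sequence and normalizes it by the entropy of a Gaussian with the same standard deviation.

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=None):
    """Parzen-window estimate of Renyi's quadratic entropy H2 = -log(integral of p^2)."""
    e = np.asarray(errors, dtype=float)
    n = e.size
    if sigma is None:                            # Silverman-style bandwidth (assumption)
        sigma = 1.06 * e.std() * n ** (-1 / 5)
    diffs = e[:, None] - e[None, :]
    # Information potential: mean of Gaussian kernels of width sigma*sqrt(2)
    v = np.exp(-diffs**2 / (4 * sigma**2)).sum() / (n**2 * 2 * sigma * np.sqrt(np.pi))
    return -np.log(v)

def nrqe(errors):
    """Hypothetical normalization: H2 of the errors divided by H2 of a Gaussian with
    the same standard deviation, for which H2 = log(2*sqrt(pi)*std)."""
    h2 = renyi_quadratic_entropy(errors)
    h2_gauss = np.log(2 * np.sqrt(np.pi) * np.std(errors))
    return h2 / h2_gauss
```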
Open AccessArticle A Co-Opetitive Automated Negotiation Model for Vertical Allied Enterprises Teams and Stakeholders
Entropy 2018, 20(4), 286; doi:10.3390/e20040286
Received: 18 February 2018 / Revised: 2 April 2018 / Accepted: 11 April 2018 / Published: 14 April 2018
PDF Full-text (6068 KB) | HTML Full-text | XML Full-text
Abstract
Upstream and downstream supply chain enterprises often form a tactical vertical alliance to enhance their operational efficiency and maintain their competitive edges in the market. Hence, it is critical for an alliance to collaborate over its internal resources and resolve the profit
[...] Read more.
Upstream and downstream supply chain enterprises often form a tactical vertical alliance to enhance their operational efficiency and maintain their competitive edges in the market. Hence, it is critical for an alliance to collaborate over its internal resources and resolve the profit conflicts among members, so that the functionality required by stakeholders can be fulfilled. As an effective solution, automated negotiation between the vertical allied enterprises team and the stakeholder makes full use of the emerging team advantages and significantly reduces profit conflicts in teams through group decisions rather than unilateral decisions by some leader. In this paper, an automated negotiation model is designed to describe both the collaborative game process among the team members and the competitive negotiation process between the allied team and the stakeholder. Considering the co-opetitive nature of the vertical allied team, the designed model helps the team members make decisions for their own sake, and the team counter-offers for the ongoing negotiation are generated through a non-cooperative game process, where the profit derived from the negotiation result is distributed with the Shapley value method according to the contribution or importance of each team member. Finally, a case study is given to verify the effectiveness of the designed model. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
Open AccessArticle Centered and Averaged Fuzzy Entropy to Improve Fuzzy Entropy Precision
Entropy 2018, 20(4), 287; doi:10.3390/e20040287
Received: 16 March 2018 / Revised: 13 April 2018 / Accepted: 13 April 2018 / Published: 15 April 2018
PDF Full-text (460 KB) | HTML Full-text | XML Full-text
Abstract
Several entropy measures are now widely used to analyze real-world time series. Among them, we can cite approximate entropy, sample entropy and fuzzy entropy (FuzzyEn), the latter one being probably the most efficient among the three. However, FuzzyEn precision depends on the number
[...] Read more.
Several entropy measures are now widely used to analyze real-world time series. Among them, we can cite approximate entropy, sample entropy and fuzzy entropy (FuzzyEn), the latter probably being the most efficient of the three. However, FuzzyEn precision depends on the number of samples in the data under study: the longer the signal, the better. Nevertheless, long signals are often difficult to obtain in real applications. This is why we herein propose a new FuzzyEn that presents better precision than the standard FuzzyEn. This is achieved by increasing the number of samples used in the computation of the entropy measure, without changing the length of the time series. Thus, for the comparison of the patterns, the mean value is no longer a constraint. Moreover, translated patterns are not the only ones considered: reflected, inversed, and glide-reflected patterns are also taken into account. The new measure (the so-called centered and averaged FuzzyEn) is applied to synthetic and biomedical signals. The results show that the centered and averaged FuzzyEn leads to more precise results than the standard FuzzyEn: the relative percentile range is reduced compared to the standard sample entropy and fuzzy entropy measures. The centered and averaged FuzzyEn could now be used in other applications to compare its performance to that of other existing entropy measures. Full article
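For reference, the standard FuzzyEn that the new centered and averaged variant improves upon can be sketched as follows; this is one common formulation (Chebyshev distances between mean-removed templates and a Gaussian-like membership exp(-(d/r)^2)), not the new measure itself, and the default m and r are illustrative.

```python
import numpy as np

def fuzzy_entropy(u, m=2, r=0.2, scale_r_by_std=True):
    """Standard fuzzy entropy (FuzzyEn) of a 1-D series, in one common formulation."""
    u = np.asarray(u, dtype=float)
    if scale_r_by_std:
        r = r * u.std()

    def phi(m):
        n = len(u) - m                               # number of templates of length m
        x = np.array([u[i:i + m] for i in range(n)])
        x = x - x.mean(axis=1, keepdims=True)        # remove local baseline (mean)
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)   # Chebyshev distance
        sim = np.exp(-(d / r) ** 2)                  # fuzzy membership of each pair
        np.fill_diagonal(sim, 0.0)                   # exclude self-matches
        return sim.sum() / (n * (n - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```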
Open AccessArticle Statistical Reasoning: Choosing and Checking the Ingredients, Inferences Based on a Measure of Statistical Evidence with Some Applications
Entropy 2018, 20(4), 289; doi:10.3390/e20040289
Received: 17 February 2018 / Revised: 5 April 2018 / Accepted: 11 April 2018 / Published: 16 April 2018
PDF Full-text (406 KB) | HTML Full-text | XML Full-text
Abstract
The features of a logically sound approach to a theory of statistical reasoning are discussed. A particular approach that satisfies these criteria is reviewed. This is seen to involve selection of a model, model checking, elicitation of a prior, checking the prior for
[...] Read more.
The features of a logically sound approach to a theory of statistical reasoning are discussed. A particular approach that satisfies these criteria is reviewed. This is seen to involve selection of a model, model checking, elicitation of a prior, checking the prior for bias, checking for prior-data conflict and estimation and hypothesis assessment inferences based on a measure of evidence. A long-standing anomalous example is resolved by this approach to inference and an application is made to a practical problem of considerable importance, which, among other novel aspects of the analysis, involves the development of a relevant elicitation algorithm. Full article
(This article belongs to the Special Issue Foundations of Statistics)
Open AccessArticle Optimization of CNN through Novel Training Strategy for Visual Classification Problems
Entropy 2018, 20(4), 290; doi:10.3390/e20040290
Received: 31 January 2018 / Revised: 30 March 2018 / Accepted: 14 April 2018 / Published: 17 April 2018
PDF Full-text (8832 KB) | HTML Full-text | XML Full-text
Abstract
The convolution neural network (CNN) has achieved state-of-the-art performance in many computer vision applications e.g., classification, recognition, detection, etc. However, the global optimization of CNN training is still a problem. Fast classification and training play a key role in the development of the
[...] Read more.
The convolution neural network (CNN) has achieved state-of-the-art performance in many computer vision applications, e.g., classification, recognition, detection, etc. However, the global optimization of CNN training is still a problem. Fast classification and training play a key role in the development of the CNN. We hypothesize that the smoother and more optimized the training of a CNN is, the more efficient the end result becomes. Therefore, in this paper, we implement a modified resilient backpropagation (MRPROP) algorithm to improve the convergence and efficiency of CNN training. In particular, a tolerant band is introduced to avoid network overtraining, and it is incorporated with the global best concept in the weight updating criteria to allow the training algorithm of the CNN to optimize its weights more swiftly and precisely. For comparison, we present and analyze four different training algorithms for the CNN along with MRPROP, i.e., resilient backpropagation (RPROP), Levenberg–Marquardt (LM), conjugate gradient (CG), and gradient descent with momentum (GDM). Experimental results showcase the merit of the proposed approach on a public face and skin dataset. Full article
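For context, the sign-based RPROP rule that MRPROP modifies looks roughly like the sketch below; the tolerant band and global-best weighting that define MRPROP are the paper's contributions and are not reproduced here, and this simplified variant updates every weight on every step.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One simplified RPROP update: per-weight step sizes adapted from gradient signs.

    w, grad, prev_grad, step are arrays of the same shape; returns updated (w, step).
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Weights move opposite to the gradient sign by the adapted step size.
    w = w - np.sign(grad) * step
    return w, step

# Usage sketch: keep `prev_grad` and `step` between iterations and call
# w, step = rprop_step(w, grad, prev_grad, step) after each gradient evaluation.
```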
Open AccessArticle Measurement-Device Independency Analysis of Continuous-Variable Quantum Digital Signature
Entropy 2018, 20(4), 291; doi:10.3390/e20040291
Received: 22 March 2018 / Revised: 15 April 2018 / Accepted: 16 April 2018 / Published: 17 April 2018
PDF Full-text (767 KB) | HTML Full-text | XML Full-text
Abstract
With the practical implementation of continuous-variable quantum cryptographic protocols, security problems resulting from measurement-device loopholes are being given increasing attention. At present, research on measurement-device independency analysis is limited to quantum key distribution protocols, while there exist different security problems for different protocols.
[...] Read more.
With the practical implementation of continuous-variable quantum cryptographic protocols, security problems resulting from measurement-device loopholes are being given increasing attention. At present, research on measurement-device independency analysis is limited to quantum key distribution protocols, while there exist different security problems for different protocols. Considering the importance of quantum digital signature in quantum cryptography, in this paper we attempt to analyze the measurement-device independency of continuous-variable quantum digital signature, especially continuous-variable quantum homomorphic signature. Firstly, we calculate the upper bound of the error rate of a protocol. If it is negligible on the condition that all measurement devices are untrusted, the protocol is deemed to be measurement-device-independent. Then, we simplify the calculation by using the characteristics of continuous variables and prove the measurement-device independency of the protocol according to the calculation result. In addition, the proposed analysis method can be extended to other quantum cryptographic protocols besides continuous-variable quantum homomorphic signature. Full article
(This article belongs to the collection Quantum Information)
Open AccessArticle Generalized Weyl–Heisenberg Algebra, Qudit Systems and Entanglement Measure of Symmetric States via Spin Coherent States
Entropy 2018, 20(4), 292; doi:10.3390/e20040292
Received: 26 March 2018 / Revised: 13 April 2018 / Accepted: 13 April 2018 / Published: 17 April 2018
PDF Full-text (332 KB) | HTML Full-text | XML Full-text
Abstract
A relation is established in the present paper between Dicke states in a d-dimensional space and vectors in the representation space of a generalized Weyl–Heisenberg algebra of finite dimension d. This provides a natural way to deal with the separable and
[...] Read more.
A relation is established in the present paper between Dicke states in a d-dimensional space and vectors in the representation space of a generalized Weyl–Heisenberg algebra of finite dimension d. This provides a natural way to deal with the separable and entangled states of a system of N = d − 1 symmetric qubit states. Using the decomposition property of Dicke states, it is shown that the separable states coincide with the Perelomov coherent states associated with the generalized Weyl–Heisenberg algebra considered in this paper. In the so-called Majorana scheme, the qudit (d-level) states are represented by N points on the Bloch sphere; roughly speaking, it can be said that a qudit (in a d-dimensional space) is describable by an N-qubit vector (in an N-dimensional space). In such a scheme, the permanent of the matrix describing the overlap between the N qubits makes it possible to measure the entanglement between the N qubits forming the qudit. This is confirmed by a Fubini–Study metric analysis. A new parameter, proportional to the permanent and called perma-concurrence, is introduced for characterizing the entanglement of a symmetric qudit arising from N qubits. For d = 3 (N = 2), this parameter constitutes an alternative to the concurrence for two qubits. Other examples are given for d = 4 and 5. A connection between Majorana stars and zeros of a Bargmann function for qudits closes this article. Full article
(This article belongs to the Special Issue Entropy and Information in the Foundation of Quantum Physics)
Open AccessArticle Entropy Production on the Gravity-Driven Flow with Free Surface Down an Inclined Plane Subjected to Constant Temperature
Entropy 2018, 20(4), 293; doi:10.3390/e20040293
Received: 22 March 2018 / Revised: 13 April 2018 / Accepted: 16 April 2018 / Published: 17 April 2018
PDF Full-text (3077 KB) | HTML Full-text | XML Full-text
Abstract
The long-wave approximation of a falling film down an inclined plane with constant temperature is used to investigate the volumetric averaged entropy production. The velocity and temperature fields are numerically computed by the evolution equation at the deformable free interface. The dynamics of
[...] Read more.
The long-wave approximation of a falling film down an inclined plane with constant temperature is used to investigate the volumetric averaged entropy production. The velocity and temperature fields are numerically computed by the evolution equation at the deformable free interface. The dynamics of a falling film have an important role in the entropy production. When the layer shows an unstable evolution, the entropy production by fluid friction is much larger than that of the film with a stable flat interface. As the heat transfers actively from the free surface to the ambient air, the temperature gradient inside flowing films becomes large and the entropy generation by heat transfer increases. The contribution of fluid friction on the volumetric averaged entropy production is larger than that of heat transfer at moderate and high viscous dissipation parameters. Full article
(This article belongs to the Special Issue Entropy Production in Turbulent Flow)
Open AccessFeature PaperArticle A Lenient Causal Arrow of Time?
Entropy 2018, 20(4), 294; doi:10.3390/e20040294
Received: 29 March 2018 / Revised: 13 April 2018 / Accepted: 15 April 2018 / Published: 18 April 2018
PDF Full-text (586 KB) | HTML Full-text | XML Full-text
Abstract
One of the basic assumptions underlying Bell’s theorem is the causal arrow of time, having to do with temporal order rather than spatial separation. Nonetheless, the physical assumptions regarding causality are seldom studied in this context, and often even go unmentioned, in stark
[...] Read more.
One of the basic assumptions underlying Bell’s theorem is the causal arrow of time, having to do with temporal order rather than spatial separation. Nonetheless, the physical assumptions regarding causality are seldom studied in this context, and often even go unmentioned, in stark contrast with the many different possible locality conditions which have been studied and elaborated upon. In the present work, some retrocausal toy-models which reproduce the predictions of quantum mechanics for Bell-type correlations are reviewed. It is pointed out that a certain toy-model which is ostensibly superdeterministic—based on denying the free-variable status of some of quantum mechanics’ input parameters—actually contains within it a complete retrocausal toy-model. Occam’s razor thus indicates that the superdeterministic point of view is superfluous. A challenge is to generalize the retrocausal toy-models to a full theory—a reformulation of quantum mechanics—in which the standard causal arrow of time would be replaced by a more lenient one: an arrow of time applicable only to macroscopically-available information. In discussing such a reformulation, one finds that many of the perplexing features of quantum mechanics could arise naturally, especially in the context of stochastic theories. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle A Novel Algorithm to Improve Digital Chaotic Sequence Complexity through CCEMD and PE
Entropy 2018, 20(4), 295; doi:10.3390/e20040295
Received: 18 March 2018 / Revised: 10 April 2018 / Accepted: 12 April 2018 / Published: 18 April 2018
PDF Full-text (4821 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a three-dimensional chaotic system with a hidden attractor is introduced. The complex dynamic behaviors of the system are analyzed with a Poincaré cross section, and the equilibria and initial value sensitivity are analyzed by the method of numerical simulation. Further,
[...] Read more.
In this paper, a three-dimensional chaotic system with a hidden attractor is introduced. The complex dynamic behaviors of the system are analyzed with a Poincaré cross section, and the equilibria and initial value sensitivity are analyzed by the method of numerical simulation. Further, we designed a new algorithm based on complementary ensemble empirical mode decomposition (CEEMD) and permutation entropy (PE) that can effectively enhance digital chaotic sequence complexity. In addition, an image encryption experiment was performed with post-processing of the chaotic binary sequences by the new algorithm. The experimental results show good performance of the chaotic binary sequence. Full article
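Permutation entropy, one of the two ingredients of the post-processing algorithm, is straightforward to compute; the sketch below is the standard Bandt–Pompe measure with illustrative defaults for the order and delay, not the authors' full CEEMD-based pipeline.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Standard permutation entropy (Bandt–Pompe) of a 1-D sequence."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Ordinal pattern of each embedded vector, encoded as a single integer.
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    codes = (patterns * (order ** np.arange(order))).sum(axis=1)
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    pe = -np.sum(p * np.log2(p))
    return pe / np.log2(factorial(order)) if normalize else pe
```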
Open AccessArticle Image Clustering with Optimization Algorithms and Color Space
Entropy 2018, 20(4), 296; doi:10.3390/e20040296
Received: 19 March 2018 / Revised: 13 April 2018 / Accepted: 15 April 2018 / Published: 18 April 2018
PDF Full-text (30445 KB) | HTML Full-text | XML Full-text
Abstract
In image clustering, it is desired that pixels assigned to the same class be the same or similar. In other words, the homogeneity of a cluster must be high. In gray scale image segmentation, this goal is achieved by increasing the
[...] Read more.
In image clustering, it is desired that pixels assigned to the same class be the same or similar. In other words, the homogeneity of a cluster must be high. In gray scale image segmentation, this goal is achieved by increasing the number of thresholds. However, the determination of multiple thresholds is a challenging issue. Moreover, conventional thresholding algorithms cannot be used directly in color image segmentation. In this study, a new color image clustering algorithm with multilevel thresholding is presented, and it is shown how multilevel thresholding techniques can be used for color image clustering. Initially, threshold selection techniques such as the Otsu and Kapur methods were employed for each color channel separately. The objective functions of both approaches were integrated with the forest optimization algorithm (FOA) and the particle swarm optimization (PSO) algorithm. In the next stage, the thresholds determined by the optimization algorithms were used to divide the color space into small cubes or prisms. Each sub-cube or prism created in the color space was evaluated as a cluster. As the volume of the prisms affects the homogeneity of the clusters created, multiple thresholds were employed to reduce the sizes of the sub-cubes. The performance of the proposed method was tested with different images. The results obtained were more efficient than those of conventional methods. Full article
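A minimal sketch of the thresholds-to-prisms step is given below, using a single Otsu threshold per channel so that the RGB cube is cut into 2 x 2 x 2 = 8 prisms; the multilevel thresholds optimized with FOA/PSO in the paper are not reproduced here.

```python
import numpy as np

def otsu_threshold(channel):
    """Classic Otsu threshold for a single uint8 channel."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))         # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))          # level maximizing between-class variance

def color_prism_clusters(img):
    """Assign each pixel to one of the 8 prisms cut by one Otsu threshold per channel."""
    labels = np.zeros(img.shape[:2], dtype=int)
    for c in range(3):
        t = otsu_threshold(img[..., c])
        labels = labels * 2 + (img[..., c] > t).astype(int)
    return labels   # values 0..7, one label per sub-cube of the RGB color space
```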
Open AccessArticle Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
Entropy 2018, 20(4), 297; doi:10.3390/e20040297
Received: 10 July 2017 / Revised: 6 April 2018 / Accepted: 10 April 2018 / Published: 18 April 2018
PDF Full-text (529 KB) | HTML Full-text | XML Full-text
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The
[...] Read more.
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
Open AccessFeature PaperArticle Calculation of Configurational Entropy in Complex Landscapes
Entropy 2018, 20(4), 298; doi:10.3390/e20040298
Received: 22 December 2017 / Revised: 4 April 2018 / Accepted: 11 April 2018 / Published: 19 April 2018
PDF Full-text (2449 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Entropy and the second law of thermodynamics are fundamental concepts that underlie all natural processes and patterns. Recent research has shown how the entropy of a landscape mosaic can be calculated using the Boltzmann equation, with the entropy of a lattice mosaic equal
[...] Read more.
Entropy and the second law of thermodynamics are fundamental concepts that underlie all natural processes and patterns. Recent research has shown how the entropy of a landscape mosaic can be calculated using the Boltzmann equation, with the entropy of a lattice mosaic equal to the logarithm of the number of ways a lattice with a given dimensionality and number of classes can be arranged to produce the same total amount of edge between cells of different classes. However, that work also seemed to suggest that the feasibility of applying this method to real landscapes was limited due to intractably large numbers of possible arrangements of raster cells in large landscapes. Here I extend that work by showing that: (1) the proportion of arrangements rather than the number with a given amount of edge length provides a means to calculate unbiased relative configurational entropy, obviating the need to compute all possible configurations of a landscape lattice; (2) the edge lengths of randomized landscape mosaics are normally distributed, following the central limit theorem; and (3) given this normal distribution it is possible to fit parametric probability density functions to estimate the expected proportion of randomized configurations that have any given edge length, enabling the calculation of configurational entropy on any landscape regardless of size or number of classes. (4) I evaluate the boundary limits of this normal approximation for small landscapes with a small proportion of a minority class and show that it holds under all realistic landscape conditions. I further (5) demonstrate that this relationship holds for a sample of real landscapes that vary in size, patch richness, and evenness of area in each cover type, and (6) I show that the mean and standard deviation of the normally distributed edge lengths can be predicted nearly perfectly as a function of the size, patch richness and diversity of a landscape. Finally, (7) I show that the configurational entropy of a landscape is highly related to the dimensionality of the landscape, the number of cover classes, the evenness of landscape composition across classes, and landscape heterogeneity. These advances provide a means for researchers to directly estimate the frequency distribution of all possible macrostates of any observed landscape, and then directly calculate the relative configurational entropy of the observed macrostate, and to understand the ecological meaning of different amounts of configurational entropy. These advances enable scientists to take configurational entropy from a concept to an applied tool to measure and compare the disorder of real landscapes with an objective and unbiased measure based on entropy and the second law. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)
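A compact sketch of the randomization-and-fit workflow described above, assuming 4-neighbour raster adjacencies, random permutations of the cells as the microstate ensemble, and the log of the fitted normal density at the observed edge length as the (relative) score; the paper's exact estimator may differ.

```python
import numpy as np

def unlike_edge_length(grid):
    """Total number of 4-neighbour adjacencies between cells of different classes."""
    return int(np.sum(grid[:, :-1] != grid[:, 1:]) + np.sum(grid[:-1, :] != grid[1:, :]))

def relative_configurational_entropy(grid, n_random=2000, seed=None):
    """Fit a normal to the edge lengths of randomized mosaics with the same composition,
    then score the observed mosaic by the log-density of its edge length (assumption)."""
    rng = np.random.default_rng(seed)
    flat = grid.ravel()
    edges = np.array([unlike_edge_length(rng.permutation(flat).reshape(grid.shape))
                      for _ in range(n_random)], dtype=float)
    mu, sigma = edges.mean(), edges.std(ddof=1)
    observed = unlike_edge_length(grid)
    log_density = -0.5 * ((observed - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return log_density, observed, (mu, sigma)
```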
Open AccessArticle Quantum Nonlocality and Quantum Correlations in the Stern–Gerlach Experiment
Entropy 2018, 20(4), 299; doi:10.3390/e20040299
Received: 27 February 2018 / Revised: 11 April 2018 / Accepted: 12 April 2018 / Published: 19 April 2018
PDF Full-text (953 KB) | HTML Full-text | XML Full-text
Abstract
The Stern–Gerlach experiment (SGE) is one of the foundational experiments in quantum physics. It has been used in both the teaching and the development of quantum mechanics. However, for various reasons, some of its quantum features and implications are not fully addressed or
[...] Read more.
The Stern–Gerlach experiment (SGE) is one of the foundational experiments in quantum physics. It has been used in both the teaching and the development of quantum mechanics. However, for various reasons, some of its quantum features and implications are not fully addressed or comprehended in the current literature. Hence, the main aim of this paper is to demonstrate that the SGE possesses a quantum nonlocal character that has not been visualized or presented before. Accordingly, to show the nonlocality of the SGE, we calculate the quantum correlations $C(z,\theta)$ by redefining the Banaszek–Wódkiewicz correlation in terms of the Wigner operator, that is, $C(z,\theta) = \langle \Psi | \hat{W}(z,p_z)\, \hat{\sigma}(\theta) | \Psi \rangle$, where $\hat{W}(z,p_z)$ is the Wigner operator, $\hat{\sigma}(\theta)$ is the Pauli spin operator in an arbitrary direction $\theta$, and $|\Psi\rangle$ is the quantum state, given by an entangled state of the external degree of freedom and the eigenstates of the spin. We show that this correlation function for the SGE violates the Clauser–Horne–Shimony–Holt Bell inequality. Thus, this feature of the SGE might be interesting both for the teaching of quantum mechanics and for investigating the phenomenon of quantum nonlocality. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
Open AccessArticle Information-Length Scaling in a Generalized One-Dimensional Lloyd’s Model
Entropy 2018, 20(4), 300; doi:10.3390/e20040300
Received: 27 December 2017 / Revised: 29 March 2018 / Accepted: 8 April 2018 / Published: 20 April 2018
PDF Full-text (333 KB) | HTML Full-text | XML Full-text
Abstract
We perform a detailed numerical study of the localization properties of the eigenfunctions of one-dimensional (1D) tight-binding wires with on-site disorder characterized by long-tailed distributions: for large $\epsilon$, $P(\epsilon) \sim 1/\epsilon^{1+\alpha}$ with $\alpha$
[...] Read more.
We perform a detailed numerical study of the localization properties of the eigenfunctions of one-dimensional (1D) tight-binding wires with on-site disorder characterized by long-tailed distributions: for large $\epsilon$, $P(\epsilon) \sim 1/\epsilon^{1+\alpha}$ with $\alpha \in (0,2]$, where $\epsilon$ are the on-site random energies. Our model serves as a generalization of 1D Lloyd’s model, which corresponds to $\alpha = 1$. In particular, we demonstrate that the information length $\beta$ of the eigenfunctions follows the scaling law $\beta = \gamma x/(1+\gamma x)$, with $x = \xi/L$ and $\gamma \equiv \gamma(\alpha)$. Here, $\xi$ is the eigenfunction localization length (that we extract from the scaling of Landauer’s conductance) and $L$ is the wire length. We also report that for $\alpha = 2$ the properties of the 1D Anderson model are effectively reproduced. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
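A rough numerical sketch along these lines is shown below: the on-site energies are drawn from a Pareto-type law with tail $1/\epsilon^{1+\alpha}$, and the information length is taken as the exponential of the Shannon entropy of the eigenfunction intensities, normalized by the wire length. The disorder distribution, the sign symmetrization, and the normalization constant are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def tight_binding_information_length(L=200, alpha=1.0, hopping=1.0, seed=None):
    """Entropic (information) length of eigenstates of a 1-D tight-binding wire with
    heavy-tailed on-site disorder P(eps) ~ 1/eps^(1+alpha) for large eps."""
    rng = np.random.default_rng(seed)
    eps = rng.pareto(alpha, size=L) + 1.0            # classical Pareto tail ~ eps^-(1+alpha)
    eps *= rng.choice([-1.0, 1.0], size=L)           # symmetrize the disorder (assumption)
    H = (np.diag(eps) + np.diag(np.full(L - 1, hopping), 1)
                      + np.diag(np.full(L - 1, hopping), -1))
    _, vecs = np.linalg.eigh(H)
    p = vecs**2                                      # |psi_n(i)|^2, one column per state
    shannon = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=0)
    beta = np.exp(shannon) / L                       # normalized information length
    return beta.mean()
```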
Open AccessFeature PaperArticle Extended Thermodynamics of Rarefied Polyatomic Gases: 15-Field Theory Incorporating Relaxation Processes of Molecular Rotation and Vibration
Entropy 2018, 20(4), 301; doi:10.3390/e20040301
Received: 3 April 2018 / Revised: 17 April 2018 / Accepted: 17 April 2018 / Published: 20 April 2018
PDF Full-text (402 KB) | HTML Full-text | XML Full-text
Abstract
After summarizing the present status of Rational Extended Thermodynamics (RET) of gases, which is an endeavor to generalize the Navier–Stokes and Fourier (NSF) theory of viscous heat-conducting fluids, we develop the molecular RET theory of rarefied polyatomic gases with 15 independent fields. The
[...] Read more.
After summarizing the present status of Rational Extended Thermodynamics (RET) of gases, which is an endeavor to generalize the Navier–Stokes and Fourier (NSF) theory of viscous heat-conducting fluids, we develop the molecular RET theory of rarefied polyatomic gases with 15 independent fields. The theory is justified, at mesoscopic level, by a generalized Boltzmann equation in which the distribution function depends on two internal variables that take into account the energy exchange among the different molecular modes of a gas, that is, translational, rotational, and vibrational modes. By adopting the generalized Bhatnagar, Gross and Krook (BGK)-type collision term, we derive explicitly the closed system of field equations with the use of the Maximum Entropy Principle (MEP). The NSF theory is derived from the RET theory as a limiting case of small relaxation times via the Maxwellian iteration. The relaxation times introduced in the theory are shown to be related to the shear and bulk viscosities and heat conductivity. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Open AccessArticle Location-Aware Incentive Mechanism for Traffic Offloading in Heterogeneous Networks: A Stackelberg Game Approach
Entropy 2018, 20(4), 302; doi:10.3390/e20040302
Received: 28 February 2018 / Revised: 1 April 2018 / Accepted: 4 April 2018 / Published: 20 April 2018
PDF Full-text (1230 KB) | HTML Full-text | XML Full-text
Abstract
This article investigates the traffic offloading problem in the heterogeneous network. The location of small cells is considered as an important factor in two aspects: the amount of resources they share for offloaded macrocell users and the performance enhancement they bring after offloading.
[...] Read more.
This article investigates the traffic offloading problem in heterogeneous networks. The location of small cells is considered as an important factor in two aspects: the amount of resources they share for offloaded macrocell users and the performance enhancement they bring after offloading. A location-aware incentive mechanism is therefore designed to incentivize small cells to serve macrocell users. Instead of the amount of resources shared, the performance improvement brought to the macro network is taken as the basis of the reward division. Meanwhile, in order to ensure the superiority of small cell users, they are weighted more heavily than macrocell users rather than being treated equally. The offloading problem is formulated as a Stackelberg game where the macrocell base station is the leader and the small cells are followers. The Stackelberg equilibrium of the game is proved to exist and to be unique. It is also proved to be the optimum of the proposed problem. Simulation and numerical results verify the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
Open AccessArticle The Conservation of Average Entropy Production Rate in a Model of Signal Transduction: Information Thermodynamics Based on the Fluctuation Theorem
Entropy 2018, 20(4), 303; doi:10.3390/e20040303
Received: 17 March 2018 / Revised: 18 April 2018 / Accepted: 19 April 2018 / Published: 21 April 2018
PDF Full-text (871 KB) | HTML Full-text | XML Full-text
Abstract
Cell signal transduction is a non-equilibrium process characterized by the reaction cascade. This study aims to quantify and compare signal transduction cascades using a model of signal transduction. The signal duration was found to be linked to step-by-step transition probability, which was determined
[...] Read more.
Cell signal transduction is a non-equilibrium process characterized by the reaction cascade. This study aims to quantify and compare signal transduction cascades using a model of signal transduction. The signal duration was found to be linked to step-by-step transition probability, which was determined using information theory. By applying the fluctuation theorem for reversible signal steps, the transition probability was described using the average entropy production rate. Specifically, when the signal event number during the cascade was maximized, the average entropy production rate was found to be conserved during the entire cascade. This approach provides a quantitative means of analyzing signal transduction and identifies an effective cascade for a signaling network. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle The Power Law Characteristics of Stock Price Jump Intervals: An Empirical and Computational Experimental Study
Entropy 2018, 20(4), 304; doi:10.3390/e20040304
Received: 22 March 2018 / Revised: 17 April 2018 / Accepted: 18 April 2018 / Published: 21 April 2018
PDF Full-text (5984 KB) | HTML Full-text | XML Full-text
Abstract
For the first time, the power law characteristics of stock price jump intervals have been empirically found generally in stock markets. The classical jump-diffusion model is described as the jump-diffusion model with power law (JDMPL). An artificial stock market (ASM) is designed in
[...] Read more.
For the first time, the power law characteristics of stock price jump intervals have been empirically found generally in stock markets. The classical jump-diffusion model is described as the jump-diffusion model with power law (JDMPL). An artificial stock market (ASM) is designed in which an agent’s investment strategies, risk appetite, learning ability, adaptability, and dynamic changes are considered to create a dynamically changing environment. An analysis of the data packets from the ASM simulation indicates that, with the learning mechanism, the ASM reflects the kurtosis and fat-tailed distribution characteristics commonly observed in real markets. Data packets obtained from simulating the ASM for 5010 periods are incorporated into a regression analysis. Analysis results indicate that the JDMPL effectively characterizes the stock price jumps in the market. The results also support the hypothesis that the time interval of stock price jumps is consistent with the power law and indicate that the diversity and dynamic changes of agents’ investment strategies are the reasons for the discontinuity in the changes of stock prices. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
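A minimal way to reproduce this kind of measurement is sketched below: flag jumps as log-returns beyond a threshold, collect the waiting times between jumps, and fit the power-law exponent with the continuous MLE (Hill-type) estimator. The 4-sigma jump rule and the x_min choice are illustrative assumptions, not the paper's detection procedure.

```python
import numpy as np

def jump_interval_exponent(prices, n_sigmas=4.0, x_min=1):
    """Estimate the power-law exponent of waiting times between price jumps.

    Jumps: log-returns exceeding n_sigmas standard deviations (illustrative rule).
    Exponent: continuous MLE (Hill-type) alpha_hat = 1 + n / sum(log(x / x_min)).
    """
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    jumps = np.flatnonzero(np.abs(r) > n_sigmas * r.std())
    intervals = np.diff(jumps)                       # waiting times between jump events
    x = intervals[intervals >= x_min].astype(float)
    alpha_hat = 1.0 + len(x) / np.sum(np.log(x / x_min))
    return alpha_hat, intervals
```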
Open AccessArticle On the Reduction of Computational Complexity of Deep Convolutional Neural Networks
Entropy 2018, 20(4), 305; doi:10.3390/e20040305
Received: 22 January 2018 / Revised: 5 April 2018 / Accepted: 17 April 2018 / Published: 23 April 2018
PDF Full-text (544 KB) | HTML Full-text | XML Full-text
Abstract
Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution
[...] Read more.
Deep convolutional neural networks (ConvNets), which are at the heart of many new emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and underlying implementation of fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speedup of a ConvNet, achieving a ten-fold increase over baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded, robust, and does not require any time-consuming retraining, while still achieving speedups solely from convolutional layers with no loss in baseline accuracy. Full article
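The flavour of fast 1D convolutions can be seen in the classic F(2,3) Toom–Cook/Winograd transform, which produces two outputs of a 3-tap filter with 4 multiplications instead of 6; this is the textbook transform, not necessarily the exact variant proposed in the paper.

```python
import numpy as np

# Transform matrices for F(2,3): two outputs of a 3-tap correlation per 4-sample tile.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs, 4 multiplies."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
print(winograd_f23(d, g))                        # fast result
print([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],        # direct check: same two values
       d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
```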
Open AccessArticle Balancing Non-Equilibrium Driving with Nucleotide Selectivity at Kinetic Checkpoints in Polymerase Fidelity Control
Entropy 2018, 20(4), 306; doi:10.3390/e20040306
Received: 27 February 2018 / Revised: 17 April 2018 / Accepted: 21 April 2018 / Published: 23 April 2018
PDF Full-text (20659 KB) | HTML Full-text | XML Full-text
Abstract
High fidelity gene transcription and replication require kinetic discrimination of nucleotide substrate species by RNA and DNA polymerases under chemical non-equilibrium conditions. It is known that sufficiently large free energy driving force is needed for each polymerization or elongation cycle to maintain far-from-equilibrium
[...] Read more.
High fidelity gene transcription and replication require kinetic discrimination of nucleotide substrate species by RNA and DNA polymerases under chemical non-equilibrium conditions. It is known that a sufficiently large free energy driving force is needed for each polymerization or elongation cycle to keep the system far from equilibrium and achieve low error rates. Considering that each cycle consists of multiple kinetic steps with different transition rates, one expects that the kinetic modulations by polymerases are not evenly conducted at each step. We show that accelerations at different kinetic steps impact the overall elongation characteristics quite differently. In particular, for forward transitions that discriminate cognate and non-cognate nucleotide species and thus serve as kinetic selection checkpoints, the transition can be neither accelerated too much nor retarded too much if low error rates are to be obtained, as a balance is needed between the nucleotide selectivity and the non-equilibrium driving. Such a balance is not the same as the speed-accuracy tradeoff, in which high accuracy is always obtained at the sacrifice of speed. For illustration purposes, we used three-state and five-state models of nucleotide addition in polymerase elongation and show how the non-equilibrium steady-state characteristics change upon variations of the stepwise forward or backward kinetics. Notably, by using the multi-step elongation schemes and parameters from T7 RNA polymerase transcription elongation, we demonstrate that individual transitions serving as selection checkpoints need to proceed at moderate rates in order to sustain the necessary non-equilibrium drives as well as to allow nucleotide selection for optimal error control. We also illustrate why rate-limiting conformational transitions of the enzyme likely play a significant role in error reduction. Full article
(This article belongs to the Section Statistical Mechanics)
Review

Jump to: Editorial, Research, Other

Open AccessReview Methods and Challenges in Shot Boundary Detection: A Review
Entropy 2018, 20(4), 214; doi:10.3390/e20040214
Received: 25 January 2018 / Revised: 18 February 2018 / Accepted: 27 February 2018 / Published: 23 March 2018
PDF Full-text (17694 KB) | HTML Full-text | XML Full-text
Abstract
The recent increase in the number of videos available in cyberspace is due to the availability of multimedia devices, highly developed communication technologies, and low-cost storage devices. These videos are simply stored in databases through text annotation. Content-based video browsing and retrieval are
[...] Read more.
The recent increase in the number of videos available in cyberspace is due to the availability of multimedia devices, highly developed communication technologies, and low-cost storage devices. These videos are simply stored in databases through text annotation. Content-based video browsing and retrieval are inefficient due to the method used to store videos in databases. Video databases are large in size and contain voluminous information, and these characteristics emphasize the need for automated video structure analyses. Shot boundary detection (SBD) is considered a substantial process of video browsing and retrieval. SBD aims to detect transitions and their boundaries between consecutive shots; hence, shots with rich information are used in content-based video indexing and retrieval. This paper presents a review of an extensive set of SBD approaches and their development. The advantages and disadvantages of each approach are comprehensively explored. The developed algorithms are discussed, and challenges and recommendations are presented. Full article
(This article belongs to the Section Complexity)
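One of the simplest SBD baselines covered by such reviews is hard-cut detection from histogram differences between consecutive frames; the sketch below assumes frames are already available as grayscale arrays, and the mean-plus-k-sigma threshold rule is an illustrative choice.

```python
import numpy as np

def hard_cut_boundaries(frames, bins=64, k=3.0):
    """Detect hard cuts from histogram differences between consecutive frames.

    frames: iterable of 2-D grayscale arrays. A boundary is declared where the
    L1 histogram difference exceeds mean + k * std of all differences (simple rule).
    """
    hists = [np.histogram(f, bins=bins, range=(0, 255), density=True)[0] for f in frames]
    diffs = np.array([np.abs(h2 - h1).sum() for h1, h2 in zip(hists, hists[1:])])
    threshold = diffs.mean() + k * diffs.std()
    return np.flatnonzero(diffs > threshold) + 1   # index of the first frame of each new shot
```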
Open AccessReview Fluctuations, Finite-Size Effects and the Thermodynamic Limit in Computer Simulations: Revisiting the Spatial Block Analysis Method
Entropy 2018, 20(4), 222; doi:10.3390/e20040222
Received: 1 March 2018 / Revised: 21 March 2018 / Accepted: 22 March 2018 / Published: 24 March 2018
PDF Full-text (899 KB) | HTML Full-text | XML Full-text
Abstract
The spatial block analysis (SBA) method has been introduced to efficiently extrapolate thermodynamic quantities from finite-size computer simulations of a large variety of physical systems. In the particular case of simple liquids and liquid mixtures, by subdividing the simulation box into blocks of
[...] Read more.
The spatial block analysis (SBA) method has been introduced to efficiently extrapolate thermodynamic quantities from finite-size computer simulations of a large variety of physical systems. In the particular case of simple liquids and liquid mixtures, by subdividing the simulation box into blocks of increasing size and calculating volume-dependent fluctuations of the number of particles, it is possible to extrapolate the bulk isothermal compressibility and Kirkwood–Buff integrals in the thermodynamic limit. Only by explicitly including finite-size effects, ubiquitous in computer simulations, into the SBA method can the extrapolation to the thermodynamic limit be achieved. In this review, we discuss two of these finite-size effects in the context of the SBA method, due to (i) the statistical ensemble and (ii) the finite integration domains used in computer simulations. To illustrate the method, we consider prototypical liquids and liquid mixtures described by truncated and shifted Lennard–Jones (TSLJ) potentials. Furthermore, we show some of the most recent developments of the SBA method, in particular its use to calculate chemical potentials of liquids in a wide range of density/concentration conditions. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
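The core of the SBA method, block-wise particle-number fluctuations, can be sketched as follows: the reduced fluctuation chi = (<N^2> - <N>^2)/<N> is computed for blocks of decreasing size and then extrapolated; in the grand-canonical limit it is proportional to rho*k_B*T*kappa_T for a simple liquid. The block counts and the linear-in-1/L extrapolation mentioned in the closing comment are common choices, not necessarily the review's exact protocol.

```python
import numpy as np

def block_number_fluctuations(positions, box, block_counts=(2, 3, 4, 6, 8)):
    """Spatial block analysis: reduced particle-number fluctuations vs. block size.

    positions: (N, 3) array of coordinates in a cubic box of side `box`.
    Returns arrays (block_size, chi) with chi = (<N^2> - <N>^2) / <N> per block size,
    averaged over the blocks of a single configuration (average over frames in practice).
    """
    sizes, chis = [], []
    for n in block_counts:
        edges = np.linspace(0.0, box, n + 1)
        idx = np.clip(np.digitize(positions, edges) - 1, 0, n - 1)   # (N, 3) block indices
        flat = idx[:, 0] * n * n + idx[:, 1] * n + idx[:, 2]
        counts = np.bincount(flat, minlength=n**3).astype(float)
        mean = counts.mean()
        chis.append(((counts**2).mean() - mean**2) / mean)
        sizes.append(box / n)
    return np.array(sizes), np.array(chis)

# Extrapolation sketch: fit chi(Lb) = chi_inf + a / Lb over the larger block sizes and
# read off chi_inf, which is proportional to rho * k_B * T * kappa_T for a simple liquid.
```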
Open AccessReview Software Defined Networks in Wireless Sensor Architectures
Entropy 2018, 20(4), 225; doi:10.3390/e20040225
Received: 30 January 2018 / Revised: 8 March 2018 / Accepted: 19 March 2018 / Published: 26 March 2018
PDF Full-text (2414 KB) | HTML Full-text | XML Full-text
Abstract
Nowadays, different protocols coexist in the Internet to provide services to users. Unfortunately, control decisions and distributed management make networks hard to control. These problems result in inefficient and unpredictable network behaviour. Software Defined Networks (SDN) is a new concept of network
[...] Read more.
Nowadays, different protocols coexist in the Internet to provide services to users. Unfortunately, control decisions and distributed management make networks hard to control. These problems result in inefficient and unpredictable network behaviour. Software Defined Networks (SDN) is a new concept of network architecture. It intends to be more flexible and to simplify network management with respect to traditional architectures. Each of these aspects is possible because of the separation of the control plane (controller) and the data plane (switches) in network devices. OpenFlow is the most common protocol for SDN networks and provides the communication between the control and data planes. Moreover, the advantage of decoupling the control and data planes enables a quick evolution of protocols and their deployment without replacing data plane switches. In this survey, we review the SDN technology, the OpenFlow protocol and their related works. Specifically, we describe technologies such as Wireless Sensor Networks and Wireless Cellular Networks and how SDN can be included within them in order to solve their challenges. We classify the different solutions for each technology according to the problem they address. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)