Table of Contents

Entropy, Volume 20, Issue 4 (April 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: What are the conceptual foundations of thermodynamics? Mathematicians have explored this question [...]
Displaying articles 1-96
Open Access Editorial Information Decomposition of Target Effects from Multi-Source Interactions: Perspectives on Previous, Current and Future Work
Entropy 2018, 20(4), 307; https://doi.org/10.3390/e20040307
Received: 19 April 2018 / Revised: 19 April 2018 / Accepted: 19 April 2018 / Published: 23 April 2018
Cited by 1 | PDF Full-text (386 KB) | HTML Full-text | XML Full-text
Abstract
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted significant attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of the mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue of Entropy on “Information Decomposition of Target Effects from Multi-Source Interactions”, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed and of how they have been interpreted and applied in empirical investigations. We then introduce the articles included in the special issue one by one, categorising them similarly into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches; and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field. Full article
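The synergy concept at the heart of PID can be seen in the canonical XOR example: neither source alone carries mutual information about the target, yet the two together determine it completely. A minimal sketch of the classical mutual-information bookkeeping (not any particular proposed PID measure):

```python
import math
from collections import Counter

def mi(pairs):
    """Mutual information (bits) between the two components of (a, b) samples,
    assuming each listed sample is equally likely."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum(c / n * math.log2(c * n / (pa[a] * pb[b]))
               for (a, b), c in pab.items())

# XOR target with uniform binary sources: Y = X1 xor X2.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
i1 = mi([(x1, y) for x1, x2, y in samples])          # I(X1;Y)
i2 = mi([(x2, y) for x1, x2, y in samples])          # I(X2;Y)
i12 = mi([((x1, x2), y) for x1, x2, y in samples])   # I(X1,X2;Y)
```

Any PID measure must then apportion the 1 bit of joint information into redundant, unique and synergistic atoms; for XOR the whole bit is synergistic.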

Open Access Article Balancing Non-Equilibrium Driving with Nucleotide Selectivity at Kinetic Checkpoints in Polymerase Fidelity Control
Entropy 2018, 20(4), 306; https://doi.org/10.3390/e20040306
Received: 27 February 2018 / Revised: 17 April 2018 / Accepted: 21 April 2018 / Published: 23 April 2018
PDF Full-text (20711 KB) | HTML Full-text | XML Full-text
Abstract
High-fidelity gene transcription and replication require kinetic discrimination of nucleotide substrate species by RNA and DNA polymerases under chemical non-equilibrium conditions. It is known that a sufficiently large free-energy driving force is needed in each polymerization or elongation cycle to maintain far-from-equilibrium conditions and achieve low error rates. Considering that each cycle consists of multiple kinetic steps with different transition rates, one expects that the kinetic modulations by polymerases are not evenly conducted at each step. We show that accelerations at different kinetic steps affect the overall elongation characteristics quite differently. In particular, for forward transitions that discriminate cognate and non-cognate nucleotide species and thus serve as kinetic selection checkpoints, the transition can be neither accelerated too much nor retarded too much if low error rates are to be obtained, as a balance is needed between nucleotide selectivity and non-equilibrium driving. Such a balance is not the same as the speed-accuracy tradeoff, in which high accuracy is always obtained at the sacrifice of speed. For illustration, we use three-state and five-state models of nucleotide addition in polymerase elongation and show how the non-equilibrium steady-state characteristics change upon variations in stepwise forward or backward kinetics. Notably, using multi-step elongation schemes and parameters from T7 RNA polymerase transcription elongation, we demonstrate that individual transitions serving as selection checkpoints need to proceed at moderate rates in order to sustain the necessary non-equilibrium drives and to allow nucleotide selection for optimal error control. We also illustrate why rate-limiting conformational transitions of the enzyme likely play a significant role in error reduction. Full article
(This article belongs to the Section Statistical Mechanics)

Open Access Article On the Reduction of Computational Complexity of Deep Convolutional Neural Networks
Entropy 2018, 20(4), 305; https://doi.org/10.3390/e20040305
Received: 22 January 2018 / Revised: 5 April 2018 / Accepted: 17 April 2018 / Published: 23 April 2018
PDF Full-text (574 KB) | HTML Full-text | XML Full-text
Abstract
Deep convolutional neural networks (ConvNets), which are at the heart of many emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, achieving accuracy often implies significant computational costs, limiting deployability. In modern ConvNets it is typical for the convolution layers to consume the vast majority of computational resources during inference. This has made the acceleration of these layers an important research area in academia and industry. In this paper, we examine the effects of co-optimizing the internal structures of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a big impact on the overall speed of a ConvNet, achieving a ten-fold speedup over the baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets using the Toom–Cook algorithm. We show that our proposed scheme is mathematically well-grounded and robust, and does not require any time-consuming retraining, while still achieving speedups solely from the convolutional layers with no loss in baseline accuracy. Full article
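The Toom–Cook (Winograd) family of fast convolutions trades multiplications for additions. As an illustration of the idea only (the standard F(2,3) scheme, not necessarily the exact variant developed in the paper), two outputs of a 3-tap 1D filter can be computed with 4 multiplications instead of the 6 a direct computation needs:

```python
def winograd_f23(d, g):
    """F(2,3): two outputs of a 3-tap correlation from a 4-element input tile,
    using 4 multiplies (the m1..m4 products) instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Reference sliding-window correlation: y[i] = sum_k d[i+k] * g[k]."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]
```

The filter-side factors (g0 + g1 + g2)/2 etc. are computed once per filter in practice, so the per-tile cost is dominated by the four products.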

Open Access Article The Power Law Characteristics of Stock Price Jump Intervals: An Empirical and Computational Experimental Study
Entropy 2018, 20(4), 304; https://doi.org/10.3390/e20040304
Received: 22 March 2018 / Revised: 17 April 2018 / Accepted: 18 April 2018 / Published: 21 April 2018
Cited by 1 | PDF Full-text (6106 KB) | HTML Full-text | XML Full-text
Abstract
For the first time, the power-law characteristics of stock price jump intervals are found empirically to hold generally across stock markets. The classical jump-diffusion model is extended to a jump-diffusion model with power law (JDMPL). An artificial stock market (ASM) is designed in which agents’ investment strategies, risk appetite, learning ability, adaptability, and dynamic changes are considered, creating a dynamically changing environment. An analysis of data packets from the ASM simulation indicates that, with the learning mechanism, the ASM reflects the kurtosis and fat-tailed distribution characteristics commonly observed in real markets. Data packets obtained from simulating the ASM for 5010 periods are incorporated into a regression analysis. The results indicate that the JDMPL effectively characterizes the stock price jumps in the market. They also support the hypothesis that the time intervals of stock price jumps follow a power law, and indicate that the diversity and dynamic changes of agents’ investment strategies are the reasons for the discontinuity in the changes of stock prices. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
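To illustrate the kind of power-law fit involved, here is a sketch (not the paper's regression procedure) using the standard continuous maximum-likelihood estimator for the tail exponent, checked on synthetic inter-jump intervals drawn from a known power law; the exponent 2.5 and cutoff are arbitrary illustration values:

```python
import math
import random

def powerlaw_mle(samples, xmin):
    """Continuous power-law exponent MLE over the tail x >= xmin:
    alpha_hat = 1 + n / sum(ln(x_i / xmin))."""
    tail = [x for x in samples if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic intervals from P(x) ~ x^(-alpha) via inverse-CDF sampling:
# X = xmin * U^(-1/(alpha - 1)) with U uniform on (0, 1].
rng = random.Random(1)
alpha, xmin = 2.5, 1.0
samples = [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(20000)]
alpha_hat = powerlaw_mle(samples, xmin)
```

With 20,000 samples the estimate lands well within a few standard errors of the true exponent.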

Open Access Article The Conservation of Average Entropy Production Rate in a Model of Signal Transduction: Information Thermodynamics Based on the Fluctuation Theorem
Entropy 2018, 20(4), 303; https://doi.org/10.3390/e20040303
Received: 17 March 2018 / Revised: 18 April 2018 / Accepted: 19 April 2018 / Published: 21 April 2018
Cited by 2 | PDF Full-text (871 KB) | HTML Full-text | XML Full-text
Abstract
Cell signal transduction is a non-equilibrium process characterized by the reaction cascade. This study aims to quantify and compare signal transduction cascades using a model of signal transduction. The signal duration was found to be linked to step-by-step transition probability, which was determined using information theory. By applying the fluctuation theorem for reversible signal steps, the transition probability was described using the average entropy production rate. Specifically, when the signal event number during the cascade was maximized, the average entropy production rate was found to be conserved during the entire cascade. This approach provides a quantitative means of analyzing signal transduction and identifies an effective cascade for a signaling network. Full article
(This article belongs to the Section Information Theory)

Open Access Article Location-Aware Incentive Mechanism for Traffic Offloading in Heterogeneous Networks: A Stackelberg Game Approach
Entropy 2018, 20(4), 302; https://doi.org/10.3390/e20040302
Received: 28 February 2018 / Revised: 1 April 2018 / Accepted: 4 April 2018 / Published: 20 April 2018
PDF Full-text (1230 KB) | HTML Full-text | XML Full-text
Abstract
This article investigates the traffic offloading problem in heterogeneous networks. The location of small cells is considered an important factor in two respects: the amount of resources they share for offloaded macrocell users and the performance enhancement they bring after offloading. A location-aware incentive mechanism is therefore designed to incentivize small cells to serve macrocell users. The reward is divided on the basis of the performance improvement brought to the macro network rather than the amount of resources shared. Meanwhile, to preserve the priority of small-cell users, they are weighted more heavily than macrocell users rather than being treated equally. The offloading problem is formulated as a Stackelberg game in which the macrocell base station is the leader and the small cells are followers. The Stackelberg equilibrium of the game is proved to exist and to be unique, and is shown to be the optimum of the proposed problem. Simulation and numerical results verify the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)

Open Access Feature Paper Article Extended Thermodynamics of Rarefied Polyatomic Gases: 15-Field Theory Incorporating Relaxation Processes of Molecular Rotation and Vibration
Entropy 2018, 20(4), 301; https://doi.org/10.3390/e20040301
Received: 3 April 2018 / Revised: 17 April 2018 / Accepted: 17 April 2018 / Published: 20 April 2018
PDF Full-text (402 KB) | HTML Full-text | XML Full-text
Abstract
After summarizing the present status of Rational Extended Thermodynamics (RET) of gases, which is an endeavor to generalize the Navier–Stokes and Fourier (NSF) theory of viscous heat-conducting fluids, we develop the molecular RET theory of rarefied polyatomic gases with 15 independent fields. The theory is justified, at the mesoscopic level, by a generalized Boltzmann equation in which the distribution function depends on two internal variables that take into account the energy exchange among the different molecular modes of a gas, that is, the translational, rotational, and vibrational modes. By adopting a generalized Bhatnagar–Gross–Krook (BGK)-type collision term, we derive explicitly the closed system of field equations with the use of the Maximum Entropy Principle (MEP). The NSF theory is derived from the RET theory as a limiting case of small relaxation times via the Maxwellian iteration. The relaxation times introduced in the theory are shown to be related to the shear and bulk viscosities and the heat conductivity. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Open Access Article Information-Length Scaling in a Generalized One-Dimensional Lloyd’s Model
Entropy 2018, 20(4), 300; https://doi.org/10.3390/e20040300
Received: 27 December 2017 / Revised: 29 March 2018 / Accepted: 8 April 2018 / Published: 20 April 2018
PDF Full-text (372 KB) | HTML Full-text | XML Full-text
Abstract
We perform a detailed numerical study of the localization properties of the eigenfunctions of one-dimensional (1D) tight-binding wires with on-site disorder characterized by long-tailed distributions: for large ϵ, P(ϵ) ∝ 1/ϵ^(1+α) with α ∈ (0, 2], where ϵ are the on-site random energies. Our model serves as a generalization of the 1D Lloyd model, which corresponds to α = 1. In particular, we demonstrate that the information length β of the eigenfunctions follows the scaling law β = γx/(1 + γx), with x = ξ/L and γ ≡ γ(α). Here, ξ is the eigenfunction localization length (which we extract from the scaling of Landauer’s conductance) and L is the wire length. We also report that for α = 2 the properties of the 1D Anderson model are effectively reproduced. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
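The reported scaling law can be written down directly; a small sketch showing its two limits (β ≈ γx for x ≪ 1, β → 1 for x ≫ 1), with γ treated here as a free parameter:

```python
def info_length_scaling(x, gamma):
    """Scaling law beta = gamma*x / (1 + gamma*x), where x = xi/L is the
    ratio of the eigenfunction localization length to the wire length."""
    return gamma * x / (1 + gamma * x)
```

The law interpolates monotonically between fully localized (β → 0) and fully extended (β → 1) eigenfunctions, with γ(α) the only disorder-dependent input.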

Open Access Article Quantum Nonlocality and Quantum Correlations in the Stern–Gerlach Experiment
Entropy 2018, 20(4), 299; https://doi.org/10.3390/e20040299
Received: 27 February 2018 / Revised: 11 April 2018 / Accepted: 12 April 2018 / Published: 19 April 2018
PDF Full-text (953 KB) | HTML Full-text | XML Full-text
Abstract
The Stern–Gerlach experiment (SGE) is one of the foundational experiments in quantum physics. It has been used in both the teaching and the development of quantum mechanics. However, for various reasons, some of its quantum features and implications are not fully addressed or comprehended in the current literature. Hence, the main aim of this paper is to demonstrate that the SGE possesses a quantum nonlocal character that has not been visualized or presented before. Accordingly, to exhibit the nonlocality in the SGE, we calculate the quantum correlations C(z, θ) by redefining the Banaszek–Wódkiewicz correlation in terms of the Wigner operator, that is, C(z, θ) = ⟨Ψ| Ŵ(z, p_z) σ̂(θ) |Ψ⟩, where Ŵ(z, p_z) is the Wigner operator, σ̂(θ) is the Pauli spin operator in an arbitrary direction θ, and |Ψ⟩ is the quantum state given by an entangled state of the external degree of freedom and the eigenstates of the spin. We show that this correlation function for the SGE violates the Clauser–Horne–Shimony–Holt Bell inequality. This feature of the SGE might thus be of interest both for the teaching of quantum mechanics and for investigating the phenomenon of quantum nonlocality. Full article
(This article belongs to the Special Issue Quantum Nonlocality)

Open Access Feature Paper Article Calculation of Configurational Entropy in Complex Landscapes
Entropy 2018, 20(4), 298; https://doi.org/10.3390/e20040298
Received: 22 December 2017 / Revised: 4 April 2018 / Accepted: 11 April 2018 / Published: 19 April 2018
Cited by 1 | PDF Full-text (2493 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Entropy and the second law of thermodynamics are fundamental concepts that underlie all natural processes and patterns. Recent research has shown how the entropy of a landscape mosaic can be calculated using the Boltzmann equation, with the entropy of a lattice mosaic equal to the logarithm of the number of ways a lattice with a given dimensionality and number of classes can be arranged to produce the same total amount of edge between cells of different classes. However, that work also seemed to suggest that the feasibility of applying this method to real landscapes was limited, owing to the intractably large number of possible arrangements of raster cells in large landscapes. Here I extend that work by showing that: (1) the proportion of arrangements, rather than the number, with a given amount of edge length provides a means to calculate unbiased relative configurational entropy, obviating the need to compute all possible configurations of a landscape lattice; (2) the edge lengths of randomized landscape mosaics are normally distributed, following the central limit theorem; and (3) given this normal distribution, it is possible to fit parametric probability density functions to estimate the expected proportion of randomized configurations that have any given edge length, enabling the calculation of configurational entropy on any landscape regardless of size or number of classes. (4) I evaluate the boundary limits of this normal approximation for small landscapes with a small proportion of a minority class and show that it holds under all realistic landscape conditions. (5) I further demonstrate that this relationship holds for a sample of real landscapes that vary in size, patch richness, and evenness of area in each cover type, and (6) I show that the mean and standard deviation of the normally distributed edge lengths can be predicted nearly perfectly as a function of the size, patch richness and diversity of a landscape. Finally, (7) I show that the configurational entropy of a landscape is strongly related to the dimensionality of the landscape, the number of cover classes, the evenness of landscape composition across classes, and landscape heterogeneity. These advances provide a means for researchers to directly estimate the frequency distribution of all possible macrostates of any observed landscape, to directly calculate the relative configurational entropy of the observed macrostate, and to understand the ecological meaning of different amounts of configurational entropy. They enable scientists to take configurational entropy from a concept to an applied tool for measuring and comparing the disorder of real landscapes with an objective and unbiased measure based on entropy and the second law. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)
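The core computational move, locating an observed landscape's edge length within the approximately normal distribution of edge lengths over random arrangements of the same cells, can be sketched as follows. This is an illustrative toy, not the author's implementation: the 8x8 two-class grid, the 4-neighbour edge rule, and the sample size are all assumptions.

```python
import math
import random

def edge_length(grid):
    """Total edge: number of 4-neighbour cell pairs whose classes differ."""
    n = len(grid)
    e = 0
    for i in range(n):
        for j in range(n):
            if i + 1 < n and grid[i][j] != grid[i + 1][j]:
                e += 1
            if j + 1 < n and grid[i][j] != grid[i][j + 1]:
                e += 1
    return e

def randomized(grid, rng):
    """Shuffle the same cells into a random arrangement (same composition)."""
    cells = [c for row in grid for c in row]
    rng.shuffle(cells)
    n = len(grid)
    return [cells[i * n:(i + 1) * n] for i in range(n)]

# Hypothetical observed 8x8 landscape: a maximally clumped two-class mosaic.
n = 8
obs = [[1 if j < n // 2 else 0 for j in range(n)] for i in range(n)]
e_obs = edge_length(obs)

# Sample the edge-length distribution over random arrangements and fit a normal.
rng = random.Random(0)
edges = [edge_length(randomized(obs, rng)) for _ in range(2000)]
mu = sum(edges) / len(edges)
sigma = (sum((e - mu) ** 2 for e in edges) / len(edges)) ** 0.5

# Normal-approximated proportion of arrangements with the observed edge length;
# its logarithm serves as a relative configurational entropy.
p = math.exp(-((e_obs - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
s_rel = math.log(p)
```

The clumped mosaic has far less edge than a typical random arrangement, so it sits deep in the lower tail of the fitted normal and receives a strongly negative relative entropy, i.e. a highly ordered macrostate.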

Open Access Article Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
Entropy 2018, 20(4), 297; https://doi.org/10.3390/e20040297
Received: 10 July 2017 / Revised: 6 April 2018 / Accepted: 10 April 2018 / Published: 18 April 2018
Cited by 1 | PDF Full-text (529 KB) | HTML Full-text | XML Full-text
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
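The specificity/ambiguity split of pointwise mutual information has a compact form; a minimal sketch of that identity alone (the lattices and redundancy measures built on top of it are not reproduced here):

```python
import math

def pointwise_decomposition(p_s, p_s_given_t):
    """Split the pointwise mutual information i(s;t) = log2(p(s|t)/p(s)) into
    its two unsigned entropic parts: the specificity h(s) = -log2 p(s) and the
    ambiguity h(s|t) = -log2 p(s|t), so that i(s;t) = specificity - ambiguity."""
    specificity = -math.log2(p_s)
    ambiguity = -math.log2(p_s_given_t)
    return specificity, ambiguity

# Example: a source value with prior probability 1/4 that becomes certain once
# the target is seen: i = 2 - 0 = 2 bits, all specificity and no ambiguity.
spec, amb = pointwise_decomposition(0.25, 1.0)
```

Because both parts are non-negative, each can be decomposed over its own redundancy lattice, sidestepping the sign problems of pointwise mutual information itself.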

Open Access Article Image Clustering with Optimization Algorithms and Color Space
Entropy 2018, 20(4), 296; https://doi.org/10.3390/e20040296
Received: 19 March 2018 / Revised: 13 April 2018 / Accepted: 15 April 2018 / Published: 18 April 2018
Cited by 1 | PDF Full-text (30472 KB) | HTML Full-text | XML Full-text
Abstract
In image clustering, it is desirable that pixels assigned to the same class be identical or similar; in other words, the homogeneity of a cluster must be high. In grayscale image segmentation, this goal is achieved by increasing the number of thresholds, but determining multiple thresholds is a challenging problem, and conventional thresholding algorithms cannot be applied directly to color image segmentation. In this study, a new color image clustering algorithm with multilevel thresholding is presented, and it is shown how multilevel thresholding techniques can be used for color image clustering. Initially, threshold selection techniques such as the Otsu and Kapur methods were employed for each color channel separately. The objective functions of both approaches were integrated with the forest optimization algorithm (FOA) and the particle swarm optimization (PSO) algorithm. In the next stage, the thresholds determined by the optimization algorithms were used to divide the color space into small cubes or prisms, each of which was treated as a cluster. As the volume of the prisms affects the homogeneity of the resulting clusters, multiple thresholds were employed to reduce the sizes of the sub-cubes. The performance of the proposed method was tested on different images, and the results obtained were more efficient than those of conventional methods. Full article
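As one concrete ingredient, the single-threshold Otsu criterion (choose the threshold that maximizes the between-class variance of the intensity histogram) can be sketched as below; the paper couples such objectives, per colour channel and with several thresholds, to FOA/PSO search, which is not reproduced here:

```python
def otsu_threshold(pixels, levels=256):
    """Single Otsu threshold over integer intensities in [0, levels):
    returns t maximizing the between-class variance, with class 0 = values <= t."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels - 1):
        w0 += hist[t]              # class-0 pixel count
        if w0 == 0:
            continue
        w1 = total - w0            # class-1 pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Running this once per RGB channel yields three thresholds that cut the colour cube into 2x2x2 sub-cubes; adding thresholds per channel refines the prisms, which is where the optimization algorithms come in.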

Open Access Article A Novel Algorithm to Improve Digital Chaotic Sequence Complexity through CCEMD and PE
Entropy 2018, 20(4), 295; https://doi.org/10.3390/e20040295
Received: 18 March 2018 / Revised: 10 April 2018 / Accepted: 12 April 2018 / Published: 18 April 2018
Cited by 1 | PDF Full-text (4904 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a three-dimensional chaotic system with a hidden attractor is introduced. The complex dynamic behaviors of the system are analyzed with a Poincaré cross section, and the equilibria and initial value sensitivity are analyzed by the method of numerical simulation. Further, we designed a new algorithm based on complementary ensemble empirical mode decomposition (CEEMD) and permutation entropy (PE) that can effectively enhance digital chaotic sequence complexity. In addition, an image encryption experiment was performed with post-processing of the chaotic binary sequences by the new algorithm. The experimental results show good performance of the chaotic binary sequence. Full article
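Permutation entropy, one half of the post-processing pipeline, has a compact standard (Bandt–Pompe) form; a sketch of that measure alone, with the CEEMD stage and the paper's exact parameters omitted:

```python
import math
from collections import Counter

def permutation_entropy(x, order=3):
    """Normalized permutation entropy: Shannon entropy of the distribution of
    ordinal patterns (argsort of each length-`order` window), scaled to [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: x[i + k]))
        for i in range(len(x) - order + 1)
    )
    n = sum(patterns.values())
    h = -sum(c / n * math.log2(c / n) for c in patterns.values())
    return h / math.log2(math.factorial(order))

# A monotone ramp exhibits a single ordinal pattern (zero complexity), while a
# chaotic logistic-map orbit spreads over many patterns (high complexity).
ramp = list(range(200))
orbit = [0.3]
for _ in range(2000):
    orbit.append(4 * orbit[-1] * (1 - orbit[-1]))
```

Higher values indicate a more disordered, less predictable sequence, which is the sense in which the proposed algorithm "enhances complexity" of the chaotic binary output.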

Open Access Feature Paper Article A Lenient Causal Arrow of Time?
Entropy 2018, 20(4), 294; https://doi.org/10.3390/e20040294
Received: 29 March 2018 / Revised: 13 April 2018 / Accepted: 15 April 2018 / Published: 18 April 2018
PDF Full-text (616 KB) | HTML Full-text | XML Full-text
Abstract
One of the basic assumptions underlying Bell’s theorem is the causal arrow of time, having to do with temporal order rather than spatial separation. Nonetheless, the physical assumptions regarding causality are seldom studied in this context, and often even go unmentioned, in stark contrast with the many different possible locality conditions which have been studied and elaborated upon. In the present work, some retrocausal toy-models which reproduce the predictions of quantum mechanics for Bell-type correlations are reviewed. It is pointed out that a certain toy-model which is ostensibly superdeterministic—based on denying the free-variable status of some of quantum mechanics’ input parameters—actually contains within it a complete retrocausal toy-model. Occam’s razor thus indicates that the superdeterministic point of view is superfluous. A challenge is to generalize the retrocausal toy-models to a full theory—a reformulation of quantum mechanics—in which the standard causal arrow of time would be replaced by a more lenient one: an arrow of time applicable only to macroscopically-available information. In discussing such a reformulation, one finds that many of the perplexing features of quantum mechanics could arise naturally, especially in the context of stochastic theories. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

Open Access Article Entropy Production on the Gravity-Driven Flow with Free Surface Down an Inclined Plane Subjected to Constant Temperature
Entropy 2018, 20(4), 293; https://doi.org/10.3390/e20040293
Received: 22 March 2018 / Revised: 13 April 2018 / Accepted: 16 April 2018 / Published: 17 April 2018
PDF Full-text (3077 KB) | HTML Full-text | XML Full-text
Abstract
The long-wave approximation of a film falling down an inclined plane at constant temperature is used to investigate the volume-averaged entropy production. The velocity and temperature fields are computed numerically from the evolution equation for the deformable free interface. The dynamics of the falling film play an important role in the entropy production: when the layer shows an unstable evolution, the entropy production by fluid friction is much larger than that of a film with a stable flat interface. As heat is transferred actively from the free surface to the ambient air, the temperature gradient inside the flowing film becomes large and the entropy generation by heat transfer increases. The contribution of fluid friction to the volume-averaged entropy production is larger than that of heat transfer at moderate and high viscous dissipation parameters. Full article
(This article belongs to the Special Issue Entropy Production in Turbulent Flow)

Open Access Article Generalized Weyl–Heisenberg Algebra, Qudit Systems and Entanglement Measure of Symmetric States via Spin Coherent States
Entropy 2018, 20(4), 292; https://doi.org/10.3390/e20040292
Received: 26 March 2018 / Revised: 13 April 2018 / Accepted: 13 April 2018 / Published: 17 April 2018
PDF Full-text (380 KB) | HTML Full-text | XML Full-text
Abstract
A relation is established in the present paper between Dicke states in a d-dimensional space and vectors in the representation space of a generalized Weyl–Heisenberg algebra of finite dimension d. This provides a natural way to deal with the separable and entangled states of a system of N = d − 1 symmetric qubits. Using the decomposition property of Dicke states, it is shown that the separable states coincide with the Perelomov coherent states associated with the generalized Weyl–Heisenberg algebra considered in this paper. In the so-called Majorana scheme, the qudit (d-level) states are represented by N points on the Bloch sphere; roughly speaking, a qudit (in a d-dimensional space) is describable by an N-qubit vector (in an N-dimensional space). In such a scheme, the permanent of the matrix describing the overlap between the N qubits makes it possible to measure the entanglement between the N qubits forming the qudit. This is confirmed by a Fubini–Study metric analysis. A new parameter, proportional to the permanent and called the perma-concurrence, is introduced for characterizing the entanglement of a symmetric qudit arising from N qubits. For d = 3 (N = 2), this parameter constitutes an alternative to the concurrence for two qubits. Other examples are given for d = 4 and 5. A connection between Majorana stars and the zeros of a Bargmann function for qudits closes this article. Full article
(This article belongs to the Special Issue Entropy and Information in the Foundation of Quantum Physics)
Open AccessArticle Measurement-Device Independency Analysis of Continuous-Variable Quantum Digital Signature
Entropy 2018, 20(4), 291; https://doi.org/10.3390/e20040291
Received: 22 March 2018 / Revised: 15 April 2018 / Accepted: 16 April 2018 / Published: 17 April 2018
PDF Full-text (784 KB) | HTML Full-text | XML Full-text
Abstract
With the practical implementation of continuous-variable quantum cryptographic protocols, security problems resulting from measurement-device loopholes are receiving increasing attention. At present, research on measurement-device independency analysis is limited to quantum key distribution protocols, while different protocols face different security problems.
[...] Read more.
With the practical implementation of continuous-variable quantum cryptographic protocols, security problems resulting from measurement-device loopholes are receiving increasing attention. At present, research on measurement-device independency analysis is limited to quantum key distribution protocols, while different protocols face different security problems. Considering the importance of quantum digital signature in quantum cryptography, in this paper we attempt to analyze the measurement-device independency of continuous-variable quantum digital signature, especially continuous-variable quantum homomorphic signature. Firstly, we calculate the upper bound of the error rate of a protocol. If this bound is negligible on the condition that all measurement devices are untrusted, the protocol is deemed measurement-device-independent. Then, we simplify the calculation by using the characteristics of continuous variables and prove the measurement-device independency of the protocol according to the calculation result. In addition, the proposed analysis method can be extended to other quantum cryptographic protocols besides continuous-variable quantum homomorphic signature. Full article
(This article belongs to the collection Quantum Information)
Open AccessArticle Optimization of CNN through Novel Training Strategy for Visual Classification Problems
Entropy 2018, 20(4), 290; https://doi.org/10.3390/e20040290
Received: 31 January 2018 / Revised: 30 March 2018 / Accepted: 14 April 2018 / Published: 17 April 2018
Cited by 1 | PDF Full-text (8832 KB) | HTML Full-text | XML Full-text
Abstract
The convolutional neural network (CNN) has achieved state-of-the-art performance in many computer vision applications, e.g., classification, recognition, and detection. However, the global optimization of CNN training is still a problem. Fast classification and training play a key role in the development of the
[...] Read more.
The convolutional neural network (CNN) has achieved state-of-the-art performance in many computer vision applications, e.g., classification, recognition, and detection. However, the global optimization of CNN training is still a problem. Fast classification and training play a key role in the development of the CNN. We hypothesize that the smoother and more optimized the training of a CNN is, the more efficient the end result becomes. Therefore, in this paper, we implement a modified resilient backpropagation (MRPROP) algorithm to improve the convergence and efficiency of CNN training. In particular, a tolerant band is introduced to avoid network overtraining; it is incorporated with the global best concept in the weight-updating criterion to allow the training algorithm of the CNN to optimize its weights more swiftly and precisely. For comparison, we present and analyze four different training algorithms for the CNN along with MRPROP, i.e., resilient backpropagation (RPROP), Levenberg–Marquardt (LM), conjugate gradient (CG), and gradient descent with momentum (GDM). Experimental results showcase the merit of the proposed approach on a public face and skin dataset. Full article
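The baseline RPROP rule that MRPROP modifies is well documented: each weight keeps its own step size, which grows while the gradient keeps its sign and shrinks when the sign flips. A minimal Python sketch of one update (using the iRPROP− handling of sign flips; the paper's tolerant band and global-best criterion are not reproduced here):

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP-style update on a weight vector.

    Grow the per-weight step when the gradient keeps its sign,
    shrink it when the sign flips; move each weight by the signed step.
    """
    sign_change = grad * prev_grad
    # Same sign: accelerate; opposite sign: back off.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # iRPROP-: zero the gradient after a sign flip to skip that update once.
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step
```

Iterating this on a convex objective (e.g., f(w) = w²) drives the weight toward the minimum with oscillations of shrinking amplitude.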
Open AccessArticle Statistical Reasoning: Choosing and Checking the Ingredients, Inferences Based on a Measure of Statistical Evidence with Some Applications
Entropy 2018, 20(4), 289; https://doi.org/10.3390/e20040289
Received: 17 February 2018 / Revised: 5 April 2018 / Accepted: 11 April 2018 / Published: 16 April 2018
PDF Full-text (406 KB) | HTML Full-text | XML Full-text
Abstract
The features of a logically sound approach to a theory of statistical reasoning are discussed. A particular approach that satisfies these criteria is reviewed. This is seen to involve selection of a model, model checking, elicitation of a prior, checking the prior for
[...] Read more.
The features of a logically sound approach to a theory of statistical reasoning are discussed. A particular approach that satisfies these criteria is reviewed. This involves selection of a model, model checking, elicitation of a prior, checking the prior for bias, checking for prior-data conflict, and estimation and hypothesis-assessment inferences based on a measure of evidence. A long-standing anomalous example is resolved by this approach to inference, and an application is made to a practical problem of considerable importance which, among other novel aspects of the analysis, involves the development of a relevant elicitation algorithm. Full article
(This article belongs to the Special Issue Foundations of Statistics)
Open AccessEditorial Transfer Entropy
Entropy 2018, 20(4), 288; https://doi.org/10.3390/e20040288
Received: 12 April 2018 / Revised: 12 April 2018 / Accepted: 13 April 2018 / Published: 16 April 2018
PDF Full-text (181 KB) | HTML Full-text | XML Full-text
Abstract
Statistical relationships among the variables of a complex system reveal a lot about its physical behavior[...] Full article
(This article belongs to the Special Issue Transfer Entropy) Printed Edition available
Open AccessArticle Centered and Averaged Fuzzy Entropy to Improve Fuzzy Entropy Precision
Entropy 2018, 20(4), 287; https://doi.org/10.3390/e20040287
Received: 16 March 2018 / Revised: 13 April 2018 / Accepted: 13 April 2018 / Published: 15 April 2018
PDF Full-text (460 KB) | HTML Full-text | XML Full-text
Abstract
Several entropy measures are now widely used to analyze real-world time series. Among them, we can cite approximate entropy, sample entropy and fuzzy entropy (FuzzyEn), the latter one being probably the most efficient among the three. However, FuzzyEn precision depends on the number
[...] Read more.
Several entropy measures are now widely used to analyze real-world time series. Among them, we can cite approximate entropy, sample entropy, and fuzzy entropy (FuzzyEn), the latter probably being the most efficient of the three. However, FuzzyEn precision depends on the number of samples in the data under study: the longer the signal, the better. Nevertheless, long signals are often difficult to obtain in real applications. This is why we herein propose a new FuzzyEn that presents better precision than the standard FuzzyEn. This is achieved by increasing the number of samples used in the computation of the entropy measure, without changing the length of the time series. Thus, for the comparison of patterns, the mean value is no longer a constraint. Moreover, translated patterns are not the only ones considered: reflected, inverted, and glide-reflected patterns are also taken into account. The new measure (the so-called centered and averaged FuzzyEn) is applied to synthetic and biomedical signals. The results show that the centered and averaged FuzzyEn leads to more precise estimates than the standard FuzzyEn: the relative percentile range is reduced compared to the standard sample entropy and fuzzy entropy measures. The centered and averaged FuzzyEn can now be used in other applications to compare its performance to that of other existing entropy measures. Full article
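For readers unfamiliar with the baseline, the standard FuzzyEn that the centered and averaged variant improves upon can be sketched as follows (a minimal Python illustration with the conventional choices m = 2, r = 0.2·SD, and a quadratic fuzzy exponent; the paper's new measure is not reproduced here):

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Standard FuzzyEn of a 1-D series x (sketch).

    m: embedding dimension, r: tolerance as a fraction of the series SD,
    n: exponent of the Gaussian-like fuzzy membership function.
    """
    x = np.asarray(x, dtype=float)
    r = r * np.std(x)
    N = len(x)

    def phi(dim):
        # Overlapping templates of length `dim`, each with its own mean
        # removed (the baseline removal characteristic of FuzzyEn).
        # range(N - m) keeps the template count equal for dim = m and m + 1.
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        templates = templates - templates.mean(axis=1, keepdims=True)
        # Chebyshev distance between every pair of templates.
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r ** n)
        # Average similarity, excluding the self-match diagonal.
        k = len(templates)
        return (sim.sum() - k) / (k * (k - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```

On a regular signal (e.g., a sine wave) this returns a markedly lower value than on white noise of the same length, as expected of an irregularity measure.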
Open AccessArticle A Co-Opetitive Automated Negotiation Model for Vertical Allied Enterprises Teams and Stakeholders
Entropy 2018, 20(4), 286; https://doi.org/10.3390/e20040286
Received: 18 February 2018 / Revised: 2 April 2018 / Accepted: 11 April 2018 / Published: 14 April 2018
PDF Full-text (6099 KB) | HTML Full-text | XML Full-text
Abstract
Upstream and downstream supply-chain enterprises often form a tactical vertical alliance to enhance their operational efficiency and maintain their competitive edge in the market. Hence, it is critical for an alliance to coordinate its internal resources and resolve the profit
[...] Read more.
Upstream and downstream supply-chain enterprises often form a tactical vertical alliance to enhance their operational efficiency and maintain their competitive edge in the market. Hence, it is critical for an alliance to coordinate its internal resources and resolve the profit conflicts among members, so that the functionality required by stakeholders can be fulfilled. As an effective solution, automated negotiation between the vertical allied enterprises team and the stakeholder makes full use of the team's emerging advantages and significantly reduces profit conflicts within the team through group decisions, rather than unilateral decisions by a leader. In this paper, an automated negotiation model is designed to describe both the collaborative game process among the team members and the competitive negotiation process between the allied team and the stakeholder. Considering the co-opetitive nature of the vertical allied team, the designed model helps the team members make decisions in their own interest, and the team counter-offers for the ongoing negotiation are generated through a non-cooperative game process, where the profit derived from the negotiation result is distributed with the Shapley value method according to the contribution or importance of each team member. Finally, a case study is given to verify the effectiveness of the designed model. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
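The Shapley value distribution step mentioned in the abstract can be illustrated in isolation. The sketch below is a generic brute-force Shapley computation averaging marginal contributions over all join orders; the coalition value function and the two-player supplier/manufacturer example are hypothetical, not taken from the paper:

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley value of each player: the average marginal contribution
    over all join orders. `value` maps a frozenset coalition to profit."""
    totals = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: t / len(orders) for p, t in totals.items()}

# Hypothetical example: neither a supplier "s" nor a manufacturer "m"
# earns alone, but together they earn 10 - the surplus splits evenly.
v = lambda c: 10.0 if c == frozenset({"s", "m"}) else 0.0
split = shapley_values(["s", "m"], v)
```

The brute-force enumeration is exponential in the number of players, which is acceptable for small alliance teams of the kind the abstract describes.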
Open AccessComment Maximum Entropy and Theory Construction: A Reply to Favretti
Entropy 2018, 20(4), 285; https://doi.org/10.3390/e20040285
Received: 26 December 2017 / Revised: 5 March 2018 / Accepted: 5 March 2018 / Published: 14 April 2018
Cited by 1 | PDF Full-text (167 KB) | HTML Full-text | XML Full-text
Abstract
In the maximum entropy theory of ecology (METE), the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy
[...] Read more.
In the maximum entropy theory of ecology (METE), the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
Open AccessLetter Dimensional Lifting through the Generalized Gram–Schmidt Process
Entropy 2018, 20(4), 284; https://doi.org/10.3390/e20040284
Received: 26 March 2018 / Revised: 10 April 2018 / Accepted: 10 April 2018 / Published: 14 April 2018
PDF Full-text (229 KB) | HTML Full-text | XML Full-text
Abstract
A new way of orthogonalizing ensembles of vectors by “lifting” them to higher dimensions is introduced. This method can potentially be utilized for solving quantum decision and computing problems. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness)
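The abstract gives no construction details; the simplest instance of the lifting idea, making two non-orthogonal vectors orthogonal by appending one extra coordinate, can be sketched as follows (an editorial illustration under that minimal assumption, not the authors' generalized Gram–Schmidt process):

```python
import numpy as np

def lift_pair(u, v):
    """Lift two vectors into one extra dimension so that the lifted
    copies are orthogonal: choose the new coordinates a, b with
    <u, v> + a*b = 0 (here a = 1, b = -<u, v>)."""
    s = float(np.dot(u, v))
    u_lift = np.append(u, 1.0)
    v_lift = np.append(v, -s)
    return u_lift, v_lift
```

Lifting more than two vectors at once requires additional dimensions, which is where a generalized Gram–Schmidt procedure of the kind the letter introduces becomes necessary.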
Open AccessArticle TRSWA-BP Neural Network for Dynamic Wind Power Forecasting Based on Entropy Evaluation
Entropy 2018, 20(4), 283; https://doi.org/10.3390/e20040283
Received: 13 March 2018 / Revised: 5 April 2018 / Accepted: 10 April 2018 / Published: 13 April 2018
PDF Full-text (2321 KB) | HTML Full-text | XML Full-text
Abstract
The performance evaluation of wind power forecasting under commercially operating circumstances is critical to a wide range of decision-making situations, yet difficult because of its stochastic nature. This paper first introduces a novel TRSWA-BP neural network, whose learning process is based on
[...] Read more.
The performance evaluation of wind power forecasting under commercially operating circumstances is critical to a wide range of decision-making situations, yet difficult because of its stochastic nature. This paper first introduces a novel TRSWA-BP neural network, whose learning process is based on an efficient tabu, real-coded, small-world optimization algorithm (TRSWA). In order to deal with the strong volatility and stochastic behavior of the wind power sequence, three forecasting models of the TRSWA-BP are presented, which are combined with EMD (empirical mode decomposition), PSR (phase space reconstruction), and EMD-based PSR. The error sequences of the above methods are then proved to have non-Gaussian properties, and a novel criterion of normalized Renyi's quadratic entropy (NRQE) is proposed, which can evaluate their dynamic prediction accuracy. Finally, illustrative predictions for the next 1, 4, 6, and 24 h time-scales are examined using historical wind power data, under different evaluations. From the results, we can observe not only that the proposed models effectively correct the error due to the fluctuation and multi-fractal properties of wind power, but also that the NRQE remains a feasible assessment of the stochastic prediction error. Full article
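Renyi's quadratic entropy, the quantity underlying the NRQE criterion, is commonly estimated from an error sample with a Gaussian Parzen window via the "information potential". A sketch of that standard estimator (the paper's normalization is not specified in the abstract and is not reproduced; the Silverman bandwidth rule is an assumed default):

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=None):
    """Renyi's quadratic entropy H2 = -log V of a sample, where the
    information potential V = (1/N^2) * sum_ij G(e_i - e_j; 2*sigma^2)
    is the Parzen estimate of the integral of the squared density."""
    e = np.asarray(errors, dtype=float)
    n = len(e)
    if sigma is None:
        # Silverman's rule of thumb for the kernel bandwidth (assumption).
        sigma = 1.06 * e.std() * n ** (-1 / 5)
    d2 = (e[:, None] - e[None, :]) ** 2
    # Gaussian kernel with variance 2*sigma^2 (convolution of two kernels).
    kernel = np.exp(-d2 / (4 * sigma ** 2)) / np.sqrt(4 * np.pi * sigma ** 2)
    return -np.log(kernel.mean())
```

A more dispersed (less predictable) error sequence yields a larger H2, which is what makes the quantity usable as a dynamic accuracy criterion.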
Open AccessArticle A Symmetric Plaintext-Related Color Image Encryption System Based on Bit Permutation
Entropy 2018, 20(4), 282; https://doi.org/10.3390/e20040282
Received: 23 February 2018 / Revised: 10 April 2018 / Accepted: 11 April 2018 / Published: 13 April 2018
Cited by 2 | PDF Full-text (9174 KB) | HTML Full-text | XML Full-text
Abstract
Recently, a variety of chaos-based image encryption algorithms adopting the traditional permutation-diffusion structure have been suggested. Most of these algorithms cannot efficiently resist the powerful chosen-plaintext and chosen-ciphertext attacks, owing to their low sensitivity to the plain image. This paper presents a symmetric color image encryption
[...] Read more.
Recently, a variety of chaos-based image encryption algorithms adopting the traditional permutation-diffusion structure have been suggested. Most of these algorithms cannot efficiently resist the powerful chosen-plaintext and chosen-ciphertext attacks, owing to their low sensitivity to the plain image. This paper presents a symmetric color image encryption system based on a plaintext-related random access bit-permutation mechanism (PRRABPM). In the proposed scheme, a new random access bit-permutation mechanism is used to shuffle the 3D bit matrix transformed from an original color image, making the RGB components of the color image interact with each other. Furthermore, the key streams used in the random access bit-permutation operation are highly dependent on the plain image in an ingenious way. Therefore, the encryption system is sensitive to tiny differences in the key and the original image, which means that it can efficiently resist chosen-plaintext and chosen-ciphertext attacks. In the diffusion stage, the previous encrypted pixel is used to encrypt the current pixel. The simulation results show that even though the permutation-diffusion operation in our encryption scheme is performed only once, the proposed algorithm has favorable security performance. Considering real-time applications, the encryption speed can be further improved. Full article
(This article belongs to the Section Complexity)
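The plaintext-related permutation idea can be illustrated generically: derive a chaotic keystream from both a secret key and a digest of the plaintext bits, so the shuffle changes with every plain image. The sketch below uses a logistic map and SHA-256 as stand-in primitives (hypothetical choices; the paper's PRRABPM and its 3D bit matrix are not reproduced) and shows only the permutation stage, since a real scheme must also convey the plaintext-dependent seed to the receiver:

```python
import hashlib

def logistic_stream(x0, n, mu=3.99):
    """Iterate the chaotic logistic map x <- mu*x*(1-x) n times."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def plaintext_related_permutation(bits, key):
    """Permute a bit sequence with an order derived from key + plaintext."""
    # Seed the chaotic map with a hash of both the plaintext bits and the
    # key, so a one-bit change in the plain image changes the permutation.
    digest = hashlib.sha256(bytes(bits) + key).digest()
    x0 = (int.from_bytes(digest[:8], "big") / 2 ** 64) * 0.998 + 0.001
    stream = logistic_stream(x0, len(bits))
    order = sorted(range(len(bits)), key=lambda i: stream[i])
    return [bits[i] for i in order], order

def invert_permutation(permuted, order):
    """Undo the shuffle given the same permutation order."""
    out = [0] * len(permuted)
    for j, i in enumerate(order):
        out[i] = permuted[j]
    return out
```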
Open AccessArticle Cross Mean Annual Runoff Pseudo-Elasticity of Entropy for Quaternary Catchments of the Upper Vaal Catchment in South Africa
Entropy 2018, 20(4), 281; https://doi.org/10.3390/e20040281
Received: 2 November 2017 / Revised: 21 December 2017 / Accepted: 21 December 2017 / Published: 13 April 2018
Cited by 1 | PDF Full-text (4549 KB) | HTML Full-text | XML Full-text
Abstract
This study focuses preliminarily on the intra-tertiary catchment (TC) assessment of cross MAR pseudo-elasticity of entropy, which determines the impact of changes in MAR for a quaternary catchment (QC) on the entropy of one or more other QCs. The TCs of the Upper Vaal catchment
[...] Read more.
This study focuses preliminarily on the intra-tertiary catchment (TC) assessment of cross MAR pseudo-elasticity of entropy, which determines the impact of changes in MAR for a quaternary catchment (QC) on the entropy of one or more other QCs. The TCs of the Upper Vaal catchment were used for this preliminary assessment, and the surface water resources (WR) of South Africa 1990 (WR90), 2005 (WR2005) and 2012 (WR2012) data sets were used. The TCs are grouped into three secondary catchments, i.e., downstream of Vaal Dam, upstream of Vaal Dam, and Wilge. It is revealed that there are linkages in terms of mean annual runoff (MAR) between QCs, which could be complements (negative cross elasticity) or substitutes (positive cross elasticity). It is shown that cross MAR pseudo-elasticity can be translated into correlation strength between QC pairs, i.e., high cross elasticity (low catchment resilience) and low cross elasticity (high catchment resilience). Implicitly, catchment resilience is shown to be associated with the risk of vulnerability (or the sustainability level) of water resources, in terms of MAR, which is generally low (or high). In addition, for each TC, the dominance (of complements or substitutes) and the global highest cross MAR elasticity are determined. The overall average cross MAR elasticity of QCs for each TC was shown to be in the zone of tolerable entropy, hence the zone of functioning resilience. This could assure that water resources remain fairly sustainable in the TCs that form the secondary catchments of the Upper Vaal. The cross MAR pseudo-elasticity concept could be further extended to an intra-secondary catchment assessment. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
Open AccessArticle Towards Experiments to Test Violation of the Original Bell Inequality
Entropy 2018, 20(4), 280; https://doi.org/10.3390/e20040280
Received: 16 February 2018 / Revised: 29 March 2018 / Accepted: 11 April 2018 / Published: 13 April 2018
PDF Full-text (285 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to attract the attention of experimenters to the original Bell (OB) inequality, which has been overshadowed by the common consideration of the Clauser–Horne–Shimony–Holt (CHSH) inequality. There are two reasons to test the OB inequality and not the CHSH
[...] Read more.
The aim of this paper is to attract the attention of experimenters to the original Bell (OB) inequality, which has been overshadowed by the common consideration of the Clauser–Horne–Shimony–Holt (CHSH) inequality. There are two reasons to test the OB inequality and not the CHSH inequality. First of all, the OB inequality is a straightforward consequence of the Einstein–Podolsky–Rosen (EPR) argument. In addition, only this inequality is directly related to the EPR–Bohr debate. The second distinguishing feature of the OB inequality was emphasized by Itamar Pitowsky. He pointed out that the OB inequality provides a higher degree of violation of classicality than the CHSH inequality. For the CHSH inequality, the ratio of the quantum (Tsirelson) bound Q_CHSH = 2√2 to the classical bound C_CHSH = 2, i.e., F_CHSH = Q_CHSH/C_CHSH = √2, is less than the ratio of the quantum bound for the OB inequality, Q_OB = 3/2, to the classical bound C_OB = 1, i.e., F_OB = Q_OB/C_OB = 3/2. Thus, by violating the OB inequality, it is possible to approach a higher degree of deviation from classicality. The main problem is that the OB inequality is derived under the assumption of perfect (anti-)correlations. However, the last few years have been characterized by the amazing development of quantum technologies. Nowadays, there exist sources producing, with very high probability, pairs of photons in the singlet state. Moreover, the efficiency of photon detectors has improved tremendously. In any event, one can start by proceeding with the fair sampling assumption. Another possibility is to use the scheme of the Hensen et al. experiment for entangled electrons, where the detection efficiency is very high. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
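The quoted bounds can be checked numerically from the singlet-state correlation E(a, b) = −cos(a − b), assuming the perfect anti-correlations that the OB derivation requires (a short sketch; the measurement angles are the standard optimal choices, not prescribed by the abstract):

```python
import math

def E(a, b):
    # Singlet-state correlation of spin measurements along angles a and b.
    return -math.cos(a - b)

# CHSH: |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2 classically;
# the quantum (Tsirelson) maximum 2*sqrt(2) is reached at these angles.
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S_chsh = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

# OB: |E(a,b) - E(a,c)| - E(b,c) <= 1 classically (given perfect
# anti-correlations); the quantum maximum 3/2 is reached at these angles.
a, b, c = 0.0, math.pi / 3, 2 * math.pi / 3
S_ob = abs(E(a, b) - E(a, c)) - E(b, c)

ratio_chsh = S_chsh / 2.0  # F_CHSH = sqrt(2)
ratio_ob = S_ob / 1.0      # F_OB = 3/2 > sqrt(2)
```

The two ratios reproduce Pitowsky's point: the OB inequality admits a larger relative deviation from its classical bound than the CHSH inequality does.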
Open AccessArticle Relation between Self-Organization and Wear Mechanisms of Diamond Films
Entropy 2018, 20(4), 279; https://doi.org/10.3390/e20040279
Received: 1 February 2018 / Revised: 5 April 2018 / Accepted: 10 April 2018 / Published: 13 April 2018
Cited by 1 | PDF Full-text (10579 KB) | HTML Full-text | XML Full-text
Abstract
The study deals with the tribological properties of diamond films tested under reciprocal sliding conditions against Si3N4 balls. Adhesive and abrasive wear are explained in terms of a nonequilibrium thermodynamic model of friction and wear. Surface roughness alteration and film
[...] Read more.
The study deals with the tribological properties of diamond films tested under reciprocal sliding conditions against Si3N4 balls. Adhesive and abrasive wear are explained in terms of a nonequilibrium thermodynamic model of friction and wear. Surface roughness alteration and film deformation induce instabilities in the tribological system; therefore, self-organization can occur. Instabilities can lead to an increase of the real contact area between the ball and the film, resulting in seizure between the sliding counterparts (a degenerative case of self-organization). However, the material cannot withstand the stress and collapses due to high friction forces; thus, this regime of sliding corresponds to adhesive wear. In contrast, a decrease of the real contact area leads to a decrease of the coefficient of friction (constructive self-organization). However, it results in a contact pressure increase on the top of asperities within the contact zone, followed by material collapse, i.e., abrasive wear. These wear mechanisms should be distinguished from the self-lubricating properties of diamond due to the formation of a carbonaceous layer. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
Open AccessArticle Contextuality Analysis of the Double Slit Experiment (with a Glimpse into Three Slits)
Entropy 2018, 20(4), 278; https://doi.org/10.3390/e20040278
Received: 31 January 2018 / Revised: 26 March 2018 / Accepted: 9 April 2018 / Published: 12 April 2018
Cited by 1 | PDF Full-text (1390 KB) | HTML Full-text | XML Full-text
Abstract
The Contextuality-by-Default theory is illustrated by a contextuality analysis of the idealized double-slit experiment. The experiment is described by a system of contextually labeled binary random variables, each of which answers the question: Has the particle hit the detector, having passed through a given
[...] Read more.
The Contextuality-by-Default theory is illustrated by a contextuality analysis of the idealized double-slit experiment. The experiment is described by a system of contextually labeled binary random variables, each of which answers the question: Has the particle hit the detector, having passed through a given slit (left or right) in a given state (open or closed)? This system of random variables is a cyclic system of rank 4, formally the same as the system describing the Einstein–Podolsky–Rosen–Bell paradigm with signaling. Unlike the latter, however, the system describing the double-slit experiment is always noncontextual, i.e., the context-dependence in it is entirely explainable in terms of direct influences of contexts (closed-open arrangements of the slits) upon the marginal distributions of the random variables involved. The analysis presented is entirely within the framework of abstract classical probability theory (with contextually labeled random variables). The only physical constraint used in the analysis is that a particle cannot pass through a closed slit. The noncontextuality of the double-slit system does not generalize to systems describing experiments with more than two slits: in an abstract triple-slit system, almost any set of observable detection probabilities is compatible with both a contextual scenario and a noncontextual scenario of the particle passing through various combinations of open and closed slits (although the issue of physical realizability of these scenarios remains open). Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)