Table of Contents

Entropy, Volume 19, Issue 3 (March 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Free energy transduction can be obtained when external oscillating fields couple to internal [...]
Displaying articles 1-48

Research

Jump to: Review, Other

Open AccessArticle A LiBr-H2O Absorption Refrigerator Incorporating a Thermally Activated Solution Pumping Mechanism
Entropy 2017, 19(3), 90; doi:10.3390/e19030090
Received: 30 November 2016 / Revised: 6 February 2017 / Accepted: 21 February 2017 / Published: 26 February 2017
PDF Full-text (1095 KB) | HTML Full-text | XML Full-text
Abstract
This paper provides an illustrated description of a proposed LiBr-H2O vapour absorption refrigerator which uses a thermally activated solution pumping mechanism, combining controlled variations in generator vapour pressure with the changes these produce in the static-head pressure difference, to circulate the absorbent solution between the generator and absorber vessels. The proposed system differs from, and is potentially more efficient than, a previously proposed bubble pump system, and it avoids the need for the electrically powered circulation pump found in most conventional LiBr absorption refrigerators. The paper goes on to provide a sample set of calculations showing that the coefficient of performance values of the proposed cycle are similar to those of conventional cycles. The theoretical results compare favourably with some preliminary experimental results, which are also presented for the first time in this paper. The paper ends by proposing an outline design for an innovative steam valve, a key component needed to control the solution pumping mechanism. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)
Open AccessArticle Normalized Unconditional ϵ-Security of Private-Key Encryption
Entropy 2017, 19(3), 100; doi:10.3390/e19030100
Received: 12 January 2017 / Revised: 16 February 2017 / Accepted: 1 March 2017 / Published: 7 March 2017
PDF Full-text (702 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we introduce two normalized versions of non-perfect security for private-key encryption: one version in the framework of Shannon entropy, another version in the framework of Kolmogorov complexity. We prove the lower bound on either key entropy or key size for these models and study the relations between these normalized security notions. Full article
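The paper's exact definitions are not reproduced here, but the idea of normalized non-perfect security in the Shannon framework can be illustrated with a toy cipher: a 2-bit message XORed with a repeated 1-bit key leaks I(M;C) = 1 bit, i.e. half of H(M) after normalization, whereas a one-time pad leaks nothing. The normalization by H(M) below is a sketch of the general notion, not the paper's actual definition.

```python
from collections import Counter
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a distribution {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) from a joint distribution {(x, y): p}."""
    px, py = Counter(), Counter()
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return entropy(px) + entropy(py) - entropy(joint)

def xor_cipher_joint(msg_bits, key_bits):
    """Joint (M, C) distribution for C = M XOR key, short key repeated."""
    joint = Counter()
    reps = msg_bits // key_bits
    p = 1.0 / (2 ** msg_bits * 2 ** key_bits)
    for m in range(2 ** msg_bits):
        for k in range(2 ** key_bits):
            k_exp = sum(k << (i * key_bits) for i in range(reps))
            joint[(m, m ^ k_exp)] += p
    return joint

H_M = 2.0                                                # 2-bit uniform message
leak_short = mutual_information(xor_cipher_joint(2, 1))  # repeated 1-bit key
leak_otp = mutual_information(xor_cipher_joint(2, 2))    # one-time pad
print(leak_short / H_M, leak_otp / H_M)  # 0.5 0.0
```

Dividing the leakage by the message entropy is what makes the measure comparable across message lengths, which is the spirit of the normalized notions studied in the paper.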
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle On the Entropy of Deformed Phase Space Black Hole and the Cosmological Constant
Entropy 2017, 19(3), 91; doi:10.3390/e19030091
Received: 27 November 2016 / Revised: 18 February 2017 / Accepted: 22 February 2017 / Published: 28 February 2017
PDF Full-text (215 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we study the effects of noncommutative phase space deformations on the Schwarzschild black hole. This idea has been previously studied in Friedmann–Robertson–Walker (FRW) cosmology, where this “noncommutativity” provides a simple mechanism that can explain the origin of the cosmological constant. In this paper, we obtain the same relationship between the cosmological constant and the deformation parameter that appears in deformed phase space cosmology, but in the context of the deformed phase space black holes. This was achieved by comparing the entropy of the deformed Schwarzschild black hole with the entropy of the Schwarzschild–de Sitter black hole. Full article
(This article belongs to the Section Astrophysics and Cosmology)
Open AccessArticle “Over-Learning” Phenomenon of Wavelet Neural Networks in Remote Sensing Image Classifications with Different Entropy Error Functions
Entropy 2017, 19(3), 101; doi:10.3390/e19030101
Received: 14 November 2016 / Revised: 27 February 2017 / Accepted: 27 February 2017 / Published: 8 March 2017
PDF Full-text (8692 KB) | HTML Full-text | XML Full-text
Abstract
Artificial neural networks are widely applied for prediction, function simulation, and data classification. Among these applications, the wavelet neural network is widely used in image classification problems due to its high approximation capability, fault tolerance, learning capacity, and its ability to effectively overcome local minimization issues. The error function of a network is critical in determining its convergence, stability, and classification accuracy; the choice of error function directly determines the network's performance. Different error functions correspond to different minimum error values on the training samples. As the network error decreases, the accuracy of the image classification increases. However, if the classification accuracy is difficult to improve, or even decreases as the error decreases, the network exhibits an "over-learning" phenomenon, which is closely related to the choice of error function. For remote sensing data, neither the "over-learning" phenomenon nor its relationship to the error function has yet been studied. This study takes SAR, hyper-spectral, high-resolution, and multi-spectral images as data sources in order to comprehensively and systematically analyze the possibility of an "over-learning" phenomenon in remote sensing images, from the perspectives of both image characteristics and the neural network. It then discusses the impact of three typical entropy error functions (NB, CE, and SH) on the "over-learning" phenomenon of a network. The experimental results show that the "over-learning" phenomenon arises only when there is strong separability between the ground features, low image complexity, a small image size, and a large number of hidden nodes. In that case, the SH entropy error function shows good resistance to "over-learning". However, for remote sensing image classification, the "over-learning" phenomenon is not easily induced in most cases, owing to the complexity of the image itself and the diversity of the ground features; there, networks with the NB and CE entropy error functions mainly show good stability. Therefore, blindly selecting the SH entropy error function, with its high resistance to "over-learning", for wavelet neural network classification of remote sensing images will only decrease classification accuracy. An NB or CE entropy error function, with their stable learning behaviour, is therefore recommended. Full article
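The abstract does not expand the abbreviations NB and SH; assuming CE denotes the familiar cross-entropy error, a small sketch shows one reason the error function matters for convergence: for a badly wrong, saturated sigmoid unit, the cross-entropy gradient stays large while the squared-error gradient (used here only as a generic non-entropy baseline) vanishes.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# Gradients of two error functions w.r.t. the pre-activation z of a single
# sigmoid output unit y = sigmoid(z) with target t:
#   cross-entropy:  dE/dz = y - t
#   squared error:  dE/dz = (y - t) * y * (1 - y)
z, t = -6.0, 1.0          # a badly wrong, saturated unit
y = sigmoid(z)
grad_ce = y - t
grad_se = (y - t) * y * (1 - y)
print(abs(grad_ce) / abs(grad_se))  # CE gradient is ~400x larger here
```

This difference in gradient behaviour is one mechanism by which the choice among error functions can change how far training drives the weights, and hence whether over-learning appears.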
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
Open AccessArticle Emergence of Distinct Spatial Patterns in Cellular Automata with Inertia: A Phase Transition-Like Behavior
Entropy 2017, 19(3), 102; doi:10.3390/e19030102
Received: 24 December 2016 / Revised: 20 February 2017 / Accepted: 28 February 2017 / Published: 7 March 2017
PDF Full-text (5142 KB) | HTML Full-text | XML Full-text
Abstract
We propose a Cellular Automata (CA) model in which three ubiquitous and relevant processes in nature are present, namely, spatial competition, distinction between dynamically stronger and weaker agents, and the existence of an inner resistance to changes in the actual state Sn (= −1, 0, +1) of each CA lattice cell n (which we call inertia). Considering ensembles of initial lattices, we study the average properties of the CA final stationary configuration structures resulting from the system time evolution. Taking the inertia as a (proper) control parameter, we identify qualitative changes in the CA spatial patterns resembling usual phase transitions. Interestingly, some of the observed features may be associated with continuous transitions (critical phenomena). However, certain quantities seem to present jumps, typical of discontinuous transitions. We argue that these apparently contradictory findings can be attributed to the discrete character of the inertia parameter. Throughout the work, we also briefly discuss a few potential applications of the present CA formulation. Full article
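The abstract does not give the update rule, so the following is only an illustrative guess at a three-state CA with inertia: a cell adopts the dominant state of its neighbourhood only when the local majority exceeds an inertia threshold, so larger inertia freezes more of the pattern.

```python
import random
from collections import Counter

STATES = (-1, 0, 1)

def step(grid, inertia):
    """One synchronous update of a toy CA with inertia: a cell adopts the
    most common state among its four neighbours only when that state's
    count exceeds the inertia threshold (an illustrative rule; the
    published model differs in detail)."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            neigh = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                     grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
            state, count = Counter(neigh).most_common(1)[0]
            if count > inertia:          # inner resistance to change
                new[i][j] = state
    return new

random.seed(1)
grid = [[random.choice(STATES) for _ in range(20)] for _ in range(20)]
flips = lambda g: sum(g[i][j] != grid[i][j] for i in range(20) for j in range(20))
low, high = step(grid, inertia=1), step(grid, inertia=4)
print(flips(low), flips(high))  # higher inertia -> fewer flips (here none)
```

With inertia set to the neighbourhood size, no majority can ever exceed it, so the configuration is completely frozen — a crude analogue of the pattern changes the paper studies as inertia is varied.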
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
Open AccessArticle Use of Accumulated Entropies for Automated Detection of Congestive Heart Failure in Flexible Analytic Wavelet Transform Framework Based on Short-Term HRV Signals
Entropy 2017, 19(3), 92; doi:10.3390/e19030092
Received: 25 January 2017 / Revised: 15 February 2017 / Accepted: 16 February 2017 / Published: 27 February 2017
Cited by 2 | PDF Full-text (428 KB) | HTML Full-text | XML Full-text
Abstract
In the present work, an automated method to diagnose Congestive Heart Failure (CHF) using Heart Rate Variability (HRV) signals is proposed. This method is based on the Flexible Analytic Wavelet Transform (FAWT), which decomposes the HRV signals into different sub-band signals. Further, Accumulated Fuzzy Entropy (AFEnt) and Accumulated Permutation Entropy (APEnt) are computed over cumulative sums of these sub-band signals. This provides complexity analysis using fuzzy and permutation entropies at different frequency scales. We extracted 20 features from the signals obtained at these frequency scales of the HRV signals. The Bhattacharyya ranking method is used to rank the features extracted from HRV signals of three different lengths (500, 1000 and 2000 samples). These ranked features are fed to the Least Squares Support Vector Machine (LS-SVM) classifier. Our proposed system obtained a sensitivity of 98.07%, specificity of 98.33% and accuracy of 98.21% for HRV signals of length 500 samples; a sensitivity of 97.95%, specificity of 98.07% and accuracy of 98.01% for signals of length 1000 samples; and a sensitivity of 97.76%, specificity of 97.67% and accuracy of 97.71% for signals of length 2000 samples. Our automated system can aid clinicians in the accurate detection of CHF using HRV signals. It can be installed in hospitals, polyclinics and remote villages where there is no access to cardiologists. Full article
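The paper accumulates fuzzy and permutation entropies over cumulative sums of FAWT sub-bands; as one building block of that pipeline, here is a plain permutation entropy sketch (the FAWT decomposition and the fuzzy/accumulated variants are omitted).

```python
import math
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D sequence:
    the Shannon entropy of ordinal-pattern frequencies, scaled to [0, 1]
    by log(order!)."""
    patterns = Counter()
    for i in range(len(x) - (order - 1) * delay):
        window = [x[i + k * delay] for k in range(order)]
        # ordinal pattern: the argsort of the values in the window
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

print(permutation_entropy(list(range(100))) == 0.0)   # True: fully regular ramp
print(permutation_entropy([7, 1, 8, 2, 9, 3, 6, 0, 5, 4] * 10) > 0.5)  # True
```

Low values indicate a highly regular signal and values near 1 an irregular one, which is why such entropies discriminate between normal and CHF heart-rate dynamics.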
(This article belongs to the Special Issue Entropy and Cardiac Physics II)
Open AccessArticle Analysis of the Temporal Structure Evolution of Physical Systems with the Self-Organising Tree Algorithm (SOTA): Application for Validating Neural Network Systems on Adaptive Optics Data before On-Sky Implementation
Entropy 2017, 19(3), 103; doi:10.3390/e19030103
Received: 18 January 2017 / Revised: 24 February 2017 / Accepted: 5 March 2017 / Published: 7 March 2017
PDF Full-text (1135 KB) | HTML Full-text | XML Full-text
Abstract
Adaptive optics reconstructors are needed to remove the effects of atmospheric distortion in the optical systems of large telescopes. The use of reconstructors based on neural networks has proved successful in recent times, but some of their properties require specific characterization. A procedure, based on time series clustering algorithms, is presented to characterize the relationship between the temporal structure of inputs and outputs by analyzing the data provided by the system. This procedure is used to compare the performance of a reconstructor based on Artificial Neural Networks with one that shows promising results but is still in development, in order to corroborate its suitability prior to its implementation in real applications. The procedure could also be applied to other physical systems that evolve in time. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle An Entropy-Assisted Shielding Function in DDES Formulation for the SST Turbulence Model
Entropy 2017, 19(3), 93; doi:10.3390/e19030093
Received: 12 November 2016 / Revised: 7 February 2017 / Accepted: 23 February 2017 / Published: 27 February 2017
PDF Full-text (5212 KB) | HTML Full-text | XML Full-text
Abstract
The intent of shielding functions in delayed detached-eddy simulation (DDES) methods is to preserve the wall boundary layers in Reynolds-averaged Navier–Stokes (RANS) mode, avoiding possible modeled stress depletion (MSD) or even unphysical separation due to grid refinement. An entropy function fs is introduced to construct a DDES formulation for the k-ω shear stress transport (SST) model, whose performance is extensively examined on a range of attached and separated flows (flat-plate flow, circular cylinder flow, and supersonic cavity-ramp flow). Two other forms of shielding function are included for comparison: one that uses the blending function F2 of SST, and one that adopts the recalibrated shielding function fd_cor of the DDES version based on the Spalart-Allmaras (SA) model. In general, none of the shielding functions impairs the vortices in fully separated flows. However, for flows including an attached boundary layer, both F2 and the recalibrated fd_cor are found to be too conservative to resolve the unsteady flow content. By contrast, fs is built on the theory of energy dissipation and is independent of any particular turbulence model, showing a generic advantage in properly balancing the need to preserve the RANS-modeled regions for wall boundary layers against that of generating unsteady turbulent structures in detached areas. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
Open AccessArticle Complexity and Vulnerability Analysis of the C. Elegans Gap Junction Connectome
Entropy 2017, 19(3), 104; doi:10.3390/e19030104
Received: 30 December 2016 / Revised: 24 February 2017 / Accepted: 3 March 2017 / Published: 8 March 2017
PDF Full-text (976 KB) | HTML Full-text | XML Full-text
Abstract
We apply a network complexity measure to the gap junction network of the somatic nervous system of C. elegans and find that it possesses a much higher complexity than we might expect from its degree distribution alone. This “excess” complexity is seen to be caused by a relatively small set of connections involving command interneurons. We describe a method which progressively deletes these “complexity-causing” connections, and find that when these are eliminated, the network becomes significantly less complex than a random network. Furthermore, this result implicates the previously-identified set of neurons from the synaptic network’s “rich club” as the structural components encoding the network’s excess complexity. This study and our method thus support a view of the gap junction Connectome as consisting of a rather low-complexity network component whose symmetry is broken by the unique connectivities of singularly important rich club neurons, sharply increasing the complexity of the network. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
Open AccessArticle Numerical Study of the Magnetic Field Effects on the Heat Transfer and Entropy Generation Aspects of a Power Law Fluid over an Axisymmetric Stretching Plate Structure
Entropy 2017, 19(3), 94; doi:10.3390/e19030094
Received: 16 December 2016 / Revised: 12 February 2017 / Accepted: 15 February 2017 / Published: 1 March 2017
PDF Full-text (3432 KB) | HTML Full-text | XML Full-text
Abstract
A numerical investigation of the effects of magnetic field strength, thermal radiation, Joule heating, and viscous heating on the forced convective flow of a non-Newtonian, incompressible power law fluid over an axisymmetric stretching sheet with a variable-temperature wall is carried out. The power law shear thinning viscosity-shear rate model for the anisotropic solutions and the Rosseland approximation for thermal radiation through a highly absorbing medium are considered. Temperature dependent heat sources, Joule heating, and viscous heating are included as source terms in the energy balance. The non-dimensional boundary layer equations are solved numerically in terms of a similarity variable. A parameter study of the Nusselt number and the viscous and thermal components of entropy generation in the fluid is performed as a function of the thermal radiation parameter (0 to 2), Brinkman number (0 to 10), Prandtl number (0 to 10), Hartmann number (0 to 1), power law index (0 to 1), and heat source coefficient (0 to 0.1). Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle On the Complexity Reduction of Coding WSS Vector Processes by Using a Sequence of Block Circulant Matrices
Entropy 2017, 19(3), 95; doi:10.3390/e19030095
Received: 20 December 2016 / Revised: 23 February 2017 / Accepted: 27 February 2017 / Published: 2 March 2017
PDF Full-text (276 KB) | HTML Full-text | XML Full-text
Abstract
In the present paper, we obtain a result on the rate-distortion function (RDF) of wide sense stationary (WSS) vector processes that allows us to reduce the complexity of coding those processes. To achieve this result, we propose a sequence of block circulant matrices. In addition, we use the proposed sequence to reduce the complexity of filtering WSS vector processes. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Brownian Dynamics Computational Model of Protein Diffusion in Crowded Media with Dextran Macromolecules as Obstacles
Entropy 2017, 19(3), 105; doi:10.3390/e19030105
Received: 22 December 2016 / Revised: 1 March 2017 / Accepted: 3 March 2017 / Published: 9 March 2017
PDF Full-text (912 KB) | HTML Full-text | XML Full-text
Abstract
The high concentration of macromolecules (i.e., macromolecular crowding) in cellular environments leads to large quantitative effects on the dynamic and equilibrium biological properties. These effects have been experimentally studied using inert macromolecules to mimic a realistic cellular medium. In this paper, two different experimental in vitro systems of diffusing proteins which use dextran macromolecules as obstacles are computationally analyzed. A new model for dextran macromolecules based on effective radii accounting for macromolecular compression induced by crowding is proposed. The obtained results for the diffusion coefficient and the anomalous diffusion exponent exhibit good qualitative and generally good quantitative agreement with experiments. Volume fraction and hydrodynamic interactions are found to be crucial to describe the diffusion coefficient decrease in crowded media. However, no significant influence of the hydrodynamic interactions in the anomalous diffusion exponent is found. Full article
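As a minimal backdrop to the crowding study, the sketch below runs free 2-D Brownian dynamics and recovers the Einstein relation MSD = 4 D t, the baseline against which obstacle-induced anomalous diffusion (MSD ~ t^α, α < 1) is measured. Dextran obstacles and hydrodynamic interactions, which are the substance of the paper, are deliberately omitted, and all parameter values are illustrative.

```python
import random

def bd_msd(n_particles=2000, steps=100, dt=0.01, D=1.0, seed=42):
    """Mean squared displacement of free 2-D Brownian dynamics,
    x += sqrt(2 D dt) * gaussian noise per step. Obstacles and
    hydrodynamic interactions (central to the paper) are omitted;
    with them, the MSD would grow subdiffusively at short times."""
    rng = random.Random(seed)
    sigma = (2.0 * D * dt) ** 0.5
    msd = 0.0
    for _ in range(n_particles):
        x = y = 0.0
        for _ in range(steps):
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
        msd += x * x + y * y
    return msd / n_particles

t = 100 * 0.01
print(bd_msd(), 4.0 * t)  # estimate should be close to 4 D t = 4.0
```

Fitting log(MSD) against log(t) in such simulations is the standard way to extract the anomalous diffusion exponent discussed in the abstract.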
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Open AccessArticle Effect of a Magnetic Quadrupole Field on Entropy Generation in Thermomagnetic Convection of Paramagnetic Fluid with and without a Gravitational Field
Entropy 2017, 19(3), 96; doi:10.3390/e19030096
Received: 2 January 2017 / Revised: 26 February 2017 / Accepted: 28 February 2017 / Published: 3 March 2017
PDF Full-text (5150 KB) | HTML Full-text | XML Full-text
Abstract
Entropy generation for a paramagnetic fluid in a square enclosure with thermomagnetic convection is numerically investigated under the influence of a magnetic quadrupole field. The magnetic field is calculated using the scalar magnetic potential approach. The finite-volume method is applied to solve the coupled equations for flow, energy, and entropy generation. Simulations are conducted to obtain streamlines, isotherms, Nusselt numbers, entropy generation, and the Bejan number for various magnetic force numbers (1 ≤ γ ≤ 100) and Rayleigh numbers (10⁴ ≤ Ra ≤ 10⁶). In the absence of gravity, the total entropy generation increases with increasing magnetic field number, but the average Bejan number decreases. In the gravitational field, the total entropy generation is insensitive to changes in the magnetic force at low Rayleigh numbers, while it changes significantly at high Rayleigh numbers. As the magnetic field strengthens, the share of viscous dissipation in the energy losses keeps growing. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle On Quantum Collapse as a Basis for the Second Law of Thermodynamics
Entropy 2017, 19(3), 106; doi:10.3390/e19030106
Received: 24 December 2016 / Revised: 25 February 2017 / Accepted: 7 March 2017 / Published: 9 March 2017
PDF Full-text (233 KB) | HTML Full-text | XML Full-text
Abstract
It was first suggested by David Z. Albert that the existence of a real, physical non-unitary process (i.e., “collapse”) at the quantum level would yield a complete explanation for the Second Law of Thermodynamics (i.e., the increase in entropy over time). The contribution of such a process would be to provide a physical basis for the ontological indeterminacy needed to derive the irreversible Second Law against a backdrop of otherwise reversible, deterministic physical laws. An alternative understanding of the source of this possible quantum “collapse” or non-unitarity is presented herein, in terms of the Transactional Interpretation (TI). The present model provides a specific physical justification for Boltzmann’s often-criticized assumption of molecular randomness (Stosszahlansatz), thereby changing its status from an ad hoc postulate to a theoretically grounded result, without requiring any change to the basic quantum theory. In addition, it is argued that TI provides an elegant way of reconciling, via indeterministic collapse, the time-reversible Liouville evolution with the time-irreversible evolution inherent in so-called “master equations” that specify the changes in occupation of the various possible states in terms of the transition rates between them. The present model is contrasted with the Ghirardi–Rimini–Weber (GRW) “spontaneous collapse” theory previously suggested for this purpose by Albert. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open AccessArticle Motion Sequence Decomposition-Based Hybrid Entropy Feature and Its Application to Fault Diagnosis of a High-Speed Automatic Mechanism
Entropy 2017, 19(3), 86; doi:10.3390/e19030086
Received: 16 December 2016 / Revised: 11 February 2017 / Accepted: 14 February 2017 / Published: 24 February 2017
PDF Full-text (3174 KB) | HTML Full-text | XML Full-text
Abstract
High-speed automatic weapons play an important role in the field of national defense. However, current research on the reliability analysis of automatons principally relies on simulations, because experimental data are difficult to collect in real life. Unlike rotating machinery, a high-speed automaton must accomplish a complex motion consisting of a series of impacts. In addition to strong noise, the impacts generated by different components of the automaton interfere with each other, and no effective approach exists to cope with this in the fault diagnosis of automatic mechanisms. This paper proposes a motion sequence decomposition approach combining modern signal processing techniques to develop an effective method of fault detection in high-speed automatons. We first investigate the entire working procedure of the automatic mechanism and calculate the corresponding action times of the travel involved. The vibration signal collected from the shooting experiment is then divided into a number of impacts corresponding to the action order. Only the segment generated by a faulty component is isolated from the original impacts, according to the action time of that component. Wavelet packet decomposition (WPD) is first applied to the resulting signals to investigate their energy distribution, and the components with higher energy are selected for feature extraction. Three information entropy features based on empirical mode decomposition (EMD) are utilized to distinguish the various states of the automaton. A grey wolf optimization (GWO) algorithm is introduced to improve the performance of the support vector machine (SVM) classifier. We carried out shooting experiments to collect vibration data for demonstration of the proposed work. The experimental results show that the proposed method is effective for fault diagnosis of a high-speed automaton and can be applied in real applications. Moreover, the GWO provides a competitive diagnosis result compared with the genetic algorithm (GA) and particle swarm optimization (PSO). Full article
(This article belongs to the Section Complexity)
Open AccessArticle Taxis of Artificial Swimmers in a Spatio-Temporally Modulated Activation Medium
Entropy 2017, 19(3), 97; doi:10.3390/e19030097
Received: 19 January 2017 / Revised: 23 February 2017 / Accepted: 27 February 2017 / Published: 3 March 2017
PDF Full-text (803 KB) | HTML Full-text | XML Full-text
Abstract
Contrary to microbial taxis, where a tactic response to external stimuli is controlled by complex chemical pathways acting like sensor-actuator loops, taxis of artificial microswimmers is a purely stochastic effect associated with a non-uniform activation of the particles’ self-propulsion. We study the tactic response of such swimmers in a spatio-temporally modulated activating medium by means of both numerical and analytical techniques. In the opposite limits of very fast and very slow rotational particle dynamics, we obtain analytic approximations that closely reproduce the numerical description. A swimmer drifts on average either parallel or anti-parallel to the propagation direction of the activating pulses, depending on their speed and width. The drift in line with the pulses is solely determined by the finite persistence length of the active Brownian motion performed by the swimmer, whereas the drift in the opposite direction results from the combination of the ballistic and diffusive properties of the swimmer’s dynamics. Full article
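A minimal sketch of the setup described above, under invented parameters: active Brownian particles whose self-propulsion speed is switched on only inside a travelling Gaussian activation pulse, with the mean displacement along the pulse direction taken as the tactic response. The paper's actual pulse shapes, parameter regimes, and analytical limits are not reproduced here.

```python
import math
import random

def swimmer_drift(n=300, steps=2000, dt=0.01, v0=1.0, d_rot=1.0,
                  pulse_speed=0.5, pulse_width=1.0, seed=7):
    """Average final x of 2-D active Brownian particles whose propulsion
    speed is modulated by a travelling Gaussian activation pulse.
    All parameter values here are illustrative, not the paper's."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, theta = 0.0, rng.uniform(0.0, 2.0 * math.pi)
        for s in range(steps):
            # spatio-temporal activation: pulse centre moves at pulse_speed
            u = (x - pulse_speed * s * dt) / pulse_width
            v = v0 * math.exp(-u * u)          # local propulsion speed
            x += v * math.cos(theta) * dt
            theta += rng.gauss(0.0, (2.0 * d_rot * dt) ** 0.5)
        total += x
    return total / n

print(swimmer_drift())  # mean x-displacement after the pulse has passed
```

Varying pulse_speed and pulse_width in such a simulation is how one would probe the parallel versus anti-parallel drift regimes the abstract describes.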
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Open AccessArticle Entropy, Topological Theories and Emergent Quantum Mechanics
Entropy 2017, 19(3), 87; doi:10.3390/e19030087
Received: 29 November 2016 / Accepted: 21 February 2017 / Published: 23 February 2017
Cited by 1 | PDF Full-text (235 KB) | HTML Full-text | XML Full-text
Abstract
The classical thermostatics of equilibrium processes is shown to possess a quantum mechanical dual theory with a finite dimensional Hilbert space of quantum states. Specifically, the kernel of a certain Hamiltonian operator becomes the Hilbert space of quasistatic quantum mechanics. The relation of
[...] Read more.
The classical thermostatics of equilibrium processes is shown to possess a quantum mechanical dual theory with a finite-dimensional Hilbert space of quantum states. Specifically, the kernel of a certain Hamiltonian operator becomes the Hilbert space of quasistatic quantum mechanics. The relation of thermostatics to topological field theory is also discussed in the context of the emergence of quantum theory, where the concept of entropy plays a key role. Full article
Open AccessArticle Physical Intelligence and Thermodynamic Computing
Entropy 2017, 19(3), 107; doi:10.3390/e19030107
Received: 22 January 2017 / Accepted: 6 March 2017 / Published: 9 March 2017
PDF Full-text (3660 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes that intelligent processes can be completely explained by thermodynamic principles. They can equally be described by information-theoretic principles that, from the standpoint of the required optimizations, are functionally equivalent. The underlying theory arises from two axioms regarding distinguishability and causality.
[...] Read more.
This paper proposes that intelligent processes can be completely explained by thermodynamic principles. They can equally be described by information-theoretic principles that, from the standpoint of the required optimizations, are functionally equivalent. The underlying theory arises from two axioms regarding distinguishability and causality. Their consequence is a theory of computation that applies to the only two kinds of physical processes possible: those that reconstruct the past and those that control the future. Dissipative physical processes fall into the first class, whereas intelligent ones comprise the second. The first kind of process is exothermic and the latter is endothermic. Similarly, the first process dumps entropy and energy to its environment, whereas the second reduces entropy while requiring energy to operate. It is shown that high intelligence efficiency and high energy efficiency are synonymous. The theory suggests the usefulness of developing a new computing paradigm, called Thermodynamic Computing, to engineer intelligent processes. The described engineering formalism for the design of thermodynamic computers is a hybrid combination of information theory and thermodynamics. Elements of the engineering formalism are introduced in the reverse-engineering of a cortical neuron. The cortical neuron provides perhaps the simplest and most insightful example of a thermodynamic computer possible. It can be seen as a basic building block for constructing more intelligent thermodynamic circuits. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
Open AccessArticle Entropy Generation Analysis and Performance Evaluation of Turbulent Forced Convective Heat Transfer to Nanofluids
Entropy 2017, 19(3), 108; doi:10.3390/e19030108
Received: 21 January 2017 / Revised: 26 February 2017 / Accepted: 8 March 2017 / Published: 11 March 2017
PDF Full-text (2486 KB) | HTML Full-text | XML Full-text
Abstract
Entropy generation in fully turbulent convective heat transfer to nanofluids in a circular tube is investigated numerically using the Reynolds-Averaged Navier–Stokes (RANS) model. The nanofluids with particle concentrations of 0%, 1%, 2%, 4% and 6% are treated as single phases
[...] Read more.
Entropy generation in fully turbulent convective heat transfer to nanofluids in a circular tube is investigated numerically using the Reynolds-Averaged Navier–Stokes (RANS) model. The nanofluids with particle concentrations of 0%, 1%, 2%, 4% and 6% are treated as single phases with effective properties. A uniform heat flux is imposed at the tube wall. To confirm the validity of the numerical approach, the results have been compared with empirical correlations and an analytical formula. The self-similarity profiles of local entropy generation are also studied, in which the peak values of entropy generation by direct dissipation, turbulent dissipation, mean temperature gradients and fluctuating temperature gradients are observed for different Reynolds numbers as well as different particle concentrations. In addition, the effects of Reynolds number, volume fraction of nanoparticles and heat flux on total entropy generation and Bejan number are discussed. In the results, intersection points of the total entropy generation curves for water and the four nanofluids are observed: as the particle concentration increases, the entropy generation decreases before the intersection and increases after it. Finally, by defining E_p, which combines the first and second laws of thermodynamics and is intended to evaluate the real performance of heat transfer processes, the optimal Reynolds number Re_op corresponding to the best performance and the advisable Reynolds number Re_ad providing the appropriate Reynolds number range for nanofluids in convective heat transfer can be determined. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
Open AccessArticle Systematic Analysis of the Non-Extensive Statistical Approach in High Energy Particle Collisions—Experiment vs. Theory
Entropy 2017, 19(3), 88; doi:10.3390/e19030088
Received: 4 February 2017 / Revised: 21 February 2017 / Accepted: 22 February 2017 / Published: 24 February 2017
Cited by 1 | PDF Full-text (538 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of high-energy particle collisions is an excellent testbed for the non-extensive statistical approach. In these reactions we are far from the thermodynamical limit. In small colliding systems, such as electron-positron or nuclear collisions, the number of particles is several orders of
[...] Read more.
The analysis of high-energy particle collisions is an excellent testbed for the non-extensive statistical approach. In these reactions we are far from the thermodynamic limit. In small colliding systems, such as electron-positron or nuclear collisions, the number of particles is several orders of magnitude smaller than the Avogadro number; therefore, finite-size and fluctuation effects strongly influence the final-state one-particle energy distributions. Owing to its overly simple characterization, the Boltzmann–Gibbs thermodynamical approach is insufficient for describing the identified hadron spectra. These spectra can instead be described very well with Tsallis–Pareto distributions, derived from non-extensive thermodynamics. Using the q-entropy formula, we interpret the microscopic physics in terms of the Tsallis q and T parameters. In this paper we review these parameters, analyzing identified hadron spectra from recent years in a wide center-of-mass energy range. We demonstrate that the fitted Tsallis parameters depend on the center-of-mass energy and on the particle species (mass). Our findings are well described by a QCD (Quantum Chromodynamics) inspired parton evolution ansatz. Based on this comprehensive study, apart from the evolution, both the mesonic and baryonic components are found to be non-extensive (q > 1), alongside the mass-ordered hierarchy observed in the parameter T. We also study and compare in detail the theory-obtained parameters for the PYTHIA8 Monte Carlo generator, perturbative QCD and quark coalescence models. Full article
(This article belongs to the Special Issue Selected Papers from MaxEnt 2016)
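As background for the fits described in the abstract above, the Tsallis–Pareto (cut power-law) shape can be sketched in a few lines. This is an illustrative sketch only: the energy and temperature values below are hypothetical, not fit results from the paper.

```python
import math

def tsallis_pareto(E, q, T):
    """Tsallis-Pareto (cut power-law) shape: [1 + (q-1) E / T]^(-1/(q-1)).
    Reduces to the Boltzmann-Gibbs exponential exp(-E/T) as q -> 1."""
    return (1.0 + (q - 1.0) * E / T) ** (-1.0 / (q - 1.0))

# In the q -> 1 limit the shape recovers the Boltzmann-Gibbs exponential,
# while q > 1 produces the heavier power-law tail seen in hadron spectra.
E, T = 2.0, 0.16  # illustrative values (GeV)
boltzmann = math.exp(-E / T)
near_bg = tsallis_pareto(E, 1.0001, T)   # almost exponential
heavy_tail = tsallis_pareto(E, 1.1, T)   # non-extensive, q > 1
```

The q parameter thus quantifies the departure from Boltzmann–Gibbs statistics that the paper tracks across collision energies and particle species.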
Open AccessArticle Optimization of Alpha-Beta Log-Det Divergences and their Application in the Spatial Filtering of Two Class Motor Imagery Movements
Entropy 2017, 19(3), 89; doi:10.3390/e19030089
Received: 13 December 2016 / Revised: 7 February 2017 / Accepted: 22 February 2017 / Published: 25 February 2017
PDF Full-text (684 KB) | HTML Full-text | XML Full-text
Abstract
The Alpha-Beta Log-Det divergences for positive definite matrices are flexible divergences that are parameterized by two real constants and are able to specialize several relevant classical cases like the squared Riemannian metric, Stein's loss, the S-divergence, etc. A novel classification criterion based
[...] Read more.
The Alpha-Beta Log-Det divergences for positive definite matrices are flexible divergences that are parameterized by two real constants and are able to specialize several relevant classical cases like the squared Riemannian metric, Stein's loss, the S-divergence, etc. A novel classification criterion based on these divergences is optimized to address the problem of classification of motor imagery movements. This research paper is divided into three main sections in order to address the above-mentioned problem: (1) Firstly, it is proven that a suitable scaling of the class conditional covariance matrices can be used to link the Common Spatial Pattern (CSP) solution, with a predefined number of spatial filters for each class, to its representation as a divergence optimization problem by making their different filter selection policies compatible; (2) A closed-form formula for the gradient of the Alpha-Beta Log-Det divergences is derived, which allows one to perform the optimization and to easily use it in many practical applications; (3) Finally, similarly to the work of Samek et al. (2014), which proposed robust spatial filtering of motor imagery movements based on the beta-divergence, the optimization of the Alpha-Beta Log-Det divergences is applied to this problem. The resulting subspace algorithm provides a unified framework for testing the performance and robustness of the several divergences in different scenarios. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Formulation of Exergy Cost Analysis to Graph-Based Thermal Network Models
Entropy 2017, 19(3), 109; doi:10.3390/e19030109
Received: 12 January 2017 / Accepted: 8 March 2017 / Published: 10 March 2017
PDF Full-text (1862 KB) | HTML Full-text | XML Full-text
Abstract
Information from exergy cost analysis can be effectively used in the design and management of modern district heating networks (DHNs), since it allows one to properly account for the irreversibilities in energy conversion and distribution. Nevertheless, this requires the development of suitable graph-based approaches
[...] Read more.
Information from exergy cost analysis can be effectively used in the design and management of modern district heating networks (DHNs), since it allows one to properly account for the irreversibilities in energy conversion and distribution. Nevertheless, this requires the development of suitable graph-based approaches that are able to effectively consider the network topology and the variations of the physical properties of the heating fluid on a time-dependent basis. In this work, a formulation of exergetic costs suitable for large graph-based networks is proposed, which is consistent with the principles of exergetic costing. In particular, the approach is more compact than the straightforward approaches to exergetic cost formulation available in the literature, especially when applied to fluid networks. Moreover, the proposed formulation specifically considers transient operating conditions, which is a crucial feature and a necessity for the analysis of future DHNs. Results show that transient effects of the thermodynamic behavior are not negligible for exergy cost analysis, and this work offers a coherent approach to quantify them. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)
Open AccessArticle Tunable-Q Wavelet Transform Based Multivariate Sub-Band Fuzzy Entropy with Application to Focal EEG Signal Analysis
Entropy 2017, 19(3), 99; doi:10.3390/e19030099
Received: 24 December 2016 / Accepted: 24 February 2017 / Published: 3 March 2017
Cited by 3 | PDF Full-text (2925 KB) | HTML Full-text | XML Full-text
Abstract
This paper analyses the complexity of multivariate electroencephalogram (EEG) signals in different frequency scales for the analysis and classification of focal and non-focal EEG signals. The proposed multivariate sub-band entropy measure has been built based on tunable-Q wavelet transform (TQWT). In the field
[...] Read more.
This paper analyses the complexity of multivariate electroencephalogram (EEG) signals at different frequency scales for the analysis and classification of focal and non-focal EEG signals. The proposed multivariate sub-band entropy measure is built on the tunable-Q wavelet transform (TQWT). In the field of multivariate entropy analysis, recent studies have analysed biomedical signals with a multi-level filtering approach. This approach has become a useful tool for measuring the inherent complexity of biomedical signals. However, these methods may not be well suited for quantifying the complexity of the individual multivariate sub-bands of the analysed signal. In the present study, we resolve this difficulty by employing TQWT to analyse the sub-band signals of the multivariate signal under study. It should be noted that a higher value of the Q-factor is suitable for analysing signals with an oscillatory nature, whereas a lower value is suitable for signals with non-oscillatory transients. Moreover, with an increased number of sub-bands and a higher Q-factor, reasonably good resolution can be achieved simultaneously in the high- and low-frequency regions of the considered signals. Finally, we apply multivariate fuzzy entropy (mvFE) to the multivariate sub-band signals obtained from the analysed signal. The proposed Q-based multivariate sub-band entropy has been studied on the publicly available bivariate Bern–Barcelona focal and non-focal EEG signal database to investigate the statistical significance of the proposed features in different time-segmented signals. Finally, the features are fed to random forest and least squares support vector machine (LS-SVM) classifiers to select the best classifier. Our method achieves its highest classification accuracy, 84.67%, in classifying focal and non-focal EEG signals with the LS-SVM classifier.
The proposed multivariate sub-band fuzzy entropy can also be applied to measure complexity of other multivariate biomedical signals. Full article
(This article belongs to the Special Issue Multivariate Entropy Measures and Their Applications)
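Fuzzy entropy itself is straightforward to sketch in its univariate form. The toy implementation below is a hypothetical illustration, not the paper's multivariate TQWT sub-band method: it uses the common exponential membership function with the conventional defaults m = 2 and r = 0.2 times the signal's standard deviation.

```python
import math
import random

def _phi(x, m, r):
    """Average pairwise membership exp(-d^2 / r) over baseline-removed
    template vectors of length m (d = Chebyshev distance)."""
    n = len(x) - m
    vecs = []
    for i in range(n):
        v = x[i:i + m]
        mu = sum(v) / m
        vecs.append([a - mu for a in v])  # remove each vector's own mean
    total, cnt = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = max(abs(a - b) for a, b in zip(vecs[i], vecs[j]))
            total += math.exp(-(d ** 2) / r)
            cnt += 1
    return total / cnt

def fuzzy_entropy(x, m=2, r=None):
    """FuzzyEn(m, r) = ln(phi_m) - ln(phi_{m+1}); higher = more irregular."""
    if r is None:
        mean = sum(x) / len(x)
        r = 0.2 * math.sqrt(sum((a - mean) ** 2 for a in x) / len(x))
    return math.log(_phi(x, m, r)) - math.log(_phi(x, m + 1, r))

# A regular (sinusoidal) signal scores lower than an irregular (random) one:
random.seed(1)
regular = [math.sin(0.3 * i) for i in range(300)]
noisy = [random.random() for _ in range(300)]
```

Roughly speaking, the paper's mvFE extends this idea to composite delay vectors built across all channels of the multivariate signal, applied per TQWT sub-band.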
Open AccessFeature PaperArticle The Gibbs Paradox, the Landauer Principle and the Irreversibility Associated with Tilted Observers
Entropy 2017, 19(3), 110; doi:10.3390/e19030110
Received: 10 February 2017 / Revised: 6 March 2017 / Accepted: 9 March 2017 / Published: 11 March 2017
PDF Full-text (681 KB) | HTML Full-text | XML Full-text
Abstract
It is well known that, in the context of General Relativity, some spacetimes, when described by a congruence of comoving observers, may consist of a distribution of a perfect (non–dissipative) fluid, whereas the same spacetime as seen by a “tilted” (Lorentz–boosted) congruence of
[...] Read more.
It is well known that, in the context of General Relativity, some spacetimes, when described by a congruence of comoving observers, may consist of a distribution of a perfect (non-dissipative) fluid, whereas the same spacetime as seen by a “tilted” (Lorentz-boosted) congruence of observers may exhibit the presence of dissipative processes. As we shall see, the appearance of entropy-producing processes is related to the strong dependence of entropy on the specific congruence of observers. This fact is well illustrated by the Gibbs paradox. The appearance of such dissipative processes is necessary, as required by the Landauer principle, in order to erase the different amounts of information stored by comoving observers with respect to tilted ones. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
Open AccessArticle The Two-Time Interpretation and Macroscopic Time-Reversibility
Entropy 2017, 19(3), 111; doi:10.3390/e19030111
Received: 31 December 2016 / Revised: 12 February 2017 / Accepted: 6 March 2017 / Published: 12 March 2017
PDF Full-text (342 KB) | HTML Full-text | XML Full-text
Abstract
The two-state vector formalism motivates a time-symmetric interpretation of quantum mechanics that entails a resolution of the measurement problem. We revisit a post-selection-assisted collapse model previously suggested by us, claiming that unlike the thermodynamic arrow of time, it can lead to reversible dynamics
[...] Read more.
The two-state vector formalism motivates a time-symmetric interpretation of quantum mechanics that entails a resolution of the measurement problem. We revisit a post-selection-assisted collapse model previously suggested by us, claiming that unlike the thermodynamic arrow of time, it can lead to reversible dynamics at the macroscopic level. In addition, the proposed scheme enables us to characterize the classical-quantum boundary. We discuss the limitations of this approach and its broad implications for other areas of physics. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)
Open AccessArticle Quantum Probabilities as Behavioral Probabilities
Entropy 2017, 19(3), 112; doi:10.3390/e19030112
Received: 20 December 2016 / Revised: 27 February 2017 / Accepted: 7 March 2017 / Published: 13 March 2017
PDF Full-text (836 KB) | HTML Full-text | XML Full-text
Abstract
We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are quantum objects, but rather shows that the mathematics of quantum theory is applicable to the description of human decision
[...] Read more.
We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are quantum objects, but rather shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves the deliberations of a decision maker when making a choice. In addition to evaluating the utilities of the considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions according to rules similar to those of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but also predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Open AccessArticle Recoverable Random Numbers in an Internet of Things Operating System
Entropy 2017, 19(3), 113; doi:10.3390/e19030113
Received: 29 December 2016 / Revised: 20 February 2017 / Accepted: 9 March 2017 / Published: 13 March 2017
PDF Full-text (1094 KB) | HTML Full-text | XML Full-text
Abstract
Over the past decade, several security issues with Linux Random Number Generator (LRNG) on PCs and Androids have emerged. The main problem involves the process of entropy harvesting, particularly at boot time. An entropy source in the input pool of LRNG is not
[...] Read more.
Over the past decade, several security issues with the Linux Random Number Generator (LRNG) on PCs and Android devices have emerged. The main problem involves the process of entropy harvesting, particularly at boot time. An entropy source in the input pool of the LRNG is not transferred into the non-blocking output pool if the entropy counter of the input pool is less than 192 bits out of 4096 bits. Because the entropy estimation of the LRNG is highly conservative, the process may require more than one minute to start the transfer. Furthermore, the design principle of the estimation algorithm is not only heuristic but also unclear. Recently, Google released an Internet of Things (IoT) operating system called Brillo based on the Linux kernel. We analyze the behavior of the random number generator in Brillo, which inherits that of the LRNG. We identify two features that enable the recovery of random numbers. Using these features, we demonstrate that 700 bytes of random numbers generated at boot time can be recovered with a success probability of 90% at a time complexity of 5.20 × 2^40 trials. Therefore, the entropy of these 700 bytes of random numbers is merely about 43 bits. Since the initial random numbers are supposed to be used for sensitive security parameters, such as the stack canary and key derivation, our observation can be applied to practical attacks against cryptosystems. Full article
Open AccessArticle A Model of Mechanothermodynamic Entropy in Tribology
Entropy 2017, 19(3), 115; doi:10.3390/e19030115
Received: 22 September 2016 / Revised: 22 February 2017 / Accepted: 6 March 2017 / Published: 14 March 2017
PDF Full-text (5358 KB) | HTML Full-text | XML Full-text
Abstract
A brief analysis of entropy concepts in continuum mechanics and thermodynamics is presented. The methods of accounting for friction, wear and fatigue processes in the calculation of the thermodynamic entropy are described. It is shown that these and other damage processes of solids
[...] Read more.
A brief analysis of entropy concepts in continuum mechanics and thermodynamics is presented. The methods of accounting for friction, wear and fatigue processes in the calculation of the thermodynamic entropy are described. It is shown that these and other damage processes in solids are more adequately described by tribo-fatigue entropy. It is established that the mechanothermodynamic entropy, calculated as the sum of interacting thermodynamic and tribo-fatigue entropy components, has the most general character. Examples of the application of tribo-fatigue and mechanothermodynamic entropies to the practical analysis of wear and fatigue processes are given. Full article
(This article belongs to the Special Issue Entropy Application in Tribology)
Open AccessArticle Fluctuation-Driven Transport in Biological Nanopores. A 3D Poisson–Nernst–Planck Study
Entropy 2017, 19(3), 116; doi:10.3390/e19030116
Received: 28 January 2017 / Revised: 6 March 2017 / Accepted: 9 March 2017 / Published: 14 March 2017
PDF Full-text (3458 KB) | HTML Full-text | XML Full-text
Abstract
Living systems display a variety of situations in which non-equilibrium fluctuations couple to certain protein functions, yielding astonishing results. Here we study the bacterial channel OmpF under conditions similar to those met in vivo, where acid resistance mechanisms are known to yield oscillations
[...] Read more.
Living systems display a variety of situations in which non-equilibrium fluctuations couple to certain protein functions, yielding astonishing results. Here we study the bacterial channel OmpF under conditions similar to those met in vivo, where acid resistance mechanisms are known to yield oscillations in the electric potential across the cell membrane. We use a three-dimensional structure-based theoretical approach to assess the possibility of obtaining fluctuation-driven transport. Our calculations show that remarkably high voltages would be necessary to observe the actual transport of ions against their concentration gradient. The reasons behind this are the mild selectivity of this bacterial pore and the relatively low efficiencies of the oscillating signals characteristic of cell membranes (random telegraph noise and thermal noise). Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Open AccessArticle Thermoeconomic Optimization of an Irreversible Novikov Plant Model under Different Regimes of Performance
Entropy 2017, 19(3), 118; doi:10.3390/e19030118
Received: 1 February 2017 / Revised: 9 March 2017 / Accepted: 10 March 2017 / Published: 15 March 2017
PDF Full-text (2904 KB) | HTML Full-text | XML Full-text
Abstract
The so-called Novikov power plant model has been widely used to represent some actual power plants, such as nuclear electric power generators. In the present work, a thermo-economic study of a Novikov power plant model is presented under three different regimes of performance:
[...] Read more.
The so-called Novikov power plant model has been widely used to represent some actual power plants, such as nuclear electric power generators. In the present work, a thermo-economic study of a Novikov power plant model is presented under three different regimes of performance: maximum power (MP), maximum ecological function (ME) and maximum efficient power (EP). In this study, different heat transfer laws are used: Newton's law of cooling, the Stefan–Boltzmann radiation law, the Dulong–Petit law and another phenomenological heat transfer law. For the thermoeconomic optimization of power plant models, a benefit function defined as the quotient of an objective function and the total economic costs is commonly employed. Usually, the total costs take into account two contributions: a cost related to the investment and another stemming from fuel consumption. In this work, a new cost associated with the maintenance of the power plant is also considered. With these new total costs, it is shown that under the maximum ecological function regime the plant improves its economic and energetic performance in comparison with the other two regimes. The methodology used in this paper is within the context of finite-time thermodynamics. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Variational Principle for Relative Tail Pressure
Entropy 2017, 19(3), 120; doi:10.3390/e19030120
Received: 4 January 2017 / Revised: 11 March 2017 / Accepted: 14 March 2017 / Published: 15 March 2017
PDF Full-text (300 KB) | HTML Full-text | XML Full-text
Abstract We introduce the relative tail pressure to establish a variational principle for continuous bundle random dynamical systems. We also show that the relative tail pressure is conserved by the principal extension. Full article
(This article belongs to the Special Issue Entropic Properties of Dynamical Systems)
Open AccessArticle Identity Based Generalized Signcryption Scheme in the Standard Model
Entropy 2017, 19(3), 121; doi:10.3390/e19030121
Received: 10 January 2017 / Revised: 7 March 2017 / Accepted: 13 March 2017 / Published: 17 March 2017
PDF Full-text (282 KB) | HTML Full-text | XML Full-text
Abstract
Generalized signcryption (GSC) can adaptively work as an encryption scheme, a signature scheme or a signcryption scheme with only one algorithm. It is more suitable for the storage constrained setting. In this paper, motivated by Paterson–Schuldt’s scheme, based on bilinear pairing, we first
[...] Read more.
Generalized signcryption (GSC) can adaptively work as an encryption scheme, a signature scheme or a signcryption scheme using only one algorithm. It is more suitable for the storage-constrained setting. In this paper, motivated by Paterson–Schuldt's scheme and based on bilinear pairing, we propose an identity based generalized signcryption (IDGSC) scheme in the standard model. To the best of our knowledge, it is the first such scheme that is proven secure in the standard model. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle On Hölder Projective Divergences
Entropy 2017, 19(3), 122; doi:10.3390/e19030122
Received: 20 January 2017 / Revised: 8 March 2017 / Accepted: 10 March 2017 / Published: 16 March 2017
PDF Full-text (6948 KB) | HTML Full-text | XML Full-text
Abstract
We describe a framework to build distances by measuring the tightness of inequalities and introduce the notion of proper statistical divergences and improper pseudo-divergences. We then consider the Hölder ordinary and reverse inequalities and present two novel classes of Hölder divergences and pseudo-divergences
[...] Read more.
We describe a framework to build distances by measuring the tightness of inequalities and introduce the notion of proper statistical divergences and improper pseudo-divergences. We then consider the Hölder ordinary and reverse inequalities and present two novel classes of Hölder divergences and pseudo-divergences that both encapsulate the special case of the Cauchy–Schwarz divergence. We report closed-form formulas for those statistical dissimilarities when considering distributions belonging to the same exponential family provided that the natural parameter space is a cone (e.g., multivariate Gaussians) or affine (e.g., categorical distributions). Those new classes of Hölder distances are invariant to rescaling and thus do not require distributions to be normalized. Finally, we show how to compute statistical Hölder centroids with respect to those divergences and carry out center-based clustering toy experiments on a set of Gaussian distributions which demonstrate empirically that symmetrized Hölder divergences outperform the symmetric Cauchy–Schwarz divergence. Full article
(This article belongs to the Special Issue Information Geometry II)
Open AccessArticle Friction, Free Axes of Rotation and Entropy
Entropy 2017, 19(3), 123; doi:10.3390/e19030123
Received: 16 February 2017 / Revised: 13 March 2017 / Accepted: 14 March 2017 / Published: 17 March 2017
PDF Full-text (3239 KB) | HTML Full-text | XML Full-text
Abstract
Friction forces acting on rotators may promote their alignment and therefore eliminate degrees of freedom in their movement. The alignment of rotators by friction force was shown by experiments performed with different spinners, demonstrating how friction generates negentropy in a system of rotators.
[...] Read more.
Friction forces acting on rotators may promote their alignment and therefore eliminate degrees of freedom in their movement. The alignment of rotators by friction force was shown by experiments performed with different spinners, demonstrating how friction generates negentropy in a system of rotators. A gas of rigid rotators influenced by friction force is considered. The orientational negentropy generated by a friction force was estimated with the Sackur-Tetrode equation. The minimal change in total entropy of a system of rotators, corresponding to their eventual alignment, decreases with temperature. The reported effect may be of primary importance for the phase equilibrium and motion of ubiquitous colloidal and granular systems. Full article
Open AccessArticle Witnessing Multipartite Entanglement by Detecting Asymmetry
Entropy 2017, 19(3), 124; doi:10.3390/e19030124
Received: 4 February 2017 / Revised: 8 March 2017 / Accepted: 12 March 2017 / Published: 16 March 2017
Cited by 1 | PDF Full-text (817 KB) | HTML Full-text | XML Full-text
Abstract
The characterization of quantum coherence in the context of quantum information theory and its interplay with quantum correlations is currently the subject of intense study. Coherence in a Hamiltonian eigenbasis yields asymmetry, the ability of a quantum system to break a dynamical symmetry generated by the Hamiltonian. Here we propose an experimental strategy to witness multipartite entanglement in many-body systems by evaluating the asymmetry with respect to an additive Hamiltonian. We test our scheme by simulating asymmetry and entanglement detection in a three-qubit Greenberger–Horne–Zeilinger (GHZ) diagonal state. Full article
Open AccessArticle Packer Detection for Multi-Layer Executables Using Entropy Analysis
Entropy 2017, 19(3), 125; doi:10.3390/e19030125
Received: 31 January 2017 / Revised: 9 March 2017 / Accepted: 13 March 2017 / Published: 16 March 2017
PDF Full-text (446 KB) | HTML Full-text | XML Full-text
Abstract
Packing algorithms are broadly used to evade anti-malware systems, and the proportion of packed malware has been growing rapidly. However, only a few studies have addressed the detection of various types of packing algorithms in a systematic way. Following this observation, we elaborate a method to classify the packing algorithm of a given executable into one of three categories: single-layer packing, re-packing, or multi-layer packing. We convert the entropy values of the executable file loaded into memory into symbolic representations using SAX (Symbolic Aggregate Approximation). Based on experiments with 2196 programs and 19 packing algorithms, we find that the precision (97.7%), accuracy (97.5%), and recall (96.8%) of our method are high enough to confirm that entropy analysis is applicable to identifying packing algorithms. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
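The entropy-then-symbolization pipeline can be sketched for raw byte data. This is an illustrative stand-in, not the authors' implementation: the block size, the fixed breakpoints, and the four-symbol alphabet are our assumptions, and real SAX derives its breakpoints from a Gaussian fit rather than fixed values:

```python
import math
from collections import Counter

def block_entropy(data, block_size=256):
    """Shannon entropy (bits per byte, in [0, 8]) of consecutive
    non-overlapping blocks of a byte string."""
    entropies = []
    for i in range(0, len(data) - block_size + 1, block_size):
        counts = Counter(data[i:i + block_size])
        h = -sum((c / block_size) * math.log2(c / block_size)
                 for c in counts.values())
        entropies.append(h)
    return entropies

def symbolize(entropies, breakpoints=(2.0, 4.0, 6.0)):
    """Map each entropy value to a symbol a/b/c/d -- a simplified
    fixed-breakpoint stand-in for the SAX discretization."""
    symbols = "abcd"
    return "".join(symbols[sum(h >= b for b in breakpoints)]
                   for h in entropies)
```

The resulting symbol string (e.g. `"aadd..."`) is what a classifier would compare against per-packer reference patterns; packed regions read as high-entropy symbols, unpacking stubs as low-entropy ones.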
Open AccessArticle Nonequilibrium Thermodynamics and Scale Invariance
Entropy 2017, 19(3), 126; doi:10.3390/e19030126
Received: 30 January 2017 / Revised: 7 March 2017 / Accepted: 14 March 2017 / Published: 16 March 2017
PDF Full-text (244 KB) | HTML Full-text | XML Full-text
Abstract
A variant of continuous nonequilibrium thermodynamic theory based on the postulate of the scale invariance of the local relation between generalized fluxes and forces is proposed here. This single postulate replaces the assumptions on local equilibrium and on the known relation between thermodynamic fluxes and forces, which are widely used in classical nonequilibrium thermodynamics. It is shown here that such a modification not only makes it possible to deductively obtain the main results of classical linear nonequilibrium thermodynamics, but also provides evidence for a number of statements for a nonlinear case (the maximum entropy production principle, the macroscopic reversibility principle, and generalized reciprocity relations) that are under discussion in the literature. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Fractional Jensen–Shannon Analysis of the Scientific Output of Researchers in Fractional Calculus
Entropy 2017, 19(3), 127; doi:10.3390/e19030127
Received: 9 March 2017 / Revised: 14 March 2017 / Accepted: 15 March 2017 / Published: 17 March 2017
Cited by 1 | PDF Full-text (359 KB) | HTML Full-text | XML Full-text
Abstract
This paper analyses the citation profiles of researchers in fractional calculus. Different metrics are used to quantify the dissimilarities between the data, namely the Canberra distance, and the classical and the generalized (fractional) Jensen–Shannon divergence. The information is then visualized by means of multidimensional scaling and hierarchical clustering. The mathematical tools and metrics allow for direct comparison and visualization of researchers based on their relative positioning and on patterns displayed in two- or three-dimensional maps. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
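Of the metrics named, the Canberra distance and the classical Jensen–Shannon divergence admit compact definitions; the generalized fractional JS variant used in the paper is not sketched here, and the function names are ours:

```python
import math

def canberra(x, y):
    """Canberra distance between two non-negative vectors; terms with
    a zero denominator contribute nothing."""
    return sum(abs(a - b) / (a + b) for a, b in zip(x, y) if a + b > 0)

def jensen_shannon(p, q):
    """Classical Jensen-Shannon divergence between two normalized
    distributions (base-2 logs, so the value lies in [0, 1])."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Applied to normalized citation profiles, identical researchers get divergence 0 and fully disjoint profiles get 1, which gives the dissimilarity matrix fed into multidimensional scaling or hierarchical clustering.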
Open AccessArticle Pairs Generating as a Consequence of the Fractal Entropy: Theory and Applications
Entropy 2017, 19(3), 128; doi:10.3390/e19030128
Received: 31 January 2017 / Revised: 11 March 2017 / Accepted: 15 March 2017 / Published: 17 March 2017
PDF Full-text (890 KB) | HTML Full-text | XML Full-text
Abstract
In classical concepts, theoretical models are built assuming that the dynamics of the complex system’s structural units occur on continuous and differentiable motion variables. In reality, the dynamics of natural complex systems are much more complicated. These difficulties can be overcome in a complementary approach, using the fractal concept and a corresponding non-differentiable theoretical model, such as the scale relativity theory or the extended scale relativity theory. Thus, using the latter theory, fractal entropy through non-differentiable Lie groups was established and, moreover, the pair-generating mechanisms through fractal entanglement states were explained. Our model has implications for the dynamics of biological structures, in the form of the “chameleon-like” behavior of cholesterol. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
Open AccessArticle Distance-Based Lempel–Ziv Complexity for the Analysis of Electroencephalograms in Patients with Alzheimer’s Disease
Entropy 2017, 19(3), 129; doi:10.3390/e19030129
Received: 9 February 2017 / Revised: 13 March 2017 / Accepted: 15 March 2017 / Published: 17 March 2017
PDF Full-text (889 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of electroencephalograms (EEGs) of patients with Alzheimer’s disease (AD) could contribute to the diagnosis of this dementia. In this study, a new non-linear signal processing metric, distance-based Lempel–Ziv complexity (dLZC), is introduced to characterise changes between pairs of electrodes in EEGs in AD. When complexity in each signal arises from different sub-sequences, dLZC would be greater than when similar sub-sequences are present in each signal. EEGs from 11 AD patients and 11 age-matched control subjects were analysed. The dLZC values for AD patients were lower than for control subjects for most electrode pairs, with statistically significant differences (p < 0.01, Student’s t-test) in 17 electrode pairs in the distant left, local posterior left, and interhemispheric regions. Maximum diagnostic accuracies with leave-one-out cross-validation were 77.27% for subject-based classification and 78.25% for epoch-based classification. These findings suggest not only that EEGs from AD patients are less complex than those from controls, but also that the richness of the information contained in pairs of EEGs from patients is lower than in age-matched controls. The analysis of EEGs in AD with dLZC may increase the insight into brain dysfunction, providing complementary information to that obtained with other complexity and synchrony methods. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
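The single-signal Lempel–Ziv complexity on which dLZC builds counts phrases in the LZ76 parsing of a symbol sequence. A minimal sketch of that building block (the paper's distance-based pairwise extension is not reproduced here):

```python
def lz76_complexity(s):
    """Number of distinct phrases in the LZ76 parsing of a symbol
    sequence: each new phrase is extended while it still occurs as a
    substring of the already-seen prefix (overlap allowed)."""
    n = len(s)
    if n == 0:
        return 0
    c = 1          # phrase counter; the first symbol is a phrase
    i = 1          # start of the current phrase
    while i < n:
        k = 1      # length of the current candidate phrase
        # extend while s[i:i+k] occurs somewhere in s[0:i+k-1]
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c
```

For the textbook sequence "0001101001000101" this parsing yields 6 phrases (0 · 001 · 10 · 100 · 1000 · 101); a random sequence of the same length would yield more, and that gap is what complexity-based EEG markers exploit.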
Open AccessArticle Quantitative EEG Markers of Entropy and Auto Mutual Information in Relation to MMSE Scores of Probable Alzheimer’s Disease Patients
Entropy 2017, 19(3), 130; doi:10.3390/e19030130
Received: 17 December 2016 / Revised: 28 February 2017 / Accepted: 3 March 2017 / Published: 17 March 2017
PDF Full-text (4315 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Analysis of nonlinear quantitative EEG (qEEG) markers describing signal complexity in relation to severity of Alzheimer’s disease (AD) was the focal point of this study. In this study, 79 patients diagnosed with probable AD were recruited from the multi-centric Prospective Dementia Database Austria (PRODEM). EEG recordings were made with the subjects seated in an upright position in a resting state with their eyes closed. Models of linear regression explaining disease severity, expressed in Mini Mental State Examination (MMSE) scores, were analyzed with the nonlinear qEEG markers of auto mutual information (AMI), Shannon entropy (ShE), Tsallis entropy (TsE), multiscale entropy (MsE), or spectral entropy (SpE), with age, duration of illness, and years of education as co-predictors. Linear regression models with AMI were significant for all electrode sites and clusters, where R² is 0.46 at the electrode site C3, 0.43 at Cz, F3, and the central region, and 0.42 at the left region. MsE also had significant models at C3 with R² > 0.40 at scales τ = 5 and τ = 6. ShE and TsE also have significant models at T7 and F7 with R² > 0.30. Reductions in complexity, calculated by AMI, SpE, and MsE, were observed as the MMSE score decreased. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
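Of the markers listed, auto mutual information (AMI) has a compact histogram estimate: the mutual information between a signal and a lagged copy of itself. A rough sketch with illustrative defaults (the bin count and this simple estimator are our assumptions, not the PRODEM processing pipeline):

```python
import math
from collections import Counter

def auto_mutual_information(x, lag, bins=8):
    """Histogram estimate of I(x_t ; x_{t+lag}) in bits."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0          # guard constant signals
    d = [min(int((v - lo) / width), bins - 1) for v in x]
    pairs = list(zip(d[:-lag], d[lag:]))
    n = len(pairs)
    joint = Counter(pairs)
    p1 = Counter(a for a, _ in pairs)        # marginal of x_t
    p2 = Counter(b for _, b in pairs)        # marginal of x_{t+lag}
    return sum((c / n) * math.log2((c / n) / ((p1[a] / n) * (p2[b] / n)))
               for (a, b), c in joint.items())
```

A strongly predictable signal keeps high AMI across lags, while a complex one decays quickly; the decay of AMI with lag is the kind of complexity reduction the study relates to MMSE scores.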
Open AccessArticle Information Submanifold Based on SPD Matrices and Its Applications to Sensor Networks
Entropy 2017, 19(3), 131; doi:10.3390/e19030131
Received: 30 December 2016 / Revised: 1 March 2017 / Accepted: 16 March 2017 / Published: 17 March 2017
PDF Full-text (900 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, firstly, the manifold PD(n) consisting of all n×n symmetric positive-definite matrices is introduced based on matrix information geometry; secondly, the geometrical structures of the information submanifold of PD(n) are presented, including metric, geodesic and geodesic distance; thirdly, the information resolution with sensor networks is presented by three classical measurement models based on the information submanifold; finally, bearing-only tracking by a single sensor is introduced by the Fisher information matrix. The preliminary analysis results introduced in this paper indicate that the information submanifold is able to offer consistent and more comprehensive means to understand and solve sensor network problems for target resolution and tracking, which are not easily handled by some conventional analysis methods. Full article
(This article belongs to the Special Issue Information Geometry II)
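For the affine-invariant metric on PD(n), the geodesic distance between A and B is sqrt(Σᵢ log² λᵢ), where the λᵢ are the (real, positive) eigenvalues of A⁻¹B. A dependency-free 2×2 sketch of that distance under this assumption (the paper's submanifold constructions are more general):

```python
import math

def spd_geodesic_distance_2x2(A, B):
    """Affine-invariant geodesic distance on PD(2):
    d(A, B) = sqrt(log^2(l1) + log^2(l2)), with l1, l2 the
    eigenvalues of A^{-1} B (real and positive for SPD inputs)."""
    (a, b), (c, d) = A
    det_a = a * d - b * c
    inv_a = [[d / det_a, -b / det_a], [-c / det_a, a / det_a]]
    # M = A^{-1} B
    m = [[sum(inv_a[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return math.sqrt(math.log(lam1) ** 2 + math.log(lam2) ** 2)
```

This distance is invariant under congruence transformations A ↦ GᵀAG, which is what makes it a natural dissimilarity between covariance-like quantities in sensor networks.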
Open AccessArticle Discrepancies between Conventional Multiscale Entropy and Modified Short-Time Multiscale Entropy of Photoplethysmographic Pulse Signals in Middle- and Old-Aged Individuals with or without Diabetes
Entropy 2017, 19(3), 132; doi:10.3390/e19030132
Received: 2 February 2017 / Revised: 6 March 2017 / Accepted: 16 March 2017 / Published: 18 March 2017
Cited by 1 | PDF Full-text (1498 KB) | HTML Full-text | XML Full-text
Abstract
Multiscale entropy (MSE) of physiological signals may reflect cardiovascular health in diabetes. The classic MSE (cMSE) algorithm requires more than 750 signals for the calculations. The modified short-time MSE (sMSE) may yield outcomes inconsistent with the cMSE at large time scales and in a disease status. Therefore, we compared the cMSE of 1500 (cMSE1500) and 1000 (cMSE1000) consecutive photoplethysmographic (PPG) pulse amplitudes with the sMSE of 500 PPG (sMSE500) pulse amplitudes of bilateral fingertips among middle- to old-aged individuals with or without type 2 diabetes. We discovered that cMSE1500 had the smallest value across scale factors 1–10, followed by cMSE1000, and then sMSE500 in both hands. The cMSE1500, cMSE1000 and sMSE500 did not differ at any scale factor in both hands of persons without diabetes or in the dominant hand of those with diabetes. In contrast, sMSE500 differed at all scales 1–10 in the non-dominant hand of persons with diabetes. In conclusion, autonomic dysfunction, prevalent in the non-dominant hand, which has low local physical activity in persons with diabetes, might be evaluated imprecisely by the sMSE; therefore, using a larger number of PPG signals for the cMSE is preferred in such a situation. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics II)
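The cMSE/sMSE comparison rests on the same two steps: coarse-grain the series at each scale factor, then compute sample entropy of the coarse-grained series. A compact sketch with conventional defaults (m = 2, r = 0.2; note r is used here as an absolute tolerance, whereas practice usually scales it by the signal's standard deviation):

```python
import math

def coarse_grain(x, scale):
    """Non-overlapping window averages used by multiscale entropy."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the ratio of (m+1)-point to m-point
    template matches (self-matches excluded), tolerance r."""
    def count_matches(k):
        templates = [x[i:i + k] for i in range(len(x) - k + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i],
                                                 templates[j])) <= r:
                    c += 1
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b)

def multiscale_entropy(x, scales, m=2, r=0.2):
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

The cMSE-vs-sMSE discrepancy in the paper comes from how few coarse-grained points survive at large scales: with 500 pulses and scale 10 only 50 points remain, which is why the authors prefer longer recordings for the cMSE.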
Open AccessArticle Spectral Entropy Parameters during Rapid Ventricular Pacing for Transcatheter Aortic Valve Implantation
Entropy 2017, 19(3), 133; doi:10.3390/e19030133
Received: 16 December 2016 / Revised: 3 March 2017 / Accepted: 15 March 2017 / Published: 20 March 2017
PDF Full-text (907 KB) | HTML Full-text | XML Full-text
Abstract
The time-frequency balanced spectral entropy of the EEG is a monitoring technique measuring the level of hypnosis during general anesthesia. Two components of spectral entropy are calculated: state entropy (SE) and response entropy (RE). Transcatheter aortic valve implantation (TAVI) is a less invasive treatment for patients suffering from symptomatic aortic stenosis with contraindications for open heart surgery. The goal of hemodynamic management during the procedure is to achieve hemodynamic stability with exact blood pressure control and use of rapid ventricular pacing (RVP), which results in severe hypotension. The objective of this study was to examine how spectral entropy values respond to RVP and other critical events during the TAVI procedure. Twenty-one patients undergoing general anesthesia for TAVI were evaluated. RVP was used twice during the procedure at a rate of 185 ± 9/min, with durations of 16 ± 4 s (range 8–22 s) and 24 ± 6 s (range 18–39 s). The systolic blood pressure during RVP was under 50 ± 5 mmHg. SE values declined significantly during RVP, from 28 ± 13 to 23 ± 13 (p < 0.003) and from 29 ± 12 to 24 ± 10 (p < 0.001). The corresponding values for RE were 29 ± 13 vs. 24 ± 13 (p < 0.006) and 30 ± 12 vs. 25 ± 10 (p < 0.001). Both SE and RE values returned to pre-RVP levels after 1 min. Ultra-short hypotension during RVP changed the spectral entropy parameters; however, these indices rapidly reverted to their pre-RVP values. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
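A generic normalized spectral entropy can be computed from a signal's power spectrum. This is an illustrative textbook definition, not the proprietary SE/RE computation of the anesthesia monitor (which fixes specific frequency bands and time windows for each component):

```python
import cmath
import math

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, scaled to
    [0, 1] by log(number of frequency bins): near 0 for a pure tone,
    near 1 for a flat spectrum."""
    n = len(x)
    # discrete Fourier transform (O(n^2) for clarity, not speed);
    # the DC bin k=0 is skipped
    spectrum = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
                for k in range(1, n // 2 + 1)]
    power = [abs(s) ** 2 for s in spectrum]
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(power))
```

Deepening hypnosis concentrates EEG power at low frequencies, pushing this index down — the direction of the SE/RE decline the study observed during RVP-induced hypotension.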
Open AccessArticle Permutation Entropy: New Ideas and Challenges
Entropy 2017, 19(3), 134; doi:10.3390/e19030134
Received: 17 February 2017 / Revised: 17 March 2017 / Accepted: 17 March 2017 / Published: 21 March 2017
Cited by 1 | PDF Full-text (2828 KB) | HTML Full-text | XML Full-text
Abstract
Over recent years, some new variants of permutation entropy have been introduced and applied to EEG analysis, including a conditional variant and variants that use additional metric information or are based on entropies other than the Shannon entropy. In some situations, it is not completely clear what kind of information the new measures and their algorithmic implementations provide. We discuss the new developments and illustrate them for EEG data. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
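Basic permutation entropy, the baseline all these variants extend, fits in a few lines: count the ordinal patterns of short windows and take the Shannon entropy of their distribution. A minimal sketch (positional tie-breaking and normalization by log(order!) are one common convention, not the only one):

```python
import math

def permutation_entropy(x, order=3, normalize=True):
    """Shannon entropy (bits) of the distribution of ordinal patterns
    of length `order` in the series; ties broken by position."""
    counts = {}
    for i in range(len(x) - order + 1):
        window = x[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda j: window[j]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(math.factorial(order)) if normalize else h
```

A monotone series uses a single pattern (entropy 0), while white noise spreads counts over all order! patterns (entropy near 1) — the metric and conditional variants discussed in the paper refine what happens between these extremes.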
Open AccessArticle Structure and Dynamics of Water at Carbon-Based Interfaces
Entropy 2017, 19(3), 135; doi:10.3390/e19030135
Received: 9 March 2017 / Revised: 19 March 2017 / Accepted: 19 March 2017 / Published: 21 March 2017
PDF Full-text (1619 KB) | HTML Full-text | XML Full-text
Abstract
Water structure and dynamics are affected by the presence of a nearby interface. Here, first we review recent results from molecular dynamics simulations about the effect of different carbon-based materials, including armchair carbon nanotubes and a variety of graphene sheets—flat and with corrugation—on water structure and dynamics. We discuss the calculations of binding energies, hydrogen bond distributions, water diffusion coefficients and their relation to surface geometries at different thermodynamic conditions. Next, we present new results on the crystallization and dynamics of water in a rigid graphene sieve. In particular, we show that the diffusion of water confined between parallel walls depends on the plate distance in a non-monotonic way and is related to the water structuring, crystallization, re-melting and evaporation for decreasing inter-plate distance. Our results could be relevant in those applications where water is in contact with nanostructured carbon materials at ambient or cryogenic temperatures, as in man-made superhydrophobic materials or filtration membranes, or in techniques that take advantage of hydrated graphene interfaces, as in aqueous electron cryomicroscopy for the analysis of proteins adsorbed on graphene. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Review

Jump to: Research, Other

Open AccessReview Quantum Theory from Rules on Information Acquisition
Entropy 2017, 19(3), 98; doi:10.3390/e19030098
Received: 23 January 2017 / Accepted: 17 February 2017 / Published: 3 March 2017
PDF Full-text (1088 KB) | HTML Full-text | XML Full-text
Abstract
We summarize a recent reconstruction of the quantum theory of qubits from rules constraining an observer’s acquisition of information about physical systems. This review is accessible and fairly self-contained, focusing on the main ideas and results and not the technical details. The reconstruction offers an informational explanation for the architecture of the theory and specifically for its correlation structure. In particular, it explains entanglement, monogamy and non-locality compellingly from limited accessible information and complementarity. As a by-product, it also unravels new ‘conserved informational charges’ from complementarity relations that characterize the unitary group and the set of pure states. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Other

Jump to: Research, Review

Open AccessLetter Specific Emitter Identification Based on the Natural Measure
Entropy 2017, 19(3), 117; doi:10.3390/e19030117
Received: 15 December 2016 / Revised: 5 March 2017 / Accepted: 9 March 2017 / Published: 15 March 2017
PDF Full-text (353 KB) | HTML Full-text | XML Full-text
Abstract
Specific emitter identification (SEI) techniques are often used in civilian and military spectrum-management operations, and they are also applied to support the security and authentication of wireless communication. In this letter, a new SEI method based on the natural measure of the one-dimensional component of the chaotic system is proposed. We find that the natural measures of the one-dimensional components of higher dimensional systems exist and that they are quite diverse for different systems. Based on this principle, the natural measure is used as an RF fingerprint in this letter. The natural measure can solve the problems caused by a small amount of data and a low sample rate. The Kullback–Leibler divergence is used to quantify the difference between the natural measures obtained from diverse emitters and to classify them. Data obtained from a real application are used to test the validity of the proposed method. Experimental results show that the proposed method is not only easy to operate, but also quite effective, even when the amount of data is small and the sample rate is low. Full article
(This article belongs to the Section Information Theory)
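The classification step — matching a measured natural-measure histogram against stored reference fingerprints by Kullback–Leibler divergence — can be sketched as follows. The smoothing constant, function names, and reference layout are our assumptions for illustration, not the letter's exact procedure:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two histograms; each bin is
    smoothed by eps so empty reference bins stay finite, and both
    histograms are renormalized after smoothing."""
    ps = [v + eps for v in p]
    qs = [v + eps for v in q]
    sp, sq = sum(ps), sum(qs)
    return sum((a / sp) * math.log((a / sp) / (b / sq))
               for a, b in zip(ps, qs))

def classify(fingerprint, references):
    """Assign the measured fingerprint to the reference emitter whose
    stored histogram has the smallest KL divergence from it."""
    return min(references,
               key=lambda name: kl_divergence(fingerprint, references[name]))
```

Because the natural-measure histogram stabilizes with relatively few samples, this nearest-reference rule remains usable at the low data volumes and sample rates the letter targets.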
Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18