Table of Contents

Entropy, Volume 21, Issue 8 (August 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: Calculating the entropy of a liquid is problematic because of its strong interactions, many degrees [...]
Displaying articles 1-97
Open Access Feature Paper Article
Modelling Thermally Induced Non-Equilibrium Gas Flows by Coupling Kinetic and Extended Thermodynamic Methods
Entropy 2019, 21(8), 816; https://doi.org/10.3390/e21080816
Received: 25 July 2019 / Revised: 14 August 2019 / Accepted: 15 August 2019 / Published: 20 August 2019
Abstract
Thermally induced non-equilibrium gas flows have been simulated in the present study by coupling kinetic and extended thermodynamic methods. Three different types of thermally induced gas flows, including temperature-discontinuity- and temperature-gradient-induced flows and radiometric flow, have been explored in the transition regime. The temperature-discontinuity-induced flow case has shown that as the Knudsen number increases, the regularised 26 (R26) moment equation system gradually loses its accuracy and validity. A coupled macro- and microscopic approach is employed to overcome this problem. The R26 moment equations are used at the macroscopic level for the bulk flow region, while the kinetic equation associated with the discrete velocity method (DVM) is applied to describe the gas close to the wall at the microscopic level, which yields a hybrid DVM/R26 approach. The numerical results have shown that the hybrid DVM/R26 method can be faithfully used for thermally induced non-equilibrium flows. The proposed scheme not only improves the accuracy of the results in comparison with the R26 equations, but also extends their capability to a wider range of Knudsen numbers. In addition, the hybrid scheme reduces the computational memory and time cost compared to the DVM.
(This article belongs to the Special Issue Thermodynamics of Non-Equilibrium Gas Flows)
Open Access Article
Suggested Integral Analysis for Chaos-Based Image Cryptosystems
Entropy 2019, 21(8), 815; https://doi.org/10.3390/e21080815
Received: 2 July 2019 / Revised: 1 August 2019 / Accepted: 8 August 2019 / Published: 20 August 2019
Abstract
Currently, chaos-based cryptosystems are being proposed in the literature to provide confidentiality for digital images, since the diffusion effect in the Advanced Encryption Standard (AES) algorithm is weak. According to the National Institute of Standards and Technology (NIST), security is the most important challenge to assess in cryptosystems, followed by cost and performance, and finally algorithm and implementation. Recent chaos-based image encryption algorithms present only basic security analysis, which could make them insecure for some applications. In this paper, we suggest an integral analysis framework covering comprehensive security analysis, cost and performance, and algorithm and implementation for chaos-based image cryptosystems. The proposed guideline, based on 20 analysis points, can assist new cryptographic designers in presenting an integral analysis of new algorithms. Future comparisons of new schemes can then be more consistent in terms of security and efficiency. In addition, we present aspects regarding digital chaos implementation, chaos validation, and key definition to improve the security of the overall cryptosystem. The suggested guideline does not guarantee security, and it does not intend to limit the liberty to implement new analyses. However, it provides for the first time in the literature a solid basis for integral analysis of chaos-based image cryptosystems as an effective approach to improve security.
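Information entropy of the cipher image is one of the standard analysis points in evaluations of this kind. As a minimal illustrative sketch (not the authors' 20-point framework), the Shannon entropy of a byte sequence approaches 8 bits/byte for well-encrypted data:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte sequence, in bits per byte (max 8.0)."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# A well-encrypted image should approach the 8 bits/byte maximum,
# while constant or highly structured plaintext scores much lower.
print(byte_entropy(b"\x00" * 4096))     # 0.0 (constant data)
print(byte_entropy(bytes(range(256))))  # 8.0 (uniform byte histogram)
```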
Open Access Article
Turbo Decoder Design based on an LUT-Normalized Log-MAP Algorithm
Entropy 2019, 21(8), 814; https://doi.org/10.3390/e21080814
Received: 14 June 2019 / Revised: 12 August 2019 / Accepted: 14 August 2019 / Published: 20 August 2019
Abstract
Turbo codes have been widely used in wireless communication systems due to their good error correction performance. Under time division long term evolution (TD-LTE) of the 3rd Generation Partnership Project (3GPP) wireless communication standard, the high-complexity Log maximum a posteriori (Log-MAP) decoding algorithm is usually approximated as a lookup-table Log-MAP (LUT-Log-MAP) algorithm or a Max-Log-MAP algorithm, but these two approximations suffer from high complexity and high bit error rate, respectively. In this paper, we propose a normalized Log-MAP (Nor-Log-MAP) decoding algorithm in which the function max* is approximated by a fixed normalization factor multiplied by the max function. Combining the Nor-Log-MAP algorithm with the LUT-Log-MAP algorithm creates a new LUT-Nor-Log-MAP algorithm whose decoding performance is close to that of the LUT-Log-MAP algorithm. Based on the decoding method of the Nor-Log-MAP algorithm, we also put forward a normalization functional unit (NFU) for a soft-input soft-output (SISO) decoder computing unit. The simulation results show that the LUT-Nor-Log-MAP algorithm can save about 2.1% of logic resources compared with the LUT-Log-MAP algorithm. Compared with the Max-Log-MAP algorithm, the LUT-Nor-Log-MAP algorithm shows a gain of 0.25–0.5 dB in decoding performance. Using the Cyclone IV platform, the designed Turbo decoder can achieve a throughput of 36 Mbit/s at a maximum clock frequency of 44 MHz.
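The variants named above differ only in how they approximate the max* (Jacobian logarithm) operation. A sketch of the four options follows; the LUT spacing and the normalization factor k = 1.05 are illustrative assumptions, not the paper's derived values:

```python
import math

def max_star(a: float, b: float) -> float:
    """Exact Jacobian logarithm: max*(a, b) = ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a: float, b: float) -> float:
    """Max-Log-MAP: drop the correction term entirely (lowest complexity)."""
    return max(a, b)

# 8-entry lookup table for the correction term ln(1 + e^-|a-b|) on [0, 2)
LUT = [math.log1p(math.exp(-(i * 0.25))) for i in range(8)]

def lut_log(a: float, b: float) -> float:
    """LUT-Log-MAP: quantize |a - b| and read the correction from a table."""
    d = abs(a - b)
    corr = LUT[int(d / 0.25)] if d < 2.0 else 0.0
    return max(a, b) + corr

def nor_log(a: float, b: float, k: float = 1.05) -> float:
    """Nor-Log-MAP (as described in the abstract): a fixed factor k times
    max(a, b). k = 1.05 is an illustrative value, not the paper's factor."""
    return k * max(a, b)
```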
Open Access Article
Enhanced Electron Scattering upon Ion Relocation in BaVS3 at 69 K
Entropy 2019, 21(8), 813; https://doi.org/10.3390/e21080813
Received: 25 July 2019 / Revised: 9 August 2019 / Accepted: 16 August 2019 / Published: 20 August 2019
Abstract
The present study deals with the anomalous heat capacity peak and thermal conductivity of BaVS3 near the metal-insulator transition at 69 K. The transition is related to a structural transition from an orthorhombic to a monoclinic phase. Heat capacity measurements at this temperature exhibit a significant and relatively broad peak, which is also sample dependent. We calculate the entropy increase during the structural transition and show that the additional entropy is caused by enhanced electron scattering as a result of the structural reorientation of the nuclei. Within the model it is possible to explain quantitatively the observed peak-like structure in the heat capacity and in the thermal conductivity.
(This article belongs to the Section Multidisciplinary Applications)
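The transition entropy extracted from such a heat capacity anomaly follows from ΔS = ∫ (C_excess/T) dT across the peak. A generic numerical sketch with synthetic data (an illustrative Gaussian peak, not the measured BaVS3 values):

```python
import numpy as np

# Synthetic excess heat capacity peak centered at the 69 K transition
# (illustrative Gaussian, NOT the measured BaVS3 data).
T = np.linspace(60.0, 80.0, 401)                    # temperature grid, K
C_excess = 5.0 * np.exp(-((T - 69.0) / 2.0) ** 2)   # J/(mol K)

# Transition entropy: dS = (C_excess / T) dT, trapezoidal integration
integrand = C_excess / T
delta_S = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)))
print(f"dS ~ {delta_S:.3f} J/(mol K)")
```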
Open Access Article
Obstructive Sleep Apnea Recognition Based on Multi-Bands Spectral Entropy Analysis of Short-Time Heart Rate Variability
Entropy 2019, 21(8), 812; https://doi.org/10.3390/e21080812
Received: 29 June 2019 / Revised: 11 August 2019 / Accepted: 16 August 2019 / Published: 20 August 2019
Abstract
Obstructive sleep apnea (OSA) syndrome is a common sleep disorder. As an alternative to polysomnography (PSG) for OSA screening, current automatic OSA detection methods mainly concentrate on feature extraction and classifier selection based on physiological signals. It has been reported that OSA is accompanied by autonomic nervous system (ANS) dysfunction, and heart rate variability (HRV) is a useful tool for ANS assessment. Therefore, in this paper, eight novel indices of short-time HRV are extracted for OSA detection, based on the proposed multi-bands time-frequency spectrum entropy (MTFSE) method. In the MTFSE, firstly, the power spectrum of HRV is estimated by the Burg–AR model, and the time-frequency spectrum image (TFSI) is obtained. Secondly, according to the physiological significance of HRV, the TFSI is divided into multiple sub-bands by frequency. Finally, by studying the Shannon entropy of the different sub-bands and the relationships among them, the eight indices are obtained. In order to validate the performance of the MTFSE-based indices, the Physionet Apnea–ECG database and K-nearest neighbor (KNN), support vector machine (SVM), and decision tree (DT) classification methods are used. The SVM classifier achieves the highest classification accuracy: its average accuracy is 91.89%, average sensitivity 88.01%, and average specificity 93.98%. The MTFSE-based indices thus provide a novel approach to the screening of OSA.
(This article belongs to the Section Signal and Data Analysis)
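The core ingredient, Shannon entropy of a power spectrum restricted to frequency sub-bands, can be sketched generically. This is an assumption-laden simplification: the paper's MTFSE additionally builds a Burg-AR time-frequency image, while here a plain spectrum and the conventional HRV bands are used:

```python
import numpy as np

def band_entropies(psd: np.ndarray, freqs: np.ndarray, bands) -> list:
    """Shannon entropy (bits) of a power spectrum within each band."""
    out = []
    for lo, hi in bands:
        p = psd[(freqs >= lo) & (freqs < hi)]
        p = p / p.sum()                          # normalize to a distribution
        out.append(float(-(p * np.log2(p + 1e-300)).sum()))
    return out

# Conventional HRV sub-bands: VLF, LF, HF (Hz)
bands = [(0.003, 0.04), (0.04, 0.15), (0.15, 0.4)]
freqs = np.linspace(0.0, 0.5, 501)
psd = np.ones_like(freqs)    # flat spectrum -> maximal entropy per band
print(band_entropies(psd, freqs, bands))
```

For a flat spectrum the entropy of each band grows with the number of frequency bins it contains, which is why the wider HF band scores highest here.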
Open Access Feature Paper Article
Magnetic Helicity and the Solar Dynamo
Entropy 2019, 21(8), 811; https://doi.org/10.3390/e21080811
Received: 16 July 2019 / Revised: 16 August 2019 / Accepted: 17 August 2019 / Published: 19 August 2019
Abstract
Solar magnetism is believed to originate through dynamo action in the tachocline. Statistical mechanics, in turn, tells us that dynamo action is an inherent property of magnetohydrodynamic (MHD) turbulence, depending essentially on magnetic helicity. Here, we model the tachocline as a rotating, thin spherical shell containing MHD turbulence. Using this model, we find an expression for the entropy and from this develop the thermodynamics of MHD turbulence. This allows us to introduce the macroscopic parameters that affect magnetic self-organization and dynamo action, parameters that include magnetic helicity, as well as tachocline thickness and turbulent energy.
(This article belongs to the Special Issue Solar and Stellar Variability and Statistical Mechanics)
Open Access Article
Quantifying the Unitary Generation of Coherence from Thermal Quantum Systems
Entropy 2019, 21(8), 810; https://doi.org/10.3390/e21080810
Received: 14 July 2019 / Revised: 1 August 2019 / Accepted: 16 August 2019 / Published: 19 August 2019
Abstract
Coherence is associated with transient quantum states; in contrast, equilibrium thermal quantum systems have no coherence. We investigate the quantum control task of generating maximum coherence from an initial thermal state employing an external field. A completely controllable Hamiltonian is assumed allowing the generation of all possible unitary transformations. Optimizing the unitary control to achieve maximum coherence leads to a micro-canonical energy distribution on the diagonal energy representation. We demonstrate such a control scenario starting from a given Hamiltonian applying an external field, reaching the control target. Such an optimization task is found to be trap-less. By constraining the amount of energy invested by the control, maximum coherence leads to a canonical energy population distribution. When the optimization procedure constrains the final energy too tightly, local suboptimal traps are found. The global optimum is obtained when a small Lagrange multiplier is employed to constrain the final energy. Finally, we explore the task of generating coherences restricted to be close to the diagonal of the density matrix in the energy representation.
(This article belongs to the Special Issue Quantum Thermodynamics II)
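The statement that thermal states carry no coherence can be made concrete with a standard coherence quantifier, the l1-norm of coherence (one common choice in the resource theory of coherence; the paper may use a different measure). A minimal sketch:

```python
import numpy as np

def l1_coherence(rho: np.ndarray) -> float:
    """l1-norm of coherence: sum of absolute off-diagonal elements of the
    density matrix in the energy eigenbasis. Zero for any thermal
    (diagonal) state."""
    return float(np.abs(rho).sum() - np.abs(np.diag(rho)).sum())

# Thermal qubit state (diagonal in energy) -> no coherence
rho_thermal = np.diag([0.7, 0.3])
# Maximally coherent qubit state |+><+|, written out explicitly
rho_plus = np.array([[0.5, 0.5],
                     [0.5, 0.5]])

print(l1_coherence(rho_thermal))  # 0.0
print(l1_coherence(rho_plus))     # 1.0 (maximal for a qubit)
```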
Open Access Article
Dual Loomis-Whitney Inequalities via Information Theory
Entropy 2019, 21(8), 809; https://doi.org/10.3390/e21080809
Received: 6 July 2019 / Revised: 16 August 2019 / Accepted: 16 August 2019 / Published: 18 August 2019
Abstract
We establish lower bounds on the volume and the surface area of a geometric body using the size of its slices along different directions. In the first part of the paper, we derive volume bounds for convex bodies using generalized subadditivity properties of entropy combined with entropy bounds for log-concave random variables. In the second part, we investigate a new notion of Fisher information, which we call the L1-Fisher information, and show that certain superadditivity properties of the L1-Fisher information lead to lower bounds for the surface areas of polyconvex sets in terms of their slices.
(This article belongs to the Special Issue Entropy and Information Inequalities)
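For orientation, the classical Loomis–Whitney inequality (a standard background fact, not the paper's new result) bounds volume from above by coordinate projections; the dual results developed here instead bound volume and surface area from below using slices:

```latex
% Classical Loomis–Whitney inequality: for a compact body K in R^n,
% writing K|e_i^\perp for the orthogonal projection of K onto the
% hyperplane perpendicular to the i-th coordinate direction,
\mathrm{vol}_n(K)^{\,n-1} \;\le\; \prod_{i=1}^{n} \mathrm{vol}_{n-1}\!\left( K \middle| e_i^{\perp} \right)
```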
Open Access Article
Evaluating Signalization and Channelization Selections at Intersections Based on an Entropy Method
Entropy 2019, 21(8), 808; https://doi.org/10.3390/e21080808
Received: 23 July 2019 / Revised: 13 August 2019 / Accepted: 16 August 2019 / Published: 18 August 2019
Abstract
Direct left turns (DLTs) can cause traffic slowdown, delay, stops, and even accidents at intersections, especially on no-median roads. Channelization and signalization can significantly diminish the negative impact of DLTs. In China, a total of 56 large and medium-sized cities, including 17 provincial capitals, have adopted vehicle restriction policies to address traffic congestion, vehicle energy conservation, and emission reduction, which causes travel inconvenience for citizens. This paper studies signalization and channelization selections at intersections based on an entropy method. In addition to the three commonly used evaluation indexes, the number of vehicles, CO emissions, and fuel consumption have been added. The entropy evaluation method (EEM) is used to objectively calculate the weights of the six indexes, which yield the optimal traffic volume combinations for intersections under the present situation, channelization, and signalization. A VISSIM simulation is also used to evaluate the operating status of the three conditions. The results show that the EEM can help enormously in choosing among methods at a given intersection. With the EEM, the six indexes decrease by up to 20–70%.
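The entropy weight method named in the abstract is a standard objective-weighting technique. A generic sketch under stated assumptions (toy data and a plain proportion normalization; the paper's exact treatment of benefit vs. cost indexes may differ):

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Objective index weights via the entropy weight method.
    X: (m alternatives) x (k indexes) matrix of positive values."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                         # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # normalized index entropy
    d = 1.0 - E                                   # degree of divergence
    return d / d.sum()                            # normalized weights

# Toy data: 3 scenarios x 3 indexes (e.g., delay, stops, CO emission)
X = np.array([[10.0, 2.0, 5.0],
              [12.0, 2.0, 9.0],
              [30.0, 2.0, 7.0]])
w = entropy_weights(X)
print(w)  # the constant second index receives (near-)zero weight
```

An index that is identical across all alternatives carries no discriminating information, so its entropy is maximal and its weight vanishes; that is the sense in which the weighting is "objective".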
Open Access Article
Memories of the Future. Predictable and Unpredictable Information in Fractional Flipping a Biased Coin
Entropy 2019, 21(8), 807; https://doi.org/10.3390/e21080807
Received: 1 August 2019 / Accepted: 15 August 2019 / Published: 18 August 2019
Abstract
Some uncertainty about flipping a biased coin can be resolved from the sequence of coin sides shown already. We report the exact amounts of predictable and unpredictable information in flipping a biased coin. Fractional coin flipping does not reflect any physical process, being defined as a binomial power series of the transition matrix for “integer” flipping. Due to strong coupling between the tossing outcomes at different times, the side repeating probabilities assumed to be independent for “integer” flipping get entangled with one another for fractional flipping. The predictable and unpredictable information components vary smoothly with the fractional order parameter. The destructive interference between two incompatible hypotheses about the flipping outcome culminates in a fair coin, which stays fair also for fractional flipping.
(This article belongs to the Special Issue The Fractional View of Complexity)
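A fractional power of a coin's transition matrix can be sketched by eigendecomposition. For a 2x2 stochastic matrix with eigenvalues in (0, 1], this coincides with the principal fractional power that a binomial power series converges to; treating the two as equivalent here is an assumption of this sketch, not a reproduction of the paper's derivation:

```python
import numpy as np

def fractional_power(T: np.ndarray, alpha: float) -> np.ndarray:
    """T^alpha via eigendecomposition: T = V diag(w) V^-1 gives
    T^alpha = V diag(w^alpha) V^-1 (valid for positive eigenvalues)."""
    w, V = np.linalg.eig(T)
    return (V @ np.diag(w ** alpha) @ np.linalg.inv(V)).real

# Biased coin: repeat the last shown side with probability p = 0.7
p = 0.7
T = np.array([[p, 1.0 - p],
              [1.0 - p, p]])

half = fractional_power(T, 0.5)
# Sanity checks: a "half flip" composed with itself recovers one full flip,
# and the fractional matrix is still stochastic (rows sum to 1).
print(np.allclose(half @ half, T))          # True
print(np.allclose(half.sum(axis=1), 1.0))   # True
```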
Open Access Article
Get Rid of Nonlocality from Quantum Physics
Entropy 2019, 21(8), 806; https://doi.org/10.3390/e21080806
Received: 7 July 2019 / Revised: 25 July 2019 / Accepted: 12 August 2019 / Published: 18 August 2019
Abstract
This paper aims to dissociate nonlocality from quantum theory. We demonstrate that the tests on violation of the Bell-type inequalities are simply statistical tests of local incompatibility of observables. In fact, these are tests on violation of the Bohr complementarity principle. Thus, the attempts to couple experimental violations of the Bell-type inequalities with “quantum nonlocality” are really misleading. These violations are explained in quantum theory as exhibitions of incompatibility of observables for a single quantum system, e.g., the spin projections for a single electron or the polarization projections for a single photon. Of course, one can go beyond quantum theory with hidden-variables models (as was suggested by Bell) and then discuss their possible nonlocal features. However, conventional quantum theory is local.
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
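The Bell-type tests in question are typically of the CHSH form. As a standard background calculation (not specific to this paper), the singlet-state correlations E(a, b) = -cos(a - b) at the optimal measurement angles reach the Tsirelson value 2√2, exceeding the local bound of 2:

```python
import math

def E(a: float, b: float) -> float:
    """Singlet-state correlation for measurement angles a, b."""
    return -math.cos(a - b)

# Standard CHSH combination at the angles maximizing the quantum value
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, -math.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

print(abs(S), 2.0 * math.sqrt(2.0))  # |S| = 2*sqrt(2) > 2 (the local bound)
```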
Open Access Article
Integrated Information in Process-Algebraic Compositions
Entropy 2019, 21(8), 805; https://doi.org/10.3390/e21080805
Received: 25 March 2019 / Revised: 12 August 2019 / Accepted: 14 August 2019 / Published: 17 August 2019
Abstract
Integrated Information Theory (IIT) is most typically applied to Boolean nets, a state transition model in which system parts cooperate by sharing state variables. By contrast, in Process Algebra, whose semantics can also be formulated in terms of (labeled) state transitions, system parts—“processes”—cooperate by sharing transitions with matching labels, according to interaction patterns expressed by suitable composition operators. Despite this substantial difference, questioning how much additional information is provided by the integration of the interacting partners above and beyond the sum of their independent contributions appears perfectly legitimate with both types of cooperation. In fact, we collect statistical data about ϕ (integrated information) relative to pairs of Boolean nets that cooperate by three alternative mechanisms: shared variables (the standard choice for Boolean nets) and two forms of shared transition, inspired by two process algebras. We name these mechanisms α, β, and γ. Quantitative characterizations of all of them are obtained by considering three alternative execution modes, namely synchronous, asynchronous, and “hybrid”, by exploring the full range of possible coupling degrees in all three cases, and by considering two possible definitions of ϕ based on two alternative notions of distribution distance.
(This article belongs to the Special Issue Integrated Information Theory)
Open Access Feature Paper Article
Measures of Entropy to Characterize Fatigue Damage in Metallic Materials
Entropy 2019, 21(8), 804; https://doi.org/10.3390/e21080804
Received: 18 June 2019 / Revised: 7 August 2019 / Accepted: 14 August 2019 / Published: 17 August 2019
Abstract
This paper presents the entropic damage indicators for metallic material fatigue processes obtained from three associated energy dissipation sources. Since its inception, reliability engineering has employed statistical and probabilistic models to assess the reliability and integrity of components and systems. To supplement the traditional techniques, an empirically-based approach, called physics of failure (PoF), has recently become popular. The prerequisite for a PoF analysis is an understanding of the mechanics of the failure process. Entropy, the measure of disorder and uncertainty, introduced from the second law of thermodynamics, has emerged as a fundamental and promising metric to characterize all mechanistic degradation phenomena and their interactions. Entropy has already been used as a fundamental and scale-independent metric to predict damage and failure. In this paper, three entropic-based metrics are examined and demonstrated for application to fatigue damage. We collected experimental data on energy dissipations associated with fatigue damage, in the forms of mechanical, thermal, and acoustic emission (AE) energies, and estimated and correlated the corresponding entropy generations with the observed fatigue damages in metallic materials. Three entropic theorems—thermodynamics, information, and statistical mechanics—support approaches used to estimate the entropic-based fatigue damage. Classical thermodynamic entropy provided a reasonably constant level of entropic endurance to fatigue failure. Jeffreys divergence in statistical mechanics and AE information entropy also correlated well with fatigue damage. Finally, an extension of the relationship between thermodynamic entropy and Jeffreys divergence from molecular-scale to macro-scale applications in fatigue failure resulted in an empirically-based pseudo-Boltzmann constant equivalent to the Boltzmann constant.
Open Access Article
Gas-Vapor Mixture Temperature in the Near-Surface Layer of a Rapidly-Evaporating Water Droplet
Entropy 2019, 21(8), 803; https://doi.org/10.3390/e21080803
Received: 19 July 2019 / Revised: 10 August 2019 / Accepted: 15 August 2019 / Published: 16 August 2019
Abstract
Mathematical modeling of the heat and mass transfer processes in the evaporating droplet–high-temperature gas medium system is difficult due to the need to describe the dynamics of the formation of the quasi-steady temperature field of evaporating droplets, as well as of the gas-vapor buffer layer around them and in their trace during evaporation in high-temperature gas flows. We used planar laser-induced fluorescence (PLIF) and laser-induced phosphorescence (LIP). The experiments were conducted with water droplets (initial radius 1–2 mm) heated in a hot air flow (temperature 20–500 °C, velocity 0.5–6 m/s). Unsteady temperature fields of the water droplets and the gas-vapor mixture around them were recorded. The high inhomogeneity of the temperature fields under study was confirmed. To determine the temperature in the so-called dead zones, we solved a heat transfer problem in which the boundary-condition temperatures were set on the basis of experimental values.
Open Access Article
A Clustering Approach for Motif Discovery in ChIP-Seq Dataset
Entropy 2019, 21(8), 802; https://doi.org/10.3390/e21080802
Received: 6 June 2019 / Revised: 4 August 2019 / Accepted: 15 August 2019 / Published: 16 August 2019
Abstract
Chromatin immunoprecipitation combined with next-generation sequencing (ChIP-Seq) technology has enabled the identification of transcription factor binding sites (TFBSs) on a genome-wide scale. To effectively and efficiently discover TFBSs in the thousand or more DNA sequences generated by a ChIP-Seq data set, we propose a new algorithm named AP-ChIP. First, we set two thresholds based on probabilistic analysis to construct and further filter the cluster subsets. Then, we use Affinity Propagation (AP) clustering on the candidate cluster subsets to find the potential motifs. Experimental results on simulated data show that the AP-ChIP algorithm is able to make an almost accurate prediction of TFBSs in a reasonable time. Also, the validity of the AP-ChIP algorithm is tested on a real ChIP-Seq data set.
(This article belongs to the Section Entropy and Biology)
Open Access Article
Hall and Ion-Slip Effect on CNTS Nanofluid over a Porous Extending Surface through Heat Generation and Absorption
Entropy 2019, 21(8), 801; https://doi.org/10.3390/e21080801
Received: 3 July 2019 / Revised: 2 August 2019 / Accepted: 5 August 2019 / Published: 16 August 2019
Abstract
In this research work, a 3D rotating flow of carbon nanotubes (CNTs) over a porous stretchable sheet for heat and mass transfer is investigated. Kerosene oil is considered as the base liquid, and two types of CNTs, single-walled (SWCNTs) and multi-walled (MWCNTs), are added as additives to the base liquid. The present analysis further comprises the combined effects of Hall, ion-slip, and thermal radiation, along with heat generation/absorption. After applying appropriate transformations, the governing system of ordinary differential equations is obtained. The resulting nonlinear system of equations (conservation of mass, momentum, and energy) is solved by the Homotopy Analysis Method (HAM). Solutions for the velocity and thermal fields are obtained and discussed graphically. Expressions for the skin friction coefficient Cf and the Nusselt number Nu are derived for both types of nanoliquids. The influences of the prominent physical factors on the velocity and thermal profiles are plotted using Mathematica. These graphical results are qualitatively in excellent agreement with previously published results. Also, single-walled CNTs are found to attain higher temperatures than multi-walled CNTs.
Open Access Article
Quantum Simulation Logic, Oracles, and the Quantum Advantage
Entropy 2019, 21(8), 800; https://doi.org/10.3390/e21080800
Received: 16 May 2019 / Revised: 8 August 2019 / Accepted: 12 August 2019 / Published: 15 August 2019
Abstract
Query complexity is a common tool for comparing quantum and classical computation, and it has produced many examples of how quantum algorithms differ from classical ones. Here we investigate in detail the role that oracles play for the advantage of quantum algorithms. We do so by using a simulation framework, Quantum Simulation Logic (QSL), to construct oracles and algorithms that solve some problems with the same success probability and number of queries as the quantum algorithms. The framework can be simulated using only classical resources at a constant overhead as compared to the quantum resources used in quantum computation. Our results clarify the assumptions made and the conditions needed when using quantum oracles. Using the same assumptions on oracles within the simulation framework we show that for some specific algorithms, such as the Deutsch-Jozsa and Simon’s algorithms, there simply is no advantage in terms of query complexity. This does not detract from the fact that quantum query complexity provides examples of how a quantum computer can be expected to behave, which in turn has proved useful for finding new quantum algorithms outside of the oracle paradigm, where the most prominent example is Shor’s algorithm for integer factorization.
Open AccessArticle
Distinguishing between Clausius, Boltzmann and Pauling Entropies of Frozen Non-Equilibrium States
Entropy 2019, 21(8), 799; https://doi.org/10.3390/e21080799
Received: 13 June 2019 / Revised: 9 August 2019 / Accepted: 12 August 2019 / Published: 15 August 2019
Viewed by 265 | PDF Full-text (3053 KB) | XML Full-text
Abstract
In conventional textbook thermodynamics, entropy is a quantity that may be calculated by different methods, for example experimentally from heat capacities (following Clausius) or statistically from numbers of microscopic quantum states (following Boltzmann and Planck). It had turned out that these methods do [...] Read more.
In conventional textbook thermodynamics, entropy is a quantity that may be calculated by different methods, for example experimentally from heat capacities (following Clausius) or statistically from numbers of microscopic quantum states (following Boltzmann and Planck). It turned out that these methods do not necessarily provide mutually consistent results, and for equilibrium systems their difference was explained by introducing a residual zero-point entropy (following Pauling), apparently violating the Nernst theorem. At finite temperatures, the associated statistical entropies, which count microstates that do not contribute to a body’s heat capacity, differ systematically from the Clausius entropy and are of particular relevance as measures for metastable, frozen-in non-equilibrium structures and for symbolic information processing (following Shannon). In this paper, it is suggested that the Clausius, Boltzmann, Pauling and Shannon entropies be considered distinct, though related, physical quantities with different key properties, in order to avoid the confusion caused by speaking loosely about just “entropy” while actually referring to different kinds of it. For instance, zero-point entropy belongs exclusively to the Boltzmann rather than the Clausius entropy, while the Nernst theorem holds rigorously for the Clausius rather than the Boltzmann entropy. The discussion of these terms is underpinned by a brief historical review of the emergence of the corresponding fundamental thermodynamic concepts. Full article
(This article belongs to the Special Issue Crystallization Thermodynamics)
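Pauling's classic example of residual zero-point entropy, water ice, makes the Boltzmann/Clausius distinction concrete: counting hydrogen-bond configurations gives roughly (3/2)^N microstates, i.e. a molar residual entropy of R ln(3/2) that calorimetry never sees. A quick check of the number (standard constants assumed):

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

# Pauling's ice estimate: each molecule has effectively 3/2 allowed
# hydrogen-bond configurations, so the residual molar entropy is R*ln(3/2),
# close to the calorimetrically inferred ~3.4 J/(mol K).
s_residual = R * math.log(3 / 2)
print(f"S0 = {s_residual:.3f} J/(mol K)")
```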
Open AccessFeature PaperArticle
Sex Differences in the Complexity of Healthy Older Adults’ Magnetoencephalograms
Entropy 2019, 21(8), 798; https://doi.org/10.3390/e21080798
Received: 24 June 2019 / Revised: 12 August 2019 / Accepted: 13 August 2019 / Published: 15 August 2019
Viewed by 217 | PDF Full-text (1485 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of resting-state brain activity recording in magnetoencephalograms (MEGs) with new algorithms of symbolic dynamics analysis could help obtain a deeper insight into the functioning of the brain and identify potential differences between males and females. Permutation Lempel-Ziv complexity (PLZC), a recently [...] Read more.
The analysis of resting-state brain activity recorded in magnetoencephalograms (MEGs) with new algorithms of symbolic dynamics analysis could help obtain a deeper insight into the functioning of the brain and identify potential differences between males and females. Permutation Lempel-Ziv complexity (PLZC), a recently introduced non-linear signal processing algorithm based on symbolic dynamics, was used to evaluate the complexity of MEG signals in source space. PLZC was estimated in a broad band of frequencies (2–45 Hz), as well as in narrow bands (i.e., theta (4–8 Hz), alpha (8–12 Hz), low beta (12–20 Hz), high beta (20–30 Hz), and gamma (30–45 Hz)) in a sample of 98 healthy elderly subjects (49 males, 49 females) aged 65–80 (average age 72.71 ± 4.22 for males and 72.67 ± 4.21 for females). PLZC was significantly higher for females than males in the high beta band at posterior brain regions, including the precuneus and the parietal and occipital cortices. Further statistical analyses showed that higher complexity values over regions largely overlapping with those mentioned above were associated with larger hippocampal volumes only in females. These results suggest that sex differences in healthy aging can be identified from the analysis of magnetoencephalograms with novel signal processing methods. Full article
(This article belongs to the Special Issue Entropy Applications in EEG/MEG)
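PLZC, as described, combines two standard ingredients: ordinal (permutation) symbolization of the signal, followed by a Lempel-Ziv phrase count on the symbol sequence. A rough sketch of the idea, using a simplified LZ78-style parsing rather than necessarily the exact LZ76 variant the authors use:

```python
import itertools
import numpy as np

def ordinal_symbols(x, m=3):
    """Map each length-m window of x to the index of its ordinal
    (permutation) pattern, in lexicographic order of the m! patterns."""
    perms = {p: i for i, p in enumerate(itertools.permutations(range(m)))}
    return [perms[tuple(np.argsort(x[i:i + m]))] for i in range(len(x) - m + 1)]

def lz_complexity(s):
    """Count distinct phrases in a simplified LZ78-style parsing of s:
    grow a phrase until it is new, record it, restart."""
    phrases, phrase = set(), []
    for sym in s:
        phrase.append(sym)
        if tuple(phrase) not in phrases:
            phrases.add(tuple(phrase))
            phrase = []
    return len(phrases) + (1 if phrase else 0)
```

A monotone signal yields a single repeated ordinal pattern and thus low phrase count, while an irregular signal yields many distinct phrases.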
Open AccessArticle
A New Model for Complex Dynamical Networks Considering Random Data Loss
Entropy 2019, 21(8), 797; https://doi.org/10.3390/e21080797
Received: 2 August 2019 / Revised: 14 August 2019 / Accepted: 14 August 2019 / Published: 15 August 2019
Viewed by 190 | PDF Full-text (1808 KB) | HTML Full-text | XML Full-text
Abstract
Model construction is a very fundamental and important issue in the field of complex dynamical networks. With the state-coupling complex dynamical network model proposed, many kinds of complex dynamical network models were introduced by considering various practical situations. In this paper, aiming at [...] Read more.
Model construction is a fundamental and important issue in the field of complex dynamical networks. Since the state-coupling complex dynamical network model was proposed, many kinds of complex dynamical network models have been introduced to address various practical situations. In this paper, aiming at the data loss that may take place in the communication between any pair of directly connected nodes in a complex dynamical network, we propose a new discrete-time complex dynamical network model by constructing an auxiliary observer and choosing the observer states to compensate for the lost states in the coupling term. By employing Lyapunov stability theory and stochastic analysis, a sufficient condition is derived to guarantee that the compensation values finally equal the lost values; that is, the influence of data loss is eventually eliminated in the proposed model. Moreover, we generalize the modeling method to output-coupling complex dynamical networks. Finally, two numerical examples are provided to demonstrate the effectiveness of the proposed model. Full article
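The compensation idea can be caricatured on a two-node scalar toy network: when a transmitted neighbor state is lost, the receiver substitutes an observer estimate instead of dropping the coupling term. This is only an illustrative stand-in for the paper's Lyapunov-based construction — the dynamics, gains, and "observer" below are all invented for the sketch:

```python
import random

random.seed(0)

def simulate(steps=200, a=0.5, c=0.2, p=0.3):
    """Two coupled contracting scalar nodes; each transmission is lost with
    probability p, in which case the receiver falls back on its stored
    estimate of the other node (a naive last-value observer)."""
    x1, x2 = 1.0, -1.0
    est1, est2 = 0.0, 0.0          # each node's estimate of the OTHER node
    for _ in range(steps):
        r1 = x2 if random.random() > p else est1   # node 1 receives x2, or not
        r2 = x1 if random.random() > p else est2
        x1, x2 = a * x1 + c * (r1 - x1), a * x2 + c * (r2 - x2)
        est1, est2 = r1, r2        # remember the last value actually used
    return x1, x2
```

With |a| < 1 and small coupling the network still converges despite the random losses, which is the qualitative point of the compensation scheme.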
Open AccessArticle
Optimization and Evaluation of Ventilation Mode in Marine Data Center Based on AHP-Entropy Weight
Entropy 2019, 21(8), 796; https://doi.org/10.3390/e21080796
Received: 12 July 2019 / Revised: 9 August 2019 / Accepted: 13 August 2019 / Published: 15 August 2019
Viewed by 203 | PDF Full-text (5353 KB) | HTML Full-text | XML Full-text
Abstract
The ventilation mode affects the cooling efficiency of the air conditioners significantly in marine data centers. Three different ventilation modes, namely, underfloor ventilation, overhead ventilation, side ventilation, are numerically investigated for a typical marine data center. Four independent parameters, including temperature, velocity, air [...] Read more.
The ventilation mode significantly affects the cooling efficiency of the air conditioners in marine data centers. Three different ventilation modes, namely underfloor ventilation, overhead ventilation, and side ventilation, are numerically investigated for a typical marine data center. Four independent parameters, including temperature, velocity, air age, and uniformity index, are applied to evaluate the performances of the three ventilation modes. Further, the analytic hierarchy process (AHP) entropy weight model is established and further analysis is conducted to find the optimal ventilation mode for the marine data center. The results indicate that the underfloor ventilation mode performs best in the airflow-pattern and temperature-distribution evaluation categories, with the highest scores of 91.84 and 90.60. If low energy consumption is required, the overhead ventilation mode is recommended, with a maximum score of 93.50. The current evaluation results agree fairly well with the three-dimensional simulation results, which further proves that the AHP entropy weight method is reasonable and highly adaptable for the evaluation of air conditioning ventilation modes. Full article
(This article belongs to the Special Issue Thermodynamic Optimization)
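The entropy-weight half of the AHP-entropy method is a standard computation: criteria whose values are nearly uniform across the alternatives carry little information and receive small weights. A generic sketch (the decision matrix below is made up, not the paper's data):

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: rows = alternatives, columns = criteria.
    Returns one weight per criterion, summing to 1."""
    P = X / X.sum(axis=0)                       # column-normalize to proportions
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        # entropy per criterion, with 0*log(0) treated as 0
        E = -np.nansum(P * np.log(P), axis=0) / np.log(m)
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()
```

In the test matrix the first criterion is identical across all alternatives, so its entropy is maximal and its weight is (numerically) zero.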
Open AccessArticle
Spectral Embedded Deep Clustering
Entropy 2019, 21(8), 795; https://doi.org/10.3390/e21080795
Received: 13 June 2019 / Revised: 4 August 2019 / Accepted: 12 August 2019 / Published: 15 August 2019
Viewed by 210 | PDF Full-text (565 KB) | HTML Full-text | XML Full-text
Abstract
We propose a new clustering method based on a deep neural network. Given an unlabeled dataset and the number of clusters, our method directly groups the dataset into the given number of clusters in the original space. We use a conditional discrete probability [...] Read more.
We propose a new clustering method based on a deep neural network. Given an unlabeled dataset and the number of clusters, our method directly groups the dataset into the given number of clusters in the original space. We use a conditional discrete probability distribution defined by a deep neural network as a statistical model. Our strategy is first to estimate the cluster labels of unlabeled data points selected from a high-density region, and then to conduct semi-supervised learning to train the model by using the estimated cluster labels and the remaining unlabeled data points. Lastly, by using the trained model, we obtain the estimated cluster labels of all given unlabeled data points. The advantage of our method is that it does not require the key conditions assumed by existing deep-neural-network clustering methods, such as a uniform cluster balance in the given dataset. Moreover, it can be applied to various data domains as long as the data are expressed by feature vectors. In addition, the method is observed to be robust against outliers. Therefore, the proposed method is expected to perform, on average, better than previous methods. We conducted numerical experiments on five commonly used datasets to confirm the effectiveness of the proposed method. Full article
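The seed-then-propagate strategy can be sketched with a drastically simplified stand-in: pick high-density points as cluster seeds, then label every point by its nearest seed. The actual method trains a deep network semi-supervised on the seed labels; the density proxy and seed-spacing rule below are invented purely for illustration:

```python
import numpy as np

def density_seeded_labels(X, k_clusters, k_nn=5):
    """Toy seed-then-propagate clustering: density = inverse mean distance to
    k_nn nearest neighbors; seeds = densest points that are mutually far
    apart; labels = index of nearest seed."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    density = 1.0 / (np.sort(D, axis=1)[:, 1:k_nn + 1].mean(axis=1) + 1e-12)
    seeds = []
    for i in np.argsort(-density):              # densest candidates first
        if all(D[i, s] > np.median(D) for s in seeds):
            seeds.append(i)
        if len(seeds) == k_clusters:
            break
    return np.argmin(D[:, seeds], axis=1)
```

On two well-separated blobs this recovers the blob structure, which is all the sketch is meant to show.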
Open AccessReview
The Introduction of Entropy and Information Methods to Ecology by Ramon Margalef
Entropy 2019, 21(8), 794; https://doi.org/10.3390/e21080794
Received: 24 June 2019 / Revised: 12 August 2019 / Accepted: 12 August 2019 / Published: 15 August 2019
Viewed by 249 | PDF Full-text (243 KB) | HTML Full-text | XML Full-text
Abstract
In ecology and evolution, entropic methods are now used widely and increasingly frequently. Their use can be traced back to Ramon Margalef’s first attempt 70 years ago to use log-series to quantify ecological diversity, including searching for ecologically meaningful groupings within a large [...] Read more.
In ecology and evolution, entropic methods are now used widely and increasingly frequently. Their use can be traced back to Ramon Margalef’s first attempt 70 years ago to use log-series to quantify ecological diversity, including searching for ecologically meaningful groupings within a large assemblage, which we now call the gamma level. The same year, Shannon and Weaver published a generally accessible form of Shannon’s work on information theory, including the measure that we now call Shannon–Wiener entropy. Margalef seized on that measure and soon proposed that ecologists should use the Shannon–Wiener index to evaluate diversity, including assessing local (alpha) diversity and differentiation between localities (beta). He also discussed relating this measure to environmental variables and ecosystem processes such as succession. Over the subsequent decades, he enthusiastically expanded upon his initial suggestions. Fittingly, 2019 would also have been Margalef’s 100th birthday. Full article
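The Shannon–Wiener index that Margalef championed is simply H′ = −Σ p_i ln p_i over species proportions, computed here from raw abundance counts:

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity H' = -sum p_i ln p_i, from abundance counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)
```

It is maximal (ln S for S species) when abundances are even, and zero for a single-species assemblage.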
Open AccessArticle
A Comparative Study of Multiscale Sample Entropy and Hierarchical Entropy and Its Application in Feature Extraction for Ship-Radiated Noise
Entropy 2019, 21(8), 793; https://doi.org/10.3390/e21080793
Received: 2 July 2019 / Revised: 8 August 2019 / Accepted: 13 August 2019 / Published: 14 August 2019
Viewed by 264 | PDF Full-text (1061 KB) | HTML Full-text | XML Full-text
Abstract
The presence of marine ambient noise makes it difficult to extract effective features from ship-radiated noise. Traditional feature extraction methods based on the Fourier transform or wavelets are limited in such a complex ocean environment. Recently, entropy-based methods have been proven to have [...] Read more.
The presence of marine ambient noise makes it difficult to extract effective features from ship-radiated noise. Traditional feature extraction methods based on the Fourier transform or wavelets are limited in such a complex ocean environment. Recently, entropy-based methods have been proven to have many advantages compared with traditional methods. In this paper, we propose a novel feature extraction method for ship-radiated noise based on hierarchical entropy (HE). Compared with the traditional entropy, namely multiscale sample entropy (MSE), which only considers information carried in the lower frequency components, HE takes into account both lower and higher frequency components of signals. We illustrate the different properties of HE and MSE by testing them on simulation signals. The results show that HE has better performance than MSE, especially when the difference in signals is mainly focused on higher frequency components. Furthermore, experiments on real-world data of five types of ship-radiated noise are conducted. A probabilistic neural network is employed to evaluate the performance of the obtained features. Results show that HE has a higher classification accuracy for the five types of ship-radiated noise compared with MSE. This indicates that the HE-based feature extraction method could be used to identify ships in the field of underwater acoustic signal processing. Full article
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics)
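The contrast drawn between MSE and HE is visible in the decomposition operators themselves: multiscale coarse-graining only averages (a low-pass step), while the hierarchical scheme keeps both the average and the difference of adjacent pairs, so high-frequency content survives. A minimal sketch of the two decompositions (the subsequent sample-entropy computation is omitted):

```python
import numpy as np

def coarse_grain(x, scale):
    """Multiscale step: non-overlapping averages, keeping low frequencies."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def hierarchical_split(x):
    """Hierarchical step: average AND difference of adjacent pairs, so the
    difference component retains the high-frequency content MSE discards."""
    pairs = x[:len(x) // 2 * 2].reshape(-1, 2)
    low = (pairs[:, 0] + pairs[:, 1]) / 2
    high = (pairs[:, 0] - pairs[:, 1]) / 2
    return low, high
```

Applying both operators recursively to each output builds the hierarchical tree of components on which the entropies are evaluated.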
Open AccessArticle
Model of Random Field with Piece-Constant Values and Sampling-Restoration Algorithm of Its Realizations
Entropy 2019, 21(8), 792; https://doi.org/10.3390/e21080792
Received: 27 June 2019 / Revised: 31 July 2019 / Accepted: 5 August 2019 / Published: 14 August 2019
Viewed by 243 | PDF Full-text (3441 KB) | HTML Full-text | XML Full-text
Abstract
We propose a description of the model of a random piecewise constant field formed by the sum of realizations of two Markov processes with an arbitrary number of states and defined along mutually perpendicular axes. The number of field quantization levels can be [...] Read more.
We propose a model of a random piecewise constant field formed by the sum of realizations of two Markov processes, each with an arbitrary number of states, defined along mutually perpendicular axes. The number of field quantization levels can be arbitrary. Realizations of the desired shape are created by appropriate selection of parameters for the formative realizations of the Markov processes. For the proposed field model, we investigate a sampling and restoration algorithm for arbitrarily selected realizations, and determine the optimal sampling and recovery algorithms. The resulting sampling is fundamentally non-periodic. Recovery errors are calculated, and two examples are considered. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open AccessArticle
Blind Frequency Estimation and Symbol Recovery for the Analytically Solvable Chaotic System
Entropy 2019, 21(8), 791; https://doi.org/10.3390/e21080791
Received: 26 June 2019 / Revised: 5 August 2019 / Accepted: 5 August 2019 / Published: 13 August 2019
Viewed by 257 | PDF Full-text (787 KB) | HTML Full-text | XML Full-text
Abstract
The analytically solvable chaotic system (ASCS) is a promising chaotic system in chaos communication and radar fields. In this paper, we propose a maximum likelihood estimator (MLE) to estimate the frequency of ASCS, then a difference-integral (DI) detector is designed with the estimated [...] Read more.
The analytically solvable chaotic system (ASCS) is a promising chaotic system for the chaos communication and radar fields. In this paper, we propose a maximum likelihood estimator (MLE) to estimate the frequency of an ASCS; a difference-integral (DI) detector is then designed with the estimated frequency, and the symbols encoded in the signal are recovered. In the proposed method, the frequency parameter is estimated by an MLE based on the square power of the received signal. The Cramer-Rao lower bound for blind frequency estimation and the bit error performance in symbol detection are analyzed to assess the performance of the proposed method. Numerical results validate the analysis and demonstrate that the proposed symbol detector achieves error performance within only 1 dB of the coherent detector. The robustness of the proposed method to parameter variations is also verified through simulations. Full article
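The square-power step can be demonstrated on a toy antipodal signal, a crude stand-in for the ASCS waveform: squaring removes the ±1 symbol modulation and concentrates power at twice the carrier frequency, where a periodogram peak recovers it. All parameters below are illustrative:

```python
import numpy as np

fs = 1000.0                       # sample rate, Hz
f0 = 57.0                         # true carrier frequency, Hz
rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1 / fs)
symbols = rng.choice([-1.0, 1.0], size=t.size)   # stand-in for chaotic symbols
x = symbols * np.cos(2 * np.pi * f0 * t)

# Squaring kills the +/-1 modulation: x^2 = cos^2 = 0.5 + 0.5*cos(2pi*(2*f0)*t),
# so the periodogram of x^2 (DC removed) peaks at 2*f0; halve to recover f0.
spectrum = np.abs(np.fft.rfft(x**2 - np.mean(x**2)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_hat = freqs[np.argmax(spectrum)] / 2
```

In the noiseless case the estimate lands on the exact bin; the paper's MLE refines this idea and is analyzed against the Cramer-Rao bound under noise.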
Open AccessArticle
A Secure and Fast Image Encryption Scheme Based on Double Chaotic S-Boxes
Entropy 2019, 21(8), 790; https://doi.org/10.3390/e21080790
Received: 22 July 2019 / Revised: 7 August 2019 / Accepted: 12 August 2019 / Published: 13 August 2019
Viewed by 258 | PDF Full-text (5469 KB) | XML Full-text
Abstract
In order to improve the security and efficiency of image encryption systems comprehensively, a novel chaotic S-box based image encryption scheme is proposed. Firstly, a new compound chaotic system, Sine-Tent map, is proposed to widen the chaotic range and improve the chaotic performance [...] Read more.
In order to comprehensively improve the security and efficiency of image encryption systems, a novel chaotic S-box based image encryption scheme is proposed. Firstly, a new compound chaotic system, the Sine-Tent map, is proposed to widen the chaotic range and improve the chaotic performance of 1D discrete chaotic maps. As a result, the new compound chaotic system is more suitable for cryptosystems. Secondly, an efficient and simple method for generating S-boxes is proposed, which can greatly improve the efficiency of S-box production. Thirdly, a novel double S-box based image encryption algorithm is proposed. By introducing equivalent key sequences {r, t} related to the image ciphertext, the proposed cryptosystem can resist the four classical types of attacks, which is an advantage over other S-box based encryption schemes. Furthermore, the scheme’s resistance to differential analysis attacks is enhanced by two rounds of forward and backward confusion-diffusion operations with double S-boxes. The simulation results and security analysis verify the effectiveness of the proposed scheme. The new scheme has obvious efficiency advantages, which means that it has better application potential in real-time image encryption. Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
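The S-box generation idea can be sketched generically: iterate a compound chaotic map and rank the orbit to obtain a byte permutation. The map form below is a plausible sine/tent mix chosen for illustration only, not necessarily the paper's exact Sine-Tent definition, and the constants are arbitrary:

```python
import math

def sine_tent(x, r):
    """An assumed compound of the sine map and the tent map, folded into
    [0, 1); NOT necessarily the paper's Sine-Tent map."""
    sine = math.sin(math.pi * x)
    tent = 2 * x if x < 0.5 else 2 * (1 - x)
    return (r * sine / 4 + (4 - r) * tent / 4) % 1.0

def chaotic_sbox(x0=0.37, r=3.9, size=256):
    """Generate an S-box by ranking a chaotic orbit: the argsort of the
    samples is a permutation of 0..size-1."""
    x = x0
    for _ in range(100):          # discard the transient
        x = sine_tent(x, r)
    samples = []
    for _ in range(size):
        x = sine_tent(x, r)
        samples.append(x)
    return sorted(range(size), key=lambda i: samples[i])
```

Whatever the orbit looks like, ranking it always yields a valid byte permutation; the cryptographic quality of that permutation is what the paper's analysis evaluates.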
Open AccessArticle
Asymptotic Behavior of Memristive Circuits
Entropy 2019, 21(8), 789; https://doi.org/10.3390/e21080789
Received: 16 July 2019 / Revised: 2 August 2019 / Accepted: 6 August 2019 / Published: 13 August 2019
Viewed by 291 | PDF Full-text (649 KB) | HTML Full-text | XML Full-text
Abstract
The interest in memristors has risen due to their possible application both as memory units and as computational devices in combination with CMOS. This is in part due to their nonlinear dynamics, and a strong dependence on the circuit topology. We provide evidence [...] Read more.
Interest in memristors has risen due to their possible application both as memory units and as computational devices in combination with CMOS. This is in part due to their nonlinear dynamics and a strong dependence on the circuit topology. We provide evidence that purely memristive circuits can also be employed for computational purposes. In the present paper we show that a polynomial Lyapunov function in the memory parameters exists for the case of DC controlled memristors. Such a Lyapunov function can be asymptotically approximated with binary variables and mapped to quadratic combinatorial optimization problems. This also shows a direct parallel between memristive circuits and the Hopfield-Little model. In the case of Erdos-Renyi random circuits, we show numerically that the distribution of the matrix elements of the projectors can be roughly approximated with a Gaussian distribution, and that it scales with the inverse square root of the number of elements. This provides an approximate but direct connection with the physics of disordered systems and, in particular, of mean field spin glasses. Using this, together with the fact that the interaction is controlled by a projection operator on the loop space of the circuit, we estimate the number of stationary points of the approximate Lyapunov function and provide a scaling formula as an upper bound in terms of the circuit topology only. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
Open AccessArticle
Pricing Interval European Option with the Principle of Maximum Entropy
Entropy 2019, 21(8), 788; https://doi.org/10.3390/e21080788
Received: 25 June 2019 / Revised: 2 August 2019 / Accepted: 9 August 2019 / Published: 13 August 2019
Viewed by 256 | PDF Full-text (1166 KB) | HTML Full-text | XML Full-text
Abstract
This paper develops the interval maximum entropy model for the interval European option valuation by estimating an underlying asset distribution. The refined solution for the model is obtained by the Lagrange multiplier. The particle swarm optimization algorithm is applied to calculate the density [...] Read more.
This paper develops an interval maximum entropy model for interval European option valuation by estimating the underlying asset distribution. The refined solution of the model is obtained via Lagrange multipliers. The particle swarm optimization algorithm is applied to calculate the density function of the underlying asset, which can be utilized to price the Shanghai Stock Exchange (SSE) 50 Exchange Traded Funds (ETF) option of China and the Boeing stock option of the United States. Results show that the maximum entropy distribution provides precise estimations for the underlying asset in interval number situations. In this way, we can obtain the distribution of the underlying asset and apply it to interval European option pricing in the financial market. Full article
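The Lagrange-multiplier structure of the maximum entropy solution is easy to exhibit in the discrete case: under a mean constraint, the entropy-maximizing distribution is exponential in the multiplier, which can be found by simple bisection. (The paper solves its interval version with particle swarm optimization; this sketch only illustrates the underlying principle.)

```python
import math

def maxent_given_mean(xs, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution on support xs subject to a mean
    constraint: p_i proportional to exp(lam * x_i), with the Lagrange
    multiplier lam found by bisection (the mean is increasing in lam)."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]
```

With the constraint set at the support's midpoint the multiplier vanishes and the uniform distribution is recovered, the textbook sanity check.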
Open AccessArticle
Geometric Estimation of Multivariate Dependency
Entropy 2019, 21(8), 787; https://doi.org/10.3390/e21080787
Received: 21 May 2019 / Revised: 2 August 2019 / Accepted: 8 August 2019 / Published: 12 August 2019
Viewed by 371 | PDF Full-text (612 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes a geometric estimator of dependency between a pair of multivariate random variables. The proposed estimator of dependency is based on a randomly permuted geometric graph (the minimal spanning tree) over the two multivariate samples. This estimator converges to a quantity [...] Read more.
This paper proposes a geometric estimator of dependency between a pair of multivariate random variables. The proposed estimator of dependency is based on a randomly permuted geometric graph (the minimal spanning tree) over the two multivariate samples. This estimator converges to a quantity that we call the geometric mutual information (GMI), which is equivalent to the Henze–Penrose divergence between the joint distribution of the multivariate samples and the product of the marginals. The GMI has many of the same properties as standard MI but can be estimated from empirical data without density estimation, making it scalable to large datasets. The proposed empirical estimator of GMI is simple to implement, involving the construction of a minimal spanning tree (MST) over both the original data and a randomly permuted version of this data. We establish asymptotic convergence of the estimator and convergence rates of the bias and variance for smooth multivariate density functions belonging to a Hölder class. We demonstrate the advantages of our proposed geometric dependency estimator in a series of experiments. Full article
(This article belongs to the Special Issue Women in Information Theory 2018)
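The Henze–Penrose quantity behind the GMI has a classical plug-in estimator, the Friedman–Rafsky statistic: build an MST over the pooled samples and count edges joining the two samples. A small numpy-only sketch (O(n³) Prim's algorithm, fine for toy sizes):

```python
import numpy as np

def mst_edges(D):
    """Prim's algorithm on a full distance matrix; returns the MST edge list."""
    n = D.shape[0]
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or D[i, j] < best[2]):
                    best = (i, j, D[i, j])
        in_tree.add(best[1])
        edges.append((best[0], best[1]))
    return edges

def friedman_rafsky(X, Y):
    """Fraction of MST edges joining the two samples - the cross-edge count
    underlying the Henze-Penrose divergence that the GMI converges to."""
    Z = np.vstack([X, Y])
    D = np.linalg.norm(Z[:, None] - Z[None, :], axis=2)
    nx = len(X)
    cross = sum(1 for i, j in mst_edges(D) if (i < nx) != (j < nx))
    return cross / (len(Z) - 1)
```

Samples from the same distribution mix, giving many cross edges; samples with disjoint supports are joined by exactly one bridging MST edge.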
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland RSS E-Mail Table of Contents Alert
Back to Top