Table of Contents

Entropy, Volume 16, Issue 11 (November 2014), Pages 5601-6194

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Displaying articles 1-34
Open Access Article
What You See Is What You Get
Entropy 2014, 16(11), 6186-6194; https://doi.org/10.3390/e16116186
Received: 23 June 2014 / Revised: 30 October 2014 / Accepted: 4 November 2014 / Published: 21 November 2014
Cited by 8 | Viewed by 2201 | PDF Full-text (156 KB) | HTML Full-text | XML Full-text
Abstract
This paper corrects three widely held misunderstandings about Maxent when used in common sense reasoning: that it is language dependent; that it produces objective facts; and that it subsumes, and so is at least as untenable as, the paradox-ridden Principle of Insufficient Reason. Full article
(This article belongs to the Special Issue Maximum Entropy Applied to Inductive Logic and Reasoning)
Open Access Article
Self-oscillating Water Chemiluminescence Modes and Reactive Oxygen Species Generation Induced by Laser Irradiation; Effect of the Exclusion Zone Created by Nafion
Entropy 2014, 16(11), 6166-6185; https://doi.org/10.3390/e16116166
Received: 20 August 2014 / Revised: 30 October 2014 / Accepted: 17 November 2014 / Published: 21 November 2014
Cited by 7 | Viewed by 2429 | PDF Full-text (1956 KB) | HTML Full-text | XML Full-text
Abstract
Samples of water inside and outside an exclusion zone (EZ), created by Nafion swollen in water, were irradiated at the wavelength λ = 1264 nm, which stimulates the electronic transition of dissolved oxygen from the triplet state to the excited singlet state. After a long latent period, this irradiation induces chemiluminescence self-oscillations in the visible and near-UV spectral range, which last many hours. The effect is EZ-specific: the chemiluminescence intensity from the EZ is about half that from the bulk water, while the latent period is longer for the EZ. Laser irradiation also causes accumulation of H2O2, which is likewise EZ-specific: its concentration inside the EZ is lower than that in the bulk water. These phenomena can be interpreted in terms of a model of decreased O2 content in the EZ due to the increased chemical activity of bisulfite anions (HSO3−), arising from the dissociation of the terminal sulfonate groups of the Nafion. Wavelet transform analysis of the chemiluminescence intensity from the EZ and the bulk water shows that the self-oscillation regimes occurring in the liquid after the latent period are deterministic processes. The chemiluminescence dynamics of the EZ is characterized by a single-frequency self-oscillating regime, whereas for the bulk water the self-oscillation spectrum consists of three spectral bands. Full article
(This article belongs to the Special Issue Entropy and EZ-Water)
Open Access Article
Improving the Authentication Scheme and Access Control Protocol for VANETs
Entropy 2014, 16(11), 6152-6165; https://doi.org/10.3390/e16116152
Received: 10 August 2014 / Revised: 25 September 2014 / Accepted: 4 November 2014 / Published: 19 November 2014
Cited by 3 | Viewed by 2091 | PDF Full-text (250 KB) | HTML Full-text | XML Full-text
Abstract
Privacy and security are very important in vehicular ad hoc networks (VANETs). VANETs are negatively affected by any malicious user's behaviors, such as bogus information and replay attacks on the disseminated messages. Among various security threats, privacy preservation is one of the new challenges of protecting users' private information. Existing authentication protocols to secure VANETs raise challenges, such as certificate distribution and reduction of the strong reliance on tamper-proof devices. In 2011, Yeh et al. proposed PAACP, a portable privacy-preserving authentication and access control protocol for vehicular ad hoc networks. However, PAACP is breakable in the authorization phase and cannot maintain privacy in VANETs. In this paper, we present a cryptanalysis of its attachable blind signature and demonstrate that PAACP's authorized credential (AC) is not secure and private, even if the AC is secretly stored in a tamper-proof device: an eavesdropper can construct an AC from an intercepted blind document and determine who has which access privileges to which service. This paper therefore copes with these challenges and proposes an efficient scheme. We conclude that the improved authentication scheme and access control protocol for VANETs not only resolves the problems that have appeared, but is also more secure and efficient. Full article
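The PAACP attachable blind signature itself is not reproduced in the abstract above. As background for readers unfamiliar with blinding, a textbook RSA blind signature (toy parameters, purely illustrative and far too small for real use — not the PAACP construction) shows the blind/sign/unblind mechanics that such a cryptanalysis targets:

```python
import random

def egcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def modinv(a, n):
    g, x, _ = egcd(a % n, n)
    assert g == 1
    return x % n

# Toy RSA parameters (illustrative only).
p, q = 1009, 1013
n = p * q
phi = (p - 1) * (q - 1)
e = 17
d = modinv(e, phi)

def blind_sign_demo(m):
    # Requester blinds the message with a random factor r coprime to n.
    r = random.randrange(2, n)
    while egcd(r, n)[0] != 1:
        r = random.randrange(2, n)
    blinded = (m * pow(r, e, n)) % n
    # Signer signs the blinded message without learning m.
    s_blinded = pow(blinded, d, n)
    # Requester unblinds: (m^d * r) * r^-1 = m^d mod n.
    return (s_blinded * modinv(r, n)) % n

m = 424242
s = blind_sign_demo(m)
assert pow(s, e, n) == m % n  # a valid signature on the unblinded message
```

The point the abstract makes is that if the blinded document and related material leak, an eavesdropper may reconstruct credentials; the sketch only shows why the signer never sees `m`.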
Open Access Article
Application of Entropy and Fractal Dimension Analyses to the Pattern Recognition of Contaminated Fish Responses in Aquaculture
Entropy 2014, 16(11), 6133-6151; https://doi.org/10.3390/e16116133
Received: 6 October 2014 / Revised: 13 November 2014 / Accepted: 17 November 2014 / Published: 19 November 2014
Cited by 17 | Viewed by 2569 | PDF Full-text (3076 KB) | HTML Full-text | XML Full-text
Abstract
The objective of the work was to develop a non-invasive methodology for image acquisition, processing and nonlinear trajectory analysis of the collective fish response to a stochastic event. Object detection and motion estimation were performed by an optical flow algorithm in order to detect moving fish and simultaneously eliminate background, noise and artifacts. The entropy and the fractal dimension (FD) of the trajectory followed by the centroids of the groups of fish were calculated using Shannon and permutation entropy and the Katz, Higuchi and Katz-Castiglioni FD algorithms, respectively. The methodology was tested on three case groups of European sea bass (Dicentrarchus labrax), two of which were similar (C1, control; C2, tagged fish) and very different from the third (C3, tagged fish submerged in methylmercury-contaminated water). The results indicate that Shannon entropy and the Katz-Castiglioni FD were the most sensitive algorithms, and they proved to be promising tools for the non-invasive identification and quantification of differences in fish responses. In conclusion, we believe that this methodology has the potential to be embedded in an online/real-time architecture for contaminant monitoring programs in the aquaculture industry. Full article
(This article belongs to the Special Issue Entropy in Bioinspired Intelligence)
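As a rough illustration of the trajectory quantifiers named above, the following sketch computes the Katz fractal dimension and a histogram-based Shannon entropy for a hypothetical planar centroid trajectory. The paper's optical-flow pipeline and the Higuchi and Katz-Castiglioni variants are not reproduced; the trajectories here are synthetic stand-ins:

```python
import math, random

def katz_fd(xs, ys):
    """Katz fractal dimension of a planar trajectory."""
    steps = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))]
    L = sum(steps)  # total path length
    d = max(math.hypot(x - xs[0], y - ys[0]) for x, y in zip(xs, ys))  # extent
    n = len(steps)
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of a histogram of the given values."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

# A smooth circular trajectory versus a jittery one (hypothetical centroids).
random.seed(0)
t = [i / 200 * 2 * math.pi for i in range(200)]
smooth = ([math.cos(u) for u in t], [math.sin(u) for u in t])
jitter = ([x + random.gauss(0, 0.05) for x in smooth[0]],
          [y + random.gauss(0, 0.05) for y in smooth[1]])
assert katz_fd(*jitter) > katz_fd(*smooth)  # jitter raises the dimension
```

The same pattern — compute a scalar complexity measure per trajectory, then compare groups — is what allows the contaminated group (C3) to be separated from the controls.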
Open Access Article
Entropy Generation during Turbulent Flow of Zirconia-water and Other Nanofluids in a Square Cross Section Tube with a Constant Heat Flux
Entropy 2014, 16(11), 6116-6132; https://doi.org/10.3390/e16116116
Received: 8 July 2014 / Revised: 11 August 2014 / Accepted: 6 November 2014 / Published: 19 November 2014
Cited by 43 | Viewed by 2409 | PDF Full-text (815 KB) | HTML Full-text | XML Full-text
Abstract
The entropy generation based on the second law of thermodynamics is investigated for turbulent forced convection flow of ZrO2-water nanofluid through a square pipe with constant wall heat flux. Effects of different particle concentrations, inlet conditions and particle sizes on entropy generation of ZrO2-water nanofluid are studied. Contributions from frictional and thermal entropy generations are investigated, and the optimal working condition is analyzed. The results show that the optimal volume concentration of nanoparticles to minimize the entropy generation increases when the Reynolds number decreases. It was also found that the thermal entropy generation increases with the increase of nanoparticle size whereas the frictional entropy generation decreases. Finally, the entropy generation of ZrO2-water was compared with that from other nanofluids (including Al2O3, SiO2 and CuO nanoparticles in water). The results showed that the SiO2 provided the highest entropy generation. Full article
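For reference, the split into thermal and frictional contributions described above is commonly written, for duct flow with constant wall heat flux, in the standard Bejan form. The notation below, and the use of the hydraulic diameter \(D_h\) for the square cross section, are our assumptions rather than the paper's:

\[
S'_{\mathrm{gen}} \;=\; \underbrace{\frac{q'^2}{\pi k T^2\,\mathrm{Nu}}}_{\text{thermal}} \;+\; \underbrace{\frac{32\,\dot{m}^3 f}{\pi^2 \rho^2 T D_h^5}}_{\text{frictional}}
\]

where \(q'\) is the heat transfer rate per unit length, \(k\) the fluid thermal conductivity, \(\mathrm{Nu}\) the Nusselt number, \(f\) the friction factor, \(\dot{m}\) the mass flow rate, \(\rho\) the density and \(T\) the bulk temperature. The trade-off discussed in the abstract follows directly: raising the particle concentration improves \(\mathrm{Nu}\) (lowering the first term) but raises viscosity and \(f\) (raising the second), so an optimum concentration exists.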
Open Access Article
Image Fusion Based on the \(\Delta^{-1}-TV_0\) Energy Function
Entropy 2014, 16(11), 6099-6115; https://doi.org/10.3390/e16116099
Received: 9 October 2014 / Revised: 11 November 2014 / Accepted: 12 November 2014 / Published: 18 November 2014
Cited by 1 | Viewed by 1772 | PDF Full-text (3781 KB) | HTML Full-text | XML Full-text
Abstract
This article proposes a \(\Delta^{-1}-TV_0\) energy function to fuse a multi-spectral image with a panchromatic image. The proposed energy function consists of two components, a \(TV_0\) component and a \(\Delta^{-1}\) component. The \(TV_0\) term uses a sparsity prior to enhance detailed spatial information, while the \(\Delta^{-1}\) term removes the block effect of the multi-spectral image. Furthermore, as the proposed energy function is non-convex, we adopt an alternating minimization algorithm together with \(L_0\) gradient minimization to solve it. Experimental results demonstrate the improved performance of the proposed method over existing methods. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Review
How to Read Probability Distributions as Statements about Process
Entropy 2014, 16(11), 6059-6098; https://doi.org/10.3390/e16116059
Received: 22 October 2014 / Revised: 13 November 2014 / Accepted: 14 November 2014 / Published: 18 November 2014
Cited by 12 | Viewed by 2530 | PDF Full-text (322 KB) | HTML Full-text | XML Full-text
Abstract
Probability distributions can be read as simple expressions of information. Each continuous probability distribution describes how information changes with magnitude. Once one learns to read a probability distribution as a measurement scale of information, opportunities arise to understand the processes that generate the commonly observed patterns. Probability expressions may be parsed into four components: the dissipation of all information, except the preservation of average values, taken over the measurement scale that relates changes in observed values to changes in information, and the transformation from the underlying scale on which information dissipates to alternative scales on which probability pattern may be expressed. Information invariances set the commonly observed measurement scales and the relations between them. In particular, a measurement scale for information is defined by its invariance to specific transformations of underlying values into measurable outputs. Essentially all common distributions can be understood within this simple framework of information invariance and measurement scale. Full article
(This article belongs to the Section Statistical Physics)
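The idea that dissipating all information except an average value fixes the distribution can be illustrated numerically: maximizing entropy on a finite support subject to a mean constraint yields an exponential (Boltzmann-like) form. A minimal sketch, with an illustrative support and mean of our choosing (not from the review):

```python
import math

def maxent_given_mean(support, target_mean):
    """Maximum-entropy distribution on a finite support with a fixed mean.
    The solution has the form p_i ∝ exp(-lam * x_i); lam is found by bisection."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z
    lo, hi = -5.0, 5.0  # bracket wide enough for this demo
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_for(mid) > target_mean:  # mean_for decreases in lam
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

support = list(range(21))
p = maxent_given_mean(support, 4.0)
assert abs(sum(x * pi for x, pi in zip(support, p)) - 4.0) < 1e-6
# Any other distribution with the same mean carries extra information,
# i.e. has lower entropy -- e.g. a two-point mixture at 0 and 8:
q = [0.5 if x in (0, 8) else 0.0 for x in support]
assert entropy(p) > entropy(q)
```

Changing the constrained quantity (mean of \(x\), mean of \(\log x\), mean of \(x^2\), …) changes the measurement scale and hence the family of distributions obtained, which is the review's central theme.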
Open Access Article
Informational Non-Differentiable Entropy and Uncertainty Relations in Complex Systems
Entropy 2014, 16(11), 6042-6058; https://doi.org/10.3390/e16116042
Received: 17 July 2014 / Revised: 9 November 2014 / Accepted: 12 November 2014 / Published: 18 November 2014
Cited by 9 | Viewed by 1817 | PDF Full-text (255 KB) | HTML Full-text | XML Full-text
Abstract
Considering that the movements of complex system entities take place on continuous but non-differentiable curves, concepts such as non-differentiable entropy, informational non-differentiable entropy and informational non-differentiable energy are introduced. First, the dynamics equations of the complex system entities (Schrödinger-type or fractal hydrodynamic-type) are obtained. The latter yields a specific fractal potential, which generates uncertainty relations through the non-differentiable entropy. Next, the correlation between informational non-differentiable entropy and informational non-differentiable energy implies specific uncertainty relations, through a maximization principle for the informational non-differentiable entropy at a constant value of the informational non-differentiable energy. Finally, for a harmonic oscillator, the constant value of the informational non-differentiable energy is equivalent to a quantification condition. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Communication
Effect of Atmospheric Ions on Interfacial Water
Entropy 2014, 16(11), 6033-6041; https://doi.org/10.3390/e16116033
Received: 4 August 2014 / Revised: 30 October 2014 / Accepted: 10 November 2014 / Published: 18 November 2014
Viewed by 1960 | PDF Full-text (1131 KB) | HTML Full-text | XML Full-text
Abstract
The effect of atmospheric positivity on the electrical properties of interfacial water was explored. Interfacial, or exclusion zone (EZ) water was created in the standard way, next to a sheet of Nafion placed horizontally at the bottom of a water-filled chamber. Positive atmospheric ions were created from a high voltage source placed above the chamber. Electrical potential distribution in the interfacial water was measured using microelectrodes. We found that beyond a threshold, the positive ions diminished the magnitude of the negative electrical potential in the interfacial water, sometimes even turning it to positive. Additionally, positive ions produced by an air conditioner were observed to generate similar effects; i.e., the electrical potential shifted in the positive direction but returned to negative when the air conditioner stopped blowing. Sometimes, the effect of the positive ions from the air conditioner was strong enough to destroy the structure of interfacial water by turning the potential decidedly positive. Thus, positive air ions can compromise interfacial water negativity and may explain the known negative impact of positive ions on health. Full article
(This article belongs to the Special Issue Entropy and EZ-Water)
Open Access Article
Entropy Generation through a Deterministic Boundary-Layer Structure in Warm Dense Plasma
Entropy 2014, 16(11), 6006-6032; https://doi.org/10.3390/e16116006
Received: 27 October 2014 / Revised: 12 November 2014 / Accepted: 13 November 2014 / Published: 17 November 2014
Cited by 2 | Viewed by 1664 | PDF Full-text (983 KB) | HTML Full-text | XML Full-text
Abstract
The computational prediction of nonlinear interactive instabilities in three-dimensional boundary layers is obtained for a warm dense plasma boundary layer environment. The method is applied to the Richtmyer–Meshkov flow over the rippled surface of a laser-driven warm dense plasma experiment. Coupled, nonlinear spectral velocity equations of Lorenz form are solved with the mean boundary-layer velocity gradients as input control parameters. The nonlinear time series solutions indicate that after an induction period, a sharp instability occurs in the solutions. The power spectral density yields the available kinetic energy dissipation rates within the instability. The application of the singular value decomposition technique to the nonlinear time series solution yields empirical entropies. Empirical entropic indices are then obtained from these entropies. The intermittency exponents obtained from the entropic indices thus allow the computation of the entropy generation through the deterministic structure to the final dissipation of the initial fluctuating kinetic energy into background thermal energy, representing the resulting entropy increase. Full article
Open Access Article
A Quantitative Analysis of an EEG Epileptic Record Based on Multiresolution Wavelet Coefficients
Entropy 2014, 16(11), 5976-6005; https://doi.org/10.3390/e16115976
Received: 22 August 2014 / Revised: 7 November 2014 / Accepted: 11 November 2014 / Published: 17 November 2014
Cited by 6 | Viewed by 2069 | PDF Full-text (806 KB) | HTML Full-text | XML Full-text
Abstract
The characterization of the dynamics associated with electroencephalogram (EEG) signals by combining an orthogonal discrete wavelet transform analysis with quantifiers originating from information theory is reviewed. In addition, an extension of this methodology based on multiresolution quantities, called wavelet leaders, is presented. In particular, the temporal evolution of Shannon entropy and the statistical complexity evaluated with different sets of multiresolution wavelet coefficients are considered. Both methodologies are applied to the quantitative analysis of an EEG time series from a tonic-clonic epileptic seizure, and comparative results are presented. In particular, even though both methods describe the dynamical changes of the EEG time series, the one based on wavelet leaders presents a better time resolution. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography)
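A minimal sketch of the first methodology, wavelet-based Shannon entropy: the relative energies of the multiresolution bands feed a normalized entropy. A plain Haar transform on a synthetic signal stands in here for the paper's orthogonal DWT of EEG data; the wavelet-leader extension is not covered:

```python
import math, random

def haar_dwt(signal):
    """One level of the Haar DWT: (approximation, detail) coefficients."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal) - 1, 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2)
         for i in range(0, len(signal) - 1, 2)]
    return a, d

def wavelet_entropy(signal, levels=4):
    """Normalized Shannon entropy of the relative wavelet band energies."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(c * c for c in detail))
    energies.append(sum(c * c for c in approx))
    total = sum(energies)
    p = [e / total for e in energies if e > 0]
    return -sum(pi * math.log(pi) for pi in p) / math.log(len(energies))

# A slow tone concentrates energy in one band; white noise spreads it out.
random.seed(1)
n = 1024
tone = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]
assert wavelet_entropy(noise) > wavelet_entropy(tone)
```

Tracking this quantity over sliding windows of an EEG record is what yields the temporal evolution the abstract describes: ordered (seizure-like) epochs give low values, noisy background gives high values.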
Open Access Article
Self-Organization at Aqueous Colloid-Membrane Interfaces and an Optical Method to Measure the Kinetics of Exclusion Zone Formation
Entropy 2014, 16(11), 5954-5975; https://doi.org/10.3390/e16115954
Received: 14 July 2014 / Revised: 9 November 2014 / Accepted: 11 November 2014 / Published: 17 November 2014
Cited by 2 | Viewed by 2394 | PDF Full-text (2827 KB) | HTML Full-text | XML Full-text
Abstract
Exclusion zone (EZ) formation at water-membrane interfaces was studied via bright- and dark-field microscopy. Various aqueous colloids including suspensions of charged microspheres, silicon dioxide particles, and raw whole milk were studied with Nafion® hydrophilic membranes. Interfacial formations observed included EZs and more complex patterns including striations, double layers, banding, dendritic aggregates of particles, and double-stranded structures resembling Birkeland current filaments in cold plasmas. A complex three-dimensional dynamic structure and continuous flow patterns persist in and around EZs, maintaining movement of the colloidal particles even after EZs are fully formed, for which a schematic is proposed. Since radiant energy is critical for EZ formation, we hypothesize that these interfacial phenomena are non-equilibrium dissipative structures that self-organize and self-maintain due to ongoing dynamic processes that may involve hydrodynamic interactions. Another experimental approach undertaken involved the construction of a microscope flow cell to measure the kinetics of EZ formation using sequential microphotography analyzed with macro-programmed ImageJ software to investigate effects of different types of conditioned water. No significant difference was found between spring water and the same water treated by a magnetic vortexer. A significant difference was found for municipal tap water compared to electrolyzed alkaline tap water from the same source. Full article
(This article belongs to the Special Issue Entropy and EZ-Water)
Open Access Article
Performance Analysis of a Coal-Fired External Combustion Compressed Air Energy Storage System
Entropy 2014, 16(11), 5935-5953; https://doi.org/10.3390/e16115935
Received: 18 August 2014 / Revised: 30 October 2014 / Accepted: 7 November 2014 / Published: 13 November 2014
Cited by 4 | Viewed by 2312 | PDF Full-text (853 KB) | HTML Full-text | XML Full-text
Abstract
Compressed air energy storage (CAES) is one of the large-scale energy storage technologies utilized to provide effective power peak load shaving. In this paper, a coal-fired external combustion CAES, which only uses coal as fuel, is proposed. Unlike the traditional CAES, in the proposed system the combustion chamber is substituted with an external combustion heater in which high-pressure air is heated before entering the turbines to expand. A thermodynamic analysis of the proposed CAES is conducted on the basis of a process simulation. The overall efficiency and the efficiency of electricity storage are 48.37% and 81.50%, respectively. Furthermore, an exergy analysis is carried out; the exergy efficiency of the proposed system is 47.22%. The results show that the proposed CAES has more performance advantages than the Huntorf CAES (the first CAES plant in the world). A techno-economic analysis of the coal-fired CAES shows that the cost of electricity (COE) is $106.33/MWh, which is relatively high in the rapidly developing power market. However, CAES is more likely to become competitive if the power grid is improved and suitable geographical conditions for storage caverns are available. This research provides a new approach for developing CAES in China. Full article
Open Access Article
Effect of an Internal Heat Exchanger on Performance of the Transcritical Carbon Dioxide Refrigeration Cycle with an Expander
Entropy 2014, 16(11), 5919-5934; https://doi.org/10.3390/e16115919
Received: 1 August 2014 / Revised: 24 September 2014 / Accepted: 29 October 2014 / Published: 10 November 2014
Cited by 8 | Viewed by 2550 | PDF Full-text (390 KB) | HTML Full-text | XML Full-text
Abstract
The effect of the internal heat exchanger (IHE) on the performance of the transcritical carbon dioxide refrigeration cycle with an expander is analyzed theoretically on the basis of the first and second laws of thermodynamics. The parameters affecting system efficiency, such as heat rejection pressure, gas cooler outlet temperature, evaporating temperature, expander isentropic efficiency and IHE effectiveness, are investigated. It is found that adding an IHE to the carbon dioxide refrigeration cycle with an expander increases the specific cooling capacity and compression work, and decreases the optimum heat rejection pressure and the expander output power. An IHE does not always improve the system performance in the refrigeration cycle with an expander. The throttle valve cycle with IHE provides a 5.6% to 17% increase in maximum COP compared to that of the basic cycle. For the ideal expander cycle with IHE, the maximum COP is approximately 12.3% to 16.1% lower than that of the cycle without IHE. Whether the IHE improves the energy efficiency of the cycle depends on the isentropic efficiency of the expander. From the viewpoint of energy efficiency, the IHE is therefore advisable for the refrigeration cycle with an expander only at lower expander isentropic efficiencies or higher gas cooler exit temperatures. Full article
Open Access Article
Comparative Study of Entropy Sensitivity to Missing Biosignal Data
Entropy 2014, 16(11), 5901-5918; https://doi.org/10.3390/e16115901
Received: 3 July 2014 / Revised: 5 August 2014 / Accepted: 3 November 2014 / Published: 10 November 2014
Cited by 13 | Viewed by 2534 | PDF Full-text (227 KB) | HTML Full-text | XML Full-text
Abstract
Entropy estimation metrics have become a widely used method to identify subtle changes or hidden features in biomedical records. These methods have been more effective than conventional linear techniques in a number of signal classification applications, especially the healthy-pathological segmentation dichotomy. Nevertheless, a thorough characterization of these measures, namely, how to match metric and signal features, is still lacking. This paper studies a specific characterization problem: the influence of missing samples in biomedical records. The assessment is conducted using four of the most popular entropy metrics: Approximate Entropy, Sample Entropy, Fuzzy Entropy, and Detrended Fluctuation Analysis. The rationale of this study is that missing samples are a signal disturbance that can arise in many cases: signal compression, non-uniform sampling, or data transmission stages. It is of great interest to determine whether these real situations can impair the capability of segmenting signal classes using such metrics. The experiments employed several biosignals: electroencephalograms, gait records, and RR time series. Samples of these signals were systematically removed, and the entropy computed for each case. The results showed that these metrics are robust against missing samples: with a data loss percentage of 50% or even higher, the methods were still able to distinguish among signal classes. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography)
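The removal experiment can be sketched for one of the four metrics: a brute-force Sample Entropy on a synthetic noisy periodic record (standing in for the EEG, gait and RR data), computed before and after randomly dropping 50% of the samples:

```python
import math, random

def sample_entropy(x, m=2, r=None):
    """Sample Entropy SampEn(m, r) of a time series (brute force, O(n^2))."""
    if r is None:
        mean = sum(x) / len(x)
        sd = math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
        r = 0.2 * sd  # common tolerance choice: 20% of the standard deviation
    def count_matches(mm):
        n = len(x) - mm
        c = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    B, A = count_matches(m), count_matches(m + 1)
    return -math.log(A / B)

random.seed(2)
n = 600
# A noisy periodic signal (a crude stand-in for an RR or EEG record).
sig = [math.sin(2 * math.pi * i / 40) + random.gauss(0, 0.1) for i in range(n)]
full = sample_entropy(sig)
# Randomly drop 50% of the samples and recompute on what remains.
kept = [v for v in sig if random.random() < 0.5]
half = sample_entropy(kept)
assert full > 0 and half > 0  # still computable on the decimated record
```

The paper's claim is stronger than this sketch: not merely that the metric remains computable, but that the relative ordering between signal classes survives heavy decimation.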
Open Access Article
Heat Transfer Characteristics of a Speaker Using Nano-Sized Ferrofluid
Entropy 2014, 16(11), 5891-5900; https://doi.org/10.3390/e16115891
Received: 17 May 2014 / Revised: 17 October 2014 / Accepted: 6 November 2014 / Published: 10 November 2014
Cited by 7 | Viewed by 2179 | PDF Full-text (1357 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this article is to study the heat transfer characteristics of the voice-coil and permanent magnet of a speaker using nano-sized ferrofluid. In order to investigate the temperature characteristics of the speaker, the speaker power ratings, ambient temperatures of the test chamber, chamber sizes and input signals were tested. As a result, the temperatures of the voice-coil and magnet of the speaker increased with time due to the thermal linearity. The temperature of the voice-coil increased with decreasing input signal but with increasing nominal power rating. The voice-coil temperature of Speaker 1 using 650 μL of ferrofluid was, at an elapsed time of 10,000 s, 24.5% lower than that of the general Speaker 1. In addition, the proper size selection of the enclosure is an important design factor to ensure the sound quality and effective heat transfer of the speaker. Full article
Open Access Article
J.J. Thomson and Duhem’s Lagrangian Approaches to Thermodynamics
Entropy 2014, 16(11), 5876-5890; https://doi.org/10.3390/e16115876
Received: 21 August 2014 / Revised: 28 October 2014 / Accepted: 4 November 2014 / Published: 6 November 2014
Cited by 2 | Viewed by 2137 | PDF Full-text (722 KB) | HTML Full-text | XML Full-text
Abstract
In the last decades of the nineteenth century, different attitudes towards mechanics led to two main theoretical approaches to thermodynamics: an abstract and phenomenological approach, and a very different approach in terms of microscopic models. In reality, some intermediate solutions were also put forward. Helmholtz and Planck relied on a mere complementarity between mechanical and thermal variables in the expressions of state functions, and Oettingen explored the possibility of a more demanding symmetry between mechanical and thermal capacities. Planck refused microscopic interpretations of heat, whereas Helmholtz also had recourse to a Lagrangian approach involving fast hidden motions. J.J. Thomson incorporated the two mechanical attitudes in his theoretical framework and put forward a very general theory for physical and chemical processes. He made use of two sets of Lagrangian coordinates that corresponded to two components of kinetic energy: alongside macroscopic energy, there was a microscopic energy, which was associated with the absolute temperature. Duhem put forward a bold design of unification between physics and chemistry, based on the two principles of thermodynamics. From the mathematical point of view, his thermodynamics or energetics consisted of a Lagrangian generalization of mechanics that could potentially describe every kind of irreversible process, explosive chemical reactions included. Full article
Open AccessArticle
A New Quantum f-Divergence for Trace Class Operators in Hilbert Spaces
Entropy 2014, 16(11), 5853-5875; https://doi.org/10.3390/e16115853
Received: 13 October 2014 / Revised: 30 October 2014 / Accepted: 3 November 2014 / Published: 6 November 2014
Cited by 1 | Viewed by 1764 | PDF Full-text (274 KB) | HTML Full-text | XML Full-text
Abstract
A new quantum f-divergence for trace class operators in Hilbert Spaces is introduced. It is shown that for normalised convex functions it is nonnegative. Some upper bounds are provided. Applications for some classes of convex functions of interest are also given. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open AccessArticle
New Insights into the Fractional Order Diffusion Equation Using Entropy and Kurtosis
Entropy 2014, 16(11), 5838-5852; https://doi.org/10.3390/e16115838
Received: 17 October 2014 / Accepted: 31 October 2014 / Published: 6 November 2014
Cited by 20 | Viewed by 2484 | PDF Full-text (503 KB) | HTML Full-text | XML Full-text
Abstract
Fractional order derivative operators offer a concise way to model multi-scale, heterogeneous and non-local systems. Specifically, in magnetic resonance imaging, there has been recent work to apply fractional order derivatives to model the non-Gaussian diffusion signal, which is ubiquitous in the movement of water protons within biological tissue. To provide a new perspective for establishing the utility of fractional order models, we apply entropy for the case of anomalous diffusion governed by a fractional order diffusion equation generalized in space and in time. This fractional order representation, in the form of the Mittag–Leffler function, gives an entropy minimum for the integer case of Gaussian diffusion and greater values of spectral entropy for non-integer values of the space and time derivatives. Furthermore, we consider kurtosis, defined as the normalized fourth moment, as another probabilistic description of the fractional time derivative. Finally, we demonstrate the implementation of anomalous diffusion, entropy and kurtosis measurements in diffusion weighted magnetic resonance imaging in the brain of a chronic ischemic stroke patient. Full article
(This article belongs to the Special Issue Complex Systems and Nonlinear Dynamics)
Open AccessArticle
Plant Friendly Input Design for Parameter Estimation in an Inertial System with Respect to D-Efficiency Constraints
Entropy 2014, 16(11), 5822-5837; https://doi.org/10.3390/e16115822
Received: 27 March 2014 / Revised: 14 July 2014 / Accepted: 29 October 2014 / Published: 6 November 2014
Cited by 3 | Viewed by 1823 | PDF Full-text (820 KB) | HTML Full-text | XML Full-text
Abstract
In practice, system identification is carried out by perturbing processes or plants under operation, which is why in many industrial applications a plant-friendly input signal is preferred. The goal of this study is to design the optimal input signal to be employed in the identification experiment, and to examine the relationship between the friendliness index of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. The objective function was formulated as the maximisation of the determinant of the Fisher information matrix (D-optimality), expressed in conventional Bolza form. Since under such experimental conditions one can only speak of D-suboptimality, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from a plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
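The D-optimality and D-efficiency notions above can be made concrete with a small sketch. The code below computes the Fisher information matrix for a hypothetical first-order (inertial) model y[k] = a·y[k−1] + b·u[k−1] + noise, and the D-efficiency of one input design relative to another; the model, parameter values and the two candidate signals are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def fisher_matrix(u, a=0.8, b=0.5, sigma=1.0):
    # Output sensitivities w.r.t. (a, b) obey the same recursion as the
    # model itself; the Fisher information matrix is (1/sigma^2) * S.T @ S.
    y, dy_da, dy_db = 0.0, 0.0, 0.0
    rows = []
    for uk in u:
        dy_da = y + a * dy_da    # sensitivity d y[k] / d a
        dy_db = uk + a * dy_db   # sensitivity d y[k] / d b
        y = a * y + b * uk
        rows.append([dy_da, dy_db])
    S = np.array(rows)
    return (S.T @ S) / sigma**2

def d_efficiency(M, M_ref):
    # D-efficiency of design M relative to a reference design M_ref:
    # (det M / det M_ref)^(1/p), p = number of parameters
    p = M.shape[0]
    return (np.linalg.det(M) / np.linalg.det(M_ref)) ** (1.0 / p)

rng = np.random.default_rng(0)
strong = np.sign(rng.standard_normal(200))  # aggressive ±1 excitation
gentle = 0.2 * strong                       # plant-friendly, scaled down
eff = d_efficiency(fisher_matrix(gentle), fisher_matrix(strong))
```

Because the model is linear in the input, scaling the input by 0.2 scales the information matrix by 0.2², so the D-efficiency of the gentler design is exactly 0.04: a friendlier input buys less information, which is precisely the trade-off the paper quantifies.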
Open AccessArticle
Choked Flow Characteristics of Subcritical Refrigerant Flowing Through Converging-Diverging Nozzles
Entropy 2014, 16(11), 5810-5821; https://doi.org/10.3390/e16115810
Received: 10 August 2014 / Revised: 17 October 2014 / Accepted: 29 October 2014 / Published: 4 November 2014
Cited by 2 | Viewed by 2138 | PDF Full-text (1092 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents experimental results on the choked flow characteristics of a subcritical refrigerant flowing through a converging-diverging nozzle. A test nozzle with a throat diameter of 2 mm was designed and developed. The influence of operating conditions on the choked flow characteristics, i.e., the pressure profile and mass flow rate under choked flow conditions, is investigated. The results indicate that choked flow occurs when a subcritical refrigerant flows through nozzles under the normal working conditions of air-conditioners or heat pumps. The pressure drop near the throat is about 80% of the total pressure drop through the nozzle. The critical mass flux is about 19,800 ~ 24,000 kg/(s·m²). The critical mass flow rate increases with increasing upstream pressure and subcooling. Furthermore, the relative errors between the model predictions and the experimental results for the critical mass flux are also presented. The deviations of the homogeneous equilibrium model and the Henry-Fauske model from the experimental values are −35% ~ 5% and 15% ~ 35%, respectively. Full article
Open AccessArticle
Global Stability Analysis of a Curzon–Ahlborn Heat Engine under Different Regimes of Performance
Entropy 2014, 16(11), 5796-5809; https://doi.org/10.3390/e16115796
Received: 27 August 2014 / Revised: 26 October 2014 / Accepted: 29 October 2014 / Published: 4 November 2014
Cited by 5 | Viewed by 2029 | PDF Full-text (1022 KB) | HTML Full-text | XML Full-text
Abstract
We present a global stability analysis of a Curzon–Ahlborn heat engine considering different regimes of performance. Stability theory is used to construct Lyapunov functions proving the asymptotic stability of the behavior around the steady state of the internal temperatures. We provide a general analytic procedure for the description of the global stability by considering internal irreversibilities and a linear heat transfer law at the thermal couplings. The conditions for global stability are explored for three regimes of performance: maximum power (MP), efficient power (EP) and the so-called ecological function (EF). Moreover, the analytical results are corroborated by numerical integrations, which fully validate the properties of the global asymptotic stability. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle
Multiscale Compression Entropy of Microvascular Blood Flow Signals: Comparison of Results from Laser Speckle Contrast and Laser Doppler Flowmetry Data in Healthy Subjects
Entropy 2014, 16(11), 5777-5795; https://doi.org/10.3390/e16115777
Received: 7 October 2014 / Revised: 27 October 2014 / Accepted: 30 October 2014 / Published: 4 November 2014
Cited by 11 | Viewed by 2505 | PDF Full-text (293 KB) | HTML Full-text | XML Full-text
Abstract
Microvascular perfusion is commonly used to study the peripheral cardiovascular system. Microvascular blood flow can be continuously and non-invasively monitored with laser speckle contrast imaging (LSCI) or with laser Doppler flowmetry (LDF). These two optical-based techniques give perfusion values in arbitrary units. Our goal is to better understand the perfusion time series given by each technique. For this purpose, we propose a nonlinear complexity analysis of LSCI and LDF time series recorded simultaneously in nine healthy subjects. This is performed through the computation of their multiscale compression entropy. The results obtained with LSCI time series computed from different regions of interest (ROI) sizes are examined. Our findings show that, for LSCI and LDF time series, compression entropy values are less than one for all of the scales analyzed. This suggests that, at all scales, there are repetitive structures within the data fluctuations. Moreover, at the largest scales studied, LDF signals seem to have structures that are different from those of Gaussian white noise. By contrast, this is not observed for LSCI time series computed from small ROI sizes. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics)
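The compression entropy used above can be sketched generically: coarse-grain the series at a given scale, quantize it, and compare compressed length to raw length; values below one indicate repetitive structure. The byte quantization depth and the choice of zlib as compressor are our assumptions for illustration, not necessarily the authors' choices.

```python
import zlib
import numpy as np

def compression_entropy(x, scale=1, levels=64):
    """Compressed-to-raw length ratio of the coarse-grained, quantized
    series; ratios below 1 indicate repetitive structure in the data."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    coarse = x[:n].reshape(-1, scale).mean(axis=1)  # multiscale step
    q = np.interp(coarse, (coarse.min(), coarse.max()),
                  (0, levels - 1)).astype(np.uint8)
    raw = q.tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)
```

Computing this ratio for increasing `scale` values gives the multiscale profile; a strongly periodic signal compresses far better (lower ratio) than white noise at the same scale.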
Open AccessArticle
Inferring a Drive-Response Network from Time Series of Topological Measures in Complex Networks with Transfer Entropy
Entropy 2014, 16(11), 5753-5776; https://doi.org/10.3390/e16115753
Received: 19 August 2014 / Revised: 5 October 2014 / Accepted: 28 October 2014 / Published: 3 November 2014
Cited by 6 | Viewed by 2420 | PDF Full-text (3117 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Topological measures are crucial to describe, classify and understand complex networks. Many measures have been proposed to characterize specific features of specific networks, but the relationships among these measures remain unclear. Since pulling together networks from different domains for statistical analysis might lead to incorrect conclusions, we conduct our investigation with data observed from the same network in the form of simultaneously measured time series. We synthesize a transfer entropy-based framework to quantify the relationships among topological measures and then provide a holistic picture of these measures by inferring a drive-response network. Techniques from Symbolic Transfer Entropy, Effective Transfer Entropy and Partial Transfer Entropy are combined to deal with challenges such as non-stationary time series, finite sample effects and indirect effects. We resort to kernel density estimation to assess the significance of the results based on surrogate data. The framework is applied to study 20 measures across 2779 records in the Technology Exchange Network, and the results are consistent with some existing knowledge. With the drive-response network, we evaluate the influence of each measure by calculating its strength and cluster the measures into three classes, i.e., driving measures, responding measures and standalone measures, according to the network communities. Full article
(This article belongs to the Special Issue Transfer Entropy) Printed Edition available
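A minimal sketch of the symbolic transfer entropy ingredient of such a framework follows, assuming ordinal-pattern symbolization of order 2 and a unit lag; the effective/partial corrections and the surrogate-data significance testing used in the paper are omitted, and the function names are ours.

```python
import math
import random
from collections import Counter

def symbols(x, order=2):
    # Ordinal-pattern (rank) symbolization of overlapping windows
    return [tuple(sorted(range(order), key=lambda k: x[i + k]))
            for i in range(len(x) - order + 1)]

def transfer_entropy(src, dst, order=2):
    """Plug-in estimate of TE(src -> dst) on symbolized series:
    sum over p(d', d, s) * log[ p(d'|d, s) / p(d'|d) ]."""
    s, d = symbols(src, order), symbols(dst, order)
    n = min(len(s), len(d)) - 1
    c_dds = Counter((d[t + 1], d[t], s[t]) for t in range(n))
    c_ds = Counter((d[t], s[t]) for t in range(n))
    c_dd = Counter((d[t + 1], d[t]) for t in range(n))
    c_d = Counter(d[t] for t in range(n))
    te = 0.0
    for (d1, d0, s0), cnt in c_dds.items():
        te += (cnt / n) * math.log(
            (cnt * c_d[d0]) / (c_ds[(d0, s0)] * c_dd[(d1, d0)]))
    return te
```

On a pair of series where one drives the other with a one-step lag, the estimate is clearly asymmetric: TE(driver → response) exceeds TE(response → driver), which is what makes the inferred network directed.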
Open AccessArticle
Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information
Entropy 2014, 16(11), 5738-5752; https://doi.org/10.3390/e16115738
Received: 13 July 2014 / Revised: 26 September 2014 / Accepted: 11 October 2014 / Published: 3 November 2014
Cited by 8 | Viewed by 2490 | PDF Full-text (867 KB) | HTML Full-text | XML Full-text
Abstract
The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters. Full article
(This article belongs to the Special Issue Entropy in Hydrology)
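The mutual information half of this approach can be sketched with a toy two-parameter model standing in for SWMM (the model and its parameters are hypothetical): stratified LHS-style samples are drawn, and a histogram estimate of mutual information ranks each parameter's influence on the output.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X;Y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)  # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)  # marginal of Y
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

# LHS-style stratified samples of two parameters of a toy model
rng = np.random.default_rng(42)
n = 5000
a = rng.permutation((np.arange(n) + rng.random(n)) / n)
b = rng.permutation((np.arange(n) + rng.random(n)) / n)
output = np.sin(3 * a) + 0.1 * b  # strongly driven by a, weakly by b
```

Because the dependence on `a` is non-monotonic, a rank correlation would understate it, while the mutual information estimate still ranks `a` well above `b`.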
Open AccessReview
Applying Information Theory to Neuronal Networks: From Theory to Experiments
Entropy 2014, 16(11), 5721-5737; https://doi.org/10.3390/e16115721
Received: 3 June 2014 / Revised: 27 July 2014 / Accepted: 28 October 2014 / Published: 3 November 2014
Cited by 4 | Viewed by 3196 | PDF Full-text (992 KB) | HTML Full-text | XML Full-text
Abstract
Information theory is increasingly used to analyze complex, self-organizing processes on networks, predominantly in analytical and numerical studies. Perhaps one of the most paradigmatic complex systems is a network of neurons, in which cognition arises from the information storage, transfer and processing among individual neurons. In this article, we review experimental techniques suitable for validating information-theoretical predictions in simple neural networks, as well as for generating new hypotheses. Specifically, we focus on techniques that may be used to measure both network (microcircuit) anatomy and neuronal activity simultaneously. This is needed to study the role of the network structure in the emergent collective dynamics, which is one of the reasons to study the characteristics of information processing. We discuss in detail two suitable techniques, namely calcium imaging and the application of multi-electrode arrays to simple neural networks in culture, and discuss their advantages and limitations in a manner accessible to non-experts. In particular, we show that each technique induces a qualitatively different type of error on the measured mutual information. The ultimate goal of this work is to bridge the gap between theorists and experimentalists in their shared goal of understanding the behavior of networks of neurons. Full article
(This article belongs to the Special Issue Entropy in Human Brain Networks)
Open AccessArticle
The Case for Tetrahedral Oxy-subhydride (TOSH) Structures in the Exclusion Zones of Anchored Polar Solvents Including Water
Entropy 2014, 16(11), 5712-5720; https://doi.org/10.3390/e16115712
Received: 19 May 2014 / Revised: 9 September 2014 / Accepted: 22 October 2014 / Published: 3 November 2014
Viewed by 2049 | PDF Full-text (1325 KB) | HTML Full-text | XML Full-text
Abstract
We hypothesize a mechanistic model of how negatively-charged exclusion zones (EZs) are created. While the growth of EZs is known to be associated with the absorption of ambient photonic energy, the molecular dynamics giving rise to this process need greater elucidation. We believe they arise due to the formation of oxy-subhydride structures (OH)(H2O)4 with a tetrahedral (sp3) (OH)(H2O)3 core. Five experimental data sets derived by previous researchers were assessed in this regard: (1) water-derived EZ light absorbance at specific infrared wavelengths, (2) EZ negative potential in water and ethanol, (3) maximum EZ light absorbance at 270 nm ultraviolet wavelength, (4) ability of dimethyl sulphoxide but not ether to form an EZ, and (5) transitory nature of melting ice derived EZs. The proposed tetrahedral oxy-subhydride structures (TOSH) appear to adequately account for all of the experimental evidence derived from water or other polar solvents. Full article
(This article belongs to the Special Issue Entropy and EZ-Water)
Open AccessArticle
Sample Entropy and Traditional Measures of Heart Rate Dynamics Reveal Different Modes of Cardiovascular Control During Low Intensity Exercise
Entropy 2014, 16(11), 5698-5711; https://doi.org/10.3390/e16115698
Received: 24 September 2014 / Revised: 15 October 2014 / Accepted: 27 October 2014 / Published: 31 October 2014
Cited by 22 | Viewed by 2706 | PDF Full-text (696 KB) | HTML Full-text | XML Full-text
Abstract
Nonlinear parameters of heart rate variability (HRV) have proven their prognostic value in clinical settings, but their physiological background is not well established. We assessed the effects of heart-rate-matched low intensity isometric (ISO) and dynamic (DYN) exercise of the lower limbs on traditional and entropy measures of HRV. Owing to the different afferent feedback under DYN and ISO, a distinct autonomic response, mirrored by HRV measures, was hypothesized. Five-minute inter-beat interval measurements of 43 healthy males (26.0 ± 3.1 years) were performed during rest, DYN and ISO in a randomized order. Blood pressures and rate pressure product were higher during ISO vs. DYN (p < 0.001). The HRV indicators SDNN as well as low and high frequency power were significantly higher during ISO (p < 0.001 for all measures). Compared to DYN, sample entropy (SampEn) was lower during ISO (p < 0.001). In conclusion, contraction mode itself is a significant modulator of the autonomic cardiovascular response to exercise. Compared to DYN, ISO evokes a stronger blood pressure response and an enhanced interplay between both autonomic branches. Nonlinear HRV measures indicate a more regular behavior under ISO. The results support the view that the reciprocal antagonism is only one of many modes of autonomic heart rate control. Under different conditions, the identical "end product" heart rate might be achieved by other modes, such as sympathovagal co-activation, as well. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics)
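The sample entropy measure reported above can be sketched generically (template length m = 2, tolerance r = 0.2·SD are the conventional defaults, not necessarily the authors' exact settings):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn = -ln(A/B): A and B count matching template pairs of
    length m+1 and m, within tolerance r under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        matches = 0
        for i in range(len(templates) - 1):
            # distances to all later templates; self-matches excluded
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            matches += int(np.sum(d <= r))
        return matches
    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```

A regular, nearly periodic signal yields a low SampEn while an irregular one yields a high value, which is the direction of the ISO vs. DYN difference the study reports.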
Open AccessArticle
A Load Balancing Algorithm Based on Maximum Entropy Methods in Homogeneous Clusters
Entropy 2014, 16(11), 5677-5697; https://doi.org/10.3390/e16115677
Received: 17 April 2014 / Revised: 13 October 2014 / Accepted: 23 October 2014 / Published: 30 October 2014
Cited by 4 | Viewed by 2134 | PDF Full-text (1097 KB) | HTML Full-text | XML Full-text
Abstract
In order to solve the problems of ill-balanced task allocation, long response time, low throughput and poor performance when a cluster system is assigning tasks, we introduce the thermodynamic concept of entropy into load balancing algorithms. This paper proposes a new load balancing algorithm for homogeneous clusters based on the Maximum Entropy Method (MEM). By calculating the entropy of the system and using the maximum entropy principle to ensure that each scheduling and migration step follows the increasing tendency of the entropy, the system can reach the load-balanced state as soon as possible, shorten task execution time and achieve high performance. The results of simulation experiments show that this algorithm outperforms traditional algorithms with respect to the time and extent of load balance in a homogeneous cluster system. It also suggests new approaches to the load balancing problem of homogeneous cluster systems. Full article
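The entropy-guided scheduling idea can be sketched as follows; the `load_entropy` index and the greedy single-unit migration step are our illustrative assumptions, not the paper's MEM algorithm:

```python
import math

def load_entropy(loads):
    # Shannon entropy of the normalized load distribution;
    # it is maximal when all nodes carry equal load
    total = sum(loads)
    probs = [l / total for l in loads if l > 0]
    return -sum(p * math.log(p) for p in probs)

def best_migration(loads):
    # Greedy step: move one unit of load between the pair of nodes
    # that yields the largest entropy increase (assumed heuristic)
    best, move = load_entropy(loads), None
    for i in range(len(loads)):
        for j in range(len(loads)):
            if i == j or loads[i] <= 1:
                continue
            trial = list(loads)
            trial[i] -= 1
            trial[j] += 1
            h = load_entropy(trial)
            if h > best:
                best, move = h, (i, j)
    return move  # None when no move improves balance
```

Each accepted migration strictly increases the entropy of the load distribution, so repeated steps drive the cluster toward the balanced (maximum entropy) state and terminate there.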
Open AccessArticle
Permutation Entropy Applied to the Characterization of the Clinical Evolution of Epileptic Patients under Pharmacological Treatment
Entropy 2014, 16(11), 5668-5676; https://doi.org/10.3390/e16115668
Received: 12 August 2014 / Revised: 3 October 2014 / Accepted: 23 October 2014 / Published: 29 October 2014
Cited by 10 | Viewed by 2044 | PDF Full-text (691 KB) | HTML Full-text | XML Full-text
Abstract
Different techniques originating in information theory and tools from nonlinear systems theory have been applied to the analysis of electro-physiological time series. Several clinically relevant results have emerged from the use of concepts such as entropy, chaos and complexity in analyzing electrocardiograms and electroencephalographic (EEG) records. In this work, we develop a method based on permutation entropy (PE) to characterize EEG records from different stages in the treatment of a chronic epileptic patient. Our results show that the PE is useful for clearly quantifying the evolution of the patient over a certain lapse of time and allows one to visualize in a very convenient way the effects of the pharmacotherapy. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography)
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland RSS E-Mail Table of Contents Alert