Table of Contents

Entropy, Volume 19, Issue 10 (October 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The Kepler Space Telescope monitored the light (photometry) coming from approximately 150,000 stars [...]
Open AccessArticle Detection of Causal Relations in Time Series Affected by Noise in Tokamaks Using Geodesic Distance on Gaussian Manifolds
Entropy 2017, 19(10), 569; https://doi.org/10.3390/e19100569
Received: 20 July 2017 / Revised: 20 October 2017 / Accepted: 20 October 2017 / Published: 24 October 2017
PDF Full-text (2245 KB) | HTML Full-text | XML Full-text
Abstract
Modern experiments in Magnetic Confinement Nuclear Fusion can produce gigabytes of data, mainly in the form of time series. The acquired signals, composing massive databases, are typically affected by significant levels of noise. The interpretation of the time series can therefore become quite involved, particularly when tenuous causal relations have to be investigated. In recent years, synchronization experiments to control potentially dangerous instabilities have become a subject of intensive research. Their interpretation requires quite delicate causality analysis. In this paper, the approach of Information Geometry is applied to the problem of assessing the effectiveness of synchronization experiments on JET (Joint European Torus). In particular, the use of the Geodesic Distance on Gaussian Manifolds is shown to improve the results of advanced techniques such as Recurrence Plots and Complex Networks when the noise level is not negligible. In cases affected by particularly high levels of noise, which compromise the traditional treatments, the use of the Geodesic Distance on Gaussian Manifolds yields quite encouraging results. In addition to consolidating previously uncertain conclusions, the proposed approach permits the successful analysis of signals from discharges that were otherwise unusable, thereby salvaging the interpretation of those experiments. Full article
(This article belongs to the Special Issue Information Geometry II)
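For univariate Gaussians, the geodesic (Fisher–Rao) distance used above has a closed form: the Gaussian manifold with the Fisher information metric is hyperbolic, and the distance is the Poincaré half-plane distance scaled by √2. A minimal sketch of that formula (illustrative only, not the authors' code; the function name is an assumption):

```python
import math

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Geodesic (Fisher-Rao) distance between N(mu1, sigma1^2) and
    N(mu2, sigma2^2) on the Gaussian manifold.

    The Fisher metric makes this manifold hyperbolic; the closed form
    below is the Poincare half-plane distance scaled by sqrt(2).
    """
    num = (mu2 - mu1) ** 2 / 2.0 + (sigma2 - sigma1) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (2.0 * sigma1 * sigma2))

# Two noisy signal windows summarized by their fitted (mean, std):
d = fisher_rao_gaussian(0.0, 1.0, 1.0, 1.0)  # ~0.98
```

In a causality analysis of the kind described, each time-series window would be summarized by a fitted Gaussian, and distances between windows computed on the manifold rather than in the raw signal space, which is what confers the robustness to noise.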

Open AccessArticle Entropy Analysis of Short-Term Heartbeat Interval Time Series during Regular Walking
Entropy 2017, 19(10), 568; https://doi.org/10.3390/e19100568
Received: 18 September 2017 / Revised: 11 October 2017 / Accepted: 21 October 2017 / Published: 24 October 2017
Cited by 4 | PDF Full-text (917 KB) | HTML Full-text | XML Full-text
Abstract
Entropy measures have been extensively used to assess heart rate variability (HRV), a noninvasive marker of cardiovascular autonomic regulation. It is yet to be elucidated whether those entropy measures can sensitively respond to changes of autonomic balance and whether the responses, if there are any, are consistent across different entropy measures. Sixteen healthy subjects were enrolled in this study. Each subject undertook two 5-min ECG measurements, one in a resting seated position and another while walking on a treadmill at a regular speed of 5 km/h. For each subject, the two measurements were conducted in a randomized order and a 30-min rest was required between them. HRV time series were derived and analyzed by eight entropy measures, i.e., approximate entropy (ApEn), corrected ApEn (cApEn), sample entropy (SampEn), fuzzy entropy without removing local trend (FuzzyEn-g), fuzzy entropy with local trend removal (FuzzyEn-l), permutation entropy (PermEn), conditional entropy (CE), and distribution entropy (DistEn). Compared to the resting seated position, regular walking led to significantly reduced CE and DistEn (both p ≤ 0.006; Cohen’s d = 0.9 for CE, d = 1.7 for DistEn) and increased PermEn (p < 0.0001; d = 1.9), while all these changes disappeared after performing a linear detrend or a wavelet detrend (<~0.03 Hz) on HRV. In addition, cApEn, SampEn, FuzzyEn-g, and FuzzyEn-l showed significant decreases during regular walking after linear detrending (all p < 0.006; 0.8 < d < 1), while a significantly increased ApEn (p < 0.0001; d = 1.9) and a significantly reduced cApEn (p = 0.0006; d = 0.8) were observed after wavelet detrending. To conclude, multiple entropy analyses should be performed to assess HRV in order to obtain objective results, and caution should be exercised when drawing conclusions based on observations from a single measure. Moreover, results from different studies will not be comparable unless it is clearly stated whether the data have been detrended and the detrending methods have been specified. Full article
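Of the eight measures, sample entropy is compact enough to sketch: SampEn(m, r) is the negative logarithm of the conditional probability that sequences matching for m points (within tolerance r·SD, Chebyshev distance, self-matches excluded) also match for m + 1 points. A rough illustration, not the study's implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series; r is a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def match_count(length):
        # n - m templates for both lengths, per the standard convention.
        t = np.array([x[i:i + length] for i in range(n - m)])
        total = 0
        for i in range(len(t)):
            d = np.max(np.abs(t - t[i]), axis=1)      # Chebyshev distance
            total += np.count_nonzero(d <= tol) - 1   # drop the self-match
        return total

    b = match_count(m)
    a = match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```

A strictly periodic series yields SampEn near zero (perfectly predictable), while white noise yields a large value, which is the sensitivity to regularity that such studies exploit.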

Open AccessArticle Radially Excited AdS5 Black Holes in Einstein–Maxwell–Chern–Simons Theory
Entropy 2017, 19(10), 567; https://doi.org/10.3390/e19100567
Received: 12 September 2017 / Revised: 9 October 2017 / Accepted: 23 October 2017 / Published: 24 October 2017
PDF Full-text (492 KB) | HTML Full-text | XML Full-text
Abstract
In the large coupling regime of the 5-dimensional Einstein–Maxwell–Chern–Simons theory, charged and rotating cohomogeneity-1 black holes form sequences of extremal and non-extremal radially excited configurations. These asymptotically global Anti-de Sitter (AdS5) black holes form a discrete set of solutions, characterised by the vanishing of the total angular momenta, or the horizon angular velocity. However, the solutions are not static. In this paper, we study the branch structure that contains these excited states, and its relation with the static Reissner–Nordström-AdS black hole. Thermodynamic properties of these solutions are considered, revealing that the branches with lower excitation number can become thermodynamically unstable beyond certain critical solutions that depend on the free parameters of the configuration. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics II)

Open AccessArticle Biological Aging and Life Span Based on Entropy Stress via Organ and Mitochondrial Metabolic Loading
Entropy 2017, 19(10), 566; https://doi.org/10.3390/e19100566
Received: 7 July 2017 / Revised: 21 September 2017 / Accepted: 7 October 2017 / Published: 23 October 2017
Cited by 1 | PDF Full-text (5405 KB) | HTML Full-text | XML Full-text
Abstract
The energy for sustaining life is released through the oxidation of glucose, fats, and proteins. A part of the energy released within each cell is stored as chemical energy of adenosine triphosphate (ATP) molecules, which is essential for performing life-sustaining functions, while the remainder is released as heat in order to maintain the isothermal state of the body. Earlier literature introduced availability concepts from thermodynamics, related the specific irreversibility and entropy generation rates to the metabolic efficiency and energy release rate of organ k, computed the specific entropy generation rate of the whole body at any given age as a sum of the entropy generation within four vital organs, brain, heart, kidney, and liver (BHKL), with a fifth organ representing the rest of the organs (R5), and estimated the life span using an upper limit on the lifetime entropy generated per unit mass of body, σM,life. The organ entropy stress, expressed in terms of the lifetime specific entropy generated per unit mass of body organs (kJ/(K kg of organ k)), was used to rank organs; the heart ranked highest while the liver ranked lowest. The present work includes the effects of (1) two additional organs, adipose tissue (AT) and skeletal muscle (SM), which are of importance to athletes; (2) the proportions of nutrients oxidized, which affect blood temperature and metabolic efficiencies; (3) the conversion of the entropy stress from the organ/cellular level to the mitochondrial level; and (4) the use of these parameters as metabolism-based biomarkers for quantifying the biological aging process in reaching the limit of σM,life.
Based on the 7-organ model and Elia constants for organ metabolic rates for a male of 84 kg steady mass, and using basic and derived allometric constants of organs, the lifetime energy expenditure is estimated to be 2725 MJ/kg body mass, while the lifetime entropy generated is 6050 kJ/(K kg body mass), with contributions of 190; 1835.0; 610; 290; 700; 1470 and 95 kJ/K contributed by AT-BHKL-SM-R7 to 1 kg body mass over a lifetime. The corresponding lifetime entropy stresses of the organs are 1.2; 60.5; 110.5; 110.5; 50.5; 3.5; 3.0 MJ/K per kg organ mass. Thus, among the vital organs, the highest stress is on the heart and kidney and the lowest is on the liver. The 5-organ model (BHKL and R5) shows a similar ranking. Based on mitochondrial volume and the 5-organ model, the entropy stresses of the organs, expressed in kJ/K per cm3 of mitochondrial (Mito) volume, are 12,670; 5465; 2855; 4730 kJ/cm3 of Mito for BHKL, indicating that the brain is highly stressed and the liver is least stressed. Thus, the organ entropy stress ranking based on unit volume of mitochondria within an organ (kJ/(K cm3 of Mito of organ k)) differs from the entropy stress based on unit mass of organ. Based on metabolic loading, the brains of athletes, already under extreme mitochondrial stress and facing reduced metabolic efficiency under concussion, are subjected to even greater stress. In the absence of non-intrusive measurements for estimating organ-based metabolic rates, which could serve as metabolism-based biomarkers for the biological aging (BA) of the whole body, alternate methods are suggested for estimating the biological aging rate. Full article
(This article belongs to the Section Thermodynamics)

Open AccessArticle Fluctuation of Information Entropy Measures in Cell Image
Entropy 2017, 19(10), 565; https://doi.org/10.3390/e19100565
Received: 31 July 2017 / Revised: 10 September 2017 / Accepted: 18 October 2017 / Published: 23 October 2017
PDF Full-text (2548 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
A simple, label-free cytometry technique is introduced. It is based on the analysis of the fluctuation of image Gray Level Information Entropy (GLIE) which is shown to reflect intracellular biophysical properties like generalized entropy. In this study, the analytical relations between cellular thermodynamic generalized entropy and diffusivity and GLIE fluctuation measures are explored for the first time. The standard deviation (SD) of GLIE is shown by experiments, simulation and theoretical analysis to be indifferent to microscope system “noise”. Then, the ability of GLIE fluctuation measures to reflect basic cellular entropy conditions of early death and malignancy is demonstrated in a cell model of human, healthy-donor lymphocytes, malignant Jurkat cells, as well as dead lymphocytes and Jurkat cells. Utilization of GLIE-based fluctuation measures seems to have the advantage of displaying biophysical characterization of the tested cells, like diffusivity and entropy, in a novel, unique, simple and illustrative way. Full article
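The GLIE itself is just the Shannon entropy of an image's gray-level histogram, and the fluctuation measure is its variation over a sequence of frames of the same region. A schematic version (function names and defaults are assumptions, not the paper's code):

```python
import numpy as np

def glie(image, levels=256):
    """Shannon information entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()          # drop empty bins; normalize
    return -np.sum(p * np.log2(p))

def glie_fluctuation(frames, levels=256):
    """SD of GLIE across a time-lapse sequence of frames of one cell region."""
    return float(np.std([glie(f, levels) for f in frames]))
```

Under this reading, a living cell's intracellular motion perturbs the histogram from frame to frame, so the SD of GLIE tracks the biophysical state (diffusivity, generalized entropy) rather than any single image's content.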

Open AccessArticle Prior Elicitation, Assessment and Inference with a Dirichlet Prior
Entropy 2017, 19(10), 564; https://doi.org/10.3390/e19100564
Received: 29 July 2017 / Revised: 12 October 2017 / Accepted: 20 October 2017 / Published: 22 October 2017
Cited by 1 | PDF Full-text (297 KB) | HTML Full-text | XML Full-text
Abstract
Methods are developed for eliciting a Dirichlet prior based upon stating bounds on the individual probabilities that hold with high prior probability. This approach to selecting a prior is applied to a contingency table problem where it is demonstrated how to assess the prior with respect to the bias it induces as well as how to check for prior-data conflict. It is shown that the assessment of a hypothesis via relative belief can easily take into account what it means for the falsity of the hypothesis to correspond to a difference of practical importance and provide evidence in favor of a hypothesis. Full article
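One way to operationalize such an elicitation, sketched under simplifying assumptions (this is not the paper's procedure): take the Dirichlet base measure from the midpoints of the elicited bounds, then grow the concentration until Monte Carlo draws satisfy all the bounds with the requested prior probability.

```python
import numpy as np

def elicit_dirichlet(lower, upper, gamma=0.95, n_mc=20000, seed=0):
    """Find a Dirichlet(alpha) whose mass lies inside elicited per-coordinate
    bounds [lower_i, upper_i] with prior probability >= gamma.

    Base measure = normalized bound midpoints; the concentration s is
    increased over a coarse grid until Monte Carlo coverage reaches gamma."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    mid = (lower + upper) / 2.0
    base = mid / mid.sum()
    for s in (1, 2, 5, 10, 20, 50, 100, 200, 500, 1000):
        draws = rng.dirichlet(s * base, size=n_mc)
        cover = np.mean(np.all((draws >= lower) & (draws <= upper), axis=1))
        if cover >= gamma:
            return s * base, float(cover)
    raise ValueError("bounds too tight for the concentration grid")
```

The returned alpha could then feed the kind of bias and prior-data-conflict checks the abstract describes; the grid of concentrations and the midpoint base measure are purely illustrative choices.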

Open AccessArticle Construction of New Fractional Repetition Codes from Relative Difference Sets with λ=1
Entropy 2017, 19(10), 563; https://doi.org/10.3390/e19100563
Received: 12 August 2017 / Revised: 19 October 2017 / Accepted: 19 October 2017 / Published: 22 October 2017
Cited by 2 | PDF Full-text (736 KB) | HTML Full-text | XML Full-text
Abstract
Fractional repetition (FR) codes are a class of distributed storage codes that replicate and distribute information data over several nodes for easy repair as well as efficient reconstruction. In this paper, we propose three new constructions of FR codes based on relative difference sets (RDSs) with λ = 1. Specifically, we propose new (q²−1, q, q) FR codes using a cyclic RDS with parameters (q+1, q−1, q, 1) constructed from q-ary m-sequences of period q²−1 for a prime power q; (p², p, p) FR codes using a non-cyclic RDS with parameters (p, p, p, 1) for an odd prime p or p = 4; and (4l, 2l, 2l) FR codes using a non-cyclic RDS with parameters (2l, 2l, 2l, 1) constructed from the Galois ring for a positive integer l. They are differentiated from the existing FR codes with respect to the constructible code parameters. It turns out that the proposed FR codes are (near-)optimal for some parameters in terms of the FR capacity bound. In particular, the (8, 3, 3) and (9, 3, 3) FR codes are optimal; that is, they meet the FR capacity bound for all k. To support various code parameters, we modify the proposed (q²−1, q, q) FR codes using decimation by a factor of the code length q²−1, which also gives new good FR codes. Full article
(This article belongs to the Section Information Theory)

Open AccessFeature PaperArticle Symmetries and Geometrical Properties of Dynamical Fluctuations in Molecular Dynamics
Entropy 2017, 19(10), 562; https://doi.org/10.3390/e19100562
Received: 9 September 2017 / Revised: 18 October 2017 / Accepted: 19 October 2017 / Published: 22 October 2017
Cited by 2 | PDF Full-text (506 KB) | HTML Full-text | XML Full-text
Abstract
We describe some general results that constrain the dynamical fluctuations that can occur in non-equilibrium steady states, with a focus on molecular dynamics. That is, we consider Hamiltonian systems, coupled to external heat baths, and driven out of equilibrium by non-conservative forces. We focus on the probabilities of rare events (large deviations). First, we discuss a PT (parity-time) symmetry that appears in ensembles of trajectories where a current is constrained to have a large (non-typical) value. We analyse the heat flow in such ensembles, and compare it with non-equilibrium steady states. Second, we consider pathwise large deviations that are defined by considering many copies of a system. We show how the probability currents in such systems can be decomposed into orthogonal contributions that are related to convergence to equilibrium and to dissipation. We discuss the implications of these results for modelling non-equilibrium steady states. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)

Open AccessFeature PaperArticle Comparing Markov Chain Samplers for Molecular Simulation
Entropy 2017, 19(10), 561; https://doi.org/10.3390/e19100561
Received: 6 September 2017 / Revised: 17 October 2017 / Accepted: 19 October 2017 / Published: 21 October 2017
Cited by 1 | PDF Full-text (414 KB) | HTML Full-text | XML Full-text
Abstract
Markov chain Monte Carlo sampling propagators, including numerical integrators for stochastic dynamics, are central to the calculation of thermodynamic quantities and the determination of structure for molecular systems. Efficiency is paramount, and to a great extent it is determined by the integrated autocorrelation time (IAcT). This quantity varies depending on the observable being estimated; it is suggested that the maximum of the IAcT over all observables is the relevant metric. Reviewed here is a method for estimating this quantity. For reversible propagators (those that satisfy detailed balance), the maximum IAcT is determined by the spectral gap in the forward transfer operator, but for irreversible propagators, the maximum IAcT can be far less than, or greater than, what might be inferred from the spectral gap. This is consistent with recent theoretical results (not to mention past practical experience) suggesting that irreversible propagators generally perform better, if not much better, than reversible ones. Typical irreversible propagators have a parameter controlling the mix of ballistic and diffusive movement. To gain insight into the effect of the damping parameter for Langevin dynamics, its optimal value is obtained here for a multidimensional quadratic potential energy function. Full article
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)
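The IAcT of a given observable can be estimated from simulation output by summing the normalized autocorrelation function under a self-consistent truncation window (Sokal's heuristic). A sketch of that standard estimator, not the method reviewed in the paper:

```python
import numpy as np

def integrated_act(y, c=5.0, maxlag=2000):
    """Integrated autocorrelation time of observable samples y:
    tau = 1 + 2 * sum_t rho(t), truncated once the window t exceeds
    c * tau (Sokal's self-consistent window)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    n = len(y)
    var = np.dot(y, y) / n
    tau = 1.0
    for t in range(1, min(maxlag, n - 1)):
        rho = np.dot(y[:-t], y[t:]) / ((n - t) * var)  # autocorrelation at lag t
        tau += 2.0 * rho
        if t >= c * tau:  # window has comfortably passed the correlated regime
            break
    return tau
```

For an AR(1) chain with coefficient φ the exact value is (1 + φ)/(1 − φ), a handy sanity check; the maximum over observables discussed in the abstract would be probed by applying such an estimator to many observables at once.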

Open AccessArticle EXONEST: The Bayesian Exoplanetary Explorer
Entropy 2017, 19(10), 559; https://doi.org/10.3390/e19100559
Received: 4 August 2017 / Revised: 28 September 2017 / Accepted: 11 October 2017 / Published: 20 October 2017
Cited by 2 | PDF Full-text (2438 KB) | HTML Full-text | XML Full-text
Abstract
The fields of astronomy and astrophysics are currently engaged in an unprecedented era of discovery as recent missions have revealed thousands of exoplanets orbiting other stars. While the Kepler Space Telescope mission has enabled most of these exoplanets to be detected by identifying transiting events, exoplanets often exhibit additional photometric effects that can be used to improve the characterization of exoplanets. The EXONEST Exoplanetary Explorer is a Bayesian exoplanet inference engine based on nested sampling and originally designed to analyze archived Kepler Space Telescope and CoRoT (Convection Rotation et Transits planétaires) exoplanet mission data. We discuss the EXONEST software package and describe how it accommodates plug-and-play models of exoplanet-associated photometric effects for the purpose of exoplanet detection, characterization and scientific hypothesis testing. The current suite of models allows for both circular and eccentric orbits in conjunction with photometric effects, such as the primary transit and secondary eclipse, reflected light, thermal emissions, ellipsoidal variations, Doppler beaming and superrotation. We discuss our new efforts to expand the capabilities of the software to include more subtle photometric effects involving reflected and refracted light. We discuss the EXONEST inference engine design and introduce our plans to port the current MATLAB-based EXONEST software package over to the next generation Exoplanetary Explorer, which will be a Python-based open source project with the capability to employ third-party plug-and-play models of exoplanet-related photometric effects. Full article

Open AccessArticle Comparative Statistical Mechanics of Muscle and Non-Muscle Contractile Systems: Stationary States of Near-Equilibrium Systems in A Linear Regime
Entropy 2017, 19(10), 558; https://doi.org/10.3390/e19100558
Received: 1 August 2017 / Revised: 30 September 2017 / Accepted: 16 October 2017 / Published: 20 October 2017
PDF Full-text (3998 KB) | HTML Full-text | XML Full-text
Abstract
A. Huxley’s equations were used to determine the mechanical properties of muscle myosin II (MII) at the molecular level, as well as the probability of the occurrence of the different stages in the actin–myosin cycle. It was then possible to use the formalism of statistical mechanics with the grand canonical ensemble to calculate numerous thermodynamic parameters such as entropy, internal energy, affinity, thermodynamic flow, thermodynamic force, and entropy production rate. This allows us to compare the thermodynamic parameters of a non-muscle contractile system, such as the normal human placenta, with those of different striated skeletal muscles (soleus and extensor digitalis longus) as well as the heart muscle and smooth muscles (trachea and uterus) in the rat. In the human placental tissues, it was observed that the kinetics of the actin–myosin crossbridges were considerably slow compared with those of smooth and striated muscular systems. The entropy production rate was also particularly low in the human placental tissues, as compared with that observed in smooth and striated muscular systems. This is partly due to the low thermodynamic flow found in the human placental tissues. However, the unitary force of non-muscle myosin (NMII) generated by each crossbridge cycle in the myofibroblasts of the human placental tissues was similar in magnitude to that of MII in the myocytes of both smooth and striated muscle cells. Statistical mechanics represents a powerful tool for studying the thermodynamics of all contractile muscle and non-muscle systems. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)

Open AccessArticle Performance Improvement of Plug-and-Play Dual-Phase-Modulated Quantum Key Distribution by Using a Noiseless Amplifier
Entropy 2017, 19(10), 546; https://doi.org/10.3390/e19100546
Received: 9 August 2017 / Revised: 13 October 2017 / Accepted: 13 October 2017 / Published: 20 October 2017
Cited by 3 | PDF Full-text (1854 KB) | HTML Full-text | XML Full-text
Abstract
We show that the successful use of a noiseless linear amplifier (NLA) can help increase the maximum transmission distance and the tolerable excess noise of plug-and-play dual-phase-modulated continuous-variable quantum key distribution. In particular, an equivalent entanglement-based scheme model is proposed to analyze the security, and the secure bound is derived in the presence of a Gaussian noisy and lossy channel. The analysis shows that the performance of the NLA-based protocol can be further improved by adjusting the effective parameters. Full article
(This article belongs to the Special Issue Entropy and Information in the Foundation of Quantum Physics)

Open AccessArticle Multivariate Multiscale Symbolic Entropy Analysis of Human Gait Signals
Entropy 2017, 19(10), 557; https://doi.org/10.3390/e19100557
Received: 25 September 2017 / Revised: 13 October 2017 / Accepted: 17 October 2017 / Published: 19 October 2017
Cited by 3 | PDF Full-text (2294 KB) | HTML Full-text | XML Full-text
Abstract
The complexity quantification of human gait time series has received considerable interest for wearable healthcare. Symbolic entropy is one of the most prevalent algorithms used to measure the complexity of a time series, but it fails to account for the multiple time scales and multi-channel statistical dependence inherent in such series. To overcome this problem, multivariate multiscale symbolic entropy is proposed in this paper to distinguish the complexity of human gait signals in health and disease. The embedding dimension, time delay, and quantization levels are appropriately designed to construct the similarity of signals for calculating the complexity of human gait. The proposed method can accurately distinguish healthy and pathologic groups from realistic multivariate human gait time series on multiple scales. It strongly supports wearable healthcare with its simplicity, robustness, and fast computation. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
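The ingredients named above (quantization levels, embedding word length, and multiple time scales across several channels) can be combined as follows. This is a schematic reading of the method, not the authors' implementation:

```python
import numpy as np
from collections import Counter

def symbolize(x, levels):
    """Quantize a 1-D series into integer symbols 0..levels-1 (equal-width bins)."""
    lo, hi = float(np.min(x)), float(np.max(x))
    s = ((x - lo) / (hi - lo + 1e-12) * levels).astype(int)
    return np.minimum(s, levels - 1)

def mms_entropy(channels, scale=2, m=2, levels=3):
    """Multivariate multiscale symbolic entropy (schematic): coarse-grain
    each channel at the given scale, symbolize it, then take the Shannon
    entropy (bits) of the joint symbol words of length m across channels."""
    syms = []
    for x in channels:
        x = np.asarray(x, dtype=float)
        n = (len(x) // scale) * scale
        coarse = x[:n].reshape(-1, scale).mean(axis=1)  # coarse-graining step
        syms.append(symbolize(coarse, levels))
    syms = np.array(syms)  # shape (n_channels, n_points)
    words = [tuple(syms[:, i:i + m].ravel()) for i in range(syms.shape[1] - m + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

Sweeping `scale` yields the entropy-versus-scale profile that separates healthy from pathologic gait in such analyses; the joint words are what capture the cross-channel dependence that single-channel symbolic entropy misses.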

Open AccessLetter A Combinatorial Grassmannian Representation of the Magic Three-Qubit Veldkamp Line
Entropy 2017, 19(10), 556; https://doi.org/10.3390/e19100556
Received: 17 September 2017 / Revised: 13 October 2017 / Accepted: 17 October 2017 / Published: 19 October 2017
PDF Full-text (235 KB) | HTML Full-text | XML Full-text
Abstract
It is demonstrated that the magic three-qubit Veldkamp line occurs naturally within the Veldkamp space of a combinatorial Grassmannian of type G2(7), V(G2(7)). The lines of the ambient symplectic polar space are those lines of V(G2(7)) whose cores feature an odd number of points of G2(7). After introducing the basic properties of the three different types of points and seven distinct types of lines of V(G2(7)), we explicitly show the combinatorial Grassmannian composition of the magic Veldkamp line; we first give representatives of the points and lines of its core generalized quadrangle GQ(2, 2), and then additional points and lines of a specific elliptic quadric Q−(5, 2), a hyperbolic quadric Q+(5, 2), and a quadratic cone Q̂(4, 2) that are centered on the GQ(2, 2). In particular, each point of Q+(5, 2) is represented by a Pasch configuration and its complementary line, the (Schläfli) double-six of points in Q−(5, 2) comprise six Cayley–Salmon configurations and six Desargues configurations with their complementary points, and the remaining Cayley–Salmon configuration stands for the vertex of Q̂(4, 2). Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)

Open AccessArticle The Prior Can Often Only Be Understood in the Context of the Likelihood
Entropy 2017, 19(10), 555; https://doi.org/10.3390/e19100555
Received: 26 August 2017 / Revised: 30 September 2017 / Accepted: 14 October 2017 / Published: 19 October 2017
Cited by 5 | PDF Full-text (256 KB) | HTML Full-text | XML Full-text
Abstract
A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Open AccessArticle Upper Bounds for the Rate Distortion Function of Finite-Length Data Blocks of Gaussian WSS Sources
Entropy 2017, 19(10), 554; https://doi.org/10.3390/e19100554
Received: 19 September 2017 / Revised: 14 October 2017 / Accepted: 15 October 2017 / Published: 19 October 2017
Cited by 1 | PDF Full-text (256 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present upper bounds for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources and we propose coding strategies to achieve such bounds. In order to obtain those bounds, we first derive new results on the discrete Fourier transform (DFT) of WSS processes. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
Open AccessArticle Rainfall Network Optimization Using Radar and Entropy
Entropy 2017, 19(10), 553; https://doi.org/10.3390/e19100553
Received: 9 September 2017 / Revised: 14 October 2017 / Accepted: 16 October 2017 / Published: 19 October 2017
Cited by 2 | PDF Full-text (2844 KB) | HTML Full-text | XML Full-text
Abstract
In this study, a method combining radar and entropy was proposed to design a rainfall network. Owing to the shortage of rain gauges in mountain areas, weather radars are used to measure rainfall over catchments. The major advantage of radar is that it
[...] Read more.
In this study, a method combining radar and entropy was proposed to design a rainfall network. Owing to the shortage of rain gauges in mountain areas, weather radars are used to measure rainfall over catchments. The major advantage of radar is that it can observe rainfall widely in a short time. However, the rainfall data obtained by radar do not necessarily correspond to those observed by ground-based rain gauges. The in-situ rainfall data from telemetering rain gauges were therefore used to calibrate the radar system, so that the rainfall intensity, as well as its distribution over the catchment, can be obtained using radar. Once the rainfall data of past years at the desired locations over the catchment were generated, entropy based on probability was applied to optimize the rainfall network. This method is applicable in remote and mountain areas. Its most important utility is to construct an optimal rainfall network in an ungauged catchment. The design of a rainfall network in the catchment of the Feitsui Reservoir was used to illustrate the various steps, as well as the reliability of the method. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
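The entropy-based optimization step described above can be illustrated with a greedy routine: discretize each candidate site's (radar-generated) rainfall series, then repeatedly add the site that most increases the joint entropy of the network. This is a minimal sketch, not the authors' implementation; the discretization, the tie-breaking rule, and the fixed number of gauges k are all assumptions made for illustration.

```python
import math

def entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol sequence."""
    n = len(symbols)
    counts = {}
    for s in symbols:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def joint_entropy(series_list):
    """Joint entropy of several aligned discretized series."""
    return entropy(list(zip(*series_list)))

def greedy_gauge_selection(sites, k):
    """Greedily pick k sites whose discretized rainfall series
    maximize the joint entropy (most added information per gauge)."""
    chosen = []
    while len(chosen) < k:
        best = max(
            (s for s in sites if s not in chosen),
            key=lambda s: joint_entropy([sites[c] for c in chosen] + [sites[s]]),
        )
        chosen.append(best)
    return chosen
```

A duplicate of an already-selected series adds no joint entropy, so the greedy rule naturally skips redundant gauge locations.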
Open AccessArticle Prediction Model of the Power System Frequency Using a Cross-Entropy Ensemble Algorithm
Entropy 2017, 19(10), 552; https://doi.org/10.3390/e19100552
Received: 20 September 2017 / Revised: 17 October 2017 / Accepted: 17 October 2017 / Published: 19 October 2017
PDF Full-text (4540 KB) | HTML Full-text | XML Full-text
Abstract
Frequency prediction after a disturbance has received increasing research attention given its substantial value in providing a decision-making foundation in power system emergency control. With the advancing development of machine learning, analyzing power systems with machine-learning methods has become completely different from traditional
[...] Read more.
Frequency prediction after a disturbance has received increasing research attention given its substantial value in providing a decision-making foundation in power system emergency control. With the advancing development of machine learning, analyzing power systems with machine-learning methods has become completely different from traditional approaches. In this paper, an ensemble algorithm using cross-entropy as a combination strategy is presented to address the trade-off between prediction accuracy and calculation speed. The prediction difficulty caused by inadequate numbers of severe disturbance samples is also overcome by the ensemble model. In the proposed ensemble algorithm, base learners are selected following the principle of diversity, which guarantees the ensemble algorithm’s accuracy. Cross-entropy is applied to evaluate the fitting performance of the base learners and to set the weight coefficient in the ensemble algorithm. Subsequently, an online prediction model based on the algorithm is established that integrates training, prediction and updating. In the Western System Coordinating Council 9-bus (WSCC 9) system and the Institute of Electrical and Electronics Engineers 39-bus (IEEE 39) system, the algorithm is shown to significantly improve the prediction accuracy in both sample-rich and sample-poor situations, verifying the effectiveness and superiority of the proposed ensemble algorithm. Full article
(This article belongs to the Section Complexity)
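The combination strategy sketched in the abstract, weighting base learners by a cross-entropy measure of fitting performance, might look like the following. The inverse-cross-entropy weighting rule used here is an assumption for illustration only, not necessarily the paper's exact formula.

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_i p_i * log(q_i) for discrete distributions."""
    return -sum(pi * math.log(max(qi, eps)) for pi, qi in zip(p, q))

def ensemble_weights(target, learner_dists):
    """Weight each base learner inversely to its cross-entropy against
    the target distribution; lower cross-entropy = better fit = more weight."""
    inv = [1.0 / cross_entropy(target, q) for q in learner_dists]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_predict(weights, predictions):
    """Weighted combination of the base learners' scalar frequency predictions."""
    return sum(w * p for w, p in zip(weights, predictions))
```

A learner whose predicted distribution matches the target receives the minimal possible cross-entropy (the target's own entropy) and hence the largest weight.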
Open AccessArticle Cosmographic Thermodynamics of Dark Energy
Entropy 2017, 19(10), 551; https://doi.org/10.3390/e19100551
Received: 4 August 2017 / Revised: 2 October 2017 / Accepted: 9 October 2017 / Published: 19 October 2017
PDF Full-text (373 KB) | HTML Full-text | XML Full-text
Abstract
Dark energy’s thermodynamics is here revised giving particular attention to the role played by specific heats and entropy in a flat Friedmann-Robertson-Walker universe. Under the hypothesis of adiabatic heat exchanges, we rewrite the specific heats through cosmographic, model-independent quantities and we trace their
[...] Read more.
Dark energy’s thermodynamics is here revised giving particular attention to the role played by specific heats and entropy in a flat Friedmann-Robertson-Walker universe. Under the hypothesis of adiabatic heat exchanges, we rewrite the specific heats through cosmographic, model-independent quantities and we trace their evolutions in terms of z. We demonstrate that dark energy may be modeled as a perfect gas, only when the Mayer relation is preserved. In particular, we find that the Mayer relation holds if j_q > 1/2. The former result turns out to be general so that, even at the transition time, the jerk parameter j cannot violate the condition j_tr > 1/2. This outcome rules out those models which predict opposite cases, whereas it turns out to be compatible with the concordance paradigm. We thus compare our bounds with the ΛCDM model, highlighting that a constant dark energy term seems to be compatible with the so-obtained specific heat thermodynamics, after a precise redshift domain. In our treatment, we show the degeneracy between unified dark energy models with zero sound speed and the concordance paradigm. Under this scheme, we suggest that the cosmological constant may be viewed as an effective approach to dark energy either at small or high redshift domains. Last but not least, we discuss how to reconstruct dark energy’s entropy from specific heats and we finally compute both entropy and specific heats in terms of the luminosity distance d_L, in order to fix constraints over them through cosmic data. Full article
(This article belongs to the Special Issue Dark Energy)
Open AccessFeature PaperArticle Entropy of Entropy: Measurement of Dynamical Complexity for Biological Systems
Entropy 2017, 19(10), 550; https://doi.org/10.3390/e19100550
Received: 13 September 2017 / Revised: 10 October 2017 / Accepted: 16 October 2017 / Published: 18 October 2017
Cited by 2 | PDF Full-text (856 KB) | HTML Full-text | XML Full-text
Abstract
Healthy systems exhibit complex dynamics in the changing information embedded in physiologic signals on multiple time scales, which can be quantified by employing multiscale entropy (MSE) analysis. Here, we propose a measure of complexity, called entropy of entropy (EoE) analysis. The analysis
[...] Read more.
Healthy systems exhibit complex dynamics in the changing information embedded in physiologic signals on multiple time scales, which can be quantified by employing multiscale entropy (MSE) analysis. Here, we propose a measure of complexity, called entropy of entropy (EoE) analysis. The analysis combines the features of MSE and an alternate measure of information, called superinformation, useful for DNA sequences. In this work, we apply the hybrid analysis to the cardiac interbeat interval time series. We find that the EoE value is significantly higher for the healthy than for the pathologic groups. In particular, a short time series of 70 heartbeats is sufficient for EoE analysis with an accuracy of 81%, while a longer series of 500 beats results in an accuracy of 90%. In addition, the EoE versus Shannon entropy plot of heart rate time series exhibits an inverted U relationship, with the maximal EoE value appearing midway between extreme order and disorder. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
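The two-level idea behind EoE (a first-level Shannon entropy computed inside coarse-graining windows, followed by the entropy of that window-entropy sequence) can be sketched as follows. The equal-width binning and non-overlapping windows are simplifying assumptions; the paper's exact estimator may differ.

```python
import math

def shannon_entropy(xs, n_bins, lo, hi):
    """Shannon entropy (nats) after equal-width binning over [lo, hi]."""
    if hi == lo:
        return 0.0
    width = (hi - lo) / n_bins
    counts = {}
    for x in xs:
        b = min(int((x - lo) / width), n_bins - 1)  # clamp top edge
        counts[b] = counts.get(b, 0) + 1
    n = len(xs)
    return -sum(c / n * math.log(c / n) for c in counts.values())

def entropy_of_entropy(series, window, n_bins):
    """Level 1: entropy inside each non-overlapping window.
    Level 2: entropy of the resulting window-entropy sequence."""
    lo, hi = min(series), max(series)
    ents = [shannon_entropy(series[i:i + window], n_bins, lo, hi)
            for i in range(0, len(series) - window + 1, window)]
    return shannon_entropy(ents, n_bins, min(ents), max(ents))
```

A perfectly regular series yields zero at both levels, while a series whose local variability itself fluctuates yields a positive EoE, which is the sense in which the measure separates "complex" from merely ordered or disordered dynamics.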
Open AccessArticle Risk Assessment and Decision-Making under Uncertainty in Tunnel and Underground Engineering
Entropy 2017, 19(10), 549; https://doi.org/10.3390/e19100549
Received: 31 August 2017 / Revised: 29 September 2017 / Accepted: 16 October 2017 / Published: 18 October 2017
Cited by 1 | PDF Full-text (5869 KB) | HTML Full-text | XML Full-text
Abstract
The impact of uncertainty on risk assessment and decision-making is increasingly being prioritized, especially for large geotechnical projects such as tunnels, where uncertainty is often the main source of risk. Epistemic uncertainty, which can be reduced, is the focus of attention. In this
[...] Read more.
The impact of uncertainty on risk assessment and decision-making is increasingly being prioritized, especially for large geotechnical projects such as tunnels, where uncertainty is often the main source of risk. Epistemic uncertainty, which can be reduced, is the focus of attention. In this study, the existing entropy-risk decision model is first discussed and analyzed, and its deficiencies are improved upon and overcome. This study then addresses the fact that existing studies consider only parameter uncertainty and ignore the influence of model uncertainty; the focus here is on model uncertainty and on differences in risk consciousness among decision-makers, and utility theory is introduced into the model. Finally, a risk decision model is proposed based on sensitivity analysis and the tolerance cost, which can improve decision-making efficiency. This research can provide guidance or a reference for the evaluation and decision-making of complex systems engineering problems, and indicate a direction for further research on risk assessment and decision-making issues. Full article
(This article belongs to the Special Issue Entropy for Characterization of Uncertainty in Risk and Reliability)
Open AccessArticle A Permutation Disalignment Index-Based Complex Network Approach to Evaluate Longitudinal Changes in Brain-Electrical Connectivity
Entropy 2017, 19(10), 548; https://doi.org/10.3390/e19100548
Received: 26 September 2017 / Revised: 12 October 2017 / Accepted: 13 October 2017 / Published: 17 October 2017
Cited by 2 | PDF Full-text (2554 KB) | HTML Full-text | XML Full-text
Abstract
In the study of neurological disorders, Electroencephalographic (EEG) signal processing can provide valuable information because abnormalities in the interaction between neuron circuits may reflect on macroscopic abnormalities in the electrical potentials that can be detected on the scalp. A Mild Cognitive Impairment (MCI)
[...] Read more.
In the study of neurological disorders, Electroencephalographic (EEG) signal processing can provide valuable information because abnormalities in the interaction between neuron circuits may reflect on macroscopic abnormalities in the electrical potentials that can be detected on the scalp. A Mild Cognitive Impairment (MCI) condition, when caused by a disorder degenerating into dementia, affects the brain connectivity. Motivated by the promising results achieved through the recently developed descriptor of coupling strength between EEG signals, the Permutation Disalignment Index (PDI), the present paper introduces a novel PDI-based complex network model to evaluate the longitudinal variations in brain-electrical connectivity. A group of 33 amnestic MCI subjects was enrolled and followed up over four months. The results were compared to MoCA (Montreal Cognitive Assessment) tests, which score the cognitive abilities of the patient. A significant negative correlation could be observed between the MoCA variation and the variation of the characteristic path length (λ; r = −0.56, p = 0.0006), whereas a significant positive correlation could be observed between the MoCA variation and the variation of the clustering coefficient (CC, r = 0.58, p = 0.0004), global efficiency (GE, r = 0.57, p = 0.0005) and small worldness (SW, r = 0.57, p = 0.0005). Cognitive decline thus seems to reflect an underlying cortical “disconnection” phenomenon: worsened subjects indeed showed an increased λ and decreased CC, GE and SW. The PDI-based connectivity model proposed in the present work could be a novel tool for the objective quantification of longitudinal brain-electrical connectivity changes in MCI subjects. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
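The ordinal-pattern machinery underlying PDI can be illustrated with a toy proxy: extract the rank-order motif of each embedded window in two signals and measure how often the motifs disagree. This is a deliberately simplified stand-in and NOT the published PDI, which is defined differently; the embedding dimension m and the per-window disagreement measure are assumptions made for illustration.

```python
def ordinal_pattern(window):
    """Rank order (ordinal pattern) of a window of samples."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def disalignment(x, y, m=3):
    """Toy permutation-disalignment proxy: fraction of aligned windows
    in which the two signals show DIFFERENT ordinal patterns.
    0 = motifs always aligned (strong coupling), 1 = never aligned."""
    n = len(x) - m + 1
    same = sum(ordinal_pattern(x[i:i + m]) == ordinal_pattern(y[i:i + m])
               for i in range(n))
    return 1.0 - same / n
```

Pairwise values of such a coupling descriptor can then populate the adjacency matrix of a complex network, from which metrics like λ, CC, GE and SW are computed.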
Open AccessArticle Compressed Secret Key Agreement: Maximizing Multivariate Mutual Information per Bit
Entropy 2017, 19(10), 545; https://doi.org/10.3390/e19100545
Received: 5 July 2017 / Revised: 2 October 2017 / Accepted: 3 October 2017 / Published: 14 October 2017
Cited by 1 | PDF Full-text (311 KB) | HTML Full-text | XML Full-text
Abstract
The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components in order to generate a common secret
[...] Read more.
The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components in order to generate a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion by offering a more structured achievability scheme and some simpler conjectures to prove. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle Equilibration in the Nosé–Hoover Isokinetic Ensemble: Effect of Inter-Particle Interactions
Entropy 2017, 19(10), 544; https://doi.org/10.3390/e19100544
Received: 19 September 2017 / Revised: 11 October 2017 / Accepted: 11 October 2017 / Published: 14 October 2017
Cited by 1 | PDF Full-text (468 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the stationary and dynamic properties of the celebrated Nosé–Hoover dynamics of many-body interacting Hamiltonian systems, with an emphasis on the effect of inter-particle interactions. To this end, we consider a model system with both short- and long-range interactions. The Nosé–Hoover dynamics
[...] Read more.
We investigate the stationary and dynamic properties of the celebrated Nosé–Hoover dynamics of many-body interacting Hamiltonian systems, with an emphasis on the effect of inter-particle interactions. To this end, we consider a model system with both short- and long-range interactions. The Nosé–Hoover dynamics aim to generate the canonical equilibrium distribution of a system at a desired temperature by employing a set of time-reversible, deterministic equations of motion. A signature of canonical equilibrium is a single-particle momentum distribution that is Gaussian. We find that the equilibrium properties of the system within the Nosé–Hoover dynamics coincide with those within the canonical ensemble. Moreover, starting from out-of-equilibrium initial conditions, the average kinetic energy of the system relaxes to its target value over a size-independent timescale. However, quite surprisingly, our results indicate that under the same conditions and with only long-range interactions present in the system, the momentum distribution relaxes to its Gaussian form in equilibrium over a timescale that diverges with the system size. On adding short-range interactions, the relaxation is found to occur over a timescale that has a much weaker dependence on system size. This system-size dependence of the timescale vanishes when only short-range interactions are present in the system. An implication of such an ultra-slow relaxation when only long-range interactions are present is that macroscopic observables other than the average kinetic energy, when estimated in the Nosé–Hoover dynamics, may take an unusually long time to relax to their canonical equilibrium values. Our work underlines the crucial role that interactions play in deciding the equivalence between Nosé–Hoover and canonical equilibrium. Full article
Open AccessArticle Is Cetacean Intelligence Special? New Perspectives on the Debate
Entropy 2017, 19(10), 543; https://doi.org/10.3390/e19100543
Received: 28 June 2017 / Revised: 30 August 2017 / Accepted: 2 September 2017 / Published: 13 October 2017
PDF Full-text (2097 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In recent years, the interpretation of our observations of animal behaviour, in particular that of cetaceans, has captured a substantial amount of attention in the scientific community. The traditional view that supports a special intellectual status for this mammalian order has fallen under
[...] Read more.
In recent years, the interpretation of our observations of animal behaviour, in particular that of cetaceans, has captured a substantial amount of attention in the scientific community. The traditional view that supports a special intellectual status for this mammalian order has fallen under significant scrutiny, in large part due to problems of how to define and test the cognitive performance of animals. This paper presents evidence supporting complex cognition in cetaceans obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally and serves as the foundation from which intelligence can emerge. This theoretical framework explaining animal intelligence in neural computational terms is supported using a new mathematical model. Two pathways leading to higher levels of intelligence in animals are identified, each reflecting a trade-off either in energetic requirements or the number of neurons used. A description of the evolutionary pathway that led to increased cognitive capacities in cetacean brains is detailed and evidence supporting complex cognition in cetaceans is presented. This paper also provides an interpretation of the adaptive function of cetacean neuronal traits. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
Open AccessArticle Backtracking and Mixing Rate of Diffusion on Uncorrelated Temporal Networks
Entropy 2017, 19(10), 542; https://doi.org/10.3390/e19100542
Received: 31 August 2017 / Revised: 6 October 2017 / Accepted: 9 October 2017 / Published: 13 October 2017
PDF Full-text (345 KB) | HTML Full-text | XML Full-text
Abstract
We consider the problem of diffusion on temporal networks, where the dynamics of each edge is modelled by an independent renewal process. Despite the apparent simplicity of the model, the trajectories of a random walker exhibit non-trivial properties. Here, we quantify the walker’s
[...] Read more.
We consider the problem of diffusion on temporal networks, where the dynamics of each edge is modelled by an independent renewal process. Despite the apparent simplicity of the model, the trajectories of a random walker exhibit non-trivial properties. Here, we quantify the walker’s tendency to backtrack at each step (return to where it came from), as well as the resulting effect on the mixing rate of the process. As we show through empirical data, non-Poisson dynamics may significantly slow down diffusion due to backtracking, by a mechanism intrinsically different from the standard bus paradox and related temporal mechanisms. We conclude by discussing the implications of our work for the interpretation of results generated by null models of temporal networks. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
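To make the notion of backtracking concrete, here is a sketch on a static graph, which is a simplification: the paper's walks unfold on temporal networks with renewal-process edge dynamics. For an unbiased walker on a static graph, the expected backtracking probability from a node of degree d is 1/d, so it is 1/2 on a degree-2 cycle and 1/4 on a 5-clique.

```python
import random

def backtrack_fraction(adj, steps, seed=42):
    """Fraction of random-walk moves that immediately return to the
    node visited one step earlier (the 'backtracking' tendency)."""
    rng = random.Random(seed)
    prev, cur = None, next(iter(adj))
    back = 0
    for _ in range(steps):
        nxt = rng.choice(adj[cur])   # unbiased choice among neighbours
        if nxt == prev:
            back += 1
        prev, cur = cur, nxt
    return back / steps
```

On a temporal network, the edge that just fired is the one most likely to be available again soon under bursty (non-Poisson) dynamics, which inflates this fraction and slows mixing.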
Open AccessArticle Stationary Wavelet Singular Entropy and Kernel Extreme Learning for Bearing Multi-Fault Diagnosis
Entropy 2017, 19(10), 541; https://doi.org/10.3390/e19100541
Received: 7 September 2017 / Revised: 5 October 2017 / Accepted: 9 October 2017 / Published: 13 October 2017
Cited by 2 | PDF Full-text (402 KB) | HTML Full-text | XML Full-text
Abstract
The behavioural diagnostics of bearings play an essential role in the management of several rotating machine systems. However, current diagnostic methods do not deliver satisfactory results with respect to failures in variable-speed rotational phenomena. In this paper, we consider the Shannon entropy
[...] Read more.
The behavioural diagnostics of bearings play an essential role in the management of several rotating machine systems. However, current diagnostic methods do not deliver satisfactory results with respect to failures in variable-speed rotational phenomena. In this paper, we consider the Shannon entropy as an important fault signature pattern. To compute the entropy, we propose combining the stationary wavelet transform and singular value decomposition. The resulting feature extraction method, which we call stationary wavelet singular entropy (SWSE), aims to improve the accuracy of the diagnostics of bearing failure by finding a small number of high-quality fault signature patterns. The features extracted by the SWSE are then passed on to a kernel extreme learning machine (KELM) classifier. The proposed SWSE-KELM algorithm is evaluated using two bearing vibration signal databases obtained from Case Western Reserve University. We compare our SWSE feature extraction method to other well-known methods in the literature such as stationary wavelet packet singular entropy (SWPSE) and decimated wavelet packet singular entropy (DWPSE). The experimental results show that the SWSE-KELM consistently outperforms both the SWPSE-KELM and DWPSE-KELM methods. Further, our SWSE method requires fewer features than the other two evaluated methods, which makes our SWSE-KELM algorithm simpler and faster. Full article
(This article belongs to the Section Information Theory)
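Of the two ingredients of SWSE, the singular-entropy part is easy to sketch: build a trajectory matrix from the signal, take its singular values, normalize them into a distribution, and compute the Shannon entropy. Two simplifications are made here relative to the paper's method: the stationary wavelet stage is omitted (it would require a wavelet library such as PyWavelets), and a 2-row trajectory matrix is used so the eigenvalues of the Gram matrix have a closed form.

```python
import math

def singular_entropy(signal):
    """Shannon entropy (nats) of the normalized singular-value spectrum
    of a 2-row lagged trajectory matrix built from a non-zero signal."""
    cols = len(signal) - 1
    r0, r1 = signal[:cols], signal[1:]
    # Gram matrix G = A A^T is 2x2, so its eigenvalues are closed-form
    g11 = sum(x * x for x in r0)
    g22 = sum(x * x for x in r1)
    g12 = sum(x * y for x, y in zip(r0, r1))
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    eig = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    svs = [math.sqrt(max(e, 0.0)) for e in eig]   # singular values of A
    total = sum(svs)
    probs = [v / total for v in svs if v / total > 1e-12]
    return -sum(p * math.log(p) for p in probs)
```

A rank-1 signal (e.g., a constant) concentrates the whole spectrum in one singular value and gives zero entropy, while a signal that spreads energy across components pushes the entropy toward its maximum, which is the "small number of high-quality patterns" intuition behind the feature.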
Open AccessArticle Complexity-Entropy Maps as a Tool for the Characterization of the Clinical Electrophysiological Evolution of Patients under Pharmacological Treatment with Psychotropic Drugs
Entropy 2017, 19(10), 540; https://doi.org/10.3390/e19100540
Received: 26 July 2017 / Revised: 5 October 2017 / Accepted: 6 October 2017 / Published: 13 October 2017
PDF Full-text (1437 KB) | HTML Full-text | XML Full-text
Abstract
In the clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings are sometimes insufficient and take too much time. Tools coming from information theory or nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this
[...] Read more.
In the clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings are sometimes insufficient and take too much time. Tools coming from information theory or nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this problem. In this work, we introduce a novel method: the permutation Lempel–Ziv Complexity vs. Permutation Entropy map. We apply this method to the EEGs of two patients with specific diagnosed pathologies during respective follow-up processes of pharmacological changes in order to detect alterations that are not evident with the usual inspection method. The method allows comparisons between different states of the patients’ treatment, as well as with a healthy control group, giving global information about the signal and supplementing the traditional method of visual inspection of the EEG. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
Open AccessFeature PaperArticle Kovacs-Like Memory Effect in Athermal Systems: Linear Response Analysis
Entropy 2017, 19(10), 539; https://doi.org/10.3390/e19100539
Received: 19 September 2017 / Revised: 10 October 2017 / Accepted: 11 October 2017 / Published: 13 October 2017
Cited by 2 | PDF Full-text (4976 KB) | HTML Full-text | XML Full-text
Abstract
We analyze the emergence of Kovacs-like memory effects in athermal systems within the linear response regime. This is done by starting from both the master equation for the probability distribution and the equations for the physically-relevant moments. The general results are applied to
[...] Read more.
We analyze the emergence of Kovacs-like memory effects in athermal systems within the linear response regime. This is done by starting from both the master equation for the probability distribution and the equations for the physically-relevant moments. The general results are applied to a general class of models with conserved momentum and non-conserved energy. Our theoretical predictions, obtained within the first Sonine approximation, show an excellent agreement with the numerical results. Furthermore, we prove that the observed non-monotonic relaxation is consistent with the monotonic decay of the non-equilibrium entropy. Full article
Open AccessArticle Investigating the Thermodynamic Performances of TO-Based Metamaterial Tunable Cells with an Entropy Generation Approach
Entropy 2017, 19(10), 538; https://doi.org/10.3390/e19100538
Received: 26 August 2017 / Revised: 27 September 2017 / Accepted: 9 October 2017 / Published: 13 October 2017
Cited by 2 | PDF Full-text (2546 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Active control of heat flux can be realized with transformation optics (TO) thermal metamaterials. Recently, a new class of metamaterial tunable cells has been proposed, aiming to significantly reduce the difficulty of fabrication and to flexibly switch functions by employing several cells assembled
[...] Read more.
Active control of heat flux can be realized with transformation optics (TO) thermal metamaterials. Recently, a new class of metamaterial tunable cells has been proposed, aiming to significantly reduce the difficulty of fabrication and to flexibly switch functions by employing several cells assembled on related positions following the TO design. However, owing to the integration and rotation of the materials within them, tunable cells might lead to extra thermal losses compared with the previous continuum design. This paper focuses on investigating the thermodynamic properties of tunable cells under related design parameters. The universal expression for the local entropy generation rate in such metamaterial systems is obtained considering the influence of rotation. A series of contrast schemes are established to describe the thermodynamic process and thermal energy distributions from the viewpoint of entropy analysis. Moreover, the effects of design parameters on thermal dissipations and system irreversibility are investigated. In conclusion, more thermal dissipations and stronger thermodynamic processes occur in a system with larger conductivity ratios and rotation angles. This paper presents a detailed description of the thermodynamic properties of metamaterial tunable cells and provides a reference for selecting appropriate design parameters on related positions to fabricate more efficient and energy-economical switchable TO devices. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)