Table of Contents

Entropy, Volume 19, Issue 10 (October 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.

Research

Open Access Article: Energy, Exergy and Economic Evaluation Comparison of Small-Scale Single and Dual Pressure Organic Rankine Cycles Integrated with Low-Grade Heat Sources
Entropy 2017, 19(10), 476; doi:10.3390/e19100476
Received: 16 July 2017 / Revised: 28 August 2017 / Accepted: 29 August 2017 / Published: 21 September 2017
PDF Full-text (595 KB) | HTML Full-text | XML Full-text
Abstract
Low-grade heat sources such as solar thermal, geothermal, exhaust gases and industrial waste heat are suitable alternatives for power generation which can be exploited by means of small-scale Organic Rankine Cycles (ORC). This paper combines thermodynamic optimization and economic analysis to assess the performance of single and dual pressure ORCs operating with different organic fluids and targeting small-scale applications. Maximum power output is lower than 45 kW, while the temperature of the heat source varies in the range 100–200 °C. The studied working fluids, namely R1234yf, R1234ze(E) and R1234ze(Z), are selected based on environmental, safety and thermal performance criteria. The Levelized Cost of Electricity (LCOE) and Specific Investment Cost (SIC) are presented for two operating conditions: maximum power output and maximum thermal efficiency. Results showed that R1234ze(Z) achieves the highest net power output (up to 44 kW) when net power output is optimized. The regenerative ORC achieves the highest performance when thermal efficiency is optimized (up to 18%). The simple ORC is the most cost-effective among the studied cycle configurations, requiring an energy selling price of 0.3 USD/kWh to obtain a payback period of 8 years. According to the SIC results, the working fluid R1234ze(Z) exhibits great potential for the simple ORC when compared to conventional R245fa. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
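
The abstract does not reproduce the authors' cost equations; for orientation, the two economic metrics it uses are conventionally defined as follows (a standard textbook formulation, not necessarily the paper's exact one), with $I_t$ the investment cost, $O_t$ the operation and maintenance cost, and $E_t$ the electricity produced in year $t$ of an $n$-year lifetime at discount rate $r$:

$$\mathrm{LCOE} = \frac{\sum_{t=0}^{n} (I_t + O_t)/(1+r)^t}{\sum_{t=1}^{n} E_t/(1+r)^t}, \qquad \mathrm{SIC} = \frac{C_{\mathrm{total}}}{\dot{W}_{\mathrm{net}}}$$

where $C_{\mathrm{total}}$ is the total plant cost and $\dot{W}_{\mathrm{net}}$ the net power output.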

Open Access Article: Analysis of Entropy Generation in Flow of Methanol-Based Nanofluid in a Sinusoidal Wavy Channel
Entropy 2017, 19(10), 490; doi:10.3390/e19100490
Received: 24 April 2017 / Revised: 5 July 2017 / Accepted: 14 July 2017 / Published: 8 October 2017
PDF Full-text (2660 KB) | HTML Full-text | XML Full-text
Abstract
The entropy generation due to heat transfer and fluid friction in mixed convective peristaltic flow of a methanol–Al2O3 nanofluid is examined. Maxwell's thermal conductivity model is used in the analysis. Velocity and temperature profiles are utilized in the computation of the entropy generation number. The effects of the involved physical parameters on velocity, temperature, entropy generation number, and Bejan number are discussed and explained graphically. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
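
For reference, the two indicators the abstract reports are conventionally defined as below; the nondimensionalization of the entropy generation number varies between papers, so treat this as the standard form rather than the authors' exact one:

$$N_s = \frac{\dot{S}_{\mathrm{gen}}}{\dot{S}_{\mathrm{gen},0}}, \qquad Be = \frac{\dot{S}_{\mathrm{heat}}}{\dot{S}_{\mathrm{heat}} + \dot{S}_{\mathrm{fluid}}}$$

so that $Be \to 1$ when heat transfer irreversibility dominates and $Be \to 0$ when fluid friction dominates.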

Open Access Article: An Entropy Based Low-Cycle Fatigue Life Prediction Model for Solder Materials
Entropy 2017, 19(10), 503; doi:10.3390/e19100503
Received: 11 August 2017 / Revised: 8 September 2017 / Accepted: 14 September 2017 / Published: 25 September 2017
PDF Full-text (5904 KB) | HTML Full-text | XML Full-text
Abstract
Fatigue damage is an irreversible progression which can be represented by an entropy increase, and it is well known that the second law of thermodynamics describes irreversible processes. Based on the concept of entropy, the second law of thermodynamics can provide the changing direction of a system. In the current study, a new entropy increment model is developed within the framework of continuum damage mechanics. The proposed model is applied to determine the entropy increment during the fatigue damage process. Based on the relationship between entropy and fatigue life, a new fatigue life prediction model with a clear physical meaning is proposed. To verify the proposed model, eight groups of experiments were performed with different aging and experimental conditions. The theoretical predictions show good agreement with the experimental data. It is noted that with higher aging temperatures, the value of $\varepsilon_{th}/\varepsilon_{cr}$ becomes larger and the residual fatigue life is reduced. Likewise, $\varepsilon_{th}/\varepsilon_{cr}$ becomes larger and the residual fatigue life shorter with higher strain amplitude. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)

Open Access Article: Entropy Generation in Thermal Radiative Loading of Structures with Distinct Heaters
Entropy 2017, 19(10), 506; doi:10.3390/e19100506
Received: 20 July 2017 / Revised: 28 August 2017 / Accepted: 18 September 2017 / Published: 29 September 2017
PDF Full-text (1922 KB) | HTML Full-text | XML Full-text
Abstract
Thermal loading by radiant heaters is used in building heating and hot structure design applications. In this research, the characteristics of the thermal radiative heating of an enclosure by a distinct heater are investigated from the point of view of the second law of thermodynamics. The governing equations of conservation of mass, momentum, and energy (fluid and solid) are solved by the finite volume method and the semi-implicit method for pressure linked equations (SIMPLE) algorithm. Radiant heaters are modeled by constant heat flux elements, and the lower wall is held at a constant temperature while the other boundaries are adiabatic. The thermal conductivity and viscosity of the fluid are temperature-dependent, which leads to complex partial differential equations with nonlinear coefficients. The parametric study is based on the amount of thermal load (expressed by the heating number) as well as geometrical configuration parameters, such as the aspect ratio of the enclosure and the number of radiant heaters. The results present the effect of thermal and geometrical parameters on the entropy generation rate and its distribution field. Furthermore, the effect of thermal radiative heating on both components of entropy generation (viscous dissipation and heat dissipation) is investigated. Full article
(This article belongs to the Section Thermodynamics)
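
The abstract's two entropy generation components have a standard local form in convection problems; a textbook expression (the paper's temperature-dependent properties enter through $k(T)$ and $\mu(T)$, and its exact nondimensionalization may differ) is:

$$\dot{S}_{\mathrm{gen}}''' = \underbrace{\frac{k(T)}{T^2}\,\lvert\nabla T\rvert^2}_{\text{heat dissipation}} + \underbrace{\frac{\mu(T)}{T}\,\Phi}_{\text{viscous dissipation}}$$

where $\Phi$ is the viscous dissipation function of the velocity gradients.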

Open Access Article: Stock Selection for Portfolios Using Expected Utility-Entropy Decision Model
Entropy 2017, 19(10), 508; doi:10.3390/e19100508
Received: 8 June 2017 / Revised: 16 August 2017 / Accepted: 19 September 2017 / Published: 21 September 2017
PDF Full-text (810 KB) | HTML Full-text | XML Full-text
Abstract
Yang and Qiu proposed, and recently improved, an expected utility-entropy (EU-E) measure of risk and decision model. When segregation holds, Luce et al. derived an expected utility term plus a constant multiple of the Shannon entropy as the representation of risky choices, further demonstrating the reasonableness of the EU-E decision model. In this paper, we apply the EU-E decision model to selecting the set of stocks to be included in a portfolio. We first select 7 and 10 stocks from the 30 component stocks of the Dow Jones Industrial Average index, and then derive and compare the efficient portfolios in the mean-variance framework. The conclusions imply that efficient portfolios composed of 7 (or 10) stocks selected using the EU-E model with intermediate intervals of the tradeoff coefficient are more efficient than those composed of the sets of stocks selected using the expected utility model. Furthermore, the efficient portfolios of 7 (or 10) stocks selected by the EU-E decision model have almost the same efficient frontier as that of the sample of all stocks. This suggests the necessity of incorporating both expected utility and Shannon entropy when making risky decisions, further demonstrating the importance of Shannon entropy as a measure of uncertainty, as well as the applicability of the EU-E model as a decision-making model. Full article
(This article belongs to the Section Information Theory)
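
The abstract describes risky choices as an expected utility term plus a constant multiple of the Shannon entropy; the toy scorer below only illustrates that structure. The utility function, sign convention, and tradeoff form are placeholders, not Yang and Qiu's exact definition:

```python
import numpy as np

def eu_e_score(returns, probs, trade_off=0.5, utility=np.log1p):
    """Toy EU-E style score: expected utility of an action traded off against
    the Shannon entropy of its outcome distribution (illustrative form only)."""
    returns = np.asarray(returns, dtype=float)   # simple returns, each > -1
    probs = np.asarray(probs, dtype=float)
    expected_utility = np.sum(probs * utility(returns))
    p = probs[probs > 0]
    entropy = -np.sum(p * np.log(p))
    return trade_off * expected_utility - (1.0 - trade_off) * entropy

# rank candidate stocks by score, keep the top 7 (or 10) for the portfolio
```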

Open Access Article: Exergy Analysis of a Pilot Parabolic Solar Dish-Stirling System
Entropy 2017, 19(10), 509; doi:10.3390/e19100509
Received: 26 July 2017 / Revised: 18 September 2017 / Accepted: 19 September 2017 / Published: 21 September 2017
PDF Full-text (3324 KB) | HTML Full-text | XML Full-text
Abstract
Energy and exergy analyses were carried out for a pilot parabolic solar dish-Stirling system. The system was set up at a site in Kerman City, located in a sunny desert area of Iran. Variations in energy and exergy efficiency were considered during the daytime hours of the average day of each month of the year. A maximum collector energy efficiency of 54% and a maximum total energy efficiency of 12.2% were predicted in July, while during the period between November and February the efficiency values were extremely low. The maximum collector exergy efficiency was 41.5% in July, while the maximum total exergy efficiency reached 13.2%. The energy losses of the main parts of the system, as percentages of the total losses, were also reported. Results showed that the major energy and exergy losses occurred in the receiver. The second biggest portion of energy losses occurred in the Stirling engine, while the portion of exergy loss in the concentrator was higher than that in the Stirling engine. Finally, the performance of the Kerman pilot was compared to that of the EuroDish project. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article: Partially Observable Markov Decision Process-Based Transmission Policy over Ka-Band Channels for Space Information Networks
Entropy 2017, 19(10), 510; doi:10.3390/e19100510
Received: 24 July 2017 / Revised: 30 August 2017 / Accepted: 20 September 2017 / Published: 21 September 2017
PDF Full-text (1081 KB) | HTML Full-text | XML Full-text
Abstract
The Ka-band and higher Q/V-band channels can provide an appealing capacity for future deep-space communications and Space Information Networks (SIN), which are viewed as a primary solution to satisfy the increasing demand for high data rate services. However, the Ka-band channel is much more sensitive to weather conditions than conventional communication channels. Moreover, due to the huge distances and long propagation delays in SINs, the transmitter can only obtain delayed Channel State Information (CSI) from feedback. In this paper, the noise temperature of time-varying rain attenuation in Ka-band channels is modeled as a two-state Gilbert–Elliott channel, to capture a channel capacity that randomly switches between a good and a bad state. An optimal transmission scheme based on Partially Observable Markov Decision Processes (POMDP) is proposed, and the key thresholds for selecting the optimal transmission method in SIN communications are derived. Simulation results show that our proposed scheme can effectively improve the throughput. Full article
(This article belongs to the Section Information Theory)
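
A two-state Gilbert–Elliott channel with a POMDP-style belief over the state can be sketched as follows; the transition probabilities and the delayed-feedback horizon are illustrative values, not the paper's calibrated ones:

```python
import numpy as np

# Gilbert-Elliott channel: state 0 = good, state 1 = bad (heavy rain attenuation).
P = np.array([[0.95, 0.05],    # illustrative transition matrix P[s, s']
              [0.20, 0.80]])

def propagate_belief(belief, delay_slots):
    """Belief over the channel state after `delay_slots` time slots without a
    fresh observation, modelling CSI delayed by the long propagation path."""
    return belief @ np.linalg.matrix_power(P, delay_slots)

belief = np.array([1.0, 0.0])            # last feedback reported a good channel
print(propagate_belief(belief, 10))      # belief relaxes toward the stationary law
```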

Open Access Article: Far-From-Equilibrium Time Evolution between Two Gamma Distributions
Entropy 2017, 19(10), 511; doi:10.3390/e19100511
Received: 18 August 2017 / Revised: 18 September 2017 / Accepted: 21 September 2017 / Published: 22 September 2017
PDF Full-text (794 KB) | HTML Full-text | XML Full-text
Abstract
Many systems in nature and laboratories are far from equilibrium and exhibit significant fluctuations, invalidating the key assumptions of small fluctuations and short memory time in or near equilibrium. A full knowledge of Probability Distribution Functions (PDFs), especially time-dependent PDFs, becomes essential in understanding far-from-equilibrium processes. We consider a stochastic logistic model with multiplicative noise, which has gamma distributions as stationary PDFs. We numerically solve the transient relaxation problem and show that as the strength of the stochastic noise increases, the time-dependent PDFs increasingly deviate from gamma distributions. For sufficiently strong noise, a transition occurs whereby the PDF never reaches a stationary state, but instead, forms a peak that becomes ever more narrowly concentrated at the origin. The addition of an arbitrarily small amount of additive noise regularizes these solutions and re-establishes the existence of stationary solutions. In addition to diagnostic quantities such as mean value, standard deviation, skewness and kurtosis, the transitions between different solutions are analysed in terms of entropy and information length, the total number of statistically-distinguishable states that a system passes through in time. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
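
A minimal Euler–Maruyama sketch of a stochastic logistic model with multiplicative noise, plus the small additive noise the abstract says regularizes the solutions; the drift and noise forms here are one common choice and may differ from the paper's exact model:

```python
import numpy as np

def simulate_logistic_sde(x0=0.5, gamma=1.0, eps=1.0, d_mult=0.5, d_add=1e-4,
                          dt=1e-3, n_steps=200_000, seed=0):
    """dx = x*(gamma - eps*x) dt + sqrt(2*d_mult)*x dW1 + sqrt(2*d_add) dW2."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)
        x[i] = (x[i-1] + x[i-1] * (gamma - eps * x[i-1]) * dt
                + np.sqrt(2 * d_mult) * x[i-1] * dw1 + np.sqrt(2 * d_add) * dw2)
        x[i] = max(x[i], 0.0)     # keep the trajectory non-negative
    return x

stationary_samples = simulate_logistic_sde()[100_000:]  # compare histogram to gamma PDF
```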

Open Access Article: Complex and Entropy of Fluctuations of Agent-Based Interacting Financial Dynamics with Random Jump
Entropy 2017, 19(10), 512; doi:10.3390/e19100512
Received: 27 July 2017 / Revised: 8 September 2017 / Accepted: 21 September 2017 / Published: 23 September 2017
PDF Full-text (888 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the complex behaviors and entropy properties of a novel random complex interacting stock price dynamics, established by combining a stochastic contact process and a compound Poisson process, which capture, respectively, stock return fluctuations caused by the spread of investors' attitudes and random jump fluctuations caused by the macroeconomic environment. To better understand the complex fluctuation behaviors of the proposed price dynamics, entropy analyses of the random logarithmic price returns and the corresponding absolute returns of simulated datasets with different parameter sets are performed, including permutation entropy, fractional permutation entropy, sample entropy and fractional sample entropy. We found that a larger λ or γ leads to more complex dynamics, and that the absolute return series exhibit less complex dynamics than the return series. To verify the rationality of the proposed compound price model, the corresponding analyses of actual market datasets are also performed for comparison. The empirical results verify that the proposed price model can reproduce some important complex dynamics of actual stock markets to some extent. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)

Open Access Article: “Wave-Packet Reduction” and the Quantum Character of the Actualization of Potentia
Entropy 2017, 19(10), 513; doi:10.3390/e19100513
Received: 1 September 2017 / Revised: 19 September 2017 / Accepted: 21 September 2017 / Published: 24 September 2017
PDF Full-text (217 KB) | HTML Full-text | XML Full-text
Abstract
Werner Heisenberg introduced the notion of quantum potentia in order to accommodate the indeterminism associated with quantum measurement. Potentia captures the capacity of the system to be found to possess a property upon a corresponding sharp measurement in which it is actualized. The specific potentiae of the individual system are represented formally by the complex amplitudes in the measurement bases of the eigenstate in which it is prepared. All predictions for future values of system properties can be made by an experimenter using the probabilities which are the squared moduli of these amplitudes that are the diagonal elements of the density matrix description of the pure ensemble to which the system, so prepared, belongs. Heisenberg considered the change of the ensemble attribution following quantum measurement to be analogous to the classical change in Gibbs’ thermodynamics when measurement of the canonical ensemble enables a microcanonical ensemble description. This analogy, presented by Heisenberg as operating at the epistemic level, is analyzed here. It has led some to claim not only that the change of the state in measurement is classical mechanical, bringing its quantum character into question, but also that Heisenberg held this to be the case. Here, these claims are shown to be incorrect, because the analogy concerns the change of ensemble attribution by the experimenter upon learning the result of the measurement, not the actualization of the potentia responsible for the change of the individual system state which—in Heisenberg’s interpretation of quantum mechanics—is objective in nature and independent of the experimenter’s knowledge. Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
Open Access Article: Characterizing Complexity Changes in Chinese Stock Markets by Permutation Entropy
Entropy 2017, 19(10), 514; doi:10.3390/e19100514
Received: 23 August 2017 / Revised: 20 September 2017 / Accepted: 20 September 2017 / Published: 24 September 2017
PDF Full-text (882 KB) | HTML Full-text | XML Full-text
Abstract
Financial time series analyses have played an important role in developing some of the fundamental economic theories. However, many published analyses of financial time series focus on the long-term average behavior of a market, and thus shed little light on the temporal evolution of a market, which from time to time may be interrupted by stock crashes and financial crises. Consequently, in terms of complexity science, it is still unknown whether market complexity during a stock crash decreases or increases. To answer this question, we have examined the temporal variation of permutation entropy (PE) in Chinese stock markets by computing PE from high-frequency composite indices of two stock markets: the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE). We have found that PE decreased significantly in two extended time windows, each encompassing a rapid market rise and then a few gigantic stock crashes. One window started in the middle of 2006, long before the 2008 global financial crisis, and continued up to early 2011. The other window was more recent, starting in the middle of 2014 and ending in the middle of 2016. Since both windows were at least one year long, and preceded stock crashes by at least half a year, the decrease in PE can serve as an invaluable warning sign for regulators and investors alike. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
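
Permutation entropy is the Shannon entropy of the relative frequencies of ordinal patterns of a chosen order in the series; a minimal sketch (the order, delay, and normalization below are common defaults, not necessarily the paper's settings):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series (0 = regular, 1 = random)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # map each window of `order` samples to its ordinal (rank) pattern
    patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(order))
```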

Open Access Feature Paper Article: Multiuser Channels with Statistical CSI at the Transmitter: Fading Channel Alignments and Stochastic Orders, an Overview
Entropy 2017, 19(10), 515; doi:10.3390/e19100515
Received: 13 June 2017 / Revised: 8 September 2017 / Accepted: 20 September 2017 / Published: 25 September 2017
PDF Full-text (950 KB) | HTML Full-text | XML Full-text
Abstract
In this overview paper, we introduce an application of stochastic orders in wireless communications. In particular, we show how to use stochastic orders to investigate ergodic capacity results for fast fading Gaussian memoryless multiuser channels when only the statistics of the channel state information are known at the transmitters (CSIT). In general, the characterization of the capacity region of multiuser channels with only statistical CSIT is open. To attain our goal, we resort to classifying the random channels through their probability distributions, by which we can obtain capacity results. More precisely, we derive sufficient conditions for attaining certain information-theoretic channel orders, such as degraded and very strong interference, by exploiting the usual stochastic order together with the same marginal property. We then apply the developed scheme to Gaussian interference channels and Gaussian broadcast channels, and extend the framework to channels with multiple antennas. Possible scenarios for channel enhancement under statistical CSIT are also discussed. Several practical examples, such as Rayleigh fading and Nakagami-m fading, illustrate the application of the derived results. Full article
(This article belongs to the Special Issue Network Information Theory)

Open Access Article: Complexity Analysis of Neonatal EEG Using Multiscale Entropy: Applications in Brain Maturation and Sleep Stage Classification
Entropy 2017, 19(10), 516; doi:10.3390/e19100516
Received: 1 September 2017 / Revised: 19 September 2017 / Accepted: 22 September 2017 / Published: 26 September 2017
PDF Full-text (1475 KB) | HTML Full-text | XML Full-text
Abstract
Automated analysis of electroencephalographic (EEG) data for the brain monitoring of preterm infants has gained attention in recent decades. In this study, we analyze the complexity of neonatal EEG, quantified using multiscale entropy. The aim of the current work is to investigate how EEG complexity evolves during electrocortical maturation and whether complexity features can be used to classify sleep stages. First, we developed a regression model that estimates the postmenstrual age (PMA) using a combination of complexity features. Then, these features were used to build a sleep stage classifier. The analysis is performed on a database consisting of 97 EEG recordings from 26 prematurely born infants, recorded between 27 and 42 weeks PMA. The results of the regression analysis revealed a significant positive correlation between EEG complexity and the infant's age. Moreover, the PMA of the neonate could be estimated with a root mean squared error of 1.88 weeks. The sleep stage classifier was able to discriminate quiet sleep from non-quiet sleep with an area under the curve (AUC) of 90%. These results suggest that the complexity of the brain dynamics is a highly useful index for brain maturation quantification and neonatal sleep stage classification. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
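
Multiscale entropy coarse-grains the signal by non-overlapping averaging and computes the sample entropy at each scale; a compact sketch (the embedding dimension m = 2 and tolerance r = 0.2·SD are the usual defaults, assumed here rather than taken from the paper):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -log of the conditional probability that windows matching for
    m samples (within r * std, Chebyshev distance) also match for m + 1.
    Pairwise distances are formed in memory, so keep the series moderate."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def pair_count(mm):
        w = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(w[:, None, :] - w[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(w)) / 2   # unordered pairs, no self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2, r=0.2):
    """Coarse-grain by non-overlapping averages, then SampEn per scale."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1), m, r)
            for s in range(1, max_scale + 1)]
```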

Open Access Article: Participation Ratio for Constraint-Driven Condensation with Superextensive Mass
Entropy 2017, 19(10), 517; doi:10.3390/e19100517
Received: 30 August 2017 / Revised: 19 September 2017 / Accepted: 22 September 2017 / Published: 26 September 2017
PDF Full-text (339 KB) | HTML Full-text | XML Full-text
Abstract
Broadly distributed random variables with a power-law distribution $f(m) \sim m^{-(1+\alpha)}$ are known to generate condensation effects. This means that, when the exponent $\alpha$ lies in a certain interval, the largest variable in a sum of $N$ (independent and identically distributed) terms is, for large $N$, of the same order as the sum itself. In particular, when the distribution has infinite mean ($0 < \alpha < 1$) one finds unconstrained condensation, whereas for $\alpha > 1$ constrained condensation takes place when fixing the total mass to a large enough value $M = \sum_{i=1}^{N} m_i > M_c$. In both cases, a standard indicator of the condensation phenomenon is the participation ratio $Y_k = \sum_i m_i^k / (\sum_i m_i)^k$ ($k > 1$), which takes a finite value for $N \to \infty$ when condensation occurs. To better understand the connection between constrained and unconstrained condensation, we study here the situation when the total mass is fixed to a superextensive value $M \sim N^{1+\delta}$ ($\delta > 0$), hence interpolating between the unconstrained condensation case (where the typical value of the total mass scales as $M \sim N^{1/\alpha}$ for $\alpha < 1$) and the extensive constrained mass. In particular, we show that for exponents $\alpha < 1$ a condensate phase for values $\delta > \delta_c = 1/\alpha - 1$ is separated from a homogeneous phase at $\delta < \delta_c$ by a transition line, $\delta = \delta_c$, where a weak condensation phenomenon takes place. We focus on the evaluation of the participation ratio as a generic indicator of condensation, also recalling or presenting results in the standard cases of unconstrained mass and of fixed extensive mass. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
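
A quick numerical check of the participation ratio as a condensation indicator; Pareto-distributed masses with tail exponent α < 1 are an illustrative choice:

```python
import numpy as np

def participation_ratio(m, k=2):
    """Y_k = sum(m_i^k) / (sum(m_i))^k: O(1) when one term carries a finite
    fraction of the total mass (condensation), O(N^(1-k)) when mass is spread."""
    m = np.asarray(m, dtype=float)
    return np.sum(m**k) / np.sum(m)**k

rng = np.random.default_rng(0)
alpha, N = 0.5, 100_000
m = rng.pareto(alpha, size=N) + 1.0    # power-law tail f(m) ~ m^-(1 + alpha)
print(participation_ratio(m))          # stays of order one as N grows
```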

Open Access Article: Connecting Information Geometry and Geometric Mechanics
Entropy 2017, 19(10), 518; doi:10.3390/e19100518
Received: 13 July 2017 / Revised: 10 September 2017 / Accepted: 20 September 2017 / Published: 27 September 2017
PDF Full-text (377 KB) | HTML Full-text | XML Full-text
Abstract
The divergence function in information geometry, and the discrete Lagrangian in discrete geometric mechanics each induce a differential geometric structure on the product manifold $Q \times Q$. We aim to investigate the relationship between these two objects, and the fundamental role that duality, in the form of Legendre transforms, plays in both fields. By establishing an analogy between these two approaches, we will show how a fruitful cross-fertilization of techniques may arise from switching formulations based on the cotangent bundle $T^{*}Q$ (as in geometric mechanics) and the tangent bundle $TQ$ (as in information geometry). In particular, we establish, through variational error analysis, that the divergence function agrees with the exact discrete Lagrangian up to third order if and only if $Q$ is a Hessian manifold. Full article
(This article belongs to the Special Issue Information Geometry II)
Open Access Article: Randomness Representation of Turbulence in Canopy Flows Using Kolmogorov Complexity Measures
Entropy 2017, 19(10), 519; doi:10.3390/e19100519
Received: 11 August 2017 / Revised: 21 September 2017 / Accepted: 25 September 2017 / Published: 27 September 2017
PDF Full-text (642 KB) | HTML Full-text | XML Full-text
Abstract
Turbulence is often described in terms of irregular or random fluid flows, without quantification. In this paper, a methodology to evaluate the randomness of turbulence using measures based on the Kolmogorov complexity (KC) is proposed. This methodology is applied to experimental data from a turbulent flow developing in a laboratory channel with a canopy of three different densities. The methodology is also compared with the traditional approach based on classical turbulence statistics. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
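
Kolmogorov complexity itself is uncomputable; in practice it is typically estimated by binarizing the measured series and counting phrases in a Lempel–Ziv (LZ76-style) parsing, as sketched below. That this is the paper's exact estimator is an assumption; binarization at the mean is one common convention:

```python
import numpy as np

def lz_phrase_count(bits):
    """Number of distinct phrases in an LZ76-style parsing of a binary string."""
    s, n = ''.join(map(str, bits)), len(bits)
    i, k, c = 0, 1, 0
    while i + k <= n:
        if s[i:i + k] in s[:i + k - 1]:  # phrase seen before (overlap allowed)
            k += 1
        else:                            # new phrase: record it, start the next
            c += 1
            i += k
            k = 1
    return c + (1 if k > 1 else 0)       # count a trailing partial phrase

def kc_measure(u):
    """Normalized complexity of a velocity record, binarized at its mean;
    values near 1 indicate randomness, values near 0 indicate regularity."""
    bits = (np.asarray(u, dtype=float) > np.mean(u)).astype(int)
    n = len(bits)
    return lz_phrase_count(bits) * np.log2(n) / n
```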

Open Access Article: Entropy Ensemble Filter: A Modified Bootstrap Aggregating (Bagging) Procedure to Improve Efficiency in Ensemble Model Simulation
Entropy 2017, 19(10), 520; doi:10.3390/e19100520
Received: 11 August 2017 / Revised: 12 September 2017 / Accepted: 26 September 2017 / Published: 28 September 2017
PDF Full-text (9103 KB) | HTML Full-text | XML Full-text
Abstract
Over the past two decades, the Bootstrap AGGregatING (bagging) method has been widely used to improve simulation. The computational cost of this method scales with the size of the ensemble, but excessively reducing the ensemble size comes at the cost of reduced predictive performance. The novel procedure proposed in this study, the Entropy Ensemble Filter (EEF), uses the most informative training data sets in the ensemble rather than all ensemble members created by the bagging method. The results of this study indicate the efficiency of the proposed method when applied to the simulation of synthetic data: a sinusoidal signal, a sawtooth signal, and a composite signal. The EEF method can reduce the computational time of simulation by around 50% on average while maintaining predictive performance at the same level as the conventional method, in which all of the ensemble models are used for simulation. The analysis of the error gradient (root mean square error of ensemble averages) shows that using the 40% most informative ensemble members of the set initially defined by the user appears to be most effective. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
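
A sketch of the filtering idea: score each bootstrap training set by an entropy measure and keep only the most informative fraction. Scoring by histogram-based Shannon entropy is an assumption here, as is the bin count; the 40% default mirrors the fraction the abstract reports as most effective:

```python
import numpy as np

def entropy_ensemble_filter(bootstrap_sets, keep_frac=0.40, bins=32):
    """Keep the `keep_frac` highest-entropy bootstrap training sets and train
    ensemble members only on those, instead of on every bagged set."""
    def shannon(sample):
        counts, _ = np.histogram(sample, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log(p))
    scores = np.array([shannon(s) for s in bootstrap_sets])
    n_keep = max(1, int(round(keep_frac * len(bootstrap_sets))))
    keep_idx = np.argsort(scores)[::-1][:n_keep]   # most informative first
    return [bootstrap_sets[i] for i in keep_idx]
```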

Open Access Article: Evaluating the Irregularity of Natural Languages
Entropy 2017, 19(10), 521; doi:10.3390/e19100521
Received: 9 August 2017 / Revised: 23 September 2017 / Accepted: 25 September 2017 / Published: 29 September 2017
PDF Full-text (1191 KB) | HTML Full-text | XML Full-text
Abstract
In the present work, we quantify the irregularity of different European languages belonging to four linguistic families (Romance, Germanic, Uralic and Slavic) and of an artificial language (Esperanto). We modified a well-known method to calculate the approximate and sample entropy of written texts. We find differences in the degree of irregularity between the families, and our method, which is based on the search for regularities in a sequence of symbols, consistently distinguishes between natural and synthetically randomized texts. Moreover, we extended our study to the case where multiple scales are accounted for, as in multiscale entropy analysis. Our results revealed that real texts have non-trivial structure compared to the ones obtained from randomization procedures. Full article
(This article belongs to the Section Complexity)

Open Access Article: Secure Communication for Two-Way Relay Networks with Imperfect CSI
Entropy 2017, 19(10), 522; doi:10.3390/e19100522
Received: 30 July 2017 / Revised: 24 September 2017 / Accepted: 26 September 2017 / Published: 29 September 2017
PDF Full-text (361 KB) | HTML Full-text | XML Full-text
Abstract
This paper considers a two-way relay network, where two legitimate users exchange messages through several cooperative relays in the presence of an eavesdropper whose Channel State Information (CSI) is imperfectly known. The Amplify-and-Forward (AF) relay protocol is used. We design the relay beamforming weights to minimize the total relay transmit power, while requiring the Signal-to-Noise Ratios (SNRs) of the legitimate users to be higher than given thresholds and the achievable rate of the eavesdropper to be upper-bounded. Due to the imperfect CSI, a robust optimization problem is formulated. A novel iterative algorithm is proposed, in which the line search technique is applied and feasibility is preserved during iterations. In each iteration, two Quadratically-Constrained Quadratic Programming (QCQP) subproblems and a one-dimensional subproblem are solved optimally. The optimality property of the robust optimization problem is analyzed. Simulation results show that the proposed algorithm performs very close to the non-robust model with perfect CSI in terms of the obtained relay transmit power, and achieves a higher secrecy rate compared to existing work. Numerically, the proposed algorithm converges very quickly, and more than 85% of the problems are solved optimally. Full article
(This article belongs to the Special Issue Information-Theoretic Security)

Open Access Article: Informative Nature and Nonlinearity of Lagged Poincaré Plots Indices in Analysis of Heart Rate Variability
Entropy 2017, 19(10), 523; doi:10.3390/e19100523
Received: 11 August 2017 / Revised: 16 September 2017 / Accepted: 28 September 2017 / Published: 10 October 2017
PDF Full-text (1137 KB) | HTML Full-text | XML Full-text
Abstract
Lagged Poincaré plots have been successful in characterizing abnormal cardiac function. However, current research practice does not favour any specific lag of Poincaré plots, complicating the comparison of results from different researchers analyzing the heart rate of healthy subjects and patients. We researched the informative nature of lagged Poincaré plots in different states of the autonomic nervous system. It was tested in three models: different age groups, groups with a different balance of autonomic regulation, and hypertensive patients. Correlation analysis shows that for lag l = 6, SD1/SD2 has a weak (r = 0.33) correlation with linear parameters of heart rate variability (HRV). For l greater than 6 it displays even less correlation with linear parameters, but the changes in SD1/SD2 become statistically insignificant. Secondly, surrogate data tests show that the real SD1/SD2 is statistically different from its surrogate value, so it can be concluded that the heart rhythm has nonlinear properties. Thirdly, the three models showed that for different functional states of the autonomic nervous system (ANS), the SD1/SD2 ratio varied only for lags l = 5 and 6. All of this allows us to give a cautious recommendation to use SD1/SD2 with lags 5 and 6 as a nonlinear characteristic of HRV. These data could serve as a basis for further research into the standardisation of nonlinear analytic methods. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics II)
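
The SD1/SD2 descriptors of a lagged Poincaré plot have standard definitions: plot each interval RR_i against RR_{i+l} and measure the scatter across and along the identity line. A minimal sketch:

```python
import numpy as np

def lagged_sd1_sd2(rr, lag=6):
    """SD1/SD2 ratio of the lagged Poincare plot (RR_i, RR_{i+lag}).
    SD1 captures short-term variability (across the identity line),
    SD2 long-term variability (along it)."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-lag], rr[lag:]
    sd1 = np.std((y - x) / np.sqrt(2.0))
    sd2 = np.std((y + x) / np.sqrt(2.0))
    return sd1 / sd2
```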

Open Access Feature Paper Article: On the Limiting Behaviour of the Fundamental Geodesics of Information Geometry
Entropy 2017, 19(10), 524; doi:10.3390/e19100524
Received: 13 July 2017 / Revised: 13 September 2017 / Accepted: 28 September 2017 / Published: 30 September 2017
PDF Full-text (386 KB) | HTML Full-text | XML Full-text
Abstract
The Information Geometry of extended exponential families has received much recent attention in a variety of important applications, notably categorical data analysis, graphical modelling and, more specifically, log-linear modelling. The essential geometry here comes from the closure of an exponential family in a high-dimensional simplex. In parallel, there has been a great deal of interest in the purely Fisher Riemannian structure of (extended) exponential families, most especially in the Markov chain Monte Carlo literature. These parallel developments raise challenges, addressed here, at a variety of levels: both theoretical and practical—relatedly, conceptual and methodological. Centrally to this endeavour, this paper makes explicit the underlying geometry of these two areas via an analysis of the limiting behaviour of the fundamental geodesics of Information Geometry, these being Amari’s (+1) and (0)-geodesics, respectively. Overall, a substantially more complete account of the Information Geometry of extended exponential families is provided than has hitherto been the case. We illustrate the importance and benefits of this novel formulation through applications. Full article
(This article belongs to the Special Issue Information Geometry II)

Open Access Article: On Entropy Dynamics for Active “Living” Particles
Entropy 2017, 19(10), 525; doi:10.3390/e19100525
Received: 13 September 2017 / Revised: 20 September 2017 / Accepted: 28 September 2017 / Published: 2 October 2017
PDF Full-text (208 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a modeling approach, followed by entropy calculations, for the dynamics of large systems of interacting active particles viewed as living—hence, complex—systems. Active particles are partitioned into functional subsystems; their state is modeled by a discrete scalar variable, while the state of the overall system is defined by a probability distribution function over the state of the particles. The aim of this paper is to contribute to a further development of the mathematical kinetic theory of active particles. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
Open Access Article: A Formula of Packing Pressure of a Factor Map
Entropy 2017, 19(10), 526; doi:10.3390/e19100526
Received: 26 July 2017 / Revised: 28 September 2017 / Accepted: 30 September 2017 / Published: 4 October 2017
PDF Full-text (238 KB)
Abstract
In this paper, using the notion of packing pressure, we establish a formula for the packing pressure of a factor map. We also give an application to conformal repellers. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
Open Access Feature Paper Article: Coarse-Graining and the Blackwell Order
Entropy 2017, 19(10), 527; doi:10.3390/e19100527
Received: 15 June 2017 / Revised: 13 September 2017 / Accepted: 20 September 2017 / Published: 6 October 2017
PDF Full-text (325 KB) | HTML Full-text | XML Full-text
Abstract
Suppose we have a pair of information channels, $\kappa_1, \kappa_2$, with a common input. The Blackwell order is a partial order over channels that compares $\kappa_1$ and $\kappa_2$ by the maximal expected utility an agent can obtain when decisions are based on the channel outputs. Equivalently, $\kappa_1$ is said to be Blackwell-inferior to $\kappa_2$ if and only if $\kappa_1$ can be constructed by garbling the output of $\kappa_2$. A related partial order stipulates that $\kappa_2$ is more capable than $\kappa_1$ if the mutual information between the input and output is larger for $\kappa_2$ than for $\kappa_1$ for any distribution over inputs. A Blackwell-inferior channel is necessarily less capable. However, examples are known where $\kappa_1$ is less capable than $\kappa_2$ but not Blackwell-inferior. We show that this may even happen when $\kappa_1$ is constructed by coarse-graining the inputs of $\kappa_2$. Such a coarse-graining is a special kind of “pre-garbling” of the channel inputs. This example directly establishes that the expected value of the shared utility function for the coarse-grained channel is larger than it is for the non-coarse-grained channel. This contradicts the intuition that coarse-graining can only destroy information and lead to inferior channels. We also discuss our results in the context of information decompositions. Full article

Open Access Article: Generalized Skew-Normal Negentropy and Its Application to Fish Condition Factor Time Series
Entropy 2017, 19(10), 528; doi:10.3390/e19100528
Received: 7 September 2017 / Revised: 28 September 2017 / Accepted: 30 September 2017 / Published: 6 October 2017
PDF Full-text (337 KB) | HTML Full-text | XML Full-text
Abstract
The problem of measuring the disparity of a particular probability density function from a normal one has been addressed in several recent studies. The most common technique to deal with the problem has been the derivation of exact expressions for information measures over particular distributions. In this paper, we consider a class of asymmetric distributions with a normal kernel, called Generalized Skew-Normal (GSN) distributions. We measure the degree of disparity of these distributions from the normal distribution by using exact expressions for the GSN negentropy in terms of cumulants. Specifically, we focus on the skew-normal and modified skew-normal distributions. Then, we establish the Kullback–Leibler divergences between each GSN distribution and the normal one in terms of their negentropies, in order to develop hypothesis tests for normality. Finally, we apply this result to condition factor time series of anchovies off northern Chile. Full article
(This article belongs to the Special Issue Foundations of Statistics)

Open Access Article: How Can We Fully Use Noiseless Feedback to Enhance the Security of the Broadcast Channel with Confidential Messages
Entropy 2017, 19(10), 529; doi:10.3390/e19100529
Received: 1 August 2017 / Revised: 1 October 2017 / Accepted: 2 October 2017 / Published: 6 October 2017
PDF Full-text (833 KB) | HTML Full-text | XML Full-text
Abstract
The model for a broadcast channel with confidential messages (BC-CM) plays an important role in the physical layer security of modern communication systems. In recent years, it has been shown that a noiseless feedback channel from the legitimate receiver to the transmitter increases the secrecy capacity region of the BC-CM. However, at present, the feedback coding scheme for the BC-CM only focuses on producing secret keys via noiseless feedback, and other usages of the feedback need to be further explored. In this paper, we propose a new feedback coding scheme for the BC-CM. The noiseless feedback in this new scheme is not only used to produce secret keys for the legitimate receiver and the transmitter but is also used to generate update information that allows both receivers (the legitimate receiver and the wiretapper) to improve their channel outputs. From a binary example, we show that this full utilization of noiseless feedback helps to increase the secrecy level of the previous feedback scheme for the BC-CM. Full article
(This article belongs to the Special Issue Network Information Theory)

Open Access Article: Bivariate Partial Information Decomposition: The Optimization Perspective
Entropy 2017, 19(10), 530; doi:10.3390/e19100530
Received: 7 July 2017 / Revised: 21 September 2017 / Accepted: 28 September 2017 / Published: 7 October 2017
PDF Full-text (444 KB)
Abstract
Bertschinger, Rauh, Olbrich, Jost, and Ay (Entropy, 2014) have proposed a definition of a decomposition of the mutual information $MI(X : Y, Z)$ into shared, synergistic, and unique information by way of solving a convex optimization problem. In this paper, we discuss the solution of their Convex Program from theoretical and practical points of view. Full article
Open Access Article: Multivariate Dependence beyond Shannon Information
Entropy 2017, 19(10), 531; doi:10.3390/e19100531
Received: 20 June 2017 / Revised: 18 September 2017 / Accepted: 24 September 2017 / Published: 7 October 2017
PDF Full-text (374 KB) | HTML Full-text | XML Full-text
Abstract
Accurately determining dependency structure is critical to understanding a complex system’s organization. We recently showed that the transfer entropy fails in a key aspect of this—measuring information flow—due to its conflation of dyadic and polyadic relationships. We extend this observation to demonstrate that Shannon information measures (entropy and mutual information, in their conditional and multivariate forms) can fail to accurately ascertain multivariate dependencies due to their conflation of qualitatively different relations among variables. This has broad implications, particularly when employing information to express the organization and mechanisms embedded in complex systems, including the burgeoning efforts to combine complex network theory with information theory. Here, we do not suggest that any aspect of information theory is wrong. Rather, the vast majority of its informational measures are simply inadequate for determining the meaningful relationships among variables within joint probability distributions. We close by demonstrating that such distributions exist across an arbitrary set of variables. Full article

Open Access Article: Bowen Lemma in the Countable Symbolic Space
Entropy 2017, 19(10), 532; doi:10.3390/e19100532
Received: 14 August 2017 / Revised: 27 September 2017 / Accepted: 30 September 2017 / Published: 11 October 2017
PDF Full-text (261 KB) | HTML Full-text | XML Full-text
Abstract
We consider the sets of quasi-regular points in the countable symbolic space. We measure the sizes of the sets by Billingsley-Hausdorff dimension defined by Gibbs measures. It is shown that the dimensions of those sets, always bounded from below by the convergence exponent of the Gibbs measure, are given by a variational principle, which generalizes Li and Ma’s result and Bowen’s result. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
Open Access Article: Cross Entropy Method Based Hybridization of Dynamic Group Optimization Algorithm
Entropy 2017, 19(10), 533; doi:10.3390/e19100533
Received: 7 August 2017 / Revised: 21 September 2017 / Accepted: 29 September 2017 / Published: 9 October 2017
PDF Full-text (793 KB) | HTML Full-text | XML Full-text
Abstract
Recently, a new algorithm named dynamic group optimization (DGO) has been proposed, which lends itself strongly to exploration and exploitation. Although DGO has demonstrated its efficacy in comparison to other classical optimization algorithms, it has two computational drawbacks. The first is related to the two mutation operators of DGO, which may decrease the diversity of the population and limit the search ability. The second is the homogeneity of the updated population information, which is selected only from companions in the same group; this may result in premature convergence and deteriorate the mutation operators. To deal with these two problems, this paper proposes a new hybridized algorithm that combines the dynamic group optimization algorithm with the cross entropy method. The cross entropy method samples the problem space by generating candidate solutions from a distribution, and then updates the distribution based on the better candidate solutions discovered. The cross entropy operator not only enlarges the promising search area, but also guarantees that the new solution takes all of the surrounding useful information into consideration. The proposed algorithm is tested on 23 up-to-date benchmark functions; the experimental results verify that it is more effective and efficient than other contemporary population-based swarm algorithms. Full article
(This article belongs to the Special Issue Entropy-based Data Mining)
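
The cross entropy operator the abstract describes follows the standard CEM loop: sample candidates from a parametric distribution, select the elite fraction, and refit the distribution to the elites. A minimal stand-alone sketch with a Gaussian sampling distribution (parameter values are illustrative):

```python
import numpy as np

def cross_entropy_method(f, dim, n_samples=100, n_elite=10, n_iter=50, seed=0):
    """Minimize f: sample from N(mu, sigma), refit (mu, sigma) to the elites."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(n_iter):
        pop = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = pop[np.argsort([f(x) for x in pop])[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

print(cross_entropy_method(lambda x: np.sum(x**2), dim=5))  # sphere test function
```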

Open Access Article: An Approximated Box Height for Differential-Box-Counting Method to Estimate Fractal Dimensions of Gray-Scale Images
Entropy 2017, 19(10), 534; doi:10.3390/e19100534
Received: 7 June 2017 / Revised: 30 August 2017 / Accepted: 30 September 2017 / Published: 10 October 2017
PDF Full-text (3134 KB) | HTML Full-text | XML Full-text
Abstract
The Fractal Dimension (FD) of an image quantifies its roughness as a real number that correlates strongly with human perception of surface roughness. It has been applied successfully in many computer vision applications, such as texture analysis, segmentation and classification. Several techniques for estimating FD can be found in the literature; one such technique is Differential Box Counting (DBC), whose performance is influenced by many parameters. In particular, the box height is directly related to the gray-level variations over the image grid, which strongly affects the performance of DBC. In this work, a new method for estimating the box height is proposed without changing the other parameters of DBC. The proposed box height has been determined empirically and depends only on the image size. All experiments were performed on a simulated Fractal Brownian Motion (FBM) database and the Brodatz database. It is shown experimentally that the proposed box height improves the performance of DBC, Shifting DBC, Improved DBC and Improved Triangle DBC, yielding estimates closer to the actual FD values of the simulated FBM images. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
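
A sketch of classical DBC for context: for each grid size s, stack boxes of height h over every s×s block and count how many boxes the block's gray levels span. The classical height h = s·G/M is used below; the paper's contribution is to replace it with an empirically determined height depending only on the image size:

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Classical differential box counting on a square gray-scale image."""
    img = np.asarray(img, dtype=float)
    M = img.shape[0]              # image side length (square image assumed)
    G = 256.0                     # number of gray levels
    counts = []
    for s in sizes:
        h = s * G / M             # classical DBC box height (the paper replaces this)
        m = M // s * s
        blocks = img[:m, :m].reshape(m // s, s, m // s, s)
        gmax = blocks.max(axis=(1, 3))
        gmin = blocks.min(axis=(1, 3))
        n_r = np.ceil(gmax / h) - np.ceil(gmin / h) + 1   # boxes spanned per block
        counts.append(n_r.sum())
    # FD is the slope of log(N_r) against log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```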

Open Access Article: Contact Hamiltonian Dynamics: The Concept and Its Use
Entropy 2017, 19(10), 535; doi:10.3390/e19100535
Received: 13 September 2017 / Revised: 7 October 2017 / Accepted: 8 October 2017 / Published: 11 October 2017
PDF Full-text (239 KB) | HTML Full-text | XML Full-text
Abstract
We give a short survey on the concept of contact Hamiltonian dynamics and its use in several areas of physics, namely reversible and irreversible thermodynamics, statistical physics and classical mechanics. Some relevant examples are provided along the way. We conclude by giving insights into possible future directions. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics II)
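For reference, in Darboux coordinates (q^i, p_i, S) the contact Hamiltonian equations underlying this survey take the standard form (in one common sign convention):

\dot{q}^{i} = \frac{\partial H}{\partial p_{i}}, \qquad
\dot{p}_{i} = -\frac{\partial H}{\partial q^{i}} - p_{i}\,\frac{\partial H}{\partial S}, \qquad
\dot{S} = p_{i}\,\frac{\partial H}{\partial p_{i}} - H.

Unlike the symplectic case, the extra variable S lets the flow contract phase-space volume, which is what makes these dynamics natural for the dissipative and thermodynamic systems surveyed in the article.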
Open AccessArticle Hydrodynamics of a Granular Gas in a Heterogeneous Environment
Entropy 2017, 19(10), 536; doi:10.3390/e19100536
Received: 21 August 2017 / Revised: 2 October 2017 / Accepted: 9 October 2017 / Published: 11 October 2017
PDF Full-text (773 KB) | HTML Full-text | XML Full-text
Abstract
We analyze the transport properties of a low density ensemble of identical macroscopic particles immersed in an active fluid. The particles are modeled as inelastic hard spheres (granular gas). The non-homogeneous active fluid is modeled by means of a non-uniform stochastic thermostat. The
[...] Read more.
We analyze the transport properties of a low-density ensemble of identical macroscopic particles immersed in an active fluid. The particles are modeled as inelastic hard spheres (a granular gas). The non-homogeneous active fluid is modeled by means of a non-uniform stochastic thermostat. The theoretical results are validated with a numerical solution of the corresponding kinetic equation (direct simulation Monte Carlo method). We show that a steady flow emerges in the system that is accurately described by Navier-Stokes (NS) hydrodynamics, even for high inelasticity. Surprisingly, we find that the deviations from NS hydrodynamics for this flow grow stronger as the inelasticity decreases. The active fluid action is modeled here with a non-uniform fluctuating volume force. This is a relevant result given that the hydrodynamics of particles in complex environments, such as biological crowded environments, is still a question under intense debate. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Open AccessArticle Investigating the Thermodynamic Performances of TO-Based Metamaterial Tunable Cells with an Entropy Generation Approach
Entropy 2017, 19(10), 538; doi:10.3390/e19100538
Received: 26 August 2017 / Revised: 27 September 2017 / Accepted: 9 October 2017 / Published: 13 October 2017
PDF Full-text (2546 KB) | HTML Full-text | XML Full-text
Abstract
Active control of heat flux can be realized with transformation optics (TO) thermal metamaterials. Recently, a new class of metamaterial tunable cells has been proposed, aiming to significantly reduce the difficulty of fabrication and to flexibly switch functions by employing several cells assembled
[...] Read more.
Active control of heat flux can be realized with transformation optics (TO) thermal metamaterials. Recently, a new class of metamaterial tunable cells has been proposed, aiming to significantly reduce the difficulty of fabrication and to switch functions flexibly by assembling several cells at the relevant positions following the TO design. However, owing to the integration and rotation of materials in tunable cells, extra thermal losses might arise as compared with the previous continuum design. This paper focuses on investigating the thermodynamic properties of tunable cells under the relevant design parameters. The universal expression for the local entropy generation rate in such metamaterial systems is obtained, taking the influence of rotation into account. A series of contrast schemes is established to describe the thermodynamic process and thermal energy distributions from the viewpoint of entropy analysis. Moreover, the effects of the design parameters on thermal dissipation and system irreversibility are investigated. In conclusion, more thermal dissipation and stronger thermodynamic processes occur in systems with larger conductivity ratios and rotation angles. This paper presents a detailed description of the thermodynamic properties of metamaterial tunable cells and provides a reference for selecting appropriate design parameters at the relevant positions so as to fabricate more efficient and energy-economical switchable TO devices. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
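As a point of reference for the entropy analysis described above: for steady heat conduction in an isotropic medium of conductivity k, the classical local entropy generation rate is

\dot{s}_{\mathrm{gen}} = \frac{k\,\lvert\nabla T\rvert^{2}}{T^{2}},

and for the rotated, anisotropic materials of TO designs the conductivity becomes a tensor \boldsymbol{\kappa}, giving \dot{s}_{\mathrm{gen}} = \nabla T \cdot (\boldsymbol{\kappa}\,\nabla T)/T^{2}. The article's universal expression is of this tensorial type, with the cell rotation entering through \boldsymbol{\kappa}.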
Open AccessFeature PaperArticle Kovacs-Like Memory Effect in Athermal Systems: Linear Response Analysis
Entropy 2017, 19(10), 539; doi:10.3390/e19100539
Received: 19 September 2017 / Revised: 10 October 2017 / Accepted: 11 October 2017 / Published: 13 October 2017
PDF Full-text (4976 KB) | HTML Full-text | XML Full-text
Abstract
We analyze the emergence of Kovacs-like memory effects in athermal systems within the linear response regime. This is done by starting from both the master equation for the probability distribution and the equations for the physically-relevant moments. The general results are applied to
[...] Read more.
We analyze the emergence of Kovacs-like memory effects in athermal systems within the linear response regime. This is done by starting from both the master equation for the probability distribution and the equations for the physically relevant moments. The general results are applied to a broad class of models with conserved momentum and non-conserved energy. Our theoretical predictions, obtained within the first Sonine approximation, show excellent agreement with the numerical results. Furthermore, we prove that the observed non-monotonic relaxation is consistent with the monotonic decay of the non-equilibrium entropy. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Open AccessArticle Complexity-Entropy Maps as a Tool for the Characterization of the Clinical Electrophysiological Evolution of Patients under Pharmacological Treatment with Psychotropic Drugs
Entropy 2017, 19(10), 540; doi:10.3390/e19100540
Received: 26 July 2017 / Revised: 5 October 2017 / Accepted: 6 October 2017 / Published: 13 October 2017
PDF Full-text (1437 KB) | HTML Full-text | XML Full-text
Abstract
In clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings is sometimes insufficient and takes too much time. Tools from information theory or nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this
[...] Read more.
In clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings is sometimes insufficient and takes too much time. Tools from information theory or nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this problem. In this work, we introduce a novel method: the permutation Lempel-Ziv complexity vs. permutation entropy map. We apply this method to the EEGs of two patients with specific diagnosed pathologies during the respective follow-up of pharmacological changes, in order to detect alterations that are not evident with the usual inspection method. The method allows comparison between different states of the patients' treatment and with a healthy control group, giving global information about the signal and supplementing the traditional visual inspection of the EEG. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
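One axis of the proposed map, the permutation entropy, can be computed with a short routine like the sketch below (the standard Bandt-Pompe construction; the other axis applies Lempel-Ziv complexity to the same ordinal symbol sequence). Parameter values are illustrative.

import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    # Normalized permutation entropy of a 1-D signal (Bandt-Pompe patterns).
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        # Map each embedded vector to its ordinal (rank) pattern.
        key = tuple(np.argsort(x[i:i + order * delay:delay]))
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    # Shannon entropy of the pattern distribution, normalized to [0, 1].
    return -np.sum(p * np.log2(p)) / np.log2(factorial(order))

# Example (hypothetical EEG channel as a 1-D array):
# pe = permutation_entropy(eeg_channel, order=4)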
Open AccessArticle Stationary Wavelet Singular Entropy and Kernel Extreme Learning for Bearing Multi-Fault Diagnosis
Entropy 2017, 19(10), 541; doi:10.3390/e19100541
Received: 7 September 2017 / Revised: 5 October 2017 / Accepted: 9 October 2017 / Published: 13 October 2017
PDF Full-text (402 KB) | HTML Full-text | XML Full-text
Abstract
The behavioural diagnostics of bearings plays an essential role in the management of many rotating machine systems. However, current diagnostic methods do not deliver satisfactory results with respect to failures under variable rotational speed. In this paper, we consider the Shannon entropy
[...] Read more.
The behavioural diagnostics of bearings plays an essential role in the management of many rotating machine systems. However, current diagnostic methods do not deliver satisfactory results with respect to failures under variable rotational speed. In this paper, we consider the Shannon entropy as an important fault signature pattern. To compute the entropy, we propose combining the stationary wavelet transform and singular value decomposition. The resulting feature extraction method, which we call stationary wavelet singular entropy (SWSE), aims to improve the accuracy of bearing failure diagnostics by finding a small number of high-quality fault signature patterns. The features extracted by the SWSE are then passed on to a kernel extreme learning machine (KELM) classifier. The proposed SWSE-KELM algorithm is evaluated using two bearing vibration signal databases obtained from Case Western Reserve University. We compare our SWSE feature extraction method to other well-known methods in the literature, such as stationary wavelet packet singular entropy (SWPSE) and decimated wavelet packet singular entropy (DWPSE). The experimental results show that SWSE-KELM consistently outperforms both the SWPSE-KELM and DWPSE-KELM methods. Further, our SWSE method requires fewer features than the other two evaluated methods, which makes our SWSE-KELM algorithm simpler and faster. Full article
(This article belongs to the Section Information Theory)
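A minimal sketch of how such a stationary-wavelet singular-entropy feature could be assembled with PyWavelets and NumPy follows; the wavelet, level, and the exact matrix construction are assumptions for illustration, not the paper's verified settings.

import numpy as np
import pywt

def swse(signal, wavelet='db4', level=4):
    # Assumed pipeline: SWT -> singular values -> Shannon entropy.
    # Note: pywt.swt requires len(signal) divisible by 2**level.
    coeffs = pywt.swt(signal, wavelet, level=level)  # list of (cA, cD) pairs
    # Stack detail coefficients into a matrix and take its singular values.
    A = np.vstack([cD for (_, cD) in coeffs])
    s = np.linalg.svd(A, compute_uv=False)
    p = s / s.sum()
    # Shannon entropy of the normalized singular-value spectrum.
    return -np.sum(p * np.log(p + 1e-12))

# Illustrative usage on a random signal of suitable length.
feat = swse(np.random.randn(1024))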
Open AccessArticle Backtracking and Mixing Rate of Diffusion on Uncorrelated Temporal Networks
Entropy 2017, 19(10), 542; doi:10.3390/e19100542
Received: 31 August 2017 / Revised: 6 October 2017 / Accepted: 9 October 2017 / Published: 13 October 2017
PDF Full-text (345 KB) | HTML Full-text | XML Full-text
Abstract
We consider the problem of diffusion on temporal networks, where the dynamics of each edge is modelled by an independent renewal process. Despite the apparent simplicity of the model, the trajectories of a random walker exhibit non-trivial properties. Here, we quantify the walker’s
[...] Read more.
We consider the problem of diffusion on temporal networks, where the dynamics of each edge is modelled by an independent renewal process. Despite the apparent simplicity of the model, the trajectories of a random walker exhibit non-trivial properties. Here, we quantify the walker's tendency to backtrack at each step (to return to the node it came from), as well as the resulting effect on the mixing rate of the process. As we show through empirical data, non-Poisson dynamics may significantly slow down diffusion due to backtracking, by a mechanism intrinsically different from the standard bus paradox and related temporal mechanisms. We conclude by discussing the implications of our work for the interpretation of results generated by null models of temporal networks. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
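A toy simulation illustrates the quantity being measured. Here edge activations are drawn uniformly at random (a memoryless proxy), whereas the paper's point is precisely how non-Poisson renewal processes change this backtracking statistic; all parameters are illustrative.

import random

def backtrack_fraction(n_nodes=50, n_edges=200, steps=10000, seed=1):
    # Estimate a temporal walker's backtracking probability on a random edge set.
    rng = random.Random(seed)
    edges = [tuple(rng.sample(range(n_nodes), 2)) for _ in range(n_edges)]
    current, previous = 0, None
    moves = backtracks = 0
    for _ in range(steps):
        u, v = rng.choice(edges)          # next edge activation (memoryless proxy)
        if current in (u, v):             # the walker rides the active edge
            nxt = v if current == u else u
            if nxt == previous:           # step back to where it came from
                backtracks += 1
            previous, current = current, nxt
            moves += 1
    return backtracks / max(moves, 1)

print(backtrack_fraction())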
Open AccessArticle Is Cetacean Intelligence Special? New Perspectives on the Debate
Entropy 2017, 19(10), 543; doi:10.3390/e19100543
Received: 28 June 2017 / Revised: 30 August 2017 / Accepted: 2 September 2017 / Published: 13 October 2017
PDF Full-text (2097 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In recent years, the interpretation of our observations of animal behaviour, in particular that of cetaceans, has captured a substantial amount of attention in the scientific community. The traditional view that supports a special intellectual status for this mammalian order has fallen under
[...] Read more.
In recent years, the interpretation of our observations of animal behaviour, in particular that of cetaceans, has captured a substantial amount of attention in the scientific community. The traditional view that supports a special intellectual status for this mammalian order has fallen under significant scrutiny, in large part due to problems of how to define and test the cognitive performance of animals. This paper presents evidence supporting complex cognition in cetaceans obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally and serves as the foundation from which intelligence can emerge. This theoretical framework explaining animal intelligence in neural computational terms is supported using a new mathematical model. Two pathways leading to higher levels of intelligence in animals are identified, each reflecting a trade-off either in energetic requirements or the number of neurons used. A description of the evolutionary pathway that led to increased cognitive capacities in cetacean brains is detailed and evidence supporting complex cognition in cetaceans is presented. This paper also provides an interpretation of the adaptive function of cetacean neuronal traits. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
Open AccessArticle Equilibration in the Nosé–Hoover Isokinetic Ensemble: Effect of Inter-Particle Interactions
Entropy 2017, 19(10), 544; doi:10.3390/e19100544
Received: 19 September 2017 / Revised: 11 October 2017 / Accepted: 11 October 2017 / Published: 14 October 2017
PDF Full-text (468 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the stationary and dynamic properties of the celebrated Nosé–Hoover dynamics of many-body interacting Hamiltonian systems, with an emphasis on the effect of inter-particle interactions. To this end, we consider a model system with both short- and long-range interactions. The Nosé–Hoover dynamics
[...] Read more.
We investigate the stationary and dynamic properties of the celebrated Nosé–Hoover dynamics of many-body interacting Hamiltonian systems, with an emphasis on the effect of inter-particle interactions. To this end, we consider a model system with both short- and long-range interactions. The Nosé–Hoover dynamics aims to generate the canonical equilibrium distribution of a system at a desired temperature by employing a set of time-reversible, deterministic equations of motion. A signature of canonical equilibrium is a single-particle momentum distribution that is Gaussian. We find that the equilibrium properties of the system within the Nosé–Hoover dynamics coincide with those within the canonical ensemble. Moreover, starting from out-of-equilibrium initial conditions, the average kinetic energy of the system relaxes to its target value over a size-independent timescale. However, quite surprisingly, our results indicate that under the same conditions, and with only long-range interactions present in the system, the momentum distribution relaxes to its Gaussian form in equilibrium over a timescale that diverges with the system size. On adding short-range interactions, the relaxation occurs over a timescale with a much weaker dependence on system size, and this system-size dependence vanishes when only short-range interactions are present. An implication of such ultra-slow relaxation under purely long-range interactions is that macroscopic observables other than the average kinetic energy, when estimated within the Nosé–Hoover dynamics, may take an unusually long time to relax to their canonical equilibrium values. Our work underlines the crucial role that interactions play in deciding the equivalence between the Nosé–Hoover and canonical equilibria. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
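For reference, the Nosé–Hoover equations of motion discussed here read, in their common form for N particles in d dimensions with thermostat "mass" Q,

\dot{q}_{i} = \frac{p_{i}}{m}, \qquad
\dot{p}_{i} = F_{i} - \zeta\,p_{i}, \qquad
\dot{\zeta} = \frac{1}{Q}\left(\sum_{i}\frac{p_{i}^{2}}{m} - d N k_{B} T\right),

whose stationary state is designed to be the canonical distribution at temperature T, hence the Gaussian single-particle momentum distribution used above as the signature of equilibration.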
Open AccessArticle Compressed Secret Key Agreement: Maximizing Multivariate Mutual Information per Bit
Entropy 2017, 19(10), 545; doi:10.3390/e19100545
Received: 5 July 2017 / Revised: 2 October 2017 / Accepted: 3 October 2017 / Published: 14 October 2017
PDF Full-text (311 KB)
Abstract
The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components in order to generate a common secret
[...] Read more.
The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components in order to generate a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion by offering a more structured achieving scheme and some simpler conjectures to prove. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle A Permutation Disalignment Index-Based Complex Network Approach to Evaluate Longitudinal Changes in Brain-Electrical Connectivity
Entropy 2017, 19(10), 548; doi:10.3390/e19100548
Received: 26 September 2017 / Revised: 12 October 2017 / Accepted: 13 October 2017 / Published: 17 October 2017
PDF Full-text (2554 KB) | XML Full-text
Abstract
In the study of neurological disorders, Electroencephalographic (EEG) signal processing can provide valuable information because abnormalities in the interaction between neuron circuits may reflect on macroscopic abnormalities in the electrical potentials that can be detected on the scalp. A Mild Cognitive Impairment (MCI)
[...] Read more.
In the study of neurological disorders, electroencephalographic (EEG) signal processing can provide valuable information because abnormalities in the interaction between neuron circuits may be reflected in macroscopic abnormalities of the electrical potentials detectable on the scalp. A Mild Cognitive Impairment (MCI) condition, when caused by a disorder degenerating into dementia, affects brain connectivity. Motivated by the promising results achieved through the recently developed descriptor of coupling strength between EEG signals, the Permutation Disalignment Index (PDI), the present paper introduces a novel PDI-based complex network model to evaluate longitudinal variations in brain-electrical connectivity. A group of 33 amnestic MCI subjects was enrolled and followed up over four months. The results were compared to MoCA (Montreal Cognitive Assessment) tests, which score the cognitive abilities of the patient. A significant negative correlation was observed between the MoCA variation and the variation of the characteristic path length (λ) (r = -0.56, p = 0.0006), whereas significant positive correlations were observed between the MoCA variation and the variations of the clustering coefficient (CC, r = 0.58, p = 0.0004), global efficiency (GE, r = 0.57, p = 0.0005) and small-worldness (SW, r = 0.57, p = 0.0005). Cognitive decline thus seems to reflect an underlying cortical "disconnection" phenomenon: worsened subjects indeed showed an increased λ and decreased CC, GE and SW. The PDI-based connectivity model proposed in the present work could be a novel tool for the objective quantification of longitudinal brain-electrical connectivity changes in MCI subjects. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
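The four graph metrics correlated with MoCA above can be computed from any symmetric coupling matrix (such as one built from pairwise PDI values) along these lines; the binarization threshold and the NetworkX calls are illustrative choices, not the paper's pipeline.

import networkx as nx
import numpy as np

def graph_metrics(W, threshold=0.5):
    # Binarize a symmetric coupling matrix and compute global network metrics.
    # Assumes the resulting graph is connected.
    A = (np.asarray(W) > threshold).astype(int)
    np.fill_diagonal(A, 0)
    G = nx.from_numpy_array(A)
    return {
        "lambda": nx.average_shortest_path_length(G),  # characteristic path length
        "CC": nx.average_clustering(G),                # clustering coefficient
        "GE": nx.global_efficiency(G),                 # global efficiency
        "SW": nx.sigma(G, niter=5, nrand=5),           # small-worldness (slow)
    }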
Open AccessArticle Risk Assessment and Decision-Making under Uncertainty in Tunnel and Underground Engineering
Entropy 2017, 19(10), 549; doi:10.3390/e19100549
Received: 31 August 2017 / Revised: 29 September 2017 / Accepted: 16 October 2017 / Published: 18 October 2017
PDF Full-text (5869 KB) | HTML Full-text | XML Full-text
Abstract
The impact of uncertainty on risk assessment and decision-making is increasingly being prioritized, especially for large geotechnical projects such as tunnels, where uncertainty is often the main source of risk. Epistemic uncertainty, which can be reduced, is the focus of attention. In this
[...] Read more.
The impact of uncertainty on risk assessment and decision-making is increasingly being prioritized, especially for large geotechnical projects such as tunnels, where uncertainty is often the main source of risk. Epistemic uncertainty, which can be reduced, is the focus of attention. In this study, the existing entropy-risk decision model is first discussed and analyzed, and its deficiencies are improved upon and overcome. Then, this study addresses the fact that existing studies only consider parameter uncertainty and ignore the influence of model uncertainty. The focus here is on the issue of model uncertainty and on differences in risk consciousness among decision-makers, for which utility theory is introduced into the model. Finally, a risk decision model based on sensitivity analysis and the tolerance cost is proposed, which can improve decision-making efficiency. This research can provide guidance and a reference for the evaluation of, and decision-making on, complex systems engineering problems, and indicates a direction for further research on risk assessment and decision-making issues. Full article
(This article belongs to the Special Issue Entropy for Characterization of Uncertainty in Risk and Reliability)
Open AccessFeature PaperArticle Entropy of Entropy: Measurement of Dynamical Complexity for Biological Systems
Entropy 2017, 19(10), 550; doi:10.3390/e19100550
Received: 13 September 2017 / Revised: 10 October 2017 / Accepted: 16 October 2017 / Published: 18 October 2017
PDF Full-text (856 KB) | XML Full-text
Abstract
Healthy systems exhibit complex dynamics in the changing of information embedded in physiologic signals on multiple time scales, which can be quantified by employing multiscale entropy (MSE) analysis. Here, we propose a measure of complexity, called entropy of entropy (EoE) analysis. The analysis
[...] Read more.
Healthy systems exhibit complex dynamics in the changing of information embedded in physiologic signals on multiple time scales, which can be quantified by employing multiscale entropy (MSE) analysis. Here, we propose a measure of complexity, called entropy of entropy (EoE) analysis. The analysis combines the features of MSE with an alternative measure of information, called superinformation, useful for DNA sequences. In this work, we apply the hybrid analysis to cardiac interbeat interval time series. We find that the EoE value is significantly higher for the healthy than for the pathologic groups. In particular, a short time series of 70 heartbeats is sufficient for EoE analysis with an accuracy of 81%, while a longer series of 500 beats yields an accuracy of 90%. In addition, the EoE versus Shannon entropy plot of heart rate time series exhibits an inverted-U relationship, with the maximal EoE value appearing between extreme order and extreme disorder. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
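The two-stage construction can be sketched as follows, with the window length and number of quantization levels as assumed parameters (the paper's exact discretization may differ):

import numpy as np

def entropy_of_entropy(x, window=5, n_levels=10):
    # Step 1: Shannon entropy inside each non-overlapping window.
    # Step 2: Shannon entropy of the discretized sequence of window entropies.
    x = np.asarray(x, dtype=float)

    def shannon(vals, bins):
        counts, _ = np.histogram(vals, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log(p))

    n_win = len(x) // window
    first = [shannon(x[i*window:(i+1)*window], bins=n_levels) for i in range(n_win)]
    return shannon(np.array(first), bins=n_levels)

# Example (hypothetical RR-interval array):
# eoe = entropy_of_entropy(rr_intervals, window=5)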
Open AccessArticle Cosmographic Thermodynamics of Dark Energy
Entropy 2017, 19(10), 551; doi:10.3390/e19100551
Received: 4 August 2017 / Revised: 2 October 2017 / Accepted: 9 October 2017 / Published: 19 October 2017
PDF Full-text (373 KB) | HTML Full-text | XML Full-text
Abstract
Dark energy’s thermodynamics is here revised giving particular attention to the role played by specific heats and entropy in a flat Friedmann-Robertson-Walker universe. Under the hypothesis of adiabatic heat exchanges, we rewrite the specific heats through cosmographic, model-independent quantities and we trace their
[...] Read more.
The thermodynamics of dark energy is revised here, giving particular attention to the role played by specific heats and entropy in a flat Friedmann-Robertson-Walker universe. Under the hypothesis of adiabatic heat exchanges, we rewrite the specific heats through cosmographic, model-independent quantities and trace their evolution in terms of the redshift z. We demonstrate that dark energy may be modeled as a perfect gas only if the Mayer relation is preserved. In particular, we find that the Mayer relation holds if j_q > 1/2. This result turns out to be general, so that even at the transition time the jerk parameter j cannot violate the condition j_tr > 1/2. This outcome rules out models which predict the opposite, whereas it turns out to be compatible with the concordance paradigm. We thus compare our bounds with the ΛCDM model, highlighting that a constant dark energy term seems to be compatible with the specific heat thermodynamics so obtained, after a precise redshift domain. In our treatment, we show the degeneracy between unified dark energy models with zero sound speed and the concordance paradigm. Under this scheme, we suggest that the cosmological constant may be viewed as an effective approach to dark energy at either small or high redshift domains. Last but not least, we discuss how to reconstruct the dark energy entropy from the specific heats, and we finally express both the entropy and the specific heats in terms of the luminosity distance d_L, in order to fix constraints on them through cosmic data. Full article
(This article belongs to the Special Issue Dark Energy)
Open AccessArticle Prediction Model of the Power System Frequency Using a Cross-Entropy Ensemble Algorithm
Entropy 2017, 19(10), 552; doi:10.3390/e19100552
Received: 20 September 2017 / Revised: 17 October 2017 / Accepted: 17 October 2017 / Published: 19 October 2017
PDF Full-text (835 KB)
Abstract
Frequency prediction after a disturbance has received increasing research attention given its substantial value in providing a decision-making foundation for power system emergency control. With the advancing development of machine learning, analyzing power systems with machine-learning methods has become completely different from traditional
[...] Read more.
Frequency prediction after a disturbance has received increasing research attention given its substantial value in providing a decision-making foundation for power system emergency control. With the advancing development of machine learning, analyzing power systems with machine-learning methods has become completely different from traditional approaches. In this paper, an ensemble algorithm using cross-entropy as a combination strategy is presented to address the trade-off between prediction accuracy and calculation speed. The prediction difficulty caused by inadequate numbers of severe disturbance samples is also overcome by the ensemble model. In the proposed ensemble algorithm, base learners are selected following the principle of diversity, which guarantees the ensemble algorithm's accuracy. Cross-entropy is applied to evaluate the fitting performance of the base learners and to set the weight coefficients in the ensemble algorithm. Subsequently, an online prediction model based on the algorithm is established that integrates training, prediction and updating. In the Western System Coordinating Council 9-bus (WSCC 9) system and the Institute of Electrical and Electronics Engineers 39-bus (IEEE 39) system, the algorithm is shown to significantly improve the prediction accuracy in both sample-rich and sample-poor situations, verifying the effectiveness and superiority of the proposed ensemble algorithm. Full article
(This article belongs to the Section Complexity)
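One plausible reading of the combination step is to weight each base learner inversely to the cross-entropy between its normalized prediction and the target profile, as sketched below; this illustrates the idea and is not the paper's exact formula.

import numpy as np

def cross_entropy_weights(predictions, target, eps=1e-12):
    # predictions: list of non-negative prediction profiles (one per base learner).
    # Lower cross-entropy against the target means a better fit, hence more weight.
    t = np.asarray(target, dtype=float)
    t = t / t.sum()
    ces = []
    for pred in predictions:
        p = np.asarray(pred, dtype=float)
        p = p / p.sum()
        ces.append(-np.sum(t * np.log(p + eps)))  # cross-entropy H(t, p)
    inv = 1.0 / np.array(ces)
    return inv / inv.sum()

# The ensemble output is then the weighted sum of the base-learner predictions.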
Open AccessArticle Rainfall Network Optimization Using Radar and Entropy
Entropy 2017, 19(10), 553; doi:10.3390/e19100553
Received: 9 September 2017 / Revised: 14 October 2017 / Accepted: 16 October 2017 / Published: 19 October 2017
PDF Full-text (2844 KB) | XML Full-text
Abstract
In this study, a method combining radar and entropy was proposed to design a rainfall network. Owing to the shortage of rain gauges in mountain areas, weather radars are used to measure rainfall over catchments. The major advantage of radar is that it
[...] Read more.
In this study, a method combining radar and entropy was proposed to design a rainfall network. Owing to the shortage of rain gauges in mountainous areas, weather radars are used to measure rainfall over catchments. The major advantage of radar is that it can observe rainfall widely within a short time. However, the rainfall data obtained by radar do not necessarily correspond to those observed by ground-based rain gauges, so the in-situ rainfall data from telemetering rain gauges were used to calibrate the radar system. The rainfall intensity, as well as its distribution over the catchment, can then be obtained using radar. Once the rainfall data of past years at the desired locations over the catchment were generated, probability-based entropy was applied to optimize the rainfall network. This method is applicable in remote and mountainous areas, and its most important utility is to construct an optimal rainfall network in an ungauged catchment. The design of a rainfall network in the catchment of the Feitsui Reservoir is used to illustrate the various steps as well as the reliability of the method. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
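Once radar-derived rainfall series are available at every candidate site, an entropy-based design can be sketched as a greedy search that maximizes the joint entropy of the selected gauges. This is a common heuristic for such problems and an assumption here; the article's precise criterion may differ.

import numpy as np

def joint_entropy(data, bins=8):
    # Shannon entropy (nats) of the joint distribution of the columns of `data`.
    symbols = np.stack([np.digitize(col, np.histogram_bin_edges(col, bins))
                        for col in data.T], axis=1)
    _, counts = np.unique(symbols, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def greedy_network(data, n_gauges):
    # data: array of shape (time steps, candidate sites).
    # Greedily add the site that most increases the joint entropy.
    chosen = []
    for _ in range(n_gauges):
        rest = [j for j in range(data.shape[1]) if j not in chosen]
        best = max(rest, key=lambda j: joint_entropy(data[:, chosen + [j]]))
        chosen.append(best)
    return chosen

# Illustrative usage with synthetic data for 20 candidate sites.
sites = greedy_network(np.random.gamma(2.0, 1.0, (500, 20)), n_gauges=5)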
Open AccessArticle Upper Bounds for the Rate Distortion Function of Finite-Length Data Blocks of Gaussian WSS Sources
Entropy 2017, 19(10), 554; doi:10.3390/e19100554
Received: 19 September 2017 / Revised: 14 October 2017 / Accepted: 15 October 2017 / Published: 19 October 2017
PDF Full-text (256 KB) | XML Full-text
Abstract
In this paper, we present upper bounds for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources and we propose coding strategies to achieve such bounds. In order to obtain those bounds, we previously derive new
[...] Read more.
In this paper, we present upper bounds for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources, and we propose coding strategies to achieve such bounds. In order to obtain those bounds, we first derive new results on the discrete Fourier transform (DFT) of WSS processes. Full article
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
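Such bounds build on the classical reverse water-filling characterization: for a Gaussian vector with covariance eigenvalues \lambda_1, \dots, \lambda_n under mean-squared-error distortion,

R(\theta) = \frac{1}{n}\sum_{k=1}^{n} \max\left\{0,\ \frac{1}{2}\log_{2}\frac{\lambda_{k}}{\theta}\right\},
\qquad
D(\theta) = \frac{1}{n}\sum_{k=1}^{n} \min\{\theta,\ \lambda_{k}\},

where the water level \theta is swept to trace the rate-distortion curve; the bounds in the paper connect these finite-length block eigenvalues to DFT-based quantities for WSS processes.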
Open AccessArticle The Prior Can Often Only Be Understood in the Context of the Likelihood
Entropy 2017, 19(10), 555; doi:10.3390/e19100555
Received: 26 August 2017 / Revised: 30 September 2017 / Accepted: 14 October 2017 / Published: 19 October 2017
PDF Full-text (241 KB)
Abstract
A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key
[...] Read more.
A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Open AccessArticle Multivariate Multiscale Symbolic Entropy Analysis of Human Gait Signals
Entropy 2017, 19(10), 557; doi:10.3390/e19100557
Received: 25 September 2017 / Revised: 13 October 2017 / Accepted: 17 October 2017 / Published: 19 October 2017
PDF Full-text (1013 KB)
Abstract
The complexity quantification of human gait time series has received considerable interest for wearable healthcare. Symbolic entropy is one of the most prevalent algorithms used to measure the complexity of a time series, but it fails to account for the multiple time scales
[...] Read more.
The complexity quantification of human gait time series has received considerable interest in wearable healthcare. Symbolic entropy is one of the most prevalent algorithms used to measure the complexity of a time series, but it fails to account for the multiple time scales and the multi-channel statistical dependence inherent in such time series. To overcome this problem, multivariate multiscale symbolic entropy is proposed in this paper to distinguish the complexity of human gait signals in health and disease. The embedding dimension, time delay and quantization levels are appropriately designed to construct the similarity of signals for calculating the complexity of human gait. The proposed method can accurately distinguish healthy and pathologic groups from realistic multivariate human gait time series on multiple scales. It strongly supports wearable healthcare owing to its simplicity, robustness, and fast computation. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
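A compact sketch of one such multivariate multiscale symbolic entropy follows: coarse-grain each channel, quantize to a small alphabet, then take the Shannon entropy of the joint cross-channel symbols. The quantization scheme and parameter values are illustrative assumptions, not the paper's exact design.

import numpy as np

def coarse_grain(x, scale):
    # Average consecutive non-overlapping windows of length `scale`.
    n = len(x) // scale
    return x[:n*scale].reshape(n, scale).mean(axis=1)

def mms_symbolic_entropy(signals, scale=2, levels=4):
    # signals: list of 1-D arrays, one per channel (e.g., left/right stride times).
    grained = [coarse_grain(np.asarray(s, dtype=float), scale) for s in signals]
    L = min(len(g) for g in grained)
    # Quantize each channel into equal-width levels over its own range.
    symbols = np.stack(
        [np.digitize(g[:L], np.linspace(g.min(), g.max(), levels + 1)[1:-1])
         for g in grained], axis=1)
    # Shannon entropy of the joint (cross-channel) symbol distribution.
    _, counts = np.unique(symbols, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

# Illustrative usage on two synthetic gait channels.
h = mms_symbolic_entropy([np.random.randn(1000), np.random.randn(1000)], scale=3)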

Other

Jump to: Research

Open AccessLetter A Combinatorial Grassmannian Representation of the Magic Three-Qubit Veldkamp Line
Entropy 2017, 19(10), 556; doi:10.3390/e19100556
Received: 17 September 2017 / Revised: 13 October 2017 / Accepted: 17 October 2017 / Published: 19 October 2017
PDF Full-text (235 KB)
Abstract
It is demonstrated that the magic three-qubit Veldkamp line occurs naturally within the Veldkamp space of a combinatorial Grassmannian of type G2(7), V(G2(7)). The lines of the ambient symplectic polar
[...] Read more.
It is demonstrated that the magic three-qubit Veldkamp line occurs naturally within the Veldkamp space of a combinatorial Grassmannian of type G_2(7), V(G_2(7)). The lines of the ambient symplectic polar space are those lines of V(G_2(7)) whose cores feature an odd number of points of G_2(7). After introducing the basic properties of the three different types of points and seven distinct types of lines of V(G_2(7)), we explicitly show the combinatorial Grassmannian composition of the magic Veldkamp line; we first give representatives of the points and lines of its core generalized quadrangle GQ(2, 2), and then additional points and lines of a specific elliptic quadric Q^-(5, 2), a hyperbolic quadric Q^+(5, 2), and a quadratic cone Q̂(4, 2) that are centered on the GQ(2, 2). In particular, each point of Q^+(5, 2) is represented by a Pasch configuration and its complementary line, the (Schläfli) double-six of points in Q^-(5, 2) comprises six Cayley–Salmon configurations and six Desargues configurations with their complementary points, and the remaining Cayley–Salmon configuration stands for the vertex of Q̂(4, 2). Full article
(This article belongs to the Special Issue Quantum Mechanics: From Foundations to Information Technologies)
