Table of Contents

Entropy, Volume 19, Issue 9 (September 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Cover Story: In order to understand the logical architecture of living systems, von Neumann introduced the idea [...]
Displaying articles 1-73

Editorial


Open Access Editorial: Nonequilibrium Phenomena in Confined Systems
Entropy 2017, 19(9), 507; doi:10.3390/e19090507
Received: 19 September 2017 / Revised: 19 September 2017 / Accepted: 19 September 2017 / Published: 20 September 2017
PDF Full-text (149 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)

Research


Open Access Article: Entropies of Weighted Sums in Cyclic Groups and an Application to Polar Codes
Entropy 2017, 19(9), 235; doi:10.3390/e19090235
Received: 14 February 2017 / Revised: 5 April 2017 / Accepted: 5 April 2017 / Published: 7 September 2017
PDF Full-text (309 KB) | HTML Full-text | XML Full-text
Abstract
In this note, the following basic question is explored: in a cyclic group, how are the Shannon entropies of the sum and difference of i.i.d. random variables related to each other? For the integer group, we show that they can differ by any real number additively, but not too much multiplicatively; on the other hand, for Z/3Z, the entropy of the difference is always at least as large as that of the sum. These results are closely related to the study of more-sums-than-differences (i.e., MSTD) sets in additive combinatorics. We also investigate polar codes for q-ary input channels using non-canonical kernels to construct the generator matrix and present applications of our results to constructing polar codes with significantly improved error probability compared to the canonical construction. Full article
(This article belongs to the Special Issue Entropy and Information Inequalities)
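
The Z/3Z claim above is easy to verify numerically. Below is a minimal sketch (not from the paper; the function names are ours) that draws random distributions on Z/3Z and checks that the entropy of X − Y is never smaller than that of X + Y for i.i.d. X, Y:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def sum_diff_entropies(p, n=3):
    """Entropies of X+Y and X-Y (mod n) for i.i.d. X, Y ~ p."""
    p_sum, p_diff = np.zeros(n), np.zeros(n)
    for i in range(n):
        for j in range(n):
            p_sum[(i + j) % n] += p[i] * p[j]
            p_diff[(i - j) % n] += p[i] * p[j]
    return entropy(p_sum), entropy(p_diff)

rng = np.random.default_rng(0)
for _ in range(10_000):
    p = rng.dirichlet(np.ones(3))       # random distribution on Z/3Z
    h_sum, h_diff = sum_diff_entropies(p)
    assert h_diff >= h_sum - 1e-12      # H(X-Y) >= H(X+Y), as the paper states
print("No counterexample found on Z/3Z")
```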

Open Access Article: Investigating the Relationship between Classification Quality and SMT Performance in Discriminative Reordering Models
Entropy 2017, 19(9), 340; doi:10.3390/e19090340
Received: 2 June 2017 / Revised: 18 June 2017 / Accepted: 23 June 2017 / Published: 24 August 2017
PDF Full-text (1923 KB) | HTML Full-text | XML Full-text
Abstract
Reordering is one of the most important factors affecting the quality of the output in statistical machine translation (SMT). Many of the approaches proposed to address the reordering problem are discriminative reordering models (DRMs). The core component of a DRM is a classifier which tries to predict the correct word order of the sentence. Unfortunately, the relationship between classification quality and ultimate SMT performance has not been investigated to date. Understanding this relationship will allow researchers to select the classifier that results in the best possible MT quality. It might be assumed that there is a monotonic relationship between classification quality and SMT performance, i.e., any improvement in classification performance will be monotonically reflected in overall SMT quality. In this paper, we experimentally show that this assumption does not always hold, i.e., an improvement in classification performance might actually degrade the quality of an SMT system, from the point of view of MT automatic evaluation metrics. However, we show that if the improvement in the classification performance is high enough, we can expect the SMT quality to improve as well. In addition, we show that there is a negative relationship between classification accuracy and SMT performance in imbalanced parallel corpora. For these types of corpora, we provide evidence that, for the evaluation of the classifier, macro-averaged metrics such as the macro-averaged F-measure are better suited than accuracy, the metric commonly used to date. Full article
(This article belongs to the Section Statistical Mechanics)
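
The final point, that accuracy can be misleading on imbalanced data while macro-averaged metrics are not, is easy to illustrate with scikit-learn (toy labels, not the paper's reordering data):

```python
from sklearn.metrics import accuracy_score, f1_score

# Imbalanced binary reordering labels: 90 "monotone", 10 "swap".
y_true = [0] * 90 + [1] * 10
# A degenerate classifier that always predicts the majority class.
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))             # 0.90 -- looks strong
print(f1_score(y_true, y_pred, average="macro"))  # ~0.47 -- exposes the failure
```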

Open Access Article: Entropic Data Envelopment Analysis: A Diversification Approach for Portfolio Optimization
Entropy 2017, 19(9), 352; doi:10.3390/e19090352
Received: 16 May 2017 / Revised: 7 July 2017 / Accepted: 7 July 2017 / Published: 15 September 2017
PDF Full-text (453 KB) | HTML Full-text | XML Full-text
Abstract
Recently, different methods have been proposed for portfolio optimization and decision making on investment issues. This article presents a new method for portfolio formation based on Data Envelopment Analysis (DEA) and the entropy function. This new portfolio optimization method applies DEA in association with a model resulting from the insertion of the entropy function directly into the optimization procedure. First, the DEA model was applied to perform a pre-selection of the assets. Then, assets rated as efficient were submitted to the proposed model, resulting from the insertion of the entropy function into the simplified Sharpe portfolio optimization model. As a result, asset participation in the portfolio was improved. In the DEA model, several variables were evaluated and a low value of beta was achieved, guaranteeing greater robustness to the portfolio. The entropy function provided not only greater diversity but also more feasible asset allocation. Additionally, the proposed method obtained better portfolio performance, measured by the Sharpe ratio, than the comparative methods. Full article
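
A minimal sketch of the core idea, hypothetical rather than the paper's exact formulation: insert an entropy term into a Sharpe-style objective so the optimizer is rewarded for diversified weights (toy returns):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
R = rng.normal(0.001, 0.02, size=(250, 5))     # toy daily returns, 5 assets
mu, Sigma = R.mean(axis=0), np.cov(R.T)
lam = 0.01                                     # entropy weight (tuning choice)

def neg_objective(w):
    sharpe_like = (w @ mu) / np.sqrt(w @ Sigma @ w)
    diversity = -np.sum(w * np.log(w + 1e-12)) # Shannon entropy of the weights
    return -(sharpe_like + lam * diversity)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
res = minimize(neg_objective, np.full(5, 0.2),
               bounds=[(0, 1)] * 5, constraints=cons)
print(res.x.round(3))                          # entropy pulls weights toward diversity
```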

Open Access Article: Decomposition of the Inequality of Income Distribution by Income Types—Application for Romania
Entropy 2017, 19(9), 430; doi:10.3390/e19090430
Received: 31 July 2017 / Revised: 15 August 2017 / Accepted: 16 August 2017 / Published: 1 September 2017
PDF Full-text (1780 KB) | HTML Full-text | XML Full-text
Abstract
This paper identifies the salient factors that characterize the inequality of income distribution in Romania. Data analysis is rigorously carried out using sophisticated techniques borrowed from classical statistics (Theil). A decomposition of the inequalities measured by the Theil index is also performed. This study relies on an exhaustive dataset (11.1 million records for 2014) covering the total personal gross income of Romanian citizens. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
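
For reference, the Theil T index used here, T = (1/n) Σ (x_i/μ) ln(x_i/μ), decomposes exactly into within-group and between-group parts; a sketch with synthetic incomes (not the Romanian microdata):

```python
import numpy as np

def theil(x):
    """Theil T index: mean of (x/mu) * ln(x/mu)."""
    mu = x.mean()
    return np.mean((x / mu) * np.log(x / mu))

def theil_decomposition(groups):
    """T_total = sum_g s_g*T_g (within) + sum_g s_g*ln(mu_g/mu) (between),
    where s_g is group g's share of total income."""
    all_x = np.concatenate(groups)
    mu = all_x.mean()
    within = between = 0.0
    for g in groups:
        s = g.sum() / all_x.sum()
        within += s * theil(g)
        between += s * np.log(g.mean() / mu)
    return within, between

rng = np.random.default_rng(0)
wages = rng.lognormal(10.0, 0.5, 5000)      # synthetic income types
pensions = rng.lognormal(9.2, 0.3, 3000)
w, b = theil_decomposition([wages, pensions])
print(w + b, theil(np.concatenate([wages, pensions])))  # identical up to rounding
```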

Open Access Article: Stochastic Thermodynamics of Brownian Motion
Entropy 2017, 19(9), 434; doi:10.3390/e19090434
Received: 29 June 2017 / Revised: 2 August 2017 / Accepted: 14 August 2017 / Published: 23 August 2017
PDF Full-text (387 KB) | HTML Full-text | XML Full-text
Abstract
A stochastic thermodynamics of Brownian motion is set up in which state functions are expressed in terms of state variables through the same relations as in classical irreversible thermodynamics, with the difference that the state variables are now random fields accounting for the effect of fluctuations. Explicit expressions for the stochastic analog of entropy production and related quantities are derived for a dilute solution of Brownian particles in a fluid of light particles. Their statistical properties are analyzed and, in the light of the insights afforded, the thermodynamics of a single Brownian particle is revisited and the status of the second law of thermodynamics is discussed. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)

Open Access Article: Contextuality and Indistinguishability
Entropy 2017, 19(9), 435; doi:10.3390/e19090435
Received: 14 July 2017 / Revised: 8 August 2017 / Accepted: 16 August 2017 / Published: 23 August 2017
PDF Full-text (314 KB) | HTML Full-text | XML Full-text
Abstract
It is well known that in quantum mechanics we cannot always define consistently properties that are context independent. Many approaches exist to describe contextual properties, such as Contextuality by Default (CbD), sheaf theory, topos theory, and non-standard or signed probabilities. In this paper, we propose a treatment of contextual properties that is specific to quantum mechanics, as it relies on the relationship between contextuality and indistinguishability. In particular, we propose that if we assume the ontological thesis that quantum particles or properties can be indistinguishable yet different, no contradiction arising from a Kochen–Specker-type argument appears: when we repeat an experiment, we are in reality performing an experiment measuring a property that is indistinguishable from the first, but not the same. We will discuss how the consequences of this move may help us understand quantum contextuality. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Open Access Feature Paper Article: From Relativistic Mechanics towards Relativistic Statistical Mechanics
Entropy 2017, 19(9), 436; doi:10.3390/e19090436
Received: 22 May 2017 / Revised: 26 July 2017 / Accepted: 5 August 2017 / Published: 23 August 2017
PDF Full-text (397 KB) | HTML Full-text | XML Full-text
Abstract
Until now, kinetic theory and statistical mechanics of either free or interacting point particles were well defined only in non-relativistic inertial frames, in the absence of the long-range inertial forces present in accelerated frames. As shown in the introductory review, at the relativistic level only a relativistic kinetic theory of “world-lines” in inertial frames was known until recently, due to the problem of the elimination of the relative times. The recent Wigner-covariant formulation of relativistic classical and quantum mechanics of point particles, required by the theory of relativistic bound states, with the elimination of the problem of relative times and with a clarification of the notion of the relativistic center of mass, allows one to give a definition of the distribution function of the relativistic micro-canonical ensemble in terms of the generators of the Poincaré algebra of a system of interacting particles, both in inertial and in non-inertial rest frames. The non-relativistic limit allows one to obtain the ensemble in non-relativistic non-inertial frames. Assuming the existence of a relativistic Gibbs ensemble, a “Lorentz-scalar micro-canonical temperature” can also be defined. If the forces between the particles are short-range in inertial frames, the notion of equilibrium can be extended from them to the non-inertial rest frames, and it is possible to go to the thermodynamic limit and to define a relativistic canonical temperature and a relativistic canonical ensemble. Finally, assuming that a Lorentz-scalar one-particle distribution function can be defined with a statistical average, an indication is given of the difficulties in solving the open problem of deriving the relativistic Boltzmann equation with the same methodology used in the non-relativistic case, instead of postulating it as is usually done. There are also some comments on how it would be possible to have a hydrodynamical description of the relativistic kinetic theory of an isolated fluid in local equilibrium by means of an effective relativistic dissipative fluid described in the Wigner-covariant framework. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
Open Access Article: A Brief History of Long Memory: Hurst, Mandelbrot and the Road to ARFIMA, 1951–1980
Entropy 2017, 19(9), 437; doi:10.3390/e19090437
Received: 24 May 2017 / Revised: 3 August 2017 / Accepted: 18 August 2017 / Published: 23 August 2017
PDF Full-text (812 KB) | HTML Full-text | XML Full-text
Abstract
Long memory plays an important role in many fields by determining the behaviour and predictability of systems; for instance, climate, hydrology, finance, networks and DNA sequencing. In particular, it is important to test if a process is exhibiting long memory since that impacts the accuracy and confidence with which one may predict future events on the basis of a small amount of historical data. A major force in the development and study of long memory was the late Benoit B. Mandelbrot. Here, we discuss the original motivation of the development of long memory and Mandelbrot’s influence on this fascinating field. We will also elucidate the sometimes contrasting approaches to long memory in different scientific communities. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open Access Feature Paper Article: Rate-Distortion Theory for Clustering in the Perceptual Space
Entropy 2017, 19(9), 438; doi:10.3390/e19090438
Received: 7 July 2017 / Revised: 28 July 2017 / Accepted: 16 August 2017 / Published: 23 August 2017
PDF Full-text (2474 KB) | HTML Full-text | XML Full-text
Abstract
How to extract relevant information from large data sets has become a main challenge in data visualization. Clustering techniques that classify data into groups according to similarity metrics are a suitable strategy to tackle this problem. Generally, these techniques are applied in the data space, as an independent step prior to visualization. In this paper, we propose clustering in the perceptual space by maximizing the mutual information between the original data and the final visualization. With this purpose, we present a new information-theoretic framework based on rate-distortion theory that allows us to achieve maximally compressed data with minimal signal distortion. Using this framework, we propose a methodology to design a visualization process that minimizes the information loss during the clustering process. Three application examples of the proposed methodology in different visualization techniques, namely scatterplots, parallel coordinates, and summary trees, are presented. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)
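
For discrete sources, the rate-distortion trade-off at the heart of this framework can be computed with the classical Blahut–Arimoto iteration; a generic sketch (ours, not the paper's perceptual-space formulation):

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Trace one point of the rate-distortion curve for source p_x and
    distortion matrix d[x, x_hat]; beta sets the rate/distortion trade-off."""
    n = len(p_x)
    q = np.full(n, 1.0 / n)                  # marginal over reproductions
    for _ in range(n_iter):
        w = q[None, :] * np.exp(-beta * d)   # unnormalized p(x_hat | x)
        p_cond = w / w.sum(axis=1, keepdims=True)
        q = p_x @ p_cond                     # re-estimate the marginal
    rate = np.sum(p_x[:, None] * p_cond *
                  np.log2(p_cond / (q[None, :] + 1e-15) + 1e-15))
    distortion = np.sum(p_x[:, None] * p_cond * d)
    return rate, distortion

p_x = np.array([0.4, 0.3, 0.2, 0.1])         # toy 4-symbol source
d = 1.0 - np.eye(4)                          # Hamming distortion
print(blahut_arimoto(p_x, d, beta=3.0))      # (rate in bits, expected distortion)
```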

Open Access Article: Partial Discharge Feature Extraction Based on Ensemble Empirical Mode Decomposition and Sample Entropy
Entropy 2017, 19(9), 439; doi:10.3390/e19090439
Received: 8 July 2017 / Revised: 9 August 2017 / Accepted: 17 August 2017 / Published: 23 August 2017
Cited by 1 | PDF Full-text (4790 KB) | HTML Full-text | XML Full-text
Abstract
Partial Discharge (PD) pattern recognition plays an important part in electrical equipment fault diagnosis and maintenance. Feature extraction can greatly affect recognition results. Traditional PD feature extraction methods suffer from high-dimensional calculation and signal attenuation. In this study, a novel feature extraction method based on Ensemble Empirical Mode Decomposition (EEMD) and Sample Entropy (SamEn) is proposed. In order to reduce the influence of noise, a wavelet method is applied to PD de-noising. Noise Rejection Ratio (NRR) and Mean Square Error (MSE) are adopted as the de-noising indexes. With EEMD, the de-noised signal is decomposed into a finite number of Intrinsic Mode Functions (IMFs). The IMFs, which contain the dominant information of PD, are selected using a correlation coefficient method. From these, the SamEn values of the selected IMFs are extracted as PD features. Finally, a Relevance Vector Machine (RVM) is utilized for pattern recognition using the extracted features. Experimental results demonstrate that the proposed method combines excellent properties of both EEMD and SamEn. The recognition results are encouraging, with satisfactory accuracy. Full article
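
A minimal sketch of the EEMD-plus-SamEn feature pipeline, assuming the third-party PyEMD package (PyPI package EMD-signal) for the decomposition; the paper's wavelet de-noising and RVM stages are omitted:

```python
import numpy as np
from PyEMD import EEMD          # assumption: PyEMD provides the EEMD class

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy with tolerance r = r_frac * std(x)."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(dist <= r) - len(t)) / 2   # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)  # toy PD-like signal
imfs = EEMD().eemd(signal)
# Keep IMFs well correlated with the raw signal, then take their SamEn as features.
keep = [imf for imf in imfs if abs(np.corrcoef(imf, signal)[0, 1]) > 0.3]
print([round(sample_entropy(imf), 3) for imf in keep])
```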

Open Access Feature Paper Article: Exact Negative Solutions for Guyer–Krumhansl Type Equation and the Maximum Principle Violation
Entropy 2017, 19(9), 440; doi:10.3390/e19090440
Received: 22 July 2017 / Revised: 17 August 2017 / Accepted: 21 August 2017 / Published: 24 August 2017
PDF Full-text (7717 KB) | HTML Full-text | XML Full-text
Abstract
Heat propagation in the Guyer–Krumhansl model is studied. The exact analytical solutions for the one-dimensional Guyer–Krumhansl equation are obtained. The operational formalism is employed. Some examples of initial functions are considered, modeling various initial heat pulses and distributions. The effect of the ballistic heat transfer in an over-diffusive regime is elucidated. The behavior of the solutions in such a regime is explored. The maximum principle and its violation for the obtained solutions are discussed in the framework of heat conduction. Examples of negative solutions for the Guyer–Krumhansl equation are demonstrated. Full article
(This article belongs to the Special Issue Selected Papers from 14th Joint European Thermodynamics Conference)

Open Access Article: Interfering Relay Channels
Entropy 2017, 19(9), 441; doi:10.3390/e19090441
Received: 8 July 2017 / Revised: 9 August 2017 / Accepted: 21 August 2017 / Published: 23 August 2017
PDF Full-text (946 KB) | HTML Full-text | XML Full-text
Abstract
This paper introduces and studies a model in which two relay channels interfere with each other. Motivated by practical scenarios in heterogeneous wireless access networks, each relay is assumed to be connected to its intended receiver through a digital link with finite capacity. Inner and outer bounds for achievable rates are derived and shown to be tight for new discrete memoryless classes, which generalize and unify several known cases involving interference and relay channels. Capacity region and sum capacity for multiple Gaussian scenarios are also characterized to within a constant gap. The results show the optimality or near-optimality of the quantize-bin-and-forward coding scheme for practically relevant relay-interference networks, which brings important engineering insight into the design of wireless communications systems. Full article
(This article belongs to the Special Issue Network Information Theory)

Open Access Article: Implications of Coupling in Quantum Thermodynamic Machines
Entropy 2017, 19(9), 442; doi:10.3390/e19090442
Received: 5 July 2017 / Revised: 13 August 2017 / Accepted: 17 August 2017 / Published: 8 September 2017
Cited by 1 | PDF Full-text (490 KB) | HTML Full-text | XML Full-text
Abstract
We study coupled quantum systems as the working media of thermodynamic machines. Under a suitable phase-space transformation, the coupled systems can be expressed as a composition of independent subsystems. We find that for the coupled systems, the figures of merit, that is, the efficiency for the engine and the coefficient of performance for the refrigerator, are bounded (both from above and from below) by the corresponding figures of merit of the independent subsystems. We also show that the optimum work extractable from a coupled system is upper-bounded by the optimum work obtained from the uncoupled system, thereby showing that the quantum correlations do not help in optimal work extraction. Further, we study two explicit examples: coupled spin-1/2 systems and coupled quantum oscillators with analogous interactions. Interestingly, for a particular kind of interaction, the efficiency of the coupled oscillators outperforms that of the coupled spin-1/2 systems when they work as heat engines. However, for the same interaction, the coefficient of performance behaves in the reverse manner when the systems work as refrigerators. Thus, the same coupling can cause opposite effects in the figures of merit of the heat engine and the refrigerator. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)

Open Access Article: A Numerical Study on Entropy Generation in Two-Dimensional Rayleigh-Bénard Convection at Different Prandtl Number
Entropy 2017, 19(9), 443; doi:10.3390/e19090443
Received: 2 July 2017 / Revised: 16 August 2017 / Accepted: 21 August 2017 / Published: 30 August 2017
PDF Full-text (5583 KB) | HTML Full-text | XML Full-text
Abstract
Entropy generation in two-dimensional Rayleigh-Bénard convection at different Prandtl numbers (Pr) is investigated in the present paper using the lattice Boltzmann method. The major concern of the present paper is to explore the effects of Pr on the detailed local distributions of entropy generation due to frictional and heat transfer irreversibility, and on the overall entropy generation in the whole flow field. The results of this work indicate that the regions of significant viscous entropy generation rates (Su) gradually expand into the bulk of the cavity with increasing Pr, that thermal entropy generation rates (Sθ) and total entropy generation rates (S) concentrate mainly where the temperature gradient is steepest, and that the entropy generation in the flow is dominated by heat transfer irreversibility; for the same Rayleigh number, the amplitudes of Su, Sθ and S decrease with increasing Pr. It is found that the amplitudes of the horizontally averaged viscous, thermal and total entropy generation rates decrease with increasing Pr. The probability density functions of Su, Sθ and S also develop a much thinner tail with increasing Pr, while the tails for large entropy generation values fit a log-normal curve well. The distribution and the departure from log-normality become more robust with decreasing Pr. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
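
For orientation, the local entropy generation rates discussed here take the standard two-dimensional forms (notation assumed; the paper's non-dimensionalization may differ):

```latex
S_u = \frac{\mu}{T_0}\left\{2\left[\left(\frac{\partial u}{\partial x}\right)^{2}
      + \left(\frac{\partial v}{\partial y}\right)^{2}\right]
      + \left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)^{2}\right\},
\qquad
S_\theta = \frac{k}{T_0^{2}}\left[\left(\frac{\partial T}{\partial x}\right)^{2}
      + \left(\frac{\partial T}{\partial y}\right)^{2}\right],
\qquad
S = S_u + S_\theta.
```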

Open Access Article: Quantal Response Statistical Equilibrium in Economic Interactions: Theory and Estimation
Entropy 2017, 19(9), 444; doi:10.3390/e19090444
Received: 26 May 2017 / Revised: 10 August 2017 / Accepted: 23 August 2017 / Published: 25 August 2017
Cited by 1 | PDF Full-text (509 KB) | HTML Full-text | XML Full-text
Abstract
Social science addresses systems in which the individual actions of participants, interacting in complex, non-additive ways through institutional structures, determine social outcomes. In many cases, the institutions incorporate enough negative feedback to stabilize the resulting outcome as an equilibrium. We study a particular type of such equilibria, quantal response statistical equilibrium (QRSE), using the tools of constrained maximum entropy modeling developed by E. T. Jaynes. We use Adam Smith’s theory of profit rate maximization through competition of freely mobile capitals as an example. Even in many cases where key model variables are unobserved, it is possible to infer the parameters characterizing the equilibrium through Bayesian methods. We apply this method to the Smithian theory of competition using data where firms’ profit rates are observed but the entry and exit decisions that determine the distribution of profit rates are unobserved, and confirm Smith’s prediction of the emergence of an average rate of profit, along with a characterization of the equilibrium statistical fluctuations of individual rates of profit. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)

Open Access Article: A Chain, a Bath, a Sink, and a Wall
Entropy 2017, 19(9), 445; doi:10.3390/e19090445
Received: 22 June 2017 / Revised: 22 August 2017 / Accepted: 24 August 2017 / Published: 25 August 2017
PDF Full-text (4159 KB) | HTML Full-text | XML Full-text
Abstract
We numerically investigate out-of-equilibrium stationary processes emerging in a Discrete Nonlinear Schrödinger chain in contact with a heat reservoir (a bath) at temperature TL and a pure dissipator (a sink) acting on opposite edges. Long-time molecular-dynamics simulations are performed by evolving the equations of motion within a symplectic integration scheme. Mass and energy are steadily transported through the chain from the heat bath to the sink. We observe two different regimes. For small heat-bath temperatures TL and chemical potentials, temperature profiles across the chain display a non-monotonous shape, remain remarkably smooth and even enter the region of negative absolute temperatures. For larger temperatures TL, the transport of energy is strongly inhibited by the spontaneous emergence of discrete breathers, which act as a thermal wall. A strongly intermittent energy flux is also observed, due to the irregular birth and death of breathers. The corresponding statistics exhibit the typical signature of rare events of processes with large deviations. In particular, the breather lifetime is found to be ruled by a stretched-exponential law. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)

Open Access Article: Investigation on the Performances of Vuilleumier Cycle Heat Pump Adopting Mixture Refrigerants
Entropy 2017, 19(9), 446; doi:10.3390/e19090446
Received: 30 July 2017 / Revised: 22 August 2017 / Accepted: 24 August 2017 / Published: 26 August 2017
PDF Full-text (4060 KB) | HTML Full-text | XML Full-text
Abstract
The performance of thermodynamic cycles depends on the properties of the refrigerants. The performance of a Vuilleumier (VM) cycle heat pump adopting mixture refrigerants is analyzed in MATLAB using REFPROP. At given operating parameters and configuration, the performances of the VM cycle adopting a pure refrigerant, H2, He or N2, are compared. Thermodynamic properties of four mixture types, namely He-H2, He-N2, H2-N2 and He-H2-N2, are obtained for a total of 16 mixing ratios, and the coefficient of performance and the exergy efficiency of these four mixture types in the VM cycle heat pump are calculated. The results indicate that, for heat-source temperatures of 400–1000 K, helium is the best choice of pure refrigerant for the VM cycle heat pump. The He-H2 mixture is the best among all binary refrigerant mixtures; the recommended proportion is 1:2. For the ternary refrigerant mixture, the suggested proportion of helium, hydrogen and nitrogen is 2:2:1. For these recommended mixtures, system COPs (coefficients of performance) are close to 3.3 and exergy efficiencies are about 0.2, close to those of pure helium. Full article
(This article belongs to the Section Thermodynamics)

Open Access Feature Paper Article: Irregularity and Variability Analysis of Airflow Recordings to Facilitate the Diagnosis of Paediatric Sleep Apnoea-Hypopnoea Syndrome
Entropy 2017, 19(9), 447; doi:10.3390/e19090447
Received: 28 July 2017 / Revised: 22 August 2017 / Accepted: 24 August 2017 / Published: 26 August 2017
PDF Full-text (1906 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to evaluate the evolution of irregularity and variability of airflow (AF) signals as sleep apnoea-hypopnoea syndrome (SAHS) severity increases in children. We analyzed 501 AF recordings from children 6.2 ± 3.4 years old. The respiratory rate variability (RRV) signal, which is obtained from AF, was also estimated. The proposed methodology consisted of three phases: (i) extraction of spectral entropy (SE1), quadratic spectral entropy (SE2), cubic spectral entropy (SE3), and central tendency measure (CTM) to quantify irregularity and variability of AF and RRV; (ii) feature selection with forward stepwise logistic regression (FSLR), and (iii) classification of subjects using logistic regression (LR). SE1, SE2, SE3, and CTM were used to conduct exploratory analyses that showed increasing irregularity and decreasing variability in AF, and increasing variability in RRV as apnoea-hypopnoea index (AHI) was higher. These tendencies were clearer in children with a higher severity degree (from AHI ≥ 5 events/hour). Binary LR models achieved 60%, 76%, and 80% accuracy for the AHI cutoff points 1, 5, and 10 e/h, respectively. These results suggest that irregularity and variability measures are able to characterize paediatric SAHS in AF recordings. Hence, the use of these approaches could be helpful in automatically detecting SAHS in children. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
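
A sketch of the spectral entropy family used here, computed from a Welch power spectral density with SciPy; we take the quadratic and cubic variants to be Rényi entropies of orders 2 and 3 (our assumption), and the signal is illustrative rather than a real AF recording:

```python
import numpy as np
from scipy.signal import welch

def spectral_entropies(x, fs):
    """Normalized Shannon (SE1), quadratic (SE2), and cubic (SE3) spectral entropies."""
    f, psd = welch(x, fs=fs, nperseg=2 * fs)
    p = psd / psd.sum()                        # spectrum as a distribution
    p = p[p > 0]
    logN = np.log(len(p))
    se1 = -np.sum(p * np.log(p)) / logN        # Shannon
    se2 = -np.log(np.sum(p ** 2)) / logN       # Renyi order 2 (assumed form)
    se3 = -0.5 * np.log(np.sum(p ** 3)) / logN # Renyi order 3 (assumed form)
    return se1, se2, se3

fs = 100                                       # Hz, toy airflow-like signal
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.random.randn(t.size)
print(spectral_entropies(x, fs))
```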

Open Access Article: Assessing the Role of Strategic Choice on Organizational Performance by Jacquemin–Berry Entropy Index
Entropy 2017, 19(9), 448; doi:10.3390/e19090448
Received: 31 July 2017 / Revised: 24 August 2017 / Accepted: 24 August 2017 / Published: 27 August 2017
PDF Full-text (264 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the effects of strategic choice on organizational performance for Romanian family-owned small and medium-sized enterprises (SMEs). Using an adapted Jacquemin–Berry entropy index for both product and international diversification, together with a regression model, our study discusses family involvement as a moderating factor in organizational performance assessment. We discovered that there are multiple interactions between strategic choice and organizational performance, while family involvement fails to have a significant moderating role in these interactions. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Open Access Article: Harvesting Large Scale Entanglement in de Sitter Space with Multiple Detectors
Entropy 2017, 19(9), 449; doi:10.3390/e19090449
Received: 4 August 2017 / Revised: 23 August 2017 / Accepted: 23 August 2017 / Published: 28 August 2017
Cited by 1 | PDF Full-text (414 KB) | HTML Full-text | XML Full-text
Abstract
We consider entanglement harvesting in de Sitter space using a model of multiple qubit detectors. We obtain a formula for the entanglement negativity of this system. Applying the obtained formula, we find that it is possible to access the entanglement on super-horizon scales if a sufficiently large number of detectors are prepared. This result indicates that the effect of multipartite entanglement is crucial for the detection of large-scale entanglement in de Sitter space. Full article
(This article belongs to the Special Issue Entropy and Information in the Foundation of Quantum Physics)

Open Access Article: Sleep Information Gathering Protocol Using CoAP for Sleep Care
Entropy 2017, 19(9), 450; doi:10.3390/e19090450
Received: 9 June 2017 / Revised: 14 August 2017 / Accepted: 25 August 2017 / Published: 28 August 2017
PDF Full-text (4638 KB) | HTML Full-text | XML Full-text
Abstract
There has been growing interest in sleep management recently, and sleep care services using mobile or wearable devices are under development. However, devices with a single sensor have limitations in analyzing various sleep states. If Internet of Things (IoT) technology, which collects information from multiple sensors and analyzes it in an integrated manner, can be used, then various sleep states can be measured more accurately. Therefore, in this paper, we propose a Smart Model for Sleep Care that provides a service to measure and analyze the sleep state using various sensors. In this model, we designed and implemented a Sleep Information Gathering Protocol to transmit the information measured between physical sensors and sleep sensors. Experiments were conducted to compare the throughput and power consumption of this new protocol with those of the protocols used in the existing service: we achieved about twice the throughput and a 20% reduction in power consumption, which confirms the effectiveness of the proposed protocol. We judge that this protocol is meaningful as it can be applied to a Smart Model for Sleep Care that incorporates IoT technology and allows expanded sleep care if used together with services for treating sleep disorders. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)

Open Access Article: Invariant Components of Synergy, Redundancy, and Unique Information among Three Variables
Entropy 2017, 19(9), 451; doi:10.3390/e19090451
Received: 27 June 2017 / Revised: 21 August 2017 / Accepted: 25 August 2017 / Published: 28 August 2017
Cited by 3 | PDF Full-text (1509 KB) | HTML Full-text | XML Full-text
Abstract
In a system of three stochastic variables, the Partial Information Decomposition (PID) of Williams and Beer dissects the information that two variables (sources) carry about a third variable (target) into nonnegative information atoms that describe redundant, unique, and synergistic modes of dependencies among the variables. However, the classification of the three variables into two sources and one target limits the dependency modes that can be quantitatively resolved, and does not naturally suit all systems. Here, we extend the PID to describe trivariate modes of dependencies in full generality, without introducing additional decomposition axioms or making assumptions about the target/source nature of the variables. By comparing different PID lattices of the same system, we unveil a finer PID structure made of seven nonnegative information subatoms that are invariant to different target/source classifications and that are sufficient to describe the relationships among all PID lattices. This finer structure naturally splits redundant information into two nonnegative components: the source redundancy, which arises from the pairwise correlations between the source variables, and the non-source redundancy, which does not, and relates to the synergistic information the sources carry about the target. The invariant structure is also sufficient to construct the system’s entropy, hence it characterizes completely all the interdependencies in the system. Full article

Open Access Article: An Approach for Determining the Number of Clusters in a Model-Based Cluster Analysis
Entropy 2017, 19(9), 452; doi:10.3390/e19090452
Received: 12 July 2017 / Revised: 27 August 2017 / Accepted: 27 August 2017 / Published: 29 August 2017
PDF Full-text (1722 KB) | HTML Full-text | XML Full-text
Abstract
Many methods have been proposed in the literature to determine the number of clusters in clustering analysis, which has a broad range of applications in sciences such as physics, chemistry, biology, engineering, and economics. The aim of this paper is to determine the number of clusters of a dataset in model-based clustering by using an Analytic Hierarchy Process (AHP). In this study, the AHP model has been created using the information criteria Akaike’s Information Criterion (AIC), Approximate Weight of Evidence (AWE), Bayesian Information Criterion (BIC), Classification Likelihood Criterion (CLC), and Kullback Information Criterion (KIC). The performance of the proposed approach has been tested on common real and synthetic datasets. The proposed approach has produced accurate results, more accurate than those obtained from the individual information criteria. Full article
(This article belongs to the Section Information Theory)
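
Two of the criteria feeding the AHP model are available directly in scikit-learn for Gaussian-mixture (model-based) clustering; a sketch on synthetic data (AWE, CLC, and KIC would have to be derived from the fitted log-likelihood):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal((0, 0), 1, (100, 2)),   # three synthetic clusters
               rng.normal((5, 0), 1, (100, 2)),
               rng.normal((0, 6), 1, (100, 2))])

for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, round(gm.aic(X), 1), round(gm.bic(X), 1))
# Each criterion's minimizing k is one candidate answer; the paper's AHP step
# aggregates several such criteria into a single ranking.
```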

Open Access Article: Disproportionate Allocation of Indirect Costs at Individual-Farm Level Using Maximum Entropy
Entropy 2017, 19(9), 453; doi:10.3390/e19090453
Received: 22 June 2017 / Revised: 14 August 2017 / Accepted: 18 August 2017 / Published: 29 August 2017
PDF Full-text (1755 KB) | HTML Full-text | XML Full-text
Abstract
This paper addresses the allocation of indirect or joint costs among farm enterprises, and elaborates two maximum entropy models: the basic CoreModel and the InequalityModel, which additionally includes inequality restrictions in order to incorporate knowledge from production technology. Representing the indirect costing approach, both models address the individual-farm level and use standard costs from the farm-management literature as allocation bases. They provide a disproportionate allocation, with the distinctive feature that enterprises with large allocation bases face stronger adjustments than enterprises with small ones, bringing indirect costing closer to reality. Based on crop-farm observations from the Swiss Farm Accountancy Data Network (FADN), including up to 36 observations per enterprise, both models are compared with a proportional allocation as the reference base. The mean differences of the enterprises’ allocated labour inputs and machinery costs are in a range of up to ±35% and ±20% for the CoreModel and InequalityModel, respectively. We conclude that the choice of allocation method has a strong influence on the resulting indirect costs. Furthermore, the application of inequality restrictions is a precondition for making the merits of the maximum entropy principle accessible for the allocation of indirect costs. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
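
A toy sketch of the two-model idea, with illustrative numbers rather than the paper's specification: choose allocation shares as close as possible (in relative entropy) to the standard-cost reference shares, subject to the adding-up constraint (CoreModel analogue) and, optionally, inequality bounds encoding production knowledge (InequalityModel analogue):

```python
import numpy as np
from scipy.optimize import minimize

base = np.array([40.0, 35.0, 25.0])       # standard-cost allocation bases
q = base / base.sum()                     # proportional reference shares
total_cost = 100.0                        # indirect costs to distribute

def divergence(w):
    # Minimum cross-entropy to the reference shares; with no extra
    # restrictions the optimum reproduces the proportional allocation.
    return np.sum(w * np.log(w / q))

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
# InequalityModel analogue: production knowledge caps enterprise 1 at 35%.
bounds = [(0.01, 0.35), (0.01, 1.0), (0.01, 1.0)]
res = minimize(divergence, np.full(3, 1 / 3), bounds=bounds, constraints=cons)
print((res.x * total_cost).round(2))      # disproportionately adjusted costs
```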

Open Access Article: Robust Automatic Modulation Classification Technique for Fading Channels via Deep Neural Network
Entropy 2017, 19(9), 454; doi:10.3390/e19090454
Received: 31 July 2017 / Revised: 25 August 2017 / Accepted: 28 August 2017 / Published: 30 August 2017
PDF Full-text (537 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a deep neural network (DNN)-based automatic modulation classification (AMC) technique for digital communications. While conventional AMC techniques perform well for additive white Gaussian noise (AWGN) channels, classification accuracy degrades for fading channels, where the amplitude and phase of the channel gain change in time. The key contributions of this paper come in two phases. First, we analyze the effectiveness of a variety of statistical features for the AMC task in fading channels. We reveal that the features shown to be effective for fading channels are different from those known to be good for AWGN channels. Second, we introduce a new enhanced AMC technique based on the DNN method. We use the extensive and diverse set of statistical features found in our study for the DNN-based classifier. A fully connected feedforward network with four hidden layers is trained to classify the modulation class for several fading scenarios. Numerical evaluation shows that the proposed technique offers significant performance gains over existing AMC methods in fading channels. Full article
(This article belongs to the Section Information Theory)
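
A minimal sketch of the classifier side only: a fully connected feedforward network with four hidden layers, here via scikit-learn on random placeholder features standing in for the paper's statistical features:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 2000, 21                          # samples x feature dimension (toy)
X = rng.normal(size=(n, d))              # placeholder statistical features
y = rng.integers(0, 5, size=n)           # 5 modulation classes (placeholder)

clf = MLPClassifier(hidden_layer_sizes=(128, 128, 64, 32),  # four hidden layers
                    activation="relu", max_iter=300, random_state=0)
clf.fit(X[:1500], y[:1500])
print(clf.score(X[1500:], y[1500:]))     # near chance here: the labels are random
```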

Open Access Article: The Thermodynamical Arrow and the Historical Arrow; Are They Equivalent?
Entropy 2017, 19(9), 455; doi:10.3390/e19090455
Received: 29 June 2017 / Revised: 26 August 2017 / Accepted: 28 August 2017 / Published: 30 August 2017
PDF Full-text (275 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the relationship between the thermodynamic and historical arrows of time is studied. In the context of a simple combinatorial model, their definitions are made more precise and in particular strong versions (which are not compatible with time symmetric microscopic laws) and weak versions (which can be compatible with time symmetric microscopic laws) are given. This is part of a larger project that aims to explain the arrows as consequences of a common time symmetric principle in the set of all possible universes. However, even if we accept that both arrows may have the same origin, this does not imply that they are equivalent, and it is argued that there can be situations where one arrow may be well-defined but the other is not. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)

Open Access Article: Morphological Computation: Synergy of Body and Brain
Entropy 2017, 19(9), 456; doi:10.3390/e19090456
Received: 9 July 2017 / Revised: 18 August 2017 / Accepted: 25 August 2017 / Published: 31 August 2017
PDF Full-text (1792 KB) | HTML Full-text | XML Full-text
Abstract
There are numerous examples that show how the exploitation of the body’s physical properties can lift the burden of the brain. Examples include grasping, swimming, locomotion, and motion detection. The term Morphological Computation was originally coined to describe processes in the body that would otherwise have to be conducted by the brain. In this paper, we argue for a synergistic perspective, and by that we mean that Morphological Computation is a process which requires a close interaction of body and brain. Based on a model of the sensorimotor loop, we study a new measure of synergistic information and show that it is more reliable in cases in which there is no synergistic information, compared to previous results. Furthermore, we discuss an algorithm that allows the calculation of the measure in non-trivial (non-binary) systems. Full article

Open Access Article: Twisted Soft Photon Hair Implants on Black Holes
Entropy 2017, 19(9), 458; doi:10.3390/e19090458
Received: 20 July 2017 / Revised: 28 August 2017 / Accepted: 29 August 2017 / Published: 31 August 2017
PDF Full-text (390 KB) | HTML Full-text | XML Full-text
Abstract
Background: The Hawking–Perry–Strominger (HPS) work states a new controversial idea about the black hole (BH) information paradox, where BHs maximally entropize and encode information in their event horizon area, with no “hair” thought to reveal information outside but angular momentum, mass, and electric charge only, in a unique quantum gravity (QG) vacuum state. New conservation laws of gravitation and electromagnetism appear to generate different QG vacua, preserving more information in soft photon/graviton hair implants. We find that BH photon hair implants can encode the orbital angular momentum (OAM) and vorticity of the electromagnetic (EM) field. Methods: Numerical simulations are used to plot an EM field with OAM emitted by a set of dipolar currents together with the soft photon field they induce. The analytical results confirm that the soft photon hair implant carries OAM and vorticity. Results: a set of charges and currents generating real EM fields with precise values of OAM induces a “curly”, twisted, soft-hair implant on the BH, with vorticity and OAM increased by one unit with respect to the initial real field. Conclusions: Soft photon implants can be spatially shaped ad hoc, encoding structured and densely organized information on the event horizon. Full article
(This article belongs to the Section Astrophysics and Cosmology)

Open Access Article: The KCOD Model on (3,4,6,4) and (3^4,6) Archimedean Lattices
Entropy 2017, 19(9), 459; doi:10.3390/e19090459
Received: 23 July 2017 / Revised: 18 August 2017 / Accepted: 30 August 2017 / Published: 31 August 2017
PDF Full-text (703 KB) | HTML Full-text | XML Full-text
Abstract
Through Monte Carlo simulations, we studied the critical properties of kinetic models of continuous opinion dynamics on (3,4,6,4) and (3^4,6) Archimedean lattices. We obtain pc and the critical exponents' ratios from extensive Monte Carlo studies and finite-size scaling. The calculated values of the critical points and Binder cumulants are pc = 0.085(6) and O4* = 0.605(9) for the (3,4,6,4) lattice, and pc = 0.146(5) and O4* = 0.606(3) for the (3^4,6) lattice, while the exponent ratios β/ν, γ/ν and 1/ν are, respectively, 0.126(1), 1.50(7), and 0.90(5) for (3,4,6,4), and 0.125(3), 1.54(6), and 0.99(3) for (3^4,6). Our new results agree with the majority-vote model on previously studied regular lattices and disagree with the Ising model on the square lattice. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)

Open Access Feature Paper Article: Influence of Parameter Selection in Fixed Sample Entropy of Surface Diaphragm Electromyography for Estimating Respiratory Activity
Entropy 2017, 19(9), 460; doi:10.3390/e19090460
Received: 17 May 2017 / Revised: 1 August 2017 / Accepted: 29 August 2017 / Published: 1 September 2017
PDF Full-text (5429 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Fixed sample entropy (fSampEn) is a robust technique that allows the evaluation of inspiratory effort in diaphragm electromyography (EMGdi) signals, and has potential utility in sleep studies. To appropriately estimate respiratory effort, fSampEn requires the adjustment of several parameters. The aims of the present study were to evaluate the influence of the embedding dimension m, the tolerance value r, the size of the moving window, and the sampling frequency, and to establish recommendations for estimating the respiratory activity when using the fSampEn on surface EMGdi recorded for different inspiratory efforts. Values of m equal to 1 and r ranging from 0.1 to 0.64, and m equal to 2 and r ranging from 0.13 to 0.45, were found to be suitable for evaluating respiratory activity. fSampEn was less affected by window size than classical amplitude parameters. Finally, variations in sampling frequency could influence fSampEn results. In conclusion, the findings suggest the potential utility of fSampEn for estimating muscle respiratory effort in further sleep studies. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
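
A sketch of fSampEn as described above: sample entropy with a fixed tolerance r (not rescaled inside each window) evaluated over a moving window of a toy signal; the m and r values follow the recommended ranges, everything else is illustrative:

```python
import numpy as np

def fsampen(x, m=1, r=0.3):
    """Sample entropy with a FIXED tolerance r: unlike classical SampEn,
    r is not re-normalized by the local window's standard deviation."""
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(dist <= r) - len(t)) / 2   # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def moving_fsampen(signal, fs, win_s=1.0, step_s=0.1, m=1, r=0.3):
    win, step = int(win_s * fs), int(step_s * fs)
    return np.array([fsampen(signal[i:i + win], m, r)
                     for i in range(0, len(signal) - win, step)])

fs = 500                                  # Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
emg = (1 + np.sin(2 * np.pi * 0.25 * t)) * np.random.randn(t.size)  # toy EMGdi
print(moving_fsampen(emg, fs)[:5])        # entropy tracks the effort envelope
```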

Open Access Article: Physical Universality, State-Dependent Dynamical Laws and Open-Ended Novelty
Entropy 2017, 19(9), 461; doi:10.3390/e19090461
Received: 27 July 2017 / Revised: 22 August 2017 / Accepted: 23 August 2017 / Published: 1 September 2017
PDF Full-text (2491 KB) | HTML Full-text | XML Full-text
Abstract
A major conceptual step forward in understanding the logical architecture of living systems was advanced by von Neumann with his universal constructor, a physical device capable of self-reproduction. A necessary condition for a universal constructor to exist is that the laws of physics permit physical universality, such that any transformation (consistent with the laws of physics and availability of resources) can be caused to occur. While physical universality has been demonstrated in simple cellular automata models, so far these have not displayed a requisite feature of life—namely open-ended evolution—the explanation of which was also a prime motivator in von Neumann’s formulation of a universal constructor. Current examples of physical universality rely on reversible dynamical laws, whereas it is well-known that living processes are dissipative. Here we show that physical universality and open-ended dynamics should both be possible in irreversible dynamical systems if one entertains the possibility of state-dependent laws. We demonstrate with simple toy models how the accessibility of state space can yield open-ended trajectories, defined as trajectories that do not repeat within the expected Poincaré recurrence time and are not reproducible by an isolated system. We discuss implications for physical universality, or an approximation to it, as a foundational framework for developing a physics for life. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open AccessArticle Energy Harvesting for Physical Layer Security in Cooperative Networks Based on Compressed Sensing
Entropy 2017, 19(9), 462; doi:10.3390/e19090462
Received: 20 July 2017 / Revised: 25 August 2017 / Accepted: 25 August 2017 / Published: 1 September 2017
PDF Full-text (1739 KB) | HTML Full-text | XML Full-text
Abstract
Energy harvesting (EH) has attracted a lot of attention in cooperative communication networks studies for its capability of transferring energy from sources to relays. In this paper, we study the secrecy capacity of a cooperative compressed sensing amplify and forward (CCS-AF) wireless network in the presence of eavesdroppers based on an energy harvesting protocol. In this model, the source nodes send their information to the relays simultaneously, and then the relays perform EH from the received radio-frequency signals based on the power splitting-based relaying (PSR) protocol. The energy harvested by the relays will be used to amplify and forward the received information to the destination. The impacts of some key parameters, such as the power splitting ratio, energy conversion efficiency, relay location, and the number of relays, on the system secrecy capacity are analyzed through a group of experiments. Simulation results reveal that under certain conditions, the proposed EH relaying scheme can achieve higher secrecy capacity than traditional relaying strategies while consuming equal or even less power. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
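As a rough numerical illustration of the quantities involved, the sketch below simulates a single power-splitting relay under Rayleigh fading and evaluates the average secrecy capacity as [log2(1 + SNR_D) − log2(1 + SNR_E)]^+. The channel statistics, efficiency eta, and unit noise power are illustrative assumptions, not the paper's CCS-AF multi-relay model.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_secrecy_capacity(rho, p_src=1.0, eta=0.7, trials=20_000):
    """Toy PSR relaying: a fraction rho of the received RF power is
    harvested (with efficiency eta) and reused by the relay to forward
    the signal; the rest carries information."""
    h_sr = rng.rayleigh(1.0, trials)   # source -> relay amplitude gains
    h_rd = rng.rayleigh(1.0, trials)   # relay -> destination
    h_re = rng.rayleigh(0.5, trials)   # relay -> eavesdropper (weaker on average)
    p_relay = eta * rho * p_src * h_sr**2   # harvested relay transmit power
    snr_d = p_relay * h_rd**2               # unit noise power assumed
    snr_e = p_relay * h_re**2
    return np.mean(np.maximum(0.0, np.log2(1 + snr_d) - np.log2(1 + snr_e)))

for rho in (0.2, 0.5, 0.8):
    print(f"rho = {rho}: Cs ~ {avg_secrecy_capacity(rho):.3f} bit/s/Hz")
```

Sweeping rho in this way mirrors the paper's study of how the power splitting ratio trades harvested relay power against information-bearing signal power.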
Open AccessArticle Fractional Derivative Phenomenology of Percolative Phonon-Assisted Hopping in Two-Dimensional Disordered Systems
Entropy 2017, 19(9), 463; doi:10.3390/e19090463
Received: 22 June 2017 / Revised: 28 August 2017 / Accepted: 28 August 2017 / Published: 1 September 2017
PDF Full-text (2982 KB) | HTML Full-text | XML Full-text
Abstract
Anomalous advection-diffusion in two-dimensional semiconductor systems with coexisting energetic and structural disorder is described in the framework of a generalized model of multiple trapping on a comb-like structure. The basic equations of the model contain fractional-order derivatives. To validate the model, we compare analytical solutions with results of a Monte Carlo simulation of phonon-assisted tunneling in two-dimensional patterns of a porous nanoparticle agglomerate and a phase-separated bulk heterojunction. To elucidate the role of directed percolation, we calculate transient current curves of the time-of-flight experiment and the evolution of the mean squared displacement averaged over medium realizations. The variations of the anomalous advection-diffusion parameters as functions of the electric field intensity and the levels of energetic and structural disorder are presented. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Open AccessArticle Exergy Analysis of a Parallel-Plate Active Magnetic Regenerator with Nanofluids
Entropy 2017, 19(9), 464; doi:10.3390/e19090464
Received: 4 August 2017 / Revised: 28 August 2017 / Accepted: 31 August 2017 / Published: 2 September 2017
PDF Full-text (1408 KB) | HTML Full-text | XML Full-text
Abstract
This paper analyzes the energetic and exergy performance of an active magnetic regenerative refrigerator using water-based Al2O3 nanofluids as heat transfer fluids. A 1D numerical model has been extensively used to quantify the exergy performance of a system composed of a parallel-plate regenerator, magnetic source, pump, heat exchangers and control valves. The Al2O3-water nanofluids are modeled using the CoolProp library and appropriate correlations, accounting for temperature-dependent properties. The results are discussed in terms of the coefficient of performance, the exergy efficiency, and the cooling power as a function of the nanoparticle volume fraction and blowing time for a given geometrical configuration. It is shown that while the heat transfer between the fluid and solid is enhanced, increasing the nanoparticle concentration also produces smaller temperature gradients within the fluid and larger pressure drops. In all configurations, this leads to lower performance compared to the base case with pure liquid water. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Statistics of Binary Exchange of Energy or Money
Entropy 2017, 19(9), 465; doi:10.3390/e19090465
Received: 31 July 2017 / Revised: 24 August 2017 / Accepted: 30 August 2017 / Published: 2 September 2017
PDF Full-text (362 KB) | HTML Full-text | XML Full-text
Abstract
Why does the Maxwell-Boltzmann energy distribution for an ideal classical gas have an exponentially thin tail at high energies, while the Kaniadakis distribution for a relativistic gas has a power-law fat tail? We argue that a crucial role is played by the kinematics of the binary collisions. In the classical case the probability of an energy exchange far from the average (i.e., close to 0% or 100%) is quite large, while in the extreme relativistic case it is small. We compare these properties with the concept of “saving propensity”, employed in econophysics to define the fraction of their money that individuals put at stake in economic interactions. Full article
(This article belongs to the Special Issue Advances in Relativistic Statistical Mechanics)
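The econophysics connection can be made concrete with the standard kinetic exchange model with saving propensity (the Chakraborti–Chakrabarti rule), sketched below; lam is the fraction of money saved, so 1 − lam is the fraction put at stake in each binary interaction, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def kinetic_exchange(n_agents=500, n_steps=100_000, lam=0.0):
    """Random binary money exchanges with saving propensity lam.
    lam = 0 reproduces the exponential Boltzmann-Gibbs distribution;
    lam > 0 suppresses exchanges far from the average, thinning the tail."""
    m = np.ones(n_agents)
    for _ in range(n_steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        pool = (1 - lam) * (m[i] + m[j])   # money actually put at stake
        eps = rng.random()                 # random split of the pool
        m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1 - eps) * pool
    return m

wealth = kinetic_exchange(lam=0.5)
print(wealth.mean(), round(wealth.std(), 3))  # mean conserved at 1; spread shrinks with lam
```

Each exchange conserves the pair's total money, which is the direct analogue of energy conservation in the binary collisions discussed in the abstract.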
Open AccessArticle Novel Early EEG Measures Predicting Brain Recovery after Cardiac Arrest
Entropy 2017, 19(9), 466; doi:10.3390/e19090466
Received: 11 July 2017 / Revised: 28 August 2017 / Accepted: 30 August 2017 / Published: 2 September 2017
PDF Full-text (811 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose novel quantitative electroencephalogram (qEEG) measures by exploiting three critical and distinct phases (isoelectric, fast progression, and slow progression) of qEEG time evolution. Critical time points where the phase transitions occur are calculated. Most conventional measures have two major disadvantages. Firstly, to obtain a meaningful time evolution over the raw electroencephalogram (EEG), these measures require baseline EEG activity recorded before the subject’s injury. Secondly, conventional qEEG measures need at least 2∼3 h of EEG recording to predict meaningful long-term neurological outcomes. Unlike conventional qEEG measures, the two proposed measures require no baseline EEG information from before the injury and, furthermore, can be calculated with only 20∼30 min of EEG data after cardiopulmonary resuscitation (CPR). Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics II)
Open AccessFeature PaperArticle Channel Coding and Source Coding With Increased Partial Side Information
Entropy 2017, 19(9), 467; doi:10.3390/e19090467
Received: 30 June 2017 / Revised: 25 August 2017 / Accepted: 30 August 2017 / Published: 2 September 2017
PDF Full-text (1092 KB) | HTML Full-text | XML Full-text
Abstract
Let (S_{1,i}, S_{2,i}), i = 1, 2, …, be a memoryless, correlated partial side information sequence drawn i.i.d. from p(s_1, s_2). In this work, we study channel coding and source coding problems where the partial side information (S_1, S_2) is available at the encoder and the decoder, respectively, and, additionally, either the encoder’s or the decoder’s side information is increased by a limited-rate description of the other’s partial side information. We derive six special cases of channel coding and source coding problems and we characterize the capacity and the rate-distortion functions for the different cases. We present a duality between the channel capacity and the rate-distortion cases we study. In order to find numerical solutions for our channel capacity and rate-distortion problems, we use the Blahut-Arimoto algorithm and convex optimization tools. Finally, we provide several examples corresponding to the channel capacity and the rate-distortion cases we presented. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
Open AccessArticle Life on the Edge: Latching Dynamics in a Potts Neural Network
Entropy 2017, 19(9), 468; doi:10.3390/e19090468
Received: 2 August 2017 / Revised: 25 August 2017 / Accepted: 29 August 2017 / Published: 3 September 2017
PDF Full-text (14477 KB) | HTML Full-text | XML Full-text
Abstract
We study latching dynamics in the adaptive Potts model network, through numerical simulations with randomly and also weakly correlated patterns, and we focus on comparing its slowly and fast adapting regimes. A measure, Q, is used to quantify the quality of latching in the phase space spanned by the number of Potts states S, the number of connections per Potts unit C and the number of stored memory patterns p. We find narrow regions, or bands in phase space, where distinct pattern retrieval and duration of latching combine to yield the highest values of Q. The bands are confined by the storage capacity curve, for large p, and by the onset of finite latching, for low p. Inside the band, in the slowly adapting regime, we observe complex structured dynamics, with transitions at high crossover between correlated memory patterns; away from the band, latching transitions lose complexity in different ways: below, they are clear-cut but last so few steps that they span a transition matrix between states with few asymmetrical entries and limited entropy; above, they tend to become random, with large entropy and bi-directional transition frequencies, indistinguishable from noise. Extrapolating from the simulations, the band appears to scale almost quadratically in the p–S plane, and sublinearly in p–C. In the fast adapting regime, the band scales similarly, and it can be made even wider and more robust, but transitions between anti-correlated patterns dominate latching dynamics. This suggests that slow and fast adaptation have to be integrated in a scenario for viable latching in a cortical system. The results for the slowly adapting regime, obtained with randomly correlated patterns, remain valid also for the case with correlated patterns, with just a simple shift in phase space. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
Open AccessArticle Information Theoretical Study of Cross-Talk Mediated Signal Transduction in MAPK Pathways
Entropy 2017, 19(9), 469; doi:10.3390/e19090469
Received: 28 June 2017 / Revised: 30 August 2017 / Accepted: 1 September 2017 / Published: 5 September 2017
PDF Full-text (5481 KB) | HTML Full-text | XML Full-text
Abstract
Biochemical networks having similar functional pathways are often correlated due to cross-talk among the homologous proteins in the different networks. Using a stochastic framework, we address the functional significance of the cross-talk between two pathways. A theoretical analysis of generic MAPK pathways reveals that cross-talk is responsible for developing coordinated fluctuations between the pathways. The extent of correlation, evaluated in terms of an information-theoretic measure, provides directionality to the net information propagation. Stochastic time series suggest that the cross-talk generates synchronisation within a cell. In addition, the cross-interaction develops correlation between the two different phosphorylated kinases expressed in each cell in a population of genetically identical cells. Depending on the number of inputs and outputs, we identify the signal integration and signal bifurcation motifs that arise due to inter-pathway connectivity in the composite network. Analysis using partial information decomposition, an extended formalism of multivariate information calculation, also quantifies the net synergy in the information propagation through the branched pathways. Under this formalism, a signature of synergy or redundancy is observed due to the architectural difference in the branched pathways. Full article
Open AccessFeature PaperArticle Second-Law Analysis of Irreversible Losses in Gas Turbines
Entropy 2017, 19(9), 470; doi:10.3390/e19090470
Received: 26 July 2017 / Revised: 29 August 2017 / Accepted: 31 August 2017 / Published: 4 September 2017
PDF Full-text (3871 KB) | HTML Full-text | XML Full-text
Abstract
Several fundamental concepts with respect to the second-law analysis (SLA) of the turbulent flows in gas turbines are discussed in this study. Entropy and exergy equations for compressible/incompressible flows in a rotating/non-rotating frame have been derived. The exergy transformation efficiency of a gas turbine, as well as an exergy transformation number for a single process step, have been proposed. The exergy transformation number indicates the overall performance of a single process in a gas turbine, including the local irreversible losses in it and its contribution to the exergy obtained in the combustion chamber. A more general formula for calculating local entropy generation rate densities is suggested. A test case of a compressor cascade has been employed to demonstrate the application of the developed concepts. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
Open AccessArticle Lifespan Development of the Human Brain Revealed by Large-Scale Network Eigen-Entropy
Entropy 2017, 19(9), 471; doi:10.3390/e19090471
Received: 3 August 2017 / Revised: 25 August 2017 / Accepted: 1 September 2017 / Published: 4 September 2017
PDF Full-text (2150 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Imaging connectomics based on graph theory has become an effective and unique methodological framework for studying functional connectivity patterns of the developing and aging brain. Normal brain development is characterized by continuous and significant network evolution through infancy, childhood, and adolescence, following specific maturational patterns. Normal aging is related to the disruption of some resting-state brain networks, which is associated with cognitive decline. Designing an integral metric to track connectome evolution patterns across the lifespan, and thereby to understand the principles of network organization in the human brain, is a major challenge. In this study, we first defined a brain network eigen-entropy (NEE) based on the energy probability (EP) of each brain node. Next, we used the NEE to characterize the lifespan orderness trajectory of the whole-brain functional connectivity of 173 healthy individuals ranging in age from 7 to 85 years. The results revealed that during the lifespan, the whole-brain NEE exhibited a significant non-linear decrease and that the EP distribution shifted from concentration to wide dispersion, implying an enhancement of the orderness of the functional connectome with age. Furthermore, brain regions with significant EP changes from the flourishing (7–20 years) to the youth period (23–38 years) were mainly located in the right prefrontal cortex and basal ganglia, and were involved in emotion regulation and executive function in coordination with the action of the sensory system, implying that self-awareness and voluntary control performance changed significantly during neurodevelopment. However, the changes from the youth period to middle age (40–59 years) were located in the mesial temporal lobe and caudate, which are associated with long-term memory, implying that human memory begins to decline with age during this period. Overall, the findings suggest that the human connectome shifts from a relatively anatomically driven state to an orderly organized state with lower entropy. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
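As a rough sketch of the kind of quantity involved, the snippet below computes an eigen-entropy as the Shannon entropy of the normalized eigenvalue spectrum of a functional connectivity matrix. The paper's node-wise energy probability definition may differ, so treat this as a plausible stand-in rather than the authors' exact formula.

```python
import numpy as np

def eigen_entropy(conn):
    """Shannon entropy of the normalized eigenvalue spectrum of a
    (symmetrized) functional connectivity matrix; the normalized
    eigenvalues play the role of 'energy probabilities'."""
    w = np.linalg.eigvalsh((conn + conn.T) / 2)  # real spectrum of symmetric part
    w = np.abs(w)
    p = w / w.sum()                              # energy probability distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(2)
x = rng.standard_normal((200, 90))      # 200 time points, 90 brain regions
print(eigen_entropy(np.corrcoef(x.T)))  # entropy of a near-random connectome
```

A concentrated spectrum (a few dominant modes) gives low entropy, while a widely dispersed spectrum gives high entropy, which is the direction of the concentration-to-dispersion shift described in the abstract.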
Open AccessArticle Molecular Heat Engines: Quantum Coherence Effects
Entropy 2017, 19(9), 472; doi:10.3390/e19090472
Received: 29 July 2017 / Revised: 18 August 2017 / Accepted: 2 September 2017 / Published: 4 September 2017
PDF Full-text (973 KB) | HTML Full-text | XML Full-text
Abstract
Recent developments in nanoscale experimental techniques made it possible to utilize single molecule junctions as devices for electronics and energy transfer with quantum coherence playing an important role in their thermoelectric characteristics. Theoretical studies on the efficiency of nanoscale devices usually employ rate (Pauli) equations, which do not account for quantum coherence. Therefore, the question whether quantum coherence could improve the efficiency of a molecular device cannot be fully addressed within such considerations. Here, we employ a nonequilibrium Green function approach to study the effects of quantum coherence and dephasing on the thermoelectric performance of molecular heat engines. Within a generic bichromophoric donor-bridge-acceptor junction model, we show that quantum coherence may increase efficiency compared to quasi-classical (rate equation) predictions and that pure dephasing and dissipation destroy this effect. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
Open AccessArticle Structure of Multipartite Entanglement in Random Cluster-Like Photonic Systems
Entropy 2017, 19(9), 473; doi:10.3390/e19090473
Received: 24 July 2017 / Revised: 15 August 2017 / Accepted: 2 September 2017 / Published: 5 September 2017
PDF Full-text (586 KB) | HTML Full-text | XML Full-text
Abstract
Quantum networks are natural scenarios for the communication of information among distributed parties, and the arena of promising schemes for distributed quantum computation. Measurement-based quantum computing is a prominent example of how quantum networking, embodied by the generation of a special class of multipartite states called cluster states, can be used to achieve a powerful paradigm for quantum information processing. Here we analyze randomly generated cluster states in order to address the emergence of correlations as a function of the density of edges in a given underlying graph. We find that the most widespread multipartite entanglement does not correspond to the highest number of edges in the cluster. We extend the analysis to higher dimensions, finding similar results, which suggests the establishment of small-world structures in the entanglement sharing of randomised cluster states, which can be exploited in engineering more efficient quantum information carriers. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle The Partial Information Decomposition of Generative Neural Network Models
Entropy 2017, 19(9), 474; doi:10.3390/e19090474
Received: 8 July 2017 / Revised: 13 August 2017 / Accepted: 1 September 2017 / Published: 6 September 2017
PDF Full-text (297 KB) | HTML Full-text | XML Full-text
Abstract
In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two differentiated phases during the training process: a first short phase in which the neurons learn redundant information about the target, and a second phase in which neurons start specialising and each of them learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations. Full article
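To make the decomposition concrete, here is a minimal Williams–Beer-style PID for two discrete sources and one target, using the I_min redundancy measure; the paper tracks these quantities for groups of hidden neurons during training, and other PID measures could be substituted for I_min.

```python
import numpy as np
from itertools import product

def pid_two_sources(p):
    """PID of I(X1, X2; Y) into (redundant, unique1, unique2, synergy)
    for a joint probability table p[x1, x2, y], with Williams-Beer
    I_min as the redundancy measure."""
    p = np.asarray(p, float)
    p = p / p.sum()
    py = p.sum(axis=(0, 1))

    def specific_info(src):
        pxy = p.sum(axis=1 - src)            # joint of X_src and Y
        px = pxy.sum(axis=1)
        si = np.zeros(len(py))               # I(X_src; Y = y) for each y
        for y in range(len(py)):
            for x in range(pxy.shape[0]):
                if pxy[x, y] > 0:
                    si[y] += (pxy[x, y] / py[y]) * np.log2(
                        pxy[x, y] / (px[x] * py[y]))
        return si

    def mi(pxy):
        px = pxy.sum(1, keepdims=True)
        pyv = pxy.sum(0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ pyv)[nz])))

    red = float(np.sum(py * np.minimum(specific_info(0), specific_info(1))))
    i1, i2 = mi(p.sum(axis=1)), mi(p.sum(axis=0))
    i12 = mi(p.reshape(-1, p.shape[2]))
    return red, i1 - red, i2 - red, i12 - i1 - i2 + red

# XOR target: the single bit of target information is purely synergistic.
p = np.zeros((2, 2, 2))
for x1, x2 in product((0, 1), repeat=2):
    p[x1, x2, x1 ^ x2] = 0.25
print(pid_two_sources(p))   # ~ (0.0, 0.0, 0.0, 1.0) bits
```

The XOR check is the standard sanity test: neither input alone carries information about the target, so everything lands in the synergy term.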
Open AccessArticle Evaluation of Uncertainties in the Design Process of Complex Mechanical Systems
Entropy 2017, 19(9), 475; doi:10.3390/e19090475
Received: 20 July 2017 / Revised: 30 August 2017 / Accepted: 31 August 2017 / Published: 6 September 2017
Cited by 1 | PDF Full-text (235 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the problem of the evaluation of the uncertainties that originate in the complex design process of a new system is analyzed, paying particular attention to multibody mechanical systems. To this end, the Wiener-Shannon axioms are extended to non-probabilistic events and a theory of information for non-repetitive events is used as a measure of the reliability of data. The selection of the solutions consistent with the values of the design constraints is performed by analyzing the complexity of the relation matrix and using the idea of information in the metric space. By comparing the alternatives in terms of the amount of entropy resulting from the various distributions, this method is capable of finding the optimal solution that can be obtained with the available resources. In the paper, the algorithmic steps of the proposed method are discussed and an illustrative numerical example is provided. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open AccessArticle A Fuzzy-Based Adaptive Streaming Algorithm for Reducing Entropy Rate of DASH Bitrate Fluctuation to Improve Mobile Quality of Service
Entropy 2017, 19(9), 477; doi:10.3390/e19090477
Received: 27 July 2017 / Revised: 3 September 2017 / Accepted: 4 September 2017 / Published: 7 September 2017
PDF Full-text (1032 KB) | HTML Full-text | XML Full-text
Abstract
Dynamic adaptive streaming over Hypertext Transfer Protocol (HTTP) is an advanced video streaming technology designed to cope with uncertain network states. It has one drawback, however: network states change frequently and continuously, so the quality of a video stream fluctuates along with the network, which can reduce the quality of service. In recent years, many researchers have proposed adaptive streaming algorithms to reduce such fluctuations, but these algorithms only consider the current state of the network and may therefore estimate near-term video quality inaccurately. In this paper, we propose a method using fuzzy logic and a moving-average technique to reduce mobile video quality fluctuation in Dynamic Adaptive Streaming over HTTP (DASH). First, we calculate the moving averages of the bandwidth and buffer values over a given period. On the basis of the differences between the real and average values, we propose a fuzzy logic system to deduce the video quality representation for the next request. In addition, we use the entropy rate of the bandwidth measurement sequence to measure the predictability/stability of our method. The experimental results show that our proposed method reduces video quality fluctuation and improves bandwidth utilization by 40% compared to existing methods. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
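The decision step can be sketched as follows; the crisp threshold rules below stand in for the paper's fuzzy membership functions, and the bitrate ladder, smoothing depth, and thresholds are illustrative assumptions.

```python
import numpy as np

LADDER = [0.5, 1.0, 2.5, 5.0, 8.0]  # Mbps representations (illustrative)

def moving_avg(xs, k=5):
    return float(np.mean(xs[-k:]))

def next_quality(bw_hist, buf_hist, current):
    """Crude stand-in for the paper's fuzzy controller: compare the
    smoothed bandwidth/buffer values with the instantaneous ones and
    switch representations only when both signals agree, which damps
    bitrate oscillation (and hence the fluctuation entropy rate)."""
    bw_avg, buf_avg = moving_avg(bw_hist), moving_avg(buf_hist)
    bw_now, buf_now = bw_hist[-1], buf_hist[-1]
    idx = LADDER.index(current)
    if bw_now > 1.2 * bw_avg and buf_now >= buf_avg:        # clearly improving
        idx = min(idx + 1, len(LADDER) - 1)
    elif bw_now < 0.8 * bw_avg or buf_now < 0.5 * buf_avg:  # clearly degrading
        idx = max(idx - 1, 0)
    # otherwise: hold the current representation
    return LADDER[idx]

print(next_quality([3.0, 3.2, 2.9, 3.1, 4.5], [12, 12, 11, 12, 13], 2.5))
```

A full fuzzy inference step would replace the two hard thresholds with graded membership functions over the bandwidth and buffer differences, combined by fuzzy rules and defuzzified to a quality index.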
Open AccessArticle Coupled Effects of Turing and Neimark-Sacker Bifurcations on Vegetation Pattern Self-Organization in a Discrete Vegetation-Sand Model
Entropy 2017, 19(9), 478; doi:10.3390/e19090478
Received: 12 July 2017 / Revised: 11 August 2017 / Accepted: 4 September 2017 / Published: 8 September 2017
PDF Full-text (20915 KB) | HTML Full-text | XML Full-text
Abstract
Wind-induced vegetation patterns were proposed a long time ago, but a dynamic vegetation-sand relationship has been established only recently. In this research, we transformed the continuous vegetation-sand model into a discrete model. Fixed points and their stability were then studied. Bifurcation analyses were carried out around the fixed point, covering both Neimark-Sacker and Turing bifurcations, and the parameter space of the two bifurcations was simulated. Based on the bifurcation conditions, simulations were carried out around the bifurcation point. The simulation results showed that both Neimark-Sacker and Turing bifurcations can induce the self-organization of complex vegetation patterns, among which the labyrinth and striped patterns are the key results that the continuous model can present. Under the coupled effects of the two bifurcations, the simulations show that vegetation patterns can still self-organize, but the pattern type changes. The patterns can be of Turing type, Neimark-Sacker type, or some other special type, with the difference likely depending on the relative intensity of each bifurcation. The calculation of entropy may help in understanding the variation of pattern types. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open AccessArticle Robust Biometric Authentication from an Information Theoretic Perspective
Entropy 2017, 19(9), 480; doi:10.3390/e19090480
Received: 22 June 2017 / Revised: 28 August 2017 / Accepted: 7 September 2017 / Published: 9 September 2017
PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
Robust biometric authentication is studied from an information theoretic perspective. Compound sources are used to account for uncertainty in the knowledge of the source statistics and are further used to model certain attack classes. It is shown that authentication is robust against source uncertainty and a special class of attacks under the strong secrecy condition. A single-letter characterization of the privacy secrecy capacity region is derived for the generated and chosen secret key model. Furthermore, we study whether small variations of the compound source lead to large losses of the privacy secrecy capacity region. It is shown that biometric authentication is robust in the sense that its privacy secrecy capacity region depends continuously on the compound source. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle Entropy Analysis on Electro-Kinetically Modulated Peristaltic Propulsion of Magnetized Nanofluid Flow through a Microchannel
Entropy 2017, 19(9), 481; doi:10.3390/e19090481
Received: 4 August 2017 / Revised: 2 September 2017 / Accepted: 7 September 2017 / Published: 9 September 2017
Cited by 1 | PDF Full-text (5855 KB) | HTML Full-text | XML Full-text
Abstract
A theoretical and mathematical model is presented to determine the entropy generation of electro-kinetically modulated peristaltic propulsion of magnetized nanofluid flow through a microchannel with Joule heating. The mathematical modeling is based on the energy, momentum, continuity, and entropy equations in the Cartesian coordinate system. The effects of viscous dissipation, heat absorption, magnetic field, and electrokinetic body force are also taken into account. The electrical potential is modeled by means of the Poisson–Boltzmann equation, the ionic Nernst–Planck equation, and the Debye length approximation. A perturbation method has been applied to solve the coupled nonlinear partial differential equations, and a series solution is obtained up to second order. The physical behavior of all the governing parameters is discussed for the pressure rise, velocity profile, entropy profile, and temperature profile. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Open AccessArticle A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model
Entropy 2017, 19(9), 482; doi:10.3390/e19090482
Received: 20 July 2017 / Revised: 4 September 2017 / Accepted: 7 September 2017 / Published: 9 September 2017
PDF Full-text (653 KB) | HTML Full-text | XML Full-text
Abstract
In image and signal processing, the beta-divergence is well known as a similarity measure between two positive objects. However, it is unclear whether or not the distance-like structure of the beta-divergence is preserved if we extend its domain to the negative region. In this article, we study the domain of the beta-divergence and its connection to the Bregman divergence associated with a convex function of Legendre type. In fact, we show that the domain of the beta-divergence (and the corresponding Bregman divergence) includes the negative region under a mild condition on the beta value. Additionally, through the relation between the beta-divergence and the Bregman divergence, we can reformulate various variational models appearing in image processing problems into a unified framework, namely the Bregman variational model. This model has a strong advantage over the beta-divergence-based model due to the dual structure of the Bregman divergence. As an example, we demonstrate how to build a convex reformulated variational model with a negative domain for a classic nonconvex problem that usually appears in synthetic aperture radar image processing. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
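For reference, the standard beta-divergence family reads, elementwise, d_beta(x, y) = (x^beta + (beta - 1) y^beta - beta x y^(beta - 1)) / (beta (beta - 1)) for beta outside {0, 1}, with the Itakura-Saito and generalized Kullback-Leibler divergences as the beta = 0 and beta = 1 limits. A direct sketch (inputs here are positive; extending the domain to negative values is exactly the regime the paper investigates):

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Beta-divergence between arrays x and y, summed elementwise.
    beta = 2 gives half the squared Euclidean distance, beta = 1 the
    generalized KL divergence, beta = 0 the Itakura-Saito divergence."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 1:
        return float(np.sum(x * np.log(x / y) - x + y))
    if beta == 0:
        return float(np.sum(x / y - np.log(x / y) - 1))
    return float(np.sum((x**beta + (beta - 1) * y**beta
                         - beta * x * y**(beta - 1)) / (beta * (beta - 1))))

a, b = np.array([1.0, 2.0, 3.0]), np.array([1.5, 1.5, 3.5])
for beta in (0, 0.5, 1, 2):
    print(beta, round(beta_divergence(a, b, beta), 4))
```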
Open AccessArticle On the Fragility of Bulk Metallic Glass Forming Liquids
Entropy 2017, 19(9), 483; doi:10.3390/e19090483
Received: 25 August 2017 / Revised: 5 September 2017 / Accepted: 7 September 2017 / Published: 10 September 2017
Cited by 1 | PDF Full-text (10408 KB) | HTML Full-text | XML Full-text
Abstract
In contrast to pure metals and most non-glass-forming alloys, metallic glass-formers are moderately strong liquids in terms of fragility. The notion of fragility of an undercooled liquid reflects the sensitivity of the liquid’s viscosity to temperature changes and describes the degree of departure of the liquid kinetics from the Arrhenius equation. In general, the fragility of metallic glass-formers increases with the complexity of the alloy, with differences between the alloy families, e.g., Pd-based alloys being more fragile than Zr-based alloys, which are more fragile than Mg-based alloys. Here, experimental data are assessed for 15 bulk metallic glass-formers, including the novel and technologically important systems based on Ni-Cr-Nb-P-B, Fe-Mo-Ni-Cr-P-C-B, and Au-Ag-Pd-Cu-Si. The data for the equilibrium viscosity are analyzed using the Vogel–Fulcher–Tammann (VFT) equation, the Mauro–Yue–Ellison–Gupta–Allan (MYEGA) equation, and the Adam–Gibbs approach based on specific heat capacity data. Overall, a larger excess specific heat is experimentally observed for the more fragile supercooled liquids than for the stronger ones. Moreover, the stronger the glass, the higher the free enthalpy barrier to cooperative rearrangements, suggesting the same microscopic origin and rigorously connecting the kinetic and thermodynamic aspects of fragility. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
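A VFT fit and the kinetic fragility it implies can be sketched in a few lines. The viscosity data below are illustrative numbers, not the paper's measurements; Tg is defined kinetically by eta(Tg) = 10^12 Pa·s, and the fragility index m is the slope of log10(eta) versus Tg/T at Tg.

```python
import numpy as np
from scipy.optimize import curve_fit

def vft_log10(T, log_eta0, D, T0):
    """VFT in log10 form: log10(eta) = log10(eta0) + D*T0 / (ln10*(T - T0)),
    where D is the (strength) fragility parameter."""
    return log_eta0 + D * T0 / (np.log(10) * (T - T0))

# Illustrative (not measured) equilibrium-viscosity data: T in K, eta in Pa*s
T = np.array([700.0, 750.0, 800.0, 850.0, 900.0, 1000.0])
log_eta = np.array([7.6, 5.9, 4.7, 3.7, 3.0, 1.8])

(log_eta0, D, T0), _ = curve_fit(vft_log10, T, log_eta, p0=(-4.0, 15.0, 350.0))

# Kinetic glass transition: eta(Tg) = 10^12 Pa*s
Tg = T0 + D * T0 / (np.log(10) * (12 - log_eta0))
# Fragility index m = d log10(eta) / d(Tg/T) evaluated at T = Tg
m = D * T0 * Tg / (np.log(10) * (Tg - T0) ** 2)
print(f"D* = {D:.1f}, Tg = {Tg:.0f} K, fragility m = {m:.0f}")
```

Larger D* (and smaller m) corresponds to a stronger liquid, which is the axis along which the abstract orders Pd-, Zr-, and Mg-based alloy families.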
Open AccessArticle Simultaneous Wireless Information and Power Transfer for MIMO Interference Channel Networks Based on Interference Alignment
Entropy 2017, 19(9), 484; doi:10.3390/e19090484
Received: 1 July 2017 / Revised: 2 September 2017 / Accepted: 8 September 2017 / Published: 13 September 2017
PDF Full-text (911 KB) | HTML Full-text | XML Full-text
Abstract
This paper considers power splitting (PS)-based simultaneous wireless information and power transfer (SWIPT) for multiple-input multiple-output (MIMO) interference channel networks where multiple transceiver pairs share the same frequency spectrum. As the PS model is adopted, an individual receiver splits the received signal into two parts for information decoding (ID) and energy harvesting (EH), respectively. Aiming to minimize the total transmit power, the transmit precoders, receive filters and PS ratios are jointly designed under predefined signal-to-interference-plus-noise ratio (SINR) and EH constraints. The formulated joint transceiver design and power splitting problem is non-convex and thus difficult to solve directly. In order to effectively obtain its solution, the feasibility conditions of the formulated non-convex problem are first analyzed. Based on the analysis, an iterative algorithm is proposed that alternately optimizes the transmitters together with the power splitting factors, and the receivers, based on semidefinite programming (SDP) relaxation. Moreover, considering the prohibitive computational cost of the SDP for practical applications, a low-complexity suboptimal scheme is proposed that separately designs interference-suppressing transceivers based on interference alignment (IA) and optimizes the transmit power allocation together with the splitting factors. The transmit power allocation and receive power splitting problem is then recast as a convex optimization problem and solved efficiently. To further reduce the computational complexity, a scheme is proposed that calculates the transmit power allocation and receive PS ratios in closed form. Simulation results show the effectiveness of the proposed schemes in achieving SWIPT for MIMO interference channel (IC) networks. Full article
(This article belongs to the Special Issue Network Information Theory)
Open AccessArticle Optomechanical Analogy for Toy Cosmology with Quantized Scale Factor
Entropy 2017, 19(9), 485; doi:10.3390/e19090485
Received: 30 June 2017 / Revised: 23 August 2017 / Accepted: 8 September 2017 / Published: 12 September 2017
PDF Full-text (397 KB) | HTML Full-text | XML Full-text
Abstract
The simplest cosmology—the Friedmann–Robertson–Walker–Lemaître (FRW) model—describes a spatially homogeneous and isotropic universe where the scale factor is the only dynamical parameter. Here we consider how quantized electromagnetic fields become entangled with the scale factor in a toy version of the FRW model. A system consisting of a photon, source, and detector is described in such a universe, and we find that the detection of a redshifted photon by the detector system constrains possible scale factor superpositions. Thus, measuring the redshift of the photon is equivalent to a weak measurement of the underlying cosmology. We also consider a potential optomechanical analogy system that would enable experimental exploration of these concepts. The analogy focuses on the effects of photon redshift measurement as a quantum back-action on metric variables, where the position of a movable mirror plays the role of the scale factor. By working in the rotating frame, an effective Hubble equation can be simulated with a simple free moving mirror. Full article
(This article belongs to the collection Quantum Information)
Open AccessArticle Use of the Principles of Maximum Entropy and Maximum Relative Entropy for the Determination of Uncertain Parameter Distributions in Engineering Applications
Entropy 2017, 19(9), 486; doi:10.3390/e19090486
Received: 31 July 2017 / Revised: 8 September 2017 / Accepted: 9 September 2017 / Published: 12 September 2017
PDF Full-text (8487 KB) | HTML Full-text | XML Full-text
Abstract
The determination of the probability distribution function (PDF) of uncertain input and model parameters in engineering application codes is an issue of importance for uncertainty quantification methods. One of the approaches that can be used for the PDF determination of input and model parameters is the application of methods based on the maximum entropy principle (MEP) and the maximum relative entropy principle (MREP). These methods determine the PDF that maximizes the information entropy when only partial information about the parameter distribution is known, such as some moments of the distribution and its support. In addition, this paper shows the application of the MREP to update the PDF when the parameter must fulfill some technical specifications (TS) imposed by the regulations. Three computer programs have been developed: GEDIPA, which provides the parameter PDF using empirical distribution function (EDF) methods; UNTHERCO, which performs the Monte Carlo sampling on the parameter distribution; and DCP, which updates the PDF considering the TS and the MREP. Finally, the paper presents several applications and examples for the determination of the PDF applying the MEP and the MREP, and the influence of several factors on the PDF. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
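The core MEP computation, finding the density that maximizes entropy subject to known moments, reduces to solving for Lagrange multipliers. A minimal sketch for a single mean constraint on a bounded support, with illustrative values:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

def maxent_pdf_mean(mu, a=0.0, b=1.0):
    """Maximum-entropy PDF on [a, b] with prescribed mean mu:
    p(x) = exp(lam*x)/Z, with the multiplier lam chosen so the mean
    constraint holds (lam = 0 recovers the uniform density)."""
    def mean_given(lam):
        z = quad(lambda x: np.exp(lam * x), a, b)[0]
        return quad(lambda x: x * np.exp(lam * x) / z, a, b)[0]
    lam = brentq(lambda l: mean_given(l) - mu, -50, 50)  # root of moment equation
    z = quad(lambda x: np.exp(lam * x), a, b)[0]
    return lambda x: np.exp(lam * x) / z

p = maxent_pdf_mean(0.3)
print(quad(lambda x: x * p(x), 0, 1)[0])  # ~0.3, the imposed first moment
```

Additional moment constraints add further multipliers (p(x) proportional to exp(lam1*x + lam2*x^2 + ...)), turning the scalar root-finding step into a small system of nonlinear equations.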
Open AccessFeature PaperArticle Automated Diagnosis of Myocardial Infarction ECG Signals Using Sample Entropy in Flexible Analytic Wavelet Transform Framework
Entropy 2017, 19(9), 488; doi:10.3390/e19090488
Received: 28 July 2017 / Revised: 8 September 2017 / Accepted: 8 September 2017 / Published: 13 September 2017
PDF Full-text (834 KB) | HTML Full-text | XML Full-text
Abstract
Myocardial infarction (MI) is a silent condition that irreversibly damages the heart muscles. It expands rapidly and, if not treated in time, continues to damage the heart muscles. An electrocardiogram (ECG) is generally used by clinicians to diagnose MI patients. Manual identification of the changes introduced by MI is a time-consuming and tedious task, and there is also a possibility of misinterpreting the changes in the ECG. Therefore, a method for the automatic diagnosis of MI from ECG beats using the flexible analytic wavelet transform (FAWT) is proposed in this work. First, the ECG signals are segmented into beats. Then, FAWT is applied to each ECG beat, decomposing it into subband signals. Sample entropy (SEnt) is computed from these subband signals and fed to random forest (RF), J48 decision tree, back propagation neural network (BPNN), and least-squares support vector machine (LS-SVM) classifiers to choose the highest performing one. We achieved the highest classification accuracy, 99.31%, using the LS-SVM classifier. We also incorporated the Wilcoxon and Bhattacharya ranking methods and observed no improvement in performance. The proposed automated method can be installed in the intensive care units (ICUs) of hospitals to aid clinicians in confirming their diagnosis. Full article
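The feature pipeline (decompose each beat, take the sample entropy of each subband, feed a classifier) can be sketched as follows. FAWT has no off-the-shelf Python implementation, so a plain dyadic wavelet transform from PyWavelets stands in for it here; a random forest (one of the paper's classifiers) replaces the LS-SVM, and the beats and labels are random stand-ins.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def sampen(x, m=1, r=0.2):
    """Plain sample entropy (Chebyshev template matching, no self-matches);
    degenerate windows with no matches are mapped to 0 for the classifier."""
    n = len(x)
    tm = np.array([x[i:i + m] for i in range(n - m)])
    tm1 = np.array([x[i:i + m + 1] for i in range(n - m)])
    count = lambda t: sum(int(np.sum(np.max(np.abs(t - v), axis=1) <= r)) - 1
                          for v in t)
    a, b = count(tm1), count(tm)
    return -np.log(a / b) if a > 0 and b > 0 else 0.0

def beat_features(beat, wavelet='db4', level=3):
    """SEnt of each subband of one ECG beat; pywt.wavedec stands in
    for the paper's FAWT decomposition."""
    return [sampen(c, r=0.25 * np.std(c)) for c in
            pywt.wavedec(beat, wavelet, level=level)]

rng = np.random.default_rng(6)
beats = rng.standard_normal((20, 512))        # stand-in ECG beats
labels = rng.integers(0, 2, size=20)          # stand-in MI/normal labels
clf = RandomForestClassifier(n_estimators=50).fit(
    [beat_features(b) for b in beats], labels)
```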
Open AccessArticle Use of Mutual Information and Transfer Entropy to Assess Interaction between Parasympathetic and Sympathetic Activities of Nervous System from HRV
Entropy 2017, 19(9), 489; doi:10.3390/e19090489
Received: 17 July 2017 / Revised: 9 September 2017 / Accepted: 11 September 2017 / Published: 13 September 2017
PDF Full-text (2347 KB) | HTML Full-text | XML Full-text
Abstract
Obstructive sleep apnea (OSA) is a common sleep disorder that is often associated with reduced heart rate variability (HRV), indicating autonomic dysfunction. HRV is mainly composed of high frequency components attributed to parasympathetic activity and low frequency components attributed to sympathetic activity. Although time-domain and frequency-domain features of HRV have been used in sleep studies, the complex interaction between the nonlinear independent frequency components and OSA is less well known. This study included 30 electrocardiogram recordings (20 from OSA patients and 10 from healthy subjects) with apnea or normal labels in 1-min segments. All segments were divided into three groups: the N-N group (normal segments of normal subjects), the P-N group (normal segments of OSA subjects) and the P-OSA group (apnea segments of OSA subjects). Frequency domain indices and interaction indices were extracted from the segmented RR intervals. The frequency domain indices included nuLF, nuHF, and the LF/HF ratio; the interaction indices included mutual information (MI) and transfer entropy (TE (H→L) and TE (L→H)). Our results demonstrated that the LF/HF ratio was significantly higher in the P-OSA group than in the N-N and P-N groups. MI was significantly larger in the P-OSA group than in the P-N group. TE (H→L) and TE (L→H) showed a significant decrease in the P-OSA group compared to the P-N and N-N groups. TE (H→L) was significantly negatively correlated with the LF/HF ratio in the P-N group (r = −0.789, p = 0.000) and the P-OSA group (r = −0.661, p = 0.002). Our results indicate that MI and TE are powerful tools for evaluating sympathovagal modulation in OSA. Moreover, sympathovagal modulation is more imbalanced in OSA patients during apnea events than during event-free periods. Full article
(This article belongs to the Special Issue Transfer Entropy II)
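A plug-in (histogram) estimate of the mutual information between two HRV-derived series is straightforward to sketch; the bin count and the synthetic LF/HF surrogates below are illustrative assumptions, not the paper's estimator settings.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of I(X;Y) in nats between two series,
    e.g. the LF and HF power time courses extracted from an RR series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
lf = rng.standard_normal(3000)
hf = 0.6 * lf + 0.8 * rng.standard_normal(3000)  # coupled sympathetic/vagal proxies
print(mutual_information(lf, hf))                # > 0, reflecting the coupling
```

Transfer entropy adds a time-lagged conditioning to the same counting machinery, which is what gives it the directionality (H→L versus L→H) exploited in the study.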
Open AccessArticle Semantic Security with Practical Transmission Schemes over Fading Wiretap Channels
Entropy 2017, 19(9), 491; doi:10.3390/e19090491
Received: 17 July 2017 / Revised: 8 September 2017 / Accepted: 9 September 2017 / Published: 13 September 2017
PDF Full-text (560 KB) | HTML Full-text | XML Full-text
Abstract
We propose and assess an on–off protocol for communication over wireless wiretap channels with security at the physical layer. By taking advantage of suitable cryptographic primitives, the protocol we propose allows two legitimate parties to exchange confidential messages with some chosen level of semantic security against passive eavesdroppers, and without needing either pre-shared secret keys or public keys. The proposed method leverages the noisy and fading nature of the channel and exploits coding and all-or-nothing transforms to achieve the desired level of semantic security. We show that the use of fake packets in place of skipped transmissions during low channel quality periods yields significant advantages in terms of time needed to complete transmission of a secret message. Numerical examples are provided considering coding and modulation schemes included in the WiMax standard, thus showing that the proposed approach is feasible even with existing practical devices. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Open AccessArticle Eigentimes and Very Slow Processes
Entropy 2017, 19(9), 492; doi:10.3390/e19090492
Received: 1 July 2017 / Revised: 28 August 2017 / Accepted: 8 September 2017 / Published: 14 September 2017
PDF Full-text (1712 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the importance of the time and length scales at play in our descriptions of Nature. What can we observe at the atomic scale, at the laboratory (human) scale, and at the galactic scale? Which variables make sense? For every scale we wish to understand we need a set of variables which are linked through closed equations, i.e., everything can meaningfully be described in terms of those variables without the need to investigate other scales. Examples from physics, chemistry, and evolution are presented. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open AccessFeature PaperArticle On Generalized Stam Inequalities and Fisher–Rényi Complexity Measures
Entropy 2017, 19(9), 493; doi:10.3390/e19090493
Received: 21 August 2017 / Revised: 8 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
PDF Full-text (489 KB) | HTML Full-text | XML Full-text
Abstract
Information-theoretic inequalities play a fundamental role in numerous scientific and technological areas (e.g., estimation and communication theories, signal and information processing, quantum physics, …) as they generally express the impossibility of having a complete description of a system via a finite number of information measures. In particular, they gave rise to the design of various quantifiers (statistical complexity measures) of the internal complexity of a (quantum) system. In this paper, we introduce a three-parametric Fisher–Rényi complexity, named the (p, β, λ)-Fisher–Rényi complexity, based on both a two-parametric extension of the Fisher information and the Rényi entropies of a probability density function ρ characteristic of the system. This complexity measure quantifies the combined balance of the spreading and the gradient contents of ρ, and has the three main properties of a statistical complexity: invariance under translation and scaling transformations, and a universal lower bound. The latter is proved by generalizing the Stam inequality, which lower-bounds the product of the Shannon entropy power and the Fisher information of a probability density function. An extension of this inequality, a particular case of the general one in which the three parameters are linked, was already proposed by Bercher and Lutwak, allowing one to determine the sharp lower bound and the associated probability density with minimal complexity. Using the notion of differential-escort deformation, we are able to determine the sharp bound of the complexity measure even when the three parameters are decoupled (in a certain range). We also determine the distribution that saturates the inequality: the (p, β, λ)-Gaussian distribution, which involves an inverse incomplete beta function. Finally, the complexity measure is calculated for various quantum-mechanical states of the harmonic and hydrogenic systems, which are the two main prototypes of physical systems subject to a central potential. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Open AccessFeature PaperArticle Quantifying Information Modification in Developing Neural Networks via Partial Information Decomposition
Entropy 2017, 19(9), 494; doi:10.3390/e19090494
Received: 13 July 2017 / Revised: 12 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
PDF Full-text (4808 KB) | HTML Full-text | XML Full-text
Abstract
Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, formal information theoretic definitions until very recently were only available for information
[...] Read more.
Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, formal information-theoretic definitions were, until very recently, only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between the inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. We here review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among the neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise PID and the analysis of information modification hold for the understanding of neural systems. Full article
Open AccessArticle On the Capacity and the Optimal Sum-Rate of a Class of Dual-Band Interference Channels
Entropy 2017, 19(9), 495; doi:10.3390/e19090495
Received: 29 June 2017 / Revised: 8 September 2017 / Accepted: 11 September 2017 / Published: 14 September 2017
PDF Full-text (632 KB) | HTML Full-text | XML Full-text
Abstract
We study a class of two-transmitter two-receiver dual-band Gaussian interference channels (GIC) which operates over the conventional microwave and the unconventional millimeter-wave (mm-wave) bands. This study is motivated by future 5G networks, where additional spectrum in the mm-wave band complements transmission in the incumbent microwave band. The mm-wave band has a key modeling feature: due to severe path loss and relatively small wavelength, a transmitter must employ highly directional antenna arrays to reach its desired receiver. This feature makes the mm-wave channels highly directional, so a transmitter can use them to reach either its designated receiver or the other receiver. We consider two classes of such channels, in which the underlying GIC in the microwave band has weak or strong interference, and obtain sufficient channel conditions under which the capacity is characterized. Moreover, we assess the impact of the additional mm-wave band spectrum on performance by characterizing the transmit power allocation for the direct and cross channels that maximizes the sum-rate of this dual-band channel. The solution reveals the conditions under which different power allocations, such as allocating the power budget only to the direct or only to the cross channels, or sharing it among them, become optimal. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
Figures
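A toy version of the power allocation question posed above, under strong simplifying assumptions: the mm-wave direct and cross links are treated as non-interfering parallel AWGN channels, and the gains and power budget are hypothetical.

```python
import numpy as np

def awgn_rate(power, gain):
    # Capacity of one parallel AWGN subchannel (bits per channel use)
    return 0.5 * np.log2(1.0 + gain * power)

# Sweep the fraction alpha of the mm-wave budget P spent on the direct link;
# the remainder goes to the cross link.
g_direct, g_cross, P = 2.0, 0.7, 4.0
alphas = np.linspace(0.0, 1.0, 101)
sum_rates = [awgn_rate(a * P, g_direct) + awgn_rate((1 - a) * P, g_cross)
             for a in alphas]
best = alphas[int(np.argmax(sum_rates))]
print(f"sum-rate is maximized at alpha = {best:.2f}")
```

Depending on the gains, the optimum can sit at alpha = 1 (direct only), alpha = 0 (cross only), or strictly in between, which mirrors the regimes identified in the paper.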

Open AccessArticle Log Likelihood Spectral Distance, Entropy Rate Power, and Mutual Information with Applications to Speech Coding
Entropy 2017, 19(9), 496; doi:10.3390/e19090496
Received: 22 August 2017 / Revised: 9 September 2017 / Accepted: 10 September 2017 / Published: 14 September 2017
PDF Full-text (1194 KB) | HTML Full-text | XML Full-text
Abstract
We provide a new derivation of the log likelihood spectral distance measure for signal processing using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential
[...] Read more.
We provide a new derivation of the log likelihood spectral distance measure for signal processing, using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential entropies and, further, that it can be written as the difference of two mutual informations. These latter two expressions allow the analysis of signals via the log likelihood ratio to be extended beyond spectral matching to the study of their statistical quantities of differential entropy and mutual information. Examples from speech coding are presented to illustrate the utility of these results, which make the log likelihood ratio of interest in applications well beyond spectral matching for speech. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
Figures

Figure 1
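A short sketch of the central quantity, assuming stationary Gaussian AR(1) models: the entropy rate power is computed from the power spectral density via the Kolmogorov–Szegő integral, and the spectral distance is the log of the ratio. For an AR model, the entropy rate power equals the innovation variance, which gives a built-in sanity check (ln(1/2) below):

```python
import numpy as np

def entropy_rate_power(S, w):
    # Q = exp( (1/(2 pi)) * integral ln S(w) dw ): the one-step prediction
    # error power (Kolmogorov-Szego), proportional to exp(2h) for a
    # stationary Gaussian process with differential entropy rate h.
    return np.exp(np.trapz(np.log(S), w) / (2.0 * np.pi))

w = np.linspace(-np.pi, np.pi, 16001)

def ar1_psd(a, sigma2):
    # Power spectral density of x[n] = a * x[n-1] + e[n], var(e) = sigma2
    return sigma2 / np.abs(1.0 - a * np.exp(-1j * w)) ** 2

S1, S2 = ar1_psd(0.9, 1.0), ar1_psd(0.5, 2.0)
Q1, Q2 = entropy_rate_power(S1, w), entropy_rate_power(S2, w)
print(np.log(Q1 / Q2))   # log ratio of entropy rate powers, here ln(1/2)
```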

Open AccessArticle Sum Capacity for Single-Cell Multi-User Systems with M-Ary Inputs
Entropy 2017, 19(9), 497; doi:10.3390/e19090497
Received: 30 June 2017 / Revised: 30 August 2017 / Accepted: 12 September 2017 / Published: 15 September 2017
PDF Full-text (1699 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the sum capacity of a single-cell multi-user system under the constraint that the transmitted signal is adopted from an M-ary two-dimensional constellation with equal probability, for both the uplink, i.e., multiple access channel (MAC), and the downlink, i.e., broadcast channel (BC) scenarios.
[...] Read more.
This paper investigates the sum capacity of a single-cell multi-user system under the constraint that the transmitted signal is adopted from an M-ary two-dimensional constellation with equal probability, for both the uplink, i.e., multiple access channel (MAC), and the downlink, i.e., broadcast channel (BC) scenarios. Based on successive interference cancellation (SIC) and the entropy power Gaussian approximation, it is shown that both the multi-user MAC and BC can be approximated by a bank of parallel channels whose channel gains are modified by an extra attenuation factor equal to the negative exponential of the capacity of the interfering users. With this result, the capacity of the MAC and BC with an arbitrary number of users and arbitrary constellations can be easily calculated, in sharp contrast with traditional Monte Carlo simulation, whose computational load increases exponentially with the number of users. Further, the multi-user sum capacity under different power allocation strategies, including equal power allocation, equal capacity power allocation and maximum capacity power allocation, is also investigated. For equal capacity power allocation, a recursive relation for the power allocation solution is derived. For maximum capacity power allocation, the necessary condition for optimal power allocation is obtained, and an optimization algorithm for the power allocation problem is proposed based on this condition. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
Figures

Figure 1
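The parallel-channel reduction can be illustrated exactly in the Gaussian case, where treating an interferer as noise is the same as damping the channel gain by 2^(-2C), C being the interferer's capacity in bits (equivalently a negative exponential of the capacity in nats); for the M-ary constellations of the paper this relation is an approximation. A sketch with hypothetical gains and powers:

```python
import numpy as np

def awgn_capacity(snr):
    # Shannon capacity per real dimension, in bits
    return 0.5 * np.log2(1.0 + snr)

g1, g2, p1, p2 = 1.0, 0.6, 4.0, 4.0       # hypothetical gains and powers

# Two-user Gaussian MAC with SIC: user 1 decoded first (user 2 seen as
# noise), then cancelled, so user 2 gets a clean channel.
r1 = awgn_capacity(g1 * p1 / (1.0 + g2 * p2))
r2 = awgn_capacity(g2 * p2)
print("SIC rates:", r1, r2, "sum:", r1 + r2)

# Parallel-channel view: user 1's gain damped by 2**(-2 * C_int), with
# C_int the interfering user's capacity.
g1_eff = g1 * 2.0 ** (-2.0 * r2)
print(np.isclose(awgn_capacity(g1_eff * p1), r1))   # True: the views coincide
```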

Open AccessArticle Second Law Analysis for Couple Stress Fluid Flow through a Porous Medium with Constant Heat Flux
Entropy 2017, 19(9), 498; doi:10.3390/e19090498
Received: 16 August 2017 / Revised: 4 September 2017 / Accepted: 11 September 2017 / Published: 18 September 2017
PDF Full-text (1758 KB) | HTML Full-text | XML Full-text
Abstract
In the present work, entropy generation in the flow and heat transfer of couple stress fluid through an infinite inclined channel embedded in a saturated porous medium is presented. Due to the channel geometry, the asymmetrical slip conditions are imposed on the channel
[...] Read more.
In the present work, entropy generation in the flow and heat transfer of a couple stress fluid through an infinite inclined channel embedded in a saturated porous medium is presented. Due to the channel geometry, asymmetrical slip conditions are imposed on the channel walls. The upper wall of the channel is subjected to a constant heat flux, while the lower wall is insulated. The equations governing the fluid flow are formulated, non-dimensionalized and solved using the Adomian decomposition method. The Adomian series solutions for the velocity and temperature fields are then used to compute the entropy generation rate and the inherent heat irreversibility in the flow domain. The effects of various fluid parameters are presented graphically and discussed extensively. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
Figures

Figure 1
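As a flavour of the second-law post-processing involved, the sketch below computes a local entropy generation number and Bejan number from given dimensionless profiles. Both the profiles and the two-term Ns expression are illustrative stand-ins: the paper's expression also carries couple stress and porous-medium (Darcy) contributions, and its profiles come from the Adomian series.

```python
import numpy as np

Br_over_Omega = 1.0                       # assumed dimensionless group

y = np.linspace(0.0, 1.0, 201)
u = y * (1.0 - y)                         # illustrative velocity profile
theta = y ** 2                            # illustrative temperature profile

du, dtheta = np.gradient(u, y), np.gradient(theta, y)
Ns_heat = dtheta ** 2                     # heat-transfer irreversibility
Ns_fric = Br_over_Omega * du ** 2         # fluid-friction irreversibility
Ns = Ns_heat + Ns_fric                    # local entropy generation number
Be = np.divide(Ns_heat, Ns, out=np.zeros_like(Ns), where=Ns > 0)  # Bejan number
print("peak Ns:", Ns.max(), "| Bejan number range:", Be.min(), Be.max())
```

A Bejan number near 1 means heat transfer dominates the irreversibility; near 0, fluid friction dominates.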

Open AccessArticle Thermodynamics of Small Magnetic Particles
Entropy 2017, 19(9), 499; doi:10.3390/e19090499
Received: 2 August 2017 / Revised: 1 September 2017 / Accepted: 13 September 2017 / Published: 15 September 2017
PDF Full-text (568 KB) | HTML Full-text | XML Full-text
Abstract
In the present paper, we discuss the interpretation of some of the results of the thermodynamics in the case of very small systems. Most of the usual statistical physics is done for systems with a huge number of elements in what is called
[...] Read more.
In the present paper, we discuss the interpretation of some results of thermodynamics in the case of very small systems. Most of the usual statistical physics is done for systems with a huge number of elements, in what is called the thermodynamic limit, but not all of the approximations made under those conditions can be extended to all properties of objects with fewer than a thousand elements. The starting point is the Ising model in two dimensions (2D), where an analytic solution exists, which allows validating the numerical techniques used in the present article. From there on, we introduce several variations bearing in mind small systems such as the nanoscopic or even subnanoscopic particles which are nowadays produced for several applications. Magnetization is the main property investigated, with a view to two possible singular devices. The size of the systems (number of magnetic sites) is decreased so as to appreciate the departure from the results valid in the thermodynamic limit; periodic boundary conditions are eliminated to approach the reality of small particles; 1D, 2D and 3D systems are examined to appreciate the differences established by dimensionality in this small world; upon diluting the lattices, the effect of coordination number (bonding) is also explored; and, since the 2D Ising model is equivalent to the clock model with q = 2 degrees of freedom, we combine the previous results with the supplementary degrees of freedom coming from the variation of q up to q = 20 . Most of the previous results are numeric; however, for the case of a very small system, we obtain the exact partition function to compare with the conclusions coming from our numerical results. Conclusions can be summarized in the following way: the laws of thermodynamics remain the same, but the interpretation of results, averages and numerical treatments needs special care for systems with fewer than about a thousand constituents, and might need to be adapted for different properties or devices. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Figures

Figure 1
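The "exact partition function for a very small system" strategy is easy to reproduce in miniature: enumerate all configurations of a 3×3 open-boundary Ising lattice (512 states) and average directly, with no thermodynamic-limit approximations. The lattice size, J = 1 and zero field are choices for this sketch:

```python
import numpy as np
from itertools import product

L, J = 3, 1.0                             # 3x3 lattice, ferromagnetic coupling
sites = [(i, j) for i in range(L) for j in range(L)]
bonds = ([((i, j), (i + 1, j)) for i in range(L - 1) for j in range(L)]
         + [((i, j), (i, j + 1)) for i in range(L) for j in range(L - 1)])

def mean_abs_magnetization(T):
    Z, M = 0.0, 0.0
    for spins in product((-1, 1), repeat=L * L):   # all 2**9 configurations
        s = dict(zip(sites, spins))
        E = -J * sum(s[a] * s[b] for a, b in bonds)
        w = np.exp(-E / T)                 # Boltzmann weight, k_B = 1
        Z += w
        M += abs(sum(spins)) * w
    return M / (Z * L * L)                 # <|m|> per site

for T in (1.0, 2.27, 4.0):                 # 2.27 ~ bulk 2D critical temperature
    print(T, mean_abs_magnetization(T))
```

Note the open boundaries: dropping periodic boundary conditions, as the abstract advocates for real small particles, already shifts the averages noticeably at this size.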

Open AccessArticle Recall Performance for Content-Addressable Memory Using Adiabatic Quantum Optimization
Entropy 2017, 19(9), 500; doi:10.3390/e19090500
Received: 24 May 2017 / Revised: 10 September 2017 / Accepted: 11 September 2017 / Published: 15 September 2017
PDF Full-text (1152 KB) | HTML Full-text | XML Full-text
Abstract
A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps
[...] Read more.
A content-addressable memory (CAM) stores key-value associations such that the key is recalled by providing its associated value. While CAM recall is traditionally performed using recurrent neural network models, we show how to solve this problem using adiabatic quantum optimization. Our approach maps the recurrent neural network to a commercially available quantum processing unit by taking advantage of the common underlying Ising spin model. We then assess how accurately the quantum processor stores key-value associations by quantifying recall performance against an ensemble of problem sets. We observe that different learning rules from the neural network community influence recall accuracy, but performance appears to be limited by potential noise in the processor. The strong connection established between quantum processors and neural network problems supports the growing intersection of these two ideas. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Figures

Figure 1
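The mapping that makes this work is that a Hopfield network's energy is exactly an Ising Hamiltonian. A minimal sketch with Hebbian weights, in which brute-force energy minimization stands in for the quantum annealer and recall proceeds from a partial cue (patterns and cue are made up for the example):

```python
import numpy as np
from itertools import product

# Hebbian storage of two 6-spin patterns; E(s) = -1/2 s^T W s is an Ising energy.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = patterns.T @ patterns / n              # Hebb rule
np.fill_diagonal(W, 0.0)                   # no self-coupling

def energy(s):
    s = np.asarray(s)
    return -0.5 * s @ W @ s

# Recall from a partial cue: clamp the known spins, minimize over the rest.
cue = (1, -1, None, None, None, None)
candidates = (s for s in product((-1, 1), repeat=n)
              if all(c is None or c == v for c, v in zip(cue, s)))
print(min(candidates, key=energy))         # recovers the first pattern
```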

Open AccessArticle Attribute Value Weighted Average of One-Dependence Estimators
Entropy 2017, 19(9), 501; doi:10.3390/e19090501
Received: 8 July 2017 / Revised: 16 August 2017 / Accepted: 11 September 2017 / Published: 16 September 2017
PDF Full-text (435 KB) | HTML Full-text | XML Full-text
Abstract
Of numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers which utilize one-dependence estimators (ODEs) have been shown to be able to approximate the ground-truth attribute dependencies; meanwhile, the probability estimation in ODEs is
[...] Read more.
Of the numerous proposals to improve the accuracy of naive Bayes by weakening its attribute independence assumption, semi-naive Bayesian classifiers which utilize one-dependence estimators (ODEs) have been shown to be able to approximate the ground-truth attribute dependencies; meanwhile, the probability estimation in ODEs is effective, leading to excellent performance. In previous studies, ODEs were exploited directly in a simple way. For example, averaged one-dependence estimators (AODE) weaken the attribute independence assumption by directly averaging all of a constrained class of classifiers; consequently, all one-dependence estimators in AODE are weighted equally. In this study, we propose a new paradigm based on a simple, efficient, and effective attribute value weighting approach, called attribute value weighted average of one-dependence estimators (AVWAODE). AVWAODE assigns discriminative weights to different ODEs by computing the correlation between each root attribute value and the class. Our approach uses two different attribute value weighting measures, the Kullback–Leibler (KL) measure and the information gain (IG) measure, and thus two different versions are created, denoted by AVWAODE-KL and AVWAODE-IG, respectively. We experimentally tested them using a collection of 36 University of California at Irvine (UCI) datasets and found that they both achieved better performance than some other state-of-the-art Bayesian classifiers used for comparison. Full article
Figures

Figure 1
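The value-weighting idea can be sketched compactly: each root attribute value receives a weight measuring how far it shifts the class distribution away from the prior, here via a KL term in the spirit of AVWAODE-KL. Toy data, and the Laplace smoothing used in practice is omitted:

```python
import numpy as np

X = np.array([[0, 1], [0, 1], [1, 0], [1, 1], [1, 0], [0, 0]])   # 2 attributes
y = np.array([0, 0, 1, 1, 1, 0])
classes = np.unique(y)
prior = np.array([np.mean(y == c) for c in classes])             # P(C)

def kl_weight(attr, value):
    # KL( P(C | attr = value) || P(C) ): large when the root value is
    # informative about the class, zero when it tells us nothing.
    mask = X[:, attr] == value
    post = np.array([np.mean(y[mask] == c) for c in classes])
    nz = post > 0
    return float(np.sum(post[nz] * np.log2(post[nz] / prior[nz])))

for attr in range(X.shape[1]):
    for value in np.unique(X[:, attr]):
        print(f"attribute {attr}, value {value}: w = {kl_weight(attr, value):.3f}")
```

In the full method, these weights scale each value's one-dependence estimator before averaging, instead of AODE's flat average.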

Open AccessArticle Modeling NDVI Using Joint Entropy Method Considering Hydro-Meteorological Driving Factors in the Middle Reaches of Hei River Basin
Entropy 2017, 19(9), 502; doi:10.3390/e19090502
Received: 4 August 2017 / Revised: 8 September 2017 / Accepted: 13 September 2017 / Published: 15 September 2017
PDF Full-text (4562 KB) | HTML Full-text | XML Full-text
Abstract
Terrestrial vegetation dynamics are closely influenced by both hydrological process and climate change. This study investigated the relationships between vegetation pattern and hydro-meteorological elements. The joint entropy method was employed to evaluate the dependence between the normalized difference vegetation index (NDVI) and coupled
[...] Read more.
Terrestrial vegetation dynamics are closely influenced by both hydrological processes and climate change. This study investigated the relationships between vegetation pattern and hydro-meteorological elements. The joint entropy method was employed to evaluate the dependence between the normalized difference vegetation index (NDVI) and coupled variables in the middle reaches of the Hei River basin. Based on the spatial distribution of mutual information, the whole study area was divided into five sub-regions. In each sub-region, nested statistical models were applied to model the NDVI on the grid and regional scales, respectively. Results showed that the annual average NDVI increased at a rate of 0.005/a over the past 11 years. In the desert regions, the NDVI increased significantly with increasing precipitation and temperature, and an accurate NDVI retrieval model was obtained by coupling precipitation and temperature, especially in sub-region I. In the oasis regions, groundwater was also an important factor driving vegetation growth, and the rise of the groundwater level contributed to the growth of vegetation. However, the relationship was weaker in the artificial oasis regions (sub-region III and sub-region V) due to the influence of human activities such as irrigation. The overall correlation coefficient between the observed NDVI and the modeled NDVI was 0.97. The outcomes of this study are suitable for ecosystem monitoring, especially in the context of climate change. Further studies are necessary and should consider more factors, such as runoff and irrigation. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
Figures

Figure 1
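The dependence evaluation at the heart of the method is a mutual information estimate. A histogram-based sketch, with synthetic data standing in for the gridded NDVI and precipitation fields:

```python
import numpy as np

rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=1.0, size=5000)              # fake driver
ndvi = np.tanh(0.3 * precip) + 0.05 * rng.normal(size=5000)      # fake coupling

def mutual_information(x, y, bins=16):
    # I(X; Y) from a 2D histogram, in nats
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

print(mutual_information(precip, ndvi))
```

Mapping this quantity over the grid, and thresholding it, is what yields the sub-region partition described above.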

Open AccessArticle Traction Inverter Open Switch Fault Diagnosis Based on Choi–Williams Distribution Spectral Kurtosis and Wavelet-Packet Energy Shannon Entropy
Entropy 2017, 19(9), 504; doi:10.3390/e19090504
Received: 4 September 2017 / Revised: 10 September 2017 / Accepted: 11 September 2017 / Published: 16 September 2017
PDF Full-text (5347 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new approach for fault detection and location of open switch faults in the closed-loop inverter fed vector controlled drives of Electric Multiple Units is proposed. Spectral kurtosis (SK) based on Choi–Williams distribution (CWD) as a statistical tool can effectively
[...] Read more.
In this paper, a new approach for the detection and location of open-switch faults in the closed-loop inverter-fed vector-controlled drives of Electric Multiple Units is proposed. Spectral kurtosis (SK) based on the Choi–Williams distribution (CWD) is a statistical tool that can effectively indicate the presence of transients and their locations in the frequency domain. Wavelet-packet energy Shannon entropy (WPESE) is appropriate for detecting transient changes in complex non-linear and non-stationary signals. Based on analyses of the currents under normal and fault conditions, SK based on CWD and WPESE are combined with the DC component method: the first two are used for fault detection, and the DC component method is used for fault localization. This approach can diagnose the specific locations of faulty Insulated Gate Bipolar Transistors (IGBTs) with high accuracy, and it requires no additional devices. Experiments on the RT-LAB platform are carried out, and the experimental results verify the feasibility and effectiveness of the diagnosis method. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
Figures

Figure 1
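A sketch of the WPESE half of the detector, using the PyWavelets package; the wavelet, decomposition level and the crude fault injection are choices for this example, not the paper's settings:

```python
import numpy as np
import pywt

def wpese(signal, wavelet="db4", level=3):
    # Wavelet-packet energy Shannon entropy: decompose, take the energy
    # share of each terminal node, and compute the Shannon entropy of that
    # distribution; a jump flags a transient such as an open-switch fault.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    p = energies / energies.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

t = np.linspace(0.0, 1.0, 2048)
healthy = np.sin(2 * np.pi * 50 * t)       # idealized phase current
faulty = healthy.copy()
faulty[1024:] *= 0.2                       # crude stand-in for a lost IGBT
print(wpese(healthy), wpese(faulty))
```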

Open AccessArticle An Efficient Advantage Distillation Scheme for Bidirectional Secret-Key Agreement
Entropy 2017, 19(9), 505; doi:10.3390/e19090505
Received: 30 July 2017 / Revised: 30 August 2017 / Accepted: 14 September 2017 / Published: 18 September 2017
PDF Full-text (579 KB) | HTML Full-text | XML Full-text
Abstract
The classical secret-key agreement (SKA) scheme includes three phases: (a) advantage distillation (AD), (b) reconciliation, and (c) privacy amplification. Define the transmission rate as the ratio between the number of raw key bits obtained by the AD phase and the number of transmitted
[...] Read more.
The classical secret-key agreement (SKA) scheme includes three phases: (a) advantage distillation (AD), (b) reconciliation, and (c) privacy amplification. Define the transmission rate as the ratio between the number of raw key bits obtained in the AD phase and the number of bits transmitted during AD. Unidirectional SKA, whose transmission rate is 0.5, can be realized by using the original two-way wiretap channel as the AD phase. In this paper, we establish an efficient bidirectional SKA whose transmission rate is nearly 1, by modifying the two-way wiretap channel and using the modified channel as the AD phase. The bidirectional SKA can be extended to multiple rounds of SKA with the same performance and transmission rate. For multiple rounds of bidirectional SKA, we provide the bit error rate (BER) performance of the main channel and the eavesdropper’s channel, as well as the secret-key capacity. It is shown that the BER of the main channel is lower than that of the eavesdropper’s channel, and we prove that the transmission rate approaches 1 as the number of rounds grows large. Moreover, the secret-key capacity C s ranges from 0.04 to 0.1 as the channel error probability ranges from 0.01 to 0.15 in the binary symmetric channel (BSC), and approaches 0.3 as the signal-to-noise ratio increases in the additive white Gaussian noise (AWGN) channel. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
Figures

Figure 1
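For orientation, secret-key capacity figures of this kind can be related to the classical one-way lower bound I(A;B) − I(A;E) (Csiszár–Körner / Maurer), which for binary symmetric channels reduces to a difference of binary entropies. Advantage distillation exists precisely to drive the main channel's effective error rate below the eavesdropper's. The error-probability pairs below are illustrative, not taken from the paper:

```python
import numpy as np

def h2(p):
    # Binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# Lower bound h2(eps_e) - h2(eps_b) when Alice's bit reaches Bob over
# BSC(eps_b) and Eve over BSC(eps_e); positive only when Bob's channel
# is effectively better than Eve's.
for eps_b, eps_e in [(0.01, 0.05), (0.05, 0.15), (0.10, 0.25)]:
    print(eps_b, eps_e, round(h2(eps_e) - h2(eps_b), 3))
```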

Review

Jump to: Editorial, Research

Open AccessReview Economics and Finance: q-Statistical Stylized Features Galore
Entropy 2017, 19(9), 457; doi:10.3390/e19090457
Received: 4 August 2017 / Revised: 28 August 2017 / Accepted: 29 August 2017 / Published: 31 August 2017
PDF Full-text (5389 KB) | HTML Full-text | XML Full-text
Abstract
The Boltzmann–Gibbs (BG) entropy and its associated statistical mechanics were generalized, three decades ago, on the basis of the nonadditive entropy S q ( q ∈ ℝ ), which recovers the BG entropy in the q → 1 limit. The optimization of S
[...] Read more.
The Boltzmann–Gibbs (BG) entropy and its associated statistical mechanics were generalized, three decades ago, on the basis of the nonadditive entropy S q ( q ∈ ℝ ), which recovers the BG entropy in the q → 1 limit. The optimization of S q under appropriate simple constraints straightforwardly yields the so-called q-exponential and q-Gaussian distributions, respectively generalizing the exponential and Gaussian ones, recovered for q = 1 . These generalized functions ubiquitously emerge in complex systems, especially as economic and financial stylized features. These include distributions of price returns and volumes, inter-occurrence times, the characterization of wealth distributions and associated inequalities, among others. Here, we briefly review the basic concepts of this q-statistical generalization and focus on its rapidly growing applications in economics and finance. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)
Figures

Figure 1
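The two workhorse functions of q-statistics are easy to state in code. A small sketch of the q-exponential and the (unnormalized) q-Gaussian, which recover their standard counterparts as q → 1; q > 1 gives the fat tails observed in price-return distributions:

```python
import numpy as np

def q_exp(x, q):
    # q-exponential: e_q(x) = [1 + (1-q) x]_+ ** (1/(1-q)), -> exp(x) as q -> 1
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    # Unnormalized q-Gaussian e_q(-beta x^2)
    return q_exp(-beta * x**2, q)

x = np.linspace(-5.0, 5.0, 11)
print(q_gaussian(x, q=1.4))    # compare with q_gaussian(x, q=1.0 + 1e-13)
```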

Open AccessReview Born-Kothari Condensation for Fermions
Entropy 2017, 19(9), 479; doi:10.3390/e19090479
Received: 12 June 2017 / Revised: 2 September 2017 / Accepted: 6 September 2017 / Published: 13 September 2017
PDF Full-text (920 KB) | HTML Full-text | XML Full-text
Abstract
In the spirit of Bose–Einstein condensation, we present a detailed account of the statistical description of condensation phenomena for a Fermi–Dirac gas, following the works of Born and Kothari. For bosons, the condensed phase below a certain critical temperature permits macroscopic
[...] Read more.
In the spirit of Bose–Einstein condensation, we present a detailed account of the statistical description of condensation phenomena for a Fermi–Dirac gas, following the works of Born and Kothari. For bosons, the condensed phase below a certain critical temperature permits macroscopic occupation of the lowest-energy single-particle state; for fermions, due to the Pauli exclusion principle, the condensed phase occurs only in the form of singly occupied dense modes at the highest energy state. In spite of these rudimentary differences, our recent findings [Ghosh and Ray, 2017] identify the foregoing phenomenon as condensation-like coherence among fermions, analogous to a Bose–Einstein condensate, which is collectively described by a coherent matter wave. To reach the above conclusion, we employ the close relationship between the statistical methods of bosonic and fermionic fields pioneered by Cahill and Glauber. In addition to our previous results, we describe in this mini-review how the highest momentum (energy) for individual fermions, a prerequisite for the condensation process, can be specified in terms of the natural length and energy scales of the problem. The existence of such condensed phases, which are of obvious significance in the context of elementary particles, has also been scrutinized. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Figures

Figure 1
