Table of Contents

Entropy, Volume 19, Issue 4 (April 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the [...]
Displaying articles 1-47

Editorial

Jump to: Research, Review

Open Access Editorial: Entropic Aspects of Nonlinear Partial Differential Equations: Classical and Quantum Mechanical Perspectives
Entropy 2017, 19(4), 166; doi:10.3390/e19040166
Received: 10 April 2017 / Revised: 10 April 2017 / Accepted: 10 April 2017 / Published: 12 April 2017
PDF Full-text (144 KB) | HTML Full-text | XML Full-text
Abstract
There has been increasing research activity in recent years concerning the properties and applications of nonlinear partial differential equations that are closely related to nonstandard entropic functionals, such as the Tsallis and Rényi entropies. [...]

Research

Jump to: Editorial, Review

Open Access Article: Thermal Ratchet Effect in Confining Geometries
Entropy 2017, 19(4), 119; doi:10.3390/e19040119
Received: 31 January 2017 / Revised: 6 March 2017 / Accepted: 8 March 2017 / Published: 23 March 2017
Cited by 2 | PDF Full-text (1367 KB) | HTML Full-text | XML Full-text
Abstract
The stochastic model of the Feynman–Smoluchowski ratchet is proposed and solved using a generalization of the Fick–Jacobs theory. The theory fully captures the nonlinear response of the ratchet to the difference of heat bath temperatures. The ratchet performance is discussed using the mean velocity, the average heat flow between the two heat reservoirs, and the figure of merit, which quantifies the energetic cost of attaining a certain mean velocity. Limits of the theory are tested by comparing its predictions to numerics. We also demonstrate the connection between the ratchet effect emerging in the model and rotations of the probability current, and explain the direction of the mean velocity using a simple discrete analogue of the model.
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
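The ratchet mechanism summarized above lends itself to a quick numerical check. The sketch below is not the authors' Fick–Jacobs treatment; it simulates an overdamped Brownian particle in an asymmetric periodic potential coupled to two alternating heat baths, where the potential, temperatures, and time step are illustrative assumptions, and a quantitative study would also need to fix the Itô/Stratonovich convention for the position-dependent noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x):
    # F = -V'(x) for the asymmetric periodic potential V(x) = sin(2*pi*x) + 0.25*sin(4*pi*x)
    return -(2 * np.pi * np.cos(2 * np.pi * x) + np.pi * np.cos(4 * np.pi * x))

def temperature(x):
    # two heat baths alternating along the track (illustrative choice)
    return np.where(np.sin(2 * np.pi * x) > 0.0, 0.5, 1.5)

dt, n_steps, n_part = 1e-4, 50_000, 200
x = rng.uniform(0.0, 1.0, n_part)   # initial positions, spatial period = 1
x0 = x.copy()

for _ in range(n_steps):
    noise = rng.standard_normal(n_part)
    x += force(x) * dt + np.sqrt(2.0 * temperature(x) * dt) * noise

# a nonzero drift despite zero average force signals the ratchet effect
print(f"mean velocity ~ {(x - x0).mean() / (n_steps * dt):.3f}")
```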

Open Access Article: Impact Location and Quantification on an Aluminum Sandwich Panel Using Principal Component Analysis and Linear Approximation with Maximum Entropy
Entropy 2017, 19(4), 137; doi:10.3390/e19040137
Received: 4 January 2017 / Revised: 7 March 2017 / Accepted: 19 March 2017 / Published: 25 March 2017
PDF Full-text (1478 KB) | HTML Full-text | XML Full-text
Abstract
To avoid structural failures, it is critically important to detect, locate, and quantify impact damage as soon as it occurs. This can be achieved by impact identification methodologies, which continuously monitor the structure, detecting, locating, and quantifying impacts as they occur. This article presents an improved impact identification algorithm that uses principal component analysis (PCA) to extract features from the monitored signals and an algorithm based on linear approximation with maximum entropy to estimate the impacts. The proposed methodology is validated with two experimental applications, which include an aluminum plate and an aluminum sandwich panel. The results are compared with those of other impact identification algorithms available in the literature, demonstrating that the proposed method outperforms them.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
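For readers unfamiliar with the feature-extraction step, a generic PCA projection of monitored signals might look like the numpy sketch below; the synthetic data, component count, and the downstream linear-approximation-with-maximum-entropy estimator are placeholders, not the authors' implementation.

```python
import numpy as np

def pca_features(X, n_components):
    """Project signals onto the leading principal components.

    X: (n_samples, n_features) matrix, one row per monitored impact signal.
    Returns the reduced feature matrix fed to the impact estimator.
    """
    Xc = X - X.mean(axis=0)                   # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T           # scores on the leading components

# toy usage with synthetic "sensor" signals
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 1024))
Z = pca_features(X, n_components=10)
print(Z.shape)                                # (50, 10)
```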

Open Access Article: Leveraging Receiver Message Side Information in Two-Receiver Broadcast Channels: A General Approach †
Entropy 2017, 19(4), 138; doi:10.3390/e19040138
Received: 9 February 2017 / Revised: 16 March 2017 / Accepted: 20 March 2017 / Published: 23 March 2017
PDF Full-text (822 KB) | HTML Full-text | XML Full-text
Abstract
We consider two-receiver broadcast channels where each receiver may know a priori some of the messages requested by the other receiver as receiver message side information (RMSI). We devise a general approach to leverage RMSI in these channels. To this end, we first propose a pre-coding scheme considering the general message setup where each receiver requests both common and private messages and knows a priori part of the private message requested by the other receiver as RMSI. We then construct the transmission scheme of a two-receiver channel with RMSI by applying the proposed pre-coding scheme to the best transmission scheme for the channel without RMSI. To demonstrate the effectiveness of our approach, we apply our pre-coding scheme to three categories of the two-receiver discrete memoryless broadcast channel: (i) channel without state; (ii) channel with states known causally to the transmitter; and (iii) channel with states known non-causally to the transmitter. We then derive a unified inner bound for all three categories. We show that our inner bound is tight for some new cases in each of the three categories, as well as all cases whose capacity region was known previously.
(This article belongs to the Special Issue Network Information Theory)

Open Access Article: Tensor Singular Spectrum Decomposition Algorithm Based on Permutation Entropy for Rolling Bearing Fault Diagnosis
Entropy 2017, 19(4), 139; doi:10.3390/e19040139
Received: 13 March 2017 / Revised: 21 March 2017 / Accepted: 21 March 2017 / Published: 23 March 2017
Cited by 3 | PDF Full-text (6566 KB) | HTML Full-text | XML Full-text
Abstract
A mechanical vibration signal mapped into a high-dimensional space tends to exhibit special distribution and movement characteristics, which can further reveal the dynamic behavior of the original time series. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, tensor decomposition algorithms have broad application prospects in signal processing. A high-dimensional tensor can be obtained from a one-dimensional vibration signal by using phase space reconstruction, which is called the tensorization of data. As a new signal decomposition method, the tensor-based singular spectrum algorithm (TSSA) fully combines the advantages of phase space reconstruction and tensor decomposition. However, TSSA has some problems, mainly in estimating the rank of the tensor and selecting the optimal reconstruction tensor. In this paper, an improved TSSA algorithm based on convex optimization and permutation entropy (PE) is proposed. Firstly, aiming to accurately estimate the rank of the tensor decomposition, this paper presents a convex optimization algorithm using non-convex penalty functions based on singular value decomposition (SVD). Then, PE is employed to evaluate the desired tensor and improve the denoising performance. In order to verify the effectiveness of the proposed algorithm, both numerical simulations and experimental bearing failure data are analyzed.
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article: Ionic Liquids Confined in Silica Ionogels: Structural, Thermal, and Dynamical Behaviors
Entropy 2017, 19(4), 140; doi:10.3390/e19040140
Received: 31 January 2017 / Revised: 17 March 2017 / Accepted: 20 March 2017 / Published: 24 March 2017
Cited by 1 | PDF Full-text (3265 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Ionogels are porous monoliths providing nanometer-scale confinement of an ionic liquid within an oxide network. Various dynamic parameters and the detailed nature of phase transitions were investigated by using a neutron scattering technique, probing smaller time and space scales than earlier results from other techniques. By investigating the nature of the hydrogen mean square displacement (local mobility), qualitative information on diffusion and the different phase transitions was obtained. The results presented herein show similar short-time molecular dynamics between pristine ionic liquids and confined ionic liquids through residence time and diffusion coefficient values, thus explaining in depth the good ionic conductivity of ionogels.
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)

Open Access Article: Permutation Entropy for the Characterisation of Brain Activity Recorded with Magnetoencephalograms in Healthy Ageing
Entropy 2017, 19(4), 141; doi:10.3390/e19040141
Received: 30 January 2017 / Revised: 14 March 2017 / Accepted: 22 March 2017 / Published: 25 March 2017
Cited by 1 | PDF Full-text (2443 KB) | HTML Full-text | XML Full-text
Abstract
The characterisation of healthy ageing of the brain could help create a fingerprint of normal ageing that might assist in the early diagnosis of neurodegenerative conditions. This study examined changes in resting state magnetoencephalogram (MEG) permutation entropy due to age and gender in a sample of 220 healthy participants (98 males and 122 females, ages ranging between 7 and 84). Entropy was quantified using normalised permutation entropy and modified permutation entropy, with an embedding dimension of 5 and a lag of 1 as the input parameters for both algorithms. Effects of age were observed over the five regions of the brain, i.e., anterior, central, posterior, and left and right lateral, with the anterior and central regions containing the highest permutation entropy. Statistically significant differences due to age were observed in the different brain regions for both genders, with the evolution described by fitting polynomial regressions. Nevertheless, no significant differences between the genders were observed across all ages. These results suggest that the evolution of entropy in the background brain activity, quantified with permutation entropy algorithms, might be considered an alternative illustration of a ‘nominal’ physiological rhythm.
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
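Permutation entropy with the parameters quoted above (embedding dimension 5, lag 1) is straightforward to compute; a minimal normalised Bandt–Pompe sketch follows, with synthetic data standing in for the MEG recordings.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=5, lag=1, normalise=True):
    """Normalised permutation entropy of a 1-D series (Bandt–Pompe)."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * lag
    counts = {}
    for i in range(n):
        window = x[i : i + m * lag : lag]
        key = tuple(np.argsort(window))      # ordinal pattern of the window
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    H = -(p * np.log(p)).sum()
    return H / np.log(factorial(m)) if normalise else H

rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=5000)))        # near 1 for white noise
print(permutation_entropy(np.sin(0.1 * np.arange(5000))))  # much lower for a regular signal
```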

Open Access Article: A Novel Framework for Shock Filter Using Partial Differential Equations
Entropy 2017, 19(4), 142; doi:10.3390/e19040142
Received: 31 December 2016 / Revised: 12 March 2017 / Accepted: 22 March 2017 / Published: 26 March 2017
PDF Full-text (10577 KB) | HTML Full-text | XML Full-text
Abstract
In dilation or erosion processes, a shock filter is widely used for signal enhancement or image deblurring. Traditionally, the sign function is employed in shock filtering to reweight edge detection in images, deciding whether a pixel should dilate to the local maximum or evolve to the local minimum. Some researchers replace the sign function with the tanh or arctan function, trying to change the evolution tracks of the pixels while filtering is in progress. However, the analysis here reveals that function replacement alone usually does not work. This paper first revisits shock filters and their modifications. Then, a fuzzy shock filter is proposed, in which a membership function is adopted in the shock filter model to adjust the evolution rate of image pixels. The proposed filter is a parameter-tuning system which unites several formulations of shock filters into one fuzzy framework. Experimental results show that the new filter is flexible, robust, and converges quickly.
(This article belongs to the Section Information Theory)
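As context for the fuzzy variant proposed in the paper, a minimal Osher–Rudin-type shock filter can be written in a few lines; swapping `reweight` from np.sign to np.tanh (or to a fuzzy membership function, as the authors do) changes how pixels commit to dilation or erosion. Central differences are used for brevity; a production scheme would use upwind differencing.

```python
import numpy as np

def shock_filter(img, n_iter=50, dt=0.1, reweight=np.sign):
    """Osher-Rudin-type shock filter: I_t = -reweight(laplacian(I)) * |grad I|.

    reweight = np.sign is the classical choice; smoother functions soften
    the switching between dilation and erosion, which is the knob the
    paper's fuzzy membership function tunes.
    """
    I = img.astype(float).copy()
    for _ in range(n_iter):
        Ix = (np.roll(I, -1, 1) - np.roll(I, 1, 1)) / 2.0   # central differences
        Iy = (np.roll(I, -1, 0) - np.roll(I, 1, 0)) / 2.0
        lap = (np.roll(I, -1, 1) + np.roll(I, 1, 1) +
               np.roll(I, -1, 0) + np.roll(I, 1, 0) - 4.0 * I)
        I -= dt * reweight(lap) * np.hypot(Ix, Iy)
    return I

# toy usage: sharpen a blurred step edge
x = np.linspace(-1, 1, 64)
blurred = np.tile(np.tanh(5 * x), (64, 1))
sharp = shock_filter(blurred)
```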

Open Access Article: Paradigms of Cognition
Entropy 2017, 19(4), 143; doi:10.3390/e19040143
Received: 19 December 2016 / Revised: 23 February 2017 / Accepted: 10 March 2017 / Published: 27 March 2017
PDF Full-text (732 KB) | HTML Full-text | XML Full-text
Abstract
An abstract, quantitative theory which connects elements of information (key ingredients in the cognitive process) is developed. Seemingly unrelated results are thereby unified. As an indication of this, consider results in classical probabilistic information theory involving information projections and so-called Pythagorean inequalities. This has a certain resemblance to classical results in geometry bearing Pythagoras’ name. By appealing to the abstract theory presented here, one has a common point of reference for these results. In fact, the new theory provides a general framework for the treatment of a multitude of global optimization problems across a range of disciplines such as geometry, statistics and statistical physics. Several applications are given; among them, an “explanation” of Tsallis entropy is suggested. For this, as well as for the general development of the abstract underlying theory, emphasis is placed on interpretations and associated philosophical considerations. Technically, game theory is the key tool.
(This article belongs to the Special Issue Selected Papers from MaxEnt 2016)

Open Access Article: The Many Classical Faces of Quantum Structures
Entropy 2017, 19(4), 144; doi:10.3390/e19040144
Received: 9 January 2017 / Revised: 22 February 2017 / Accepted: 23 March 2017 / Published: 29 March 2017
Cited by 1 | PDF Full-text (323 KB) | HTML Full-text | XML Full-text
Abstract
Interpretational problems with quantum mechanics can be phrased precisely by only talking about empirically accessible information. This prompts a mathematical reformulation of quantum mechanics in terms of classical mechanics. We survey this programme in terms of algebraic quantum theory.
(This article belongs to the Special Issue Quantum Information and Foundations)
Open Access Article: Multiscale Cross-Approximate Entropy Analysis of Bilateral Fingertips Photoplethysmographic Pulse Amplitudes among Middle-to-Old Aged Individuals with or without Type 2 Diabetes
Entropy 2017, 19(4), 145; doi:10.3390/e19040145
Received: 30 January 2017 / Revised: 27 March 2017 / Accepted: 28 March 2017 / Published: 30 March 2017
Cited by 1 | PDF Full-text (897 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Multiscale cross-approximate entropy (MC-ApEn) between two different physiological signals could evaluate cardiovascular health in diabetes. Whether MC-ApEn analysis between two similar signals, such as photoplethysmographic (PPG) pulse amplitudes of bilateral fingertips, can reflect diabetes status is unknown. From a middle-to-old-aged population free of prior cardiovascular disease, we selected the unaffected (no type 2 diabetes, n = 36), the well-controlled diabetes (glycated hemoglobin (HbA1c) < 8%, n = 30), and the poorly-controlled diabetes (HbA1c ≥ 8%, n = 26) groups. MC-ApEn indexes were calculated from 1500 simultaneous consecutive PPG pulse amplitude signals of bilateral index fingertips. The averages over scale factors 1–5 (MC-ApEnSS) and over scale factors 6–10 (MC-ApEnLS) were defined as the small- and large-scale MC-ApEn, respectively. The MC-ApEnLS index was highest in the unaffected group, followed by the well-controlled diabetes group, and then the poorly-controlled diabetes group (0.70, 0.62, and 0.53; all paired p-values were <0.05); in contrast, the MC-ApEnSS index did not differ between groups. Our findings suggest that the bilateral fingertips large-scale MC-ApEnLS index of PPG pulse amplitudes might be able to evaluate the glycemic status and detect subtle vascular disease in type 2 diabetes.
(This article belongs to the Special Issue Entropy and Cardiac Physics II)

Open Access Article: Design and Implementation of SOC Prediction for a Li-Ion Battery Pack in an Electric Car with an Embedded System
Entropy 2017, 19(4), 146; doi:10.3390/e19040146
Received: 21 January 2017 / Revised: 22 March 2017 / Accepted: 27 March 2017 / Published: 17 April 2017
PDF Full-text (10716 KB) | HTML Full-text | XML Full-text
Abstract
Li-Ion batteries are widely preferred in electric vehicles. The charge status of batteries is a critical evaluation issue, and many researchers are working in this area. State of charge gives information about how much longer the battery can be used and when the charging process will be cut off. Incorrect predictions may cause overcharging or over-discharging of the battery. In this study, a low-cost embedded system is used to determine the state of charge of an electric car. A Li-Ion battery cell is trained using a feed-forward neural network via the Matlab Neural Network Toolbox. The trained cell model is adapted to the whole battery pack of the electric car and embedded via Matlab/Simulink in a low-cost microcontroller that runs the proposed system in real time. The experimental results indicate that accurate and robust estimation results can be obtained by the proposed system.

Open Access Article: Unsupervised Symbolization of Signal Time Series for Extraction of the Embedded Information
Entropy 2017, 19(4), 148; doi:10.3390/e19040148
Received: 20 January 2017 / Revised: 17 March 2017 / Accepted: 28 March 2017 / Published: 31 March 2017
PDF Full-text (1116 KB) | HTML Full-text | XML Full-text
Abstract
This paper formulates an unsupervised algorithm for symbolization of signal time series to capture the embedded dynamic behavior. The key idea is to convert the time series of the digital signal into a string of (spatially discrete) symbols from which the embedded dynamic information can be extracted in an unsupervised manner (i.e., with no requirement for labeling of time series). The main challenges here are: (1) definition of the symbol assignment for the time series; (2) identification of the partitioning segment locations in the signal space of the time series; and (3) construction of probabilistic finite-state automata (PFSA) from the symbol strings that contain temporal patterns. The reported work addresses these challenges by maximizing the mutual information measures between symbol strings and PFSA states. The proposed symbolization method has been validated by numerical simulation as well as by experimentation in a laboratory environment. The performance of the proposed algorithm has been compared to that of two commonly used algorithms of time series partitioning.
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
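To make the symbolization pipeline concrete, the sketch below uses a simple equal-probability (quantile) partition and a first-order PFSA transition matrix; the paper instead optimises partition locations by maximising mutual information between symbol strings and PFSA states, which this baseline does not attempt.

```python
import numpy as np

def symbolize(x, n_symbols=4):
    """Partition the signal range into equal-probability cells, a common
    maximum-entropy baseline for symbol assignment."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)               # symbol string in {0..n_symbols-1}

def pfsa_transition_matrix(symbols, n_symbols):
    """First-order PFSA: state = previous symbol; returns a row-stochastic matrix."""
    P = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = np.sin(0.2 * np.arange(2000)) + 0.3 * rng.normal(size=2000)
s = symbolize(x)
print(pfsa_transition_matrix(s, 4).round(2))   # temporal patterns show up off-diagonal
```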

Open Access Article: A Distribution Family Bridging the Gaussian and the Laplace Laws, Gram–Charlier Expansions, Kurtosis Behaviour, and Entropy Features
Entropy 2017, 19(4), 149; doi:10.3390/e19040149
Received: 10 February 2017 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 31 March 2017
PDF Full-text (2010 KB) | HTML Full-text | XML Full-text
Abstract
The paper devises a family of leptokurtic bell-shaped distributions which is based on the hyperbolic secant raised to a positive power, and which bridges the Laplace and Gaussian laws via asymptotic arguments. Moment and cumulant generating functions are then derived and represented in terms of polygamma functions. The behaviour of shape parameters, namely kurtosis and entropy, is investigated. In addition, Gram–Charlier-type (GCT) expansions, based on the aforementioned distributions and their orthogonal polynomials, are specified, and an operational criterion is provided to meet modelling requirements in a possibly severe kurtosis and skewness environment. The role played by entropy within the kurtosis ranges of GCT expansions is also examined.

Open Access Article: Minimum Sample Size for Reliable Causal Inference Using Transfer Entropy
Entropy 2017, 19(4), 150; doi:10.3390/e19040150
Received: 20 February 2017 / Revised: 29 March 2017 / Accepted: 29 March 2017 / Published: 31 March 2017
PDF Full-text (698 KB) | HTML Full-text | XML Full-text
Abstract
Transfer Entropy has been applied to experimental datasets to unveil causality between variables. In particular, its application to non-stationary systems has posed a great challenge due to restrictions on the sample size. Here, we have investigated the minimum sample size that produces a reliable causal inference. The methodology has been applied to two prototypical models: the linear autoregressive-moving average model and the non-linear logistic map. The relationship between the Transfer Entropy value and the sample size has been systematically examined. Additionally, we have shown the dependence of the reliable sample size on the strength of coupling between the variables. Our methodology offers a realistic lower bound for the sample size needed to produce a reliable outcome.
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
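A plug-in transfer entropy estimator makes the sample-size issue easy to reproduce. The sketch below (history length 1, equal-probability bins; a simplification relative to the estimators studied in the paper) shows the estimate drifting with the series length, and the coupled direction dominating the reverse one once the sample is large enough.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=2):
    """Plug-in estimate of TE(X->Y) = I(y_{t+1}; x_t | y_t), history length 1.
    Both series are discretised into equal-probability bins first."""
    def disc(z):
        edges = np.quantile(z, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(z, edges)
    xs, ys = disc(np.asarray(x)), disc(np.asarray(y))
    y1, y0, x0 = ys[1:], ys[:-1], xs[:-1]
    n = len(y1)
    c_xyz = Counter(zip(y1, y0, x0))   # counts for p(y_{t+1}, y_t, x_t)
    c_yz = Counter(zip(y0, x0))        # counts for p(y_t, x_t)
    c_xy = Counter(zip(y1, y0))        # counts for p(y_{t+1}, y_t)
    c_y = Counter(y0)                  # counts for p(y_t)
    te = 0.0
    for (a, b, c), nabc in c_xyz.items():
        te += (nabc / n) * np.log2((nabc * c_y[b]) / (c_yz[b, c] * c_xy[a, b]))
    return te

# coupled pair: y follows x with lag 1, so TE(x->y) should exceed TE(y->x)
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.normal(size=5000)
for n in (100, 500, 5000):             # bias shrinks as the sample grows
    print(n, transfer_entropy(x[:n], y[:n]), transfer_entropy(y[:n], x[:n]))
```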

Open Access Article: A Combined Entropy/Phase-Field Approach to Gravity
Entropy 2017, 19(4), 151; doi:10.3390/e19040151
Received: 9 March 2017 / Revised: 28 March 2017 / Accepted: 29 March 2017 / Published: 31 March 2017
PDF Full-text (1538 KB) | HTML Full-text | XML Full-text
Abstract
Terms related to gradients of scalar fields are introduced as scalar products into the formulation of entropy. A Lagrange density is then formulated by adding constraints based on known conservation laws. Applying the Lagrange formalism to the resulting Lagrange density leads to the Poisson equation of gravitation and also includes terms which are related to the curvature of space. The formalism further leads to terms possibly explaining nonlinear extensions known from modified Newtonian dynamics approaches. The article concludes with a short discussion of the presented methodology and provides an outlook on other phenomena which might be dealt with using this new approach.
(This article belongs to the Section Astrophysics and Cosmology)

Open Access Article: An Approach to the Evaluation of the Quality of Accounting Information Based on Relative Entropy in Fuzzy Linguistic Environments
Entropy 2017, 19(4), 152; doi:10.3390/e19040152
Received: 13 March 2017 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 5 April 2017
PDF Full-text (624 KB) | HTML Full-text | XML Full-text
Abstract
There is a risk when company stakeholders make decisions using accounting information of varied quality in the same way. In order to evaluate accounting information quality, this paper proposes an approach based on relative entropy in fuzzy linguistic environments. Firstly, the accounting information quality evaluation criteria are constructed not only from the quality of the accounting information content but also from the accounting information generation environment. Considering that the rating values with respect to the criteria are in linguistic forms with different granularities, a method to deal with the linguistic rating values is given. In the method, the linguistic terms are modeled with the 2-tuple linguistic model. Relative entropy is used to calculate the information consistency, which is used to derive the weights of experts and criteria. Finally, an example is given to illustrate the feasibility and practicability of the proposed method.

Open Access Article: Maxentropic Solutions to a Convex Interpolation Problem Motivated by Utility Theory
Entropy 2017, 19(4), 153; doi:10.3390/e19040153
Received: 17 February 2017 / Revised: 21 March 2017 / Accepted: 27 March 2017 / Published: 1 April 2017
PDF Full-text (348 KB) | HTML Full-text | XML Full-text
Abstract
Here, we consider the following inverse problem: determination of an increasing continuous function $U(x)$ on an interval $[a,b]$ from the knowledge of the integrals $\int U(x)\,dF_{X_i}(x) = \pi_i$, where the $X_i$ are random variables taking values on $[a,b]$ and the $\pi_i$ are given numbers. This is a linear integral equation with discrete data, which can be transformed into a generalized moment problem when $U(x)$ is supposed to have a positive derivative, and it becomes a classical interpolation problem if the $X_i$ are deterministic. In some cases, e.g., in utility theory in economics, natural growth and convexity constraints are required on the function, which makes the inverse problem more interesting. Not only that, the data may be provided in intervals and/or measured up to an additive error. It is the purpose of this work to show how the standard method of maximum entropy, as well as the method of maximum entropy in the mean, provides an efficient method to deal with these problems.
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)

Open Access Article: Is Turbulence a State of Maximum Energy Dissipation?
Entropy 2017, 19(4), 154; doi:10.3390/e19040154
Received: 2 February 2017 / Revised: 23 March 2017 / Accepted: 28 March 2017 / Published: 31 March 2017
PDF Full-text (993 KB) | HTML Full-text | XML Full-text
Abstract
Turbulent flows are known to enhance turbulent transport. It has then even been suggested that turbulence is a state of maximum energy dissipation. In this paper, we critically re-examine this suggestion in light of several recent works around the Maximum Entropy Production principle (MEP), which has been used in several out-of-equilibrium systems. We provide a set of four different optimization principles, based on maximization of energy dissipation, entropy production, and Kolmogorov–Sinai entropy, and on minimization of mixing time, and study the connection between these principles using simple out-of-equilibrium models describing mixing of a scalar quantity. We find that there is a chained relationship between the most probable stationary states of the system and their ability to obey one of the four principles. This provides an empirical justification of the Maximum Entropy Production principle in this class of systems, including some turbulent flows, for special boundary conditions. Otherwise, we claim that the minimization of the mixing time would be a more appropriate principle. We stress that this principle might actually be limited to flows where symmetry or dynamics impose pure mixing of a quantity (like angular momentum, momentum or temperature). The claim that turbulence is a state of maximum energy dissipation, a quantity intimately related to entropy production, is therefore limited to special situations that nevertheless include classical systems such as shear flows, Rayleigh–Bénard convection and von Kármán flows, forced with constant velocity or temperature conditions.
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Open Access Article: Random Walks Associated with Nonlinear Fokker–Planck Equations
Entropy 2017, 19(4), 155; doi:10.3390/e19040155
Received: 24 February 2017 / Revised: 28 March 2017 / Accepted: 30 March 2017 / Published: 1 April 2017
Cited by 1 | PDF Full-text (296 KB) | HTML Full-text | XML Full-text
Abstract
A nonlinear random walk related to the porous medium equation (nonlinear Fokker–Planck equation) is investigated. This random walk is such that, when the number of steps is sufficiently large, the probability of finding the walker in a certain position after taking a determined number of steps approximates a q-Gaussian distribution, $G_{q,\beta}(x) \propto [1-(1-q)\beta x^2]^{1/(1-q)}$, which is a solution of the porous medium equation. This can be seen as a verification of a generalized central limit theorem where the attractor is a q-Gaussian distribution, reducing to the Gaussian one when the linearity is recovered ($q \to 1$). In addition, motivated by this random walk, a nonlinear Markov chain is suggested.
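The q-Gaussian attractor quoted in the abstract is easy to evaluate numerically. The sketch below uses the Tsallis cutoff convention (the bracket is clipped at zero) and leaves the distribution unnormalised; it recovers the ordinary Gaussian as q approaches 1.

```python
import numpy as np

def q_exponential(u, q):
    """e_q(u) = [1 + (1-q)u]_+^{1/(1-q)}, which tends to exp(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(u)
    base = np.maximum(1.0 + (1.0 - q) * u, 0.0)   # Tsallis cutoff
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta):
    """Unnormalised q-Gaussian G_{q,beta}(x) ~ [1 - (1-q) beta x^2]^{1/(1-q)}."""
    return q_exponential(-beta * x * x, q)

x = np.linspace(-3, 3, 7)
print(np.allclose(q_gaussian(x, 1.0, 1.0), np.exp(-x**2)))  # True: Gaussian limit
print(q_gaussian(x, 1.5, 1.0).round(3))                     # heavy-tailed case, q > 1
```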

Open Access Article: Use of Exergy Analysis to Quantify the Effect of Lithium Bromide Concentration in an Absorption Chiller
Entropy 2017, 19(4), 156; doi:10.3390/e19040156
Received: 24 February 2017 / Revised: 27 March 2017 / Accepted: 30 March 2017 / Published: 1 April 2017
Cited by 1 | PDF Full-text (1892 KB) | HTML Full-text | XML Full-text
Abstract
Absorption chillers present opportunities to utilize sustainable fuels in the production of chilled water. An assessment of the steam-driven absorption chiller at the University of Idaho was performed to quantify the current exergy destruction rates. Measurements of external processes and flows were used to create a mathematical model, and Engineering Equation Solver was used to analyze it and identify the major sources of exergy destruction within the chiller. It was determined that the absorber, generator, and condenser are the largest contributors to the exergy destruction, at 30%, 31%, and 28% of the total, respectively. The exergetic efficiency is found to be 16%, with a coefficient of performance (COP) of 0.65. Impacts of the weak solution concentration of lithium bromide on the exergy destruction rates were evaluated using parametric studies. The studies revealed an optimum concentration: by increasing the weak solution concentration from 56% to 58.8%, a net decrease of 0.4% in the exergy destruction caused by the absorption chiller can be obtained. This 2.8% increase in lithium-bromide concentration decreases the exergy destruction primarily within the absorber, with a decrease of 5.1%. The increase in concentration is also shown to decrease the maximum cooling capacity by 3% and to increase the exergy destruction of the generator by 4.9%. The study also shows that the increase in concentration will change the internal temperatures by 3 to 7 °C. Conversely, reducing the weak solution concentration is shown to increase the exergetic destruction rates while also potentially increasing the cooling capacity.
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article: Quadratic Mutual Information Feature Selection
Entropy 2017, 19(4), 157; doi:10.3390/e19040157
Received: 13 December 2016 / Revised: 27 March 2017 / Accepted: 30 March 2017 / Published: 1 April 2017
PDF Full-text (1567 KB) | HTML Full-text | XML Full-text
Abstract
We propose a novel feature selection method based on quadratic mutual information, which has its roots in the Cauchy–Schwarz divergence and Rényi entropy. The method uses direct estimation of quadratic mutual information from data samples using Gaussian kernel functions, and can detect second-order non-linear relations. Its main advantages are: (i) unified analysis of discrete and continuous data, excluding any discretization; and (ii) its parameter-free design. The effectiveness of the proposed method is demonstrated through an extensive comparison with mutual information feature selection (MIFS), minimum redundancy maximum relevance (MRMR), and joint mutual information (JMI) on classification and regression problem domains. The experiments show that the proposed method performs comparably to the other methods on classification problems while being considerably faster. In the case of regression, it compares favourably to the others, but is slower.
(This article belongs to the collection Advances in Applied Statistical Mechanics)
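A compact sketch of the Cauchy–Schwarz quadratic mutual information estimator built from Gaussian-kernel information potentials follows. Bandwidth handling is deliberately naive here (one fixed sigma), whereas the paper's parameter-free design is more careful; the toy data are illustrative.

```python
import numpy as np

def qmi_cs(x, y, sigma=1.0):
    """Cauchy-Schwarz quadratic mutual information between 1-D samples,
    estimated with Gaussian kernels of width sigma.  Nonnegative, and
    near zero when x and y are independent."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    Kx = np.exp(-(x - x.T) ** 2 / (2 * sigma ** 2))    # pairwise kernels on x
    Ky = np.exp(-(y - y.T) ** 2 / (2 * sigma ** 2))    # pairwise kernels on y
    v_joint = (Kx * Ky).mean()                         # ~ integral of p(x,y)^2
    v_marg = Kx.mean() * Ky.mean()                     # ~ integral of (p(x)p(y))^2
    v_cross = (Kx.mean(axis=1) * Ky.mean(axis=1)).mean()  # ~ integral of p(x,y)p(x)p(y)
    return np.log(v_joint * v_marg / v_cross ** 2)

rng = np.random.default_rng(0)
a = rng.normal(size=500)
print(qmi_cs(a, 0.9 * a + 0.1 * rng.normal(size=500)))  # strongly dependent: clearly positive
print(qmi_cs(a, rng.normal(size=500)))                  # independent: near zero
```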

Open Access Article: Nonequilibrium Thermodynamics and Steady State Density Matrix for Quantum Open Systems
Entropy 2017, 19(4), 158; doi:10.3390/e19040158
Received: 8 March 2017 / Revised: 28 March 2017 / Accepted: 30 March 2017 / Published: 2 April 2017
Cited by 1 | PDF Full-text (342 KB) | HTML Full-text | XML Full-text
Abstract
We consider the generic model of a finite-size quantum electron system connected to two (temperature and particle) reservoirs. The quantum open system is driven out of equilibrium by the presence of both temperature and chemical potential differences between the two reservoirs. The nonequilibrium (NE) thermodynamical properties of such a quantum open system are studied for the steady state regime. In such a regime, the corresponding NE density matrix is built on the so-called generalised Gibbs ensembles. From different expressions of the NE density matrix, we can identify the terms related to the entropy production in the system. We show, for a simple model, that the entropy production rate is always a positive quantity. Alternative expressions for the entropy production are also obtained from the conventional Gibbs–von Neumann formula and discussed in detail. Our results corroborate and expand earlier works found in the literature.
(This article belongs to the Special Issue Quantum Thermodynamics)

Open Access Article: A Study of the Transfer Entropy Networks on Industrial Electricity Consumption
Entropy 2017, 19(4), 159; doi:10.3390/e19040159
Received: 11 January 2017 / Revised: 29 March 2017 / Accepted: 3 April 2017 / Published: 13 April 2017
PDF Full-text (2525 KB) | HTML Full-text | XML Full-text
Abstract
We study information transfer routes among cross-industry and cross-region electricity consumption data based on transfer entropy and the MST (Minimum Spanning Tree) model. First, we characterize the information transfer routes with transfer entropy matrices, and find that the total entropy transfer of the relatively developed Guangdong Province is lower than that of the others, with significant industrial clustering within the province. Furthermore, using a reshuffling method, we find that driven industries carry much more information flow than driving industries, and are more influential on the degree of order of regional industries. Finally, based on the Chu–Liu–Edmonds MST algorithm, we extract the minimum spanning trees of provincial industries. Individual MSTs show a chain-like formation in developed provinces and star-like structures in developing provinces. Additionally, all MSTs rooted at the industrial sector with minimal information outflow are chain-formed.
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
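The Chu–Liu–Edmonds step can be reproduced with networkx, which implements Edmonds' algorithm for spanning arborescences of directed graphs. The transfer-entropy matrix below is a made-up toy, and keeping the strongest flows via a maximum arborescence is one plausible reading; the paper's exact edge weighting is not settled here.

```python
import numpy as np
import networkx as nx

# toy transfer-entropy matrix between 5 sectors (illustrative values, row -> column)
te = np.array([
    [0.00, 0.30, 0.10, 0.05, 0.02],
    [0.05, 0.00, 0.25, 0.10, 0.08],
    [0.02, 0.04, 0.00, 0.20, 0.15],
    [0.01, 0.03, 0.06, 0.00, 0.18],
    [0.02, 0.02, 0.05, 0.07, 0.00],
])

G = nx.DiGraph()
for i in range(len(te)):
    for j in range(len(te)):
        if i != j:
            G.add_edge(i, j, weight=te[i, j])

# Chu-Liu-Edmonds: spanning arborescence keeping the strongest information flows
mst = nx.maximum_spanning_arborescence(G, attr="weight")
print(sorted(mst.edges(data="weight")))
```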

Open Access Article: Consistent Estimation of Partition Markov Models
Entropy 2017, 19(4), 160; doi:10.3390/e19040160
Received: 1 March 2017 / Revised: 31 March 2017 / Accepted: 4 April 2017 / Published: 6 April 2017
Cited by 1 | PDF Full-text (291 KB) | HTML Full-text | XML Full-text
Abstract
The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element of the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, when n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open Access Article: P-Adic Analog of Navier–Stokes Equations: Dynamics of Fluid’s Flow in Percolation Networks (from Discrete Dynamics with Hierarchic Interactions to Continuous Universal Scaling Model)
Entropy 2017, 19(4), 161; doi:10.3390/e19040161
Received: 15 March 2017 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 7 April 2017
PDF Full-text (3677 KB) | HTML Full-text | XML Full-text
Abstract
Recently, p-adic (and, more generally, ultrametric) spaces representing tree-like networks of percolation, and as a special case capillary patterns in porous media, have started to be used to model the propagation of fluids (e.g., oil, water, oil-in-water, and water-in-oil emulsion). The aim of this note is to derive p-adic dynamics described by fractional differential operators (Vladimirov operators), starting with discrete dynamics based on hierarchically-structured interactions between the fluids’ volumes concentrated at different levels of the percolation tree and arriving at the multiscale universal topology of the percolating nets. Similar systems of discrete hierarchic equations have been widely applied to the modeling of turbulence. However, in the present work this similarity is only formal since, in our model, the trees are real physical patterns with a tree-like topology of capillaries (or fractures) in random porous media (not cascade trees, as in the case of turbulence, which will be discussed elsewhere for the spinner flowmeter commonly used in the petroleum industry). By going to the “continuous limit” (with respect to the p-adic topology), we represent the dynamics on the tree-like configuration space as an evolutionary nonlinear p-adic fractional (pseudo-) differential equation, the tree-like analog of the Navier–Stokes equation. We hope that our work helps to come closer to a nonlinear equation solution, taking into account the scaling, hierarchies, and formal derivations, imprinted from the similar properties of the real physical world. Once this coupling is resolved, the more problematic question of information scaling in industrial applications can be addressed.

Open Access Article: Situatedness and Embodiment of Computational Systems
Entropy 2017, 19(4), 162; doi:10.3390/e19040162
Received: 26 February 2017 / Revised: 1 April 2017 / Accepted: 4 April 2017 / Published: 7 April 2017
PDF Full-text (232 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that it is notoriously over-appreciated in the current debate. The new mechanistic view on explanation suggests that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, it is also stressed that not all detail counts, and that some bodily features of cognitive systems should be left out from explanations.
Open Access Article: Modelling Urban Sprawl Using Remotely Sensed Data: A Case Study of Chennai City, Tamilnadu
Entropy 2017, 19(4), 163; doi:10.3390/e19040163
Received: 4 January 2017 / Revised: 1 April 2017 / Accepted: 5 April 2017 / Published: 7 April 2017
PDF Full-text (7403 KB) | HTML Full-text | XML Full-text
Abstract
Urban sprawl (US), propelled by rapid population growth, leads to the shrinkage of productive agricultural lands and pristine forests in suburban areas and, in turn, adversely affects the provision of ecosystem services. The quantification of US is thus crucial for effective urban planning and environmental management. Like many megacities in fast-growing developing countries, Chennai, the capital of Tamilnadu and one of the business hubs in India, has experienced extensive US triggered by the doubling of its total population over the past three decades. However, the extent and level of US has not yet been quantified, and a prediction of the future extent of US is lacking. We employed Random Forest (RF) classification on Landsat imagery from 1991, 2003, and 2016, and computed six landscape metrics to delineate the extent of urban areas within a 10 km suburban buffer of Chennai. The level of US was then quantified using Rényi’s entropy. A land change model was subsequently used to project land cover for 2027. A 70.35% expansion in urban areas was observed, mainly towards the suburban periphery of Chennai, between 1991 and 2016. The Rényi entropy value for 2016 was 0.9, exhibiting a two-fold level of US compared to 1991. The spatial metrics values indicate that the existing urban areas became denser and that suburban agricultural, forest, and particularly barren lands were transformed into fragmented urban settlements. The forecasted land cover for 2027 indicates a conversion of 13,670.33 ha (16.57% of the total landscape) of existing forests and agricultural lands into urban areas, with an associated increase in the entropy value to 1.7, indicating a tremendous level of US. Our study provides useful metrics for urban planning authorities to address the social-ecological consequences of US and to protect ecosystem services.
(This article belongs to the Special Issue Entropy for Sustainable and Resilient Urban Future)
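Rényi's entropy, used above to grade sprawl, is a one-liner over class proportions: H_α(p) = log(Σ p_i^α) / (1 − α), recovering Shannon entropy as α → 1. The zone shares and α below are illustrative, not the study's data.

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Renyi entropy H_alpha(p) = log(sum(p_i^alpha)) / (1 - alpha)."""
    p = np.asarray(p, float)
    p = p[p > 0] / p.sum()                     # normalise, drop empty classes
    if abs(alpha - 1.0) < 1e-12:
        return -(p * np.log(p)).sum()          # Shannon limit
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

# illustrative built-up area shares over concentric buffer zones of a city
shares_compact = [0.60, 0.25, 0.10, 0.05]      # concentrated growth: low entropy
shares_sprawl = [0.30, 0.28, 0.22, 0.20]       # dispersed growth: higher entropy
print(renyi_entropy(shares_compact), renyi_entropy(shares_sprawl))
```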

Open Access Article: Heisenberg and Entropic Uncertainty Measures for Large-Dimensional Harmonic Systems
Entropy 2017, 19(4), 164; doi:10.3390/e19040164
Received: 8 March 2017 / Revised: 30 March 2017 / Accepted: 6 April 2017 / Published: 9 April 2017
Cited by 1 | PDF Full-text (329 KB) | HTML Full-text | XML Full-text
Abstract
The D-dimensional harmonic system (i.e., a particle moving under the action of a quadratic potential) is, together with the hydrogenic system, the main prototype of the physics of multidimensional quantum systems. In this work, we rigorously determine the leading term of the Heisenberg-like and entropy-like uncertainty measures of this system, as given by the radial expectation values and the Rényi entropies, respectively, in the limit of large D. The associated multidimensional position-momentum uncertainty relations are discussed, showing that they saturate the corresponding general ones. A conjecture about the Shannon-like uncertainty relation is given, and an interesting phenomenon is observed: the Heisenberg-like and Rényi-entropy-based equality-type uncertainty relations for all of the D-dimensional harmonic oscillator states in the pseudoclassical ($D \to \infty$) limit are the same as the corresponding ones for the hydrogenic systems, despite the very different character of the oscillator and Coulomb potentials.
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Open Access Article: An Entropy-Based Approach for Evaluating Travel Time Predictability Based on Vehicle Trajectory Data
Entropy 2017, 19(4), 165; doi:10.3390/e19040165
Received: 29 January 2017 / Revised: 24 March 2017 / Accepted: 7 April 2017 / Published: 11 April 2017
Cited by 1 | PDF Full-text (2716 KB) | HTML Full-text | XML Full-text
Abstract
With the great development of intelligent transportation systems (ITS), travel time prediction has attracted the interest of many researchers, and a large number of prediction methods have been developed. However, the predictability of a travel time series, an unavoidable topic, is the basic premise for travel time prediction, and it has received less attention than the methodology. Based on an analysis of the complexity of the travel time series, this paper defines travel time predictability to express the probability of correct travel time prediction, and proposes an entropy-based method to measure the upper bound of travel time predictability. Multiscale entropy is employed to quantify the complexity of the travel time series, and the relationships between entropy and the upper bound of travel time predictability are presented. Empirical studies are made with vehicle trajectory data in an express road section to shape the features of travel time predictability. The effects of time scales, tolerance, and series length on entropy and travel time predictability are analyzed, and some valuable suggestions about the accuracy of travel time predictability are discussed. Finally, comparisons between travel time predictability and actual prediction results from two prediction models, ARIMA and BPNN, are made. Experimental results demonstrate the validity and reliability of the proposed travel time predictability.
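Multiscale entropy, as used in the paper, coarse-grains the series and computes sample entropy at each scale. A self-contained sketch follows; the tolerance is fixed at 0.2 of the original standard deviation, a common default, and the paper's exact parameters are not reproduced.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the conditional probability that sequences
    matching for m points (Chebyshev distance < r) also match for m + 1."""
    x = np.asarray(x, float)
    def match_pairs(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(-1)
        return ((d < r).sum() - len(emb)) / 2      # pairs, excluding self-matches
    B, A = match_pairs(m), match_pairs(m + 1)
    return -np.log(A / B)

def multiscale_entropy(x, scales=range(1, 6), m=2):
    """Coarse-grain by non-overlapping averaging, then SampEn per scale."""
    x = np.asarray(x, float)
    r = 0.2 * x.std()                              # tolerance from the original series
    return [sample_entropy(x[: len(x) // s * s].reshape(-1, s).mean(axis=1), m, r)
            for s in scales]

rng = np.random.default_rng(0)
print(np.round(multiscale_entropy(rng.normal(size=1000)), 2))  # decreases with scale for white noise
```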

Open Access Article: Application of the Fuzzy Oil Drop Model Describes Amyloid as a Ribbonlike Micelle
Entropy 2017, 19(4), 167; doi:10.3390/e19040167
Received: 6 March 2017 / Revised: 7 April 2017 / Accepted: 11 April 2017 / Published: 14 April 2017
PDF Full-text (9194 KB) | HTML Full-text | XML Full-text
Abstract
We propose a mathematical model describing the formation of micellar forms—whether spherical, globular, cylindrical, or ribbonlike—as well as its adaptation to protein structure. Our model, based on the fuzzy oil drop paradigm, assumes that in a spherical micelle the distribution of hydrophobicity produced by the alignment of polar molecules with the external water environment can be modeled by a 3D Gaussian function. Perturbing this function by changing the values of its sigma parameters leads to a variety of conformations—the model is therefore applicable to globular, cylindrical, and ribbonlike micelles. In the context of protein structures ranging from globular to ribbonlike, our model can explain the emergence of fibrillar forms; particularly amyloids.
(This article belongs to the Section Information Theory)

Open Access Article: Where There is Life There is Mind: In Support of a Strong Life-Mind Continuity Thesis
Entropy 2017, 19(4), 169; doi:10.3390/e19040169
Received: 22 February 2017 / Revised: 10 April 2017 / Accepted: 11 April 2017 / Published: 14 April 2017
Cited by 3 | PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
This paper considers questions about continuity and discontinuity between life and mind. It begins by examining such questions from the perspective of the free energy principle (FEP). The FEP is becoming increasingly influential in neuroscience and cognitive science. It says that organisms act to maintain themselves in their expected biological and cognitive states, and that they can do so only by minimizing their free energy given that the long-term average of free energy is entropy. The paper then argues that there is no singular interpretation of the FEP for thinking about the relation between life and mind. Some FEP formulations express what we call an independence view of life and mind. One independence view is a cognitivist view of the FEP. It turns on information processing with semantic content, thus restricting the range of systems capable of exhibiting mentality. Other independence views exemplify what we call an overly generous non-cognitivist view of the FEP, and these appear to go in the opposite direction. That is, they imply that mentality is nearly everywhere. The paper proceeds to argue that non-cognitivist FEP, and its implications for thinking about the relation between life and mind, can be usefully constrained by key ideas in recent enactive approaches to cognitive science. We conclude that the most compelling account of the relationship between life and mind treats them as strongly continuous, and that this continuity is based on particular concepts of life (autopoiesis and adaptivity) and mind (basic and non-semantic).
Open AccessArticle Dynamic Rankings for Seed Selection in Complex Networks: Balancing Costs and Coverage
Entropy 2017, 19(4), 170; doi:10.3390/e19040170
Received: 27 February 2017 / Revised: 7 April 2017 / Accepted: 12 April 2017 / Published: 15 April 2017
PDF Full-text (2681 KB) | HTML Full-text | XML Full-text
Abstract
Information spreading processes in complex networks are usually initiated by selecting highly influential nodes, in accordance with the seeding strategy in use. The majority of earlier studies assumed that all selected seeds are used at the beginning of the process. Our previous research revealed the advantage of using a sequence of seeds instead of a single-stage approach. The current study extends sequential seeding and further improves results with the use of dynamic rankings, which are created by recalculating the network measures used for additional seed selection during the process, instead of a static ranking computed only once at the beginning. For the calculation of network centrality measures such as degree, only non-infected nodes are taken into account. The results showed that coverage, represented by the percentage of activated nodes, increases as the interval between recalculations shrinks, revealing a trade-off between outcome and computational cost. For over 90% of the simulated cases, dynamic rankings with a high frequency of recalculations delivered better coverage than approaches based on static rankings. Full article
(This article belongs to the Section Complexity)
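As a sketch of the mechanism described above (illustrative only; the paper's diffusion models and parameters differ), the following toy implementation re-ranks non-infected nodes by degree before each seeding stage under an independent cascade process:

```python
import random

def independent_cascade(graph, activated, new_seeds, p=0.1, rng=random.Random(42)):
    """Continue an independent-cascade diffusion: only newly activated
    nodes try (once) to infect each neighbor with probability p."""
    frontier = set(new_seeds) - activated
    activated = set(activated) | frontier
    while frontier:
        nxt = set()
        for u in frontier:
            for v in graph[u]:
                if v not in activated and rng.random() < p:
                    nxt.add(v)
        activated |= nxt
        frontier = nxt
    return activated

def sequential_seeding(graph, budget, stage=1):
    """Spend the seed budget in stages; before each stage, re-rank nodes
    by degree among the not-yet-activated ones only (a dynamic ranking).
    A static strategy would compute the ranking once, up front."""
    activated, used = set(), 0
    while used < budget:
        ranking = sorted((n for n in graph if n not in activated),
                         key=lambda n: len(graph[n]), reverse=True)
        seeds = ranking[:min(stage, budget - used)]
        if not seeds:
            break
        used += len(seeds)
        activated = independent_cascade(graph, activated, seeds)
    return activated

# Toy undirected graph as an adjacency dict; coverage is the fraction
# of nodes activated at the end of the process.
g = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [3]}
print(len(sequential_seeding(g, budget=2)) / len(g))
```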

Open AccessArticle Entropy Generation of Double Diffusive Forced Convection in Porous Channels with Thick Walls and Soret Effect
Entropy 2017, 19(4), 171; doi:10.3390/e19040171
Received: 14 March 2017 / Revised: 13 April 2017 / Accepted: 13 April 2017 / Published: 15 April 2017
PDF Full-text (5780 KB) | HTML Full-text | XML Full-text
Abstract
The second law performance of double diffusive forced convection in a horizontal porous channel with thick walls was considered. The Soret effect was included in the concentration equation, and a first-order chemical reaction was chosen for the concentration boundary conditions at the porous-solid wall interfaces. The investigation focuses on two principal types of boundary conditions: the first assumes a constant temperature at the outer surfaces of the solid walls, while the second assumes a constant heat flux at the lower wall and convective heat transfer at the upper wall. After obtaining the velocity, temperature and concentration distributions, the local and total entropy generation formulations were used to visualize the second law performance of the two cases. The results indicate that the total entropy generation rate is directly related to the lower wall thickness. Interestingly, the total entropy generation rate for the second case reaches a minimum value if the upper and lower wall thicknesses are chosen correctly; this does not hold for the first case. These analyses can be useful for the design of microreactors and microcombustors when the second law analysis is taken into account. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)
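For orientation, a commonly used volumetric entropy generation expression for combined heat and mass transfer with a Soret-type coupling reads (this is the generic textbook form; the paper's exact formulation, including porous-medium and reaction contributions, may differ):

\[
S'''_{\mathrm{gen}} = \frac{k}{T^{2}}(\nabla T)^{2} + \frac{\mu}{T}\,\Phi + \frac{RD}{C}(\nabla C)^{2} + \frac{RD}{T}\,\nabla C \cdot \nabla T ,
\]

where the four terms account for heat conduction, viscous dissipation (\(\Phi\)), pure mass diffusion, and the cross heat-mass coupling; the total entropy generation rate follows by integrating over the fluid and wall volumes.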

Open AccessArticle Multilevel Integration Entropies: The Case of Reconstruction of Structural Quasi-Stability in Building Complex Datasets
Entropy 2017, 19(4), 172; doi:10.3390/e19040172
Received: 27 February 2017 / Revised: 12 April 2017 / Accepted: 14 April 2017 / Published: 18 April 2017
PDF Full-text (3060 KB) | HTML Full-text | XML Full-text
Abstract
Complex datasets now permeate versatile research disciplines, creating the need for methods that tackle complexity by finding the patterns inherent in the data. The challenge lies in transforming the extracted patterns into pragmatic knowledge. In this paper, new information entropy measures for characterizing the multidimensional structure extracted from complex datasets are proposed, complementing conventionally applied algebraic topology methods. Derived from the topological relationships embedded in the data, the multilevel entropy measures are used to track transitions in building the high-dimensional structure of a dataset, captured by the stratified partition of a simplicial complex. The proposed entropies prove suitable for defining and operationalizing intuitive notions of structural relationships in the cumulative experience of a taxi driver, whose cognitive map is formed by origins and destinations. Comparing the multilevel integration entropies recalculated after each new ride is added to the data structure indicates a slowing pace of change in the origin-destination structure over time. The repetitiveness of the taxi driver's rides and the stability of the origin-destination structure exhibit the relative invariance of the rides in space and time. These results shed light on the taxi driver's ride habits, as well as on the commuting patterns of the passengers driven. Full article

Open AccessArticle Leaks: Quantum, Classical, Intermediate and More
Entropy 2017, 19(4), 174; doi:10.3390/e19040174
Received: 26 January 2017 / Revised: 30 March 2017 / Accepted: 12 April 2017 / Published: 19 April 2017
Cited by 3 | PDF Full-text (335 KB) | HTML Full-text | XML Full-text
Abstract
We introduce the notion of a leak for general process theories and identify quantum theory as a theory with minimal leakage, while classical theory has maximal leakage. We provide a construction that adjoins leaks to theories, an instance of which describes the emergence of classical theory by adjoining decoherence leaks to quantum theory. Finally, we show that any definition of purity for processes in general process theories has to make reference to the leaks of that theory, a feature missing in standard definitions; hence, we propose a refined definition and study the resulting notion of purity for quantum, classical and intermediate theories. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle Second Law Analysis of a Mobile Air Conditioning System with Internal Heat Exchanger Using Low GWP Refrigerants
Entropy 2017, 19(4), 175; doi:10.3390/e19040175
Received: 10 March 2017 / Revised: 14 April 2017 / Accepted: 17 April 2017 / Published: 19 April 2017
Cited by 1 | PDF Full-text (2922 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a Second Law analysis applied to a mobile air conditioning system (MAC) integrated with an internal heat exchanger (IHX), considering R152a, R1234yf and R1234ze as low global warming potential (GWP) refrigerants and establishing R134a as the baseline. The system simulation considers the maximum value of entropy generated in the IHX. The maximum entropy production occurs at an effectiveness of 66% for both R152a and R134a, whereas for R1234yf and R1234ze it occurs at 55%. Sub-cooling and superheating effects are evaluated for each case; the sub-cooling effect shows the greatest impact on cycle efficiency. The results also show the influence of isentropic efficiency on relative exergy destruction, with the compressor and the condenser being the most affected components for all of the refrigerants studied herein. The system is found to operate most efficiently with the refrigerant R1234ze. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
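As a pointer to how component exergy destruction is typically computed from state-point data, here is a minimal sketch (the numeric states are placeholders, not values from the paper):

```python
T0 = 298.15  # K, dead-state (ambient) temperature

def exergy_destruction(m_dot, s_in, s_out, q=0.0, T_b=None):
    """Rate of exergy destruction from an entropy balance on a steady
    control volume: Ed = T0 * Sgen, with
    Sgen = m_dot*(s_out - s_in) - q/T_b (q: heat into the CV at T_b)."""
    s_gen = m_dot * (s_out - s_in) - (q / T_b if T_b else 0.0)
    return T0 * s_gen

# Placeholder compressor states (entropies in kJ/kg.K); adiabatic, so q = 0.
# A lower isentropic efficiency raises s_out and hence Ed.
Ed_comp = exergy_destruction(m_dot=0.05, s_in=1.73, s_out=1.78)
print(f"Compressor exergy destruction: {Ed_comp:.3f} kW")
```

The relative exergy destruction of each component is then its Ed divided by the sum over all components of the cycle.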

Open AccessArticle Multi-Scale Permutation Entropy Based on Improved LMD and HMM for Rolling Bearing Diagnosis
Entropy 2017, 19(4), 176; doi:10.3390/e19040176
Received: 8 January 2017 / Revised: 3 March 2017 / Accepted: 14 April 2017 / Published: 19 April 2017
Cited by 6 | PDF Full-text (2881 KB) | HTML Full-text | XML Full-text
Abstract
The fault types of rolling bearings are diagnosed based on the combination of improved Local Mean Decomposition (LMD), Multi-scale Permutation Entropy (MPE) and Hidden Markov Models (HMM). The improved LMD exploits the self-similarity of the roller bearing vibration signal: the original signal is extended on its right and left sides to suppress the edge effect. First, the vibration signals of the rolling bearing are decomposed into several product function (PF) components by the improved LMD. Then, phase space reconstruction of PF1 is carried out, using the mutual information (MI) method and the false nearest neighbor (FNN) method to calculate the delay time and the embedding dimension, and the scale is set to obtain the MPE of PF1. In this way, the MPE features of the rolling bearings are extracted. Finally, the MPE features are used for HMM training and diagnosis. The experimental results show that the proposed method can effectively identify the different faults of the rolling bearing. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
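The core quantity, permutation entropy over coarse-grained scales, is easy to state in code. A minimal sketch follows (generic MPE only; the improved LMD, the MI/FNN parameter selection and the HMM stages are not reproduced here):

```python
import math

def permutation_entropy(signal, m=3, tau=1, normalize=True):
    """Shannon entropy of ordinal patterns of length m at delay tau."""
    counts = {}
    for i in range(len(signal) - (m - 1) * tau):
        window = signal[i:i + m * tau:tau]
        pattern = tuple(sorted(range(m), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    h = -sum(c / n * math.log(c / n) for c in counts.values())
    return h / math.log(math.factorial(m)) if normalize else h

def multiscale_pe(signal, scales, m=3, tau=1):
    """Coarse-grain the signal at each scale (non-overlapping means),
    then compute the permutation entropy of each coarse-grained series."""
    out = []
    for s in scales:
        cg = [sum(signal[i:i + s]) / s for i in range(0, len(signal) - s + 1, s)]
        out.append(permutation_entropy(cg, m, tau))
    return out

# Noisy sine as a stand-in for a vibration signal.
x = [math.sin(0.1 * i) + 0.05 * ((i * 7919) % 13 - 6) for i in range(2000)]
print(multiscale_pe(x, scales=[1, 2, 4, 8]))
```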

Open AccessArticle Entropy in Natural Time and the Associated Complexity Measures
Entropy 2017, 19(4), 177; doi:10.3390/e19040177
Received: 29 March 2017 / Revised: 16 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
Cited by 1 | PDF Full-text (961 KB) | HTML Full-text | XML Full-text
Abstract
Natural time is a new time domain introduced in 2001. The analysis of time series associated with a complex system in natural time may provide useful information and may reveal properties that are usually hidden when studying the system in conventional time. In this new time domain, an entropy has been defined, and complexity measures based on this entropy, as well as on its value under time reversal, have been introduced and have found applications in various complex systems. Here, we review these applications in the electric signals that precede rupture (e.g., earthquakes), in the analysis of electrocardiograms, and in global atmospheric phenomena like the El Niño/La Niña Southern Oscillation. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
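For reference, the entropy in natural time is commonly written as S = ⟨χ ln χ⟩ − ⟨χ⟩ ln⟨χ⟩, with χ_k = k/N and weights p_k = Q_k / ∑Q_n given by the event energies. A minimal sketch (the energies below are illustrative):

```python
import math

def natural_time_entropy(energies):
    """S = <chi ln chi> - <chi> ln<chi>, where chi_k = k/N and the
    averages are taken with weights p_k = Q_k / sum(Q)."""
    N = len(energies)
    total = sum(energies)
    p = [q / total for q in energies]
    chi = [(k + 1) / N for k in range(N)]
    mean_chi = sum(pk * ck for pk, ck in zip(p, chi))
    mean_chi_ln = sum(pk * ck * math.log(ck) for pk, ck in zip(p, chi))
    return mean_chi_ln - mean_chi * math.log(mean_chi)

Q = [2.0, 1.0, 3.0, 1.5, 2.5]                 # toy event energies
S = natural_time_entropy(Q)
S_rev = natural_time_entropy(Q[::-1])          # entropy under time reversal
print(S, S_rev, S - S_rev)  # the difference feeds the complexity measures
```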

Open AccessArticle Entropy “2”-Soft Classification of Objects
Entropy 2017, 19(4), 178; doi:10.3390/e19040178
Received: 10 March 2017 / Revised: 10 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
PDF Full-text (1300 KB) | HTML Full-text | XML Full-text
Abstract
We propose a new method for the classification of objects of various natures, named “2”-soft classification, which allows objects to be assigned to one of two types with entropy-optimal probability for an available collection of learning data containing additive errors. A randomized decision rule is formed, whose parameters’ probability density function (PDF) is determined by solving a functional entropy-linear programming problem. A procedure for “2”-soft classification is developed, consisting of computer simulation of the randomized decision rule with the entropy-optimal PDF of its parameters. Examples are provided. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Open AccessArticle On the Definition of Diversity Order Based on Renyi Entropy for Frequency Selective Fading Channels
Entropy 2017, 19(4), 179; doi:10.3390/e19040179
Received: 23 November 2016 / Revised: 11 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
PDF Full-text (3296 KB) | HTML Full-text | XML Full-text
Abstract
Outage probabilities are important measures of the performance of wireless communication systems, but obtaining them requires first determining detailed system parameters and then performing complicated calculations. When multiple candidate diversity techniques are applicable to a system, the diversity order can be used to roughly but quickly compare the techniques across a wide range of operating environments. For a system transmitting over frequency selective fading channels, the diversity order can be defined as the number of multi-paths if all multi-paths have equal energy. However, the diversity order may not be adequately defined when the energy values differ. To obtain a rough value of the diversity order, one may use the number of multi-paths or the reciprocal of the multi-path energy variance. Such definitions are not very useful for evaluating the performance of diversity techniques, since the former is meaningful only when the target outage probability is extremely small, while the latter is reasonable only when the target outage probability is very large. In this paper, we propose a new definition of diversity order for frequency selective fading channels. The proposed scheme is based on Renyi entropy, which is widely used in biology and many other fields. We provide various simulation results showing that the diversity order under the proposed definition is tightly correlated with the corresponding outage probability, so the proposed scheme can be used to quickly select the best diversity technique among multiple candidates. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
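The abstract does not spell out the definition, but a natural Renyi-based construction, shown here purely as an illustration, takes the exponential of the Renyi entropy of the normalized multi-path energy profile as an “effective number of paths”:

```python
import math

def renyi_effective_paths(energies, alpha=2.0):
    """Effective number of multi-paths: exp of the Renyi entropy of the
    normalized path-energy profile. Equals the true path count when all
    energies are equal, and shrinks as the profile becomes uneven."""
    total = sum(energies)
    p = [e / total for e in energies]
    if alpha == 1.0:  # Shannon limit of the Renyi entropy
        h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    else:
        h = math.log(sum(pi**alpha for pi in p)) / (1.0 - alpha)
    return math.exp(h)

print(renyi_effective_paths([1, 1, 1, 1]))             # 4 equal-energy paths
print(renyi_effective_paths([0.9, 0.05, 0.03, 0.02]))  # close to 1
```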

Open AccessArticle Using Measured Values in Bell’s Inequalities Entails at Least One Hypothesis in Addition to Local Realism
Entropy 2017, 19(4), 180; doi:10.3390/e19040180
Received: 22 February 2017 / Revised: 17 April 2017 / Accepted: 20 April 2017 / Published: 22 April 2017
PDF Full-text (1780 KB) | HTML Full-text | XML Full-text
Abstract
The recent loophole-free experiments have confirmed the violation of Bell’s inequalities in nature. Yet, in order to insert measured values into Bell’s inequalities, it is unavoidable to make a hypothesis similar to “ergodicity at the hidden variables level”. This possibility opens a promising way out of the old controversy between quantum mechanics and local realism. Here, I review why such a hypothesis (actually, one of a set of related hypotheses) is necessary in addition to local realism, and present a simple example, related to Bell’s inequalities, in which the hypothesis is violated. This example shows that violating the additional hypothesis is necessary, but not sufficient, to violate Bell’s inequalities without violating local realism. The example also provides some clues that may reveal the violation of the additional hypothesis in an experiment. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)

Open AccessArticle Citizen Science and Topology of Mind: Complexity, Computation and Criticality in Data-Driven Exploration of Open Complex Systems
Entropy 2017, 19(4), 181; doi:10.3390/e19040181
Received: 30 December 2016 / Revised: 14 April 2017 / Accepted: 20 April 2017 / Published: 22 April 2017
PDF Full-text (20121 KB) | HTML Full-text | XML Full-text
Abstract
Recently emerging data-driven citizen sciences need to harness an increasing amount of massive data of varying quality. This paper develops essential theoretical frameworks, example models, and a general definition of complexity measure, and examines its computational complexity for interactive data-driven citizen science within the context of guided self-organization. We first define a conceptual model that incorporates the quality of observation in terms of accuracy and reproducibility, ranging across subjectivity, inter-subjectivity, and objectivity. Next, we examine the database’s algebraic and topological structure in relation to informational complexity measures, and evaluate its computational complexities with respect to exhaustive optimization. Conjectures of criticality are obtained for the self-organizing processes of observation and dynamical model development. An example analysis is demonstrated using a biodiversity assessment database, a process that inevitably involves human subjectivity in the management of open complex systems. Full article
(This article belongs to the Section Complexity)

Open AccessArticle Carnot-Like Heat Engines Versus Low-Dissipation Models
Entropy 2017, 19(4), 182; doi:10.3390/e19040182
Received: 20 March 2017 / Revised: 18 April 2017 / Accepted: 20 April 2017 / Published: 23 April 2017
PDF Full-text (1189 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a comparison between two well-known finite time heat engine models is presented: the Carnot-like heat engine, based on specific heat transfer laws between the cyclic system and the external heat baths, and the Low-Dissipation model, where irreversibilities are taken into account by explicit entropy generation laws. We analyze the mathematical relation between the natural variables of both models and, from this, the resulting thermodynamic implications. Among them, particular emphasis has been placed on the physical consistency between the heat leak and the time evolution on the one hand, and on the parabolic and loop-like behaviors of the parametric power-efficiency plots on the other. A detailed analysis of different heat transfer laws in the Carnot-like model in terms of the maximum power efficiencies given by the Low-Dissipation model is also presented. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
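Two standard reference results frame this comparison: the Novikov–Curzon–Ahlborn efficiency and the low-dissipation bounds on efficiency at maximum power (Esposito et al., 2010). A short numerical check (a sketch, not the paper's code):

```python
import math

def eta_curzon_ahlborn(tc, th):
    """Efficiency at maximum power for symmetric dissipation; also the
    endoreversible Novikov-Curzon-Ahlborn result."""
    return 1.0 - math.sqrt(tc / th)

def eta_low_dissipation_bounds(tc, th):
    """Bounds on efficiency at maximum power in the low-dissipation
    model: eta_C/2 <= eta* <= eta_C/(2 - eta_C), attained in the two
    asymmetric dissipation limits."""
    eta_c = 1.0 - tc / th
    return eta_c / 2.0, eta_c / (2.0 - eta_c)

lo, hi = eta_low_dissipation_bounds(300.0, 500.0)
print(lo, eta_curzon_ahlborn(300.0, 500.0), hi)  # CA lies between the bounds
```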

Open AccessArticle Low Complexity List Decoding for Polar Codes with Multiple CRC Codes
Entropy 2017, 19(4), 183; doi:10.3390/e19040183
Received: 7 February 2017 / Revised: 22 March 2017 / Accepted: 11 April 2017 / Published: 24 April 2017
PDF Full-text (445 KB) | HTML Full-text | XML Full-text
Abstract
Polar codes are the first family of error correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes have been adopted as a channel code for the control channel of 5G new radio in the 3rd generation partnership project. However, decoder implementation remains one of the major practical problems, and low complexity decoding has been widely studied. This paper addresses low complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding and reduce it by optimizing the CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only a complexity reduction from early stopping of decoding, but also an additional reduction from the reduced number of decoding paths. Full article
(This article belongs to the Section Information Theory)
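The segment-wise CRC check that enables early stopping can be sketched independently of the polar decoder internals. In the toy below (illustrative generator polynomial and segment layout; the list decoder itself is abstracted away), each info segment carries its own CRC, and list paths failing a segment's check are pruned, with an empty survivor set triggering early termination:

```python
def crc_bits(bits, poly=(1, 0, 1, 1)):
    """Remainder of polynomial division over GF(2); poly has a leading 1
    and here encodes x^3 + x + 1 (a toy 3-bit CRC, not a standard one)."""
    reg = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[len(bits):]

def prune_paths(paths, seg_start, seg_end, crc_len=3):
    """Keep only list-decoding paths whose segment self-checks: its last
    crc_len bits must equal the CRC of the preceding payload bits. An
    empty survivor list means decoding can stop early."""
    return [p for p in paths
            if crc_bits(p[seg_start:seg_end - crc_len]) == p[seg_end - crc_len:seg_end]]

payload = [1, 0, 1, 1, 0]
good = payload + crc_bits(payload)   # a path consistent with its CRC
bad = good.copy(); bad[0] ^= 1       # a corrupted decoding path
print(len(prune_paths([good, bad], 0, len(good))))  # -> 1 survivor
```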

Review

Jump to: Editorial, Research

Open AccessReview The Quantum Harmonic Otto Cycle
Entropy 2017, 19(4), 136; doi:10.3390/e19040136
Received: 21 February 2017 / Revised: 18 March 2017 / Accepted: 20 March 2017 / Published: 23 March 2017
Cited by 11 | PDF Full-text (2649 KB) | HTML Full-text | XML Full-text
Abstract
The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the quantum regime of thermal devices composed of a single element. We compile recent studies of the quantum Otto cycle with a harmonic oscillator as a working medium. This model has the advantage of being analytically tractable. In addition, an experimental realization has been achieved, employing a single ion in a harmonic trap. The review is embedded in the field of quantum thermodynamics and quantum open systems. The basic principles of the theory are explained by a specific example illuminating the basic definitions of work and heat. The relation between quantum observables and the state of the system is emphasized. The dynamical description of the cycle is based on a completely positive map formulated as a propagator for each stroke of the engine. Explicit solutions for these propagators are described on a vector space of quantum thermodynamical observables. These solutions, which employ different assumptions and techniques, are compared. The tradeoff between power and efficiency is the focal point of finite-time thermodynamics. The dynamical model enables the study of finite time cycles, limiting the times allotted to the adiabats and to thermalization. Explicit finite time solutions are found which are frictionless (meaning that no coherence is generated), also known as shortcuts to adiabaticity. The transition from frictionless to sudden adiabats is characterized by a non-Hermitian degeneracy in the propagator. In addition, the influence of noise on the control is illustrated. These results are used to close the cycles either as engines or as refrigerators. The properties of the limit cycle are described. Methods to optimize the power by controlling the thermalization time are also introduced. At high temperatures, the Novikov–Curzon–Ahlborn efficiency at maximum power is obtained. The sudden limit of the engine, which allows finite power at zero cycle time, is shown. The refrigerator cycle is described within the frictionless limit, with emphasis on the cooling rate when the cold bath temperature approaches zero. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
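Two relations from the review are simple enough to check numerically: the harmonic Otto efficiency η = 1 − ω_c/ω_h (work is exchanged by changing the oscillator frequency between ω_h and ω_c) and the high-temperature Novikov–Curzon–Ahlborn efficiency at maximum power. A minimal sketch with illustrative parameter values:

```python
import math

def otto_efficiency(omega_c, omega_h):
    """Quantum harmonic Otto cycle: eta = 1 - omega_c / omega_h."""
    return 1.0 - omega_c / omega_h

def eta_nca(tc, th):
    """Novikov-Curzon-Ahlborn efficiency at maximum power, recovered by
    the harmonic Otto engine in the high-temperature limit."""
    return 1.0 - math.sqrt(tc / th)

tc, th = 1.0, 4.0
eta_carnot = 1.0 - tc / th
# Positive work extraction requires omega_c/omega_h > tc/th,
# i.e., the Otto efficiency stays below the Carnot bound.
print(otto_efficiency(1.5, 4.0), eta_nca(tc, th), eta_carnot)
```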

Open AccessReview Slow Dynamics and Structure of Supercooled Water in Confinement
Entropy 2017, 19(4), 185; doi:10.3390/e19040185
Received: 22 November 2016 / Revised: 14 April 2017 / Accepted: 17 April 2017 / Published: 24 April 2017
Cited by 1 | PDF Full-text (1513 KB) | HTML Full-text | XML Full-text
Abstract
We review our simulation results on the properties of supercooled confined water. We consider two situations: water confined in a hydrophilic pore that mimics an MCM-41 environment, and water at the interface with a protein. The behavior upon cooling of the α relaxation of water in both environments is well interpreted in terms of the Mode Coupling Theory of glassy dynamics. Moreover, we find a crossover from a fragile to a strong regime. We relate this crossover to the crossing of the Widom line emanating from the liquid-liquid critical point, and in confinement we also connect it to a crossover of the two-body excess entropy of water upon cooling. Hydration water exhibits a second, distinctly slower relaxation caused by its dynamical coupling with the protein. The crossover upon cooling of this long relaxation is related to the protein dynamics. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
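The fragile-to-strong crossover mentioned above is usually diagnosed by comparing Vogel–Fulcher–Tammann and Arrhenius descriptions of the relaxation time; the sketch below (all parameter values are hypothetical) shows the two functional forms:

```python
import math

def tau_vft(T, tau0=1e-13, D=5.0, T0=150.0):
    """Fragile (Vogel-Fulcher-Tammann) relaxation time: the apparent
    activation energy grows on cooling, diverging at T0."""
    return tau0 * math.exp(D * T0 / (T - T0))

def tau_arrhenius(T, tau0=1e-18, Ea_over_k=6000.0):
    """Strong (Arrhenius) relaxation time: constant activation energy."""
    return tau0 * math.exp(Ea_over_k / T)

# In an Arrhenius plot (log tau vs 1/T), a fragile-to-strong crossover
# appears as a kink where the data leave the VFT curve and follow the
# Arrhenius line below the crossover temperature.
for T in (260, 240, 220, 200, 180):
    print(T, tau_vft(T), tau_arrhenius(T))
```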

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18