Table of Contents

Entropy, Volume 19, Issue 4 (April 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Articles are published in both HTML and PDF formats; the PDF is the official version of record. To view a paper in PDF form, click its "PDF Full-text" link and open the file with the free Adobe Reader.
Cover Story: The quantum Otto cycle serves as a bridge between the macroscopic world of heat engines and the [...]
Displaying articles 1-47
Open Access Review: Slow Dynamics and Structure of Supercooled Water in Confinement
Entropy 2017, 19(4), 185; https://doi.org/10.3390/e19040185
Received: 22 November 2016 / Revised: 14 April 2017 / Accepted: 17 April 2017 / Published: 24 April 2017
Cited by 2 | PDF Full-text (1513 KB) | HTML Full-text | XML Full-text
Abstract
We review our simulation results on the properties of supercooled confined water. We consider two situations: water confined in a hydrophilic pore that mimics an MCM-41 environment, and water at the interface with a protein. The behavior upon cooling of the α relaxation of water in both environments is well interpreted in terms of the Mode Coupling Theory of glassy dynamics. Moreover, we find a crossover from a fragile to a strong regime. We relate this crossover to the crossing of the Widom line emanating from the liquid-liquid critical point, and in confinement we also connect it to a crossover of the two-body excess entropy of water upon cooling. Hydration water exhibits a second, distinctly slower relaxation caused by its dynamical coupling with the protein. The crossover upon cooling of this long relaxation is related to the protein dynamics. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)

Open Access Article: Low Complexity List Decoding for Polar Codes with Multiple CRC Codes
Entropy 2017, 19(4), 183; https://doi.org/10.3390/e19040183
Received: 7 February 2017 / Revised: 22 March 2017 / Accepted: 11 April 2017 / Published: 24 April 2017
Cited by 2 | PDF Full-text (445 KB) | HTML Full-text | XML Full-text
Abstract
Polar codes are the first family of error correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes have been adopted as a channel code for the control channel of 5G New Radio by the 3rd Generation Partnership Project (3GPP). However, decoder implementation remains a significant practical problem, and low-complexity decoding has therefore been widely studied. This paper addresses low-complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding and reduce it by optimizing the CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only a complexity reduction from early stopping of decoding, but also an additional reduction from the reduced number of decoding paths. Full article
(This article belongs to the Section Information Theory)

Open Access Article: Carnot-Like Heat Engines Versus Low-Dissipation Models
Entropy 2017, 19(4), 182; https://doi.org/10.3390/e19040182
Received: 20 March 2017 / Revised: 18 April 2017 / Accepted: 20 April 2017 / Published: 23 April 2017
Cited by 4 | PDF Full-text (1189 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a comparison between two well-known finite-time heat engine models is presented: the Carnot-like heat engine, based on specific heat transfer laws between the cyclic system and the external heat baths, and the Low-Dissipation model, where irreversibilities are taken into account by explicit entropy generation laws. We analyze the mathematical relation between the natural variables of both models and, from this, the resulting thermodynamic implications. Among them, particular emphasis has been placed on the physical consistency between the heat leak and the time evolution on the one hand, and between the parabolic and loop-like behaviors of the parametric power-efficiency plots on the other. A detailed analysis for different heat transfer laws in the Carnot-like model in terms of the maximum power efficiencies given by the Low-Dissipation model is also presented. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)

Open Access Article: Citizen Science and Topology of Mind: Complexity, Computation and Criticality in Data-Driven Exploration of Open Complex Systems
Entropy 2017, 19(4), 181; https://doi.org/10.3390/e19040181
Received: 30 December 2016 / Revised: 14 April 2017 / Accepted: 20 April 2017 / Published: 22 April 2017
PDF Full-text (20121 KB) | HTML Full-text | XML Full-text
Abstract
Recently emerging data-driven citizen sciences need to harness an increasing amount of massive data of varying quality. This paper develops essential theoretical frameworks, example models, and a general definition of a complexity measure, and examines its computational complexity for interactive data-driven citizen science within the context of guided self-organization. We first define a conceptual model that incorporates the quality of observation in terms of accuracy and reproducibility, ranging between subjectivity, inter-subjectivity, and objectivity. Next, we examine the database's algebraic and topological structure in relation to informational complexity measures, and evaluate its computational complexities with respect to an exhaustive optimization. Conjectures on criticality are obtained for the self-organizing processes of observation and dynamical model development. An example analysis is demonstrated with the use of a biodiversity assessment database, a process that inevitably involves human subjectivity in management within open complex systems. Full article
(This article belongs to the Section Complexity)

Open Access Article: Using Measured Values in Bell’s Inequalities Entails at Least One Hypothesis in Addition to Local Realism
Entropy 2017, 19(4), 180; https://doi.org/10.3390/e19040180
Received: 22 February 2017 / Revised: 17 April 2017 / Accepted: 20 April 2017 / Published: 22 April 2017
PDF Full-text (1780 KB) | HTML Full-text | XML Full-text
Abstract
The recent loophole-free experiments have confirmed the violation of Bell’s inequalities in nature. Yet, in order to insert measured values in Bell’s inequalities, it is unavoidable to make a hypothesis similar to “ergodicity at the hidden variables level”. This possibility opens a promising way out from the old controversy between quantum mechanics and local realism. Here, I review the reason why such a hypothesis (actually, it is one of a set of related hypotheses) in addition to local realism is necessary, and present a simple example, related to Bell’s inequalities, where the hypothesis is violated. This example shows that the violation of the additional hypothesis is necessary, but not sufficient, to violate Bell’s inequalities without violating local realism. The example also provides some clues that may reveal the violation of the additional hypothesis in an experiment. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)

Open Access Article: On the Definition of Diversity Order Based on Renyi Entropy for Frequency Selective Fading Channels
Entropy 2017, 19(4), 179; https://doi.org/10.3390/e19040179
Received: 23 November 2016 / Revised: 11 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
PDF Full-text (3296 KB) | HTML Full-text | XML Full-text
Abstract
Outage probabilities are important measures of the performance of wireless communication systems, but to obtain outage probabilities it is necessary to first determine detailed system parameters, followed by complicated calculations. When there are multiple candidate diversity techniques applicable to a system, the diversity order can be used to roughly but quickly compare the techniques for a wide range of operating environments. For a system transmitting over frequency selective fading channels, the diversity order can be defined as the number of multi-paths if all multi-paths have equal energy. However, diversity order may not be adequately defined when the energy values differ. In order to obtain a rough value of diversity order, one may use the number of multi-paths or the reciprocal of the multi-path energy variance. Such definitions are not very useful for evaluating the performance of diversity techniques, since the former is meaningful only when the target outage probability is extremely small, while the latter is reasonable only when the target outage probability is very large. In this paper, we propose a new definition of diversity order for frequency selective fading channels. The proposed scheme is based on Rényi entropy, which is widely used in biology and many other fields. We provide various simulation results to show that the diversity order under the proposed definition is tightly correlated with the corresponding outage probability, and thus the proposed scheme can be used for quickly selecting the best diversity technique among multiple candidates. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
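The abstract does not reproduce the paper's exact formula, but one natural way an entropy can yield an energy-aware diversity order is to take the exponential of the Rényi entropy of the normalized multi-path energy profile. The sketch below illustrates that idea; the order α = 2 and the normalization are our assumptions, not the paper's definition:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy (in nats) of a probability vector p, for alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def diversity_order(path_energies, alpha=2.0):
    """Effective diversity order: exponential of the Renyi entropy of the
    normalized multi-path energy profile (a hypothetical concrete form)."""
    total = sum(path_energies)
    p = [e / total for e in path_energies]
    return math.exp(renyi_entropy(p, alpha))

# Equal-energy paths: the effective order equals the number of paths (~4 here).
print(diversity_order([1.0, 1.0, 1.0, 1.0]))
# Strongly unequal energies: the effective order collapses toward 1.
print(diversity_order([1.0, 0.01, 0.01, 0.01]))
```

With equal energies this recovers the classical definition (order = number of paths), while unequal profiles interpolate smoothly between 1 and the path count.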

Open Access Article: Entropy “2”-Soft Classification of Objects
Entropy 2017, 19(4), 178; https://doi.org/10.3390/e19040178
Received: 10 March 2017 / Revised: 10 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
PDF Full-text (1300 KB) | HTML Full-text | XML Full-text
Abstract
We propose a new method for the classification of objects of various natures, named “2”-soft classification, which allows objects to be assigned to one of two types with entropy-optimal probability for an available collection of learning data containing additive errors. A randomized decision rule is formed, whose probability density function (PDF) is determined by solving a problem of functional entropy-linear programming. A procedure for “2”-soft classification is developed, consisting of computer simulation of the randomized decision rule with the entropy-optimal PDF of its parameters. Examples are provided. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Open Access Article: Entropy in Natural Time and the Associated Complexity Measures
Entropy 2017, 19(4), 177; https://doi.org/10.3390/e19040177
Received: 29 March 2017 / Revised: 16 April 2017 / Accepted: 18 April 2017 / Published: 20 April 2017
Cited by 2 | PDF Full-text (961 KB) | HTML Full-text | XML Full-text
Abstract
Natural time is a new time domain introduced in 2001. The analysis of time series associated with a complex system in natural time may provide useful information and may reveal properties that are usually hidden when studying the system in conventional time. In this new time domain, an entropy has been defined, and complexity measures based on this entropy, as well as its value under time-reversal have been introduced and found applications in various complex systems. Here, we review these applications in the electric signals that precede rupture, e.g., earthquakes, in the analysis of electrocardiograms, as well as in global atmospheric phenomena, like the El Niño/La Niña Southern Oscillation. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
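For readers unfamiliar with the quantity being reviewed: for a series of N events with energies Q_k, natural time is χ_k = k/N, the weights are p_k = Q_k / ΣQ_n, and the entropy in natural time is S = ⟨χ ln χ⟩ − ⟨χ⟩ ln⟨χ⟩; the entropy under time reversal is the same functional applied to the reversed sequence. A minimal sketch (function and variable names are ours):

```python
import math

def natural_time_entropy(Q):
    """Entropy in natural time: S = <chi ln chi> - <chi> ln <chi>,
    with chi_k = k/N and event weights p_k = Q_k / sum(Q)."""
    N = len(Q)
    total = sum(Q)
    p = [q / total for q in Q]
    chi = [(k + 1) / N for k in range(N)]
    mean_chi = sum(pk * ck for pk, ck in zip(p, chi))
    mean_chi_ln_chi = sum(pk * ck * math.log(ck) for pk, ck in zip(p, chi))
    return mean_chi_ln_chi - mean_chi * math.log(mean_chi)

def entropy_under_time_reversal(Q):
    """S_minus: the same functional applied to the time-reversed sequence."""
    return natural_time_entropy(Q[::-1])

# For equal Q_k the entropy approaches S_u = ln(2)/2 - 1/4 ~ 0.0966 as N grows.
print(natural_time_entropy([1.0] * 1000))
```

The difference S − S_minus between the forward and time-reversed values is what signals temporal asymmetry in the complexity measures discussed above.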

Open Access Article: Multi-Scale Permutation Entropy Based on Improved LMD and HMM for Rolling Bearing Diagnosis
Entropy 2017, 19(4), 176; https://doi.org/10.3390/e19040176
Received: 8 January 2017 / Revised: 3 March 2017 / Accepted: 14 April 2017 / Published: 19 April 2017
Cited by 25 | PDF Full-text (2881 KB) | HTML Full-text | XML Full-text
Abstract
The fault types of bearings are diagnosed based on the combination of improved Local Mean Decomposition (LMD), Multi-scale Permutation Entropy (MPE), and a Hidden Markov Model (HMM). Improved LMD is proposed based on the self-similarity of the roller bearing vibration signal, extending the right and left sides of the original signal to suppress its edge effect. First, the vibration signals of the rolling bearing are decomposed into several product function (PF) components by improved LMD. Then, the phase space reconstruction of PF1 is carried out using the mutual information (MI) method and the false nearest neighbor (FNN) method to calculate the delay time and the embedding dimension, and the scale is set to obtain the MPE of PF1. After that, the MPE features of the rolling bearings are extracted. Finally, the MPE features are used for HMM training and diagnosis. The experimental results show that the proposed method can effectively identify the different faults of the rolling bearing. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
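As a concrete reference for the entropy measure named above, here is a minimal multi-scale permutation entropy sketch (coarse-graining by window averaging, then counting ordinal patterns of order m). It illustrates only the MPE step; the paper's full pipeline with improved LMD and MI/FNN parameter selection is not reproduced:

```python
import math

def coarse_grain(x, scale):
    """Non-overlapping window averages: the standard multi-scale step."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def permutation_entropy(x, m=3, delay=1):
    """Permutation entropy of order m, normalized to [0, 1]."""
    counts = {}
    for i in range(len(x) - (m - 1) * delay):
        window = tuple(x[i + j * delay] for j in range(m))
        # ordinal pattern: the ranks of the m values in the window
        pattern = tuple(sorted(range(m), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

def multiscale_permutation_entropy(x, m=3, max_scale=5):
    """MPE: permutation entropy of the coarse-grained series at each scale."""
    return [permutation_entropy(coarse_grain(x, s), m)
            for s in range(1, max_scale + 1)]
```

A monotone ramp yields entropy 0 (a single ordinal pattern occurs), while white noise approaches 1, which is what makes the measure useful as a fault-sensitive feature.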

Open Access Article: Second Law Analysis of a Mobile Air Conditioning System with Internal Heat Exchanger Using Low GWP Refrigerants
Entropy 2017, 19(4), 175; https://doi.org/10.3390/e19040175
Received: 10 March 2017 / Revised: 14 April 2017 / Accepted: 17 April 2017 / Published: 19 April 2017
Cited by 5 | PDF Full-text (2922 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the results of a Second Law analysis applied to a mobile air conditioning system (MACS) integrated with an internal heat exchanger (IHX), considering R152a, R1234yf, and R1234ze as low global warming potential (GWP) refrigerants and establishing R134a as the baseline. The system simulation is performed considering the maximum value of entropy generated in the IHX. The maximum entropy production occurs at an effectiveness of 66% for both R152a and R134a, whereas for R1234yf and R1234ze it occurs at 55%. Sub-cooling and superheating effects are evaluated for each of the cases, and the sub-cooling effect is found to have the greatest impact on cycle efficiency. The results also show the influence of isentropic efficiency on relative exergy destruction: the most affected components are the compressor and the condenser for all of the refrigerants studied herein. It is also found that the system operates most efficiently when using the R1234ze refrigerant. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article: Leaks: Quantum, Classical, Intermediate and More
Entropy 2017, 19(4), 174; https://doi.org/10.3390/e19040174
Received: 26 January 2017 / Revised: 30 March 2017 / Accepted: 12 April 2017 / Published: 19 April 2017
Cited by 6 | PDF Full-text (335 KB) | HTML Full-text | XML Full-text
Abstract
We introduce the notion of a leak for general process theories and identify quantum theory as a theory with minimal leakage, while classical theory has maximal leakage. We provide a construction that adjoins leaks to theories, an instance of which describes the emergence of classical theory by adjoining decoherence leaks to quantum theory. Finally, we show that defining a notion of purity for processes in general process theories has to make reference to the leaks of that theory, a feature missing in standard definitions; hence, we propose a refined definition and study the resulting notion of purity for quantum, classical and intermediate theories. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open Access Article: Multilevel Integration Entropies: The Case of Reconstruction of Structural Quasi-Stability in Building Complex Datasets
Entropy 2017, 19(4), 172; https://doi.org/10.3390/e19040172
Received: 27 February 2017 / Revised: 12 April 2017 / Accepted: 14 April 2017 / Published: 18 April 2017
Cited by 1 | PDF Full-text (3060 KB) | HTML Full-text | XML Full-text
Abstract
The emergence of complex datasets pervades diverse research disciplines, leading to the need for methods that tackle complexity by finding the patterns inherent in datasets. The challenge lies in transforming the extracted patterns into pragmatic knowledge. In this paper, new information entropy measures for the characterization of the multidimensional structure extracted from complex datasets are proposed, complementing the conventionally applied algebraic topology methods. Derived from topological relationships embedded in datasets, multilevel entropy measures are used to track transitions in building the high-dimensional structure of datasets captured by the stratified partition of a simplicial complex. The proposed entropies are found suitable for defining and operationalizing the intuitive notions of structural relationships in the cumulative experience of a taxi driver's cognitive map formed by origins and destinations. The comparison of multilevel integration entropies calculated after each new ride is added to the data structure indicates a slowing pace of change in the origin-destination structure over time. The repetitiveness of the taxi driver's rides and the stability of the origin-destination structure exhibit the relative invariance of rides in space and time. These results shed light on the taxi driver's ride habits, as well as on the commuting of the persons driven. Full article

Open Access Article: Design and Implementation of SOC Prediction for a Li-Ion Battery Pack in an Electric Car with an Embedded System
Entropy 2017, 19(4), 146; https://doi.org/10.3390/e19040146
Received: 21 January 2017 / Revised: 22 March 2017 / Accepted: 27 March 2017 / Published: 17 April 2017
Cited by 1 | PDF Full-text (10716 KB) | HTML Full-text | XML Full-text
Abstract
Li-Ion batteries are widely preferred in electric vehicles. The charge status of batteries is a critical evaluation issue, and many researchers are studying this area. State of charge indicates how much longer the battery can be used and when the charging process should be cut off. Incorrect predictions may cause overcharging or over-discharging of the battery. In this study, a low-cost embedded system is used to determine the state of charge of an electric car. A Li-Ion battery cell is trained using a feed-forward neural network via the Matlab Neural Network Toolbox. The trained cell model is adapted to the whole battery pack of the electric car and embedded via Matlab/Simulink into a low-cost microcontroller, yielding a real-time prediction system. The experimental results indicate that accurate and robust estimation results can be obtained by the proposed system. Full article

Open Access Article: Entropy Generation of Double Diffusive Forced Convection in Porous Channels with Thick Walls and Soret Effect
Entropy 2017, 19(4), 171; https://doi.org/10.3390/e19040171
Received: 14 March 2017 / Revised: 13 April 2017 / Accepted: 13 April 2017 / Published: 15 April 2017
Cited by 5 | PDF Full-text (5780 KB) | HTML Full-text | XML Full-text
Abstract
The second law performance of double diffusive forced convection in a horizontal porous channel with thick walls was considered. The Soret effect is included in the concentration equation, and a first-order chemical reaction was chosen for the concentration boundary conditions at the porous-solid wall interfaces. This investigation focuses on two principal types of boundary conditions. The first assumes a constant temperature at the outer surfaces of the solid walls; the second assumes a constant heat flux at the lower wall and convection heat transfer at the upper wall. After obtaining the velocity, temperature, and concentration distributions, the local and total entropy generation formulations were used to visualize the second law performance of the two cases. The results indicate that the total entropy generation rate is directly related to the lower wall thickness. Interestingly, it was observed that the total entropy generation rate for the second case reaches a minimum value if the upper and lower wall thicknesses are chosen correctly; this observation does not hold for the first case. These analyses can be useful for the design of microreactors and microcombustor systems when the second law analysis is taken into account. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Open Access Article: Dynamic Rankings for Seed Selection in Complex Networks: Balancing Costs and Coverage
Entropy 2017, 19(4), 170; https://doi.org/10.3390/e19040170
Received: 27 February 2017 / Revised: 7 April 2017 / Accepted: 12 April 2017 / Published: 15 April 2017
Cited by 5 | PDF Full-text (2681 KB) | HTML Full-text | XML Full-text
Abstract
Information spreading processes within complex networks are usually initiated by selecting highly influential nodes in accordance with the chosen seeding strategy. The majority of earlier studies assumed that all selected seeds are used at the beginning of the process. Our previous research revealed the advantage of using a sequence of seeds instead of a single-stage approach. The current study extends sequential seeding and further improves results through dynamic rankings, which are created by recalculating the network measures used for additional seed selection during the process, instead of a static ranking computed only once at the beginning. For the calculation of network centrality measures such as degree, only non-infected nodes are taken into account. Results showed increased coverage, represented by the percentage of activated nodes, dependent on the intervals between recalculations, as well as a trade-off between outcome and computational cost. For over 90% of simulation cases, dynamic rankings with a high frequency of recalculation delivered better coverage than approaches based on static rankings. Full article
(This article belongs to the Section Complexity)
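The procedure described in the abstract can be sketched as follows. This is a toy illustration only (one independent-cascade round between consecutive seeds, degree recomputed over non-activated nodes), not the authors' exact simulation protocol, and all names and parameter values are ours:

```python
import random

def dynamic_degree_seeding(adj, n_seeds, p=0.1, recalc_every=1, rng=None):
    """Sequential seeding with a dynamic degree ranking.

    adj: dict mapping node -> list of neighbors.
    Every `recalc_every` seeding steps, the degree ranking is recomputed
    counting only non-activated neighbors; one round of independent-cascade
    spreading (activation probability p) runs between consecutive seeds.
    Returns the coverage: the fraction of activated nodes."""
    rng = rng or random.Random(42)
    active, frontier, ranking = set(), set(), []
    for step in range(n_seeds):
        if step % recalc_every == 0 or not ranking:
            ranking = sorted(
                (n for n in adj if n not in active),
                key=lambda n: -sum(1 for v in adj[n] if v not in active))
        remaining = [n for n in ranking if n not in active]
        if not remaining:          # everything already activated
            break
        seed = remaining[0]
        active.add(seed)
        frontier.add(seed)
        # one independent-cascade round between seeds
        new = {v for u in frontier for v in adj[u]
               if v not in active and rng.random() < p}
        active |= new
        frontier = new
    return len(active) / len(adj)

# Toy usage: a ring of 12 nodes, 3 sequentially selected seeds.
ring = {i: [(i - 1) % 12, (i + 1) % 12] for i in range(12)}
print(dynamic_degree_seeding(ring, n_seeds=3))
```

Setting `recalc_every` larger trades coverage for fewer ranking recomputations, which is exactly the cost/coverage balance studied in the paper.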

Open Access Article: Where There is Life There is Mind: In Support of a Strong Life-Mind Continuity Thesis
Entropy 2017, 19(4), 169; https://doi.org/10.3390/e19040169
Received: 22 February 2017 / Revised: 10 April 2017 / Accepted: 11 April 2017 / Published: 14 April 2017
Cited by 11 | PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
This paper considers questions about continuity and discontinuity between life and mind. It begins by examining such questions from the perspective of the free energy principle (FEP). The FEP is becoming increasingly influential in neuroscience and cognitive science. It says that organisms act to maintain themselves in their expected biological and cognitive states, and that they can do so only by minimizing their free energy given that the long-term average of free energy is entropy. The paper then argues that there is no singular interpretation of the FEP for thinking about the relation between life and mind. Some FEP formulations express what we call an independence view of life and mind. One independence view is a cognitivist view of the FEP. It turns on information processing with semantic content, thus restricting the range of systems capable of exhibiting mentality. Other independence views exemplify what we call an overly generous non-cognitivist view of the FEP, and these appear to go in the opposite direction. That is, they imply that mentality is nearly everywhere. The paper proceeds to argue that non-cognitivist FEP, and its implications for thinking about the relation between life and mind, can be usefully constrained by key ideas in recent enactive approaches to cognitive science. We conclude that the most compelling account of the relationship between life and mind treats them as strongly continuous, and that this continuity is based on particular concepts of life (autopoiesis and adaptivity) and mind (basic and non-semantic). Full article
Open Access Article: Application of the Fuzzy Oil Drop Model Describes Amyloid as a Ribbonlike Micelle
Entropy 2017, 19(4), 167; https://doi.org/10.3390/e19040167
Received: 6 March 2017 / Revised: 7 April 2017 / Accepted: 11 April 2017 / Published: 14 April 2017
Cited by 2 | PDF Full-text (9194 KB) | HTML Full-text | XML Full-text
Abstract
We propose a mathematical model describing the formation of micellar forms—whether spherical, globular, cylindrical, or ribbonlike—as well as its adaptation to protein structure. Our model, based on the fuzzy oil drop paradigm, assumes that in a spherical micelle the distribution of hydrophobicity produced by the alignment of polar molecules with the external water environment can be modeled by a 3D Gaussian function. Perturbing this function by changing the values of its sigma parameters leads to a variety of conformations—the model is therefore applicable to globular, cylindrical, and ribbonlike micelles. In the context of protein structures ranging from globular to ribbonlike, our model can explain the emergence of fibrillar forms; particularly amyloids. Full article
(This article belongs to the Section Information Theory)
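To make the central modeling idea concrete: the idealized hydrophobicity at a point is a 3D Gaussian centered in the molecule, and perturbing the per-axis sigma parameters moves the model between spherical, cylindrical, and ribbonlike forms. A minimal sketch of that single idea (the sigma values below are illustrative, not taken from the paper):

```python
import math

def gaussian_hydrophobicity(point, sigmas):
    """Idealized hydrophobicity at a 3D point for a micelle centered at the
    origin: an (unnormalized) 3D Gaussian with per-axis sigma parameters."""
    return math.exp(-sum((c * c) / (2.0 * s * s)
                         for c, s in zip(point, sigmas)))

# Spherical micelle: equal sigmas; ribbonlike: sigmas stretched anisotropically.
spherical = (5.0, 5.0, 5.0)
ribbon = (50.0, 5.0, 2.0)
print(gaussian_hydrophobicity((0.0, 0.0, 0.0), spherical))  # 1.0 at the center
print(gaussian_hydrophobicity((10.0, 0.0, 0.0), ribbon))
```

Along the stretched axis the ribbonlike profile decays far more slowly than the spherical one, which is how elongated (fibrillar) forms emerge in the model.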

Open Access Article: A Study of the Transfer Entropy Networks on Industrial Electricity Consumption
Entropy 2017, 19(4), 159; https://doi.org/10.3390/e19040159
Received: 11 January 2017 / Revised: 29 March 2017 / Accepted: 3 April 2017 / Published: 13 April 2017
Cited by 1 | PDF Full-text (2525 KB) | HTML Full-text | XML Full-text
Abstract
We study information transfer routes among cross-industry and cross-region electricity consumption data based on transfer entropy and the MST (Minimum Spanning Tree) model. First, we characterize the information transfer routes with transfer entropy matrices, and find that the total entropy transfer of the relatively developed Guangdong Province is lower than that of the other provinces, with significant industrial clustering within the province. Furthermore, using a reshuffling method, we find that driven industries carry much more information flow than driving industries, and are more influential on the degree of order of regional industries. Finally, based on the Chu-Liu-Edmonds MST algorithm, we extract the minimum spanning trees of provincial industries. Individual MSTs show chain-like formations in developed provinces and star-like structures in developing provinces. Additionally, all MSTs rooted at the industrial sector with minimal information outflow are chain-formed. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications)
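The transfer entropy at the core of this study measures the information that the past of one series adds about the next value of another, beyond that series' own past. As a rough illustration (a plug-in histogram estimator for discretized series, our own simplification rather than the authors' implementation):

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y, k=1):
    """Plug-in estimate of T_{X->Y}: information the past of x adds about
    the next value of y beyond y's own past (history length k), for
    discrete (symbolized) series."""
    triples = Counter()   # counts of (y_next, y_past, x_past)
    pairs_yy = Counter()  # counts of (y_next, y_past)
    pairs_yx = Counter()  # counts of (y_past, x_past)
    singles = Counter()   # counts of y_past
    n = 0
    for t in range(k, len(y)):
        y_next, y_past, x_past = y[t], tuple(y[t-k:t]), tuple(x[t-k:t])
        triples[(y_next, y_past, x_past)] += 1
        pairs_yy[(y_next, y_past)] += 1
        pairs_yx[(y_past, x_past)] += 1
        singles[y_past] += 1
        n += 1
    te = 0.0
    for (y_next, y_past, x_past), c in triples.items():
        p_joint = c / n
        num = c / pairs_yx[(y_past, x_past)]             # p(y_next | y_past, x_past)
        den = pairs_yy[(y_next, y_past)] / singles[y_past]  # p(y_next | y_past)
        te += p_joint * log2(num / den)
    return te
```

When y simply copies x with a one-step lag, the estimator reports roughly one bit from x to y and close to zero in the reverse direction, which is the directional asymmetry the entropy-transfer matrices in the paper exploit.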
Open AccessEditorial Entropic Aspects of Nonlinear Partial Differential Equations: Classical and Quantum Mechanical Perspectives
Entropy 2017, 19(4), 166; https://doi.org/10.3390/e19040166
Received: 10 April 2017 / Revised: 10 April 2017 / Accepted: 10 April 2017 / Published: 12 April 2017
PDF Full-text (144 KB) | HTML Full-text | XML Full-text
Abstract
There has been increasing research activity in recent years concerning the properties and the applications of nonlinear partial differential equations that are closely related to nonstandard entropic functionals, such as the Tsallis and Rényi entropies. [...] Full article
Open AccessArticle An Entropy-Based Approach for Evaluating Travel Time Predictability Based on Vehicle Trajectory Data
Entropy 2017, 19(4), 165; https://doi.org/10.3390/e19040165
Received: 29 January 2017 / Revised: 24 March 2017 / Accepted: 7 April 2017 / Published: 11 April 2017
Cited by 2 | PDF Full-text (2716 KB) | HTML Full-text | XML Full-text
Abstract
With the great development of intelligent transportation systems (ITS), travel time prediction has attracted the interest of many researchers, and a large number of prediction methods have been developed. However, as an unavoidable topic, the predictability of travel time series is the basic
[...] Read more.
With the rapid development of intelligent transportation systems (ITS), travel time prediction has attracted the interest of many researchers, and a large number of prediction methods have been developed. However, the predictability of the travel time series, the basic premise for travel time prediction, has received far less attention than the prediction methodology itself. Based on an analysis of the complexity of the travel time series, this paper defines travel time predictability to express the probability of correct travel time prediction, and proposes an entropy-based method to measure the upper bound of travel time predictability. Multiscale entropy is employed to quantify the complexity of the travel time series, and the relationships between entropy and the upper bound of travel time predictability are presented. Empirical studies are conducted with vehicle trajectory data on an express road section to characterize the features of travel time predictability. The effects of time scale, tolerance, and series length on entropy and travel time predictability are analyzed, and some valuable suggestions about the accuracy of travel time predictability are discussed. Finally, comparisons are made between travel time predictability and the actual prediction results of two prediction models, ARIMA and BPNN. The experimental results demonstrate the validity and reliability of the proposed travel time predictability measure. Full article
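The multiscale entropy used here coarse-grains a series at successive scales and computes sample entropy at each scale. A minimal sketch (the tolerance convention and match counting follow the common textbook variant, and may differ in detail from the paper's implementation):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: negative log of the conditional probability that sequences
    matching for m points (within tolerance r, Chebyshev distance) also
    match for m + 1 points."""
    x = np.asarray(x, dtype=float)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r_factor=0.2):
    """Coarse-grain the series by non-overlapping averaging at each scale,
    then compute SampEn with tolerance r_factor times the original std."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    out = []
    for s in scales:
        cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(cg, m=m, r=r))
    return out
```

A perfectly periodic series yields a SampEn near zero, while white noise yields a large value, which is the complexity contrast the predictability bound in the paper is built on.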
Open AccessArticle Heisenberg and Entropic Uncertainty Measures for Large-Dimensional Harmonic Systems
Entropy 2017, 19(4), 164; https://doi.org/10.3390/e19040164
Received: 8 March 2017 / Revised: 30 March 2017 / Accepted: 6 April 2017 / Published: 9 April 2017
Cited by 1 | PDF Full-text (329 KB) | HTML Full-text | XML Full-text
Abstract
The D-dimensional harmonic system (i.e., a particle moving under the action of a quadratic potential) is, together with the hydrogenic system, the main prototype of the physics of multidimensional quantum systems. In this work, we rigorously determine the leading term of the
[...] Read more.
The D-dimensional harmonic system (i.e., a particle moving under the action of a quadratic potential) is, together with the hydrogenic system, the main prototype of the physics of multidimensional quantum systems. In this work, we rigorously determine the leading term of the Heisenberg-like and entropy-like uncertainty measures of this system as given by the radial expectation values and the Rényi entropies, respectively, at the limit of large D. The associated multidimensional position-momentum uncertainty relations are discussed, showing that they saturate the corresponding general ones. A conjecture about the Shannon-like uncertainty relation is given, and an interesting phenomenon is observed: the Heisenberg-like and Rényi-entropy-based equality-type uncertainty relations for all of the D-dimensional harmonic oscillator states in the pseudoclassical (D → ∞) limit are the same as the corresponding ones for the hydrogenic systems, despite the very different character of the oscillator and Coulomb potentials. Full article
(This article belongs to the Special Issue Foundations of Quantum Mechanics)
Open AccessArticle Modelling Urban Sprawl Using Remotely Sensed Data: A Case Study of Chennai City, Tamilnadu
Entropy 2017, 19(4), 163; https://doi.org/10.3390/e19040163
Received: 4 January 2017 / Revised: 1 April 2017 / Accepted: 5 April 2017 / Published: 7 April 2017
Cited by 6 | PDF Full-text (7403 KB) | HTML Full-text | XML Full-text
Abstract
Urban sprawl (US), propelled by rapid population growth, leads to the shrinkage of productive agricultural lands and pristine forests in the suburban areas and, in turn, adversely affects the provision of ecosystem services. The quantification of US is thus crucial for effective urban
[...] Read more.
Urban sprawl (US), propelled by rapid population growth, leads to the shrinkage of productive agricultural lands and pristine forests in the suburban areas and, in turn, adversely affects the provision of ecosystem services. The quantification of US is thus crucial for effective urban planning and environmental management. Like many megacities in fast growing developing countries, Chennai, the capital of Tamilnadu and one of the business hubs in India, has experienced extensive US triggered by the doubling of the total population over the past three decades. However, the extent and level of US has not yet been quantified and a prediction for the future extent of US is lacking. We employed the Random Forest (RF) classification on Landsat imageries from 1991, 2003, and 2016, and computed six landscape metrics to delineate the extent of urban areas within a 10 km suburban buffer of Chennai. The level of US was then quantified using Rényi entropy. A land change model was subsequently used to project land cover for 2027. A 70.35% expansion in urban areas was observed, mainly towards the suburban periphery of Chennai, between 1991 and 2016. The Rényi entropy value for 2016 was 0.9, exhibiting a two-fold level of US compared to 1991. The spatial metrics values indicate that the existing urban areas became denser and the suburban agricultural, forest and particularly barren lands were transformed into fragmented urban settlements. The forecasted land cover for 2027 indicates a conversion of 13,670.33 ha (16.57% of the total landscape) of existing forests and agricultural lands into urban areas, with an associated increase in the entropy value to 1.7, indicating a tremendous level of US. Our study provides useful metrics for urban planning authorities to address the social-ecological consequences of US and to protect ecosystem services. Full article
(This article belongs to the Special Issue Entropy for Sustainable and Resilient Urban Future)
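The Rényi entropy used to quantify sprawl can be computed from the shares of built-up area across spatial zones. A minimal sketch (the zone-share formulation and the normalization choice are illustrative assumptions, not the paper's exact workflow):

```python
import numpy as np

def renyi_entropy(p, alpha=0.5):
    """Rényi entropy of order alpha for a discrete distribution p;
    reduces to the Shannon entropy in the limit alpha -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))              # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def sprawl_entropy(built_up_area, alpha=0.5, normalize=True):
    """Entropy of built-up area shares across n spatial zones.
    Near 1 (normalized): dispersed sprawl; near 0: compact development."""
    p = np.asarray(built_up_area, dtype=float)
    h = renyi_entropy(p / p.sum(), alpha=alpha)
    return h / np.log(len(p)) if normalize else h
```

Development concentrated in one zone gives entropy 0, while development spread evenly over all zones gives the maximum (1 after normalization), which is how entropy values such as 0.9 versus 1.7 in the abstract encode an increasing level of sprawl.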
Open AccessArticle Situatedness and Embodiment of Computational Systems
Entropy 2017, 19(4), 162; https://doi.org/10.3390/e19040162
Received: 26 February 2017 / Revised: 1 April 2017 / Accepted: 4 April 2017 / Published: 7 April 2017
PDF Full-text (232 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued
[...] Read more.
In this paper, the role of the environment and the physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural, but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that this is notoriously over-appreciated in the current debate. The new mechanistic view on explanation suggests that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, not all detail counts, and some bodily features of cognitive systems should be left out of explanations. Full article
Open AccessArticle P-Adic Analog of Navier–Stokes Equations: Dynamics of Fluid’s Flow in Percolation Networks (from Discrete Dynamics with Hierarchic Interactions to Continuous Universal Scaling Model)
Entropy 2017, 19(4), 161; https://doi.org/10.3390/e19040161
Received: 15 March 2017 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 7 April 2017
Cited by 1 | PDF Full-text (3677 KB) | HTML Full-text | XML Full-text
Abstract
Recently p-adic (and, more generally, ultrametric) spaces representing tree-like networks of percolation, and as a special case of capillary patterns in porous media, started to be used to model the propagation of fluids (e.g., oil, water, oil-in-water, and water-in-oil emulsion). The aim
[...] Read more.
Recently, p-adic (and, more generally, ultrametric) spaces representing tree-like networks of percolation, and as a special case capillary patterns in porous media, started to be used to model the propagation of fluids (e.g., oil, water, oil-in-water, and water-in-oil emulsion). The aim of this note is to derive the p-adic dynamics described by fractional differential operators (Vladimirov operators), starting with discrete dynamics based on hierarchically-structured interactions between the fluids’ volumes concentrated at different levels of the percolation tree and coming to the multiscale universal topology of the percolating nets. Similar systems of discrete hierarchic equations have been widely applied to the modeling of turbulence. However, in the present work this similarity is only formal since, in our model, the trees are real physical patterns with a tree-like topology of capillaries (or fractures) in random porous media (not cascade trees, as in the case of turbulence, which will be discussed elsewhere for the spinner flowmeter commonly used in the petroleum industry). By going to the “continuous limit” (with respect to the p-adic topology), we represent the dynamics on the tree-like configuration space as an evolutionary nonlinear p-adic fractional (pseudo-) differential equation, the tree-like analog of the Navier–Stokes equation. We hope that our work helps to come closer to a solution of the nonlinear equation, taking into account the scaling, hierarchies, and formal derivations imprinted from the similar properties of the real physical world. Once this coupling is resolved, the more problematic question of information scaling in industrial applications can be addressed. Full article
Open AccessArticle Consistent Estimation of Partition Markov Models
Entropy 2017, 19(4), 160; https://doi.org/10.3390/e19040160
Received: 1 March 2017 / Revised: 31 March 2017 / Accepted: 4 April 2017 / Published: 6 April 2017
Cited by 3 | PDF Full-text (291 KB) | HTML Full-text | XML Full-text
Abstract
The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions:
[...] Read more.
The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the process law. From this strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to the modeling of internet navigation patterns. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Open AccessArticle An Approach to the Evaluation of the Quality of Accounting Information Based on Relative Entropy in Fuzzy Linguistic Environments
Entropy 2017, 19(4), 152; https://doi.org/10.3390/e19040152
Received: 13 March 2017 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 5 April 2017
Cited by 1 | PDF Full-text (624 KB) | HTML Full-text | XML Full-text
Abstract
There is a risk when company stakeholders make decisions using accounting information with varied qualities in the same way. In order to evaluate the accounting information quality, this paper proposed an approach to the evaluation of the quality of accounting information based on
[...] Read more.
There is a risk when company stakeholders make decisions using accounting information of varied quality in the same way. In order to evaluate accounting information quality, this paper proposes an approach based on relative entropy in fuzzy linguistic environments. Firstly, the accounting information quality evaluation criteria are constructed, covering not only the quality of the accounting information content but also the accounting information generation environment. Considering that the rating values with respect to the criteria are given in linguistic forms with different granularities, a method to deal with the linguistic rating values is presented. In this method, the linguistic terms are modeled with the 2-tuple linguistic model. Relative entropy is used to calculate the information consistency, from which the weights of experts and criteria are derived. Finally, an example is given to illustrate the feasibility and practicability of the proposed method. Full article
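The paper's actual consistency measure operates on 2-tuple linguistic values; as a loose, hypothetical sketch of the general idea only, weighting experts by their relative entropy (KL divergence) from the group consensus, assuming plain numeric rating distributions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def expert_weights(ratings):
    """ratings: one rating distribution per expert (rows sum to ~1).
    Experts closer, in relative entropy, to the group mean are judged
    more consistent and receive a higher weight."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean(axis=0)
    d = np.array([kl_divergence(r, mean) for r in ratings])
    inv = 1.0 / (1.0 + d)        # small divergence -> larger weight
    return inv / inv.sum()
```

With two agreeing experts and one outlier, the outlier's divergence from the consensus is largest and its weight smallest, illustrating how information consistency can drive the weighting.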
Open AccessArticle Nonequilibrium Thermodynamics and Steady State Density Matrix for Quantum Open Systems
Entropy 2017, 19(4), 158; https://doi.org/10.3390/e19040158
Received: 8 March 2017 / Revised: 28 March 2017 / Accepted: 30 March 2017 / Published: 2 April 2017
Cited by 1 | PDF Full-text (342 KB) | HTML Full-text | XML Full-text
Abstract
We consider the generic model of a finite-size quantum electron system connected to two (temperature and particle) reservoirs. The quantum open system is driven out of equilibrium by the presence of both temperature and chemical potential differences between the two reservoirs. The nonequilibrium
[...] Read more.
We consider the generic model of a finite-size quantum electron system connected to two (temperature and particle) reservoirs. The quantum open system is driven out of equilibrium by the presence of both temperature and chemical potential differences between the two reservoirs. The nonequilibrium (NE) thermodynamic properties of such a quantum open system are studied for the steady state regime. In such a regime, the corresponding NE density matrix is built on the so-called generalised Gibbs ensembles. From different expressions of the NE density matrix, we identify the terms related to the entropy production in the system. We show, for a simple model, that the entropy production rate is always a positive quantity. Alternative expressions for the entropy production are also obtained from the conventional Gibbs–von Neumann formula and discussed in detail. Our results corroborate and expand earlier works found in the literature. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
Open AccessArticle Quadratic Mutual Information Feature Selection
Entropy 2017, 19(4), 157; https://doi.org/10.3390/e19040157
Received: 13 December 2016 / Revised: 27 March 2017 / Accepted: 30 March 2017 / Published: 1 April 2017
Cited by 1 | PDF Full-text (1567 KB) | HTML Full-text | XML Full-text
Abstract
We propose a novel feature selection method based on quadratic mutual information which has its roots in Cauchy–Schwarz divergence and Renyi entropy. The method uses the direct estimation of quadratic mutual information from data samples using Gaussian kernel functions, and can detect second
[...] Read more.
We propose a novel feature selection method based on quadratic mutual information, which has its roots in the Cauchy–Schwarz divergence and Rényi entropy. The method uses a direct estimation of quadratic mutual information from data samples using Gaussian kernel functions, and can detect second-order non-linear relations. Its main advantages are: (i) the unified analysis of discrete and continuous data, excluding any discretization; and (ii) its parameter-free design. The effectiveness of the proposed method is demonstrated through an extensive comparison with mutual information feature selection (MIFS), minimum redundancy maximum relevance (MRMR), and joint mutual information (JMI) on classification and regression problem domains. The experiments show that the proposed method performs comparably to the other methods on classification problems, while being considerably faster. In the case of regression, it compares favourably to the others, but is slower. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
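A Cauchy–Schwarz quadratic mutual information of the kind described here can be estimated directly from samples with Gaussian kernels. A minimal one-dimensional sketch (the function names and the fixed kernel width are our own choices; the paper's estimator, which is parameter-free, differs in detail):

```python
import numpy as np

def gauss_gram(a, sigma):
    """Gaussian kernel Gram matrix for a 1-D sample."""
    d = a[:, None] - a[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def qmi_cs(x, y, sigma=1.0):
    """Sample estimate of the Cauchy-Schwarz QMI between paired samples.
    Built from three information potentials: joint, marginal, and cross;
    it is near zero for independent variables and grows with dependence."""
    Kx = gauss_gram(np.asarray(x, dtype=float), sigma)
    Ky = gauss_gram(np.asarray(y, dtype=float), sigma)
    v_joint = np.mean(Kx * Ky)                               # joint potential
    v_marg = np.mean(Kx) * np.mean(Ky)                       # marginal potential
    v_cross = np.mean(Kx.mean(axis=1) * Ky.mean(axis=1))     # cross potential
    return float(np.log(v_joint * v_marg / v_cross ** 2))
```

For independent samples the three potentials approximately balance and the estimate stays near zero, while a deterministic relation between x and y yields a clearly larger value, which is the ranking signal a feature selector needs.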
Open AccessArticle Use of Exergy Analysis to Quantify the Effect of Lithium Bromide Concentration in an Absorption Chiller
Entropy 2017, 19(4), 156; https://doi.org/10.3390/e19040156
Received: 24 February 2017 / Revised: 27 March 2017 / Accepted: 30 March 2017 / Published: 1 April 2017
Cited by 5 | PDF Full-text (1892 KB) | HTML Full-text | XML Full-text
Abstract
Absorption chillers present opportunities to utilize sustainable fuels in the production of chilled water. An assessment of the steam-driven absorption chiller at the University of Idaho was performed to quantify the current exergy destruction rates. Measurements of external processes and flows were
[...] Read more.
Absorption chillers present opportunities to utilize sustainable fuels in the production of chilled water. An assessment of the steam-driven absorption chiller at the University of Idaho was performed to quantify the current exergy destruction rates. Measurements of external processes and flows were used to create a mathematical model, which was analyzed with Engineering Equation Solver to identify the major sources of exergy destruction within the chiller. It was determined that the absorber, generator, and condenser are the largest contributors to the exergy destruction, at 30%, 31%, and 28% of the total, respectively. The exergetic efficiency is found to be 16%, with a coefficient of performance (COP) of 0.65. The impacts of the weak solution concentration of lithium bromide on the exergy destruction rates were evaluated using parametric studies. The studies revealed an optimum concentration: by increasing the weak solution concentration from 56% to 58.8%, a net decrease of 0.4% in the exergy destruction caused by the absorption chiller can be obtained. This 2.8% increase in lithium bromide concentration decreases the exergy destruction primarily within the absorber, with a decrease of 5.1%. The increase in concentration is also shown to decrease the maximum cooling capacity by 3% and increase the exergy destruction of the generator by 4.9%. The study also shows that the increase in concentration changes the internal temperatures by 3 to 7 °C. Conversely, reducing the weak solution concentration is shown to increase the exergy destruction rates while also potentially increasing the cooling capacity. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Random Walks Associated with Nonlinear Fokker–Planck Equations
Entropy 2017, 19(4), 155; https://doi.org/10.3390/e19040155
Received: 24 February 2017 / Revised: 28 March 2017 / Accepted: 30 March 2017 / Published: 1 April 2017
Cited by 3 | PDF Full-text (296 KB) | HTML Full-text | XML Full-text
Abstract
A nonlinear random walk related to the porous medium equation (nonlinear Fokker–Planck equation) is investigated. This random walk is such that when the number of steps is sufficiently large, the probability of finding the walker in a certain position after taking a determined
[...] Read more.
A nonlinear random walk related to the porous medium equation (nonlinear Fokker–Planck equation) is investigated. This random walk is such that when the number of steps is sufficiently large, the probability of finding the walker in a certain position after taking a determined number of steps approximates a q-Gaussian distribution, G_{q,β}(x) ∝ [1 − (1 − q)βx²]^{1/(1−q)}, which is a solution of the porous medium equation. This can be seen as a verification of a generalized central limit theorem where the attractor is a q-Gaussian distribution, reducing to the Gaussian one when linearity is recovered (q → 1). In addition, motivated by this random walk, a nonlinear Markov chain is suggested. Full article
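The q-Gaussian attractor named in the abstract is easy to evaluate directly; the full nonlinear walk is beyond a short snippet, but a sketch of the (unnormalized) density makes both limiting behaviors concrete (the function name is ours):

```python
import math

def q_gaussian(x, q, beta):
    """Unnormalized q-Gaussian density [1 - (1 - q) * beta * x^2]_+^{1/(1-q)}.
    For q < 1 the support is compact (the bracket is clipped at zero);
    as q -> 1 it recovers the ordinary Gaussian exp(-beta * x^2)."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(-beta * x * x)        # Gaussian limit
    base = 1.0 - (1.0 - q) * beta * x * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
```

Evaluating at q slightly above 1 reproduces the Gaussian value to high accuracy, illustrating the q → 1 reduction stated in the abstract, while q < 1 shows the cut-off (compact support) characteristic of porous-medium solutions.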