Table of Contents

Entropy, Volume 20, Issue 2 (February 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Darwinian evolution is grounded in a dynamical selection process that involves diverse classes of [...]
Displaying articles 1-69
Open Access Feature Paper Article: The Volume of Two-Qubit States by Information Geometry
Entropy 2018, 20(2), 146; https://doi.org/10.3390/e20020146
Received: 22 December 2017 / Revised: 20 February 2018 / Accepted: 22 February 2018 / Published: 24 February 2018
Cited by 1 | PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
Using the information geometry approach, we determine the volume of the set of two-qubit states with maximally disordered subsystems. Particular attention is devoted to the behavior of the volume of sub-manifolds of separable and entangled states with fixed purity. We show that the usage of the classical Fisher metric on the phase-space probability representation of quantum states gives the same qualitative results as different versions of the quantum Fisher metric.
(This article belongs to the Special Issue News Trends in Statistical Physics of Complex Systems)

Open Access Article: Information Thermodynamics Derives the Entropy Current of Cell Signal Transduction as a Model of a Binary Coding System
Entropy 2018, 20(2), 145; https://doi.org/10.3390/e20020145
Received: 12 January 2018 / Revised: 7 February 2018 / Accepted: 14 February 2018 / Published: 24 February 2018
Cited by 4 | PDF Full-text (556 KB) | HTML Full-text | XML Full-text
Abstract
The analysis of cellular signaling cascades based on information thermodynamics has developed considerably in recent years. A signaling cascade may be considered a binary code system consisting of two types of signaling molecules that carry biological information: phosphorylated (active) and non-phosphorylated (inactive) forms. This study aims to evaluate the signal transduction step in cascades from the viewpoint of changes in mixing entropy. An increase in active forms may induce biological signal transduction through a mixing entropy change, which induces a chemical potential current in the signaling cascade. We applied the fluctuation theorem to calculate the chemical potential current and found that the average entropy production current is independent of the step in the whole cascade. As a result, the entropy current carrying signal transduction is defined by the entropy current mobility.
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article: Group Sparse Precoding for Cloud-RAN with Multiple User Antennas
Entropy 2018, 20(2), 144; https://doi.org/10.3390/e20020144
Received: 6 November 2017 / Revised: 2 February 2018 / Accepted: 19 February 2018 / Published: 23 February 2018
Cited by 1 | PDF Full-text (950 KB) | HTML Full-text | XML Full-text
Abstract
Cloud radio access network (C-RAN) has become a promising network architecture to support the massive data traffic of next-generation cellular networks. In a C-RAN, a massive number of low-cost remote antenna ports (RAPs) are connected to a single baseband unit (BBU) pool via high-speed, low-latency fronthaul links, which enables efficient resource allocation and interference management. As the RAPs are geographically distributed, group sparse beamforming schemes have attracted extensive study, where a subset of RAPs is assigned to be active and a high spectral efficiency can be achieved. However, most studies assume that each user is equipped with a single antenna. How to design the group sparse precoder for users with multiple antennas remains little understood, as it requires the joint optimization of the mutually coupled transmit and receive beamformers. This paper formulates an optimal joint RAP selection and precoding design problem in a C-RAN with multiple antennas at each user. Specifically, we assume a fixed transmit power constraint for each RAP, and investigate the optimal tradeoff between the sum rate and the number of active RAPs. Motivated by compressive sensing theory, this paper formulates the group sparse precoding problem by introducing the ℓ0-norm as a penalty and then uses the reweighted ℓ1 heuristic to find a solution. By adopting the idea of block diagonalization precoding, the problem can be formulated as a convex optimization, and an efficient algorithm is proposed based on its Lagrangian dual. Simulation results verify that the proposed algorithm achieves almost the same sum rate as that obtained from an exhaustive search.
(This article belongs to the Section Information Theory)
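
The reweighted ℓ1 heuristic mentioned in this abstract can be sketched on a generic sparse-recovery toy problem (a minimal sketch of the heuristic only, not of the paper's precoder design; the problem sizes, the random seed, and the weight-update constant `eps` are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=5, eps=1e-3):
    """Reweighted l1 heuristic for min ||x||_0 subject to A x = b.

    Each weighted l1 subproblem is solved as a linear program by splitting
    x = u - v with u, v >= 0 and minimizing sum_i w_i * (u_i + v_i).
    """
    m, n = A.shape
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        c = np.concatenate([w, w])        # objective: sum_i w_i (u_i + v_i)
        A_eq = np.hstack([A, -A])         # constraint: A (u - v) = b
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
        x = res.x[:n] - res.x[n:]
        w = 1.0 / (np.abs(x) + eps)       # up-weight small entries for next pass
    return x
```

Each pass solves a weighted ℓ1 minimization and then up-weights the coordinates that came back small, pushing them toward exact zero on the next pass.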

Open Access Article: Stochastic Dynamics of a Time-Delayed Ecosystem Driven by Poisson White Noise Excitation
Entropy 2018, 20(2), 143; https://doi.org/10.3390/e20020143
Received: 16 December 2017 / Revised: 5 February 2018 / Accepted: 12 February 2018 / Published: 23 February 2018
PDF Full-text (2857 KB) | HTML Full-text | XML Full-text
Abstract
We investigate the stochastic dynamics of a prey-predator ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter and the environmental randomness is modeled as Poisson white noise. The stochastic averaging method and the perturbation method are applied to calculate the approximate stationary probability density functions for both predator and prey populations. The influences of the system parameters and the Poisson white noise are investigated in detail based on these approximate stationary probability density functions. It is found that increasing the time delay parameter, as well as the mean arrival rate and the amplitude variance of the Poisson white noise, enhances the fluctuations of the prey and predator populations, while a larger self-competition parameter reduces the fluctuations of the system. Furthermore, results from Monte Carlo simulation are also obtained to show the effectiveness of the averaging method.

Open Access Article: A Simple and Adaptive Dispersion Regression Model for Count Data
Entropy 2018, 20(2), 142; https://doi.org/10.3390/e20020142
Received: 19 January 2018 / Revised: 14 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
Cited by 3 | PDF Full-text (409 KB) | HTML Full-text | XML Full-text
Abstract
Regression for count data is widely performed by models such as Poisson, negative binomial (NB) and zero-inflated regression. A challenge often faced by practitioners is the selection of the right model to take into account dispersion, which typically occurs in count datasets. It is highly desirable to have a unified model that can automatically adapt to the underlying dispersion and that can be easily implemented in practice. In this paper, a discrete Weibull regression model is shown to be able to adapt in a simple way to different types of dispersions relative to Poisson regression: overdispersion, underdispersion and covariate-specific dispersion. Maximum likelihood can be used for efficient parameter estimation. The description of the model, parameter inference and model diagnostics is accompanied by simulated and real data analyses.
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
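
As a sketch of the adaptivity the abstract describes, the type-I discrete Weibull distribution (P(X ≥ x) = q^(x^β); this parameterization and the parameter values below are assumptions for illustration, not taken from the paper) covers both dispersion regimes as β varies:

```python
import numpy as np

def dweibull_pmf(x, q, beta):
    # Type-I discrete Weibull: P(X >= x) = q**(x**beta) for x = 0, 1, 2, ...
    x = np.asarray(x, dtype=float)
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dispersion_index(q, beta, xmax=5000):
    # Variance-to-mean ratio: > 1 is overdispersed, < 1 underdispersed vs. Poisson
    x = np.arange(xmax)
    p = dweibull_pmf(x, q, beta)
    mean = np.sum(x * p)
    var = np.sum((x - mean) ** 2 * p)
    return var / mean
```

β = 1 recovers the geometric distribution (overdispersed relative to Poisson), while a large β concentrates the mass and yields underdispersion.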

Open Access Article: Finding a Hadamard Matrix by Simulated Quantum Annealing
Entropy 2018, 20(2), 141; https://doi.org/10.3390/e20020141
Received: 2 January 2018 / Revised: 6 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
PDF Full-text (773 KB) | HTML Full-text | XML Full-text
Abstract
Hard problems have recently become an important issue in computing. Various methods, including a heuristic approach that is inspired by physical phenomena, are being explored. In this paper, we propose the use of simulated quantum annealing (SQA) to find a Hadamard matrix, which is itself a hard problem. We reformulate the problem as an energy minimization of spin vectors connected by a complete graph. The computation is conducted based on a path-integral Monte-Carlo (PIMC) SQA of the spin vector system, with an applied transverse magnetic field whose strength is decreased over time. In the numerical experiments, the proposed method is employed to find low-order Hadamard matrices, including the ones that cannot be constructed trivially by the Sylvester method. The scaling property of the method and the measurement of residual energy after a sufficiently large number of iterations show that SQA outperforms simulated annealing (SA) in solving this hard problem.
(This article belongs to the Special Issue Quantum Information and Foundations)
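
The energy-minimization formulation can be sketched directly: a ±1 matrix H of order n is Hadamard exactly when H Hᵀ = nI, so the squared off-diagonal mass of H Hᵀ serves as an energy that the annealer drives to zero (a minimal sketch of the objective and of the Sylvester construction, not of the PIMC-SQA solver itself):

```python
import numpy as np

def hadamard_energy(H):
    # Zero exactly when H is a Hadamard matrix (entries +/-1, rows mutually
    # orthogonal): energy = || H H^T - n I ||_F^2
    n = H.shape[0]
    G = H @ H.T
    return float(np.sum((G - n * np.eye(n)) ** 2))

def sylvester(k):
    # Sylvester construction of a 2^k x 2^k Hadamard matrix: H -> [[H, H], [H, -H]]
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H
```

Any sign flip in a valid Hadamard matrix raises the energy above zero, which is what makes this a usable annealing objective.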

Open Access Article: A Chemo-Mechanical Model of Diffusion in Reactive Systems
Entropy 2018, 20(2), 140; https://doi.org/10.3390/e20020140
Received: 24 January 2018 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
Cited by 1 | PDF Full-text (31650 KB) | HTML Full-text | XML Full-text
Abstract
The functional properties of multi-component materials are often determined by a rearrangement of their different phases and by chemical reactions of their components. In this contribution, a material model is presented which enables computational simulations and structural optimization of solid multi-component systems. Typical systems of this kind are anodes in batteries, reactive polymer blends and propellants. The physical processes which are assumed to contribute to the microstructural evolution are: (i) particle exchange and mechanical deformation; (ii) spinodal decomposition and phase coarsening; (iii) chemical reactions between the components; and (iv) energetic forces associated with the elastic field of the solid. To illustrate the capability of the deduced coupled field model, three-dimensional Non-Uniform Rational Basis Spline (NURBS)-based finite element simulations of such multi-component structures are presented.
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)

Open Access Article: Lagrangian Function on the Finite State Space Statistical Bundle
Entropy 2018, 20(2), 139; https://doi.org/10.3390/e20020139
Received: 26 December 2017 / Revised: 21 January 2018 / Accepted: 24 January 2018 / Published: 22 February 2018
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
The statistical bundle is the set of couples (Q, W) of a probability density Q and a random variable W such that [...]
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
Open Access Article: Coarse-Graining Approaches in Univariate Multiscale Sample and Dispersion Entropy
Entropy 2018, 20(2), 138; https://doi.org/10.3390/e20020138
Received: 1 December 2017 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 22 February 2018
Cited by 2 | PDF Full-text (3483 KB) | HTML Full-text | XML Full-text
Abstract
The evaluation of complexity in univariate signals has attracted considerable attention in recent years. This is often done using the framework of Multiscale Entropy, which entails two basic steps: coarse-graining to consider multiple temporal scales, and evaluation of irregularity for each of those scales with entropy estimators. Recent developments in the field have proposed modifications to this approach to facilitate the analysis of short time series. However, the role of downsampling in the classical coarse-graining process and its relationship to alternative filtering techniques have not yet been systematically explored. Here, we assess the impact of coarse-graining in multiscale entropy estimations based on both Sample Entropy and Dispersion Entropy. We compare the classical moving average approach with low-pass Butterworth filtering, both with and without downsampling, and empirical mode decomposition in Intrinsic Multiscale Entropy, in selected synthetic data and two real physiological datasets. The results show that when the sampling frequency is low or high, downsampling respectively decreases or increases the entropy values. Our results suggest that, when dealing with long signals and relatively low levels of noise, the refined composite method makes little difference in the quality of the entropy estimation at the expense of considerable additional computational cost. It is also found that downsampling within the coarse-graining procedure may not be required to quantify the complexity of signals, especially for short ones. Overall, we expect these results to contribute to the ongoing discussion about the development of stable, fast and robust-to-noise multiscale entropy techniques suited for either short or long recordings.
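
The two coarse-graining variants compared in this abstract differ only in the downsampling step; a minimal sketch (function names are illustrative):

```python
import numpy as np

def coarse_grain(x, scale):
    # Classical multiscale coarse-graining: means over non-overlapping windows
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def moving_average(x, scale):
    # The same low-pass filtering step, but without the downsampling
    kernel = np.ones(scale) / scale
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")
```

Note that `coarse_grain(x, s)` equals `moving_average(x, s)[::s]`, which makes explicit that the classical procedure is the filtered signal plus downsampling.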

Open Access Article: Engine Load Effects on the Energy and Exergy Performance of a Medium Cycle/Organic Rankine Cycle for Exhaust Waste Heat Recovery
Entropy 2018, 20(2), 137; https://doi.org/10.3390/e20020137
Received: 10 December 2017 / Revised: 3 February 2018 / Accepted: 12 February 2018 / Published: 21 February 2018
Cited by 1 | PDF Full-text (4772 KB) | HTML Full-text | XML Full-text
Abstract
The Organic Rankine Cycle (ORC) has proved to be a promising technique for exploiting waste heat from Internal Combustion Engines (ICEs). Waste heat recovery systems have usually been designed for engine rated working conditions, while engines often operate under part-load conditions. Hence, it is quite important to analyze the off-design performance of ORC systems under different engine loads. This paper presents an off-design Medium Cycle/Organic Rankine Cycle (MC/ORC) system model built by interconnecting the component models, which allows the prediction of system off-design behavior. The sliding pressure control method is applied to balance the variation of system parameters, and the evaporating pressure is chosen as the operational variable. The effect of the operational variable and engine load on system performance is analyzed from the aspects of energy and exergy. The results show that with the drop of engine load, the MC/ORC system can always effectively recover waste heat, whereas the maximum net power output, thermal efficiency and exergy efficiency decrease linearly. Considering the contributions of components to total exergy destruction, the proportions of the gas-oil exchanger and turbine increase, while the proportions of the evaporator and condenser decrease with the drop of engine load.
(This article belongs to the Special Issue Work Availability and Exergy Analysis)

Open Access Article: Robustification of a One-Dimensional Generic Sigmoidal Chaotic Map with Application of True Random Bit Generation
Entropy 2018, 20(2), 136; https://doi.org/10.3390/e20020136
Received: 23 December 2017 / Revised: 7 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (6699 KB) | HTML Full-text | XML Full-text
Abstract
The search for approaches to generating robust chaos has received considerable attention due to potential applications in cryptography and secure communications. This paper concerns a 1-D sigmoidal chaotic map, which has never been distinctly investigated. We introduce a generic form of the sigmoidal chaotic map with three terms, i.e., xn+1 = ∓AfNL(Bxn) ± Cxn ± D, where A, B, C, and D are real constants. The unification of modified sigmoid and hyperbolic tangent (tanh) functions reveals the existence of a "unified sigmoidal chaotic map" generically fulfilling the three terms, with robust chaos partially appearing in some parameter ranges. A simplified generic form, i.e., xn+1 = ∓fNL(Bxn) ± Cxn, realized through various S-shaped functions, has recently led to the possibility of linearization using (i) hardtanh and (ii) signum functions. This study finds a linearized sigmoidal chaotic map that potentially offers robust chaos over an entire range of parameters. The chaotic dynamics are described in terms of chaotic waveforms, histograms, cobweb plots, fixed points, Jacobians, and a bifurcation structure diagram based on Lyapunov exponents. As a practical example, a true random bit generator using the linearized sigmoidal chaotic map is demonstrated. The resulting output is evaluated using the NIST SP800-22 test suite and TestU01.
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
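
A signum-linearized instance of the simplified form can be sketched as follows (the parameter value C = 1.8 and the sign choices are illustrative assumptions; for 1 < C < 2 this piecewise-linear map keeps [-1, 1] invariant with constant slope C, so its Lyapunov exponent is ln C > 0 across that whole parameter range):

```python
import math

def signum_map(x, C=1.8):
    # Signum-linearized instance of the simplified form: x_{n+1} = C*x_n - sgn(x_n).
    # Constant slope C almost everywhere => Lyapunov exponent ln(C) > 0 for C > 1.
    s = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
    return C * x - s

def orbit(x0, C=1.8, n=500):
    # Iterate the map n times from x0 and collect the trajectory
    xs = []
    x = x0
    for _ in range(n):
        x = signum_map(x, C)
        xs.append(x)
    return xs
```

Because the expansion rate does not depend on the state, the positive Lyapunov exponent persists under parameter perturbations, which is the sense of "robust chaos" used in the abstract.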

Open Access Article: Complexity of Simple, Switched and Skipped Chaotic Maps in Finite Precision
Entropy 2018, 20(2), 135; https://doi.org/10.3390/e20020135
Received: 29 December 2017 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (5247 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we investigate the degradation of the statistical properties of chaotic maps as a consequence of their implementation on digital media such as Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs) or Application-Specific Integrated Circuits (ASICs). In these systems, binary floating- and fixed-point are the available numerical representations. Fixed-point representation is preferred over floating-point when speed, low power and/or small circuit area are necessary. We therefore compare the degradation of fixed-point binary precision versions of chaotic maps with that obtained using the IEEE 754 floating-point standard, to evaluate the feasibility of their FPGA implementation. The specific period that every fixed-point precision produces was investigated in previous reports. Statistical characteristics are also relevant: it has recently been shown that it is convenient to describe them using both causal and non-causal quantifiers. In this paper we complement the period analysis by characterizing the behavior of these maps from a statistical point of view, using quantifiers from information theory. Here, rather than reproducing an exact replica of the real system, the aim is to meet certain conditions related to the statistics of the system.
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
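
The period collapse caused by finite precision can be sketched with the logistic map as a stand-in (the map choice and the rounding-to-fractional-bits quantization scheme are illustrative assumptions, not the paper's exact setup):

```python
def quantized_logistic_period(frac_bits, x0=0.123, r=4.0):
    # Iterate the logistic map x -> r*x*(1-x) with the state rounded to
    # `frac_bits` fractional bits after every step, and return the length of
    # the cycle that the finite-state dynamics eventually falls into.
    scale = 1 << frac_bits

    def q(x):
        return round(x * scale) / scale

    x = q(x0)
    seen = {}
    step = 0
    while x not in seen:
        seen[x] = step
        x = q(r * x * (1.0 - x))
        step += 1
    return step - seen[x]
```

Because the quantized state space is finite, every orbit is eventually periodic; the measured cycle lengths are typically far shorter than the state-space size, which is the degradation the paper quantifies statistically.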

Open Access Article: Investigating the Configurations in Cross-Shareholding: A Joint Copula-Entropy Approach
Entropy 2018, 20(2), 134; https://doi.org/10.3390/e20020134
Received: 24 December 2017 / Revised: 16 February 2018 / Accepted: 17 February 2018 / Published: 20 February 2018
PDF Full-text (877 KB) | HTML Full-text | XML Full-text
Abstract
The complex nature of the interlacement of economic actors is quite evident at the level of the stock market, where any company may actually interact with the other companies by buying and selling their shares. In this respect, the companies populating a stock market, along with their connections, can be effectively modeled through a directed network, where the nodes represent the companies and the links indicate ownership. This paper deals with this theme and discusses the concentration of a market. A cross-shareholding matrix is considered, along with two key factors: the node out-degree distribution, which represents the diversification of investments in terms of the number of involved companies, and the node in-degree distribution, which reports the integration of a company due to the sales of its own shares to other companies. While diversification is widely explored in the literature, integration is mostly present in the literature on contagion. This paper captures such quantities of interest in the two frameworks and studies the stochastic dependence of diversification and integration through a copula approach. We adopt entropies as measures for assessing the concentration in the market. The main question is to assess the dependence structure leading to a better description of the data, to market polarization (minimal entropy) or to market fairness (maximal entropy). In so doing, we derive information on the way in which the in- and out-degrees should be connected in order to shape the market. The question is of interest to regulatory bodies, as witnessed by the specific alert thresholds published in the US merger guidelines for limiting the possibility of acquisitions and the prevalence of a single company on the market. Indeed, individual countries and the EU also have rules or guidelines in order to limit concentrations, within a country or across borders, respectively. The calibration of copulas and model parameters on the basis of real data serves as an illustrative application of the theoretical proposal.
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)

Open Access Article: Applying Time-Dependent Attributes to Represent Demand in Road Mass Transit Systems
Entropy 2018, 20(2), 133; https://doi.org/10.3390/e20020133
Received: 24 January 2018 / Revised: 15 February 2018 / Accepted: 16 February 2018 / Published: 20 February 2018
PDF Full-text (3944 KB) | HTML Full-text | XML Full-text
Abstract
The development of efficient mass transit systems that provide quality of service is a major challenge for modern societies. To meet this challenge, it is essential to understand user demand. This article proposes using new time-dependent attributes to represent demand, attributes that differ from those traditionally used in the design and planning of this type of transit system. Data mining was used to obtain these new attributes; they were created using clustering techniques, and their quality was evaluated with the Shannon entropy function and with neural networks. The methodology was implemented for an intercity public transport company, and the results demonstrate that the attributes obtained offer a more precise understanding of demand and enable predictions to be made with acceptable precision.
(This article belongs to the Special Issue Entropy-based Data Mining)
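
The Shannon entropy evaluation of an attribute can be sketched as follows (a generic sketch; treating the attribute as a vector of cluster labels is an assumption for illustration):

```python
import math
from collections import Counter

def shannon_entropy(labels):
    # Shannon entropy (in bits) of a discrete attribute such as a vector of
    # cluster assignments; higher values mean demand spread over more clusters
    counts = Counter(labels)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

An attribute whose values collapse into a single cluster carries zero entropy, while one spread uniformly over k clusters attains the maximum log2(k).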

Open Access Article: Uncertainty Relation Based on Wigner–Yanase–Dyson Skew Information with Quantum Memory
Entropy 2018, 20(2), 132; https://doi.org/10.3390/e20020132
Received: 2 January 2018 / Revised: 11 February 2018 / Accepted: 15 February 2018 / Published: 20 February 2018
Cited by 1 | PDF Full-text (426 KB) | HTML Full-text | XML Full-text
Abstract
We present uncertainty relations based on Wigner–Yanase–Dyson skew information with quantum memory. Uncertainty inequalities both in product and summation forms are derived. It is shown that the lower bounds contain two terms: one characterizes the degree of compatibility of two measurements, and the other is the quantum correlation between the measured system and the quantum memory. Detailed examples are given for product, separable and entangled states.
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
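
The Wigner–Yanase skew information underlying these relations can be computed directly from its definition I(ρ, A) = -½ Tr([√ρ, A]²) (a minimal numerical sketch; the Dyson generalization would replace √ρ by ρ^α paired with ρ^(1-α)):

```python
import numpy as np

def skew_information(rho, A):
    # Wigner-Yanase skew information I(rho, A) = -1/2 Tr([sqrt(rho), A]^2),
    # computed via the eigendecomposition of the (Hermitian) density matrix
    vals, vecs = np.linalg.eigh(rho)
    sqrt_rho = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T
    comm = sqrt_rho @ A - A @ sqrt_rho
    return float(np.real(-0.5 * np.trace(comm @ comm)))
```

For a pure state the skew information reduces to the ordinary variance of A, while it vanishes whenever ρ commutes with A (e.g., the maximally mixed state).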

Open Access Article: On a Dynamical Approach to Some Prime Number Sequences
Entropy 2018, 20(2), 131; https://doi.org/10.3390/e20020131
Received: 8 December 2017 / Revised: 30 January 2018 / Accepted: 18 February 2018 / Published: 19 February 2018
PDF Full-text (2900 KB) | HTML Full-text | XML Full-text
Abstract
We show how the cross-disciplinary transfer of techniques from dynamical systems theory to number theory can be a fruitful avenue for research. We illustrate this idea by exploring, from a nonlinear and symbolic dynamics viewpoint, certain patterns emerging in some residue sequences generated from the prime number sequence. We show that the sequence formed by the residues of the primes modulo k is maximally chaotic and, while lacking forbidden patterns, unexpectedly displays a non-trivial spectrum of Rényi entropies, which suggests that every block of size m > 1, while admissible, occurs with different probability. This non-uniform distribution of blocks for m > 1 contrasts with Dirichlet's theorem, which guarantees equiprobability for m = 1. We then explore in a similar fashion the sequence of prime gap residues. We numerically find that this sequence is again chaotic (positivity of Kolmogorov–Sinai entropy); however, chaos is weaker, as forbidden patterns emerge for every block of size m > 1. We relate the onset of these forbidden patterns to the divisibility properties of integers, and estimate the densities of gap block residues via the Hardy–Littlewood k-tuple conjecture. We use this estimation to argue that the admissible blocks are non-uniformly distributed, which supports the fact that the spectrum of Rényi entropies is again non-trivial in this case. We complete our analysis by applying the chaos game to these symbolic sequences, and comparing the Iterated Function System (IFS) attractors found for the experimental sequences with appropriate null models.
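
The m = 1 equiprobability guaranteed by Dirichlet's theorem is easy to check numerically (a minimal sketch; the modulus k = 4 and the cutoff used below are illustrative choices):

```python
from collections import Counter

def primes_up_to(n):
    # Simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, ok in enumerate(sieve) if ok]

def residue_counts(k, limit):
    # Counts of p mod k over primes p > k; by Dirichlet's theorem, each residue
    # class coprime to k is asymptotically equiprobable (block size m = 1)
    return Counter(p % k for p in primes_up_to(limit) if p > k)
```

For k = 4, only the classes 1 and 3 occur, and their frequencies agree to within a small relative deviation at modest cutoffs; the non-trivial structure described in the abstract only appears for blocks of size m > 1.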

Open Access Article: Exergoeconomic Assessment of Solar Absorption and Absorption–Compression Hybrid Refrigeration in Building Cooling
Entropy 2018, 20(2), 130; https://doi.org/10.3390/e20020130
Received: 19 January 2018 / Revised: 12 February 2018 / Accepted: 14 February 2018 / Published: 17 February 2018
Cited by 2 | PDF Full-text (3212 KB) | HTML Full-text | XML Full-text
Abstract
The paper deals with matching solar refrigeration systems, i.e., the solar/natural gas-driven absorption chiller (SNGDAC), the solar vapor compression–absorption integrated refrigeration system with parallel configuration (SVCAIRSPC), and the solar absorption-subcooled compression hybrid cooling system (SASCHCS), to building cooling loads on the basis of exergoeconomics. Three types of building cooling are considered: type 1 is the single-story building, type 2 includes two-story and three-story buildings, and type 3 comprises multi-story buildings. Two Chinese cities, Guangzhou and Turpan, are taken into account as well. The product cost flow rate is employed as the primary decision variable. The results show that SNGDAC is a suitable solution for type 1 buildings in Turpan, owing to its negligible natural gas consumption and lowest product cost flow rate. SVCAIRSPC is more applicable for type 2 buildings in Turpan because of its higher actual cooling capacity of the absorption subsystem and lower fuel and product cost flow rates. Additionally, SASCHCS shows the most extensive cost-effectiveness, namely, its exergy destruction and product cost flow rate are both the lowest when used in all types of buildings in Guangzhou or type 3 buildings in Turpan. This paper is helpful to promote the application of solar cooling.
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
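The product cost flow rate used as the decision variable above comes from standard exergoeconomic (SPECO-style) cost balances. The sketch below is a generic illustration of that accounting, not the authors' model; all names and numbers are ours:

```python
def product_cost_flow_rate(c_fuel, E_fuel, Z_dot):
    # Exergoeconomic cost balance for one component:
    #   C_dot_product = c_fuel * E_dot_fuel + Z_dot
    # c_fuel: unit cost of fuel exergy ($/kWh)
    # E_fuel: fuel exergy rate (kW)
    # Z_dot:  capital investment + O&M cost rate ($/h)
    return c_fuel * E_fuel + Z_dot

def unit_product_cost(C_dot_product, E_product):
    # Average cost per unit of product exergy: c_P = C_dot_P / E_dot_P
    return C_dot_product / E_product
```

Comparing the product cost flow rate (or the unit product cost) across candidate systems for a given building load is the kind of ranking the abstract describes.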
Open AccessArticle Logical Divergence, Logical Entropy, and Logical Mutual Information in Product MV-Algebras
Entropy 2018, 20(2), 129; https://doi.org/10.3390/e20020129
Received: 25 January 2018 / Revised: 6 February 2018 / Accepted: 9 February 2018 / Published: 16 February 2018
Cited by 3 | PDF Full-text (295 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we use the logical entropy function to propose a new kind of entropy in product MV-algebras, namely the logical entropy and its conditional version. We establish the fundamental properties of these quantities and then use the results on logical entropy to define the logical mutual information of experiments in the studied case. In addition, we define the logical cross entropy and logical divergence for the examined situation and prove basic properties of the suggested quantities. Several numerical examples illustrate the results. Full article
(This article belongs to the Section Information Theory)
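For finite probability distributions, the logical entropy and the related cross entropy and divergence reduce to simple quadratic expressions. We follow the standard logical-entropy definitions here; the MV-algebra setting of the paper generalizes these:

```python
def logical_entropy(p):
    # h(p) = 1 - sum_i p_i^2: the probability that two independent
    # draws from p fall into different outcomes.
    return 1.0 - sum(pi * pi for pi in p)

def logical_cross_entropy(p, q):
    # h(p || q) = 1 - sum_i p_i * q_i
    return 1.0 - sum(pi * qi for pi, qi in zip(p, q))

def logical_divergence(p, q):
    # d(p || q) = sum_i (p_i - q_i)^2
    #           = 2*h(p||q) - h(p) - h(q) >= 0, zero iff p == q
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
```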
Open AccessArticle On Points Focusing Entropy
Entropy 2018, 20(2), 128; https://doi.org/10.3390/e20020128
Received: 22 January 2018 / Revised: 12 February 2018 / Accepted: 13 February 2018 / Published: 16 February 2018
PDF Full-text (296 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we consider local aspects of the entropy of nonautonomous dynamical systems. For this purpose, we introduce the notion of an (asymptotical) focal entropy point. The notion of entropy arose from practical needs in thermodynamics and the study of information flow, and it reflects the complexity of a system. The definition adopted in the paper identifies points around which the complexity of the system concentrates: the complexity of the system equals its complexity around these points, and the complexity around such points does not depend on the behavior of the system in other parts of its domain. Any periodic system “acting” on the closed unit interval has an asymptotical focal entropy point, which justifies wide interest in these issues. We examine how distortions of a system and the approximation of an autonomous system by a nonautonomous one affect the existence of an (asymptotical) focal entropy point. It is shown that even a slight modification of a system may give rise to the respective focal entropy points. Full article
(This article belongs to the Special Issue Entropy in Dynamic Systems)
Open AccessArticle Application of Multiscale Entropy in Assessing Plantar Skin Blood Flow Dynamics in Diabetics with Peripheral Neuropathy
Entropy 2018, 20(2), 127; https://doi.org/10.3390/e20020127
Received: 22 January 2018 / Revised: 10 February 2018 / Accepted: 12 February 2018 / Published: 15 February 2018
PDF Full-text (5452 KB) | HTML Full-text | XML Full-text
Abstract
Diabetic foot ulcers (DFUs) are a common complication of diabetes mellitus, and tissue ischemia caused by an impaired vasodilatory response to plantar pressure is thought to be a major factor in their development; this response has been assessed using various measures of skin blood flow (SBF) in the time or frequency domain. These measures, however, cannot characterize the nonlinear dynamics of SBF, which are an indicator of pathologic alterations of microcirculation in the diabetic foot. This study recruited 18 type 2 diabetics with peripheral neuropathy and eight healthy controls. SBF at the first metatarsal head in response to locally applied pressure and heating was measured using laser Doppler flowmetry. A multiscale entropy algorithm was used to quantify the degree of regularity of the SBF responses. The results showed that during reactive hyperemia and the thermally induced biphasic response, the degree of regularity of SBF in diabetics underwent only small changes compared to baseline and significantly differed from that in controls at multiple scales (p < 0.05). Moreover, the transition of the regularity of SBF in diabetics distinctively differed from that in controls (p < 0.05). These findings indicate that multiscale entropy provides a more comprehensive assessment of impaired microvascular reactivity in the diabetic foot than entropy measures based on a single scale, which strengthens the use of plantar SBF dynamics to assess the risk of DFUs. Full article
Open AccessArticle Mesoscopic Moment Equations for Heat Conduction: Characteristic Features and Slow–Fast Mode Decomposition
Entropy 2018, 20(2), 126; https://doi.org/10.3390/e20020126
Received: 21 December 2017 / Revised: 30 January 2018 / Accepted: 12 February 2018 / Published: 15 February 2018
PDF Full-text (946 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we derive different systems of mesoscopic moment equations for the heat-conduction problem and analyze the basic features they must satisfy. We discuss two- and three-equation systems, showing that the mesoscopic equation resulting from two-equation systems is of the telegraphist's type and complies with the Cattaneo equation in the Extended Irreversible Thermodynamics framework. The solution of the proposed systems is analyzed and shown to comprise two modes: a slow diffusive mode and a fast advective mode. The additional advective mode makes these systems suitable for heat transfer phenomena on fast time scales, such as high-frequency pulses and heat transfer in small-scale devices. Finally, we show that, if proper initial conditions are provided, the advective mode disappears, and the solution tends asymptotically to the transient solution of the classical parabolic heat-conduction equation. Full article
(This article belongs to the Section Thermodynamics)
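For reference, the two-equation closure discussed above corresponds to the Cattaneo law and, after eliminating the heat flux, to the telegraphist's equation (standard textbook forms, not transcribed from the paper):

```latex
% Cattaneo law: the heat flux q relaxes toward Fourier's law
% with relaxation time \tau
\tau\,\partial_t \mathbf{q} + \mathbf{q} = -\kappa\,\nabla T
% Combined with the energy balance \rho c\,\partial_t T + \nabla\cdot\mathbf{q} = 0,
% this yields the hyperbolic (telegraphist's) heat equation:
\tau\,\partial_{tt} T + \partial_t T = \alpha\,\nabla^2 T,
\qquad \alpha = \kappa/(\rho c)
```

Both modes from the abstract are visible here: for slow dynamics the term in \tau is negligible and the equation becomes parabolic (diffusive), while on fast time scales the wave-like term dominates.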
Open AccessArticle Dissolution or Growth of a Liquid Drop via Phase-Field Ternary Mixture Model Based on the Non-Random, Two-Liquid Equation
Entropy 2018, 20(2), 125; https://doi.org/10.3390/e20020125
Received: 8 January 2018 / Revised: 6 February 2018 / Accepted: 11 February 2018 / Published: 14 February 2018
PDF Full-text (950 KB) | HTML Full-text | XML Full-text
Abstract
We simulate the diffusion-driven dissolution or growth of a single-component liquid drop embedded in a continuous phase of a binary liquid. Our theoretical approach follows a diffuse-interface model of partially miscible ternary liquid mixtures that incorporates the non-random, two-liquid (NRTL) equation as a submodel for the enthalpic (so-called excess) component of the Gibbs energy of mixing, while its nonlocal part is represented by a square-gradient (Cahn-Hilliard-type) assumption. The governing equations of this phase-field ternary mixture model are simulated in 2D, showing that, for a drop that is highly miscible with one component of the continuous phase but essentially immiscible with the other, the size of the drop either shrinks to zero or reaches a stationary value, depending on whether the global composition of the mixture lies in the one-phase region or in the unstable range of the phase diagram. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics of Interfaces)
Open AccessArticle Adaptive Synchronization of Fractional-Order Complex-Valued Neural Networks with Discrete and Distributed Delays
Entropy 2018, 20(2), 124; https://doi.org/10.3390/e20020124
Received: 25 January 2018 / Revised: 10 February 2018 / Accepted: 11 February 2018 / Published: 13 February 2018
Cited by 2 | PDF Full-text (955 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the synchronization problem of fractional-order complex-valued neural networks with discrete and distributed delays is investigated. Based on adaptive control and Lyapunov function theory, sufficient conditions are derived to ensure that the states of two such networks rapidly achieve complete synchronization. Finally, numerical simulations illustrate the effectiveness and feasibility of the theoretical results. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
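The adaptive-control idea behind such synchronization results can be illustrated with a drastically simplified caricature: a scalar, integer-order, real-valued drive-response pair (the paper's fractional-order, complex-valued, delayed setting is far more involved), where the feedback gain k grows with the squared error until synchronization takes hold:

```python
import math

def simulate_adaptive_sync(T=20.0, dt=1e-3, gamma=5.0):
    # Drive system x and response system y with control u = -k * e,
    # where e = y - x, and adaptive gain law k' = gamma * e^2.
    f = lambda s: -s + math.tanh(s) + 0.5   # toy "neuron" dynamics
    x, y, k = 0.1, -0.4, 0.0
    for _ in range(int(T / dt)):
        e = y - x
        x += dt * f(x)
        y += dt * (f(y) - k * e)
        k += dt * gamma * e * e
    return abs(y - x), k
```

The gain stops growing once the error vanishes, so k settles at a finite value; Lyapunov arguments of the kind cited in the abstract make this rigorous.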
Open AccessFeature PaperArticle Kinetic Energy of a Free Quantum Brownian Particle
Entropy 2018, 20(2), 123; https://doi.org/10.3390/e20020123
Received: 29 December 2017 / Revised: 21 January 2018 / Accepted: 9 February 2018 / Published: 12 February 2018
PDF Full-text (482 KB) | HTML Full-text | XML Full-text
Abstract
We consider a paradigmatic model of a quantum Brownian particle coupled to a thermostat consisting of harmonic oscillators. In the framework of a generalized Langevin equation, the memory (damping) kernel is assumed to take the form of exponentially decaying oscillations. We discuss a quantum counterpart of the equipartition theorem for a free Brownian particle in a thermal equilibrium state. We conclude that the average kinetic energy of the Brownian particle equals the thermally averaged kinetic energy per degree of freedom of the environment's oscillators, additionally averaged over all possible oscillator frequencies with respect to a probability density in which the details of the particle-environment interaction enter via the parameters of the damping kernel. Full article
(This article belongs to the Special Issue Theoretical Aspect of Nonlinear Statistical Physics)
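The quantum counterpart of equipartition described above can be written compactly. The first expression is the standard thermally averaged kinetic energy of a harmonic oscillator of frequency \omega; the second paraphrases the abstract, with \mathbb{P}(\omega) denoting the kernel-dependent frequency density:

```latex
E_k(\omega) = \frac{\hbar\omega}{4}\,
              \coth\!\left(\frac{\hbar\omega}{2 k_B T}\right)
\qquad\Longrightarrow\qquad
\langle E_k \rangle = \int_0^{\infty} E_k(\omega)\,\mathbb{P}(\omega)\,\mathrm{d}\omega
```

In the high-temperature limit \coth(x) \approx 1/x, so E_k(\omega) \to k_B T / 2 and the classical equipartition theorem is recovered.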
Open AccessFeature PaperArticle Attraction Controls the Entropy of Fluctuations in Isosceles Triangular Networks
Entropy 2018, 20(2), 122; https://doi.org/10.3390/e20020122
Received: 26 January 2018 / Revised: 8 February 2018 / Accepted: 10 February 2018 / Published: 12 February 2018
Cited by 1 | PDF Full-text (1257 KB) | HTML Full-text | XML Full-text
Abstract
We study two-dimensional triangular-network models, which have degenerate ground states composed of straight or randomly zigzagging stripes and thus sub-extensive residual entropy. We show that attraction is responsible for the inversion of the stable phase, by changing the entropy of the fluctuations around the ground-state configurations. Using a real-space shell-expansion method, we compute an exact expression for the entropy for harmonic interactions, while for repulsive harmonic interactions we obtain the entropy arising from a limited subset of the system by numerical integration. We compare these results with a three-dimensional triangular-network model, which shows the same attraction-mediated selection mechanism of the stable phase, and conclude that this effect is general with respect to the dimensionality of the system. Full article
(This article belongs to the Special Issue Residual Entropy and Nonequilibrium States)
Open AccessFeature PaperArticle Stochastic Proximal Gradient Algorithms for Multi-Source Quantitative Photoacoustic Tomography
Entropy 2018, 20(2), 121; https://doi.org/10.3390/e20020121
Received: 13 December 2017 / Revised: 22 January 2018 / Accepted: 4 February 2018 / Published: 11 February 2018
PDF Full-text (821 KB) | HTML Full-text | XML Full-text
Abstract
The development of accurate and efficient image reconstruction algorithms is a central aspect of quantitative photoacoustic tomography (QPAT). In this paper, we address this issue for multi-source QPAT using the radiative transfer equation (RTE) as an accurate model for light transport. The tissue parameters are jointly reconstructed from the acoustic data measured for each of the applied sources. We develop stochastic proximal gradient methods for multi-source QPAT, which are more efficient than standard proximal gradient methods, in which a single iterative update has complexity proportional to the number of applied sources. Additionally, we introduce a completely new formulation of QPAT as a multilinear (MULL) inverse problem, which avoids explicitly solving the RTE. The MULL formulation of QPAT is again addressed with stochastic proximal gradient methods. Numerical results for both approaches are presented. Besides the introduction of stochastic proximal gradient algorithms to QPAT, we consider the new MULL formulation of QPAT to be the main contribution of this paper. Full article
(This article belongs to the Special Issue Probabilistic Methods for Inverse Problems)
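The efficiency argument in the abstract (touch one source per iteration instead of all of them) is the generic stochastic proximal gradient scheme x <- prox_{eta*g}(x - eta*grad f_s(x)) with s drawn at random. Below is a toy instance on a sparse least-squares problem; the RTE-based QPAT objective is of course far heavier, and all data here are made up:

```python
import math
import random

def soft_threshold(v, t):
    # Proximal operator of t * ||x||_1, applied elementwise.
    return [math.copysign(max(abs(vi) - t, 0.0), vi) for vi in v]

def stochastic_prox_grad(A_list, b_list, dim, eta=0.05, lam=0.01,
                         iters=4000, seed=0):
    # min_x (1/S) * sum_s 0.5 * ||A_s x - b_s||^2 + lam * ||x||_1
    # Each iteration uses a single randomly chosen source s, so the
    # per-iteration cost is independent of the number of sources S.
    rng = random.Random(seed)
    x = [0.0] * dim
    for _ in range(iters):
        s = rng.randrange(len(A_list))
        A, b = A_list[s], b_list[s]
        resid = [sum(Ai[j] * x[j] for j in range(dim)) - bi
                 for Ai, bi in zip(A, b)]
        grad = [sum(A[i][j] * resid[i] for i in range(len(A)))
                for j in range(dim)]
        x = soft_threshold([x[j] - eta * grad[j] for j in range(dim)],
                           eta * lam)
    return x
```

With many sources, each of which is expensive to evaluate (as in RTE-based QPAT), sampling one source per step is what makes the per-iteration cost tractable.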
Open AccessArticle Performance Evaluations on Using Entropy of Ultrasound Log-Compressed Envelope Images for Hepatic Steatosis Assessment: An In Vivo Animal Study
Entropy 2018, 20(2), 120; https://doi.org/10.3390/e20020120
Received: 8 January 2018 / Revised: 5 February 2018 / Accepted: 9 February 2018 / Published: 11 February 2018
PDF Full-text (3297 KB) | HTML Full-text | XML Full-text
Abstract
Ultrasound B-mode imaging based on log-compressed envelope data has been widely applied to examine hepatic steatosis. Modeling the raw backscattered signals returned from the liver parenchyma with statistical distributions can provide additional information to assist in diagnosing hepatic steatosis. Since raw data are not always available in modern ultrasound systems, information entropy, a widely known non-model-based approach, may allow ultrasound backscattering analysis of B-scan images for assessing hepatic steatosis. In this study, we explored the feasibility of ultrasound entropy imaging constructed from log-compressed backscattered envelopes for assessing hepatic steatosis. Different stages of hepatic steatosis were induced in male Wistar rats fed a methionine- and choline-deficient diet for 0 (normal control), 1, 1.5, and 2 weeks (n = 48; 12 rats per group). In vivo scanning of rat livers was performed using a commercial ultrasound machine (Model 3000, Terason, Burlington, MA, USA) equipped with a 7-MHz linear array transducer (Model 10L5, Terason) for ultrasound B-mode and entropy imaging based on uncompressed (HE image) and log-compressed envelopes (HB image), which were subsequently compared with histopathological examinations. Receiver operating characteristic (ROC) curve analysis and areas under the ROC curves (AUCs) were used to assess diagnostic performance. The results showed that ultrasound entropy imaging can be used to assess hepatic steatosis. The AUCs obtained from HE imaging for diagnosing different steatosis stages were 0.93 (≥mild), 0.89 (≥moderate), and 0.89 (≥severe). HB imaging produced AUCs ranging from 0.74 (≥mild) to 0.84 (≥severe), provided that a sufficiently large number of bins was used to reconstruct the signal histogram for estimating entropy. The results indicate that entropy enables ultrasound parametric imaging based on log-compressed envelope signals, with great potential for diagnosing hepatic steatosis. Full article
(This article belongs to the Section Information Theory)
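The entropy estimate at the core of such imaging is the Shannon entropy of the amplitude histogram of envelope samples inside a sliding window. A minimal sketch follows; the window handling and default bin count are illustrative, and the abstract's point is precisely that the bin number matters for log-compressed data:

```python
import math

def window_entropy(samples, bins=40):
    # Shannon entropy (bits) of the amplitude histogram of one window
    # of envelope samples; sliding this over the image yields the
    # parametric entropy map.
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0   # guard against a constant window
    counts = [0] * bins
    for v in samples:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

A larger `bins` better resolves the narrow amplitude histogram of log-compressed envelopes, consistent with the abstract's observation that HB imaging needs a higher bin count.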
Open AccessArticle Exergy Analysis of the Musculoskeletal System Efficiency during Aerobic and Anaerobic Activities
Entropy 2018, 20(2), 119; https://doi.org/10.3390/e20020119
Received: 19 December 2017 / Revised: 31 January 2018 / Accepted: 9 February 2018 / Published: 11 February 2018
Cited by 2 | PDF Full-text (926 KB) | HTML Full-text | XML Full-text
Abstract
The first and second laws of thermodynamics were applied to the human body to evaluate the quality of energy conversion during muscle activity. This is an important issue in the exergy analysis of the body, because the literature reports difficulty in evaluating the power performed in some activities. Hence, to have the performed work as an input to the exergy model, two types of exercise were evaluated: weight lifting and aerobic exercise on a stationary bicycle. To this end, we studied the aerobic and anaerobic reactions in muscle cells, aiming to predict the metabolic and muscle efficiencies during exercise. Physiological data such as oxygen consumption, carbon dioxide production, skin and internal temperatures, and performed power were measured. The results indicate that the exergy efficiency was around 4% for weight lifting, whereas it reached values as high as 30% for aerobic exercise. The stationary bicycle was thus shown to be the more adequate test for first correlations between exergy and performance indices. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Performance of Segmented Thermoelectric Cooler Micro-Elements with Different Geometric Shapes and Temperature-Dependent Properties
Entropy 2018, 20(2), 118; https://doi.org/10.3390/e20020118
Received: 31 December 2017 / Revised: 7 February 2018 / Accepted: 8 February 2018 / Published: 11 February 2018
PDF Full-text (2222 KB) | HTML Full-text | XML Full-text
Abstract
In this work, the influence of the Thomson effect and of the geometry of the p-type segmented leg on the performance of a segmented thermoelectric microcooler (STEMC) was examined. The effects of the geometry and material configuration of the p-type segmented leg on the cooling power (Q_c) and coefficient of performance (COP) were investigated. The influence of the cross-sectional area ratio of the two joined segments on device performance was also evaluated. We analyzed a one-dimensional p-type segmented leg model composed of two different semiconductor materials, Bi2Te3 and (Bi0.5Sb0.5)2Te3. Considering the three most common p-type leg geometries, we studied both single-material systems (the same material for both segments) and segmented systems (a different material for each segment). The COP, Q_c, and temperature profile were evaluated for each of the modeled geometric configurations under a fixed temperature difference of ΔT = 30 K. The performance of the STEMC was evaluated using two models, the constant-properties material (CPM) model and the temperature-dependent-properties material (TDPM) model, considering the thermal conductivity κ(T), electrical conductivity σ(T), and Seebeck coefficient α(T). We considered the influence of the Thomson effect on the COP and Q_c using the TDPM model. The results revealed the optimal material configuration for each segment of the p-type leg. According to the proposed geometric models, the optimal leg geometry and electrical current for maximum performance were determined. After consideration of the Thomson effect, the STEMC system was found to deliver a maximum cooling power 5.10% higher than that of the single-material system. The results also showed that the inverse system (where the material with the higher Seebeck coefficient is used for the first segment) outperformed the direct system, with improvements in the COP and Q_c of 6.67% and 29.25%, respectively. Finally, analysis of the relationship between the areas of the STEMC segments demonstrated that increasing the cross-sectional area of the second segment improved the COP and Q_c by 16.67% and 8.03%, respectively. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)
Open AccessArticle Bayesian Technique for the Selection of Probability Distributions for Frequency Analyses of Hydrometeorological Extremes
Entropy 2018, 20(2), 117; https://doi.org/10.3390/e20020117
Received: 13 November 2017 / Revised: 11 January 2018 / Accepted: 16 January 2018 / Published: 11 February 2018
Cited by 12 | PDF Full-text (1761 KB) | HTML Full-text | XML Full-text
Abstract
Frequency analysis of hydrometeorological extremes plays an important role in the design of hydraulic structures. A multitude of distributions have been employed for hydrological frequency analysis, and more than one distribution is often found to be adequate. Current methods for selecting the best-fitted distribution are not very objective. Using different kinds of constraints, entropy theory was employed in this study to derive five generalized distributions for frequency analysis: the generalized gamma (GG) distribution, the generalized beta distribution of the second kind (GB2), and the Halphen type A (Hal-A), Halphen type B (Hal-B), and Halphen type inverse B (Hal-IB) distributions. The Bayesian technique was then employed to objectively select the optimal distribution. The selection method was tested using simulation as well as extreme daily and hourly rainfall data from the Mississippi. The results showed that the Bayesian technique was able to select the best-fitted distribution, providing a new way of selecting models for frequency analysis of hydrometeorological extremes. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
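One common way to make such a Bayesian model comparison concrete is to approximate each candidate's marginal likelihood with BIC and convert the differences into posterior model probabilities. This is a generic device, not necessarily the exact criterion used in the paper:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # Bayesian information criterion: lower is better; exp(-BIC/2)
    # approximates the marginal likelihood of the fitted model.
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

def model_posteriors(bic_values):
    # Posterior model probabilities from BIC differences,
    # assuming an equal prior over the candidate distributions.
    m = min(bic_values)
    w = [math.exp(-0.5 * (b - m)) for b in bic_values]
    total = sum(w)
    return [wi / total for wi in w]
```

For five candidates such as GG, GB2, and the three Halphen families, the distribution with the highest posterior probability would be selected.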