Table of Contents

Entropy, Volume 19, Issue 8 (August 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-35

Editorial

Jump to: Research, Review

Open Access Editorial: Complexity, Criticality and Computation
Entropy 2017, 19(8), 403; doi:10.3390/e19080403
Received: 3 August 2017 / Revised: 3 August 2017 / Accepted: 3 August 2017 / Published: 4 August 2017
PDF Full-text (150 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Research

Jump to: Editorial, Review

Open Access Article: Planck-Scale Soccer-Ball Problem: A Case of Mistaken Identity
Entropy 2017, 19(8), 400; doi:10.3390/e19080400
Received: 23 April 2017 / Revised: 10 July 2017 / Accepted: 18 July 2017 / Published: 2 August 2017
PDF Full-text (231 KB) | HTML Full-text | XML Full-text
Abstract
Over the last decade, it has been found that nonlinear laws of composition of momenta are predicted by some alternative approaches to “real” 4D quantum gravity, and by all formulations of dimensionally-reduced (3D) quantum gravity coupled to matter. The possible relevance for rather different quantum-gravity models has motivated several studies, but this interest is being tempered by concerns that a nonlinear law of addition of momenta might inevitably produce a pathological description of the total momentum of a macroscopic body. Here I show that such concerns are unjustified, finding that they are rooted in a failure to appreciate the differences between two roles of laws of composition of momenta in physics. Previous results relied exclusively on the role of a law of momentum composition in the description of spacetime locality. However, the notion of the total momentum of a multi-particle system is not a manifestation of locality, but rather reflects translational invariance. By working within an illustrative example of quantum spacetime, I show explicitly that spacetime locality is indeed reflected in a nonlinear law of composition of momenta, but translational invariance still results in an undeformed linear law of addition of momenta building up the total momentum of a multi-particle system. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open Access Article: On the Modelling and Control of a Laboratory Prototype of a Hydraulic Canal Based on a TITO Fractional-Order Model
Entropy 2017, 19(8), 401; doi:10.3390/e19080401
Received: 30 June 2017 / Revised: 28 July 2017 / Accepted: 1 August 2017 / Published: 3 August 2017
PDF Full-text (6305 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a two-input, two-output (TITO) fractional-order mathematical model of a laboratory prototype of a hydraulic canal is proposed. This canal is made up of two pools that interact strongly with each other. The inputs of the TITO model are the pump flow and the opening of an intermediate gate, and the two outputs are the water levels in the two pools. The parameters of the mathematical models have been identified from experiments performed on the laboratory prototype. Then, considering the TITO model, a first control loop of the pump is closed to reproduce real-world conditions in which the water level of the first pool does not depend on the opening of the upstream gate, leading to an equivalent single-input, single-output (SISO) system. Comparison of the resulting system with the classical first-order systems typically used to model hydraulic canals shows that the proposed model has a significantly lower error (about 50% lower) and therefore captures the canal dynamics more accurately. The model has also been used to optimize the design of the canal's pump controller, achieving a faster response to step commands and minimizing the interaction between the two pools of the experimental platform. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Figures

Figure 1

Open Access Article: Optimal Belief Approximation
Entropy 2017, 19(8), 402; doi:10.3390/e19080402
Received: 18 April 2017 / Revised: 4 July 2017 / Accepted: 5 July 2017 / Published: 4 August 2017
PDF Full-text (361 KB) | HTML Full-text | XML Full-text
Abstract
In Bayesian statistics, probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically, and approximations of beliefs are needed. We seek a loss function that quantifies how “embarrassing” it is to communicate a given approximation. We reproduce and discuss an old proof showing that there is only one ranking under the requirements that (1) the best-ranked approximation is the non-approximated belief and (2) the ranking judges approximations only by their predictions for actual outcomes. The loss function obtained in the derivation is equal to the Kullback-Leibler divergence when normalized. This loss function is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments, the approximated and non-approximated beliefs, should be used. The correct order ensures that the recipient of a communication is deprived of only the minimal amount of information. We hope that the elementary derivation settles the apparent confusion. For example, when approximating beliefs with Gaussian distributions, the optimal approximation is given by moment matching. This is in contrast to many suggested computational schemes. Full article
(This article belongs to the Section Information Theory)
Figures

Figure 1
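The key claim of this abstract, that minimizing KL(p ‖ q) with the non-approximated belief p as the first argument yields moment matching for Gaussian approximations, can be checked numerically. The sketch below illustrates the claim only and is not code from the paper; the bimodal mixture p, the grid, and the search ranges are arbitrary choices.

```python
import numpy as np

# Discretized "true" belief p: a bimodal Gaussian mixture on a grid.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
p = 0.6 * np.exp(-0.5 * (x + 2) ** 2) + 0.4 * np.exp(-0.5 * (x - 3) ** 2)
p /= p.sum() * dx

def kl(p, q, dx):
    """KL(p || q) on a shared grid; p is the true belief, q its approximation."""
    return np.sum(p * np.log(p / q)) * dx

def gauss(x, m, v):
    """Gaussian density with mean m and variance v."""
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

# Moment matching: mean and variance of p.
mu = np.sum(x * p) * dx
var = np.sum((x - mu) ** 2 * p) * dx

# A coarse grid search over Gaussian parameters: the minimizer of
# KL(p || q) coincides with the moment-matched mean and variance.
best = min((kl(p, gauss(x, m, v), dx), m, v)
           for m in np.linspace(-4, 5, 91)
           for v in np.linspace(0.5, 15, 60))
```

Minimizing the reversed divergence KL(q ‖ p) would instead lock onto a single mode, which is exactly the argument-order distinction the abstract is concerned with.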

Open Access Article: Thermal Transport and Entropy Production Mechanisms in a Turbulent Round Jet at Supercritical Thermodynamic Conditions
Entropy 2017, 19(8), 404; doi:10.3390/e19080404
Received: 1 July 2017 / Revised: 29 July 2017 / Accepted: 2 August 2017 / Published: 5 August 2017
PDF Full-text (8530 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In the present paper, thermal transport and entropy production mechanisms in a turbulent round jet of compressed nitrogen at supercritical thermodynamic conditions are investigated using direct numerical simulation. First, thermal transport and its contribution to mixture formation, along with the anisotropy of heat fluxes and temperature scales, are examined. Secondly, the entropy production rates during thermofluid processes evolving in the supercritical flow are investigated in order to identify the causes of irreversibilities and to reveal advantageous locations of handling along with the process regimes favorable to mixing. It turns out that (1) the jet disintegration process consists of four main stages under supercritical conditions (potential core, separation, pseudo-boiling, turbulent mixing), (2) causes of irreversibilities are primarily due to heat transport and thermodynamic effects rather than turbulence dynamics, and (3) heat fluxes and temperature scales appear anisotropic even at the smallest scales, which implies that anisotropic thermal diffusivity models might be appropriate in the context of both Reynolds-averaged Navier–Stokes (RANS) and large eddy simulation (LES) approaches while numerically modeling supercritical fluid flows. Full article
(This article belongs to the Section Thermodynamics)
Figures

Open Access Article: Intrinsic Losses Based on Information Geometry and Their Applications
Entropy 2017, 19(8), 405; doi:10.3390/e19080405
Received: 30 April 2017 / Revised: 21 July 2017 / Accepted: 3 August 2017 / Published: 6 August 2017
PDF Full-text (837 KB) | HTML Full-text | XML Full-text
Abstract
One main interest of information geometry is to study the properties of statistical models that do not depend on the coordinate system or model parametrization; thus, it may serve as an analytic tool for intrinsic inference in statistics. In this paper, under the framework of Riemannian geometry and dual geometry, we revisit two commonly-used intrinsic losses, given respectively by the squared Rao distance and the symmetrized Kullback–Leibler divergence (or Jeffreys divergence). For an exponential family endowed with the Fisher metric and α-connections, the two loss functions are uniformly described as the energy difference along an α-geodesic path, for some α ∈ {−1, 0, 1}. Subsequently, the two intrinsic losses are utilized to develop Bayesian analyses of covariance matrix estimation and range-spread target detection. We provide an intrinsically unbiased covariance estimator, which is verified to be asymptotically efficient in terms of the intrinsic mean square error. The decision rules deduced by the intrinsic Bayesian criterion provide a geometrical justification for the constant false alarm rate detector based on the generalized likelihood ratio principle. Full article
(This article belongs to the Special Issue Information Geometry II)
Figures

Figure 1

Open Access Article: Statistical Process Control for Unimodal Distribution Based on Maximum Entropy Distribution Approximation
Entropy 2017, 19(8), 406; doi:10.3390/e19080406
Received: 21 June 2017 / Revised: 18 July 2017 / Accepted: 4 August 2017 / Published: 7 August 2017
PDF Full-text (306 KB) | HTML Full-text | XML Full-text
Abstract
In statistical process control, the control chart utilizing the idea of maximum entropy distribution density level sets has been proven to perform well for monitoring quantities with multimodal distributions. However, it is too complicated to implement for quantities with unimodal distributions. This article proposes a simplified method based on maximum entropy for control chart design when the monitored quantity has a unimodal distribution. First, we use the maximum entropy distribution to approximate the unknown distribution of the monitored quantity. Then we directly take the value of the quantity as the monitoring statistic. Finally, the Lebesgue measure is applied to estimate the acceptance regions, and the one with minimum volume is chosen as the optimal in-control region of the monitored quantity. The results from two cases show that the proposed method has a higher detection capability than conventional control chart techniques when the monitored quantity has an asymmetric unimodal distribution. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Figures

Figure 1
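The minimum-volume acceptance region idea can be illustrated with a short sketch. Note this is a simplified stand-in for the paper's procedure, not the authors' code: it finds the shortest interval directly from sorted samples instead of fitting a maximum entropy density, and the gamma distribution is just an arbitrary example of an asymmetric unimodal quantity.

```python
import numpy as np

def min_volume_interval(samples, coverage=0.95):
    """Shortest interval containing `coverage` of the samples: the
    minimum-Lebesgue-measure in-control region for a unimodal quantity."""
    s = np.sort(samples)
    k = int(np.ceil(coverage * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]   # width of every k-sample window
    i = int(np.argmin(widths))
    return s[i], s[i + k - 1]

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # asymmetric unimodal data
lo, hi = min_volume_interval(x, 0.95)
# For right-skewed data the shortest 95% interval hugs the mode (here at 1.0),
# unlike the equal-tail limits of a conventional Shewhart-style chart.
```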

Open Access Article: Generalized Maxwell Relations in Thermodynamics with Metric Derivatives
Entropy 2017, 19(8), 407; doi:10.3390/e19080407
Received: 5 July 2017 / Revised: 25 July 2017 / Accepted: 27 July 2017 / Published: 7 August 2017
PDF Full-text (246 KB) | HTML Full-text | XML Full-text
Abstract
In this contribution, we develop the generalized Maxwell thermodynamical relations via the metric derivative model upon the mapping to a continuous fractal space. This study also introduces the total q-derivative expressions depending on two variables, to describe nonextensive statistical mechanics, and also the α-total differentiation with conformable derivatives. Some results in the literature are re-obtained, such as the physical temperature defined by Sumiyoshi Abe. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Open Access Feature Paper Article: Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
Entropy 2017, 19(8), 408; doi:10.3390/e19080408
Received: 21 June 2017 / Revised: 3 August 2017 / Accepted: 7 August 2017 / Published: 8 August 2017
PDF Full-text (1829 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information decomposition and partial information decomposition, can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology firstly to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone. Full article
Figures

Figure 1
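For jointly Gaussian variables, every mutual information term entering such a decomposition follows in closed form from covariance determinants. The sketch below illustrates this at a single time scale for a toy target Y = X1 + X2 + noise; the covariance values are arbitrary, and the minimum mutual information (MMI) redundancy used here is one common choice for Gaussian partial information decomposition, not necessarily the exact definition adopted in the paper, whose full multiscale state-space machinery is not reproduced.

```python
import numpy as np

def gaussian_mi(cov, ix, iy):
    """I(X;Y) in nats for jointly Gaussian blocks ix, iy of covariance cov."""
    det = lambda idx: np.linalg.det(cov[np.ix_(idx, idx)])
    return 0.5 * np.log(det(ix) * det(iy) / det(ix + iy))

# Covariance of (X1, X2, N): correlated sources, independent noise.
sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 0.1]])
# Linear map to (X1, X2, Y) with Y = X1 + X2 + N.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
C = A @ sigma @ A.T

i1 = gaussian_mi(C, [0], [2])        # I(X1; Y)
i2 = gaussian_mi(C, [1], [2])        # I(X2; Y)
i12 = gaussian_mi(C, [0, 1], [2])    # I(X1, X2; Y)

interaction_info = i12 - i1 - i2         # interaction information decomposition
redundancy = min(i1, i2)                 # MMI redundancy
synergy = i12 - i1 - i2 + redundancy     # implied PID synergy
```

For this additive toy system the joint information exceeds the sum of the individual terms, so the source interaction is net synergistic.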

Open Access Article: Solutions to the Cosmic Initial Entropy Problem without Equilibrium Initial Conditions
Entropy 2017, 19(8), 411; doi:10.3390/e19080411
Received: 30 June 2017 / Revised: 31 July 2017 / Accepted: 7 August 2017 / Published: 10 August 2017
PDF Full-text (1511 KB) | HTML Full-text | XML Full-text
Abstract
The entropy of the observable universe is increasing. Thus, at earlier times the entropy was lower. However, the cosmic microwave background radiation reveals an apparently high entropy universe close to thermal and chemical equilibrium. A two-part solution to this cosmic initial entropy problem is proposed. Following Penrose, we argue that the evenly distributed matter of the early universe is equivalent to low gravitational entropy. There are two competing explanations for how this initial low gravitational entropy comes about. (1) Inflation and baryogenesis produce a virtually homogeneous distribution of matter with a low gravitational entropy. (2) Dissatisfied with explaining a low gravitational entropy as the product of a ‘special’ scalar field, some theorists argue (following Boltzmann) for a “more natural” initial condition in which the entire universe is in an initial equilibrium state of maximum entropy. In this equilibrium model, our observable universe is an unusual low entropy fluctuation embedded in a high entropy universe. The anthropic principle and the fluctuation theorem suggest that this low entropy region should be as small as possible and have as large an entropy as possible, consistent with our existence. However, our low entropy universe is much larger than needed to produce observers, and we see no evidence for an embedding in a higher entropy background. The initial conditions of inflationary models are as natural as the equilibrium background favored by many theorists. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Figures

Figure 1

Open Access Article: Relationship between Entropy, Corporate Entrepreneurship and Organizational Capabilities in Romanian Medium Sized Enterprises
Entropy 2017, 19(8), 412; doi:10.3390/e19080412
Received: 12 July 2017 / Revised: 31 July 2017 / Accepted: 8 August 2017 / Published: 10 August 2017
PDF Full-text (1194 KB) | HTML Full-text | XML Full-text
Abstract
This paper analyses the relations between entropy, organizational capabilities and corporate entrepreneurship. The results indicate strong links between strategy and corporate entrepreneurship, moderated by organizational capabilities. We find that companies with strong organizational capabilities, using a systematic strategic approach, widely use corporate entrepreneurship as an instrument to fulfil their objectives. Our study contributes to the limited empirical research on entropy in an organizational setting by highlighting the boundary conditions of this impact through the moderating effect of firms’ organizational capabilities, and also to the development of Econophysics as a fast-growing area of interdisciplinary science. Full article
(This article belongs to the Section Statistical Mechanics)
Figures

Figure 1

Open Access Article: The Emergence of Hyperchaos and Synchronization in Networks with Discrete Periodic Oscillators
Entropy 2017, 19(8), 413; doi:10.3390/e19080413
Received: 9 May 2017 / Revised: 8 August 2017 / Accepted: 8 August 2017 / Published: 16 August 2017
PDF Full-text (4788 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the emergence of hyperchaos in a network with two very simple discrete periodic oscillators is presented. Uncoupled periodic oscillators may represent, in the crudest and simplest form, periodic oscillators in nature, for example fireflies, crickets and the menstrual cycles of women, among others. Nevertheless, the emergence of hyperchaos in this kind of real-life network has not been proven. In particular, we focus this study on the emergence of hyperchaotic dynamics, given that these are mainly used in engineering applications such as cryptography, secure communications, biometric systems and telemedicine, among others. In order to corroborate that the emerging dynamics are hyperchaotic, several chaos and hyperchaos verification tests are conducted. In addition, the presented hyperchaotic coupled system synchronizes, based on the proposed coupling scheme. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Figures

Figure 1

Open Access Article: Effect of Slip Conditions and Entropy Generation Analysis with an Effective Prandtl Number Model on a Nanofluid Flow through a Stretching Sheet
Entropy 2017, 19(8), 414; doi:10.3390/e19080414
Received: 14 July 2017 / Revised: 8 August 2017 / Accepted: 9 August 2017 / Published: 11 August 2017
PDF Full-text (6763 KB) | HTML Full-text | XML Full-text
Abstract
This article describes the impact of slip conditions on nanofluid flow through a stretching sheet. Nanofluids are very helpful for enhancing the convective heat transfer in a boundary layer flow, and the Prandtl number also plays a major role in controlling the thermal and momentum boundary layers. For this purpose, we have considered an effective Prandtl number model, obtained by means of experimental analysis, for a steady, two-dimensional, incompressible nano-boundary-layer flow through a stretching sheet. We have considered γAl2O3-H2O and Al2O3-C2H6O2 nanoparticles for the governing flow problem. An entropy generation analysis is also presented with the help of the second law of thermodynamics. A numerical technique known as the Successive Taylor Series Linearization Method (STSLM) is used to solve the obtained governing nonlinear boundary layer equations. The numerical and graphical results are discussed for two cases, i.e., (i) with the effective Prandtl number and (ii) without the effective Prandtl number. From the graphical results, it is observed that the velocity and temperature profiles increase in the absence of the effective Prandtl number, while both become larger in its presence. Further, a numerical comparison has been presented with previously published results to validate the current methodology and results. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Figures

Figure 1

Open Access Article: Person-Situation Debate Revisited: Phase Transitions with Quenched and Annealed Disorders
Entropy 2017, 19(8), 415; doi:10.3390/e19080415
Received: 30 June 2017 / Revised: 8 August 2017 / Accepted: 9 August 2017 / Published: 13 August 2017
PDF Full-text (447 KB) | HTML Full-text | XML Full-text
Abstract
We study the q-voter model driven by stochastic noise arising from one of two types of nonconformity: anticonformity or independence. We compare two approaches that were inspired by the famous psychological controversy known as the person–situation debate. We relate the person approach to quenched disorder and the situation approach to annealed disorder, and investigate how these two approaches influence the order–disorder phase transitions observed in the q-voter model with noise. We show that under quenched disorder, differences between models with independence and anticonformity are weaker and only quantitative. In contrast, annealing has a much more profound impact on the system and leads to qualitative differences between models on a macroscopic level. Furthermore, only under annealed disorder may discontinuous phase transitions appear. It seems that freezing the agents’ behavior at the beginning of the simulation (introducing quenched disorder) supports second-order phase transitions, whereas allowing agents to reverse their attitude in time (incorporating annealed disorder) supports discontinuous ones. We show that anticonformity is insensitive to the type of disorder, and in all cases it gives the same result. We precede our study with a short insight from statistical physics into annealed vs. quenched disorder and a brief review of these two approaches in models of opinion dynamics. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Figures

Figure 1
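A minimal mean-field simulation makes the quenched/annealed distinction concrete for the independence-noise case. This is an illustrative toy, not the authors' implementation: with annealed disorder the noisy agents are redrawn at every update, while with quenched disorder a fixed fraction p of agents is independent forever; all parameter values are arbitrary.

```python
import numpy as np

def qvoter_independence(n=500, q=2, p=0.05, sweeps=200, annealed=True, seed=1):
    """Mean-field q-voter model with independence noise."""
    rng = np.random.default_rng(seed)
    s = np.ones(n, dtype=int)           # ordered initial state, m = 1
    frozen = rng.random(n) < p          # used only in the quenched variant
    for _ in range(sweeps * n):
        i = rng.integers(n)
        noisy = (rng.random() < p) if annealed else frozen[i]
        if noisy:
            s[i] = rng.choice([-1, 1])  # independence: adopt a random opinion
        else:
            panel = s[rng.integers(0, n, q)]   # mean-field q-panel
            if abs(panel.sum()) == q:          # unanimous panel -> conform
                s[i] = panel[0]
    return s.mean()                     # magnetization

m_ordered = qvoter_independence(p=0.05)  # weak noise: stays ordered
m_noisy = qvoter_independence(p=1.0)     # pure noise: magnetization near 0
```

Sweeping p for the annealed and quenched variants separately, and watching how the magnetization drops, is the kind of experiment in which the continuous vs. discontinuous transitions discussed in the abstract would show up.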

Open Access Article: Quantum Genetic Learning Control of Quantum Ensembles with Hamiltonian Uncertainties
Entropy 2017, 19(8), 376; doi:10.3390/e19080376
Received: 9 April 2017 / Revised: 5 July 2017 / Accepted: 19 July 2017 / Published: 1 August 2017
PDF Full-text (1213 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new method is designed for controlling a quantum ensemble whose members have uncertainties in their Hamiltonian parameters. By combining sampling-based learning control (SLC) and a new quantum genetic algorithm (QGA), control of an ensemble of two-level quantum systems with Hamiltonian uncertainties is achieved. To simultaneously transfer the ensemble members to a desired state, an SLC algorithm is designed. To reduce the transfer error significantly, an optimization problem is defined, which is solved with the QGA method, chosen for its advantages and its suitability to the nature of the problem. For this purpose, N samples are generated by sampling the uncertainty parameters from a uniform distribution, and an augmented system is created. Using the QGA in the training step, the best control signal is obtained. To test the performance and validity of the method, the obtained control is implemented for some randomly selected samples. A couple of examples are simulated to investigate the proposed model. The simulation results indicate the effectiveness and advantages of the proposed method. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Figures

Figure 1

Open Access Article: Kinetic Theory beyond the Stosszahlansatz
Entropy 2017, 19(8), 381; doi:10.3390/e19080381
Received: 8 June 2017 / Revised: 20 July 2017 / Accepted: 21 July 2017 / Published: 25 July 2017
PDF Full-text (237 KB) | HTML Full-text | XML Full-text
Abstract
In a recent paper (Chliamovitch, et al., 2015), we suggested using the principle of maximum entropy to generalize Boltzmann’s Stosszahlansatz to higher-order distribution functions. This conceptual shift of focus allowed us to derive an analog of the Boltzmann equation for the two-particle distribution function. While we only briefly mentioned there the possibility of a hydrodynamical treatment, we complete here a crucial step towards this program. We discuss bilocal collisional invariants, from which we deduce the two-particle stationary distribution. This allows for the existence of equilibrium states in which the momenta of particles are correlated, as well as for the existence of a fourth conserved quantity besides mass, momentum and kinetic energy. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open Access Article: Toward a Theory of Industrial Supply Networks: A Multi-Level Perspective via Network Analysis
Entropy 2017, 19(8), 382; doi:10.3390/e19080382
Received: 30 May 2017 / Revised: 21 July 2017 / Accepted: 21 July 2017 / Published: 25 July 2017
PDF Full-text (22422 KB) | HTML Full-text | XML Full-text
Abstract
In most supply chains (SCs), transaction relationships between suppliers and customers are commonly considered to be an extrapolation from a linear perspective. However, this traditional linear concept of an SC is egotistic and oversimplified and does not sufficiently reflect the complex and cyclical structure of supplier-customer relationships in current economic and industrial situations. The interactional relationships and topological characteristics between suppliers and customers should be analyzed using supply networks (SNs) rather than traditional linear SCs. Therefore, this paper reconceptualizes SCs as SNs in complex adaptive systems (CAS), and presents three main contributions. First, we propose an integrated framework of CAS networks by synthesizing multi-level network analysis from the network, community and vertex perspectives. The CAS perspective enables us to understand the advances of SN properties. Second, in order to emphasize the CAS properties of SNs, we constructed a real-world SN based on the Japanese industry and describe an advanced investigation of SN theory. The CAS properties help to enrich SN theory, which can benefit SN management, community economics and industrial resilience. Third, we propose a quantitative entropy metric to measure the complexity and robustness of SNs. The results not only support a specific understanding of the structural outcomes relevant to SNs, but also deliver efficient and effective support for the management and design of SNs. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Figures

Figure 1

Open Access Article: Avalanching Systems with Longer Range Connectivity: Occurrence of a Crossover Phenomenon and Multifractal Finite Size Scaling
Entropy 2017, 19(8), 383; doi:10.3390/e19080383
Received: 4 April 2017 / Revised: 11 July 2017 / Accepted: 13 July 2017 / Published: 26 July 2017
PDF Full-text (690 KB) | HTML Full-text | XML Full-text
Abstract
Many out-of-equilibrium systems respond to external driving with nonlinear and self-similar dynamics. This near scale-invariant behavior of relaxation events has been modeled through sand pile cellular automata. However, a common feature of these models is the assumption of a local connectivity, while in many real systems, we have evidence for longer range connectivity and a complex topology of the interacting structures. Here, we investigate the role that longer range connectivity might play in near scale-invariant systems, by analyzing the results of a sand pile cellular automaton model on a Newman–Watts network. The analysis clearly indicates the occurrence of a crossover phenomenon in the statistics of the relaxation events as a function of the percentage of longer range links and the breaking of the simple Finite Size Scaling (FSS). The more complex nature of the dynamics in the presence of long-range connectivity is investigated in terms of multi-scaling features and analyzed by the Rank-Ordered Multifractal Analysis (ROMA). Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
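The setup described above is straightforward to prototype. The sketch below is a toy version only: it assumes a BTW-style threshold rule (a node topples when its height reaches its degree) and a small per-grain bulk dissipation probability standing in for open boundaries; these specific choices are illustrative, not the paper's exact model.

```python
import random

def newman_watts_ring(n, p, seed=0):
    """Ring lattice plus shortcuts: each node keeps its two nearest
    neighbours and, with probability p, gains one long-range link."""
    rng = random.Random(seed)
    nbrs = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        if rng.random() < p:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def avalanche_sizes(nbrs, grains=2000, loss=0.1, seed=1):
    """Sandpile on an arbitrary graph: drop one grain at a random node;
    any node holding >= deg(node) grains topples, sending one grain per
    neighbour. Each moving grain is dissipated with probability `loss`
    (global dissipation keeps avalanches finite on a closed graph).
    Returns the number of topplings in each avalanche."""
    rng = random.Random(seed)
    n = len(nbrs)
    height = [0] * n
    sizes = []
    for _ in range(grains):
        height[rng.randrange(n)] += 1
        size = 0
        stack = [i for i in range(n) if height[i] >= len(nbrs[i])]
        while stack:
            i = stack.pop()
            deg = len(nbrs[i])
            if height[i] < deg:
                continue          # stale stack entry; node already stable
            height[i] -= deg
            size += 1
            for j in nbrs[i]:
                if rng.random() < loss:
                    continue      # grain leaves the system
                height[j] += 1
                if height[j] >= len(nbrs[j]):
                    stack.append(j)
            if height[i] >= deg:  # node may still be unstable
                stack.append(i)
        sizes.append(size)
    return sizes
```

Sweeping `p` (the fraction of long-range links) and inspecting the avalanche-size statistics is the kind of experiment the abstract describes.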

Open AccessArticle Securing Relay Networks with Artificial Noise: An Error Performance-Based Approach
Entropy 2017, 19(8), 384; doi:10.3390/e19080384
Received: 24 June 2017 / Revised: 17 July 2017 / Accepted: 18 July 2017 / Published: 26 July 2017
PDF Full-text (1087 KB) | HTML Full-text | XML Full-text
Abstract
We apply the concept of artificial and controlled interference in a two-hop relay network with an untrusted relay, aiming at enhancing the wireless communication secrecy between the source and the destination node. In order to shield the square quadrature amplitude-modulated (QAM) signals transmitted from the source node to the relay, the destination node designs and transmits artificial noise (AN) symbols to jam the relay reception. The objective of our considered AN design is to degrade the error probability performance at the untrusted relay, for different types of channel state information (CSI) at the destination. By considering perfect knowledge of the instantaneous CSI of the source-to-relay and relay-to-destination links, we first present an analytical expression for the symbol error rate (SER) performance at the relay. Based on the assumption of an average power constraint at the destination node, we then derive the optimal phase and power distribution of the AN that maximizes the SER at the relay. Furthermore, we obtain the optimal AN design for the case where only statistical CSI is available at the destination node. For both cases, our study reveals that the Gaussian distribution is generally not optimal to generate AN symbols. The presented AN design takes into account practical parameters for the communication links, such as QAM signaling and maximum likelihood decoding. Full article
(This article belongs to the Special Issue Network Information Theory)

Open AccessArticle Noise Robustness Analysis of Performance for EEG-Based Driver Fatigue Detection Using Different Entropy Feature Sets
Entropy 2017, 19(8), 385; doi:10.3390/e19080385
Received: 16 June 2017 / Revised: 24 July 2017 / Accepted: 25 July 2017 / Published: 27 July 2017
PDF Full-text (3612 KB) | HTML Full-text | XML Full-text
Abstract
Driver fatigue is an important factor in traffic accidents, and the development of a detection system for driver fatigue is of great significance. To estimate and prevent driver fatigue, various classifiers based on electroencephalogram (EEG) signals have been developed; however, as EEG signals have inherent non-stationary characteristics, their detection performance is often deteriorated by background noise. To investigate the effects of noise on detection performance, simulated Gaussian noise, spike noise, and electromyogram (EMG) noise were added into a raw EEG signal. Four types of entropies, including sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as feature sets. Three base classifiers (K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Decision Tree (DT)) and two ensemble methods (Bootstrap Aggregating (Bagging) and Boosting) were employed and compared. Results showed that: (1) the simulated Gaussian noise and EMG noise had an impact on accuracy, while the simulated spike noise did not, which is of great significance for the future application of driver fatigue detection; (2) noise robustness differed across classifiers: DT was the most robust and SVM the least; (3) noise robustness also differed across feature sets, with FE and the combined feature set being the most robust; and (4) while the Bagging method could not significantly improve performance against noise addition, the Boosting method may significantly improve performance against superimposed Gaussian and EMG noise. The entropy feature extraction method can not only identify driver fatigue, but also effectively resist noise, which is of great significance for future applications of an EEG-based driver fatigue detection system. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
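Of the entropy features listed above, sample entropy is the easiest to sketch directly from its definition. A minimal O(N²) version is shown below, assuming the common tolerance r = 0.2·SD and Chebyshev (max-norm) distance between templates; the paper's implementation details may differ.

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A / B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A counts
    pairs of length-(m+1) templates. Self-matches are excluded. Low values
    indicate regular signals; high values indicate irregular ones."""
    if r is None:
        mean = sum(x) / len(x)
        sd = (sum((v - mean) ** 2 for v in x) / len(x)) ** 0.5
        r = 0.2 * sd                      # conventional default tolerance

    def count(mm):
        """Number of template pairs of length mm matching within r."""
        n = len(x) - mm
        c = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c

    b = count(m)
    a = count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A strictly alternating signal scores near zero, while a chaotic one (e.g. the logistic map) scores well above it — the separation the feature sets above exploit.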

Open AccessArticle On Increased Arc Endurance of the Cu–Cr System Materials
Entropy 2017, 19(8), 386; doi:10.3390/e19080386
Received: 4 July 2017 / Revised: 22 July 2017 / Accepted: 23 July 2017 / Published: 27 July 2017
PDF Full-text (8637 KB) | HTML Full-text | XML Full-text
Abstract
The study deals with arc resistance of composite Cu–Cr system materials of various compositions. The microstructure of materials exposed to an electric arc was investigated. Despite varying initial chromium contents, the same structure was formed in the arc exposure zones of all the tested materials. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)

Open AccessArticle Spurious Memory in Non-Equilibrium Stochastic Models of Imitative Behavior
Entropy 2017, 19(8), 387; doi:10.3390/e19080387
Received: 26 June 2017 / Revised: 21 July 2017 / Accepted: 24 July 2017 / Published: 27 July 2017
PDF Full-text (617 KB) | HTML Full-text | XML Full-text
Abstract
The origin of the long-range memory in non-equilibrium systems is still an open problem as the phenomenon can be reproduced using models based on Markov processes. In these cases, the notion of spurious memory is introduced. A good example of Markov processes with spurious memory is a stochastic process driven by a non-linear stochastic differential equation (SDE). This example is at odds with models built using fractional Brownian motion (fBm). We analyze the differences between these two cases seeking to establish possible empirical tests of the origin of the observed long-range memory. We investigate probability density functions (PDFs) of burst and inter-burst duration in numerically-obtained time series and compare with the results of fBm. Our analysis confirms that the characteristic feature of the processes described by a one-dimensional SDE is the power-law exponent 3/2 of the burst or inter-burst duration PDF. This property of stochastic processes might be used to detect spurious memory in various non-equilibrium systems, where observed macroscopic behavior can be derived from the imitative interactions of agents. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
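The burst-duration statistics discussed above can be extracted with a few lines of code. The sketch below is illustrative (threshold crossing plus a continuous maximum-likelihood, Hill-type estimate of the tail exponent); the thresholds and estimator choices are assumptions, not the paper's exact procedure.

```python
import math

def burst_durations(series, threshold):
    """Lengths of consecutive runs during which the series stays above
    the threshold (burst durations). Runs spent below the threshold
    would analogously give the inter-burst durations."""
    durations, run = [], 0
    for v in series:
        if v > threshold:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return durations

def powerlaw_exponent(samples, xmin=1):
    """Continuous maximum-likelihood (Hill) estimate of alpha for a
    power-law tail p(x) ~ x^(-alpha), restricted to x >= xmin."""
    tail = [x for x in samples if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)
```

Applied to excursions of a simulated SDE trajectory, an estimate near 3/2 would match the characteristic exponent the abstract highlights.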

Open AccessArticle On Entropy Test for Conditionally Heteroscedastic Location-Scale Time Series Models
Entropy 2017, 19(8), 388; doi:10.3390/e19080388
Received: 9 June 2017 / Revised: 14 July 2017 / Accepted: 26 July 2017 / Published: 28 July 2017
PDF Full-text (265 KB) | HTML Full-text | XML Full-text
Abstract
This study considers the goodness of fit test for a class of conditionally heteroscedastic location-scale time series models. For this task, we develop an entropy-type goodness of fit test based on residuals. To examine the asymptotic behavior of the test, we first investigate the asymptotic property of the residual empirical process and then derive the limiting null distribution of the entropy test. Full article
Open AccessArticle On the Reliability Function of Variable-Rate Slepian-Wolf Coding
Entropy 2017, 19(8), 389; doi:10.3390/e19080389
Received: 13 June 2017 / Revised: 14 July 2017 / Accepted: 27 July 2017 / Published: 28 July 2017
PDF Full-text (934 KB) | HTML Full-text | XML Full-text
Abstract
The reliability function of variable-rate Slepian-Wolf coding is linked to the reliability function of channel coding with constant composition codes, through which computable lower and upper bounds are derived. The bounds coincide at rates close to the Slepian-Wolf limit, yielding a complete characterization of the reliability function in that rate region. It is shown that variable-rate Slepian-Wolf codes can significantly outperform fixed-rate Slepian-Wolf codes in terms of rate-error tradeoff. Variable-rate Slepian-Wolf coding with rate below the Slepian-Wolf limit is also analyzed. In sharp contrast with fixed-rate Slepian-Wolf codes for which the correct decoding probability decays to zero exponentially fast if the rate is below the Slepian-Wolf limit, the correct decoding probability of variable-rate Slepian-Wolf codes can be bounded away from zero. Full article
(This article belongs to the Special Issue Multiuser Information Theory)

Open AccessArticle Tidal Analysis Using Time–Frequency Signal Processing and Information Clustering
Entropy 2017, 19(8), 390; doi:10.3390/e19080390
Received: 11 June 2017 / Revised: 13 July 2017 / Accepted: 26 July 2017 / Published: 29 July 2017
PDF Full-text (4910 KB) | HTML Full-text | XML Full-text
Abstract
Geophysical time series have a complex nature that poses challenges to reaching assertive conclusions, and require advanced mathematical and computational tools to unravel embedded information. In this paper, time–frequency methods and hierarchical clustering (HC) techniques are combined for processing and visualizing tidal information. In a first phase, the raw data are pre-processed for estimating missing values and obtaining dimensionless reliable time series. In a second phase, the Jensen–Shannon divergence is adopted for measuring dissimilarities between data collected at several stations. The signals are compared in the frequency and time–frequency domains, and the HC is applied to visualize hidden relationships. In a third phase, the long-range behavior of tides is studied by means of power law functions. Numerical examples demonstrate the effectiveness of the approach when dealing with a large volume of real-world data. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
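The Jensen–Shannon divergence used above as the dissimilarity measure is simple to compute between two normalized distributions (e.g. normalized spectra of two stations). A minimal sketch, using base-2 logarithms so the value is bounded by 1; its square root is a metric, which makes it a suitable distance for HC linkage:

```python
import math

def jensen_shannon(p, q):
    """JSD(p, q) = H(m) - (H(p) + H(q)) / 2, with m = (p + q) / 2.
    p and q are probability vectors of equal length; with base-2 logs
    the result lies in [0, 1]: 0 for identical distributions, 1 for
    distributions with disjoint support."""
    def h(d):
        return -sum(x * math.log2(x) for x in d if x > 0)
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return h(m) - (h(p) + h(q)) / 2
```

Computing this for every pair of stations yields the dissimilarity matrix that hierarchical clustering then visualizes as a dendrogram.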

Open AccessArticle A Noninformative Prior on a Space of Distribution Functions
Entropy 2017, 19(8), 391; doi:10.3390/e19080391
Received: 20 June 2017 / Revised: 24 July 2017 / Accepted: 28 July 2017 / Published: 29 July 2017
PDF Full-text (891 KB) | HTML Full-text | XML Full-text
Abstract
In a given problem, the Bayesian statistical paradigm requires the specification of a prior distribution that quantifies relevant information about the unknowns of main interest external to the data. In cases where little such information is available, the problem under study may possess an invariance under a transformation group that encodes a lack of information, leading to a unique prior—this idea was explored at length by E.T. Jaynes. Previous successful examples have included location-scale invariance under linear transformation, multiplicative invariance of the rate at which events in a counting process are observed, and the derivation of the Haldane prior for a Bernoulli success probability. In this paper we show that this method can be extended, by generalizing Jaynes, in two ways: (1) to yield families of approximately invariant priors; and (2) to the infinite-dimensional setting, yielding families of priors on spaces of distribution functions. Our results can be used to describe conditions under which a particular Dirichlet Process posterior arises from an optimal Bayesian analysis, in the sense that invariances in the prior and likelihood lead to one and only one posterior distribution. Full article
(This article belongs to the Section Information Theory)
Open AccessFeature PaperArticle A Thermodynamic Point of View on Dark Energy Models
Entropy 2017, 19(8), 392; doi:10.3390/e19080392
Received: 23 June 2017 / Revised: 25 July 2017 / Accepted: 27 July 2017 / Published: 29 July 2017
PDF Full-text (869 KB) | HTML Full-text | XML Full-text
Abstract
We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillations (BAO) standard ruler data and the Planck distance priors. This analysis allows us to constrain the model parameters, thus pointing to the region of the wide parameter space that is worth focusing on. As a novel step, we exploit the strong connection between gravity and thermodynamics to further check the models' viability by investigating their thermodynamical quantities. In particular, we study whether the cosmological scenario fulfills the generalized second law of thermodynamics, and moreover, we contrast the two models, asking whether the evolution of the total entropy is in agreement with the expectation for a closed system. As a general result, we discuss whether thermodynamic constraints can be a valid complementary way to both constrain dark energy models and differentiate among rival scenarios. Full article
(This article belongs to the Special Issue Dark Energy)

Open AccessArticle Optimal Multiuser Diversity in Multi-Cell MIMO Uplink Networks: User Scaling Law and Beamforming Design
Entropy 2017, 19(8), 393; doi:10.3390/e19080393
Received: 31 May 2017 / Revised: 18 July 2017 / Accepted: 27 July 2017 / Published: 29 July 2017
PDF Full-text (981 KB) | HTML Full-text | XML Full-text
Abstract
We introduce a distributed protocol to achieve multiuser diversity in a multicell multiple-input multiple-output (MIMO) uplink network, referred to as a MIMO interfering multiple-access channel (IMAC). Assuming both no information exchange among base stations (BSs) and local channel state information at the transmitters for the MIMO IMAC, we propose a joint beamforming and user scheduling protocol, and then show that the proposed protocol can achieve the optimal multiuser diversity gain, i.e., KM log(SNR log N), as long as the number of mobile stations (MSs) in a cell, N, scales faster than SNR^{KML(1−ϵ)} for a small constant ϵ > 0, where M, L, K, and SNR denote the number of receive antennas at each BS, the number of transmit antennas at each MS, the number of cells, and the signal-to-noise ratio, respectively. Our result indicates that multiuser diversity can be achieved in the presence of intra-cell and inter-cell interference even in a distributed fashion. As a result, vital information on how to design distributed algorithms in interference-limited cellular environments is provided. Full article
(This article belongs to the Special Issue Network Information Theory)

Open AccessArticle Global Efficiency of Heat Engines and Heat Pumps with Non-Linear Boundary Conditions
Entropy 2017, 19(8), 394; doi:10.3390/e19080394
Received: 31 March 2017 / Revised: 7 July 2017 / Accepted: 19 July 2017 / Published: 31 July 2017
PDF Full-text (763 KB) | HTML Full-text | XML Full-text
Abstract
Analysis of global energy efficiency of thermal systems is of practical importance for a number of reasons. Cycles and processes used in thermal systems exist in very different configurations, making comparison difficult if specific models are required to analyze specific thermal systems. Thermal systems with small temperature differences between a hot side and a cold side also suffer from difficulties due to heat transfer pinch point effects. Such pinch points are consequences of thermal systems design and must therefore be integrated in the global evaluation. In optimizing thermal systems, detailed entropy generation analysis is suitable to identify performance losses caused by cycle components. In plant analysis, a similar logic applies with the difference that the thermal system is then only a component, often industrially standardized. This article presents how a thermodynamic “black box” method for defining and comparing thermal efficiency of different size and types of heat engines can be extended to also compare heat pumps of different apparent magnitude and type. Impact of a non-linear boundary condition on reversible thermal efficiency is exemplified and a correlation of average real heat engine efficiencies is discussed in the light of linear and non-linear boundary conditions. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)

Open AccessArticle Parameterization of Coarse-Grained Molecular Interactions through Potential of Mean Force Calculations and Cluster Expansion Techniques
Entropy 2017, 19(8), 395; doi:10.3390/e19080395
Received: 24 May 2017 / Revised: 24 July 2017 / Accepted: 24 July 2017 / Published: 1 August 2017
PDF Full-text (777 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We present a systematic coarse-graining (CG) strategy for many particle molecular systems based on cluster expansion techniques. We construct a hierarchy of coarse-grained Hamiltonians with interaction potentials consisting of two, three and higher body interactions. In this way, the suggested model becomes computationally tractable, since no information from long n-body (bulk) simulations is required in order to develop it, while retaining the fluctuations at the coarse-grained level. The accuracy of the derived cluster expansion based on interatomic potentials is examined over a range of various temperatures and densities and compared to direct computation of the pair potential of mean force. The comparison of the coarse-grained simulations is done on the basis of the structural properties, against detailed all-atom data. On the other hand, by construction, the approximate coarse-grained models retain, in principle, the thermodynamic properties of the atomistic model without the need for any further parameter fitting. We give specific examples for methane and ethane molecules in which the coarse-grained variable is the centre of mass of the molecule. We investigate different temperature (T) and density ( ρ ) regimes, and we examine differences between the methane and ethane systems. Results show that the cluster expansion formalism can be used in order to provide accurate effective pair and three-body CG potentials at high T and low ρ regimes. In the liquid regime, the three-body effective CG potentials give a small improvement over the typical pair CG ones; however, in order to get significantly better results, one needs to consider even higher order terms. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)

Open AccessArticle A Fuzzy Comprehensive Evaluation Method Based on AHP and Entropy for a Landslide Susceptibility Map
Entropy 2017, 19(8), 396; doi:10.3390/e19080396
Received: 19 May 2017 / Revised: 5 July 2017 / Accepted: 28 July 2017 / Published: 1 August 2017
PDF Full-text (6040 KB) | HTML Full-text | XML Full-text
Abstract
Landslides are a common type of natural disaster in mountainous areas. As a result of the comprehensive influences of geology, geomorphology and climatic conditions, the susceptibility to landslide hazards in mountainous areas shows obvious regionalism. The evaluation of regional landslide susceptibility can help reduce the risk to the lives of mountain residents. In this paper, the Shannon entropy theory, a fuzzy comprehensive method and an analytic hierarchy process (AHP) have been used to construct a combined weighting scheme for landslide susceptibility evaluation modeling that integrates subjective and objective weights. Further, based on a single-factor sensitivity analysis, we established a strict criterion for landslide susceptibility assessments. Eight influencing factors have been selected for the study of Zhen'an County, Shaanxi Province: the lithology, relief amplitude, slope, aspect, slope morphology, altitude, annual mean rainfall and distance to the river. In order to verify the advantages of the proposed method, the landslide index, the prediction accuracy P, the R-index and the area under the curve were used in this paper. The results show that the proposed model of landslide hazard susceptibility can help to produce more objective and accurate landslide susceptibility maps, which not only take advantage of the information in the original data, but also reflect experts' knowledge and the opinions of decision-makers. Full article
(This article belongs to the Section Information Theory)
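The objective half of such a combined weighting can be sketched directly from Shannon's formula: criteria whose values vary little across alternatives carry little information and receive small weights. A minimal version of the standard entropy weight method is shown below (normalization details in the paper may differ, and the subjective AHP weights would be combined with these separately):

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from the entropy weight method.

    matrix[i][j] is the (non-negative) value of criterion j for
    alternative i. Each column is normalized to a probability vector,
    its Shannon entropy e_j is computed (scaled by 1/ln(n) so that
    e_j <= 1), and the weight is proportional to 1 - e_j: a constant
    column has maximal entropy and gets zero weight."""
    n = len(matrix)                    # alternatives
    m = len(matrix[0])                 # criteria
    k = 1 / math.log(n)
    raw = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        s = sum(col)
        p = [v / s for v in col]
        e = -k * sum(x * math.log(x) for x in p if x > 0)
        raw.append(1 - e)
    total = sum(raw)
    return [w / total for w in raw]
```

In the toy matrix below, criterion 0 is identical for both alternatives and so receives (essentially) zero weight, leaving all the weight on criterion 1.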

Open AccessArticle LPI Optimization Framework for Radar Network Based on Minimum Mean-Square Error Estimation
Entropy 2017, 19(8), 397; doi:10.3390/e19080397
Received: 14 June 2017 / Revised: 28 July 2017 / Accepted: 31 July 2017 / Published: 2 August 2017
PDF Full-text (1936 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a novel low probability of intercept (LPI) optimization framework for a radar network, minimizing the Schleher intercept factor based on minimum mean-square error (MMSE) estimation. The MMSE of the estimate of the target scatterer matrix is presented as a metric for the ability to estimate the target scattering characteristic. The LPI optimization problem, which is developed on the basis of a predetermined MMSE threshold, has two variables: the transmitted power and the target assignment index. We separate power allocation from target assignment through two sub-problems. First, the optimum power allocation is obtained for each target assignment scheme. Second, target assignment schemes are selected based on the results of the power allocation. The main problem is considered for two cases: a single radar assigned to each target, and two radars assigned to each target. According to the simulation results, the proposed algorithm can effectively reduce the total Schleher intercept factor of a radar network, making a great contribution to improving the LPI performance of the radar network. Full article
(This article belongs to the Section Information Theory)

Open AccessArticle Coupled DM Heating in SCDEW Cosmologies
Entropy 2017, 19(8), 398; doi:10.3390/e19080398
Received: 20 July 2017 / Revised: 27 July 2017 / Accepted: 28 July 2017 / Published: 2 August 2017
PDF Full-text (492 KB) | HTML Full-text | XML Full-text
Abstract
Strongly-Coupled Dark Energy plus Warm dark matter (SCDEW) cosmologies admit the stationary presence of ∼1% of coupled-DM and DE since inflationary reheating. Coupled-DM fluctuations therefore grow up to non-linearity even in the early radiative expansion. Such early non-linear stages are modeled here through the evolution of a top-hat density enhancement, reaching an early virial balance when the coupled-DM density contrast is just 25–26, and the DM density enhancement is ∼10% of the total density. During the time needed to settle into virial equilibrium, however, the virial balance conditions continue to change, so that “virialized” lumps undergo complete evaporation. Here, we outline that DM particles processed by overdensities preserve a fraction of their virial momentum. Although fully non-relativistic, the resulting velocities (moderately) affect the fluctuation dynamics over greater scales, entering the horizon later on. Full article
(This article belongs to the Special Issue Dark Energy)

Open AccessArticle Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons
Entropy 2017, 19(8), 399; doi:10.3390/e19080399
Received: 23 May 2017 / Revised: 27 July 2017 / Accepted: 31 July 2017 / Published: 2 August 2017
PDF Full-text (2945 KB) | HTML Full-text | XML Full-text
Abstract
Networks of stochastic spiking neurons are interesting models in the area of theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here, we study fully-connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary slightly supercritical state (self-organized supercriticality (SOSC)) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and Dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
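A toy flavor of this model class can be sketched in a few lines: stochastic neurons whose firing probability grows with membrane potential, plus a gain that drops on each spike and slowly recovers — the feedback that pushes the network toward a near-critical activity level. All update rules and parameter values below are illustrative assumptions, not the paper's exact equations.

```python
import random

def simulate_sosc(n=200, steps=400, leak=0.8, recovery=1.002, drop=0.1, seed=0):
    """Fully-connected stochastic spiking network with dynamic gains.

    Neuron i fires with probability min(1, gain_i * v_i); firing resets
    v_i to 0 and depresses gain_i, while silent neurons' gains slowly
    grow back. Returns the fraction of neurons firing at each step."""
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    gain = [1.0] * n
    w = 1.0 / n                        # uniform synaptic weight
    activity = []
    for _ in range(steps):
        fired = [rng.random() < min(1.0, gain[i] * v[i]) for i in range(n)]
        inp = w * sum(fired)           # mean-field input from this step's spikes
        for i in range(n):
            if fired[i]:
                v[i] = 0.0
                gain[i] *= (1.0 - drop)   # spike-triggered gain depression
            else:
                v[i] = leak * v[i] + inp
                gain[i] *= recovery       # slow recovery toward excitability
        activity.append(sum(fired) / n)
    return activity
```

Avalanches would then be read off as contiguous excursions of the activity trace above a small threshold, and their size statistics inspected for power laws.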

Review

Jump to: Editorial, Research

Open AccessReview A View of Information-Estimation Relations in Gaussian Networks
Entropy 2017, 19(8), 409; doi:10.3390/e19080409
Received: 31 May 2017 / Revised: 31 July 2017 / Accepted: 1 August 2017 / Published: 9 August 2017
PDF Full-text (1948 KB) | HTML Full-text | XML Full-text
Abstract
Relations between estimation and information measures have received considerable attention from the information theory community. One of the most notable such relationships is the I-MMSE identity of Guo, Shamai and Verdú that connects the mutual information and the minimum mean square error (MMSE). This paper reviews several applications of the I-MMSE relationship to information theoretic problems arising in connection with multi-user channel coding. The goal of this paper is to review the different techniques used on such problems, as well as to emphasize the added-value obtained from the information-estimation point of view. Full article
(This article belongs to the Special Issue Network Information Theory)
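For reference, the I-MMSE identity of Guo, Shamai and Verdú for the scalar Gaussian channel Y = √snr·X + N, with N ~ N(0, 1) independent of X, states (in nats):

```latex
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\,
  I\!\left(X;\sqrt{\mathrm{snr}}\,X+N\right)
  = \frac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
\qquad
\mathrm{mmse}(\mathrm{snr})
  = \mathbb{E}\!\left[\bigl(X-\mathbb{E}\!\left[X \mid \sqrt{\mathrm{snr}}\,X+N\right]\bigr)^{2}\right].
```

Integrating the right-hand side over snr recovers the mutual information, which is the mechanism the reviewed applications exploit.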
