Table of Contents

Entropy, Volume 19, Issue 8 (August 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Articles are published in both HTML and PDF formats; the PDF is the official version of record. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: How much uncertainty do we have about a given issue? And how relevant is it to another? Information [...]
Displaying articles 1-54

Editorial

Jump to: Research, Review, Other

Open Access Editorial: Complexity, Criticality and Computation
Entropy 2017, 19(8), 403; doi:10.3390/e19080403
Received: 3 August 2017 / Revised: 3 August 2017 / Accepted: 3 August 2017 / Published: 4 August 2017
PDF Full-text (150 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³)) Printed Edition available

Research

Jump to: Editorial, Review, Other

Open Access Article: Atangana–Baleanu and Caputo–Fabrizio Analysis of Fractional Derivatives for Heat and Mass Transfer of Second Grade Fluids over a Vertical Plate: A Comparative Study
Entropy 2017, 19(8), 279; doi:10.3390/e19080279
Received: 4 May 2017 / Revised: 11 June 2017 / Accepted: 12 June 2017 / Published: 18 August 2017
PDF Full-text (1732 KB) | HTML Full-text | XML Full-text
Abstract
This communication addresses a comparison of newly presented non-integer order derivatives with and without singular kernel, namely the Michele Caputo–Mauro Fabrizio (CF) derivative CF(∂^β/∂t^β) and the Atangana–Baleanu (AB) derivative AB(∂^α/∂t^α). For this purpose, the flow of second grade fluids with combined gradients of mass concentration and temperature distribution over a vertical flat plate is considered. The problem is first written in non-dimensional form; then, based on the AB and CF fractional derivatives, it is developed in fractional form, and exact solutions are established for both cases using the Laplace transform technique. The solutions are expressed in terms of the newly defined M-function M_q^p(z) and the generalized hypergeometric function _pΨ_q(z). The obtained exact solutions are plotted graphically for several pertinent parameters, and an interesting comparison is made between the AB and CF results, with various similarities and differences. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)
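For readers unfamiliar with the two operators being compared, the definitions commonly used in the fractional-calculus literature are as follows (these are the standard forms, not quoted from the paper; M(β) and B(α) denote normalization functions and E_α the Mittag-Leffler function):

```latex
% Caputo–Fabrizio derivative: non-singular exponential kernel, 0 < \beta < 1
{}^{CF}D_t^{\beta} f(t) \;=\; \frac{M(\beta)}{1-\beta}
  \int_0^t f'(\tau)\, \exp\!\Big( -\frac{\beta\,(t-\tau)}{1-\beta} \Big)\, d\tau

% Atangana–Baleanu derivative (Caputo sense): Mittag-Leffler kernel
{}^{AB}D_t^{\alpha} f(t) \;=\; \frac{B(\alpha)}{1-\alpha}
  \int_0^t f'(\tau)\, E_{\alpha}\!\Big( -\frac{\alpha\,(t-\tau)^{\alpha}}{1-\alpha} \Big)\, d\tau
```

Both kernels are non-singular at τ = t, which is what distinguishes these operators from the classical Caputo derivative with its power-law kernel.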

Open Access Article: Quantum Genetic Learning Control of Quantum Ensembles with Hamiltonian Uncertainties
Entropy 2017, 19(8), 376; doi:10.3390/e19080376
Received: 9 April 2017 / Revised: 5 July 2017 / Accepted: 19 July 2017 / Published: 1 August 2017
PDF Full-text (1213 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a new method is designed for controlling a quantum ensemble whose members have uncertainties in their Hamiltonian parameters. By combining sampling-based learning control (SLC) with a new quantum genetic algorithm (QGA), control of an ensemble of two-level quantum systems with Hamiltonian uncertainties is achieved. An SLC algorithm is designed to transfer the ensemble members simultaneously to a desired state, and an optimization problem is defined in order to reduce the transfer error significantly. Considering the advantages of QGA and the nature of the problem, this optimization problem is solved with the QGA method. For this purpose, N samples are generated by sampling the uncertainty parameters from a uniform distribution, and an augmented system is created. The best control signal is obtained by using QGA in the training step. To test the performance and validity of the method, the obtained control is applied to randomly selected samples, and a couple of examples are simulated. The results of the simulations indicate the effectiveness and the advantages of the proposed method. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
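The sampling-based ingredient of this approach can be sketched in a few lines: draw samples of the uncertain Hamiltonian parameter, define the cost as the mean infidelity over the samples, and optimize a piecewise-constant control against that averaged cost. The toy two-level model, the parameter ranges, and the plain random search (standing in for the paper's quantum genetic algorithm) below are all illustrative assumptions, not the authors' implementation:

```python
import math
import random

def step(psi, delta, u, dt):
    """Propagate a 2-level state under H = delta*sz + u*sx for time dt (hbar = 1)."""
    w = math.sqrt(delta ** 2 + u ** 2)
    if w == 0.0:
        return psi
    c, s = math.cos(w * dt), math.sin(w * dt) / w
    a, b = psi
    # U = cos(w dt) I - i sin(w dt) H / w, applied to (a, b)
    return (c * a - 1j * s * (delta * a + u * b),
            c * b - 1j * s * (u * a - delta * b))

def avg_infidelity(controls, deltas, dt=0.2):
    """Mean infidelity of the |0> -> |1> transfer over sampled uncertainties (SLC cost)."""
    tot = 0.0
    for d in deltas:
        psi = (1 + 0j, 0j)
        for u in controls:
            psi = step(psi, d, u, dt)
        tot += 1.0 - abs(psi[1]) ** 2
    return tot / len(deltas)

random.seed(1)
deltas = [random.uniform(-0.2, 0.2) for _ in range(10)]  # sampled Hamiltonian uncertainty
best = [0.0] * 10                                        # 10-segment control field
best_cost = avg_infidelity(best, deltas)
for _ in range(3000):  # random-search stand-in for the QGA optimizer
    cand = [u + random.gauss(0.0, 0.3) for u in best]
    cost = avg_infidelity(cand, deltas)
    if cost < best_cost:
        best, best_cost = cand, cost
print(round(best_cost, 3))
```

Swapping the random search for a genetic algorithm only changes how candidate controls are proposed; the SLC cost averaged over sampled uncertainties stays the same.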

Open Access Article: Kinetic Theory beyond the Stosszahlansatz
Entropy 2017, 19(8), 381; doi:10.3390/e19080381
Received: 8 June 2017 / Revised: 20 July 2017 / Accepted: 21 July 2017 / Published: 25 July 2017
PDF Full-text (237 KB) | HTML Full-text | XML Full-text
Abstract
In a recent paper (Chliamovitch, et al., 2015), we suggested using the principle of maximum entropy to generalize Boltzmann’s Stosszahlansatz to higher-order distribution functions. This conceptual shift of focus allowed us to derive an analog of the Boltzmann equation for the two-particle distribution function. While we only briefly mentioned there the possibility of a hydrodynamical treatment, we complete here a crucial step towards this program. We discuss bilocal collisional invariants, from which we deduce the two-particle stationary distribution. This allows for the existence of equilibrium states in which the momenta of particles are correlated, as well as for the existence of a fourth conserved quantity besides mass, momentum and kinetic energy. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open Access Article: Toward a Theory of Industrial Supply Networks: A Multi-Level Perspective via Network Analysis
Entropy 2017, 19(8), 382; doi:10.3390/e19080382
Received: 30 May 2017 / Revised: 21 July 2017 / Accepted: 21 July 2017 / Published: 25 July 2017
PDF Full-text (22422 KB) | HTML Full-text | XML Full-text
Abstract
In most supply chains (SCs), transaction relationships between suppliers and customers are commonly considered to be an extrapolation from a linear perspective. However, this traditional linear concept of an SC is egotistic and oversimplified and does not sufficiently reflect the complex and cyclical structure of supplier-customer relationships in current economic and industrial situations. The interactional relationships and topological characteristics between suppliers and customers should be analyzed using supply networks (SNs) rather than traditional linear SCs. Therefore, this paper reconceptualizes SCs as SNs in complex adaptive systems (CAS), and presents three main contributions. First, we propose an integrated framework of CAS networks by synthesizing multi-level network analysis from the network, community and vertex perspectives. The CAS perspective enables us to understand the advanced properties of SNs. Second, in order to emphasize the CAS properties of SNs, we constructed a real-world SN based on the Japanese industry and describe an advanced investigation of SN theory. The CAS properties help in enriching SN theory, which can benefit SN management, community economics and industrial resilience. Third, we propose a quantitative metric of entropy to measure the complexity and robustness of SNs. The results not only support a specific understanding of the structural outcomes relevant to SNs, but also deliver efficient and effective support to the management and design of SNs. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
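The paper's exact entropy metric is not reproduced in this listing; as a generic illustration, one common network-level complexity proxy is the Shannon entropy of the degree distribution, which is zero for a perfectly regular network and grows with structural heterogeneity (the function and the example networks are hypothetical):

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of a network's degree distribution."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    counts = Counter(deg.values())           # degree value -> number of nodes with it
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ring = [(i, (i + 1) % 6) for i in range(6)]          # every node has degree 2
star = [("hub", f"s{i}") for i in range(5)]          # one hub, five degree-1 leaves
print(degree_entropy(ring), degree_entropy(star))
```

A ring (all degrees equal) scores zero; a star (two degree classes) scores positive, matching the intuition that heterogeneous supplier-customer structures carry more structural information.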

Open Access Article: Avalanching Systems with Longer Range Connectivity: Occurrence of a Crossover Phenomenon and Multifractal Finite Size Scaling
Entropy 2017, 19(8), 383; doi:10.3390/e19080383
Received: 4 April 2017 / Revised: 11 July 2017 / Accepted: 13 July 2017 / Published: 26 July 2017
PDF Full-text (690 KB) | HTML Full-text | XML Full-text
Abstract
Many out-of-equilibrium systems respond to external driving with nonlinear and self-similar dynamics. This near scale-invariant behavior of relaxation events has been modeled through sand pile cellular automata. However, a common feature of these models is the assumption of a local connectivity, while in many real systems, we have evidence for longer range connectivity and a complex topology of the interacting structures. Here, we investigate the role that longer range connectivity might play in near scale-invariant systems, by analyzing the results of a sand pile cellular automaton model on a Newman–Watts network. The analysis clearly indicates the occurrence of a crossover phenomenon in the statistics of the relaxation events as a function of the percentage of longer range links and the breaking of the simple Finite Size Scaling (FSS). The more complex nature of the dynamics in the presence of long-range connectivity is investigated in terms of multi-scaling features and analyzed by the Rank-Ordered Multifractal Analysis (ROMA). Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
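A minimal sketch of the setup described: a sandpile automaton on a ring lattice with Newman–Watts shortcuts, toppling a node when its load reaches its degree and recording avalanche sizes. The network size, shortcut probability, and the small dissipation used to keep the pile stationary are illustrative choices, not the paper's parameters:

```python
import random

def newman_watts_ring(n, p, rng):
    """Ring lattice plus random shortcuts, added with probability p per node."""
    nbrs = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        if rng.random() < p:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def drive(nbrs, grains, rng):
    """Add grains at random nodes; topple at load >= degree; return avalanche sizes."""
    load = {i: 0 for i in nbrs}
    sizes = []
    for _ in range(grains):
        load[rng.randrange(len(nbrs))] += 1
        size = 0
        unstable = [i for i in load if load[i] >= len(nbrs[i])]
        while unstable:
            i = unstable.pop()
            if load[i] < len(nbrs[i]):
                continue
            load[i] -= len(nbrs[i])
            size += 1
            for j in nbrs[i]:
                if rng.random() < 0.98:   # 2% dissipation keeps the pile stationary
                    load[j] += 1
                if load[j] >= len(nbrs[j]):
                    unstable.append(j)
        sizes.append(size)
    return sizes

rng = random.Random(0)
net = newman_watts_ring(200, 0.1, rng)
sizes = drive(net, 5000, rng)
print(max(sizes), sum(1 for s in sizes if s > 0))
```

Sweeping the shortcut probability p and histogramming `sizes` is the kind of experiment in which the crossover in the avalanche statistics shows up.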

Open Access Article: Securing Relay Networks with Artificial Noise: An Error Performance-Based Approach
Entropy 2017, 19(8), 384; doi:10.3390/e19080384
Received: 24 June 2017 / Revised: 17 July 2017 / Accepted: 18 July 2017 / Published: 26 July 2017
PDF Full-text (1087 KB) | HTML Full-text | XML Full-text
Abstract
We apply the concept of artificial and controlled interference in a two-hop relay network with an untrusted relay, aiming at enhancing the wireless communication secrecy between the source and the destination node. In order to shield the square quadrature amplitude-modulated (QAM) signals transmitted from the source node to the relay, the destination node designs and transmits artificial noise (AN) symbols to jam the relay reception. The objective of our considered AN design is to degrade the error probability performance at the untrusted relay, for different types of channel state information (CSI) at the destination. By considering perfect knowledge of the instantaneous CSI of the source-to-relay and relay-to-destination links, we first present an analytical expression for the symbol error rate (SER) performance at the relay. Based on the assumption of an average power constraint at the destination node, we then derive the optimal phase and power distribution of the AN that maximizes the SER at the relay. Furthermore, we obtain the optimal AN design for the case where only statistical CSI is available at the destination node. For both cases, our study reveals that the Gaussian distribution is generally not optimal to generate AN symbols. The presented AN design takes into account practical parameters for the communication links, such as QAM signaling and maximum likelihood decoding. Full article
(This article belongs to the Special Issue Network Information Theory)
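The SER quantity being degraded can be illustrated with the textbook expression for square M-QAM under maximum likelihood detection in plain AWGN (the paper's AN-perturbed SER analysis is more involved and is not reproduced here):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ser_square_qam(M, snr):
    """Exact SER of square M-QAM, ML detection in AWGN, snr = Es/N0 (linear scale)."""
    k = math.sqrt(M)
    p = 2 * (1 - 1 / k) * qfunc(math.sqrt(3 * snr / (M - 1)))  # per-dimension error
    return 1 - (1 - p) ** 2

for db in (5, 10, 15, 20):
    snr = 10 ** (db / 10)
    print(db, round(ser_square_qam(16, snr), 5))
```

The AN design in the paper effectively works to keep the relay's operating point high on this curve while the legitimate receiver cancels the jamming.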

Open Access Article: Noise Robustness Analysis of Performance for EEG-Based Driver Fatigue Detection Using Different Entropy Feature Sets
Entropy 2017, 19(8), 385; doi:10.3390/e19080385
Received: 16 June 2017 / Revised: 24 July 2017 / Accepted: 25 July 2017 / Published: 27 July 2017
PDF Full-text (3612 KB) | HTML Full-text | XML Full-text
Abstract
Driver fatigue is an important factor in traffic accidents, and the development of a detection system for driver fatigue is of great significance. To estimate and prevent driver fatigue, various classifiers based on electroencephalogram (EEG) signals have been developed; however, as EEG signals have inherent non-stationary characteristics, their detection performance is often deteriorated by background noise. To investigate the effects of noise on detection performance, simulated Gaussian noise, spike noise, and electromyogram (EMG) noise were added to a raw EEG signal. Four types of entropies, including sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed for the feature sets. Three base classifiers (K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Decision Tree (DT)) and two ensemble methods (Bootstrap Aggregating (Bagging) and Boosting) were employed and compared. Results showed that: (1) the simulated Gaussian noise and EMG noise had an impact on accuracy, while the simulated spike noise did not, which is of great significance for the future application of driver fatigue detection; (2) the influence of noise on performance differed across classifiers; for example, DT was the most robust and SVM the least; (3) the influence of noise on performance also differed across feature sets, with FE and the combined feature set being the most robust; and (4) while the Bagging method could not significantly improve performance against noise addition, the Boosting method may significantly improve performance against superimposed Gaussian and EMG noise. The entropy feature extraction method can not only identify driver fatigue, but also effectively resist noise, which is of great significance for future applications of an EEG-based driver fatigue detection system. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
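Of the listed features, sample entropy is easy to sketch directly: count template matches of length m and m + 1 within a tolerance r times the series' standard deviation, and take the negative log of their ratio. The parameters m = 2 and r = 0.2 below are conventional defaults, and the sine/noise example is illustrative:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), template matches of length m+1 (A) vs m (B),
    Chebyshev distance, self-matches excluded; r scales the series' std."""
    n = len(x)
    mean = sum(x) / n
    std = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * std

    def matches(mm):
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= tol:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)

random.seed(0)
regular = [math.sin(0.5 * i) for i in range(200)]    # predictable signal
noisy = [random.gauss(0.0, 1.0) for _ in range(200)] # white noise
print(sample_entropy(regular), sample_entropy(noisy))
```

A clean periodic signal yields a much lower SampEn than white noise, which is exactly the discriminating behavior the fatigue classifiers exploit.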

Open Access Article: On Increased Arc Endurance of the Cu–Cr System Materials
Entropy 2017, 19(8), 386; doi:10.3390/e19080386
Received: 4 July 2017 / Revised: 22 July 2017 / Accepted: 23 July 2017 / Published: 27 July 2017
PDF Full-text (8637 KB) | HTML Full-text | XML Full-text
Abstract
The study deals with arc resistance of composite Cu–Cr system materials of various compositions. The microstructure of materials exposed to an electric arc was investigated. Despite varying initial chromium contents, the same structure was formed in the arc exposure zones of all the tested materials. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)

Open Access Article: Spurious Memory in Non-Equilibrium Stochastic Models of Imitative Behavior
Entropy 2017, 19(8), 387; doi:10.3390/e19080387
Received: 26 June 2017 / Revised: 21 July 2017 / Accepted: 24 July 2017 / Published: 27 July 2017
PDF Full-text (617 KB) | HTML Full-text | XML Full-text
Abstract
The origin of the long-range memory in non-equilibrium systems is still an open problem as the phenomenon can be reproduced using models based on Markov processes. In these cases, the notion of spurious memory is introduced. A good example of Markov processes with spurious memory is a stochastic process driven by a non-linear stochastic differential equation (SDE). This example is at odds with models built using fractional Brownian motion (fBm). We analyze the differences between these two cases seeking to establish possible empirical tests of the origin of the observed long-range memory. We investigate probability density functions (PDFs) of burst and inter-burst duration in numerically-obtained time series and compare with the results of fBm. Our analysis confirms that the characteristic feature of the processes described by a one-dimensional SDE is the power-law exponent 3/2 of the burst or inter-burst duration PDF. This property of stochastic processes might be used to detect spurious memory in various non-equilibrium systems, where observed macroscopic behavior can be derived from the imitative interactions of agents. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)

Open Access Article: On Entropy Test for Conditionally Heteroscedastic Location-Scale Time Series Models
Entropy 2017, 19(8), 388; doi:10.3390/e19080388
Received: 9 June 2017 / Revised: 14 July 2017 / Accepted: 26 July 2017 / Published: 28 July 2017
PDF Full-text (265 KB) | HTML Full-text | XML Full-text
Abstract
This study considers the goodness of fit test for a class of conditionally heteroscedastic location-scale time series models. For this task, we develop an entropy-type goodness of fit test based on residuals. To examine the asymptotic behavior of the test, we first investigate the asymptotic property of the residual empirical process and then derive the limiting null distribution of the entropy test. Full article
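The paper's actual test statistic and its limiting distribution are not reproduced in this listing; the flavor of an entropy-based residual check can be sketched with Vasicek's spacing estimator of differential entropy. Since the Gaussian maximizes entropy among unit-variance densities at (1/2)ln(2πe) ≈ 1.4189, a large shortfall of the estimate below that bound flags non-Gaussian residuals (the estimator choice and window size are illustrative):

```python
import math
import random

def vasicek_entropy(x, m=None):
    """Vasicek (1976) spacing estimator of differential entropy (nats)."""
    n = len(x)
    if m is None:
        m = max(1, int(round(math.sqrt(n) / 2)))
    s = sorted(x)
    total = 0.0
    for i in range(n):
        lo = s[max(0, i - m)]          # boundary spacings use the extremes
        hi = s[min(n - 1, i + m)]
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

random.seed(0)
resid = [random.gauss(0.0, 1.0) for _ in range(2000)]   # stand-in for model residuals
h_hat = vasicek_entropy(resid)
h_gauss = 0.5 * math.log(2 * math.pi * math.e)          # max entropy at unit variance
print(round(h_hat, 3), round(h_gauss, 3))
```

For genuinely Gaussian residuals the estimate sits close to the bound; a marked gap would be evidence against the fitted location-scale model's error distribution.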
Open Access Article: On the Reliability Function of Variable-Rate Slepian-Wolf Coding
Entropy 2017, 19(8), 389; doi:10.3390/e19080389
Received: 13 June 2017 / Revised: 14 July 2017 / Accepted: 27 July 2017 / Published: 28 July 2017
PDF Full-text (934 KB) | HTML Full-text | XML Full-text
Abstract
The reliability function of variable-rate Slepian-Wolf coding is linked to the reliability function of channel coding with constant composition codes, through which computable lower and upper bounds are derived. The bounds coincide at rates close to the Slepian-Wolf limit, yielding a complete characterization of the reliability function in that rate region. It is shown that variable-rate Slepian-Wolf codes can significantly outperform fixed-rate Slepian-Wolf codes in terms of rate-error tradeoff. Variable-rate Slepian-Wolf coding with rate below the Slepian-Wolf limit is also analyzed. In sharp contrast with fixed-rate Slepian-Wolf codes for which the correct decoding probability decays to zero exponentially fast if the rate is below the Slepian-Wolf limit, the correct decoding probability of variable-rate Slepian-Wolf codes can be bounded away from zero. Full article
(This article belongs to the Special Issue Multiuser Information Theory)
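The Slepian–Wolf limit referred to throughout is the conditional entropy H(X|Y); for a discrete joint pmf it is a short computation (the binary example with 10% crossover is illustrative):

```python
import math

def conditional_entropy(joint):
    """H(X|Y) in bits, from a joint pmf given as {(x, y): p}."""
    py = {}
    for (x, y), p in joint.items():
        py[y] = py.get(y, 0.0) + p
    h = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / py[y])   # -sum p(x,y) log2 p(x|y)
    return h

# Doubly symmetric binary source: X uniform, Y = X flipped with probability 0.1
eps = 0.1
joint = {(0, 0): 0.5 * (1 - eps), (0, 1): 0.5 * eps,
         (1, 0): 0.5 * eps, (1, 1): 0.5 * (1 - eps)}
print(round(conditional_entropy(joint), 4))   # equals h2(0.1), about 0.469 bits
```

Rates above this value per source symbol admit reliable decoding with side information Y; the paper characterizes how fast the error decays just above it.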

Open Access Article: Tidal Analysis Using Time–Frequency Signal Processing and Information Clustering
Entropy 2017, 19(8), 390; doi:10.3390/e19080390
Received: 11 June 2017 / Revised: 13 July 2017 / Accepted: 26 July 2017 / Published: 29 July 2017
PDF Full-text (4910 KB) | HTML Full-text | XML Full-text
Abstract
Geophysical time series have a complex nature that poses challenges to reaching assertive conclusions, and require advanced mathematical and computational tools to unravel embedded information. In this paper, time–frequency methods and hierarchical clustering (HC) techniques are combined for processing and visualizing tidal information. In a first phase, the raw data are pre-processed for estimating missing values and obtaining dimensionless reliable time series. In a second phase, the Jensen–Shannon divergence is adopted for measuring dissimilarities between data collected at several stations. The signals are compared in the frequency and time–frequency domains, and the HC is applied to visualize hidden relationships. In a third phase, the long-range behavior of tides is studied by means of power law functions. Numerical examples demonstrate the effectiveness of the approach when dealing with a large volume of real-world data. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
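The Jensen–Shannon divergence used as the dissimilarity measure is symmetric, bounded by 1 bit, and zero only for identical distributions. A minimal implementation for discrete histograms (the three-bin example is illustrative; the paper applies the measure to tidal records, and clustering pipelines often use the square root of the JSD, which is a proper metric):

```python
import math

def jensen_shannon(p, q):
    """Jensen–Shannon divergence (bits) between equal-length discrete distributions."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]   # midpoint distribution
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(jensen_shannon(p, p), jensen_shannon(p, q))
```

A matrix of pairwise JSD values between stations is exactly the kind of input a hierarchical clustering routine consumes.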

Open Access Article: A Noninformative Prior on a Space of Distribution Functions
Entropy 2017, 19(8), 391; doi:10.3390/e19080391
Received: 20 June 2017 / Revised: 24 July 2017 / Accepted: 28 July 2017 / Published: 29 July 2017
PDF Full-text (891 KB) | HTML Full-text | XML Full-text
Abstract
In a given problem, the Bayesian statistical paradigm requires the specification of a prior distribution that quantifies relevant information about the unknowns of main interest external to the data. In cases where little such information is available, the problem under study may possess an invariance under a transformation group that encodes a lack of information, leading to a unique prior—this idea was explored at length by E.T. Jaynes. Previous successful examples have included location-scale invariance under linear transformation, multiplicative invariance of the rate at which events in a counting process are observed, and the derivation of the Haldane prior for a Bernoulli success probability. In this paper we show that this method can be extended, by generalizing Jaynes, in two ways: (1) to yield families of approximately invariant priors; and (2) to the infinite-dimensional setting, yielding families of priors on spaces of distribution functions. Our results can be used to describe conditions under which a particular Dirichlet Process posterior arises from an optimal Bayesian analysis, in the sense that invariances in the prior and likelihood lead to one and only one posterior distribution. Full article
(This article belongs to the Section Information Theory)
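The Dirichlet process appearing in the conclusion can be simulated by stick-breaking: repeatedly break off a Beta(1, α) fraction of the remaining probability mass. The concentration α and truncation length below are illustrative; this sketch shows only the prior's weight mechanism, not the paper's invariance argument:

```python
import random

def stick_breaking(alpha, n_atoms, rng):
    """First n_atoms weights of a Dirichlet-process draw via stick-breaking."""
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        # Inverse-CDF sample of V ~ Beta(1, alpha): F(v) = 1 - (1 - v)^alpha
        v = 1.0 - (1.0 - rng.random()) ** (1.0 / alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

rng = random.Random(42)
w = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
print(round(sum(w), 4))   # close to 1 once enough sticks are broken
```

Smaller α concentrates mass on a few atoms; larger α spreads it out, which is the single knob the prior leaves free.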
Open Access Feature Paper Article: A Thermodynamic Point of View on Dark Energy Models
Entropy 2017, 19(8), 392; doi:10.3390/e19080392
Received: 23 June 2017 / Revised: 25 July 2017 / Accepted: 27 July 2017 / Published: 29 July 2017
PDF Full-text (869 KB) | HTML Full-text | XML Full-text
Abstract
We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillations (BAO) standard ruler data and the Planck distance priors. This analysis allows us to constrain the model parameters, thus pointing at the region of the wide parameters space, which is worth focusing on. As a novel step, we exploit the strong connection between gravity and thermodynamics to further check models’ viability by investigating their thermodynamical quantities. In particular, we study whether the cosmological scenario fulfills the generalized second law of thermodynamics, and moreover, we contrast the two models, asking whether the evolution of the total entropy is in agreement with the expectation for a closed system. As a general result, we discuss whether thermodynamic constraints can be a valid complementary way to both constrain dark energy models and differentiate among rival scenarios. Full article
(This article belongs to the Special Issue Dark Energy)

Open Access Article: Optimal Multiuser Diversity in Multi-Cell MIMO Uplink Networks: User Scaling Law and Beamforming Design
Entropy 2017, 19(8), 393; doi:10.3390/e19080393
Received: 31 May 2017 / Revised: 18 July 2017 / Accepted: 27 July 2017 / Published: 29 July 2017
PDF Full-text (981 KB) | HTML Full-text | XML Full-text
Abstract
We introduce a distributed protocol to achieve multiuser diversity in a multicell multiple-input multiple-output (MIMO) uplink network, referred to as a MIMO interfering multiple-access channel (IMAC). Assuming no information exchange among base stations (BSs) and only local channel state information at the transmitters for the MIMO IMAC, we propose a joint beamforming and user scheduling protocol, and then show that the proposed protocol can achieve the optimal multiuser diversity gain, i.e., KM log(SNR log N), as long as the number of mobile stations (MSs) in a cell, N, scales faster than SNR^(KML/(1−ϵ)) for a small constant ϵ > 0, where M, L, K, and SNR denote the number of receive antennas at each BS, the number of transmit antennas at each MS, the number of cells, and the signal-to-noise ratio, respectively. Our result indicates that multiuser diversity can be achieved in the presence of intra-cell and inter-cell interference even in a distributed fashion. As a result, vital information on how to design distributed algorithms in interference-limited cellular environments is provided. Full article
(This article belongs to the Special Issue Network Information Theory)

Open Access Article: Global Efficiency of Heat Engines and Heat Pumps with Non-Linear Boundary Conditions
Entropy 2017, 19(8), 394; doi:10.3390/e19080394
Received: 31 March 2017 / Revised: 7 July 2017 / Accepted: 19 July 2017 / Published: 31 July 2017
PDF Full-text (763 KB) | HTML Full-text | XML Full-text
Abstract
Analysis of global energy efficiency of thermal systems is of practical importance for a number of reasons. Cycles and processes used in thermal systems exist in very different configurations, making comparison difficult if specific models are required to analyze specific thermal systems. Thermal systems with small temperature differences between a hot side and a cold side also suffer from difficulties due to heat transfer pinch point effects. Such pinch points are consequences of thermal systems design and must therefore be integrated in the global evaluation. In optimizing thermal systems, detailed entropy generation analysis is suitable to identify performance losses caused by cycle components. In plant analysis, a similar logic applies with the difference that the thermal system is then only a component, often industrially standardized. This article presents how a thermodynamic “black box” method for defining and comparing thermal efficiency of different size and types of heat engines can be extended to also compare heat pumps of different apparent magnitude and type. Impact of a non-linear boundary condition on reversible thermal efficiency is exemplified and a correlation of average real heat engine efficiencies is discussed in the light of linear and non-linear boundary conditions. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
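The reversible reference points that any such black-box comparison starts from are the Carnot efficiency and the corresponding heat-pump COP. For fixed (linear) boundary temperatures they are one-liners; the temperatures below are illustrative, and the paper's point is precisely that non-linear boundary conditions shift these limits:

```python
def carnot_efficiency(t_hot, t_cold):
    """Reversible heat-engine efficiency between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold / t_hot

def heat_pump_cop(t_hot, t_cold):
    """Reversible (Carnot) heating COP between the same two reservoirs."""
    return t_hot / (t_hot - t_cold)

th, tc = 358.0, 288.0   # e.g. an 85 C heat source against a 15 C sink
print(round(carnot_efficiency(th, tc), 4), round(heat_pump_cop(th, tc), 2))
# prints: 0.1955 5.11
```

Note that the reversible COP is exactly the reciprocal of the Carnot efficiency for the same pair of reservoirs, which is what lets a single "black box" figure of merit cover both engines and pumps.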

Open Access Article: Parameterization of Coarse-Grained Molecular Interactions through Potential of Mean Force Calculations and Cluster Expansion Techniques
Entropy 2017, 19(8), 395; doi:10.3390/e19080395
Received: 24 May 2017 / Revised: 24 July 2017 / Accepted: 24 July 2017 / Published: 1 August 2017
PDF Full-text (777 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We present a systematic coarse-graining (CG) strategy for many particle molecular systems based on cluster expansion techniques. We construct a hierarchy of coarse-grained Hamiltonians with interaction potentials consisting of two, three and higher body interactions. In this way, the suggested model becomes computationally tractable, since no information from long n-body (bulk) simulations is required in order to develop it, while retaining the fluctuations at the coarse-grained level. The accuracy of the derived cluster expansion based on interatomic potentials is examined over a range of various temperatures and densities and compared to direct computation of the pair potential of mean force. The comparison of the coarse-grained simulations is done on the basis of the structural properties, against detailed all-atom data. On the other hand, by construction, the approximate coarse-grained models retain, in principle, the thermodynamic properties of the atomistic model without the need for any further parameter fitting. We give specific examples for methane and ethane molecules in which the coarse-grained variable is the centre of mass of the molecule. We investigate different temperature (T) and density (ρ) regimes, and we examine differences between the methane and ethane systems. Results show that the cluster expansion formalism can be used in order to provide accurate effective pair and three-body CG potentials at high T and low ρ regimes. In the liquid regime, the three-body effective CG potentials give a small improvement over the typical pair CG ones; however, in order to get significantly better results, one needs to consider even higher order terms. Full article
(This article belongs to the Special Issue Thermodynamics and Statistical Mechanics of Small Systems)
Open AccessArticle A Fuzzy Comprehensive Evaluation Method Based on AHP and Entropy for a Landslide Susceptibility Map
Entropy 2017, 19(8), 396; doi:10.3390/e19080396
Received: 19 May 2017 / Revised: 5 July 2017 / Accepted: 28 July 2017 / Published: 1 August 2017
PDF Full-text (6040 KB) | HTML Full-text | XML Full-text
Abstract
Landslides are a common type of natural disaster in mountainous areas. As a result of the combined influences of geology, geomorphology and climatic conditions, landslide susceptibility in mountainous areas shows obvious regionalism. The evaluation of regional landslide susceptibility can help reduce the risk to the lives of mountain residents. In this paper, Shannon entropy theory, a fuzzy comprehensive method and the analytic hierarchy process (AHP) are used to build a weighting scheme for landslide susceptibility evaluation modeling that combines subjective and objective weights. Further, based on a single-factor sensitivity analysis, we established a strict criterion for landslide susceptibility assessments. Eight influencing factors have been selected for the study of Zhen’an County, Shaanxi Province: lithology, relief amplitude, slope, aspect, slope morphology, altitude, annual mean rainfall and distance to the river. In order to verify the advantages of the proposed method, the landslide index, the prediction accuracy P, the R-index and the area under the curve were used in this paper. The results show that the proposed model of landslide susceptibility can help to produce more objective and accurate landslide susceptibility maps, which not only take advantage of the information in the original data, but also reflect experts’ knowledge and the opinions of decision-makers. Full article
(This article belongs to the Section Information Theory)
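The objective half of such a combined weighting can be illustrated with the standard entropy weight method. The sketch below is a minimal illustration of that general technique, not the authors' implementation; the factor names and scores are hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows = evaluation units, columns = factors.

    Each column is normalized to a probability distribution, its Shannon
    entropy is computed (scaled to [0, 1]), and weights are proportional
    to 1 - entropy: low-entropy factors discriminate more between units,
    so they receive more weight.
    """
    P = X / X.sum(axis=0)                      # column-wise normalization
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)  # convention: 0 * log 0 = 0
    H = -(P * logs).sum(axis=0) / np.log(n)     # entropy per factor, in [0, 1]
    d = 1.0 - H                                 # degree of diversification
    return d / d.sum()

# Hypothetical scores of 4 map units on 3 factors (slope, rainfall, lithology)
X = np.array([[0.2, 5.0, 1.0],
              [0.4, 5.0, 2.0],
              [0.6, 5.0, 3.0],
              [0.8, 5.0, 4.0]])
w = entropy_weights(X)
# The constant factor (column 1) has maximum entropy and gets zero weight.
```

In a combined scheme, these objective weights would then be fused with subjective AHP weights before the fuzzy comprehensive evaluation.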
Open AccessArticle LPI Optimization Framework for Radar Network Based on Minimum Mean-Square Error Estimation
Entropy 2017, 19(8), 397; doi:10.3390/e19080397
Received: 14 June 2017 / Revised: 28 July 2017 / Accepted: 31 July 2017 / Published: 2 August 2017
PDF Full-text (1936 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a novel low probability of intercept (LPI) optimization framework for a radar network that minimizes the Schleher intercept factor based on minimum mean-square error (MMSE) estimation. The MMSE of the estimate of the target scatterer matrix is presented as a metric for the ability to estimate the target scattering characteristics. The LPI optimization problem, which is developed on the basis of a predetermined MMSE threshold, has two variables: the transmitted power and the target assignment index. We separated power allocation from target assignment through two sub-problems. First, the optimum power allocation is obtained for each target assignment scheme. Second, target assignment schemes are selected based on the results of the power allocation. The main problem is considered for two cases: a single radar assigned to each target, and two radars assigned to each target. According to the simulation results, the proposed algorithm can effectively reduce the total Schleher intercept factor of a radar network, thereby improving the LPI performance of the radar network. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Coupled DM Heating in SCDEW Cosmologies
Entropy 2017, 19(8), 398; doi:10.3390/e19080398
Received: 20 July 2017 / Revised: 27 July 2017 / Accepted: 28 July 2017 / Published: 2 August 2017
PDF Full-text (492 KB) | HTML Full-text | XML Full-text
Abstract
Strongly-Coupled Dark Energy plus Warm dark matter (SCDEW) cosmologies admit the stationary presence of ∼1% of coupled DM and DE since inflationary reheating. Coupled-DM fluctuations therefore grow up to non-linearity even during the early radiative expansion. Such early non-linear stages are modeled here through the evolution of a top-hat density enhancement, which reaches an early virial balance when the coupled-DM density contrast is just 25–26 and the DM density enhancement is ∼10% of the total density. During the time needed to settle into virial equilibrium, however, the virial balance conditions continue to change, so that “virialized” lumps undergo a complete evaporation. Here, we outline that DM particles processed by overdensities preserve a fraction of their virial momentum. Although fully non-relativistic, the resulting velocities (moderately) affect the fluctuation dynamics over greater scales, entering the horizon later on. Full article
(This article belongs to the Special Issue Dark Energy)
Open AccessArticle Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons
Entropy 2017, 19(8), 399; doi:10.3390/e19080399
Received: 23 May 2017 / Revised: 27 July 2017 / Accepted: 31 July 2017 / Published: 2 August 2017
PDF Full-text (2945 KB) | HTML Full-text | XML Full-text
Abstract
Networks of stochastic spiking neurons are interesting models in the area of theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here, we study fully-connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary slightly supercritical state (self-organized supercriticality (SOSC)) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss the possible applications of the idea of SOSC to biological phenomena like epilepsy and Dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open AccessArticle Planck-Scale Soccer-Ball Problem: A Case of Mistaken Identity
Entropy 2017, 19(8), 400; doi:10.3390/e19080400
Received: 23 April 2017 / Revised: 10 July 2017 / Accepted: 18 July 2017 / Published: 2 August 2017
PDF Full-text (231 KB) | HTML Full-text | XML Full-text
Abstract
Over the last decade, it has been found that nonlinear laws of composition of momenta are predicted by some alternative approaches to “real” 4D quantum gravity, and by all formulations of dimensionally-reduced (3D) quantum gravity coupled to matter. The possible relevance for rather different quantum-gravity models has motivated several studies, but this interest is being tempered by concerns that a nonlinear law of addition of momenta might inevitably produce a pathological description of the total momentum of a macroscopic body. I here show that such concerns are unjustified, finding that they are rooted in a failure to appreciate the differences between two roles for laws of composition of momenta in physics. Previous results relied exclusively on the role of a law of momentum composition in the description of spacetime locality. However, the notion of total momentum of a multi-particle system is not a manifestation of locality, but rather reflects translational invariance. By working within an illustrative example of quantum spacetime, I show explicitly that spacetime locality is indeed reflected in a nonlinear law of composition of momenta, but translational invariance still results in an undeformed linear law of addition of momenta building up the total momentum of a multi-particle system. Full article
(This article belongs to the Special Issue Quantum Information and Foundations)
Open AccessArticle On the Modelling and Control of a Laboratory Prototype of a Hydraulic Canal Based on a TITO Fractional-Order Model
Entropy 2017, 19(8), 401; doi:10.3390/e19080401
Received: 30 June 2017 / Revised: 28 July 2017 / Accepted: 1 August 2017 / Published: 3 August 2017
PDF Full-text (6305 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a two-input, two-output (TITO) fractional-order mathematical model of a laboratory prototype of a hydraulic canal is proposed. This canal is made up of two pools that have a strong interaction between them. The inputs of the TITO model are the pump flow and the opening of an intermediate gate, and the two outputs are the water levels in the two pools. Based on experiments performed on the laboratory prototype, the parameters of the mathematical models have been identified. Then, considering the TITO model, a first control loop of the pump is closed to reproduce real-world conditions in which the water level of the first pool does not depend on the opening of the upstream gate, thus leading to an equivalent single-input, single-output (SISO) system. The comparison of the resulting system with the classical first-order systems typically utilized to model hydraulic canals shows that the proposed model has about 50% lower error and, therefore, higher accuracy in capturing the canal dynamics. This model has also been utilized to optimize the design of the controller of the canal's pump, thus achieving a faster response to step commands and minimizing the interaction between the two pools of the experimental platform. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
Open AccessArticle Optimal Belief Approximation
Entropy 2017, 19(8), 402; doi:10.3390/e19080402
Received: 18 April 2017 / Revised: 4 July 2017 / Accepted: 5 July 2017 / Published: 4 August 2017
PDF Full-text (361 KB) | HTML Full-text | XML Full-text
Abstract
In Bayesian statistics, probability distributions express beliefs. However, for many problems the beliefs cannot be computed analytically, and approximations of beliefs are needed. We seek a loss function that quantifies how “embarrassing” it is to communicate a given approximation. We reproduce and discuss an old proof showing that there is only one ranking under the requirements that (1) the best-ranked approximation is the non-approximated belief and (2) the ranking judges approximations only by their predictions for actual outcomes. The loss function obtained in the derivation is equal to the Kullback-Leibler divergence when normalized. This loss function is frequently used in the literature. However, there seems to be confusion about the correct order in which its functional arguments—the approximated and non-approximated beliefs—should be used. The correct order ensures that the recipient of a communication is deprived of only the minimal amount of information. We hope that this elementary derivation settles the apparent confusion. For example, when approximating beliefs with Gaussian distributions, the optimal approximation is given by moment matching. This is in contrast to many suggested computational schemes. Full article
(This article belongs to the Section Information Theory)
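The role of the argument order can be illustrated numerically on a discrete grid. The sketch below (with an illustrative bimodal belief and grid, not taken from the paper) shows that under the loss KL(true || approximation), the moment-matched Gaussian beats a Gaussian locked onto a single mode:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

x = np.linspace(-6.0, 6.0, 2001)

def gauss(mu, sigma):
    g = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return g / g.sum()                     # normalized on the grid

# "True" belief: a bimodal mixture
p = 0.5 * gauss(-2.0, 0.7) + 0.5 * gauss(2.0, 0.7)

# Moment-matched Gaussian approximation (mean and variance of p)
mu = float(np.sum(x * p))
var = float(np.sum((x - mu) ** 2 * p))
q_moment = gauss(mu, np.sqrt(var))

# A mode-seeking Gaussian covering only one mode
q_mode = gauss(2.0, 0.7)

# With the order KL(true || approximation), moment matching
# is the less "embarrassing" communication:
loss_moment = kl(p, q_moment)
loss_mode = kl(p, q_mode)
```

Reversing the argument order (KL(approximation || true)) would instead favor the mode-seeking Gaussian, which is the confusion the paper addresses.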
Open AccessArticle Thermal Transport and Entropy Production Mechanisms in a Turbulent Round Jet at Supercritical Thermodynamic Conditions
Entropy 2017, 19(8), 404; doi:10.3390/e19080404
Received: 1 July 2017 / Revised: 29 July 2017 / Accepted: 2 August 2017 / Published: 5 August 2017
PDF Full-text (8530 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In the present paper, thermal transport and entropy production mechanisms in a turbulent round jet of compressed nitrogen at supercritical thermodynamic conditions are investigated using direct numerical simulation. First, thermal transport and its contribution to mixture formation, along with the anisotropy of heat fluxes and temperature scales, are examined. Secondly, the entropy production rates during the thermofluid processes evolving in the supercritical flow are investigated in order to identify the causes of irreversibilities and to reveal advantageous locations of handling, along with the process regimes favorable to mixing. It turns out that (1) the jet disintegration process consists of four main stages under supercritical conditions (potential core, separation, pseudo-boiling, turbulent mixing), (2) causes of irreversibilities are primarily due to heat transport and thermodynamic effects rather than turbulence dynamics and (3) heat fluxes and temperature scales appear anisotropic even at the smallest scales, which implies that anisotropic thermal diffusivity models might be appropriate in the context of both Reynolds-averaged Navier–Stokes (RANS) and large eddy simulation (LES) approaches when numerically modeling supercritical fluid flows. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Intrinsic Losses Based on Information Geometry and Their Applications
Entropy 2017, 19(8), 405; doi:10.3390/e19080405
Received: 30 April 2017 / Revised: 21 July 2017 / Accepted: 3 August 2017 / Published: 6 August 2017
PDF Full-text (837 KB) | HTML Full-text | XML Full-text
Abstract
One main interest of information geometry is to study the properties of statistical models that do not depend on the coordinate system or model parametrization; thus, it may serve as an analytic tool for intrinsic inference in statistics. In this paper, under the framework of Riemannian geometry and dual geometry, we revisit two commonly-used intrinsic losses, given respectively by the squared Rao distance and the symmetrized Kullback–Leibler divergence (or Jeffreys divergence). For an exponential family endowed with the Fisher metric and α-connections, the two loss functions are uniformly described as the energy difference along an α-geodesic path, for some α ∈ {−1, 0, 1}. Subsequently, the two intrinsic losses are utilized to develop Bayesian analyses of covariance matrix estimation and range-spread target detection. We provide an intrinsically unbiased covariance estimator, which is verified to be asymptotically efficient in terms of the intrinsic mean square error. The decision rules deduced by the intrinsic Bayesian criterion provide a geometrical justification for the constant false alarm rate detector based on the generalized likelihood ratio principle. Full article
(This article belongs to the Special Issue Information Geometry II)
Open AccessArticle Statistical Process Control for Unimodal Distribution Based on Maximum Entropy Distribution Approximation
Entropy 2017, 19(8), 406; doi:10.3390/e19080406
Received: 21 June 2017 / Revised: 18 July 2017 / Accepted: 4 August 2017 / Published: 7 August 2017
PDF Full-text (306 KB) | HTML Full-text | XML Full-text
Abstract
In statistical process control, the control chart utilizing the idea of maximum entropy distribution density level sets has been proven to perform well for monitoring quantities with multimodal distributions. However, it is too complicated to implement for quantities with unimodal distributions. This article proposes a simplified method based on maximum entropy for the control chart design when the monitored quantity has a unimodal distribution. First, we use the maximum entropy distribution to approximate the unknown distribution of the monitored quantity. Then we directly take the value of the quantity as the monitoring statistic. Finally, the Lebesgue measure is applied to estimate the acceptance regions, and the one with minimum volume is chosen as the optimal in-control region of the monitored quantity. The results from two cases show that the proposed method has a higher detection capability than conventional control chart techniques when the monitored quantity has an asymmetric unimodal distribution. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Open AccessArticle Generalized Maxwell Relations in Thermodynamics with Metric Derivatives
Entropy 2017, 19(8), 407; doi:10.3390/e19080407
Received: 5 July 2017 / Revised: 25 July 2017 / Accepted: 27 July 2017 / Published: 7 August 2017
PDF Full-text (246 KB) | HTML Full-text | XML Full-text
Abstract
In this contribution, we develop the generalized Maxwell thermodynamical relations via the metric derivative model upon mapping to a continuous fractal space. This study also introduces the total q-derivative expressions depending on two variables, to describe nonextensive statistical mechanics, as well as the α-total differentiation with conformable derivatives. Some results in the literature are re-obtained, such as the physical temperature defined by Sumiyoshi Abe. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Open AccessFeature PaperArticle Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
Entropy 2017, 19(8), 408; doi:10.3390/e19080408
Received: 21 June 2017 / Revised: 3 August 2017 / Accepted: 7 August 2017 / Published: 8 August 2017
Cited by 1 | PDF Full-text (1829 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information decomposition and partial information decomposition, can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology firstly to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone. Full article
Open AccessArticle Spatio-Temporal Variability of Soil Water Content under Different Crop Covers in Irrigation Districts of Northwest China
Entropy 2017, 19(8), 410; doi:10.3390/e19080410
Received: 25 April 2017 / Revised: 3 August 2017 / Accepted: 4 August 2017 / Published: 18 August 2017
PDF Full-text (6516 KB) | HTML Full-text | XML Full-text
Abstract
The relationship between soil water content (SWC) and vegetation, topography, and climatic conditions is critical for developing effective agricultural water management practices and improving agricultural water use efficiency in arid areas. The purpose of this study was to determine how crop cover influences the spatial and temporal variation of soil water. In this study, SWC was measured under maize and wheat for two years in northwest China. Statistical methods and entropy analysis were applied to investigate the spatio-temporal variability of SWC and the interaction between SWC and its influencing factors. The SWC variability changed within the field plot, with the standard deviation reaching a maximum value at intermediate mean SWC in different layers under various conditions (climatic conditions, soil conditions, crop type). The spatio-temporal distribution of the SWC reflects the variability of precipitation and potential evapotranspiration (ET0) under different crop covers. The mutual entropy values between SWC and precipitation were similar in the two years under wheat cover but differed under maize cover. Moreover, the mutual entropy values at different depths differed under different crop covers. The entropy values changed with SWC following an exponential trend. The informational correlation coefficient (R0) between SWC and precipitation was higher than that between SWC and the other factors at different soil depths. Precipitation was the dominant factor controlling the SWC variability, and the crop coefficient was the second most important factor. This study highlights that precipitation is a paramount factor for investigating the spatio-temporal variability of soil water content in northwest China. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
Open AccessArticle Solutions to the Cosmic Initial Entropy Problem without Equilibrium Initial Conditions
Entropy 2017, 19(8), 411; doi:10.3390/e19080411
Received: 30 June 2017 / Revised: 31 July 2017 / Accepted: 7 August 2017 / Published: 10 August 2017
PDF Full-text (1511 KB) | HTML Full-text | XML Full-text
Abstract
The entropy of the observable universe is increasing. Thus, at earlier times the entropy was lower. However, the cosmic microwave background radiation reveals an apparently high entropy universe close to thermal and chemical equilibrium. A two-part solution to this cosmic initial entropy problem is proposed. Following Penrose, we argue that the evenly distributed matter of the early universe is equivalent to low gravitational entropy. There are two competing explanations for how this initial low gravitational entropy comes about. (1) Inflation and baryogenesis produce a virtually homogeneous distribution of matter with a low gravitational entropy. (2) Dissatisfied with explaining a low gravitational entropy as the product of a ‘special’ scalar field, some theorists argue (following Boltzmann) for a “more natural” initial condition in which the entire universe is in an initial equilibrium state of maximum entropy. In this equilibrium model, our observable universe is an unusual low entropy fluctuation embedded in a high entropy universe. The anthropic principle and the fluctuation theorem suggest that this low entropy region should be as small as possible and have as large an entropy as possible, consistent with our existence. However, our low entropy universe is much larger than needed to produce observers, and we see no evidence for an embedding in a higher entropy background. The initial conditions of inflationary models are as natural as the equilibrium background favored by many theorists. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Open AccessArticle Relationship between Entropy, Corporate Entrepreneurship and Organizational Capabilities in Romanian Medium Sized Enterprises
Entropy 2017, 19(8), 412; doi:10.3390/e19080412
Received: 12 July 2017 / Revised: 31 July 2017 / Accepted: 8 August 2017 / Published: 10 August 2017
PDF Full-text (1194 KB) | HTML Full-text | XML Full-text
Abstract
This paper analyses the relations between entropy, organizational capabilities and corporate entrepreneurship. The results indicate strong links between strategy and corporate entrepreneurship, moderated by organizational capabilities. We find that companies with strong organizational capabilities, using a systematic strategic approach, widely use corporate entrepreneurship as an instrument to fulfil their objectives. Our study contributes to the limited empirical research on entropy in an organizational setting by highlighting the boundary conditions of this impact through an examination of the moderating effect of firms’ organizational capabilities, and also to the development of Econophysics as a fast-growing area of interdisciplinary science. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Open AccessArticle The Emergence of Hyperchaos and Synchronization in Networks with Discrete Periodic Oscillators
Entropy 2017, 19(8), 413; doi:10.3390/e19080413
Received: 9 May 2017 / Revised: 8 August 2017 / Accepted: 8 August 2017 / Published: 16 August 2017
PDF Full-text (4788 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, the emergence of hyperchaos in a network of two very simple discrete periodic oscillators is presented. Uncoupled periodic oscillators may represent, in the crudest and simplest form, periodic oscillators in nature, for example fireflies, crickets, or the menstrual cycles of women, among others. Nevertheless, the emergence of hyperchaos in this kind of real-life network has not been proven. In particular, we focus this study on the emergence of hyperchaotic dynamics, considering that these are mainly used in engineering applications such as cryptography, secure communications, biometric systems, and telemedicine, among others. In order to corroborate that the emerging dynamics are hyperchaotic, some chaos and hyperchaos verification tests are conducted. In addition, the presented hyperchaotic coupled system synchronizes under the proposed coupling scheme. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Open AccessArticle Effect of Slip Conditions and Entropy Generation Analysis with an Effective Prandtl Number Model on a Nanofluid Flow through a Stretching Sheet
Entropy 2017, 19(8), 414; doi:10.3390/e19080414
Received: 14 July 2017 / Revised: 8 August 2017 / Accepted: 9 August 2017 / Published: 11 August 2017
PDF Full-text (6763 KB) | HTML Full-text | XML Full-text
Abstract
This article describes the impact of slip conditions on nanofluid flow through a stretching sheet. Nanofluids are very helpful for enhancing convective heat transfer in a boundary layer flow. The Prandtl number also plays a major role in controlling the thermal and momentum boundary layers. For this purpose, we have considered a model for the effective Prandtl number, derived from experimental analysis, for a steady, two-dimensional, incompressible nano-boundary-layer flow through a stretching sheet. We have considered γAl2O3-H2O and Al2O3-C2H6O2 nanoparticles for the governing flow problem. An entropy generation analysis is also presented with the help of the second law of thermodynamics. A numerical technique known as the Successive Taylor Series Linearization Method (STSLM) is used to solve the governing nonlinear boundary layer equations. The numerical and graphical results are discussed for two cases, i.e., (i) with the effective Prandtl number and (ii) without the effective Prandtl number. From the graphical results, it is observed that the velocity and temperature profiles increase in the absence of the effective Prandtl number, while both become larger in its presence. Further, a numerical comparison has been presented with previously published results to validate the current methodology and results. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Open AccessArticle Person-Situation Debate Revisited: Phase Transitions with Quenched and Annealed Disorders
Entropy 2017, 19(8), 415; doi:10.3390/e19080415
Received: 30 June 2017 / Revised: 8 August 2017 / Accepted: 9 August 2017 / Published: 13 August 2017
PDF Full-text (447 KB) | HTML Full-text | XML Full-text
Abstract
We study the q-voter model driven by stochastic noise arising from one out of two types of nonconformity: anticonformity or independence. We compare two approaches that were inspired by the famous psychological controversy known as the person–situation debate. We relate the person approach with the quenched disorder and the situation approach with the annealed disorder, and investigate how these two approaches influence order–disorder phase transitions observed in the q-voter model with noise. We show that under a quenched disorder, differences between models with independence and anticonformity are weaker and only quantitative. In contrast, annealing has a much more profound impact on the system and leads to qualitative differences between models on a macroscopic level. Furthermore, only under an annealed disorder may the discontinuous phase transitions appear. It seems that freezing the agents’ behavior at the beginning of simulation—introducing quenched disorder—supports second-order phase transitions, whereas allowing agents to reverse their attitude in time—incorporating annealed disorder—supports discontinuous ones. We show that anticonformity is insensitive to the type of disorder, and in all cases it gives the same result. We precede our study with a short insight from statistical physics into annealed vs. quenched disorder and a brief review of these two approaches in models of opinion dynamics. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Open AccessArticle Thermal and Exergetic Analysis of the Goswami Cycle Integrated with Mid-Grade Heat Sources
Entropy 2017, 19(8), 416; doi:10.3390/e19080416
Received: 26 July 2017 / Revised: 11 August 2017 / Accepted: 14 August 2017 / Published: 17 August 2017
Cited by 1 | PDF Full-text (1049 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a theoretical investigation of a combined power and cooling cycle that employs an ammonia-water mixture. The cycle combines a Rankine and an absorption refrigeration cycle. The Goswami cycle can be used in a wide range of applications, including recovering waste heat as a bottoming cycle or generating power from non-conventional sources like solar radiation or geothermal energy. A thermodynamic study of power and cooling co-generation is presented for heat source temperatures between 100 and 350 °C. A comprehensive analysis of the effect of several operation and configuration parameters, including the number of turbine stages and different superheating configurations, on the power output and the thermal and exergy efficiencies was conducted. Results showed that the Goswami cycle can operate at an effective exergy efficiency of 60–80% with thermal efficiencies between 25% and 31%. The investigation also showed that multiple-stage turbines performed better than single-stage turbines in terms of power and thermal and exergy efficiencies when heat source temperatures remained above 200 °C, whereas the effect of turbine stages was almost the same when heat source temperatures were below 175 °C. For multiple turbine stages, the use of partial superheating with a single or double reheat stream showed better performance in terms of efficiency. Exergy destruction was also found to increase as the heat source temperature was increased. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Power-Law Distributions from Sigma-Pi Structure of Sums of Random Multiplicative Processes
Entropy 2017, 19(8), 417; doi:10.3390/e19080417
Received: 2 August 2017 / Revised: 13 August 2017 / Accepted: 16 August 2017 / Published: 17 August 2017
PDF Full-text (1058 KB) | HTML Full-text | XML Full-text
Abstract
We introduce a simple growth model in which the sizes of entities evolve as multiplicative random processes that start at different times. A novel aspect we examine is the dependence among entities. For this, we consider three classes of dependence between growth factors governing the evolution of sizes: independence, Kesten dependence and mixed dependence. We take the sum X of the sizes of the entities as the representative quantity of the system, which has the structure of a sum of product terms (Sigma-Pi), whose asymptotic distribution function has a power-law tail behavior. We present evidence that the dependence type does not alter the asymptotic power-law tail behavior, nor the value of the tail exponent. However, the structure of the large values of the sum X is found to vary with the dependence between the growth factors (and thus the entities). In particular, for the independence case, we find that the large values of X are contributed by a single maximum size entity: the asymptotic power-law tail is the result of such single contribution to the sum, with this maximum contributing entity changing stochastically with time and with realizations. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
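As a minimal illustrative sketch (not the authors' code), a Sigma-Pi sum with Kesten-type dependence, in which product terms share common growth factors, can be simulated to exhibit the heavy-tailed behavior described in the abstract; the log-normal factor parameters below are arbitrary choices:

```python
import numpy as np

# Sigma-Pi sum with Kesten-type dependence: X = f1 + f1*f2 + f1*f2*f3 + ...
# Product terms share common log-normal growth factors; when E[f^mu] = 1
# for some mu > 0, the distribution of X develops a power-law tail.
# All parameters here are illustrative only.
rng = np.random.default_rng(2)
n_terms, n_real = 50, 20_000
f = rng.lognormal(mean=-0.1, sigma=0.5, size=(n_real, n_terms))
X = np.cumprod(f, axis=1).sum(axis=1)  # one Sigma-Pi sum per realization

# Heavy tail: the sample maximum dwarfs the median across realizations.
tail_ratio = X.max() / np.median(X)
```

In the independence case described in the abstract, each product term would instead use its own independent factors; only the dependence structure changes, not the Sigma-Pi form of the sum.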
Open AccessArticle Deformed Jarzynski Equality
Entropy 2017, 19(8), 419; doi:10.3390/e19080419
Received: 24 July 2017 / Revised: 15 August 2017 / Accepted: 15 August 2017 / Published: 18 August 2017
Cited by 1 | PDF Full-text (322 KB) | HTML Full-text | XML Full-text
Abstract
The well-known Jarzynski equality, often written in the form e^(−βΔF) = ⟨e^(−βW)⟩, provides a non-equilibrium means to measure the free energy difference ΔF of a system at the same inverse temperature β based on an ensemble average of non-equilibrium work W. The accuracy of Jarzynski's measurement scheme was known to be determined by the variance of the exponential work, denoted as var(e^(−βW)). However, it was recently found that var(e^(−βW)) can systematically diverge in both classical and quantum cases. Such divergence will necessarily pose a challenge in the applications of the Jarzynski equality because it may dramatically reduce the efficiency in determining ΔF. In this work, we present a deformed Jarzynski equality for both classical and quantum non-equilibrium statistics, in an effort to reuse experimental data that already suffer from a diverging var(e^(−βW)). The main feature of our deformed Jarzynski equality is that it connects free energies at different temperatures, and it may still work efficiently subject to a diverging var(e^(−βW)). The conditions for applying our deformed Jarzynski equality may be met in experimental and computational situations. If so, then there is no need to redesign experimental or simulation methods. Furthermore, using the deformed Jarzynski equality, we exemplify the distinct behaviors of classical and quantum work fluctuations for the case of time-dependent driven harmonic oscillator dynamics and provide insights into the essential performance differences between classical and quantum Jarzynski equalities. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
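For context, the standard (undeformed) Jarzynski estimator can be checked numerically in a toy setting; the sketch below assumes Gaussian work values, for which ΔF = μ − βσ²/2 exactly, and is not the authors' deformed scheme:

```python
import numpy as np

# Toy check of the standard Jarzynski equality <exp(-beta*W)> = exp(-beta*dF)
# for Gaussian work W ~ N(mu, sigma^2), where dF = mu - beta*sigma^2/2 exactly.
rng = np.random.default_rng(0)
beta, mu, sigma = 1.0, 2.0, 0.5
W = rng.normal(mu, sigma, size=200_000)

dF_est = -np.log(np.mean(np.exp(-beta * W))) / beta
dF_exact = mu - beta * sigma**2 / 2
```

For larger σ, the variance of e^(−βW) grows rapidly and the estimate degrades, which is exactly the divergence issue the abstract addresses.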
Open AccessArticle Incipient Fault Diagnosis of Rolling Bearings Based on Impulse-Step Impact Dictionary and Re-Weighted Minimizing Nonconvex Penalty Lq Regular Technique
Entropy 2017, 19(8), 421; doi:10.3390/e19080421
Received: 31 July 2017 / Revised: 15 August 2017 / Accepted: 16 August 2017 / Published: 18 August 2017
PDF Full-text (2928 KB) | HTML Full-text | XML Full-text
Abstract
The periodical transient impulses caused by localized faults are sensitive and important characteristic information for rotating machinery fault diagnosis. However, it is very difficult to accurately extract transient impulses at the incipient fault stage because the fault impulse features are rather weak and always corrupted by heavy background noise. In this paper, a new transient impulse extraction methodology is proposed based on an impulse-step impact dictionary and re-weighted minimizing nonconvex penalty Lq regularization (R-WMNPLq, q = 0.5) for the incipient fault diagnosis of rolling bearings. Prior to the sparse representation, the original vibration signal is preprocessed by the variational mode decomposition (VMD) technique. Owing to the physical mechanism of periodic double impacts, including step-like and impulse-like impacts, an impulse-step impact dictionary atom can be designed to match the natural waveform structure of vibration signals. On the other hand, traditional sparse reconstruction approaches such as orthogonal matching pursuit (OMP) and L1-norm regularization treat all vibration signal values equally and thus ignore the fact that the vibration peak values may carry more useful information about periodical transient impulses and should be preserved at a larger weight value. Therefore, penalty and smoothing parameters are introduced in the reconstruction model to guarantee a reasonable distribution consistency of peak vibration values. Lastly, the proposed technique is applied to accelerated lifetime testing of rolling bearings, where it achieves a more noticeable and higher diagnostic accuracy compared with OMP, L1-norm regularization and the traditional spectral Kurtogram (SK) method. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
Open AccessArticle Computing Entropies with Nested Sampling
Entropy 2017, 19(8), 422; doi:10.3390/e19080422
Received: 12 July 2017 / Revised: 14 August 2017 / Accepted: 16 August 2017 / Published: 18 August 2017
PDF Full-text (782 KB) | HTML Full-text | XML Full-text
Abstract
The Shannon entropy, and related quantities such as mutual information, can be used to quantify uncertainty and relevance. However, in practice, it can be difficult to compute these quantities for arbitrary probability distributions, particularly if the probability mass functions or densities cannot be evaluated. This paper introduces a computational approach, based on Nested Sampling, to evaluate entropies of probability distributions that can only be sampled. I demonstrate the method on three examples: (i) a simple Gaussian example where the key quantities are available analytically; (ii) an experimental design example about scheduling observations in order to measure the period of an oscillating signal; and (iii) predicting the future from the past in a heavy-tailed scenario. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
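As a toy illustration of the target quantity only (the paper's point is precisely the harder case where the density cannot be evaluated), the differential entropy of a Gaussian can be estimated by averaging −log p(x) over samples:

```python
import numpy as np

# Monte Carlo estimate of differential entropy H = -E[log p(x)] for a
# Gaussian, compared with the analytic value 0.5*log(2*pi*e*sigma^2).
rng = np.random.default_rng(1)
sigma = 2.0
x = rng.normal(0.0, sigma, size=100_000)
log_p = -0.5 * np.log(2 * np.pi * sigma**2) - x**2 / (2 * sigma**2)

H_est = -log_p.mean()                              # sample-average estimator
H_exact = 0.5 * np.log(2 * np.pi * np.e * sigma**2)  # analytic entropy (nats)
```

When log p cannot be evaluated pointwise, this simple estimator is unavailable, which is the situation the Nested Sampling approach targets.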
Open AccessArticle An Improved System for Utilizing Low-Temperature Waste Heat of Flue Gas from Coal-Fired Power Plants
Entropy 2017, 19(8), 423; doi:10.3390/e19080423
Received: 10 July 2017 / Revised: 6 August 2017 / Accepted: 15 August 2017 / Published: 19 August 2017
PDF Full-text (1463 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, an improved system to efficiently utilize the low-temperature waste heat from the flue gas of coal-fired power plants is proposed based on heat cascade theory. The essence of the proposed system is that the waste heat of exhausted flue gas is not only used to preheat air for assisting coal combustion as usual but also to heat up feedwater and for low-pressure steam extraction. Air preheating is performed by both the exhaust flue gas in the boiler island and the low-pressure steam extraction in the turbine island; thereby part of the flue gas heat originally exchanged in the air preheater can be saved and introduced to heat the feedwater and the high-temperature condensed water. Consequently, part of the high-pressure steam is saved for further expansion in the steam turbine, which results in additional net power output. Based on the design data of a typical 1000 MW ultra-supercritical coal-fired power plant in China, an in-depth analysis of the energy-saving characteristics of the improved waste heat utilization system (WHUS) and the conventional WHUS is conducted. When the improved WHUS is adopted in a typical 1000 MW unit, net power output increases by 19.51 MW, exergy efficiency improves to 45.46%, and net annual revenue reaches USD 4.741 million while for the conventional WHUS, these performance parameters are 5.83 MW, 44.80% and USD 1.244 million, respectively. The research described in this paper provides a feasible energy-saving option for coal-fired power plants. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Open AccessArticle Survey on Probabilistic Models of Low-Rank Matrix Factorizations
Entropy 2017, 19(8), 424; doi:10.3390/e19080424
Received: 12 June 2017 / Revised: 11 August 2017 / Accepted: 16 August 2017 / Published: 19 August 2017
PDF Full-text (589 KB) | HTML Full-text | XML Full-text
Abstract
Low-rank matrix factorizations such as Principal Component Analysis (PCA), Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF) are a large class of methods for pursuing the low-rank approximation of a given data matrix. The conventional factorization models are based on the assumption that the data matrices are contaminated stochastically by some type of noise. Thus, point estimates of the low-rank components can be obtained by Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimation. In the past decade, a variety of probabilistic models of low-rank matrix factorizations have emerged. The most significant difference between low-rank matrix factorizations and their corresponding probabilistic models is that the latter treat the low-rank components as random variables. This paper surveys the probabilistic models of low-rank matrix factorizations. Firstly, we review some probability distributions commonly used in probabilistic models of low-rank matrix factorizations and introduce the conjugate priors of some probability distributions to simplify the Bayesian inference. Then we provide two main inference methods for probabilistic low-rank matrix factorizations, i.e., Gibbs sampling and variational Bayesian inference. Next, we roughly classify the important probabilistic models of low-rank matrix factorizations into several categories and review them respectively. The categorization is based on different matrix factorization formulations, which mainly include PCA, matrix factorizations, robust PCA, NMF and tensor factorizations. Finally, we discuss research issues that need to be studied in the future. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
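For orientation, the non-probabilistic baseline mentioned in the abstract (ML point estimation of the low-rank components under i.i.d. Gaussian noise) reduces to a truncated SVD by the Eckart–Young theorem; a minimal sketch on synthetic data, with arbitrary sizes and noise level:

```python
import numpy as np

# Under i.i.d. Gaussian noise, the ML low-rank point estimate of a data
# matrix is its truncated SVD; probabilistic models instead treat the
# factors U, V as random variables.
rng = np.random.default_rng(3)
U = rng.normal(size=(100, 5))
V = rng.normal(size=(5, 40))
X = U @ V + 0.1 * rng.normal(size=(100, 40))  # rank-5 signal + noise

u, s, vt = np.linalg.svd(X, full_matrices=False)
X_hat = (u[:, :5] * s[:5]) @ vt[:5]           # rank-5 ML point estimate

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

A fully probabilistic treatment would place priors on U and V and infer posteriors via Gibbs sampling or variational Bayes, as surveyed in the paper.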
Open AccessArticle A Sparse Multiwavelet-Based Generalized Laguerre–Volterra Model for Identifying Time-Varying Neural Dynamics from Spiking Activities
Entropy 2017, 19(8), 425; doi:10.3390/e19080425
Received: 13 July 2017 / Revised: 7 August 2017 / Accepted: 18 August 2017 / Published: 20 August 2017
PDF Full-text (6929 KB) | HTML Full-text | XML Full-text
Abstract
Modeling of a time-varying dynamical system provides insights into the functions of biological neural networks and contributes to the development of next-generation neural prostheses. In this paper, we have formulated a novel sparse multiwavelet-based generalized Laguerre–Volterra (sMGLV) modeling framework to identify the time-varying neural dynamics from multiple spike train data. First, the significant inputs are selected by using a group least absolute shrinkage and selection operator (LASSO) method, which can capture the sparsity within the neural system. Second, the multiwavelet-based basis function expansion scheme with an efficient forward orthogonal regression (FOR) algorithm aided by mutual information is utilized to rapidly capture the time-varying characteristics from the sparse model. Quantitative simulation results demonstrate that the proposed sMGLV model in this paper outperforms the initial full model and the state-of-the-art modeling methods in tracking performance for various time-varying kernels. Analyses of experimental data show that the proposed sMGLV model can capture the timing of transient changes accurately. The proposed framework will be useful to the study of how, when, and where information transmission processes across brain regions evolve in behavior. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
Open AccessArticle Iterative QR-Based SFSIC Detection and Decoder Algorithm for a Reed–Muller Space-Time Turbo System
Entropy 2017, 19(8), 426; doi:10.3390/e19080426
Received: 30 June 2017 / Revised: 24 July 2017 / Accepted: 15 August 2017 / Published: 20 August 2017
PDF Full-text (1038 KB) | HTML Full-text | XML Full-text
Abstract
An iterative QR-based soft feedback segment interference cancellation (QRSFSIC) detection and decoder algorithm for a Reed–Muller (RM) space-time turbo system is proposed in this paper. It forms the sufficient statistic for the minimum-mean-square error (MMSE) estimate according to QR decomposition-based soft feedback successive interference cancellation, stemming from the a priori log-likelihood ratio (LLR) of encoded bits. Then, the signal originating from the symbols of the reliable segment (those whose symbol reliability metric, in terms of the a posteriori LLR of encoded bits, exceeds a certain threshold) is iteratively cancelled with the QRSFSIC in order to obtain the residual signal for evaluating the symbols in the unreliable segment. This is done until the unreliable segment is empty, resulting in the extrinsic information for an RM turbo-coded bit with the greatest likelihood. Bridged by de-multiplexing and multiplexing, the iterative QRSFSIC detection is concatenated with an iterative trellis-based maximum a posteriori probability RM turbo decoder, as if a principal turbo detection and decoder were embedded with a subordinate iterative QRSFSIC detection and RM turbo decoder, each exchanging detection and decoding soft-decision information with the other iteratively. These three stages let the proposed algorithm approach the upper bound of the diversity. The simulation results also show that the proposed scheme outperforms the other suboptimum detectors considered in this paper. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations
Entropy 2017, 19(8), 427; doi:10.3390/e19080427
Received: 27 June 2017 / Revised: 8 August 2017 / Accepted: 18 August 2017 / Published: 21 August 2017
PDF Full-text (3212 KB) | HTML Full-text | XML Full-text
Abstract
Maximum entropy models are increasingly being used to describe the collective activity of neural populations with measured mean neural activities and pairwise correlations, but the full space of probability distributions consistent with these constraints has not been explored. We provide upper and lower bounds on the entropy for the minimum entropy distribution over arbitrarily large collections of binary units with any fixed set of mean values and pairwise correlations. We also construct specific low-entropy distributions for several relevant cases. Surprisingly, the minimum entropy solution has entropy scaling logarithmically with system size for any set of first- and second-order statistics consistent with arbitrarily large systems. We further demonstrate that some sets of these low-order statistics can only be realized by small systems. Our results show how only small amounts of randomness are needed to mimic low-order statistical properties of highly entropic distributions, and we discuss some applications for engineered and biological information transmission systems. Full article
(This article belongs to the Special Issue Thermodynamics of Information Processing)
Open AccessArticle The Potential Application of Multiscale Entropy Analysis of Electroencephalography in Children with Neurological and Neuropsychiatric Disorders
Entropy 2017, 19(8), 428; doi:10.3390/e19080428
Received: 29 June 2017 / Revised: 11 August 2017 / Accepted: 16 August 2017 / Published: 21 August 2017
PDF Full-text (227 KB) | HTML Full-text | XML Full-text
Abstract
Electroencephalography (EEG) is frequently used in functional neurological assessment of children with neurological and neuropsychiatric disorders. Multiscale entropy (MSE) can reveal complexity in both short and long time scales and is more feasible in the analysis of EEG. Entropy-based estimation of EEG complexity is a powerful tool in investigating the underlying disturbances of neural networks of the brain. Most neurological and neuropsychiatric disorders in childhood affect the early stage of brain development. The analysis of EEG complexity may show the influences of different neurological and neuropsychiatric disorders on different regions of the brain during development. This article aims to give a brief summary of current concepts of MSE analysis in pediatric neurological and neuropsychiatric disorders. Studies utilizing MSE or its modifications for investigating neurological and neuropsychiatric disorders in children were reviewed. Abnormal EEG complexity was shown in a variety of childhood neurological and neuropsychiatric diseases, including autism, attention deficit/hyperactivity disorder, Tourette syndrome, and epilepsy in infancy and childhood. MSE has been shown to be a powerful method for analyzing the non-linear anomaly of EEG in childhood neurological diseases. Further studies are needed to show its clinical implications on diagnosis, treatment, and outcome prediction. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
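As a hedged sketch of one ingredient of the MSE method discussed above (the coarse-graining step; the per-scale sample-entropy computation is omitted):

```python
import numpy as np

# Coarse-graining step of multiscale entropy (MSE): at scale tau the series
# is replaced by non-overlapping window averages; a sample-entropy value is
# then computed at each scale (not shown here).
def coarse_grain(x, tau):
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

x = np.arange(12, dtype=float)
y = coarse_grain(x, 3)  # averages of [0,1,2], [3,4,5], [6,7,8], [9,10,11]
```

Repeating this for scales τ = 1, 2, 3, … and computing sample entropy on each coarse-grained series yields the complexity-versus-scale curves that MSE studies of EEG report.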
Open AccessArticle Logical Entropy and Logical Mutual Information of Experiments in the Intuitionistic Fuzzy Case
Entropy 2017, 19(8), 429; doi:10.3390/e19080429
Received: 4 July 2017 / Revised: 6 August 2017 / Accepted: 17 August 2017 / Published: 21 August 2017
PDF Full-text (304 KB) | HTML Full-text | XML Full-text
Abstract
In this contribution, we introduce the concepts of logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case, and study the basic properties of the suggested measures. Subsequently, by means of the suggested notion of logical entropy of an IF-partition, we define the logical entropy of an IF-dynamical system. It is shown that the logical entropy of IF-dynamical systems is invariant under isomorphism. Finally, an analogy of the Kolmogorov–Sinai theorem on generators for IF-dynamical systems is proved. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
Open AccessArticle Influence of Failure Probability Due to Parameter and Anchor Variance of a Freeway Dip Slope Slide—A Case Study in Taiwan
Entropy 2017, 19(8), 431; doi:10.3390/e19080431
Received: 19 June 2017 / Revised: 10 August 2017 / Accepted: 18 August 2017 / Published: 22 August 2017
PDF Full-text (5968 KB) | HTML Full-text | XML Full-text
Abstract
The traditional slope stability analysis used the Factor of Safety (FS) from limit equilibrium theory as the determinant: if the FS was greater than 1, the slope was considered "safe", and the uncertainty of variables or parameters in the analysis model was not considered. The objective of this research was to analyze the stability of a natural slope, in consideration of the characteristics of the rock layers and the variability of the pre-stressing force. Sensitivity and uncertainty analysis showed that the sensitivity to the pre-stressing force of the rock anchor was significantly smaller than that to the cohesion (c) of the rock layer and the varying influence of the friction angle (ϕ) in the rock layers. In addition, immersion in water would weaken the rock layers of the natural slope: when the cohesion c was reduced to 6 kPa and the friction angle ϕ decreased below 14°, the slope started to show instability and failure as FS became smaller than 1. The failure rate of the slope could be as high as 50%. By stabilizing with a rock anchor, the failure rate could be reduced below 3%, greatly improving the stability and reliability of the slope. Full article
Open AccessArticle Group-Constrained Maximum Correntropy Criterion Algorithms for Estimating Sparse Mix-Noised Channels
Entropy 2017, 19(8), 432; doi:10.3390/e19080432
Received: 6 July 2017 / Revised: 16 August 2017 / Accepted: 18 August 2017 / Published: 22 August 2017
PDF Full-text (2379 KB) | HTML Full-text | XML Full-text
Abstract
A group-constrained maximum correntropy criterion (GC-MCC) algorithm is proposed on the basis of the compressive sensing (CS) concept and zero attracting (ZA) techniques, and its estimation behavior is verified over sparse multi-path channels. The proposed algorithm is implemented by exerting different norm penalties on the two grouped channel coefficients to improve the channel estimation performance in a mixed noise environment. As a result, a zero attraction term is obtained from the expected l0 and l1 penalty techniques. Furthermore, a reweighting factor is adopted and incorporated into the zero-attraction term of the GC-MCC algorithm, which is then denoted as the reweighted GC-MCC (RGC-MCC) algorithm, to enhance the estimation performance. Both the GC-MCC and RGC-MCC algorithms are developed to fully exploit the inherent sparseness properties of sparse multi-path channels through the expected zero-attraction terms in their iterations. The channel estimation behaviors are discussed and analyzed over sparse channels in mixed Gaussian noise environments. The computer simulation results show that the estimated steady-state error is smaller and the convergence is faster than those of the previously reported MCC and sparse MCC algorithms. Full article
Open AccessArticle Entropy Generation Analysis of Wildfire Propagation
Entropy 2017, 19(8), 433; doi:10.3390/e19080433
Received: 30 June 2017 / Revised: 4 August 2017 / Accepted: 11 August 2017 / Published: 22 August 2017
PDF Full-text (2215 KB) | HTML Full-text | XML Full-text
Abstract
Entropy generation is commonly applied to describe the evolution of irreversible processes, such as heat transfer and turbulence. These are both dominating phenomena in fire propagation. In this paper, entropy generation analysis is applied to a grassland fire event, with the aim of finding possible links between entropy generation and propagation directions. The ultimate goal of such analysis consists in helping one to overcome possible limitations of the models usually applied to the prediction of wildfire propagation. These models are based on the application of the superimposition of the effects due to wind and slope, which has proven to fail in various cases. The analysis presented here shows that entropy generation allows a detailed analysis of the landscape propagation of a fire and can be thus applied to its quantitative description. Full article
(This article belongs to the Section Thermodynamics)
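As a minimal numerical illustration of the quantity this abstract builds on (with made-up values, not data from the paper), the entropy generation rate for steady heat transfer from a hot region to cooler surroundings follows directly from the two temperatures:

```python
# Entropy generation rate for steady heat transfer q between two thermal
# reservoirs. All values are illustrative assumptions, not from the paper.
q = 50_000.0    # heat transfer rate, W
T_hot = 1100.0  # flame-front temperature, K
T_cold = 300.0  # ambient temperature, K

# S_gen = q/T_cold - q/T_hot, non-negative whenever T_hot >= T_cold
S_gen = q * (1.0 / T_cold - 1.0 / T_hot)
print(round(S_gen, 1))  # 121.2 W/K
```

The paper's field analysis evaluates this kind of quantity locally over the burning landscape rather than between two lumped reservoirs.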
Review

Jump to: Editorial, Research, Other

Open AccessReview A View of Information-Estimation Relations in Gaussian Networks
Entropy 2017, 19(8), 409; doi:10.3390/e19080409
Received: 31 May 2017 / Revised: 31 July 2017 / Accepted: 1 August 2017 / Published: 9 August 2017
PDF Full-text (1948 KB) | HTML Full-text | XML Full-text
Abstract
Relations between estimation and information measures have received considerable attention from the information theory community. One of the most notable such relationships is the I-MMSE identity of Guo, Shamai and Verdú that connects the mutual information and the minimum mean square error (MMSE). This paper reviews several applications of the I-MMSE relationship to information theoretic problems arising in connection with multi-user channel coding. The goal of this paper is to review the different techniques used for such problems, as well as to emphasize the added value obtained from the information-estimation point of view. Full article
(This article belongs to the Special Issue Network Information Theory)
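The I-MMSE identity mentioned above, dI/dsnr = mmse(snr)/2 (in nats), can be checked numerically for the scalar Gaussian channel Y = sqrt(snr)·X + N with standard Gaussian X and N, where both sides are known in closed form:

```python
import numpy as np

def I(snr):
    # Mutual information of the scalar Gaussian channel, in nats.
    return 0.5 * np.log(1.0 + snr)

def mmse(snr):
    # Minimum mean square error of estimating X from Y.
    return 1.0 / (1.0 + snr)

snr, h = 2.0, 1e-6
# Central-difference approximation of dI/dsnr.
dI = (I(snr + h) - I(snr - h)) / (2 * h)
print(abs(dI - 0.5 * mmse(snr)) < 1e-8)  # True: dI/dsnr equals mmse/2
```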
Open AccessReview Securing Wireless Communications of the Internet of Things from the Physical Layer, An Overview
Entropy 2017, 19(8), 420; doi:10.3390/e19080420
Received: 3 July 2017 / Revised: 14 August 2017 / Accepted: 16 August 2017 / Published: 18 August 2017
PDF Full-text (1109 KB) | HTML Full-text | XML Full-text
Abstract
The security of the Internet of Things (IoT) is receiving considerable interest, as the low power constraints and complexity features of many IoT devices limit the use of conventional cryptographic techniques. This article provides an overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, focusing on the key topics of key generation and physical layer encryption. These schemes are lightweight and readily implementable, and thus offer practical solutions for effective IoT wireless security. Future research directions for making IoT-oriented physical layer security more robust and pervasive are also covered. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
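A toy sketch of the key-generation idea surveyed in this overview: two parties quantize reciprocal channel measurements into bits, leaving a small residual disagreement to be fixed by information reconciliation. The noise level and the single-threshold quantizer below are illustrative assumptions, not a scheme from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reciprocal channel: Alice and Bob observe the same fading gains,
# corrupted by independent measurement noise.
channel = rng.standard_normal(256)
alice = channel + 0.02 * rng.standard_normal(256)
bob = channel + 0.02 * rng.standard_normal(256)

# Single-threshold quantizer: sample above own median -> bit 1, else 0.
key_a = (alice > np.median(alice)).astype(int)
key_b = (bob > np.median(bob)).astype(int)

# Fraction of disagreeing bits, to be corrected by reconciliation.
mismatch = float(np.mean(key_a != key_b))
print(mismatch)
```

Practical schemes add guard bands around the threshold, reconciliation codes, and privacy amplification; the sketch only shows the quantization step.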
Other

Jump to: Editorial, Research, Review

Open AccessDiscussion Quality Systems. A Thermodynamics-Related Interpretive Model
Entropy 2017, 19(8), 418; doi:10.3390/e19080418
Received: 21 June 2017 / Revised: 9 August 2017 / Accepted: 14 August 2017 / Published: 17 August 2017
PDF Full-text (1547 KB) | HTML Full-text | XML Full-text
Abstract
In the present paper, a Quality Systems Theory is presented. Certifiable Quality Systems are treated and interpreted in accordance with a Thermodynamics-based approach. Analysis is also conducted on the relationship between Quality Management Systems (QMSs) and systems theories. A measure of entropy is proposed for QMSs, including a virtual document entropy and an entropy linked to processes and organisation. QMSs are also interpreted in light of Cybernetics, and interrelations between Information Theory and quality are also highlighted. A measure for the information content of quality documents is proposed. Such parameters can be used as adequacy indices for QMSs. From the discussed approach, suggestions for organising QMSs are also derived. Further interpretive thermodynamic-based criteria for QMSs are also proposed. The work represents the first attempt to treat quality organisational systems according to a thermodynamics-related approach. At this stage, no data are available to compare statements in the paper. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
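The abstract does not define its document-entropy measure, so as a hypothetical stand-in in the same spirit, the Shannon entropy of a document's word distribution gives one simple proxy for its information content:

```python
from collections import Counter
from math import log2

def doc_entropy(text):
    """Shannon entropy (bits per word) of the word distribution of a text --
    a simple illustrative proxy, not the paper's actual QMS measure."""
    words = text.lower().split()
    n = len(words)
    counts = Counter(words)
    return -sum(c / n * log2(c / n) for c in counts.values())

repetitive = "check check check check"
varied = "plan do check act review"
print(doc_entropy(varied) > doc_entropy(repetitive))  # True
```

A fully repetitive document carries zero entropy under this proxy, while a document of distinct words attains the maximum log2(n) bits per word.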

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18