Entropy, Volume 19, Issue 8 (August 2017) – 54 articles

Cover Story: How much uncertainty do we have about a given issue? And how relevant is it to another? Information theory provides a framework for quantifying these notions in terms of entropy, mutual information, and the like. However, it can be difficult to apply in practice, because difficult integrals appear in the calculations. I adapt the Nested Sampling algorithm used in Bayesian statistics so that it can calculate information-theoretic quantities, making applied information theory easier. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
2215 KiB  
Article
Entropy Generation Analysis of Wildfire Propagation
by Elisa Guelpa and Vittorio Verda
Entropy 2017, 19(8), 433; https://doi.org/10.3390/e19080433 - 22 Aug 2017
Cited by 4 | Viewed by 4725
Abstract
Entropy generation is commonly applied to describe the evolution of irreversible processes, such as heat transfer and turbulence. These are both dominating phenomena in fire propagation. In this paper, entropy generation analysis is applied to a grassland fire event, with the aim of finding possible links between entropy generation and propagation directions. The ultimate goal of this analysis is to help overcome possible limitations of the models usually applied to predict wildfire propagation. These models are based on superimposing the effects of wind and slope, which has proven to fail in various cases. The analysis presented here shows that entropy generation allows a detailed analysis of the landscape propagation of a fire and can thus be applied to its quantitative description. Full article
(This article belongs to the Section Thermodynamics)
2379 KiB  
Article
Group-Constrained Maximum Correntropy Criterion Algorithms for Estimating Sparse Mix-Noised Channels
by Yanyan Wang, Yingsong Li, Felix Albu and Rui Yang
Entropy 2017, 19(8), 432; https://doi.org/10.3390/e19080432 - 22 Aug 2017
Cited by 34 | Viewed by 4269
Abstract
A group-constrained maximum correntropy criterion (GC-MCC) algorithm is proposed on the basis of the compressive sensing (CS) concept and zero attracting (ZA) techniques, and its estimation behavior is verified over sparse multi-path channels. The proposed algorithm is implemented by exerting different norm penalties on the two grouped channel coefficients to improve the channel estimation performance in a mixed noise environment. As a result, a zero-attraction term is obtained from the expected ℓ0 and ℓ1 penalty techniques. Furthermore, a reweighting factor is adopted and incorporated into the zero-attraction term of the GC-MCC algorithm, which is denoted as the reweighted GC-MCC (RGC-MCC) algorithm, to enhance the estimation performance. Both the GC-MCC and RGC-MCC algorithms are developed to fully exploit the inherent sparseness of sparse multi-path channels via the expected zero-attraction terms in their iterations. The channel estimation behaviors are discussed and analyzed over sparse channels in mixed Gaussian noise environments. The computer simulation results show that the estimated steady-state error is smaller and the convergence is faster than those of the previously reported MCC and sparse MCC algorithms. Full article
5968 KiB  
Article
Influence of Failure Probability Due to Parameter and Anchor Variance of a Freeway Dip Slope Slide—A Case Study in Taiwan
by Shong-Loong Chen and Chia-Pang Cheng
Entropy 2017, 19(8), 431; https://doi.org/10.3390/e19080431 - 22 Aug 2017
Cited by 7 | Viewed by 5568
Abstract
Traditional slope stability analysis uses the Factor of Safety (FS) from Limit Equilibrium Theory as the determinant: if the FS is greater than 1, the slope is considered "safe", and the uncertainty of the variables or parameters in the analysis model is not considered. The objective of this research was to analyze the stability of a natural slope while taking into account the characteristics of the rock layers and the variability of the pre-stressing force. Sensitivity and uncertainty analysis showed that the sensitivity to the pre-stressing force of the rock anchor was significantly smaller than that to the cohesion (c) and the friction angle (ϕ) of the rock layers. In addition, immersion in water would weaken the rock layers of the natural slope: when the cohesion c was reduced to 6 kPa and the friction angle ϕ decreased below 14°, the slope began to show instability and failure as FS became smaller than 1. The failure probability of the slope could be as high as 50%. By stabilizing the slope with a rock anchor, the failure probability could be reduced below 3%, greatly improving its stability and reliability. Full article
304 KiB  
Article
Logical Entropy and Logical Mutual Information of Experiments in the Intuitionistic Fuzzy Case
by Dagmar Markechová and Beloslav Riečan
Entropy 2017, 19(8), 429; https://doi.org/10.3390/e19080429 - 21 Aug 2017
Cited by 12 | Viewed by 4407
Abstract
In this contribution, we introduce the concepts of logical entropy and logical mutual information of experiments in the intuitionistic fuzzy case, and study the basic properties of the suggested measures. Subsequently, by means of the suggested notion of logical entropy of an IF-partition, we define the logical entropy of an IF-dynamical system. It is shown that the logical entropy of IF-dynamical systems is invariant under isomorphism. Finally, an analogy of the Kolmogorov–Sinai theorem on generators for IF-dynamical systems is proved. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
227 KiB  
Article
The Potential Application of Multiscale Entropy Analysis of Electroencephalography in Children with Neurological and Neuropsychiatric Disorders
by Yen-Ju Chu, Chi-Feng Chang, Jiann-Shing Shieh and Wang-Tso Lee
Entropy 2017, 19(8), 428; https://doi.org/10.3390/e19080428 - 21 Aug 2017
Cited by 17 | Viewed by 5081
Abstract
Electroencephalography (EEG) is frequently used in functional neurological assessment of children with neurological and neuropsychiatric disorders. Multiscale entropy (MSE) can reveal complexity in both short and long time scales and is more feasible in the analysis of EEG. Entropy-based estimation of EEG complexity is a powerful tool in investigating the underlying disturbances of neural networks of the brain. Most neurological and neuropsychiatric disorders in childhood affect the early stage of brain development. The analysis of EEG complexity may show the influences of different neurological and neuropsychiatric disorders on different regions of the brain during development. This article aims to give a brief summary of current concepts of MSE analysis in pediatric neurological and neuropsychiatric disorders. Studies utilizing MSE or its modifications for investigating neurological and neuropsychiatric disorders in children were reviewed. Abnormal EEG complexity was shown in a variety of childhood neurological and neuropsychiatric diseases, including autism, attention deficit/hyperactivity disorder, Tourette syndrome, and epilepsy in infancy and childhood. MSE has been shown to be a powerful method for analyzing the non-linear anomaly of EEG in childhood neurological diseases. Further studies are needed to show its clinical implications on diagnosis, treatment, and outcome prediction. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
3212 KiB  
Article
Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations
by Badr F. Albanna, Christopher Hillar, Jascha Sohl-Dickstein and Michael R. DeWeese
Entropy 2017, 19(8), 427; https://doi.org/10.3390/e19080427 - 21 Aug 2017
Cited by 4 | Viewed by 7655
Abstract
Maximum entropy models are increasingly being used to describe the collective activity of neural populations with measured mean neural activities and pairwise correlations, but the full space of probability distributions consistent with these constraints has not been explored. We provide upper and lower bounds on the entropy for the minimum entropy distribution over arbitrarily large collections of binary units with any fixed set of mean values and pairwise correlations. We also construct specific low-entropy distributions for several relevant cases. Surprisingly, the minimum entropy solution has entropy scaling logarithmically with system size for any set of first- and second-order statistics consistent with arbitrarily large systems. We further demonstrate that some sets of these low-order statistics can only be realized by small systems. Our results show how only small amounts of randomness are needed to mimic low-order statistical properties of highly entropic distributions, and we discuss some applications for engineered and biological information transmission systems. Full article
(This article belongs to the Special Issue Thermodynamics of Information Processing)
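As a toy illustration of the entropy scaling discussed in the abstract above (this sketch is mine, not from the paper): when only the means are constrained and all pairwise correlations are zero, the maximum entropy distribution over binary units factorizes into independent Bernoulli units, so its entropy grows linearly with the number of units, in contrast to the logarithmic scaling the authors derive for the minimum entropy solution.

```python
import math

def bernoulli_entropy(p):
    """Entropy in nats of a single binary unit with mean p
    (the maximum-entropy binary distribution for that mean)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

# With zero pairwise correlations, the joint maximum-entropy
# distribution factorizes, so the unit entropies simply add:
means = [0.5, 0.2, 0.9]  # illustrative mean activities
h_total = sum(bernoulli_entropy(p) for p in means)
print(f"total entropy: {h_total:.4f} nats")
```

Adding more units with the same means increases this maximum entropy linearly, which is what makes the paper's logarithmic lower bound surprising.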
1038 KiB  
Article
Iterative QR-Based SFSIC Detection and Decoder Algorithm for a Reed–Muller Space-Time Turbo System
by Liang-Fang Ni, Yi Wang, Wei-Xia Li, Pei-Zhen Wang and Jia-Yan Zhang
Entropy 2017, 19(8), 426; https://doi.org/10.3390/e19080426 - 20 Aug 2017
Cited by 1 | Viewed by 4094
Abstract
An iterative QR-based soft feedback segment interference cancellation (QRSFSIC) detection and decoder algorithm for a Reed–Muller (RM) space-time turbo system is proposed in this paper. It forms the sufficient statistic for the minimum-mean-square error (MMSE) estimate according to QR decomposition-based soft feedback successive interference cancellation, stemming from the a priori log-likelihood ratio (LLR) of encoded bits. Then, the signal originating from the symbols of the reliable segment (those whose symbol reliability metric, in terms of an a posteriori LLR of encoded bits, is larger than a certain threshold) is iteratively cancelled with the QRSFSIC in order to obtain the residual signal for evaluating the symbols in the unreliable segment. This is repeated until the unreliable segment is empty, resulting in the extrinsic information for an RM turbo-coded bit with the greatest likelihood. Bridged by de-multiplexing and multiplexing, the iterative QRSFSIC detection is concatenated with an iterative trellis-based maximum a posteriori probability RM turbo decoder, as if a principal turbo detector and decoder were embedded with a subordinate iterative QRSFSIC detector and an RM turbo decoder, exchanging each other's detection and decoding soft-decision information iteratively. These three stages let the proposed algorithm approach the upper bound of the diversity. The simulation results also show that the proposed scheme outperforms the other suboptimum detectors considered in this paper. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
6929 KiB  
Article
A Sparse Multiwavelet-Based Generalized Laguerre–Volterra Model for Identifying Time-Varying Neural Dynamics from Spiking Activities
by Song Xu, Yang Li, Tingwen Huang and Rosa H. M. Chan
Entropy 2017, 19(8), 425; https://doi.org/10.3390/e19080425 - 20 Aug 2017
Cited by 6 | Viewed by 5398
Abstract
Modeling of a time-varying dynamical system provides insights into the functions of biological neural networks and contributes to the development of next-generation neural prostheses. In this paper, we have formulated a novel sparse multiwavelet-based generalized Laguerre–Volterra (sMGLV) modeling framework to identify the time-varying neural dynamics from multiple spike train data. First, the significant inputs are selected by using a group least absolute shrinkage and selection operator (LASSO) method, which can capture the sparsity within the neural system. Second, the multiwavelet-based basis function expansion scheme with an efficient forward orthogonal regression (FOR) algorithm aided by mutual information is utilized to rapidly capture the time-varying characteristics from the sparse model. Quantitative simulation results demonstrate that the proposed sMGLV model in this paper outperforms the initial full model and the state-of-the-art modeling methods in tracking performance for various time-varying kernels. Analyses of experimental data show that the proposed sMGLV model can capture the timing of transient changes accurately. The proposed framework will be useful to the study of how, when, and where information transmission processes across brain regions evolve in behavior. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)
589 KiB  
Article
Survey on Probabilistic Models of Low-Rank Matrix Factorizations
by Jiarong Shi, Xiuyun Zheng and Wei Yang
Entropy 2017, 19(8), 424; https://doi.org/10.3390/e19080424 - 19 Aug 2017
Cited by 12 | Viewed by 5991
Abstract
Low-rank matrix factorizations such as Principal Component Analysis (PCA), Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NMF) are a large class of methods for pursuing the low-rank approximation of a given data matrix. The conventional factorization models are based on the assumption that the data matrices are contaminated stochastically by some type of noise. Thus, point estimates of the low-rank components can be obtained by Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimation. In the past decade, a variety of probabilistic models of low-rank matrix factorizations have emerged. The most significant difference between low-rank matrix factorizations and their corresponding probabilistic models is that the latter treat the low-rank components as random variables. This paper surveys the probabilistic models of low-rank matrix factorizations. Firstly, we review some probability distributions commonly used in probabilistic models of low-rank matrix factorizations and introduce the conjugate priors of some probability distributions to simplify the Bayesian inference. Then we present the two main inference methods for probabilistic low-rank matrix factorizations, i.e., Gibbs sampling and variational Bayesian inference. Next, we roughly classify the important probabilistic models of low-rank matrix factorizations into several categories and review each in turn. The categories are defined by the underlying matrix factorization formulation, mainly PCA, matrix factorizations, robust PCA, NMF and tensor factorizations. Finally, we discuss research issues that need to be studied in the future. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
1463 KiB  
Article
An Improved System for Utilizing Low-Temperature Waste Heat of Flue Gas from Coal-Fired Power Plants
by Shengwei Huang, Chengzhou Li, Tianyu Tan, Peng Fu, Gang Xu and Yongping Yang
Entropy 2017, 19(8), 423; https://doi.org/10.3390/e19080423 - 19 Aug 2017
Cited by 22 | Viewed by 7168
Abstract
In this paper, an improved system to efficiently utilize the low-temperature waste heat from the flue gas of coal-fired power plants is proposed based on heat cascade theory. The essence of the proposed system is that the waste heat of exhausted flue gas is not only used to preheat air for assisting coal combustion as usual but also to heat up feedwater and for low-pressure steam extraction. Air preheating is performed by both the exhaust flue gas in the boiler island and the low-pressure steam extraction in the turbine island; thereby part of the flue gas heat originally exchanged in the air preheater can be saved and introduced to heat the feedwater and the high-temperature condensed water. Consequently, part of the high-pressure steam is saved for further expansion in the steam turbine, which results in additional net power output. Based on the design data of a typical 1000 MW ultra-supercritical coal-fired power plant in China, an in-depth analysis of the energy-saving characteristics of the improved waste heat utilization system (WHUS) and the conventional WHUS is conducted. When the improved WHUS is adopted in a typical 1000 MW unit, net power output increases by 19.51 MW, exergy efficiency improves to 45.46%, and net annual revenue reaches USD 4.741 million while for the conventional WHUS, these performance parameters are 5.83 MW, 44.80% and USD 1.244 million, respectively. The research described in this paper provides a feasible energy-saving option for coal-fired power plants. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
782 KiB  
Article
Computing Entropies with Nested Sampling
by Brendon J. Brewer
Entropy 2017, 19(8), 422; https://doi.org/10.3390/e19080422 - 18 Aug 2017
Cited by 8 | Viewed by 6017
Abstract
The Shannon entropy, and related quantities such as mutual information, can be used to quantify uncertainty and relevance. However, in practice, it can be difficult to compute these quantities for arbitrary probability distributions, particularly if the probability mass functions or densities cannot be evaluated. This paper introduces a computational approach, based on Nested Sampling, to evaluate entropies of probability distributions that can only be sampled. I demonstrate the method on three examples: (i) a simple Gaussian example where the key quantities are available analytically; (ii) an experimental design example about scheduling observations in order to measure the period of an oscillating signal; and (iii) predicting the future from the past in a heavy-tailed scenario. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
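For context, the target quantity in the paper above is the expected information content H = E[−log p(x)]. When p can be evaluated, the simplest estimator is a plain Monte Carlo average of −log p over samples; the paper's Nested Sampling method addresses the harder case where p cannot be evaluated. A minimal stdlib-Python sketch of the naive baseline on a Gaussian test case (illustrative code, not the author's implementation):

```python
import math
import random

random.seed(0)
sigma = 2.0
n = 200_000

# -log p(x) for a zero-mean Gaussian with standard deviation sigma:
# log-normalization constant plus the quadratic term.
log_norm = 0.5 * math.log(2.0 * math.pi * sigma**2)
total = 0.0
for _ in range(n):
    x = random.gauss(0.0, sigma)
    total += log_norm + x * x / (2.0 * sigma**2)

h_mc = total / n  # Monte Carlo estimate of the entropy (nats)
h_exact = 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)
print(f"MC estimate: {h_mc:.3f}  exact: {h_exact:.3f}")
```

With 200,000 samples the estimate agrees with the analytic value 0.5·log(2πeσ²) to a few thousandths of a nat; the point of the paper is to recover such quantities without access to log p.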
3013 KiB  
Article
Incipient Fault Diagnosis of Rolling Bearings Based on Impulse-Step Impact Dictionary and Re-Weighted Minimizing Nonconvex Penalty Lq Regular Technique
by Qing Li and Steven Y. Liang
Entropy 2017, 19(8), 421; https://doi.org/10.3390/e19080421 - 18 Aug 2017
Cited by 21 | Viewed by 5663 | Correction
Abstract
The periodical transient impulses caused by localized faults are sensitive and important characteristic information for rotating machinery fault diagnosis. However, it is very difficult to accurately extract transient impulses at the incipient fault stage because the fault impulse features are rather weak and always corrupted by heavy background noise. In this paper, a new transient impulse extraction methodology is proposed based on an impulse-step dictionary and re-weighted minimizing nonconvex penalty Lq regular (R-WMNPLq, q = 0.5) technique for the incipient fault diagnosis of rolling bearings. Prior to the sparse representation, the original vibration signal is preprocessed by the variational mode decomposition (VMD) technique. Due to the physical mechanism of periodic double impacts, including step-like and impulse-like impacts, an impulse-step impact dictionary atom can be designed to match the natural waveform structure of the vibration signals. On the other hand, traditional sparse reconstruction approaches such as orthogonal matching pursuit (OMP) and L1-norm regularization treat all vibration signal values equally and thus ignore the fact that the vibration peak values may carry more useful information about the periodical transient impulses and should be preserved at a larger weight value. Therefore, penalty and smoothing parameters are introduced in the reconstruction model to guarantee a reasonable distribution consistency of the peak vibration values. Lastly, the proposed technique is applied to accelerated lifetime testing of rolling bearings, where it achieves noticeably higher diagnostic accuracy than the OMP, L1-norm regularization and traditional spectral Kurtogram (SK) methods. Full article
1109 KiB  
Review
Securing Wireless Communications of the Internet of Things from the Physical Layer, An Overview
by Junqing Zhang, Trung Q. Duong, Roger Woods and Alan Marshall
Entropy 2017, 19(8), 420; https://doi.org/10.3390/e19080420 - 18 Aug 2017
Cited by 69 | Viewed by 11362
Abstract
The security of the Internet of Things (IoT) is receiving considerable interest as the low power constraints and complexity features of many IoT devices are limiting the use of conventional cryptographic techniques. This article provides an overview of recent research efforts on alternative approaches for securing IoT wireless communications at the physical layer, specifically the key topics of key generation and physical layer encryption. These schemes can be implemented and are lightweight, and thus offer practical solutions for providing effective IoT wireless security. Future research to make IoT-based physical layer security more robust and pervasive is also covered. Full article
(This article belongs to the Special Issue Information-Theoretic Security)
322 KiB  
Article
Deformed Jarzynski Equality
by Jiawen Deng, Juan D. Jaramillo, Peter Hänggi and Jiangbin Gong
Entropy 2017, 19(8), 419; https://doi.org/10.3390/e19080419 - 18 Aug 2017
Cited by 6 | Viewed by 5579
Abstract
The well-known Jarzynski equality, often written in the form e^(−βΔF) = ⟨e^(−βW)⟩, provides a non-equilibrium means to measure the free energy difference ΔF of a system at the same inverse temperature β based on an ensemble average of non-equilibrium work W. The accuracy of Jarzynski's measurement scheme was known to be determined by the variance of the exponential work, denoted as var(e^(−βW)). However, it was recently found that var(e^(−βW)) can systematically diverge in both classical and quantum cases. Such divergence will necessarily pose a challenge in the applications of the Jarzynski equality because it may dramatically reduce the efficiency in determining ΔF. In this work, we present a deformed Jarzynski equality for both classical and quantum non-equilibrium statistics, in an effort to reuse experimental data that already suffer from a diverging var(e^(−βW)). The main feature of our deformed Jarzynski equality is that it connects free energies at different temperatures and may still work efficiently subject to a diverging var(e^(−βW)). The conditions for applying our deformed Jarzynski equality may be met in experimental and computational situations. If so, then there is no need to redesign experimental or simulation methods. Furthermore, using the deformed Jarzynski equality, we exemplify the distinct behaviors of classical and quantum work fluctuations for the case of time-dependent driven harmonic oscillator dynamics and provide insights into the essential performance differences between classical and quantum Jarzynski equalities. Full article
(This article belongs to the Special Issue Quantum Thermodynamics)
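Rearranged, the Jarzynski equality gives the standard estimator ΔF = −(1/β)·ln⟨e^(−βW)⟩, and the abstract's point is that the variance of e^(−βW) controls how well a finite-sample average approximates this expectation. A toy stdlib-Python sketch (the Gaussian work distribution is chosen for illustration because its ΔF has a closed form; none of this is from the paper):

```python
import math
import random

random.seed(1)
beta = 1.0
mu_w, s_w = 2.0, 1.0   # illustrative Gaussian work distribution
n = 500_000

# Sample average of the exponential work exp(-beta * W)
acc = 0.0
for _ in range(n):
    w = random.gauss(mu_w, s_w)
    acc += math.exp(-beta * w)

# Jarzynski estimator: dF = -(1/beta) * ln <exp(-beta W)>
dF_est = -math.log(acc / n) / beta
# For Gaussian work, <exp(-beta W)> = exp(-beta*mu + beta^2 * s^2 / 2),
# so the exact free energy difference is mu - beta * s^2 / 2:
dF_exact = mu_w - beta * s_w**2 / 2.0
print(f"estimated dF: {dF_est:.3f}  exact: {dF_exact:.3f}")
```

Note that ΔF is below the mean work ⟨W⟩, as required by the second law; increasing s_w inflates var(e^(−βW)) and degrades the estimate for a fixed sample size, which is the failure mode the deformed equality targets.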
6516 KiB  
Article
Spatio-Temporal Variability of Soil Water Content under Different Crop Covers in Irrigation Districts of Northwest China
by Sufen Wang and Vijay P. Singh
Entropy 2017, 19(8), 410; https://doi.org/10.3390/e19080410 - 18 Aug 2017
Cited by 7 | Viewed by 4754
Abstract
The relationship between soil water content (SWC) and vegetation, topography, and climatic conditions is critical for developing effective agricultural water management practices and improving agricultural water use efficiency in arid areas. The purpose of this study was to determine how crop cover influenced the spatial and temporal variation of soil water. During this study, SWC was measured under maize and wheat for two years in northwest China. Statistical methods and entropy analysis were applied to investigate the spatio-temporal variability of SWC and the interaction between SWC and its influencing factors. The SWC variability changed within the field plot, with the standard deviation reaching a maximum value under intermediate mean SWC in different layers under various conditions (climatic conditions, soil conditions, crop type conditions). The spatio-temporal distribution of the SWC reflects the variability of precipitation and potential evapotranspiration (ET0) under different crop covers. The mutual entropy values between SWC and precipitation were similar in the two years under wheat cover but differed under maize cover. Moreover, the mutual entropy values at different depths differed under different crop covers. The entropy values changed with SWC following an exponential trend. The informational correlation coefficient (R0) between SWC and precipitation was higher than that between SWC and other factors at different soil depths. Precipitation was the dominant factor controlling the SWC variability, and the crop coefficient was the second dominant factor. This study highlights that precipitation is a paramount factor for investigating the spatio-temporal variability of soil water content in Northwest China. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
1732 KiB  
Article
Atangana–Baleanu and Caputo Fabrizio Analysis of Fractional Derivatives for Heat and Mass Transfer of Second Grade Fluids over a Vertical Plate: A Comparative Study
by Arshad Khan, Kashif Ali Abro, Asifa Tassaddiq and Ilyas Khan
Entropy 2017, 19(8), 279; https://doi.org/10.3390/e19080279 - 18 Aug 2017
Cited by 92 | Viewed by 7690
Abstract
This communication addresses a comparison of newly presented non-integer order derivatives with and without singular kernel, namely the Caputo–Fabrizio (CF) fractional derivative ∂^β/∂t^β and the Atangana–Baleanu (AB) fractional derivative ∂^α/∂t^α. For this purpose, the flow of a second-grade fluid with combined gradients of mass concentration and temperature distribution over a vertical flat plate is considered. The problem is first written in non-dimensional form and then, based on the AB and CF fractional derivatives, developed in fractional form; using the Laplace transform technique, exact solutions are established for both the AB and CF cases. They are then expressed in terms of the newly defined M-function M_p^q(z) and the generalized hypergeometric function _pΨ_q(z). The obtained exact solutions are plotted graphically for several pertinent parameters, and an interesting comparison is made between the AB and CF derivative results, with various similarities and differences. Full article
(This article belongs to the Special Issue Non-Equilibrium Thermodynamics of Micro Technologies)
Show Figures

Figure 1

1547 KiB  
Discussion
Quality Systems. A Thermodynamics-Related Interpretive Model
by Stefano A. Lollai
Entropy 2017, 19(8), 418; https://doi.org/10.3390/e19080418 - 17 Aug 2017
Cited by 4 | Viewed by 8210
Abstract
In the present paper, a Quality Systems Theory is presented. Certifiable Quality Systems are treated and interpreted in accordance with a Thermodynamics-based approach. Analysis is also conducted on the relationship between Quality Management Systems (QMSs) and systems theories. A measure of entropy is [...] Read more.
In the present paper, a Quality Systems Theory is presented. Certifiable Quality Systems are treated and interpreted in accordance with a Thermodynamics-based approach. Analysis is also conducted on the relationship between Quality Management Systems (QMSs) and systems theories. A measure of entropy is proposed for QMSs, including a virtual document entropy and an entropy linked to processes and organisation. QMSs are also interpreted in light of Cybernetics, and interrelations between Information Theory and quality are also highlighted. A measure for the information content of quality documents is proposed. Such parameters can be used as adequacy indices for QMSs. From the discussed approach, suggestions for organising QMSs are also derived. Further interpretive thermodynamic-based criteria for QMSs are also proposed. The work represents the first attempt to treat quality organisational systems according to a thermodynamics-related approach. At this stage, no data are available to compare statements in the paper. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)
Show Figures

Figure 1

1058 KiB  
Article
Power-Law Distributions from Sigma-Pi Structure of Sums of Random Multiplicative Processes
by Arthur Matsuo Yamashita Rios de Sousa, Hideki Takayasu, Didier Sornette and Misako Takayasu
Entropy 2017, 19(8), 417; https://doi.org/10.3390/e19080417 - 17 Aug 2017
Cited by 3 | Viewed by 6831
Abstract
We introduce a simple growth model in which the sizes of entities evolve as multiplicative random processes that start at different times. A novel aspect we examine is the dependence among entities. For this, we consider three classes of dependence between growth factors [...] Read more.
We introduce a simple growth model in which the sizes of entities evolve as multiplicative random processes that start at different times. A novel aspect we examine is the dependence among entities. For this, we consider three classes of dependence between growth factors governing the evolution of sizes: independence, Kesten dependence and mixed dependence. We take the sum X of the sizes of the entities as the representative quantity of the system, which has the structure of a sum of product terms (Sigma-Pi), whose asymptotic distribution function has a power-law tail behavior. We present evidence that the dependence type does not alter the asymptotic power-law tail behavior, nor the value of the tail exponent. However, the structure of the large values of the sum X is found to vary with the dependence between the growth factors (and thus the entities). In particular, for the independence case, we find that the large values of X are contributed by a single maximum size entity: the asymptotic power-law tail is the result of such single contribution to the sum, with this maximum contributing entity changing stochastically with time and with realizations. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Show Figures

Figure 1
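The Sigma-Pi structure, a sum of products of random growth factors with one product per entity birth time, can be sketched with a short Monte Carlo simulation of the independence case. All parameter values below are illustrative assumptions, not the paper's.

```python
import random

random.seed(42)

def sigma_pi_sum(steps, mu=-0.1, sigma=0.5):
    """Sum X of entity sizes, where one entity is born per step and every
    existing entity's size is multiplied by an i.i.d. lognormal growth
    factor each step (the independence case of the model)."""
    sizes = []
    for _ in range(steps):
        sizes.append(1.0)  # new entity born with unit size
        sizes = [s * random.lognormvariate(mu, sigma) for s in sizes]
    return sum(sizes)

samples = sorted(sigma_pi_sum(100) for _ in range(500))
median = samples[len(samples) // 2]
mean = sum(samples) / len(samples)
# A heavy right tail shows up as mean >> median and a dominant maximum term.
print(f"median = {median:.1f}, mean = {mean:.1f}, max = {samples[-1]:.1f}")
```

With E[log A] < 0 but occasional growth factors above one, typical products decay while rare paths dominate the sum, which is the mechanism behind the power-law tail described in the abstract.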

1049 KiB  
Article
Thermal and Exergetic Analysis of the Goswami Cycle Integrated with Mid-Grade Heat Sources
by Gokmen Demirkaya, Ricardo Vasquez Padilla, Armando Fontalvo, Maree Lake and Yee Yan Lim
Entropy 2017, 19(8), 416; https://doi.org/10.3390/e19080416 - 17 Aug 2017
Cited by 36 | Viewed by 6667
Abstract
This paper presents a theoretical investigation of a combined Power and Cooling Cycle that employs an Ammonia-Water mixture. The cycle combines a Rankine and an absorption refrigeration cycle. The Goswami cycle can be used in a wide range of applications including recovering waste [...] Read more.
This paper presents a theoretical investigation of a combined power and cooling cycle that employs an ammonia-water mixture. The cycle, known as the Goswami cycle, combines a Rankine cycle and an absorption refrigeration cycle. It can be used in a wide range of applications, including recovering waste heat as a bottoming cycle or generating power from non-conventional sources such as solar radiation or geothermal energy. A thermodynamic study of power and cooling co-generation is presented for heat source temperatures between 100 and 350 °C. A comprehensive analysis was conducted of the effect of several operation and configuration parameters, including the number of turbine stages and different superheating configurations, on the power output and the thermal and exergy efficiencies. Results showed the Goswami cycle can operate at an effective exergy efficiency of 60–80% with thermal efficiencies between 25% and 31%. The investigation also showed that multiple-stage turbines performed better than single-stage turbines in terms of power and thermal and exergy efficiencies when heat source temperatures remained above 200 °C, while the effect of turbine stages was almost negligible below 175 °C. For multiple turbine stages, partial superheating with a Single or Double Reheat stream gave better performance in terms of efficiency. Exergy destruction also increased as the heat source temperature increased. Full article
(This article belongs to the Special Issue Work Availability and Exergy Analysis)
Show Figures

Figure 1

4788 KiB  
Article
The Emergence of Hyperchaos and Synchronization in Networks with Discrete Periodic Oscillators
by Adrian Arellano-Delgado, Rosa Martha López-Gutiérrez, Miguel Angel Murillo-Escobar, Liliana Cardoza-Avendaño and César Cruz-Hernández
Entropy 2017, 19(8), 413; https://doi.org/10.3390/e19080413 - 16 Aug 2017
Cited by 7 | Viewed by 4545
Abstract
In this paper, the emergence of hyperchaos in a network with two very simple discrete periodic oscillators is presented. Uncoupled periodic oscillators may represent, in the crudest and simplest form, periodic oscillators in nature, for example fireflies, crickets, menstrual cycles of women, among [...] Read more.
In this paper, the emergence of hyperchaos in a network of two very simple discrete periodic oscillators is presented. Uncoupled periodic oscillators may represent, in the crudest and simplest form, periodic oscillators in nature, for example fireflies, crickets and the menstrual cycles of women, among others. Nevertheless, the emergence of hyperchaos in this kind of real-life network has not been proven. In particular, we focus this study on the emergence of hyperchaotic dynamics, given that such dynamics are mainly used in engineering applications such as cryptography, secure communications, biometric systems and telemedicine, among others. In order to corroborate that the emerging dynamics are hyperchaotic, several chaos and hyperchaos verification tests are conducted. In addition, the presented hyperchaotic coupled system achieves synchronization under the proposed coupling scheme. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
Show Figures

Figure 1

447 KiB  
Article
Person-Situation Debate Revisited: Phase Transitions with Quenched and Annealed Disorders
by Arkadiusz Jędrzejewski and Katarzyna Sznajd-Weron
Entropy 2017, 19(8), 415; https://doi.org/10.3390/e19080415 - 13 Aug 2017
Cited by 22 | Viewed by 6146
Abstract
We study the q-voter model driven by stochastic noise arising from one out of two types of nonconformity: anticonformity or independence. We compare two approaches that were inspired by the famous psychological controversy known as the person–situation debate. We relate the person [...] Read more.
We study the q-voter model driven by stochastic noise arising from one of two types of nonconformity: anticonformity or independence. We compare two approaches that were inspired by the famous psychological controversy known as the person–situation debate. We relate the person approach to the quenched disorder and the situation approach to the annealed disorder, and investigate how these two approaches influence the order–disorder phase transitions observed in the q-voter model with noise. We show that under a quenched disorder, differences between models with independence and anticonformity are weaker and only quantitative. In contrast, annealing has a much more profound impact on the system and leads to qualitative differences between models on a macroscopic level. Furthermore, discontinuous phase transitions may appear only under an annealed disorder. It seems that freezing the agents' behavior at the beginning of the simulation (introducing quenched disorder) supports second-order phase transitions, whereas allowing agents to reverse their attitude in time (incorporating annealed disorder) supports discontinuous ones. We show that anticonformity is insensitive to the type of disorder and gives the same result in all cases. We precede our study with a short insight from statistical physics into annealed vs. quenched disorder and a brief review of these two approaches in models of opinion dynamics. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Show Figures

Figure 1

6756 KiB  
Article
Effect of Slip Conditions and Entropy Generation Analysis with an Effective Prandtl Number Model on a Nanofluid Flow through a Stretching Sheet
by Mohammad Mehdi Rashidi and Munawwar Ali Abbas
Entropy 2017, 19(8), 414; https://doi.org/10.3390/e19080414 - 11 Aug 2017
Cited by 17 | Viewed by 4621
Abstract
This article describes the impact of slip conditions on nanofluid flow through a stretching sheet. Nanofluids are very helpful for enhancing convective heat transfer in a boundary layer flow. The Prandtl number also plays a major role in controlling the thermal and momentum [...] Read more.
This article describes the impact of slip conditions on nanofluid flow through a stretching sheet. Nanofluids are very helpful for enhancing convective heat transfer in a boundary layer flow. The Prandtl number also plays a major role in controlling the thermal and momentum boundary layers. For this purpose, we consider an effective Prandtl number model, derived from experimental analysis of a nano boundary layer, for steady, two-dimensional, incompressible flow through a stretching sheet. We consider γAl2O3-H2O and Al2O3-C2H6O2 nanoparticles for the governing flow problem. An entropy generation analysis is also presented with the help of the second law of thermodynamics. A numerical technique known as the Successive Taylor Series Linearization Method (STSLM) is used to solve the governing nonlinear boundary layer equations. The numerical and graphical results are discussed for two cases, i.e., (i) with and (ii) without the effective Prandtl number. From the graphical results, it is observed that the velocity and temperature profiles increase in the absence of the effective Prandtl number, while both grow larger when the effective Prandtl number is included. Further, a numerical comparison is presented with previously published results to validate the current methodology and results. Full article
(This article belongs to the Special Issue Entropy Generation in Nanofluid Flows)
Show Figures

Figure 1

1194 KiB  
Article
Relationship between Entropy, Corporate Entrepreneurship and Organizational Capabilities in Romanian Medium Sized Enterprises
by Eduard Gabriel Ceptureanu, Sebastian Ion Ceptureanu and Doina I. Popescu
Entropy 2017, 19(8), 412; https://doi.org/10.3390/e19080412 - 10 Aug 2017
Cited by 24 | Viewed by 6680
Abstract
This paper analyses the relations between entropy, organizational capabilities and corporate entrepreneurship. The results indicate strong links between strategy and corporate entrepreneurship, moderated by the organizational capabilities. We find that companies with strong organizational capabilities, using a systematic strategic approach, widely use corporate [...] Read more.
This paper analyses the relations between entropy, organizational capabilities and corporate entrepreneurship. The results indicate strong links between strategy and corporate entrepreneurship, moderated by organizational capabilities. We find that companies with strong organizational capabilities, using a systematic strategic approach, widely use corporate entrepreneurship as an instrument to fulfil their objectives. Our study contributes to the limited amount of empirical research on entropy in an organizational setting by highlighting the boundary conditions of this impact through the moderating effect of firms' organizational capabilities, and also contributes to the development of Econophysics as a fast-growing area of interdisciplinary science. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex and Disordered Systems)
Show Figures

Figure 1

1511 KiB  
Article
Solutions to the Cosmic Initial Entropy Problem without Equilibrium Initial Conditions
by Vihan M. Patel and Charles H. Lineweaver
Entropy 2017, 19(8), 411; https://doi.org/10.3390/e19080411 - 10 Aug 2017
Cited by 7 | Viewed by 6063
Abstract
The entropy of the observable universe is increasing. Thus, at earlier times the entropy was lower. However, the cosmic microwave background radiation reveals an apparently high entropy universe close to thermal and chemical equilibrium. A two-part solution to this cosmic initial entropy problem [...] Read more.
The entropy of the observable universe is increasing. Thus, at earlier times the entropy was lower. However, the cosmic microwave background radiation reveals an apparently high entropy universe close to thermal and chemical equilibrium. A two-part solution to this cosmic initial entropy problem is proposed. Following Penrose, we argue that the evenly distributed matter of the early universe is equivalent to low gravitational entropy. There are two competing explanations for how this initial low gravitational entropy comes about. (1) Inflation and baryogenesis produce a virtually homogeneous distribution of matter with a low gravitational entropy. (2) Dissatisfied with explaining a low gravitational entropy as the product of a ‘special’ scalar field, some theorists argue (following Boltzmann) for a “more natural” initial condition in which the entire universe is in an initial equilibrium state of maximum entropy. In this equilibrium model, our observable universe is an unusual low entropy fluctuation embedded in a high entropy universe. The anthropic principle and the fluctuation theorem suggest that this low entropy region should be as small as possible and have as large an entropy as possible, consistent with our existence. However, our low entropy universe is much larger than needed to produce observers, and we see no evidence for an embedding in a higher entropy background. The initial conditions of inflationary models are as natural as the equilibrium background favored by many theorists. Full article
(This article belongs to the Special Issue Entropy, Time and Evolution)
Show Figures

Figure 1

1948 KiB  
Review
A View of Information-Estimation Relations in Gaussian Networks
by Alex Dytso, Ronit Bustin, H. Vincent Poor and Shlomo Shamai (Shitz)
Entropy 2017, 19(8), 409; https://doi.org/10.3390/e19080409 - 9 Aug 2017
Cited by 11 | Viewed by 5038
Abstract
Relations between estimation and information measures have received considerable attention from the information theory community. One of the most notable such relationships is the I-MMSE identity of Guo, Shamai and Verdú that connects the mutual information and the minimum mean square error (MMSE). [...] Read more.
Relations between estimation and information measures have received considerable attention from the information theory community. One of the most notable such relationships is the I-MMSE identity of Guo, Shamai and Verdú that connects the mutual information and the minimum mean square error (MMSE). This paper reviews several applications of the I-MMSE relationship to information theoretic problems arising in connection with multi-user channel coding. The goal of this paper is to review the different techniques used on such problems, as well as to emphasize the added-value obtained from the information-estimation point of view. Full article
(This article belongs to the Special Issue Network Information Theory)
Show Figures

Figure 1
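The I-MMSE identity itself is easy to verify numerically in the scalar Gaussian-input case, where both sides have closed forms. This checks the identity only for Gaussian input over a Gaussian channel; the review covers far more general settings.

```python
import math

def mutual_info(snr):
    # I(snr) for Y = sqrt(snr)*X + N with X ~ N(0,1), N ~ N(0,1), in nats
    return 0.5 * math.log(1.0 + snr)

def mmse(snr):
    # minimum mean square error of estimating X from Y for Gaussian input
    return 1.0 / (1.0 + snr)

# I-MMSE identity: dI/dsnr = (1/2) * MMSE(snr)
for snr in (0.5, 1.0, 4.0):
    h = 1e-6
    dI = (mutual_info(snr + h) - mutual_info(snr - h)) / (2 * h)
    print(f"snr={snr}: dI/dsnr = {dI:.6f}, MMSE/2 = {mmse(snr) / 2:.6f}")
```

The numerical derivative of the mutual information matches half the MMSE at every tested SNR, as the identity predicts.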

1829 KiB  
Article
Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
by Luca Faes, Daniele Marinazzo and Sebastiano Stramaglia
Entropy 2017, 19(8), 408; https://doi.org/10.3390/e19080408 - 8 Aug 2017
Cited by 73 | Viewed by 10711
Abstract
Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information [...] Read more.
Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information decomposition and partial information decomposition, can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology firstly to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone. Full article
Show Figures

Figure 1
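For jointly Gaussian variables, every term in such a decomposition reduces to closed-form functions of covariances, which is what makes exact computation possible. A minimal sketch follows; the minimum-MI redundancy convention used here is a common assumption for Gaussian systems, not necessarily the paper's exact definition.

```python
import math

def gaussian_mi(r):
    """Exact mutual information (nats) between two jointly Gaussian
    variables with correlation coefficient r."""
    return -0.5 * math.log(1.0 - r * r)

def mmi_redundancy(r1, r2):
    """Redundant information two Gaussian sources carry about a target,
    under the minimum-mutual-information (MMI) convention (an assumption)."""
    return min(gaussian_mi(r1), gaussian_mi(r2))

# stronger correlation carries more information; redundancy is bounded
# by the weaker source
print(f"I(r=0.5) = {gaussian_mi(0.5):.4f} nats")
print(f"redundancy(0.9, 0.5) = {mmi_redundancy(0.9, 0.5):.4f} nats")
```

The full framework extends such determinant-based formulas to lagged covariances of a fitted VAR model, yielding redundant and synergistic transfer at each time scale.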

246 KiB  
Article
Generalized Maxwell Relations in Thermodynamics with Metric Derivatives
by José Weberszpil and Wen Chen
Entropy 2017, 19(8), 407; https://doi.org/10.3390/e19080407 - 7 Aug 2017
Cited by 21 | Viewed by 5570
Abstract
In this contribution, we develop the Maxwell generalized thermodynamical relations via the metric derivative model upon the mapping to a continuous fractal space. This study also introduces the total q-derivative expressions depending on two variables, to describe nonextensive statistical mechanics and also [...] Read more.
In this contribution, we develop the generalized Maxwell thermodynamical relations via the metric derivative model upon mapping to a continuous fractal space. This study also introduces total q-derivative expressions depending on two variables, to describe nonextensive statistical mechanics, as well as the α-total differentiation with conformable derivatives. Some results in the literature are re-obtained, such as the physical temperature defined by Sumiyoshi Abe. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
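For orientation, the classical Maxwell relation that such metric-derivative constructions generalize follows from the exactness of the Helmholtz free-energy differential (a standard textbook identity, not the paper's fractional form):

```latex
dF = -S\,dT - p\,dV
\quad\Longrightarrow\quad
\left(\frac{\partial S}{\partial V}\right)_{T}
= \left(\frac{\partial p}{\partial T}\right)_{V},
\qquad\text{since}\quad
\frac{\partial^{2} F}{\partial V\,\partial T}
= \frac{\partial^{2} F}{\partial T\,\partial V}.
```

In the metric-derivative setting, the partial derivatives above are replaced by their q-deformed or conformable counterparts, and the same exactness argument is imposed in the deformed calculus.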
306 KiB  
Article
Statistical Process Control for Unimodal Distribution Based on Maximum Entropy Distribution Approximation
by Xinghua Fang, Mingshun Song and Yizeng Chen
Entropy 2017, 19(8), 406; https://doi.org/10.3390/e19080406 - 7 Aug 2017
Cited by 1 | Viewed by 6323
Abstract
In statistical process control, the control chart utilizing the idea of maximum entropy distribution density level sets has been proven to perform well for monitoring quantities with multimodal distributions. However, it is too complicated to implement for quantities with unimodal distributions. [...] Read more.
In statistical process control, the control chart utilizing the idea of maximum entropy distribution density level sets has been proven to perform well for monitoring quantities with multimodal distributions. However, it is too complicated to implement for quantities with unimodal distributions. This article proposes a simplified control chart design method, based on maximum entropy, for monitored quantities with unimodal distributions. First, we use the maximum entropy distribution to approximate the unknown distribution of the monitored quantity. Then we directly take the value of the quantity as the monitoring statistic. Finally, the Lebesgue measure is applied to estimate the acceptance regions, and the one with minimum volume is chosen as the optimal in-control region of the monitored quantity. The results from two cases show that the proposed method has a higher detection capability than conventional control chart techniques when the monitored quantity has an asymmetric unimodal distribution. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Show Figures

Figure 1
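The minimum-volume (minimum Lebesgue measure) acceptance region has a simple empirical analogue for a unimodal quantity: the shortest interval covering a prescribed fraction of in-control observations. A rough sketch follows; the lognormal in-control data and the 95% coverage level are illustrative assumptions, and the authors' maximum entropy fitting step is omitted.

```python
import math
import random

def min_volume_interval(sample, coverage=0.95):
    """Shortest interval containing at least `coverage` of the sample:
    the empirical analogue of the minimum-volume in-control region."""
    xs = sorted(sample)
    n = len(xs)
    k = math.ceil(coverage * n)  # number of points the interval must contain
    best = min(range(n - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[best], xs[best + k - 1]

random.seed(0)
data = [random.lognormvariate(0.0, 0.5) for _ in range(5000)]  # skewed, unimodal
lo, hi = min_volume_interval(data)
print(f"in-control region: [{lo:.3f}, {hi:.3f}]")
```

For an asymmetric unimodal distribution this interval is shifted relative to the equal-tail interval, which is why a minimum-volume region detects shifts that conventional symmetric control limits miss.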

837 KiB  
Article
Intrinsic Losses Based on Information Geometry and Their Applications
by Yao Rong, Mengjiao Tang and Jie Zhou
Entropy 2017, 19(8), 405; https://doi.org/10.3390/e19080405 - 6 Aug 2017
Cited by 6 | Viewed by 4312
Abstract
One main interest of information geometry is to study the properties of statistical models that do not depend on the coordinate systems or model parametrization; thus, it may serve as an analytic tool for intrinsic inference in statistics. In this paper, under the [...] Read more.
One main interest of information geometry is to study the properties of statistical models that do not depend on the coordinate system or model parametrization; thus, it may serve as an analytic tool for intrinsic inference in statistics. In this paper, under the framework of Riemannian geometry and dual geometry, we revisit two commonly-used intrinsic losses, given respectively by the squared Rao distance and the symmetrized Kullback–Leibler divergence (or Jeffreys divergence). For an exponential family endowed with the Fisher metric and α-connections, the two loss functions are uniformly described as the energy difference along an α-geodesic path, for some α ∈ {−1, 0, 1}. Subsequently, the two intrinsic losses are utilized to develop Bayesian analyses of covariance matrix estimation and range-spread target detection. We provide an intrinsically unbiased covariance estimator, which is verified to be asymptotically efficient in terms of the intrinsic mean square error. The decision rules deduced by the intrinsic Bayesian criterion provide a geometrical justification for the constant false alarm rate detector based on the generalized likelihood ratio principle. Full article
(This article belongs to the Special Issue Information Geometry II)
Show Figures

Figure 1

8530 KiB  
Article
Thermal Transport and Entropy Production Mechanisms in a Turbulent Round Jet at Supercritical Thermodynamic Conditions
by Florian Ries, Johannes Janicka and Amsini Sadiki
Entropy 2017, 19(8), 404; https://doi.org/10.3390/e19080404 - 5 Aug 2017
Cited by 21 | Viewed by 5479
Abstract
In the present paper, thermal transport and entropy production mechanisms in a turbulent round jet of compressed nitrogen at supercritical thermodynamic conditions are investigated using a direct numerical simulation. First, thermal transport and its contribution to the mixture formation along with the anisotropy [...] Read more.
In the present paper, thermal transport and entropy production mechanisms in a turbulent round jet of compressed nitrogen at supercritical thermodynamic conditions are investigated using direct numerical simulation. First, thermal transport and its contribution to mixture formation are examined, along with the anisotropy of heat fluxes and temperature scales. Second, the entropy production rates during the thermofluid processes evolving in the supercritical flow are investigated in order to identify the causes of irreversibilities and to display advantageous locations of handling, along with the process regimes favorable to mixing. The analysis shows that (1) the jet disintegration process consists of four main stages under supercritical conditions (potential core, separation, pseudo-boiling, turbulent mixing), (2) the causes of irreversibilities are primarily due to heat transport and thermodynamic effects rather than turbulence dynamics and (3) heat fluxes and temperature scales appear anisotropic even at the smallest scales, which implies that anisotropic thermal diffusivity models might be appropriate in the context of both Reynolds-averaged Navier–Stokes (RANS) and large eddy simulation (LES) approaches when numerically modeling supercritical fluid flows. Full article
(This article belongs to the Section Thermodynamics)
Show Figures

Graphical abstract
