Entropy, Volume 19, Issue 11 (November 2017) – 65 articles

Cover Story: Leonardo da Vinci (1452–1519) already realized that turbulent flows can essentially be separated into what he called "principal current" and "random and reverse motion". In this issue, a first theoretical approach to turbulence as a critical phenomenon with the two phases "laminar streaks" and "vorticity-rich islands" is proposed, in the form of a mean-field theory with strong analogies to magnetism. Existing experiments give qualitative support to such a macroscopic description of turbulence.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Dynamic and Thermodynamic Properties of a CA Engine with Non-Instantaneous Adiabats
by Ricardo T. Paéz-Hernández, Norma Sánchez-Salas, Juan C. Chimal-Eguía and Delfino Ladino-Luna
Entropy 2017, 19(11), 632; https://doi.org/10.3390/e19110632 - 22 Nov 2017
Cited by 1 | Viewed by 2979
Abstract
This paper presents an analysis of a Curzon and Ahlborn thermal engine model in which both internal irreversibilities and non-instantaneous adiabatic branches are considered, operating in the maximum ecological function and maximum power output regimes. Its thermodynamic properties are shown, and an analysis of its local dynamic stability is performed. Throughout the work, the results are compared with those obtained previously for the case in which the adiabatic branches were assumed to be instantaneous. The comparison indicates better thermodynamic performance for the model with instantaneous adiabatic branches, but improved robustness when non-instantaneous adiabatic branches are considered.
(This article belongs to the Section Thermodynamics)
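For context, the two operating regimes named in the abstract are defined by standard finite-time-thermodynamics expressions. The forms below are the textbook ones (the Curzon–Ahlborn efficiency at maximum power and the ecological function of Angulo-Brown), not equations quoted from the paper:

```latex
% Efficiency at maximum power for a Curzon--Ahlborn engine operating
% between reservoirs at temperatures T_h > T_c:
\eta_{\mathrm{CA}} = 1 - \sqrt{T_c / T_h}

% Ecological function: power output P penalized by the entropy
% production \sigma, weighted by the cold-reservoir temperature:
E = P - T_c \, \sigma
```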
Article
On Normalized Mutual Information: Measure Derivations and Properties
by Tarald O. Kvålseth
Entropy 2017, 19(11), 631; https://doi.org/10.3390/e19110631 - 22 Nov 2017
Cited by 82 | Viewed by 11150
Abstract
Starting with a new formulation for the mutual information (MI) between a pair of events, this paper derives alternative upper bounds and extends those to the case of two discrete random variables. Normalized mutual information (NMI) measures are then obtained from those bounds, emphasizing the use of least upper bounds. Conditional NMI measures are also derived for three different events and three different random variables. Since the MI formulation for a pair of events is always nonnegative, it can properly be extended to include weighted MI and NMI measures for pairs of events or for random variables that are analogous to the well-known weighted entropy. This weighted MI is generalized to the case of continuous random variables. Such weighted measures have the advantage over previously proposed measures of always being nonnegative. A simple transformation is derived for the NMI, such that the transformed measures have the value-validity property necessary for making various appropriate comparisons between values of those measures. A numerical example is provided.
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
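As a concrete anchor, the snippet below computes the mutual information of two discrete random variables from their joint distribution and normalizes it by min{H(X), H(Y)}, one valid least upper bound on I(X;Y). It illustrates the general idea only and does not reproduce the paper's specific derivations:

```python
import numpy as np

def normalized_mutual_information(joint):
    """NMI of a discrete joint distribution, normalized by min{H(X), H(Y)}."""
    p = np.asarray(joint, dtype=float)
    p /= p.sum()                        # ensure a proper distribution
    px, py = p.sum(axis=1), p.sum(axis=0)
    # Mutual information I(X;Y) = sum p(x,y) log( p(x,y) / (p(x) p(y)) )
    outer = np.outer(px, py)
    nz = p > 0
    mi = np.sum(p[nz] * np.log(p[nz] / outer[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / min(hx, hy)             # lies in [0, 1]

# Example: a noisy dependence between X and Y
print(normalized_mutual_information([[0.4, 0.1], [0.1, 0.4]]))
```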
Article
Metacomputable
by Piotr Bołtuć
Entropy 2017, 19(11), 630; https://doi.org/10.3390/e19110630 - 22 Nov 2017
Cited by 1 | Viewed by 3424
Abstract
The paper introduces the notion of "metacomputable" processes as those which are the product of computable processes. This notion is interesting in instances where metacomputable processes may not be computable themselves, but are produced by computable ones. The notion of computability used here relies on Turing computability. When we talk about something being non-computable, this can be viewed as computation that incorporates Turing's oracle, maybe a true randomizer (perhaps a quantum one). The notion of "processes" is used broadly, so that it also covers "objects" under a functional description; for the sake of this paper, an object is seen as computable if the processes that fully describe the relevant aspects of its functioning are computable. The paper also introduces a distinction between phenomenal content and the epistemic subject which holds that content. The distinction provides an application of the notion of the metacomputable. In accordance with the functional definition of computable objects sketched out above, it is possible to think of objects, such as brains, as being computable. If we take the functionality of brains relevant for consideration to be their supposed ability to generate first-person consciousness, and if they were computable in this regard, it would mean that brains, as generators of consciousness, could be described, straightforwardly, by Turing-computable mathematical functions. If there were other, maybe artificial, generators of first-person consciousness, then we could hope to design those as Turing-computable machines as well. However, thinking of such generators of consciousness as computable does not preclude the stream of consciousness being non-computable. This is the main point of this article: computable processes, including functionally described machines, may be able to generate incomputable products. Those products, while not computable, are metacomputable, by the regulative definition introduced in this article. Another example of a metacomputable process that is not also computable would be a true randomizer, if we were able to build one. Presumably, it would be built according to a computable design, e.g., by a machine designed using AutoCAD that could be programmed into an industrial robot; yet its product, a perfect randomizer, would be incomputable. The last point belongs to ontology in the theory of computability. The claim that computable objects, or processes, may produce incomputable ones does not commit us to what I call computational monism—the idea that non-computable processes may, strictly speaking, be transformed into computable ones. Metacomputable objects, or processes, may originate from computable systems (systems being understood here as complex, structured objects or processes) that have non-computable admixtures. Such processes are computable as long as those non-computable admixtures are latent, or otherwise irrelevant for a given functionality, and they are non-computable if the admixtures become active and relevant. An ontology in which computational processes, or objects, can produce non-computable processes, or objects, iff the former have non-computable components, may be termed computational dualism. Such objects or processes may be computable despite containing non-computable elements, in particular if there is an on/off switch for those non-computable processes, and it is off. One kind of such switch is provided, in biology, by latent genes that become active only in specific environmental situations, or at a given age. Both ontologies, computational dualism and computational monism, are compatible with some non-computable processes being metacomputable.
Article
Parametric PET Image Reconstruction via Regional Spatial Bases and Pharmacokinetic Time Activity Model
by Naoki Kawamura, Tatsuya Yokota, Hidekata Hontani, Muneyuki Sakata and Yuichi Kimura
Entropy 2017, 19(11), 629; https://doi.org/10.3390/e19110629 - 22 Nov 2017
Cited by 2 | Viewed by 3939
Abstract
The reconstruction of a Positron Emission Tomography (PET) image from sinogram data is known to be very sensitive to measurement noise; reconstructing PET images with high signal-to-noise ratios remains an important research topic. In this paper, we propose a new reconstruction method for a temporal series of PET images from a temporal series of sinogram data. In the proposed method, PET images are reconstructed by minimizing the Kullback–Leibler divergence between the observed sinogram data and sinogram data derived from a parametric model of PET images. The contributions of the proposed method include the following: (1) regions of targets in images are explicitly expressed using a set of spatial bases in order to ignore the noise in the background; (2) a parametric time activity model of PET images is explicitly introduced as a constraint; and (3) an algorithm for solving the optimization problem is clearly described. To demonstrate the advantages of the proposed method, quantitative evaluations are performed using both synthetic and clinical data of human brains.
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
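The core optimization step, minimizing the Kullback–Leibler divergence between observed and model sinograms, can be illustrated with the classical MLEM multiplicative update, which this kind of method builds on. The sketch below is plain MLEM on a toy system matrix, not the authors' full algorithm (which adds regional spatial bases and a pharmacokinetic time activity model):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Classical MLEM: minimizes the (generalized) KL divergence between
    measured counts y and the forward projection A @ x, over x >= 0."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # sensitivity image, sum_i A_ij
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens        # multiplicative update
    return x

# Toy example: 3 detector bins, 2 image voxels
A = np.array([[1.0, 0.2], [0.5, 0.5], [0.2, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                           # noiseless "sinogram"
print(mlem(A, y))                        # approaches x_true
```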
Article
Design of Rate-Compatible Parallel Concatenated Punctured Polar Codes for IR-HARQ Transmission Schemes
by Jian Jiao, Sha Wang, Bowen Feng, Shushi Gu, Shaohua Wu and Qinyu Zhang
Entropy 2017, 19(11), 628; https://doi.org/10.3390/e19110628 - 21 Nov 2017
Cited by 21 | Viewed by 4429
Abstract
In this paper, we propose rate-compatible (RC) parallel concatenated punctured polar (PCPP) codes for incremental redundancy hybrid automatic repeat request (IR-HARQ) transmission schemes, which can transmit multiple data blocks over a time-varying channel. The PCPP coding scheme can provide RC polar coding blocks in order to adapt to channel variations. First, we investigate an improved random puncturing (IRP) pattern for the PCPP coding scheme, motivated by the code-rate and block-length limitations of conventional polar codes. The proposed IRP algorithm selects puncturing bits only from the frozen-bit set and keeps the information bits unchanged during puncturing, which improves decoding performance by 0.2–1 dB over the existing random puncturing (RP) algorithm. Then, we develop an RC IR-HARQ transmission scheme based on PCPP codes. By analyzing the overhead of the previously decoded PCPP coding block in our IR-HARQ scheme, the optimal initial code-rate can be determined for each new PCPP coding block over time-varying channels. Simulation results show that the average number of transmissions is about 1.8 per PCPP coding block in our RC IR-HARQ scheme with a 2-level PCPP encoding construction, roughly half the average number of transmissions required by existing RC polar coding schemes.
(This article belongs to the Section Information Theory, Probability and Statistics)
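The IRP constraint described above, drawing puncture positions only from the frozen set so that information bits are untouched, reduces in code to a constrained random choice. A minimal sketch under our own index conventions (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def frozen_only_puncturing(n, info_set, n_punct, seed=0):
    """Draw puncture positions uniformly from the frozen set only, so that
    information bits survive puncturing (the IRP-style constraint)."""
    rng = np.random.default_rng(seed)
    frozen = np.setdiff1d(np.arange(n), np.asarray(info_set))
    if n_punct > frozen.size:
        raise ValueError("cannot puncture more bits than the frozen set holds")
    return np.sort(rng.choice(frozen, size=n_punct, replace=False))

# Example: a length-8 polar code with information set {3, 5, 6, 7}
print(frozen_only_puncturing(8, info_set=[3, 5, 6, 7], n_punct=2))
```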
Article
Variational Characterization of Free Energy: Theory and Algorithms
by Carsten Hartmann, Lorenz Richter, Christof Schütte and Wei Zhang
Entropy 2017, 19(11), 626; https://doi.org/10.3390/e19110626 - 20 Nov 2017
Cited by 23 | Viewed by 6694
Abstract
The article surveys and extends variational formulations of the thermodynamic free energy and discusses their information-theoretic content from the perspective of mathematical statistics. We revisit the well-known Jarzynski equality for nonequilibrium free energy sampling within the framework of importance sampling and Girsanov change-of-measure transformations. The implications of the different variational formulations for designing efficient stochastic optimization and nonequilibrium simulation algorithms for computing free energies are discussed and illustrated.
(This article belongs to the Special Issue Understanding Molecular Dynamics via Stochastic Processes)
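As a concrete anchor for the Jarzynski equality, exp(-β ΔF) = E[exp(-β W)], the free energy difference can be estimated from sampled nonequilibrium work values. Below is a deliberately naive Monte Carlo estimator on synthetic Gaussian work data, for which the exact answer ΔF = μ - β σ²/2 is known; it illustrates the bare estimator only, not the importance-sampling constructions analyzed in the paper:

```python
import numpy as np

def jarzynski_free_energy(work, beta=1.0):
    """Estimate dF from nonequilibrium work samples via
    exp(-beta * dF) = E[exp(-beta * W)]  (Jarzynski equality).
    Uses a log-sum-exp trick for numerical stability."""
    w = -beta * np.asarray(work, dtype=float)
    m = w.max()
    log_mean = m + np.log(np.mean(np.exp(w - m)))
    return -log_mean / beta

# Synthetic Gaussian work samples: exact dF = mu - beta * sigma**2 / 2
rng = np.random.default_rng(1)
mu, sigma, beta = 5.0, 1.0, 1.0
work = rng.normal(mu, sigma, size=200_000)
print(jarzynski_free_energy(work, beta), mu - beta * sigma**2 / 2)
```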
Article
Robust-BD Estimation and Inference for General Partially Linear Models
by Chunming Zhang and Zhengjun Zhang
Entropy 2017, 19(11), 625; https://doi.org/10.3390/e19110625 - 20 Nov 2017
Cited by 1 | Viewed by 3974
Abstract
The classical quadratic loss for the partially linear model (PLM) and the likelihood function for the generalized PLM are not resistant to outliers. This inspires us to propose a class of "robust-Bregman divergence (BD)" estimators of both the parametric and nonparametric components in the general partially linear model (GPLM), which allows the distribution of the response variable to be partially specified, without being fully known. Using the local-polynomial function estimation method, we propose a computationally efficient procedure for obtaining "robust-BD" estimators and establish the consistency and asymptotic normality of the "robust-BD" estimator of the parametric component $\beta_o$. For inference procedures on $\beta_o$ in the GPLM, we show that the Wald-type test statistic $W_n$ constructed from the "robust-BD" estimators is asymptotically distribution-free under the null, whereas the likelihood-ratio-type test statistic $\Lambda_n$ is not. This provides an insight into the distinction from the asymptotic equivalence (Fan and Huang 2005) between $W_n$ and $\Lambda_n$ in the PLM constructed from profile least-squares estimators using the non-robust quadratic loss. Numerical examples illustrate the computational effectiveness of the proposed "robust-BD" estimators and robust Wald-type test in the presence of outlying observations.
Article
Re-Evaluating Electromyogram–Force Relation in Healthy Biceps Brachii Muscles Using Complexity Measures
by Xiaofei Zhu, Xu Zhang, Xiao Tang, Xiaoping Gao and Xiang Chen
Entropy 2017, 19(11), 624; https://doi.org/10.3390/e19110624 - 19 Nov 2017
Cited by 10 | Viewed by 5261
Abstract
The objective of this study is to re-evaluate the relation between surface electromyogram (EMG) and muscle contraction torque in biceps brachii (BB) muscles of healthy subjects using two different complexity measures. Ten healthy subjects were recruited and asked to complete a series of elbow flexion tasks at different isometric muscle contraction levels, ranging from 10% to 80% of maximum voluntary contraction (MVC) in increments of 10%. Meanwhile, both the elbow flexion torque and surface EMG data from the muscle were recorded. The root mean square (RMS), sample entropy (SampEn) and fuzzy entropy (FuzzyEn) of the corresponding EMG data were analyzed for each contraction level, and the relation between EMG and muscle torque was accordingly quantified. The experimental results showed a nonlinear relation between the traditional RMS amplitude of EMG and the muscle torque. By contrast, the FuzzyEn of EMG exhibited a stronger linear correlation with the muscle torque than the RMS amplitude did, which indicates its value for estimating BB muscle strength in a simple and straightforward manner. In addition, the SampEn of EMG was found to be insensitive to the varying muscle torque, remaining almost flat as muscle force increased. This character of SampEn implies its potential application as a promising surface EMG biomarker for examining neuromuscular changes while remaining robust to variations in muscle strength.
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
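Both complexity measures used here have compact textbook definitions (SampEn from Richman and Moorman; FuzzyEn from Chen et al.). The following is a compact reference implementation under illustrative parameter choices (m = 2, r = 0.2 times the signal SD), not necessarily the authors' exact settings:

```python
import numpy as np

def _templates(x, m):
    return np.array([x[i:i + m] for i in range(len(x) - m)])

def _pairwise_cheb(t):
    # Chebyshev distances between all distinct template pairs
    d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
    return d[np.triu_indices(len(t), k=1)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -log of the conditional probability that sequences matching
    for m points also match for m + 1 points (tolerance r * SD)."""
    x = np.asarray(x, float)
    tol = r * x.std()
    b = np.mean(_pairwise_cheb(_templates(x, m)) <= tol)
    a = np.mean(_pairwise_cheb(_templates(x, m + 1)) <= tol)
    return -np.log(a / b)

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """FuzzyEn: the hard threshold is replaced by an exponential membership
    exp(-(d / r)^n), computed on baseline-removed templates."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def phi(mm):
        t = _templates(x, mm)
        t = t - t.mean(axis=1, keepdims=True)   # remove local baseline
        return np.mean(np.exp(-(_pairwise_cheb(t) / tol) ** n))
    return -np.log(phi(m + 1) / phi(m))

sig = np.sin(np.linspace(0, 20 * np.pi, 1000))
sig += 0.1 * np.random.default_rng(0).standard_normal(sig.size)
print(sample_entropy(sig), fuzzy_entropy(sig))
```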
Article
Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy
by Duo Hao, Qiuming Li and Chengwei Li
Entropy 2017, 19(11), 623; https://doi.org/10.3390/e19110623 - 18 Nov 2017
Cited by 6 | Viewed by 3741
Abstract
Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motion, whereas subtracting this sum from the GMV yields the intentional motion. The proposed stabilization method is compared with several known methods, namely the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
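The relative entropy step is easy to isolate: given two already-extracted modes, their amplitude distributions can be estimated by histograms on a shared support and compared via the Kullback–Leibler divergence. A minimal sketch (the binning choices are ours, and the VMD step itself is omitted):

```python
import numpy as np

def relative_entropy(mode_a, mode_b, bins=64):
    """KL divergence D(P_a || P_b) between the amplitude distributions of
    two decomposed motion modes, estimated with histograms on a shared
    support (additive smoothing avoids log(0))."""
    lo = min(mode_a.min(), mode_b.min())
    hi = max(mode_a.max(), mode_b.max())
    pa, _ = np.histogram(mode_a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(mode_b, bins=bins, range=(lo, hi))
    pa = pa.astype(float) + 1e-10
    pb = pb.astype(float) + 1e-10
    pa /= pa.sum()
    pb /= pb.sum()
    return float(np.sum(pa * np.log(pa / pb)))

# Example: a slow "intentional" mode vs. a high-frequency "jitter" mode
t = np.linspace(0.0, 1.0, 2000)
slow = np.sin(2 * np.pi * t)
jitter = 0.3 * np.sin(80 * np.pi * t) \
    + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(relative_entropy(slow, jitter))
```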
Article
Inquiry Calculus and the Issue of Negative Higher Order Informations
by H. R. Noel Van Erp, Ronald O. Linger and Pieter H. A. J. M. Van Gelder
Entropy 2017, 19(11), 622; https://doi.org/10.3390/e19110622 - 18 Nov 2017
Cited by 5 | Viewed by 3202
Abstract
In this paper, we give the derivation of an inquiry calculus, or, equivalently, a Bayesian information theory. From simple orderings follow lattices, or, equivalently, algebras. Lattices admit a quantification, or, equivalently, algebras may be extended to calculi. The general rules of quantification are the sum and chain rules. Probability theory follows from a quantification on the specific lattice of statements that has an upper context. Inquiry calculus follows from a quantification on the specific lattice of questions that has a lower context. We give here a relevance measure and a product rule for relevances which, taken together with the sum rule of relevances, allow us to perform inquiry analyses in an algorithmic manner.
(This article belongs to the Special Issue Maximum Entropy and Bayesian Methods)
Article
Surface Interaction of Nanoscale Water Film with SDS from Computational Simulation and Film Thermodynamics
by Tiefeng Peng, Qibin Li, Longhua Xu, Chao He and Liqun Luo
Entropy 2017, 19(11), 620; https://doi.org/10.3390/e19110620 - 18 Nov 2017
Cited by 10 | Viewed by 5293
Abstract
Foam systems have been attracting extensive attention due to their importance in a variety of applications, e.g., in the cleaning industry and in bubble flotation. In the context of flotation chemistry, flotation performance is strongly affected by bubble coalescence, which in turn depends significantly on the surface forces upon the liquid film between bubbles. Conventionally, the unusually strong short-range repulsive surface interactions of Newton black films (NBF) between two interfaces with thickness of less than 5 nm could not be incorporated into the classical Derjaguin, Landau, Verwey, and Overbeek (DLVO) theory. This non-DLVO interaction increases exponentially with decreasing film thickness and plays a crucial role in determining liquid film stability; however, its mechanism and origin are still unclear. In the present work, we investigate the surface interaction of free-standing sodium dodecyl-sulfate (SDS) nanoscale black films in terms of disjoining pressure using the molecular simulation method. The disjoining pressure and film tension of the aqueous nanoscale SDS-NBF, consisting of a water core coated with SDS surfactants, were quantitatively determined as functions of film thickness by a post-processing technique derived from film thermodynamics.
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
Article
An Analysis of Information Dynamic Behavior Using Autoregressive Models
by Amanda Oliveira, Adrião D. Dória Neto and Allan Martins
Entropy 2017, 19(11), 612; https://doi.org/10.3390/e19110612 - 18 Nov 2017
Cited by 3 | Viewed by 3241
Abstract
Information Theory is a branch of mathematics, more specifically of probability theory, that studies the quantification of information. Recently, several studies have successfully used Information Theoretic Learning (ITL) as a new technique for unsupervised learning. In these works, information measures are used as the criterion of optimality in learning. In this article, we analyze a still-unexplored aspect of these information measures: their dynamic behavior. Autoregressive models (linear and non-linear) are used to represent the dynamics in information measures. As a source of dynamic information, videos with different characteristics, such as fading and monotonous sequences, are used.
Article
Fault Detection for Vibration Signals on Rolling Bearings Based on the Symplectic Entropy Method
by Min Lei, Guang Meng and Guangming Dong
Entropy 2017, 19(11), 607; https://doi.org/10.3390/e19110607 - 18 Nov 2017
Cited by 17 | Viewed by 4293
Abstract
Bearing vibration response studies are crucial for the condition monitoring of bearings and the quality inspection of rotating machinery systems. However, it is still very difficult to diagnose bearing faults, especially rolling element faults, due to the complex, high-dimensional and nonlinear characteristics of vibration signals as well as the strong background noise. A novel nonlinear analysis method—the symplectic entropy (SymEn) measure—is proposed to analyze the measured signals for fault monitoring of rolling bearings. The core of the SymEn approach is an entropy analysis based on the symplectic principal components. The dynamical characteristics of the rolling bearing data are analyzed using the SymEn method. Unlike techniques that rely on high-dimensional features in the time domain, frequency domain, or the empirical mode decomposition (EMD)/wavelet domain, the SymEn approach constructs low-dimensional (i.e., two-dimensional) features based on the SymEn estimate. Vibration signals from our experiments and from the Case Western Reserve University Bearing Data Center are used to verify the effectiveness of the proposed method. It is also found that faulty bearings have a great influence on the other, normal bearings. In summary, the results indicate that the proposed method can be used to detect rolling bearing faults.
(This article belongs to the Section Complexity)
Essay
Thermodynamics: The Unique Universal Science
by Wassim M. Haddad
Entropy 2017, 19(11), 621; https://doi.org/10.3390/e19110621 - 17 Nov 2017
Cited by 26 | Viewed by 20896
Abstract
Thermodynamics is a physical branch of science that governs the thermal behavior of dynamical systems, from those as simple as refrigerators to those as complex as our expanding universe. The laws of thermodynamics involving conservation of energy and nonconservation of entropy are, without a doubt, two of the most useful and general laws in all sciences. The first law of thermodynamics, according to which energy cannot be created or destroyed, merely transformed from one form to another, and the second law of thermodynamics, according to which the usable energy in an adiabatically isolated dynamical system is always diminishing in spite of the fact that energy is conserved, have had an impact far beyond science and engineering. In this paper, we trace the history of thermodynamics from its classical to its postmodern forms, and present a tutorial and didactic exposition of thermodynamics as it pertains to some of the deepest secrets of the universe.
Article
Modal Strain Energy-Based Debonding Assessment of Sandwich Panels Using a Linear Approximation with Maximum Entropy
by Viviana Meruane, Matias Lasen, Enrique López Droguett and Alejandro Ortiz-Bernardin
Entropy 2017, 19(11), 619; https://doi.org/10.3390/e19110619 - 17 Nov 2017
Cited by 7 | Viewed by 3809
Abstract
Sandwich structures are very attractive due to their high strength at a minimum weight and, therefore, there has been a rapid increase in their applications. Nevertheless, these structures may present imperfect bonding or debonding between the skins and core as a result of manufacturing defects or impact loads, degrading their mechanical properties. To improve both the safety and functionality of these systems, structural damage assessment methodologies can be implemented. This article presents a damage assessment algorithm to localize and quantify debonds in sandwich panels. The proposed algorithm uses damage indices derived from the modal strain energy method and a linear approximation with a maximum entropy algorithm. Full-field vibration measurements of the panels were acquired using a high-speed 3D digital image correlation (DIC) system. Since the number of damage indices per panel is too large to be used directly in a regression algorithm, reprocessing of the data using principal component analysis (PCA) and kernel PCA has been performed. The results demonstrate that the proposed methodology accurately identifies debonding in composite panels.
(This article belongs to the Special Issue Entropy for Characterization of Uncertainty in Risk and Reliability)
Article
A New Stochastic Dominance Degree Based on Almost Stochastic Dominance and Its Application in Decision Making
by Yunna Wu, Xiaokun Sun, Hu Xu, Chuanbo Xu and Ruhang Xu
Entropy 2017, 19(11), 606; https://doi.org/10.3390/e19110606 - 17 Nov 2017
Cited by 1 | Viewed by 4509
Abstract
Traditional stochastic dominance rules are such strict, qualitative conditions that a stochastic dominance relation between two alternatives generally does not exist. To solve this problem, we first supplement the definitions of almost stochastic dominance (ASD). Then, we propose a new definition of stochastic dominance degree (SDD) based on the idea of ASD. The new definition takes both the objective mean and stakeholders' subjective preferences into account, and can measure both standard and almost stochastic dominance degrees. It contains four kinds of SDD corresponding to different stakeholders (rational investors, risk averters, risk seekers, and prospect investors). The operator in the definition can also be changed to fit different circumstances. On the basis of the new SDD definition, we present a method to solve stochastic multiple-criteria decision-making problems. A numerical experiment shows that the new method produces a more accurate result according to the utility situations of stakeholders. Moreover, even when it is difficult to elicit the group utility distribution of stakeholders, or when the group utility distribution is ambiguous, the method can still rank alternatives.
(This article belongs to the Section Information Theory, Probability and Statistics)
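For orientation, the strict classical condition that the paper relaxes, first-order stochastic dominance, can be checked directly from empirical CDFs. A minimal sketch of that baseline check (not the proposed SDD measure):

```python
import numpy as np

def fsd_dominates(a, b):
    """True if sample `a` first-order stochastically dominates `b`:
    F_a(x) <= F_b(x) for all x, with strict inequality somewhere."""
    grid = np.union1d(a, b)
    fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, 5000)   # clearly shifted-up alternative
y = rng.normal(0.0, 1.0, 5000)
print(fsd_dominates(x, y), fsd_dominates(y, x))
```

Between realistic alternatives, such a relation rarely holds at every point, which is exactly the motivation for graded measures like ASD and the proposed SDD.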
Article
Information Fusion in a Multi-Source Incomplete Information System Based on Information Entropy
by Mengmeng Li and Xiaoyan Zhang
Entropy 2017, 19(11), 570; https://doi.org/10.3390/e19110570 - 17 Nov 2017
Cited by 20 | Viewed by 4142
Abstract
As we move into the information age, the amount of data in various fields has increased dramatically, and data sources have become increasingly widely distributed. The corresponding phenomenon of missing data is increasingly common, and it leads to the generation of incomplete multi-source information systems. In this context, this paper's proposal aims to address the limitations of rough set theory. We study the method of multi-source fusion in incomplete multi-source systems. This paper presents a method for fusing incomplete multi-source systems based on information entropy; in particular, by comparison with another method, our fusion method is validated. Furthermore, extensive experiments are conducted on six UCI data sets to verify the performance of the proposed method. Additionally, the experimental results indicate that multi-source information fusion approaches significantly outperform other approaches to fusion.
Article
Minimax Estimation of Quantum States Based on the Latent Information Priors
by Takayuki Koyama, Takeru Matsuda and Fumiyasu Komaki
Entropy 2017, 19(11), 618; https://doi.org/10.3390/e19110618 - 16 Nov 2017
Cited by 5 | Viewed by 3312
Abstract
We develop priors for Bayes estimation of quantum states that provide minimax state estimation. The relative entropy from the true density operator to a predictive density operator is adopted as a loss function. The proposed prior maximizes the conditional Holevo mutual information, and it is a quantum version of the latent information prior in classical statistics. For the one-qubit system, we provide a class of measurements that is optimal from the viewpoint of minimax state estimation.
(This article belongs to the Special Issue Transfer Entropy II)
Article
On Lower Bounds for Statistical Learning Theory
by Po-Ling Loh
Entropy 2017, 19(11), 617; https://doi.org/10.3390/e19110617 - 15 Nov 2017
Cited by 8 | Viewed by 7071
Abstract
In recent years, tools from information theory have played an increasingly prevalent role in statistical machine learning. In addition to developing efficient, computationally feasible algorithms for analyzing complex datasets, it is of theoretical importance to determine whether such algorithms are "optimal" in the sense that no other algorithm can lead to smaller statistical error. This paper provides a survey of various techniques used to derive information-theoretic lower bounds for estimation and learning. We focus on the settings of parameter and function estimation, community recovery, and online learning for multi-armed bandits. A common theme is that lower bounds are established by relating the statistical learning problem to a channel decoding problem, for which lower bounds may be derived involving information-theoretic quantities such as the mutual information, total variation distance, and Kullback–Leibler divergence. We close by discussing the use of information-theoretic quantities to measure independence in machine learning applications ranging from causality to medical imaging, and mention techniques for estimating these quantities efficiently in a data-driven manner.
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
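As a pointer to the key step in the channel-decoding reduction mentioned above, such lower bounds typically follow from Fano's inequality; for testing M equiprobable hypotheses through a Markov chain V -> X -> V-hat, it reads (standard form, stated here for convenience):

```latex
% Fano's inequality: the error probability of any estimator \hat{V}
% of V from X is bounded below in terms of the mutual information.
P_e \;\ge\; 1 - \frac{I(V;X) + \log 2}{\log M}
```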
Article
Effects of Endwall Fillet and Bulb on the Temperature Uniformity of Pin-Fined Microchannel
by Zhiliang Pan, Ping Li, Jinxing Li and Yanping Li
Entropy 2017, 19(11), 616; https://doi.org/10.3390/e19110616 - 15 Nov 2017
Cited by 6 | Viewed by 4112
Abstract
Endwall fillet and bulb structures are proposed in this research to improve the temperature uniformity of pin-finned microchannels. The periodic laminar flow and heat transfer performances are investigated under different Reynolds numbers and different radii of the fillet and bulb. The results show that, at a low Reynolds number, both the fillet and the bulb structures strengthen the span-wise and normal secondary flow in the channel, eliminate the high-temperature area in the pin-fin, improve the heat transfer performance at the rear of the cylinder, and enhance the thermal uniformity of the pin-fin surface and the outside wall. Compared to traditional pin-finned microchannels, the flow resistance coefficient f does not increase significantly with fillets, or with bulbs of 2 μm or 5 μm radius, while f increases notably with 10 μm or 15 μm bulbs. Moreover, Nu increases by at most 16.93% with fillets and 20.65% with bulbs, and the synthetic thermal performance coefficient TP increases by at most 16.22% with fillets and 15.67% with bulbs. Finally, as the Reynolds number increases, the heat transfer improvement from the fillet and bulb decreases.
Article
Random Walk Null Models for Time Series Data
by Daryl DeFord and Katherine Moore
Entropy 2017, 19(11), 615; https://doi.org/10.3390/e19110615 - 15 Nov 2017
Cited by 4 | Viewed by 4932
Abstract
Permutation entropy has become a standard tool for time series analysis that exploits the temporal and ordinal relationships within data. Motivated by a Kullback–Leibler divergence interpretation of permutation entropy as divergence from white noise, we extend pattern-based methods to the setting of random walk data. We analyze random walk null models for correlated time series and describe a method for determining the corresponding ordinal pattern distributions. These null models more accurately reflect the observed pattern distributions in some economic data. This leads us to define a measure of complexity using the deviation of a time series from an associated random walk null model. We demonstrate the applicability of our methods using empirical data drawn from a variety of fields, including stock market closing prices.
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
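For readers who want the baseline measure in hand, the Bandt–Pompe permutation entropy fits in a few lines; the two calls below also illustrate why a random walk deviates from the white-noise baseline (monotone ordinal patterns become more likely). This is the standard measure, not the paper's new deviation-based complexity:

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, normalize=True):
    """Shannon entropy of the distribution of ordinal patterns of length
    `order` in the series x (Bandt & Pompe)."""
    x = np.asarray(x, float)
    n = len(x) - order + 1
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        counts[tuple(np.argsort(x[i:i + order]))] += 1
    p = np.array([c for c in counts.values() if c > 0], float) / n
    h = -np.sum(p * np.log(p))
    return h / np.log(factorial(order)) if normalize else h

rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(10_000)))             # ~1 for white noise
print(permutation_entropy(np.cumsum(rng.standard_normal(10_000))))  # lower for a random walk
```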
Article
How to Identify the Most Powerful Node in Complex Networks? A Novel Entropy Centrality Approach
by Tong Qiao, Wei Shan and Chang Zhou
Entropy 2017, 19(11), 614; https://doi.org/10.3390/e19110614 - 15 Nov 2017
Cited by 48 | Viewed by 7033
Abstract
Centrality is one of the most studied concepts in network analysis. Although an abundance of methods for measuring centrality in social networks has been proposed, each approach exclusively characterizes limited parts of what it implies for an actor to be "vital" to the network. In this paper, a novel mechanism is proposed to quantitatively measure centrality using a re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes. By design, the re-defined entropy centrality, which describes associations among node pairs and captures the process of influence propagation, can be interpreted as a measure of actor potential for communication activity. We evaluate the efficiency of the proposed model using four real-world datasets with varied sizes and densities and three artificial networks constructed by the Barabási–Albert, Erdős–Rényi and Watts–Strogatz models. The four datasets are Zachary's karate club, USAir97, a collaboration network and the Email network URV. Extensive experimental results prove the effectiveness of the proposed method.
(This article belongs to the Section Complexity)
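As a rough illustration of entropy-based centrality (a generic neighbor-entropy proxy of our own, not the re-defined model of the paper), one can score each node by the Shannon entropy of a probability distribution placed on its neighborhood:

```python
import numpy as np
import networkx as nx

def local_entropy_centrality(G):
    """Hypothetical entropy-based centrality: for each node, the Shannon
    entropy of a degree-weighted distribution over its neighbors."""
    scores = {}
    for v in G:
        degs = np.array([G.degree(u) for u in G[v]], float)
        if degs.size == 0:
            scores[v] = 0.0
            continue
        p = degs / degs.sum()
        scores[v] = float(-np.sum(p * np.log(p)))
    return scores

G = nx.karate_club_graph()   # Zachary's karate club, one of the paper's datasets
c = local_entropy_centrality(G)
print(sorted(c, key=c.get, reverse=True)[:5])   # top nodes under this proxy
```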
Review
Entropy Applications to Water Monitoring Network Design: A Review
by Jongho Keum, Kurt C. Kornelsen, James M. Leach and Paulin Coulibaly
Entropy 2017, 19(11), 613; https://doi.org/10.3390/e19110613 - 15 Nov 2017
Cited by 46 | Viewed by 5833
Abstract
Having reliable water monitoring networks is an essential component of water resources and environmental management. A standardized process for the design of water monitoring networks does not exist, with the exception of the World Meteorological Organization (WMO) general guidelines on minimum network density. While one of the major challenges in the design of optimal hydrometric networks has been establishing design objectives, information theory has been successfully applied to network design problems by providing measures of the information content that a station or a network can deliver. This review first summarizes the common entropy terms that have been used in water monitoring network designs. Then, it covers recent applications of the entropy concept to water monitoring network design, categorized into (1) precipitation; (2) streamflow and water level; (3) water quality; and (4) soil moisture and groundwater networks. The integrated design method for multivariate monitoring networks is also covered. Despite several open issues, entropy theory has proven well suited to water monitoring network design; however, further work is still required to provide design standards and guidelines for operational use.
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
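For readers new to this area, the entropy terms that recur throughout such designs are the marginal entropy, joint entropy, and transinformation of station records X and Y (standard definitions, summarized here for convenience):

```latex
H(X)   = -\sum_{x} p(x)\,\log p(x)            % information content of one station
H(X,Y) = -\sum_{x,y} p(x,y)\,\log p(x,y)      % joint content of two stations
T(X,Y) = H(X) + H(Y) - H(X,Y)                 % transinformation: redundancy between stations
```

Network design then typically seeks station sets that maximize joint information while minimizing transinformation (redundancy) among selected stations.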
Article
Maximum Entropy-Copula Method for Hydrological Risk Analysis under Uncertainty: A Case Study on the Loess Plateau, China
by Aijun Guo, Jianxia Chang, Yimin Wang, Qiang Huang and Zhihui Guo
Entropy 2017, 19(11), 609; https://doi.org/10.3390/e19110609 - 15 Nov 2017
Cited by 16 | Viewed by 5593
Abstract
Copula functions have been extensively used to describe the joint behaviors of extreme hydrological events and to analyze hydrological risk. Advanced marginal distribution inference, for example via maximum entropy theory, is particularly beneficial for improving the performance of the copulas. The goal of this paper is therefore twofold: first, to develop a coupled maximum entropy-copula method for hydrological risk analysis by deriving the bivariate return periods, risk, reliability and bivariate design events; and second, to reveal the impact of marginal distribution selection uncertainty and sampling uncertainty on bivariate design event identification. The uncertainties involved in the second goal have not yet received significant consideration. The designed framework for hydrological risk analysis related to flood and extreme precipitation events is applied, as an example, in two catchments of the Loess Plateau, China. Results show that (1) the distribution derived from the maximum entropy principle outperforms the conventional distributions for the probabilistic modeling of flood and extreme precipitation events; (2) the bivariate return periods, risk, reliability and bivariate design events can be derived using the coupled entropy-copula method; and (3) uncertainty analysis highlights that the appropriate performance of the marginal distributions is closely tied to bivariate design event identification. Most importantly, sampling uncertainty causes the confidence regions of bivariate design events with return periods of 30 years to be very large, overlapping with the values of flood and extreme precipitation events that have return periods of 10 and 50 years, respectively. The large confidence regions of bivariate design events greatly challenge their application in practical engineering design.
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
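As background, bivariate return periods in this literature are commonly defined through the copula C with margins u = F_X(x), v = F_Y(y) and mean inter-arrival time μ between events; the following are the standard OR/AND forms from the copula literature (the paper's exact design-event formulation may differ):

```latex
% "OR" event (X > x or Y > y) and "AND" event (X > x and Y > y):
T_{\mathrm{OR}}(x,y)  = \frac{\mu}{1 - C(u,v)}, \qquad
T_{\mathrm{AND}}(x,y) = \frac{\mu}{1 - u - v + C(u,v)}
```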
Article
Multilevel Coding for the Full-Duplex Decode-Compress-Forward Relay Channel
by Ahmed Attia Abotabl and Aria Nosratinia
Entropy 2017, 19(11), 611; https://doi.org/10.3390/e19110611 - 14 Nov 2017
Cited by 51 | Viewed by 3770
Abstract
Decode-Compress-Forward (DCF) is a generalization of Decode-Forward (DF) and Compress-Forward (CF). This paper investigates conditions under which DCF offers gains over DF and CF, addresses the problem of coded modulation for DCF, and evaluates the performance of DCF coded modulation implemented via low-density parity-check (LDPC) codes and polar codes. We begin by revisiting the achievable rate of DCF in discrete memoryless channels under backward decoding. We then study coded modulation for decode-compress-forward via multilevel coding. We show that the proposed multilevel coding approaches the known achievable rates of DCF. The proposed multilevel coding is implemented (and its performance verified) via a combination of standard DVB-S2 LDPC codes and polar codes whose design follows the method of Blasco-Serrano.
(This article belongs to the Special Issue Multiuser Information Theory)
Article
Capacity Bounds on the Downlink of Symmetric, Multi-Relay, Single-Receiver C-RAN Networks
by Shirin Saeedi Bidokhti, Gerhard Kramer and Shlomo Shamai
Entropy 2017, 19(11), 610; https://doi.org/10.3390/e19110610 - 14 Nov 2017
Cited by 19 | Viewed by 4216
Abstract
The downlink of symmetric Cloud Radio Access Networks (C-RANs) with multiple relays and a single receiver is studied. Lower and upper bounds are derived on the capacity. The lower bound is achieved by Marton's coding, which facilitates dependence among the multiple-access channel inputs. The upper bound uses Ozarow's technique to augment the system with an auxiliary random variable. The bounds are studied over scalar Gaussian C-RANs and are shown to meet and characterize the capacity for interesting regimes of operation.
(This article belongs to the Special Issue Network Information Theory)
Article
Robust and Sparse Regression via γ-Divergence
by Takayuki Kawashima and Hironori Fujisawa
Entropy 2017, 19(11), 608; https://doi.org/10.3390/e19110608 - 13 Nov 2017
Cited by 20 | Viewed by 5358
Abstract
For high-dimensional data, many sparse regression methods have been proposed; however, they may not be robust against outliers. Recently, the use of the density power weight has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the γ-divergence, and the robust estimator based on the γ-divergence is known to have strong robustness. In this paper, we extend the γ-divergence to the regression problem, consider robust and sparse regression based on the γ-divergence, and show that it retains strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the γ-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm with a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with $L_1$ regularization in detail. In numerical experiments and real data analyses, we see that the proposed method outperforms past robust and sparse methods.
Article
On the Uniqueness Theorem for Pseudo-Additive Entropies
by Petr Jizba and Jan Korbel
Entropy 2017, 19(11), 605; https://doi.org/10.3390/e19110605 - 12 Nov 2017
Cited by 10 | Viewed by 4220
Abstract
The aim of this paper is to show that the Tsallis-type (q-additive) entropic chain rule allows for a wider class of entropic functionals than previously thought. In particular, we point out that the ensuing entropy solutions (e.g., Tsallis entropy) can be determined uniquely only when one fixes the prescription for handling conditional entropies. By using the concept of Kolmogorov–Nagumo quasi-linear means, we prove this with the help of Daróczy's mapping theorem. Our point is further illustrated with a number of explicit examples. Other salient issues, such as connections of conditional entropies with the de Finetti–Kolmogorov theorem for escort distributions and with Landsberg's classification of non-extensive thermodynamic systems, are also briefly discussed.
(This article belongs to the Special Issue Selected Papers from 14th Joint European Thermodynamics Conference)
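For reference, the Tsallis entropy and the q-additive (pseudo-additive) composition rule at issue take the standard forms:

```latex
S_q(p) = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad \lim_{q \to 1} S_q(p) = -\sum_i p_i \log p_i

% q-additive chain rule for independent subsystems A and B:
S_q(A + B) = S_q(A) + S_q(B) + (1 - q)\, S_q(A)\, S_q(B)
```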
Article
Rate Distortion Functions and Rate Distortion Function Lower Bounds for Real-World Sources
by Jerry Gibson
Entropy 2017, 19(11), 604; https://doi.org/10.3390/e19110604 - 11 Nov 2017
Cited by 13 | Viewed by 6845
Abstract
Although Shannon introduced the concept of a rate distortion function in 1948, only in the last decade has the methodology for developing rate distortion function lower bounds for real-world sources been established. However, these recent results have not been fully exploited due to some confusion about how these new rate distortion bounds, once obtained, should be interpreted and used in source codec performance analysis and design. We present the relevant rate distortion theory and show how this theory can be used for practical codec design and performance prediction and evaluation. Examples for speech and video indicate exactly how the new rate distortion functions can be calculated, interpreted, and extended. These examples illustrate the interplay between source models for rate distortion theoretic studies and the source models underlying video and speech codec design. Key concepts include the development of composite source models per source realization and the application of conditional rate distortion theory.
(This article belongs to the Special Issue Rate-Distortion Theory and Information Theory)
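A canonical worked example of such a function, included here for orientation rather than taken from the paper, is the rate distortion function of a memoryless Gaussian source under mean-squared-error distortion:

```latex
% Rate distortion function of a memoryless Gaussian source N(0, \sigma^2)
% under mean-squared-error distortion D (in bits per sample):
R(D) = \max\left\{ 0,\ \tfrac{1}{2} \log_2 \frac{\sigma^2}{D} \right\}
```

Real-world sources are not memoryless Gaussian, which is precisely why the composite and conditional source-modeling machinery discussed in the paper is needed.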
Article
Thermodynamics, Statistical Mechanics and Entropy
by Robert H. Swendsen
Entropy 2017, 19(11), 603; https://doi.org/10.3390/e19110603 - 10 Nov 2017
Cited by 23 | Viewed by 9488
Abstract
The proper definition of thermodynamics and the thermodynamic entropy is discussed in the light of recent developments. The postulates for thermodynamics are examined critically, and some modifications are suggested to allow for the inclusion of long-range forces (within a system), inhomogeneous systems with non-extensive entropy, and systems that can have negative temperatures. Only the thermodynamics of finite systems are considered, with the condition that the system is large enough for the fluctuations to be smaller than the experimental resolution. The statistical basis for thermodynamics is discussed, along with four different forms of the (classical and quantum) entropy. The strengths and weaknesses of each are evaluated in relation to the requirements of thermodynamics. Effects of order $1/N$, where $N$ is the number of particles, are included in the discussion because they have played a significant role in the literature, even if they are too small to have a measurable effect in an experiment. The discussion includes the role of discreteness, the non-zero width of the energy and particle number distributions, the extensivity of models with non-interacting particles, and the concavity of the entropy with respect to energy. The results demonstrate the validity of negative temperatures.
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines)