
Table of Contents

Entropy, Volume 22, Issue 1 (January 2020) – 127 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Optimal lossy data compression minimizes the storage cost of a data set X while retaining a given [...]
Open Access Article
Eigenvalues of Two-State Quantum Walks Induced by the Hadamard Walk
Entropy 2020, 22(1), 127; https://doi.org/10.3390/e22010127 - 20 Jan 2020
Abstract
The existence of eigenvalues of discrete-time quantum walks is deeply related to the localization of the walks. We reveal, for the first time, the distributions of the eigenvalues, obtained by the split generating function (SGF) method, of the one-dimensional space-inhomogeneous quantum walks treated in our previous studies. In particular, we clarify the characteristic parameter dependence of the eigenvalue distributions with the aid of numerical simulation. Full article
(This article belongs to the Special Issue Quantum Walks and Related Issues)
Open Access Article
A Standardized Project Gutenberg Corpus for Statistical Analysis of Natural Language and Quantitative Linguistics
Entropy 2020, 22(1), 126; https://doi.org/10.3390/e22010126 - 20 Jan 2020
Abstract
The use of Project Gutenberg (PG) as a text corpus has been extremely popular in the statistical analysis of language for more than 25 years. However, in contrast to other major linguistic datasets of similar importance, no consensual full version of PG exists to date. In fact, most PG studies so far either consider only a small number of manually selected books, leading to potentially biased subsets, or employ vastly different pre-processing strategies (often specified in insufficient detail), raising concerns regarding the reproducibility of published results. In order to address these shortcomings, here we present the Standardized Project Gutenberg Corpus (SPGC), an open science approach to a curated version of the complete PG data containing more than 50,000 books and more than 3 × 10⁹ word-tokens. Using different sources of annotated metadata, we not only provide a broad characterization of the content of PG, but also show different examples highlighting the potential of SPGC for investigating language variability across time, subjects, and authors. We publish our methodology in detail, the code to download and process the data, as well as the obtained corpus itself on three different levels of granularity (raw text, time series of word tokens, and counts of words). In this way, we provide a reproducible, pre-processed, full-size version of Project Gutenberg as a new scientific resource for corpus linguistics, natural language processing, and information retrieval. Full article
(This article belongs to the Special Issue Information Theory and Language)
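The word-count granularity mentioned in the abstract hinges on pre-processing choices. As a hedged illustration (not the SPGC's actual pipeline, whose code is published separately), one simple choice of tokenization might look like:

```python
import re
from collections import Counter

def word_counts(text):
    """One possible pre-processing choice: lowercase the text and
    count maximal runs of ASCII letters as word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

counts = word_counts("The cat saw the other cat.")
# counts["the"] and counts["cat"] are both 2
```

Any real standardization effort has to pin down exactly such choices (Unicode handling, hyphenation, casing), which is the reproducibility point the abstract makes.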
Open Access Article
Low-Complexity Rate-Distortion Optimization of Sampling Rate and Bit-Depth for Compressed Sensing of Images
Entropy 2020, 22(1), 125; https://doi.org/10.3390/e22010125 - 20 Jan 2020
Abstract
Compressed sensing (CS) offers a framework for image acquisition with excellent potential in image sampling and compression applications due to its sub-Nyquist sampling rate and low complexity. In engineering practice, the resulting CS samples are quantized to finite bits for transmission. In circumstances where the bit budget for image transmission is constrained, knowing how to choose the sampling rate and the number of bits per measurement (bit-depth) is essential for the quality of CS reconstruction. In this paper, we first present a bit-rate model that considers the compression performance of CS, quantization, and the entropy coder. The bit-rate model reveals the relationship between bit rate, sampling rate, and bit-depth. Then, we propose a relative peak signal-to-noise ratio (PSNR) model for evaluating distortion, which reveals the relationship between relative PSNR, sampling rate, and bit-depth. Finally, the optimal sampling rate and bit-depth are determined based on rate-distortion (RD) criteria using the bit-rate model and the relative PSNR model. The experimental results show that the actual bit rate obtained with the optimized sampling rate and bit-depth is very close to the target bit rate. Compared with the traditional CS coding method with a fixed sampling rate, the proposed method provides better rate-distortion performance, and the additional computational cost amounts to less than 1%. Full article
(This article belongs to the Section Signal and Data Analysis)
Open Access Editorial
Entropy 2019 Best Paper Award
Entropy 2020, 22(1), 124; https://doi.org/10.3390/e22010124 - 20 Jan 2020
Abstract
On behalf of the Editor-in-Chief, Prof [...] Full article
Open Access Article
Analyzing Uncertainty in Complex Socio-Ecological Networks
Entropy 2020, 22(1), 123; https://doi.org/10.3390/e22010123 - 19 Jan 2020
Abstract
Socio-ecological systems are recognized as complex adaptive systems whose multiple interactions might change in response to external or internal changes. Due to this complexity, the behavior of the system is often uncertain. Bayesian networks provide a sound approach for handling complex domains endowed with uncertainty. The aim of this paper is to analyze the impact of the Bayesian network structure on the uncertainty of the model, expressed as the Shannon entropy. In particular, three strategies for model structure have been followed: naive Bayes (NB), tree augmented network (TAN), and a network with unrestricted structure (GSS). Using these network structures, two experiments are carried out: (1) the impact of the Bayesian network structure on the entropy of the model is assessed, and (2) the entropy of the posterior distribution of the class variable obtained from the different structures is compared. The results show that GSS consistently outperforms both NB and TAN when it comes to evaluating the uncertainty of the entire model. On the other hand, NB and TAN yielded lower entropy values for the posterior distribution of the class variable, which makes them preferable when the goal is to carry out predictions. Full article
(This article belongs to the Special Issue Computation in Complex Networks)
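The entropy comparison described above can be illustrated with a minimal sketch (a hypothetical toy model, not the authors' code): the Shannon entropy of the class posterior in a naive Bayes network with one binary feature.

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def nb_class_posterior(prior, cpts, observation):
    """Class posterior in a naive Bayes network:
    P(c | x) is proportional to P(c) * prod_i P(x_i | c)."""
    scores = [pr * math.prod(cpts[i][c][x] for i, x in enumerate(observation))
              for c, pr in enumerate(prior)]
    z = sum(scores)
    return [s / z for s in scores]

# Hypothetical two-class model with one binary feature.
prior = [0.5, 0.5]
cpts = [[[0.9, 0.1], [0.1, 0.9]]]   # cpts[feature][class][value]
posterior = nb_class_posterior(prior, cpts, [0])  # → [0.9, 0.1]
```

A lower posterior entropy means a more confident prediction, which is the sense in which NB and TAN come out preferable for prediction in the abstract.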
Open Access Article
Stabilization of Port Hamiltonian Chaotic Systems with Hidden Attractors by Adaptive Terminal Sliding Mode Control
Entropy 2020, 22(1), 122; https://doi.org/10.3390/e22010122 - 19 Jan 2020
Abstract
In this study, the design of an adaptive terminal sliding mode controller for the stabilization of port Hamiltonian chaotic systems with hidden attractors is proposed. The study begins with the design methodology of a chaotic oscillator with a hidden attractor, implementing a topological framework for its design. With this technique it is possible to design a 2-D chaotic oscillator, which is then converted into port-Hamiltonian form in order to track and analyze these models for the stabilization of the hidden chaotic attractors revealed by this analysis. Adaptive terminal sliding mode controllers (ATSMC) are built when a Hamiltonian system exhibits chaotic behavior and a hidden attractor is detected. A Lyapunov approach is used to formulate the adaptive controller by creating a control law and an adaptive law, which are used online to stabilize the system states while simultaneously suppressing the chaotic behavior. The empirical tests, from which the discussion and conclusions of this study are drawn, verify the theoretical findings. Full article
Open Access Article
Linear Programming and Fuzzy Optimization to Substantiate Investment Decisions in Tangible Assets
Entropy 2020, 22(1), 121; https://doi.org/10.3390/e22010121 - 19 Jan 2020
Abstract
This paper studies the problem of tangible asset acquisition within a company by proposing a new hybrid model that uses linear programming and fuzzy numbers. Regarding linear programming, two methods were implemented in the model, namely the graphical method and the primal simplex algorithm. This hybrid model is proposed for solving investment decision problems, based on decision variables, objective function coefficients, and a matrix of constraints, all of them presented in the form of triangular fuzzy numbers. Solving the primal simplex algorithm using fuzzy numbers and coefficients allowed the results of the linear programming problem to also be expressed as fuzzy variables. Fuzzy variables, compared with crisp variables, allow the determination of optimal intervals over which the objective function takes values depending on the fuzzy variables. The major advantage of this model is that the results are presented as value ranges that inform the decision-making process. Thus, the company's decision makers can select any of the result values, as they satisfy two basic requirements, namely minimizing/maximizing the objective function and satisfying the basic constraints resulting from the company's activity. The paper is accompanied by a practical example. Full article
Open Access Article
Generalized Independence in the q-Voter Model: How Do Parameters Influence the Phase Transition?
Entropy 2020, 22(1), 120; https://doi.org/10.3390/e22010120 - 19 Jan 2020
Abstract
We study the q-voter model with flexibility, which allows for describing a broad spectrum of independence, from zealots, inflexibility, or stubbornness, through noisy voters, to self-anticonformity. Analyzing the model within the pair approximation allows us to derive an analytical formula for the critical point, below which an ordered (agreement) phase is stable. We determine the role of flexibility, which can be understood as the amount of variability associated with independent behavior, as well as the role of the average network degree in shaping the character of the phase transition. We check the existence of the scaling relation that was previously derived for the Sznajd model. We show that the scaling is universal, in the sense that it depends neither on the size of the group of influence nor on the average network degree. Analyzing the model in terms of the rescaled parameter, we determine the critical point, the jump of the order parameter, and the width of the hysteresis as a function of the average network degree k and the size of the group of influence q. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex Systems)
Open Access Article
Activeness and Loyalty Analysis in Event-Based Social Networks
Entropy 2020, 22(1), 119; https://doi.org/10.3390/e22010119 - 18 Jan 2020
Abstract
Event-based social networks (EBSNs) are widely used to create online social groups and organize offline events for users. Activeness and loyalty are crucial characteristics of these online social groups in terms of determining their growth or inactiveness in a specific time frame. However, there is little research on these concepts to clarify the existence of groups in event-based social networks. In this paper, we study the problem of group activeness and user loyalty to provide a novel insight into online social networks. First, we analyze the structure of EBSNs and generate features from the crawled datasets. Second, we define the concepts of group activeness and user loyalty based on a series of time windows, and propose a method to measure the group activeness. In this proposed method, we first compute the ratio of the number of events between two consecutive time windows. We then develop an association matrix to assign the activeness label to each group after several consecutive time windows. Similarly, we measure user loyalty in terms of attended events gathered in time windows and treat loyalty as a contributive feature of the group activeness. Finally, three well-known machine learning techniques are used to verify the activeness label and to generate features for each group. As a consequence, we also find a small set of features that are highly correlated and result in higher accuracy compared to the whole feature set. Full article
(This article belongs to the Special Issue Social Networks and Information Diffusion II)
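The window-ratio step of the activeness measure can be sketched as follows (a minimal illustration over assumed per-window counts, not the authors' implementation):

```python
def activeness_ratios(event_counts):
    """Ratio of event counts between consecutive time windows;
    values above 1 indicate a growing group, below 1 a declining one."""
    return [curr / prev if prev else float("inf")
            for prev, curr in zip(event_counts, event_counts[1:])]

# Hypothetical per-window event counts for one group.
ratios = activeness_ratios([10, 20, 10, 5])  # → [2.0, 0.5, 0.5]
```

The paper then maps sequences of such ratios over several consecutive windows to an activeness label via an association matrix.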
Open Access Article
Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter
Entropy 2020, 22(1), 118; https://doi.org/10.3390/e22010118 - 18 Jan 2020
Abstract
Image fusion is a very practical technology that can be applied in many fields, such as medicine, remote sensing, and surveillance. An image fusion method using multi-scale decomposition and joint sparse representation is introduced in this paper. First, joint sparse representation is applied to decompose two source images into a common image and two innovation images. Second, two initial weight maps are generated by filtering the two source images separately. Final weight maps are obtained by joint bilateral filtering according to the initial weight maps. Then, the multi-scale decomposition of the innovation images is performed through the rolling guidance filter. Finally, the final weight maps are used to generate the fused innovation image. The fused innovation image and the common image are combined to generate the ultimate fused image. The experimental results show that our method's average metrics are: mutual information (MI) 5.3377, feature mutual information (FMI) 0.5600, normalized weighted edge preservation value (Q^(AB/F)) 0.6978, and nonlinear correlation information entropy (NCIE) 0.8226. Our method achieves better performance than state-of-the-art methods in visual perception and objective quantification. Full article
(This article belongs to the Special Issue Entropy-Based Algorithms for Signal Processing)
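Of the quality metrics reported above, mutual information is straightforward to estimate from a joint histogram (a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-estimated mutual information (bits) between two
    equally sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())
```

For fusion, MI is typically computed between the fused image and each source image and summed; an image paired with itself gives MI equal to its own (binned) entropy.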
Open Access Article
Chemical Reaction Networks Possess Intrinsic, Temperature-Dependent Functionality
Entropy 2020, 22(1), 117; https://doi.org/10.3390/e22010117 - 18 Jan 2020
Abstract
Temperature influences the life of many organisms in various ways. A great number of organisms live under conditions where their ability to adapt to changes in temperature can be vital and largely determines their fitness. Understanding the mechanisms and principles underlying this ability to adapt can be of great advantage, for example, to improve growth conditions for crops and increase their yield. In times of imminent, increasing climate change, this becomes even more important in order to find strategies and help crops cope with these fundamental changes. There is intense research in the field of acclimation that comprises fluctuations of various environmental conditions, but most acclimation research focuses on regulatory effects and the observation of gene expression changes within the examined organism. As thermodynamic effects are a direct consequence of temperature changes, these should necessarily be considered in this field of research but are often neglected. Additionally, compensated effects might be missed even though they are equally important for the organism, since they do not cause observable changes, but rather counteract them. In this work, using a systems biology approach, we demonstrate that even simple network motifs can exhibit temperature-dependent functional features resulting from the interplay of network structure and the distribution of activation energies over the involved reactions. The demonstrated functional features are (i) the reversal of fluxes within a linear pathway, (ii) a thermo-selective branched pathway with different flux modes and (iii) the increased flux towards carbohydrates in a minimal Calvin cycle that was designed to demonstrate temperature compensation within reaction networks. Comparing a system's response to either temperature changes or changes in enzyme activity, we also dissect the influence of thermodynamic changes versus genetic regulation. By this, we expand the scope of thermodynamic modelling of biochemical processes by addressing further possibilities and effects, following established mathematical descriptions of biophysical properties. Full article
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
Open Access Article
Permutation Entropy and Statistical Complexity in Mild Cognitive Impairment and Alzheimer’s Disease: An Analysis Based on Frequency Bands
Entropy 2020, 22(1), 116; https://doi.org/10.3390/e22010116 - 18 Jan 2020
Abstract
We present one of the first applications of Permutation Entropy (PE) and Statistical Complexity (SC) (measured as the product of PE and the Jensen-Shannon divergence) to magnetoencephalography (MEG) recordings of 46 subjects suffering from Mild Cognitive Impairment (MCI), 17 individuals diagnosed with Alzheimer's Disease (AD), and 48 healthy controls. We studied the differences in PE and SC in broadband signals and their decomposition into frequency bands (δ, θ, α, and β), considering two modalities: (i) raw time series obtained from the magnetometers and (ii) a reconstruction into cortical sources or regions of interest (ROIs). We conducted our analyses at three levels: (i) at the group level, we compared SC in each frequency band and modality between groups; (ii) at the individual level, we compared how the [PE, SC] plane differs in each modality; and (iii) at the local level, we explored differences in scalp and cortical space. We recovered classical results that considered only broadband signals and found a nontrivial pattern of alterations in each frequency band, showing that SC does not necessarily decrease in AD or MCI. Full article
(This article belongs to the Special Issue Permutation Entropy: Theory and Applications)
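The two quantities above can be illustrated with a compact sketch of their standard definitions (a generic implementation normalized by log d!, not the authors' code):

```python
import math
from collections import Counter
from itertools import permutations

def ordinal_counts(x, order=3):
    """Count the ordinal (permutation) patterns of consecutive windows."""
    return Counter(
        tuple(sorted(range(order), key=lambda k: x[i + k]))
        for i in range(len(x) - order + 1)
    )

def permutation_entropy(x, order=3):
    """Normalized permutation entropy in [0, 1]."""
    counts = ordinal_counts(x, order)
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

def statistical_complexity(x, order=3):
    """PE times the Jensen-Shannon divergence between the pattern
    distribution and the uniform one, normalized by its maximum."""
    n = math.factorial(order)
    counts = ordinal_counts(x, order)
    total = sum(counts.values())
    p = [counts.get(pat, 0) / total for pat in permutations(range(order))]
    u = [1.0 / n] * n
    H = lambda d: -sum(q * math.log(q) for q in d if q > 0)
    jsd = H([(a + b) / 2 for a, b in zip(p, u)]) - (H(p) + H(u)) / 2
    # closed-form maximum of the JSD against the uniform distribution
    jsd_max = -0.5 * (((n + 1) / n) * math.log(n + 1)
                      - 2 * math.log(2 * n) + math.log(n))
    return permutation_entropy(x, order) * jsd / jsd_max
```

A monotone series has a single ordinal pattern, so both PE and SC vanish; fully random series drive PE toward 1 while SC again goes to 0, which is why the [PE, SC] plane separates signal classes.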
Open Access Article
Association between Mean Heart Rate and Recurrence Quantification Analysis of Heart Rate Variability in End-Stage Renal Disease
Entropy 2020, 22(1), 114; https://doi.org/10.3390/e22010114 - 18 Jan 2020
Abstract
Linear heart rate variability (HRV) indices are dependent on the mean heart rate, which has been demonstrated in different models (from sinoatrial cells to humans). The association between nonlinear HRV indices, including those provided by recurrence plot quantitative analysis (RQA), and the mean heart rate (or the mean cardiac period, also called meanNN) has been scarcely studied. For this purpose, we analyzed RQA indices of five-minute-long HRV time series obtained in the supine position and during active standing from 30 healthy subjects and 29 end-stage renal disease (ESRD) patients (before and after hemodialysis). In the supine position, ESRD patients showed a shorter meanNN (i.e., a faster heart rate) and decreased variability compared to healthy subjects. The healthy subjects responded to active standing by shortening the meanNN and decreasing HRV indices to reach values similar to those of ESRD patients. Bivariate correlations between all RQA indices and meanNN were significant in healthy subjects and in ESRD patients after hemodialysis, and for most RQA indices in ESRD patients before hemodialysis. Multiple linear regression analyses showed that RQA indices were also dependent on the position and the ESRD condition. Therefore, future studies should consider the association among RQA indices, meanNN, and these other factors for a correct interpretation of HRV. Full article
Open Access Article
Energy and Exergy Evaluation of a Two-Stage Axial Vapour Compressor on the LNG Carrier
Entropy 2020, 22(1), 115; https://doi.org/10.3390/e22010115 - 17 Jan 2020
Abstract
Data from a two-stage axial vapor cryogenic compressor on a dual-fuel diesel–electric (DFDE) liquefied natural gas (LNG) carrier were measured and analyzed to investigate compressor energy and exergy efficiency under real operating conditions. The running parameters of the two-stage compressor were collected while changing the rpm of the main propeller shafts. As the compressor's supply of vaporized gas to the main engines increases, so do the load and rpm of the propulsion electric motors, and vice versa. The results show that, as the main engine load varied from 46 to 56 rpm at the main propulsion shafts, the increased mass flow rate of vaporized LNG through the two-stage compressor influenced compressor performance. The compressor's average energy efficiency is around 50%, while its exergy efficiency is significantly lower over the whole measured range, on average around 34%. A change in the ambient temperature from 0 to 50 °C also influences the compressor's exergy efficiency: higher exergy efficiency is achieved at lower ambient temperatures. As temperature increases, overall compressor exergy efficiency decreases by about 7% on average over the whole analyzed range. A new concept for saving energy and increasing compressor efficiency, based on pre-cooling of the compressor's second stage, is also analyzed. The temperature at the second stage was varied in the range from 0 to −50 °C, which results in power savings of up to 26 kW for optimal running regimes. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
Open Access Article
Entropy-Based Effect Evaluation of Delineators in Tunnels on Drivers’ Gaze Behavior
Entropy 2020, 22(1), 113; https://doi.org/10.3390/e22010113 - 17 Jan 2020
Abstract
Driving safety in tunnels has always been an issue of great concern. Installing delineators to improve drivers' instantaneous cognition of the surrounding environment in tunnels can effectively enhance driver safety. Through a simulation study, this paper explored how delineators affect drivers' gaze behavior (including fixation and scanpath) in tunnels. In addition to analyzing typical parameters, such as fixation position and fixation duration in areas of interest (AOIs), this paper quantified the complexity of individual switching patterns between AOIs under different delineator configurations and road alignments by modeling drivers' switching process as a Markov chain and calculating the Shannon entropy of the fitted Markov model. A total of 25 subjects participated in this research. The results show that setting delineators in tunnels can attract drivers' attention and make them focus on the pavement. When driving in tunnels equipped with delineators, especially tunnels with both wall delineators and pavement delineators, the participants exhibited a smaller transition entropy H_t and stationary entropy H_s, which can greatly reduce drivers' visual fatigue. Compared with the left-curve and right-curve sections, participants obtained higher H_t and H_s values in the straight section. Full article
(This article belongs to the Section Multidisciplinary Applications)
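The two entropies can be sketched by fitting a first-order Markov chain to a sequence of AOI indices (an assumed minimal version; the function, variable names, and bit-based units are illustrative, not taken from the paper):

```python
import numpy as np

def gaze_entropies(aoi_seq, n_aoi):
    """Fit a first-order Markov chain to a sequence of AOI indices and
    return (transition entropy H_t, stationary entropy H_s) in bits.
    Assumes every AOI appears at least once as a transition source."""
    counts = np.zeros((n_aoi, n_aoi))
    for a, b in zip(aoi_seq, aoi_seq[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)  # row-stochastic matrix
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
    pi = pi / pi.sum()
    h = lambda p: -(p[p > 0] * np.log2(p[p > 0])).sum()
    H_s = h(pi)                                      # entropy of where gaze dwells
    H_t = float(sum(pi[i] * h(P[i]) for i in range(n_aoi)))  # entropy rate
    return H_t, H_s
```

H_t here is the entropy rate of the fitted chain, so lower values correspond to more regular AOI switching patterns, matching the interpretation in the abstract.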
Open Access Article
Fast and Efficient Image Encryption Algorithm Based on Modular Addition and SPD
Entropy 2020, 22(1), 112; https://doi.org/10.3390/e22010112 - 16 Jan 2020
Abstract
Bit-level and pixel-level methods are two classes of image encryption, describing the smallest processing elements manipulated in diffusion and permutation, respectively. Most pixel-level permutation methods merely alter the positions of pixels, resulting in similar histograms for the original and permuted images. Bit-level permutation methods, however, are able to change the histogram of the image, but are usually not preferred due to the time-consuming nature of bit-level computation, unlike other permutation techniques. In this paper, we introduce a new image encryption algorithm which uses binary bit-plane scrambling and an SPD diffusion technique for the bit-planes of a plain image, based on a card game trick. Integer values of the hexadecimal SHA-512 key are also used, along with adaptive block-based modular addition of pixels, to encrypt the images. To demonstrate the first-rate encryption performance of our proposed algorithm, security analyses are provided in this paper. Simulations and other results confirmed the robustness of the proposed image encryption algorithm against many well-known attacks; in particular, brute-force attacks, known/chosen plain text attacks, occlusion attacks, differential attacks, and gray value difference attacks, among others. Full article
(This article belongs to the Section Multidisciplinary Applications)
Open Access Article
Quantifying Athermality and Quantum Induced Deviations from Classical Fluctuation Relations
Entropy 2020, 22(1), 111; https://doi.org/10.3390/e22010111 - 16 Jan 2020
Abstract
In recent years, a quantum information theoretic framework has emerged for incorporating non-classical phenomena into fluctuation relations. Here, we elucidate this framework by exploring deviations from classical fluctuation relations resulting from the athermality of the initial thermal system and the quantum coherence of the system's energy supply. In particular, we develop Crooks-like equalities for an oscillator system which is prepared in either photon-added or photon-subtracted thermal states, and derive a Jarzynski-like equality for average work extraction. We use these equalities to discuss the extent to which adding or subtracting a photon increases the informational content of a state, thereby amplifying the suppression of free-energy-increasing processes. We go on to derive a Crooks-like equality for an energy supply that is prepared in a pure binomial state, leading to a non-trivial contribution from energy and coherence to the resultant irreversibility. We show how the binomial state equality relates to a previously derived coherent state equality and offers a richer feature set. Full article
Open Access Article
Gravity Wave Activity in the Stratosphere before the 2011 Tohoku Earthquake as the Mechanism of Lithosphere-atmosphere-ionosphere Coupling
Entropy 2020, 22(1), 110; https://doi.org/10.3390/e22010110 - 16 Jan 2020
Abstract
The precursory atmospheric gravity wave (AGW) activity in the stratosphere was investigated in our previous paper for an inland earthquake (EQ), the Kumamoto EQ. We are interested in whether the same phenomenon occurs before other major EQs, especially oceanic ones. In this study, we examined the stratospheric AGW activity before the oceanic 2011 Tohoku EQ (Mw 9.0), using temperature profiles retrieved from ERA5. The potential energy (EP) of the AGW was enhanced from 3 to 7 March, 4–8 days before the EQ. The active region of the precursory AGW first appeared around the EQ epicenter and then expanded omnidirectionally, but mainly toward the east, covering a wide area of 2500 km (in longitude) by 1500 km (in latitude). We also found an influence of this AGW activity on some stratospheric parameters: the stratopause was heated and descended, the ozone concentration was reduced, and the zonal wind was reversed at the stratopause altitude before the EQ. These anomalies of the stratospheric AGW and of the physical/chemical parameters were most significant on 5–6 March and are consistent in time and spatial distribution with the lower-ionospheric perturbation detected by our VLF network observations. Although several potential sources can generate AGW activity and chemical variations in the stratosphere, we excluded the alternatives by a process of elimination and concluded that the abnormal phenomena observed in the present study are EQ precursors. The present paper shows that abnormal stratospheric AGW activity can be detected even before an oceanic EQ, and that the AGW activity propagated obliquely upward and disturbed the lower ionosphere. This case study provides further support for the AGW hypothesis of the lithosphere-atmosphere-ionosphere coupling process. Full article
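The wave potential energy EP used here is commonly computed from the ratio of the temperature perturbation to the smoothed background profile. A minimal sketch — the formula is the standard EP definition, but the constants, altitude grid, and synthetic profile below are illustrative and not the paper's ERA5 processing:

```python
import numpy as np

def agw_potential_energy(T_perturb, T_mean, N=0.02, g=9.81):
    """Potential energy per unit mass (J/kg) of atmospheric gravity waves:
    Ep = (g^2 / (2 N^2)) * <(T'/Tbar)^2>, with N the Brunt-Vaisala frequency.
    N and g values are typical stratospheric figures, chosen for illustration."""
    frac = np.asarray(T_perturb) / np.asarray(T_mean)
    return (g**2 / (2.0 * N**2)) * np.mean(frac**2)

# Synthetic example: a 1 K wave perturbation on a 230 K stratospheric background.
z = np.linspace(20e3, 50e3, 300)              # altitude grid (m)
T_bar = np.full_like(z, 230.0)                # smoothed background temperature (K)
T_prime = 1.0 * np.sin(2 * np.pi * z / 5e3)   # wave residual, ~5 km vertical wavelength
Ep = agw_potential_energy(T_prime, T_bar)     # on the order of 1 J/kg for this case
```

Enhancements of EP relative to a quiet-time baseline are what the study tracks as precursory activity.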
Open AccessArticle
Visual Analysis on Information Theory and Science of Complexity Approaches in Healthcare Research
Entropy 2020, 22(1), 109; https://doi.org/10.3390/e22010109 - 16 Jan 2020
Abstract
In order to explore the knowledge base, research hotspots, development status, and future research directions of healthcare research based on information theory and complexity science, a total of 3031 literature samples from the Web of Science Core Collection, spanning 2003 to 2019, were selected for bibliometric analysis. HistCite, CiteSpace, Excel, and other analytical tools were used to analyze and visualize the temporal distribution, spatial distribution, knowledge evolution, literature co-citation, and research hotspots of this field. This paper reveals the current state of healthcare research based on information theory and the science of complexity, analyzes and discusses the research hotspots and future development trends in this field, and provides knowledge support for researchers pursuing further relevant research. Full article
Open AccessArticle
Generalizing Information to the Evolution of Rational Belief
Entropy 2020, 22(1), 108; https://doi.org/10.3390/e22010108 - 16 Jan 2020
Abstract
Information theory provides a mathematical foundation to measure uncertainty in belief. Belief is represented by a probability distribution that captures our understanding of an outcome’s plausibility. Information measures based on Shannon’s concept of entropy include realization information, Kullback–Leibler divergence, Lindley’s information in experiment, cross entropy, and mutual information. We derive a general theory of information from first principles that accounts for evolving belief and recovers all of these measures. Rather than simply gauging uncertainty, information is understood in this theory to measure change in belief. We may then regard entropy as the information we expect to gain upon realization of a discrete latent random variable. This theory of information is compatible with the Bayesian paradigm in which rational belief is updated as evidence becomes available. Furthermore, this theory admits novel measures of information with well-defined properties, which we explored in both analysis and experiment. This view of information illuminates the study of machine learning by allowing us to quantify information captured by a predictive model and distinguish it from residual information contained in training data. We gain related insights regarding feature selection, anomaly detection, and novel Bayesian approaches. Full article
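For discrete distributions, the measures listed above reduce to a few lines of array code. A minimal sketch, all in bits (the helper names are ours, not the paper's):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 log 0 = 0 by convention
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits (requires q > 0 where p > 0)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def mutual_information(joint):
    """Mutual information I(X;Y) from a joint table: D(joint || product of marginals)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    return kl_divergence(joint.ravel(), (px * py).ravel())

h_coin = entropy([0.5, 0.5])                     # exactly 1 bit for a fair coin
i_indep = mutual_information([[0.25, 0.25],
                              [0.25, 0.25]])     # 0: independent variables share nothing
```

Under the paper's view, each of these is a special case of the information gained when belief moves from one distribution to another.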
Open AccessArticle
On the Composability of Statistically Secure Random Oblivious Transfer
Entropy 2020, 22(1), 107; https://doi.org/10.3390/e22010107 - 16 Jan 2020
Abstract
We show that random oblivious transfer protocols that are statistically secure according to a definition based on a list of information-theoretical properties are also statistically universally composable. That is, they are simulatable secure with an unlimited adversary, an unlimited simulator, and an unlimited environment machine. Our result implies that several previous oblivious transfer protocols in the literature that were proven secure under weaker, non-composable definitions of security can actually be used in arbitrary statistically secure applications without lowering the security. Full article
Open AccessArticle
Spatio-Temporal Evolution Analysis of Drought Based on Cloud Transformation Algorithm over Northern Anhui Province
Entropy 2020, 22(1), 106; https://doi.org/10.3390/e22010106 - 16 Jan 2020
Abstract
Drought is one of the most typical and serious natural disasters, occurring frequently in most of mainland China, and it is crucial to explore the evolution characteristics of drought for developing effective schemes and strategies for drought disaster risk management. Building on applications of cloud theory in drought-evolution research, a cloud transformation algorithm coupled with a conception-zooming model was proposed to re-fit the distribution pattern of the SPI instead of the Pearson-III distribution. The spatio-temporal evolution features of drought were then summarized utilizing the cloud characteristics: average, entropy, and hyper-entropy. Application results for Northern Anhui province revealed that the drought condition was most serious during the period from 1957 to 1970, with the SPI12 index below −0.5 in 49 months, 12 of which reached an extreme drought level. The overall drought intensity varied with the highest certainty level but lowest stability level in winter, while the opposite held in summer. Moreover, drought hazard intensified significantly with increasing latitude in Northern Anhui province. The overall drought hazard in Suzhou and Huaibei was the most serious, followed by Bozhou, Bengbu, and Fuyang; drought intensity in Huainan was the lightest. The results of the drought evolution analysis are reasonable and reliable and supply an effective decision-making basis for establishing drought risk management strategies. Full article
(This article belongs to the Special Issue Spatial Information Theory)
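The classical parametric step that the paper replaces is worth seeing concretely: the SPI maps accumulated precipitation through a fitted distribution's CDF and then through the inverse standard normal. A minimal sketch using a gamma fit on synthetic data (the paper substitutes a cloud-transformation re-fit for this fixed parametric family):

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index: fit a gamma distribution to the
    accumulated-precipitation series, then map its CDF through the inverse
    standard normal so the index is ~N(0, 1) by construction."""
    precip = np.asarray(precip, dtype=float)
    shape, _, scale = stats.gamma.fit(precip, floc=0)   # pin location at zero
    return stats.norm.ppf(stats.gamma.cdf(precip, shape, loc=0, scale=scale))

rng = np.random.default_rng(0)
monthly = rng.gamma(shape=2.0, scale=40.0, size=360)    # 30 years of synthetic totals (mm)
index = spi(monthly)
drought_months = int(np.sum(index < -0.5))              # months at or beyond mild drought
```

Thresholds such as SPI < −0.5 (as used for the 49-month count above) then classify drought severity.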
Open AccessArticle
Statistical Complexity Analysis of Turing Machine Tapes with Fixed Algorithmic Complexity Using the Best-Order Markov Model
Entropy 2020, 22(1), 105; https://doi.org/10.3390/e22010105 - 16 Jan 2020
Abstract
Sources that generate symbolic sequences with algorithmic nature may differ in statistical complexity because they create structures that follow algorithmic schemes, rather than generating symbols from a probabilistic function assuming independence. In the case of Turing machines, this means that machines with the same algorithmic complexity can create tapes with different statistical complexity. In this paper, we use a compression-based approach to measure the global and local statistical complexity of specific Turing machine tapes with the same number of states and alphabet symbols. Both measures are estimated using the best-order Markov model. For the global measure, we use the Normalized Compression (NC), while, for the local measures, we define and use normal and dynamic complexity profiles to quantify and localize regions of lower and higher statistical complexity. We assessed the validity of our methodology on synthetic and real genomic data, showing that it is tolerant to increasing rates of edits and block permutations. Regarding the analysis of the tapes, we localize patterns of higher statistical complexity in two regions, for different numbers of machine states. We show that these patterns are generated by a decrease of the tape’s amplitude, given the setting of small rule cycles. Additionally, we compared our approach with a measure that combines algorithmic and statistical approaches (BDM) for analysis of the tapes. Naturally, BDM is efficient given the algorithmic nature of the tapes; however, for higher numbers of states, BDM is progressively approximated by our methodology. Finally, we provide a simple algorithm to increase the statistical complexity of a Turing machine tape while retaining the same algorithmic complexity. We supply a publicly available implementation of the algorithm in C++ under the GPLv3 license, and all results can be reproduced in full with the scripts provided at the repository. Full article
(This article belongs to the Special Issue Shannon Information and Kolmogorov Complexity)
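The Normalized Compression measure has a compact definition: the compressed size in bits divided by the sequence length times the per-symbol information of the alphabet. A minimal sketch — the paper estimates the compressed size with best-order Markov-model compressors, so the general-purpose `lzma` used here is only a convenient stand-in:

```python
import lzma
import math
import random

def normalized_compression(x: bytes, alphabet_size: int) -> float:
    """NC(x) = C(x) / (|x| * log2(alphabet size)), where C(x) is the size of
    the compressed representation in bits.  Values near 0 indicate a highly
    regular sequence; values near (or above) 1 indicate incompressibility."""
    c_bits = 8 * len(lzma.compress(x, preset=9))
    return c_bits / (len(x) * math.log2(alphabet_size))

rng = random.Random(0)
low = b"A" * 40000                                       # a highly regular "tape"
high = bytes(rng.getrandbits(1) for _ in range(40000))   # a pseudo-random binary tape

nc_low = normalized_compression(low, alphabet_size=2)
nc_high = normalized_compression(high, alphabet_size=2)
```

Both inputs here have trivially short descriptions (low algorithmic complexity), yet their NC values differ sharply — the same gap the paper exploits between tapes of fixed algorithmic complexity.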
Open AccessArticle
Complexity of Cardiotocographic Signals as a Predictor of Labor
Entropy 2020, 22(1), 104; https://doi.org/10.3390/e22010104 - 16 Jan 2020
Abstract
Prediction of labor is of extreme importance in obstetric care to allow for preventive measures, assuring that both baby and mother have the best possible care. In this work, the authors studied how useful nonlinear parameters (entropy and compression) can be as labor predictors. Linear features retrieved from the SisPorto system for cardiotocogram analysis and nonlinear measures were used to predict labor in a dataset of 1072 antepartum tracings recorded between 30 and 35 weeks of gestation. Two groups were defined: Group A—fetuses whose traces were recorded less than one or two weeks before labor, and Group B—fetuses whose traces were recorded at least one or two weeks before labor. Results suggest that, compared with linear features such as decelerations and variability indices, compression improves labor prediction both within one week (C-statistic of 0.728) and within two weeks (C-statistic of 0.704). Moreover, the correlation between compression and long-term variability was significantly different in Groups A and B, denoting that compression and heart rate variability capture different information associated with whether the fetus is closer to or further from labor onset. Nonlinear measures, compression in particular, may be useful in improving labor prediction as a complement to other fetal heart rate features. Full article
Open AccessArticle
Determining the Bulk Parameters of Plasma Electrons from Pitch-Angle Distribution Measurements
Entropy 2020, 22(1), 103; https://doi.org/10.3390/e22010103 - 16 Jan 2020
Abstract
Electrostatic analysers measure the flux of plasma particles in velocity space and determine their velocity distribution function. There are occasions when science objectives require high time-resolution measurements, and the instrument operates in short measurement cycles, sampling only a portion of the velocity distribution function. One such high-resolution measurement strategy consists of sampling the two-dimensional pitch-angle distributions of the plasma particles, which describe the velocities of the particles with respect to the local magnetic field direction. Here, we investigate the accuracy of plasma bulk parameters derived from such high-resolution measurements. We simulate electron observations from the Solar Wind Analyser’s (SWA) Electron Analyser System (EAS) on board Solar Orbiter. We show that fitting analysis of the synthetic datasets determines the plasma temperature and kappa index of the distribution within 10% of their actual values, even at large heliocentric distances where the expected solar wind flux is very low. Interestingly, we show that although measurement points with zero counts are not statistically significant, they provide information about the particle distribution function which becomes important when the particle flux is low. We also examine the convergence of the fitting algorithm for expected plasma conditions and discuss the sources of statistical and systematic uncertainties. Full article
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)
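The fitting-analysis idea can be illustrated with a toy version of the pipeline: generate Poisson-distributed synthetic counts from a kappa distribution, then recover the parameters by least-squares fitting. A minimal sketch — the 1-D functional form, grid, and parameter values are illustrative (kappa-distribution conventions vary between papers), not the SWA-EAS instrument model:

```python
import numpy as np
from scipy.optimize import curve_fit

def kappa_dist(v, n, theta, kappa):
    """One common 1-D kappa velocity distribution; treat this form and its
    exponent convention as illustrative."""
    return n * (1.0 + v**2 / (kappa * theta**2)) ** (-kappa - 1.0)

# Synthetic detector counts with Poisson statistics; at the grid edges the
# expected counts fall below one, so many bins record zero counts -- yet, as
# the abstract notes, those bins still carry information and enter the fit.
v = np.linspace(-5.0, 5.0, 81)                    # normalized velocity grid
rng = np.random.default_rng(1)
counts = rng.poisson(kappa_dist(v, 1000.0, 1.2, 4.0))

popt, _ = curve_fit(kappa_dist, v, counts, p0=[800.0, 1.0, 3.0],
                    bounds=([0.0, 0.1, 1.6], [np.inf, np.inf, 50.0]))
n_fit, theta_fit, kappa_fit = popt                # recovered density, theta, kappa
```

Repeating the fit at lower peak counts mimics the large-heliocentric-distance, low-flux regime the paper examines.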
Open AccessFeature PaperArticle
Learning in Feedforward Neural Networks Accelerated by Transfer Entropy
Entropy 2020, 22(1), 102; https://doi.org/10.3390/e22010102 - 16 Jan 2020
Abstract
Current neural network architectures are increasingly hard to train because of the growing size and complexity of the datasets used. Our objective is to design more efficient training algorithms utilizing causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information-transfer measure used to quantify the statistical coherence between events (time series). Later, it was related to causality, although the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
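Transfer entropy itself is easy to state for short histories: TE(X→Y) with history length 1 is the extra information X's past gives about Y's next value beyond Y's own past, I(Y_{t+1}; X_t | Y_t). A minimal plug-in estimator for binary series (an illustration of the measure, not the paper's training algorithm):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE_{X->Y} in bits for two binary series, history
    length 1: sum over observed (y_{t+1}, y_t, x_t) triples of
    p(y1,y0,x0) * log2[ p(y1,y0,x0) p(y0) / (p(y0,x0) p(y1,y0)) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_xyz = c / n
        p_y0x0 = sum(v for (a, b, d), v in triples.items() if b == y0 and d == x0) / n
        p_y1y0 = sum(v for (a, b, d), v in triples.items() if a == y1 and b == y0) / n
        p_y0 = sum(v for (a, b, d), v in triples.items() if b == y0) / n
        te += p_xyz * np.log2(p_xyz * p_y0 / (p_y0x0 * p_y1y0))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)      # driver: fair random bits
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]                    # follower: copies x with one step of lag

te_drive = transfer_entropy(x, y)   # ~1 bit: x fully determines y's next value
te_back = transfer_entropy(y, x)    # ~0 bits: nothing flows back to the random driver
```

The asymmetry between the two directions is what lets TE serve as a (directed) relevance score for connections.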
Open AccessArticle
A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics
Entropy 2020, 22(1), 101; https://doi.org/10.3390/e22010101 - 15 Jan 2020
Abstract
This paper is a step towards developing a geometric understanding of a popular algorithm for training deep neural networks named stochastic gradient descent (SGD). We built upon a recent result which observed that the noise in SGD while training typical networks is highly non-isotropic. That motivated a deterministic model in which the trajectories of our dynamical systems are described via geodesics of a family of metrics arising from a certain diffusion matrix; namely, the covariance of the stochastic gradients in SGD. Our model is analogous to models in general relativity: the role of the electromagnetic field in the latter is played by the gradient of the loss function of a deep network in the former. Full article
(This article belongs to the Special Issue The Information Bottleneck in Deep Learning)
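The diffusion matrix at the heart of this construction — the covariance of the per-sample (stochastic) gradients — is straightforward to compute for a small model. A minimal sketch on a toy least-squares problem (an illustration of the quantity, not the deep networks of the paper):

```python
import numpy as np

# Toy least-squares model: loss_i(w) = (x_i . w - y_i)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=500)

w = np.zeros(3)                                    # current iterate
residual = X @ w - y
per_sample_grads = 2.0 * residual[:, None] * X     # gradient of loss_i, one row per sample
mean_grad = per_sample_grads.mean(axis=0)          # full-batch gradient SGD approximates
D = np.cov(per_sample_grads, rowvar=False)         # SGD noise covariance (diffusion matrix)

# The eigenvalues of D differ across directions -- the non-isotropy of SGD
# noise that motivates replacing the flat metric with a diffusion metric.
eigvals = np.linalg.eigvalsh(D)
```

In the paper's geometric picture, D plays the role of a metric whose geodesics describe the (deterministic model of the) SGD trajectories.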
Open AccessArticle
On Heat Transfer Performance of Cooling Systems Using Nanofluid for Electric Motor Applications
Entropy 2020, 22(1), 99; https://doi.org/10.3390/e22010099 - 14 Jan 2020
Abstract
This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling systems of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. To solve the governing equations, computational fluid dynamics and 3D fluid motion analysis are used. The base fluid is water with a laminar flow. The fluid Reynolds number and the turn-number of the spiral channels are the evaluation parameters. The effect of the nanoparticle volume fraction in the base fluid on the heat transfer performance of the cooling system is studied. Increasing the volume fraction of nanoparticles improves the heat transfer performance of the cooling system; on the other hand, a high volume fraction increases the pressure drop of the coolant and the required pumping power. This paper aims at finding a trade-off between these effects by studying both the fluid flow and the heat transfer characteristics of the nanofluid. Full article
Open AccessArticle
The Convex Information Bottleneck Lagrangian
Entropy 2020, 22(1), 98; https://doi.org/10.3390/e22010098 - 14 Jan 2020
Abstract
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T;β) = I(T;Y) − βI(X;T)) for many values of β ∈ [0,1]. Then, the curve of maximal I(T;Y) for a given I(X;T) is drawn, and a representation with the desired predictability and compression is selected. It is known that when Y is a deterministic function of X, the IB curve cannot be explored, and another Lagrangian has been proposed to tackle this problem, the squared IB Lagrangian: L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization. Full article
(This article belongs to the Special Issue Information–Theoretic Approaches to Computational Intelligence)
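Both Lagrangians can be evaluated directly for a fixed stochastic encoder on a discrete problem, using the Markov chain T − X − Y to get p(t, y). A minimal sketch (the toy channel and encoder below are ours, chosen only to make the two objectives comparable):

```python
import numpy as np

def mi(joint):
    """Mutual information in bits from a joint probability table."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (px * py)[mask]))

def ib_lagrangians(p_x, p_t_given_x, p_y_given_x, beta):
    """Evaluate L_IB = I(T;Y) - beta*I(X;T) and the squared variant
    L_sq = I(T;Y) - beta*I(X;T)^2 for a fixed encoder p(t|x)."""
    p_xt = p_x[:, None] * p_t_given_x          # joint of X and T
    # Markov chain T - X - Y:  p(t, y) = sum_x p(x) p(t|x) p(y|x)
    p_ty = p_xt.T @ p_y_given_x                # joint of T and Y
    i_xt, i_ty = mi(p_xt), mi(p_ty)
    return i_ty - beta * i_xt, i_ty - beta * i_xt**2

p_x = np.array([0.5, 0.5])
p_y_given_x = np.array([[0.9, 0.1], [0.2, 0.8]])   # noisy label channel
p_t_given_x = np.array([[0.8, 0.2], [0.1, 0.9]])   # a candidate soft encoder
l_ib, l_sq = ib_lagrangians(p_x, p_t_given_x, p_y_given_x, beta=0.5)
```

Sweeping β (or β_sq) over such evaluations, with the encoder optimized at each value, traces the IB curve — the repeated optimization the paper's convex Lagrangian avoids.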
Open AccessReview
A Review of the Application of Information Theory to Clinical Diagnostic Testing
Entropy 2020, 22(1), 97; https://doi.org/10.3390/e22010097 - 14 Jan 2020
Abstract
The fundamental information theory functions of entropy, relative entropy, and mutual information are directly applicable to clinical diagnostic testing. This is a consequence of the fact that an individual’s disease state and diagnostic test result are random variables. In this paper, we review the application of information theory to the quantification of diagnostic uncertainty, diagnostic information, and diagnostic test performance. An advantage of information theory functions over more established test performance measures is that they can be used when multiple disease states are under consideration as well as when the diagnostic test can yield multiple or continuous results. Since more than one diagnostic test is often required to help determine a patient’s disease state, we also discuss the application of the theory to situations in which more than one diagnostic test is used. The total diagnostic information provided by two or more tests can be partitioned into meaningful components. Full article
(This article belongs to the Special Issue Applications of Information Theory to Epidemiology)
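The core quantity — the diagnostic information I(D;R) shared between disease state D and test result R — follows directly from a joint probability table. A minimal sketch for a binary disease and binary test built from prevalence, sensitivity, and specificity (the numbers below are illustrative, not from the review):

```python
import numpy as np

def h(p):
    """Shannon entropy in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def diagnostic_information(prevalence, sensitivity, specificity):
    """I(D;R) in bits for a binary disease state and binary test result:
    the average reduction in diagnostic uncertainty the test provides."""
    joint = np.array([
        [prevalence * sensitivity, prevalence * (1 - sensitivity)],
        [(1 - prevalence) * (1 - specificity), (1 - prevalence) * specificity],
    ])  # rows: diseased / healthy; columns: test positive / negative
    p_d = joint.sum(axis=1)
    p_r = joint.sum(axis=0)
    return h(p_d) + h(p_r) - h(joint.ravel())  # I(D;R) = H(D) + H(R) - H(D,R)

# A good but imperfect test at 10% prevalence resolves only part of the
# ~0.47 bits of prior diagnostic uncertainty (about 0.24 bits here).
info = diagnostic_information(0.10, 0.95, 0.90)
```

The same construction extends to multiple disease states or multi-valued results by enlarging the joint table — the setting where the review argues information measures outperform sensitivity/specificity pairs.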