Open Access Article
Association between Mean Heart Rate and Recurrence Quantification Analysis of Heart Rate Variability in End-Stage Renal Disease
Entropy 2020, 22(1), 114; https://doi.org/10.3390/e22010114 - 18 Jan 2020
Viewed by 521
Abstract
Linear heart rate variability (HRV) indices are dependent on the mean heart rate, which has been demonstrated in different models (from sinoatrial cells to humans). The association between nonlinear HRV indices, including those provided by recurrence quantification analysis (RQA), and the mean heart rate (or the mean cardiac period, also called meanNN) has been scarcely studied. For this purpose, we analyzed RQA indices of five-minute-long HRV time series obtained in the supine position and during active standing from 30 healthy subjects and 29 end-stage renal disease (ESRD) patients (before and after hemodialysis). In the supine position, ESRD patients showed shorter meanNN (i.e., a faster heart rate) and decreased variability compared to healthy subjects. The healthy subjects responded to active standing by shortening meanNN and decreasing HRV indices, reaching values similar to those of the ESRD patients. Bivariate correlations between all RQA indices and meanNN were significant in healthy subjects and in ESRD patients after hemodialysis, and for most RQA indices in ESRD patients before hemodialysis. Multiple linear regression analyses showed that RQA indices were also dependent on the position and the ESRD condition. Future studies should therefore consider the association among RQA indices, meanNN, and these other factors for a correct interpretation of HRV. Full article
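For readers less familiar with RQA, the sketch below illustrates in Python how two common recurrence indices (recurrence rate and determinism) can be computed from an NN-interval series; the embedding dimension, delay, and threshold used here are illustrative assumptions, not the settings of the study.

```python
import numpy as np

def recurrence_indices(nn, dim=3, delay=1, radius=0.2, l_min=2):
    """Recurrence rate (REC) and determinism (DET) of an NN-interval series.

    nn     : 1-D array of cardiac inter-beat intervals (seconds)
    dim    : embedding dimension (illustrative choice)
    delay  : embedding delay in samples
    radius : recurrence threshold as a fraction of the series' std. dev.
    l_min  : minimum diagonal line length counted as 'deterministic'
    """
    nn = np.asarray(nn, dtype=float)
    n = len(nn) - (dim - 1) * delay
    # Time-delay embedding of the NN series.
    emb = np.column_stack([nn[i * delay:i * delay + n] for i in range(dim)])
    # Recurrence matrix: pairs of embedded states closer than the threshold
    # (the line of identity is kept for brevity).
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    rp = (dist <= radius * nn.std()).astype(int)
    rec = rp.sum() / rp.size  # recurrence rate
    # Determinism: fraction of recurrent points lying on diagonals of length >= l_min.
    diag_points = det_points = 0
    for k in range(-(n - 1), n):
        d = np.diagonal(rp, offset=k)
        run = 0
        for v in list(d) + [0]:          # sentinel closes the last run
            if v:
                run += 1
            else:
                diag_points += run
                if run >= l_min:
                    det_points += run
                run = 0
    det = det_points / diag_points if diag_points else 0.0
    return rec, det

# Toy example: 300 synthetic NN intervals around 0.8 s (75 bpm).
rng = np.random.default_rng(0)
nn = 0.8 + 0.05 * np.sin(np.arange(300) / 10) + 0.01 * rng.standard_normal(300)
print(recurrence_indices(nn))
```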
Open Access Article
Energy and Exergy Evaluation of a Two-Stage Axial Vapour Compressor on the LNG Carrier
Entropy 2020, 22(1), 115; https://doi.org/10.3390/e22010115 - 17 Jan 2020
Viewed by 508
Abstract
Data from a two-stage axial vapor cryogenic compressor on a dual-fuel diesel–electric (DFDE) liquefied natural gas (LNG) carrier were measured and analyzed to investigate the compressor's energy and exergy efficiency under real operating conditions. The running parameters of the two-stage compressor were collected while the speed of the main propeller shafts was changed. As the compressor's supply of vaporized gas to the main engines increases, so do the load and rpm of the propulsion electric motors, and vice versa. The results show that, as the main propulsion shaft speed varied from 46 to 56 rpm, the increased mass flow rate of vaporized LNG through the two-stage compressor influenced compressor performance. The compressor's average energy efficiency is around 50%, while its exergy efficiency is significantly lower over the whole measured range, averaging around 34%. A change in the ambient temperature from 0 to 50 °C also influences the compressor's exergy efficiency: higher exergy efficiency is achieved at lower ambient temperatures, and as the temperature increases, overall compressor exergy efficiency decreases by about 7% on average over the whole analyzed range. A new concept for saving energy and increasing compressor efficiency, based on pre-cooling the compressor's second stage, is also analyzed. The temperature at the second stage was varied in the range from 0 to −50 °C, which results in power savings of up to 26 kW for optimal running regimes. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
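To illustrate the energy-versus-exergy distinction the abstract draws, here is a simplified ideal-gas sketch of a single compressor stage; the property values and operating point are illustrative assumptions, not the boil-off-gas property model or measured data used in the paper.

```python
import math

# Simplified single-stage compressor, treating methane boil-off gas as an ideal gas.
cp, R = 2.22, 0.5183     # kJ/(kg K), approximate values for methane
T0 = 298.15              # dead-state (ambient) temperature, K

def compressor_efficiencies(T1, p1, T2, p2):
    """Energy (isentropic) and exergy efficiency of one compressor stage."""
    w_actual = cp * (T2 - T1)                      # actual specific work, kJ/kg
    T2s = T1 * (p2 / p1) ** (R / cp)               # isentropic outlet temperature
    w_isentropic = cp * (T2s - T1)
    eta_energy = w_isentropic / w_actual           # isentropic (energy) efficiency
    # Exergy efficiency: flow-exergy increase of the gas per unit of work input;
    # note the explicit dependence on the ambient (dead-state) temperature T0.
    dh = cp * (T2 - T1)
    ds = cp * math.log(T2 / T1) - R * math.log(p2 / p1)
    eta_exergy = (dh - T0 * ds) / w_actual
    return eta_energy, eta_exergy

# Illustrative (made-up) operating point: cold vapour compressed from 1 to 4 bar.
print(compressor_efficiencies(T1=160.0, p1=1.0e5, T2=260.0, p2=4.0e5))
```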
Open Access Article
Entropy-Based Effect Evaluation of Delineators in Tunnels on Drivers’ Gaze Behavior
Entropy 2020, 22(1), 113; https://doi.org/10.3390/e22010113 - 17 Jan 2020
Viewed by 399
Abstract
Driving safety in tunnels has always been an issue of great concern. Installing delineators to improve drivers' instantaneous cognition of the surrounding environment in tunnels can effectively enhance driving safety. Through a simulation study, this paper explored how delineators affect drivers' gaze behavior (including fixations and scanpaths) in tunnels. In addition to analyzing typical parameters, such as fixation position and fixation duration in areas of interest (AOIs), this paper models drivers' switching processes between AOIs as Markov chains and calculates the Shannon entropy of the fitted Markov model, thereby quantifying the complexity of individual switching patterns between AOIs under different delineator configurations and road alignments. A total of 25 subjects participated in this research. The results show that setting delineators in tunnels can attract drivers' attention and make them focus on the pavement. When driving in tunnels equipped with delineators, especially tunnels with both wall delineators and pavement delineators, the participants exhibited smaller transition entropy H_t and stationary entropy H_s, which can greatly reduce drivers' visual fatigue. Compared with the left-curve and right-curve sections, participants obtained higher H_t and H_s values in the straight section. Full article
(This article belongs to the Section Multidisciplinary Applications)
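The transition entropy H_t and stationary entropy H_s mentioned above can be computed from a first-order Markov chain fitted to AOI fixation sequences; the sketch below shows one plausible formulation (entropy of the stationary distribution, and the stationary-weighted conditional entropy of transitions), which may differ in detail from the paper's exact definitions.

```python
import numpy as np

def gaze_entropies(aoi_sequence, n_aoi):
    """Transition entropy H_t and stationary entropy H_s of an AOI fixation sequence.

    aoi_sequence : list of integer AOI labels in fixation order (0 .. n_aoi-1)
    """
    # First-order Markov transition matrix estimated from consecutive fixations.
    counts = np.zeros((n_aoi, n_aoi))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = np.abs(pi) / np.abs(pi).sum()

    def H(p):  # Shannon entropy in bits, ignoring zero probabilities
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    H_s = H(pi)                                               # stationary entropy
    H_t = float(sum(pi[i] * H(P[i]) for i in range(n_aoi)))   # transition entropy
    return H_t, H_s

# Toy fixation sequence over three AOIs (e.g., pavement, wall, dashboard).
print(gaze_entropies([0, 0, 1, 0, 2, 0, 1, 1, 0, 2, 0, 0, 1], n_aoi=3))
```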
Open Access Article
Fast and Efficient Image Encryption Algorithm Based on Modular Addition and SPD
Entropy 2020, 22(1), 112; https://doi.org/10.3390/e22010112 - 16 Jan 2020
Viewed by 476
Abstract
Bit-level and pixel-level methods are two classes of image encryption, describing the smallest processing elements manipulated in diffusion and permutation, respectively. Most pixel-level permutation methods merely alter the positions of pixels, resulting in similar histograms for the original and permuted images. Bit-level permutation methods, however, are able to change the histogram of the image, but are usually not preferred because of the time-consuming nature of bit-level computation compared with other permutation techniques. In this paper, we introduce a new image encryption algorithm which uses binary bit-plane scrambling and an SPD diffusion technique for the bit-planes of a plain image, based on a card game trick. Integer values derived from a hexadecimal SHA-512 key are also used, along with adaptive block-based modular addition of pixels, to encrypt the images. To demonstrate the first-rate encryption performance of the proposed algorithm, security analyses are provided in this paper. Simulations and other results confirm the robustness of the proposed image encryption algorithm against many well-known attacks, in particular brute-force attacks, known/chosen plaintext attacks, occlusion attacks, differential attacks, and gray value difference attacks, among others. Full article
(This article belongs to the Section Multidisciplinary Applications)
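The toy sketch below shows only the modular-addition idea with a SHA-512-derived keystream; it is not the paper's algorithm (which additionally uses bit-plane scrambling, an SPD diffusion stage, and adaptive block-based addition), and the key handling here is purely illustrative.

```python
import hashlib
import numpy as np

def modular_addition_encrypt(img, key_phrase):
    """Toy diffusion stage: add a SHA-512-derived keystream to pixels modulo 256.

    img        : 2-D uint8 array (grayscale image)
    key_phrase : secret string; its SHA-512 digest seeds the keystream
    """
    digest = hashlib.sha512(key_phrase.encode()).digest()      # 64 bytes
    seed = int.from_bytes(digest[:8], "big")                   # 8 bytes used as a PRNG seed
    rng = np.random.default_rng(seed)
    keystream = rng.integers(0, 256, size=img.shape, dtype=np.uint8)
    cipher = (img.astype(np.uint16) + keystream) % 256         # modular addition
    return cipher.astype(np.uint8), keystream

def modular_addition_decrypt(cipher, keystream):
    return ((cipher.astype(np.int16) - keystream) % 256).astype(np.uint8)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)              # toy "image"
enc, ks = modular_addition_encrypt(img, "secret key")
assert np.array_equal(modular_addition_decrypt(enc, ks), img)
print(enc)
```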
Open Access Article
Quantifying Athermality and Quantum Induced Deviations from Classical Fluctuation Relations
Entropy 2020, 22(1), 111; https://doi.org/10.3390/e22010111 - 16 Jan 2020
Viewed by 549
Abstract
In recent years, a quantum information theoretic framework has emerged for incorporating non-classical phenomena into fluctuation relations. Here, we elucidate this framework by exploring deviations from classical fluctuation relations resulting from the athermality of the initial thermal system and the quantum coherence of the system's energy supply. In particular, we develop Crooks-like equalities for an oscillator system which is prepared either in photon-added or photon-subtracted thermal states and derive a Jarzynski-like equality for average work extraction. We use these equalities to discuss the extent to which adding or subtracting a photon increases the informational content of a state, thereby amplifying the suppression of free-energy-increasing processes. We go on to derive a Crooks-like equality for an energy supply that is prepared in a pure binomial state, leading to a non-trivial contribution from energy and coherence to the resultant irreversibility. We show how the binomial state equality fits in relation to a previously derived coherent state equality and offers a richer feature set. Full article
Open Access Article
Gravity Wave Activity in the Stratosphere before the 2011 Tohoku Earthquake as the Mechanism of Lithosphere-atmosphere-ionosphere Coupling
Entropy 2020, 22(1), 110; https://doi.org/10.3390/e22010110 - 16 Jan 2020
Viewed by 490
Abstract
Precursory atmospheric gravity wave (AGW) activity in the stratosphere was investigated in our previous paper by studying an inland Kumamoto earthquake (EQ). We are interested in whether the same phenomenon occurs before another major EQ, especially an oceanic EQ. In this study, we examined the stratospheric AGW activity before the oceanic 2011 Tohoku EQ (Mw 9.0), using temperature profiles retrieved from ERA5. The potential energy (EP) of the AGW was enhanced from 3 to 7 March, 4–8 days before the EQ. The active region of the precursory AGW first appeared around the EQ epicenter and then expanded omnidirectionally, but mainly toward the east, covering a wide area of 2500 km (in longitude) by 1500 km (in latitude). We also found the influence of this AGW activity on several stratospheric parameters: the stratopause was heated and descended, the ozone concentration was reduced, and the zonal wind was reversed at the stratopause altitude before the EQ. These anomalies of the stratospheric AGW and physical/chemical parameters are most significant on 5–6 March and are consistent in time and spatial distribution with the lower ionospheric perturbation detected by our VLF network observations. Although several potential sources can generate AGW activity and chemical variations in the stratosphere, we excluded the other possibilities by a process of elimination and concluded that the abnormal phenomena observed in the present study are EQ precursors. The present paper shows that abnormal stratospheric AGW activity can be detected even before an oceanic EQ, and that the AGW activity propagated obliquely upward and further disturbed the lower ionosphere. This case study provides further support for the AGW hypothesis of the lithosphere-atmosphere-ionosphere coupling process. Full article
Open Access Article
Visual Analysis on Information Theory and Science of Complexity Approaches in Healthcare Research
Entropy 2020, 22(1), 109; https://doi.org/10.3390/e22010109 - 16 Jan 2020
Viewed by 450
Abstract
In order to explore the knowledge base, research hotspots, development status, and future research directions of healthcare research based on information theory and the science of complexity, a total of 3031 literature data samples from the Web of Science core collection, spanning 2003 to 2019, were selected for bibliometric analysis. HistCite, CiteSpace, Excel, and other analytical tools were used to analyze and visualize the temporal distribution, spatial distribution, knowledge evolution, literature co-citation, and research hotspots of this field. This paper reveals the current development of the healthcare research field based on information theory and the science of complexity, analyzes and discusses the research hotspots and future development trends in this field, and provides important knowledge support for researchers carrying out further relevant research. Full article
Open Access Article
Generalizing Information to the Evolution of Rational Belief
Entropy 2020, 22(1), 108; https://doi.org/10.3390/e22010108 - 16 Jan 2020
Viewed by 491
Abstract
Information theory provides a mathematical foundation to measure uncertainty in belief. Belief is represented by a probability distribution that captures our understanding of an outcome’s plausibility. Information measures based on Shannon’s concept of entropy include realization information, Kullback–Leibler divergence, Lindley’s information in experiment, cross entropy, and mutual information. We derive a general theory of information from first principles that accounts for evolving belief and recovers all of these measures. Rather than simply gauging uncertainty, information is understood in this theory to measure change in belief. We may then regard entropy as the information we expect to gain upon realization of a discrete latent random variable. This theory of information is compatible with the Bayesian paradigm in which rational belief is updated as evidence becomes available. Furthermore, this theory admits novel measures of information with well-defined properties, which we explored in both analysis and experiment. This view of information illuminates the study of machine learning by allowing us to quantify information captured by a predictive model and distinguish it from residual information contained in training data. We gain related insights regarding feature selection, anomaly detection, and novel Bayesian approaches. Full article
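As a concrete illustration of treating information as change in belief, the snippet below computes the entropy of a prior and posterior belief and the Kullback–Leibler divergence between them for a toy die example; the example and numbers are not from the paper.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete belief p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def kl_divergence(posterior, prior):
    """Information gained in moving from the prior to the posterior belief (bits)."""
    q, p = np.asarray(posterior, float), np.asarray(prior, float)
    mask = q > 0
    return float((q[mask] * np.log2(q[mask] / p[mask])).sum())

# Belief about a possibly loaded die before and after observing evidence.
prior = np.full(6, 1 / 6)
posterior = np.array([0.05, 0.05, 0.05, 0.05, 0.05, 0.75])
print("prior entropy    :", entropy(prior))        # maximal uncertainty, log2(6) bits
print("posterior entropy:", entropy(posterior))
print("information gain :", kl_divergence(posterior, prior))
```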
Open Access Article
On the Composability of Statistically Secure Random Oblivious Transfer
Entropy 2020, 22(1), 107; https://doi.org/10.3390/e22010107 - 16 Jan 2020
Viewed by 453
Abstract
We show that random oblivious transfer protocols that are statistically secure according to a definition based on a list of information-theoretical properties are also statistically universally composable. That is, they are simulatably secure with an unlimited adversary, an unlimited simulator, and an unlimited environment machine. Our result implies that several previous oblivious transfer protocols in the literature that were proven secure under weaker, non-composable definitions of security can actually be used in arbitrary statistically secure applications without lowering the security. Full article
Open Access Article
Spatio-Temporal Evolution Analysis of Drought Based on Cloud Transformation Algorithm over Northern Anhui Province
Entropy 2020, 22(1), 106; https://doi.org/10.3390/e22010106 - 16 Jan 2020
Viewed by 390
Abstract
Drought is one of the most typical and serious natural disasters, occurring frequently over most of mainland China, and it is crucial to explore the evolution characteristics of drought in order to develop effective schemes and strategies for drought disaster risk management. Applying cloud theory to drought evolution research, the cloud transformation algorithm and a conception zooming coupling model were proposed to re-fit the distribution pattern of the standardized precipitation index (SPI) instead of the Pearson type III distribution. The spatio-temporal evolution features of drought were then summarized using the cloud characteristics: average (expectation), entropy, and hyper-entropy. The application results for Northern Anhui province revealed that the drought condition was most serious during the period from 1957 to 1970, with the SPI12 index below −0.5 in 49 months, 12 of which reached the extreme drought level. The overall drought intensity varied with the highest certainty level but the lowest stability level in winter, with the opposite in summer. Moreover, drought hazard intensifies significantly with increasing latitude across Northern Anhui province. The overall drought hazard in Suzhou and Huaibei was the most serious, followed by Bozhou, Bengbu, and Fuyang; drought intensity in Huainan was the lightest. The drought evolution analysis results are reasonable and reliable and supply an effective decision-making basis for establishing drought risk management strategies. Full article
(This article belongs to the Special Issue Spatial Information Theory)
Open Access Article
Statistical Complexity Analysis of Turing Machine tapes with Fixed Algorithmic Complexity Using the Best-Order Markov Model
Entropy 2020, 22(1), 105; https://doi.org/10.3390/e22010105 - 16 Jan 2020
Viewed by 492
Abstract
Sources that generate symbolic sequences with algorithmic nature may differ in statistical complexity because they create structures that follow algorithmic schemes, rather than generating symbols from a probabilistic function assuming independence. In the case of Turing machines, this means that machines with the same algorithmic complexity can create tapes with different statistical complexity. In this paper, we use a compression-based approach to measure the global and local statistical complexity of specific Turing machine tapes with the same number of states and alphabet size. Both measures are estimated using the best-order Markov model. For the global measure, we use the Normalized Compression (NC), while, for the local measures, we define and use normal and dynamic complexity profiles to quantify and localize regions of lower and higher statistical complexity. We assessed the validity of our methodology on synthetic and real genomic data, showing that it is tolerant to increasing rates of edits and block permutations. Regarding the analysis of the tapes, we localize patterns of higher statistical complexity in two regions, for different numbers of machine states. We show that these patterns are generated by a decrease of the tape's amplitude, given the setting of small rule cycles. Additionally, we performed a comparison with a measure that uses both algorithmic and statistical approaches (BDM) for the analysis of the tapes. Naturally, BDM is efficient given the algorithmic nature of the tapes. However, for a higher number of states, BDM is progressively approximated by our methodology. Finally, we provide a simple algorithm to increase the statistical complexity of a Turing machine tape while retaining the same algorithmic complexity. We supply a publicly available implementation of the algorithm in the C++ language under the GPLv3 license. All results can be reproduced in full with scripts provided at the repository. Full article
(This article belongs to the Special Issue Shannon Information and Kolmogorov Complexity)
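A minimal sketch of the Normalized Compression idea is given below; note that it approximates C(x) with a general-purpose compressor (LZMA) rather than the best-order Markov model used in the paper, so the numbers are only indicative.

```python
import lzma
import math
import random

def normalized_compression(x: bytes) -> float:
    """Normalized Compression, roughly NC(x) = C(x) / (|x| * log2 |A|).

    C(x) is approximated here by the size in bits of the LZMA-compressed
    sequence; |A| is the alphabet size observed in x.
    """
    compressed_bits = 8 * len(lzma.compress(x))
    alphabet = len(set(x))
    return compressed_bits / (len(x) * math.log2(max(alphabet, 2)))

# A repetitive "tape" compresses far below a random-looking one.
regular = b"01" * 4000
random_like = bytes(random.Random(0).getrandbits(8) for _ in range(8000))
print("regular tape :", normalized_compression(regular))
print("random tape  :", normalized_compression(random_like))
```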
Open Access Article
Complexity of Cardiotocographic Signals as A Predictor of Labor
Entropy 2020, 22(1), 104; https://doi.org/10.3390/e22010104 - 16 Jan 2020
Viewed by 446
Abstract
Prediction of labor is of extreme importance in obstetric care, allowing for preventive measures and assuring that both baby and mother have the best possible care. In this work, the authors studied how useful nonlinear parameters (entropy and compression) can be as labor predictors. Linear features retrieved from the SisPorto system for cardiotocogram analysis and nonlinear measures were used to predict labor in a dataset of 1072 antepartum tracings recorded between 30 and 35 weeks of gestation. Two groups were defined: Group A, fetuses whose traces were recorded less than one (or two) weeks before labor, and Group B, fetuses whose traces were recorded at least one (or two) weeks before labor. Results suggest that, compared with linear features such as decelerations and variability indices, compression improves labor prediction both within one week (C-statistic of 0.728) and within two weeks (C-statistic of 0.704). Moreover, the correlation between compression and long-term variability was significantly different in Groups A and B, denoting that compression and heart rate variability capture different information associated with whether the fetus is closer to or further from labor onset. Nonlinear measures, compression in particular, may be useful in improving labor prediction as a complement to other fetal heart rate features. Full article
Open Access Article
Determining the Bulk Parameters of Plasma Electrons from Pitch-Angle Distribution Measurements
Entropy 2020, 22(1), 103; https://doi.org/10.3390/e22010103 - 16 Jan 2020
Viewed by 594
Abstract
Electrostatic analysers measure the flux of plasma particles in velocity space and determine their velocity distribution function. There are occasions when science objectives require high time-resolution measurements, and the instrument operates in short measurement cycles, sampling only a portion of the velocity distribution function. One such high-resolution measurement strategy consists of sampling the two-dimensional pitch-angle distributions of the plasma particles, which describes the velocities of the particles with respect to the local magnetic field direction. Here, we investigate the accuracy of plasma bulk parameters from such high-resolution measurements. We simulate electron observations from the Solar Wind Analyser’s (SWA) Electron Analyser System (EAS) on board Solar Orbiter. We show that fitting analysis of the synthetic datasets determines the plasma temperature and kappa index of the distribution within 10% of their actual values, even at large heliocentric distances where the expected solar wind flux is very low. Interestingly, we show that although measurement points with zero counts are not statistically significant, they provide information about the particle distribution function which becomes important when the particle flux is low. We also examine the convergence of the fitting algorithm for expected plasma conditions and discuss the sources of statistical and systematic uncertainties. Full article
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)
Open Access Feature Paper Article
Learning in Feedforward Neural Networks Accelerated by Transfer Entropy
Entropy 2020, 22(1), 102; https://doi.org/10.3390/e22010102 - 16 Jan 2020
Viewed by 719
Abstract
Current neural network architectures are increasingly hard to train because of the growing size and complexity of the datasets used. Our objective is to design more efficient training algorithms utilizing causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information transfer measure used to quantify the statistical coherence between events (time series). Later, it was related to causality, even though the two are not the same. There are only a few papers reporting applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
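For orientation, the snippet below is a minimal plug-in estimator of transfer entropy between two binned time series with history length 1; it illustrates the quantity the paper builds on, not the paper's estimator or its feedback-connection training algorithm.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=4):
    """Transfer entropy TE(source -> target) in bits, with history length 1.

    Both series are discretized into equal-frequency bins; TE is then
    sum over (y_t, y_{t-1}, x_{t-1}) of
    p(y_t, y_{t-1}, x_{t-1}) * log2[ p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}) ].
    """
    def discretize(z):
        ranks = np.argsort(np.argsort(z))
        return (ranks * bins // len(z)).astype(int)

    x, y = discretize(np.asarray(source)), discretize(np.asarray(target))
    triples = list(zip(y[1:], y[:-1], x[:-1]))
    n = len(triples)
    p_yyx = Counter(triples)
    p_yx = Counter((yp, xp) for _, yp, xp in triples)
    p_yy = Counter((yt, yp) for yt, yp, _ in triples)
    p_y = Counter(yp for _, yp, _ in triples)

    te = 0.0
    for (yt, yp, xp), c in p_yyx.items():
        joint = c / n
        cond_full = c / p_yx[(yp, xp)]
        cond_hist = p_yy[(yt, yp)] / p_y[yp]
        te += joint * np.log2(cond_full / cond_hist)
    return te

# Toy coupled pair: y follows x with a one-step lag, plus noise.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
print("TE x->y:", transfer_entropy(x, y))   # clearly positive
print("TE y->x:", transfer_entropy(y, x))   # near zero (small estimation bias)
```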
Open Access Article
A Geometric Interpretation of Stochastic Gradient Descent Using Diffusion Metrics
Entropy 2020, 22(1), 101; https://doi.org/10.3390/e22010101 - 15 Jan 2020
Viewed by 507
Abstract
This paper is a step towards developing a geometric understanding of a popular algorithm for training deep neural networks named stochastic gradient descent (SGD). We built upon a recent result which observed that the noise in SGD while training typical networks is highly non-isotropic. That motivated a deterministic model in which the trajectories of our dynamical systems are described via geodesics of a family of metrics arising from a certain diffusion matrix; namely, the covariance of the stochastic gradients in SGD. Our model is analogous to models in general relativity: the role of the electromagnetic field in the latter is played by the gradient of the loss function of a deep network in the former. Full article
(This article belongs to the Special Issue The Information Bottleneck in Deep Learning)
Open Access Article
On Heat Transfer Performance of Cooling Systems Using Nanofluid for Electric Motor Applications
Entropy 2020, 22(1), 99; https://doi.org/10.3390/e22010099 - 14 Jan 2020
Viewed by 577
Abstract
This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling system of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. To solve the governing equations, computational fluid dynamics and 3D fluid motion analysis are used. The base fluid is water with a laminar flow. The fluid Reynolds number and the turn-number of the spiral channels are the evaluation parameters. The effect of the nanoparticle volume fraction in the base fluid on the heat transfer performance of the cooling system is studied. Increasing the volume fraction of nanoparticles improves the heat transfer performance of the cooling system. On the other hand, a high volume fraction of the nanofluid increases the pressure drop of the coolant and the required pumping power. This paper aims at finding a trade-off between these effective parameters by studying both the fluid flow and the heat transfer characteristics of the nanofluid. Full article
Open Access Article
The Convex Information Bottleneck Lagrangian
Entropy 2020, 22(1), 98; https://doi.org/10.3390/e22010098 - 14 Jan 2020
Cited by 1 | Viewed by 911
Abstract
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T;β) = I(T;Y) − βI(X;T)) for many values of β ∈ [0,1]. Then, the curve of maximal I(T;Y) for a given I(X;T) is drawn and a representation with the desired predictability and compression is selected. It is known that when Y is a deterministic function of X, the IB curve cannot be explored, and another Lagrangian has been proposed to tackle this problem: the squared IB Lagrangian, L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
Open Access Review
A Review of the Application of Information Theory to Clinical Diagnostic Testing
Entropy 2020, 22(1), 97; https://doi.org/10.3390/e22010097 - 14 Jan 2020
Viewed by 503
Abstract
The fundamental information theory functions of entropy, relative entropy, and mutual information are directly applicable to clinical diagnostic testing. This is a consequence of the fact that an individual’s disease state and diagnostic test result are random variables. In this paper, we review the application of information theory to the quantification of diagnostic uncertainty, diagnostic information, and diagnostic test performance. An advantage of information theory functions over more established test performance measures is that they can be used when multiple disease states are under consideration as well as when the diagnostic test can yield multiple or continuous results. Since more than one diagnostic test is often required to help determine a patient’s disease state, we also discuss the application of the theory to situations in which more than one diagnostic test is used. The total diagnostic information provided by two or more tests can be partitioned into meaningful components. Full article
(This article belongs to the Special Issue Applications of Information Theory to Epidemiology)
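A small worked example of the quantities discussed above: pre-test uncertainty (entropy of the disease state) and the diagnostic information of a test (mutual information between disease state and test result), computed from a hypothetical 2 × 2 joint distribution; the prevalence, sensitivity, and specificity below are made up and not taken from the review.

```python
import numpy as np

def diagnostic_information(joint):
    """Entropy of the disease state and mutual information between disease
    state and test result, both in bits, from a joint probability table.

    joint[i, j] = P(disease state i, test result j)
    """
    joint = np.asarray(joint, dtype=float)
    p_disease = joint.sum(axis=1)
    p_test = joint.sum(axis=0)

    def H(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    mi = H(p_disease) + H(p_test) - H(joint.ravel())
    return H(p_disease), mi

# Hypothetical test: 10% prevalence, 90% sensitivity, 95% specificity.
prev, sens, spec = 0.10, 0.90, 0.95
joint = np.array([
    [prev * sens,             prev * (1 - sens)],        # diseased: positive, negative
    [(1 - prev) * (1 - spec), (1 - prev) * spec],        # healthy : positive, negative
])
H_pre, info = diagnostic_information(joint)
print(f"pre-test uncertainty: {H_pre:.3f} bits, information from the test: {info:.3f} bits")
```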
Open Access Article
Probabilistic Ensemble of Deep Information Networks
Entropy 2020, 22(1), 100; https://doi.org/10.3390/e22010100 - 14 Jan 2020
Viewed by 466
Abstract
We describe a classifier made of an ensemble of decision trees, designed using information theory concepts. In contrast to algorithms C4.5 or ID3, the tree is built from the leaves instead of the root. Each tree is made of nodes trained independently of the others, to minimize a local cost function (information bottleneck). The trained tree outputs the estimated probabilities of the classes given the input datum, and the outputs of many trees are combined to decide the class. We show that the system is able to provide results comparable to those of the tree classifier in terms of accuracy, while it shows many advantages in terms of modularity, reduced complexity, and memory requirements. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
Open Access Article
Conditional Adversarial Domain Adaptation Neural Network for Motor Imagery EEG Decoding
Entropy 2020, 22(1), 96; https://doi.org/10.3390/e22010096 - 13 Jan 2020
Viewed by 470
Abstract
Decoding motor imagery (MI) electroencephalogram (EEG) signals for brain-computer interfaces (BCIs) is a challenging task because of the severe non-stationarity of perceptual decision processes. Recently, deep learning techniques have had great success in EEG decoding because of their prominent ability to learn features from raw EEG signals automatically. However, deep learning methods face the challenge that labeled EEG signals are in short supply, and EEGs sampled from other subjects cannot be used directly to train a convolutional neural network (ConvNet) for a target subject. To solve this problem, in this paper, we present a novel conditional domain adaptation neural network (CDAN) framework for MI EEG signal decoding. Specifically, in the CDAN, a densely connected ConvNet is first applied to obtain high-level discriminative features from raw EEG time series. Then, a novel conditional domain discriminator is introduced to work adversarially with the label classifier to learn commonly shared intra-subject EEG features. As a result, the CDAN model trained with sufficient EEG signals from other subjects can be used to classify the signals from the target subject efficiently. Competitive experimental results on a public EEG dataset (the High Gamma Dataset) against state-of-the-art methods demonstrate the efficacy of the proposed framework in recognizing MI EEG signals, indicating its effectiveness in automatic perceptual decision decoding. Full article
(This article belongs to the Special Issue Entropy on Biosignals and Intelligent Systems II)
Open Access Article
A Blockchain-Driven Supply Chain Finance Application for Auto Retail Industry
Entropy 2020, 22(1), 95; https://doi.org/10.3390/e22010095 - 13 Jan 2020
Viewed by 542
Abstract
In this paper, a Blockchain-driven platform for supply chain finance, BCautoSCF (Zhi-lian-che-rong in Chinese), is introduced. It has been successfully established as a reliable and efficient financing platform for the auto retail industry. Owing to the Blockchain's built-in trust mechanism, participants in the supply chain (SC) network work extensively and transparently to run a reliable, convenient, and traceable business. Compared with traditional supply chain finance (SCF), partial automation of SCF workflows, with fewer human errors and disruptions, was achieved through smart contracts in BCautoSCF. These open and secure features suggest the feasibility of BCautoSCF in SCF. As the first Blockchain-driven SCF application for the auto retail industry in China, our contribution lies in studying the pain points existing in traditional SCF and proposing a novel Blockchain-driven design that reshapes the business logic of SCF into an efficient and reliable financing platform for small and medium enterprises (SMEs) in the auto retail industry, decreasing the cost of financing and speeding up cash flows. Currently, there are over 600 active enterprise users that adopt BCautoSCF to run their financing business. Up to October 2019, BCautoSCF has provided services to 449 online/offline auto retailers, three B2B asset exchange platforms, nine fund providers, and 78 logistics services across 21 provinces in China. A total of 3296 financing transactions have been successfully completed in BCautoSCF, and the amount of financing is ¥566,784,802.18. In the future, we will work towards full automation of the SCF workflow by smart contracts, so that transaction efficiency will be further improved. Full article
(This article belongs to the Special Issue Blockchain: Security, Challenges, and Opportunities)
Open Access Article
Spectrum Sensing Method Based on Information Geometry and Deep Neural Network
Entropy 2020, 22(1), 94; https://doi.org/10.3390/e22010094 - 12 Jan 2020
Viewed by 515
Abstract
Due to the scarcity of radio spectrum resources and the growing demand for them, using spectrum sensing technology to improve the utilization of spectrum resources has become a hot research topic. To improve the utilization of spectrum resources, this paper proposes a spectrum sensing method that combines information geometry and deep learning. Firstly, the covariance matrix of the sensed signal is projected onto the statistical manifold, so that each sensed signal can be regarded as a point on the manifold. Then, the geodesic distance between signals is used as their statistical characteristic. Finally, a deep neural network is used to classify the dataset composed of the geodesic distances. Simulation experiments show that the proposed spectrum sensing method based on a deep neural network and information geometry achieves better performance in terms of sensing precision. Full article
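One common information-geometric choice for the geodesic distance between sample covariance matrices is the affine-invariant Riemannian distance sketched below; the paper's exact manifold and metric may differ, and the simulated antenna data are purely illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def geodesic_distance(A, B):
    """Affine-invariant Riemannian distance between two covariance matrices:
    d(A, B) = || logm(A^{-1/2} B A^{-1/2}) ||_F
    (one common information-geometric choice; the paper's metric may differ)."""
    A_inv_sqrt = np.linalg.inv(sqrtm(A))
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return float(np.linalg.norm(logm(M), "fro"))

def sample_covariance(n_antennas=4, n_samples=500, signal_present=False, snr=0.5, rng=None):
    """Sample covariance of received sensor data: noise only, or noise plus a common signal."""
    if rng is None:
        rng = np.random.default_rng(0)
    data = rng.standard_normal((n_antennas, n_samples))
    if signal_present:
        s = rng.standard_normal(n_samples)          # primary-user signal seen by all antennas
        data += np.sqrt(snr) * s
    return np.cov(data)

rng = np.random.default_rng(42)
C_noise_ref = sample_covariance(rng=rng)                       # reference "noise only" point
C_h0 = sample_covariance(rng=rng)                              # another noise-only observation
C_h1 = sample_covariance(signal_present=True, rng=rng)         # observation containing a signal
print("distance under H0:", geodesic_distance(C_noise_ref, C_h0))
print("distance under H1:", geodesic_distance(C_noise_ref, C_h1))
```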
Open Access Concept Paper
Introduction to Extreme Seeking Entropy
Entropy 2020, 22(1), 93; https://doi.org/10.3390/e22010093 - 12 Jan 2020
Viewed by 568
Abstract
Recently, the concept of evaluating an unusually large learning effort of an adaptive system to detect novelties in the observed data was introduced. The present paper introduces a new measure of the learning effort of an adaptive system. The proposed method also uses adaptable parameters. Instead of a multi-scale enhanced approach, the generalized Pareto distribution is employed to estimate the probability of unusual updates, as well as for detecting novelties. This measure was successfully tested in various scenarios with (i) synthetic data, (ii) real time series datasets, and multiple adaptive filters and learning algorithms. The results of these experiments are presented. Full article
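The core idea, scoring how unusual an adaptive system's learning effort is with a generalized Pareto tail model, can be sketched as follows; the synthetic "effort" series, threshold choice, and injected novelties are assumptions for illustration only and do not reproduce the paper's measure.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)

# Stand-in for the "learning effort" of an adaptive system: magnitudes of
# weight updates, with a few artificially large (novel) updates appended.
effort = np.abs(rng.standard_normal(2000)) * 0.01
effort = np.concatenate([effort, [0.08, 0.09, 0.10]])       # injected novelties

# Fit a generalized Pareto distribution to exceedances over a high threshold.
threshold = np.quantile(effort[:2000], 0.95)
excesses = effort[:2000][effort[:2000] > threshold] - threshold
shape, loc, scale = genpareto.fit(excesses, floc=0)

def novelty_probability(e):
    """Tail probability of observing a learning effort at least this unusual."""
    if e <= threshold:
        return 1.0
    return float(genpareto.sf(e - threshold, shape, loc=0, scale=scale))

for e in [0.02, 0.05, 0.10]:
    print(e, novelty_probability(e))
```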
Open Access Article
On Unitary t-Designs from Relaxed Seeds
Entropy 2020, 22(1), 92; https://doi.org/10.3390/e22010092 - 12 Jan 2020
Cited by 1 | Viewed by 670
Abstract
The capacity to randomly pick a unitary across the whole unitary group is a powerful tool across physics and quantum information. A unitary t-design is designed to tackle this challenge in an efficient way, yet constructions to date rely on heavy constraints. In particular, they are composed of ensembles of unitaries which, for technical reasons, must contain inverses and whose entries are algebraic. In this work, we reduce the requirements for generating an ε-approximate unitary t-design. To do so, we first construct a specific n-qubit random quantum circuit composed of a sequence of randomly chosen 2-qubit gates, chosen from a set of unitaries which is approximately universal on U(4), yet need not contain unitaries and their inverses nor are in general composed of unitaries whose entries are algebraic; dubbed a relaxed seed. We then show that this relaxed seed, when used as a basis for our construction, gives rise to an ε-approximate unitary t-design efficiently, where the depth of our random circuit scales as poly(n, t, log(1/ε)), thereby overcoming the two requirements which limited previous constructions. We suspect the result found here is not optimal and can be improved; particularly because the number of gates in the relaxed seeds introduced here grows with n and t. We conjecture that constant-sized seeds such as those which are usually present in the literature are sufficient. Full article
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)
Open Access Article
(Generalized) Maximum Cumulative Direct, Residual, and Paired Φ Entropy Approach
Entropy 2020, 22(1), 91; https://doi.org/10.3390/e22010091 - 12 Jan 2020
Viewed by 470
Abstract
A distribution that maximizes an entropy can be found by applying two different principles. On the one hand, Jaynes (1957a,b) formulated the maximum entropy principle (MaxEnt) as the search for a distribution maximizing a given entropy under some given constraints. On the other hand, Kapur (1994) and Kesavan and Kapur (1989) introduced the generalized maximum entropy principle (GMaxEnt) as the derivation of an entropy for which a given distribution has the maximum entropy property under some given constraints. In this paper, both principles were considered for cumulative entropies. Such entropies depend either on the distribution function (direct), on the survival function (residual), or on both (paired). We incorporate cumulative direct, residual, and paired entropies in one approach called cumulative Φ entropies. Maximizing this entropy without any constraints produces an extremely U-shaped (i.e., bipolar) distribution. Maximizing the cumulative entropy under the constraints of fixed mean and variance tries to transform a distribution in the direction of a bipolar distribution, as far as the constraints allow. A bipolar distribution represents so-called contradictory information, which is in contrast to minimum or no information. In the literature, to date, only a few maximum entropy distributions for cumulative entropies have been derived. In this paper, we extended the results to well-known flexible distributions (like the generalized logistic distribution) and derived some special distributions (like the skewed logistic, the skewed Tukey λ, and the extended Burr XII distribution). The generalized maximum entropy principle was applied to the generalized Tukey λ distribution and the Fechner family of skewed distributions. Finally, cumulative entropies were estimated under the assumption that the data were drawn from a maximum entropy distribution. This estimator is applied to daily S&P 500 returns and to the time durations between mine explosions. Full article
Open Access Article
How the Probabilistic Structure of Grammatical Context Shapes Speech
Entropy 2020, 22(1), 90; https://doi.org/10.3390/e22010090 - 11 Jan 2020
Viewed by 587
Abstract
Does systematic covariation in the usage patterns of forms shape the sublexical variance observed in conversational speech? We address this question in terms of a recently proposed discriminative theory of human communication, which argues that the distribution of events in communicative contexts should maintain mutual predictability between language users, and which presents evidence that the distributions of words in the empirical contexts in which they are learned and used are geometric, supporting this claim. Here, we extend this analysis to a corpus of conversational English, showing that the distribution of grammatical regularities and the sub-distributions of tokens discriminated by them are also geometric. Further analyses reveal a range of structural differences in the distribution of types in part-of-speech categories that further support the suggestion that linguistic distributions (and codes) are subcategorized by context at multiple levels of abstraction. Finally, a series of analyses of the variation in spoken language reveals that quantifiable differences in the structure of lexical subcategories appear in turn to systematically shape sublexical variation in the speech signal. Full article
(This article belongs to the Special Issue Information Theory and Language)
Open Access Article
Time Series Complexities and Their Relationship to Forecasting Performance
Entropy 2020, 22(1), 89; https://doi.org/10.3390/e22010089 - 10 Jan 2020
Viewed by 845
Abstract
Entropy is a key concept in the characterization of the uncertainty of any given signal, and its extensions, such as spectral entropy and permutation entropy, have been used to measure the complexity of time series. However, these measures depend on the discretization employed to study the states of the system. In this paper, we identify the relationship between entropy-based complexity measures and the forecasting error of four selected methods (Smyl, Theta, ARIMA, and ETS) that participated in the M4 Competition; this relationship allows one to decide, in advance, which algorithm is adequate. Moreover, we present a framework extension based on the Emergence, Self-Organization, and Complexity paradigm. Experiments with both synthetic and M4 Competition time series show that the feature space induced by the complexities visually constrains forecasting method performance to specific regions; where the logarithm of the error metric is poorest, the complexity based on emergence and self-organization is maximal. Full article
(This article belongs to the Special Issue Entropy Application for Forecasting)
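As an example of one of the complexity measures mentioned above, the snippet below computes (normalized) permutation entropy for a smooth signal and for white noise; the parameters are illustrative, and the M4-related forecasting pipeline of the paper is not reproduced here.

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy of a time series (Bandt–Pompe), in bits.

    Each length-`order` window is mapped to the permutation that sorts it;
    the entropy of the permutation frequencies measures series complexity.
    """
    x = np.asarray(x, dtype=float)
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1
    probs = np.array([c for c in patterns.values() if c > 0], dtype=float) / n
    h = float(-(probs * np.log2(probs)).sum())
    return h / math.log2(math.factorial(order)) if normalize else h

rng = np.random.default_rng(0)
t = np.arange(2000)
print("sine :", permutation_entropy(np.sin(t / 20)))              # low complexity
print("noise:", permutation_entropy(rng.standard_normal(2000)))   # close to 1
```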
Open Access Article
Single Real Goal, Magnitude-Based Deceptive Path-Planning
Entropy 2020, 22(1), 88; https://doi.org/10.3390/e22010088 - 10 Jan 2020
Cited by 1 | Viewed by 461
Abstract
Deceptive path-planning is the task of finding a path that minimizes the probability of an observer (or a defender) identifying the observed agent's final goal before the goal has been reached. It is an important approach to solving real-world challenges such as public security, strategic transportation, and logistics. Existing methods either cannot make full use of the information in the entire environment or lack the flexibility to balance the path's deceptivity against the available movement resources. In this work, building on recent developments in probabilistic goal recognition, we formalize a single-real-goal, magnitude-based deceptive path-planning problem and propose a mixed-integer programming based method for deceptive path maximization and generation. The model helps to establish a computable foundation for further imposing different deception concepts or strategies and broadens its applicability to many scenarios. Experimental results showed the effectiveness of our methods in deceptive path-planning compared to the existing one. Full article
Open Access Article
Modeling Bed Shear Stress Distribution in Rectangular Channels Using the Entropic Parameter
Entropy 2020, 22(1), 87; https://doi.org/10.3390/e22010087 - 10 Jan 2020
Viewed by 440
Abstract
The evaluation of bed shear stress distribution is fundamental to predicting the transport of sediments and pollutants in rivers and to designing successful stable open channels. Such distribution cannot be determined easily as it depends on the velocity field, the shape of the cross section, and the bed roughness conditions. In recent years, information theory has been proven to be reliable for estimating shear stress along the wetted perimeter of open channels. The entropy models require the knowledge of the shear stress maximum and mean values to calculate the Lagrange multipliers, which are necessary to the resolution of the shear stress probability distribution function. This paper proposes a new formulation which stems from the maximization of the Tsallis entropy and simplifies the calculation of the Lagrange coefficients in order to estimate the bed shear stress distribution in open-channel flows. This formulation introduces a relationship between the dimensionless mean shear stress and the entropic parameter which is based on the ratio between the observed mean and maximum velocity of an open-channel cross section. The validity of the derived expression was tested on a large set of literature laboratory measurements in rectangular cross sections having different bed and sidewall roughness conditions as well as various water discharges and flow depths. A detailed error analysis showed good agreement with the experimental data, which allowed linking the small-scale dynamic processes to the large-scale kinematic ones. Full article
Open Access Article
Numerical Study of Nanofluid Irreversibilities in a Heat Exchanger Used with an Aqueous Medium
Entropy 2020, 22(1), 86; https://doi.org/10.3390/e22010086 - 10 Jan 2020
Viewed by 486
Abstract
Heat exchangers play an important role in different industrial processes; therefore, it is important to characterize these devices to improve their efficiency and guarantee the efficient use of energy. In this study, we carry out a numerical analysis of the flow dynamics, heat transfer, and entropy generation inside a heat exchanger through which an aqueous medium used for oil extraction flows. Hot water flows on the shell side; nanoparticles have been added to the water in order to improve heat transfer toward the cold aqueous medium flowing on the tube side. The aqueous medium must reach a certain temperature in order to obtain its oil extraction properties. The analysis is performed for different Richardson numbers (Ri = 0.1–10), nanofluid volume fractions (φ = 0.00–0.06), and heat exchanger heights (H = 0.6–1.0). Results are presented in terms of the Nusselt number, total entropy generation, Bejan number, and a performance evaluation criterion. Results showed that heat exchanger performance increases with increasing Ri for Ri > 1 and with decreasing H. Full article