Open Access Article
Sovereign Credit Default Swap and Stock Markets in Central and Eastern European Countries: Are Feedback Effects at Work?
Entropy 2020, 22(3), 338; https://doi.org/10.3390/e22030338 - 16 Mar 2020
Abstract
The purpose of the paper is to investigate the relationship between sovereign Credit Default Swap (CDS) and stock markets in nine emerging economies from Central and Eastern Europe (CEE), using daily data over the period January 2008–April 2018. The analysis deploys a Vector Autoregressive model, focusing on the direction of Granger causality between the credit and stock markets. We find evidence of bidirectional feedback between sovereign CDS and stock markets in CEE countries. The results highlight a transfer of risk from the private to the public sector over the whole period and, in Romania and Slovenia only, from the public to the private sector during the European sovereign debt crisis. Another finding that deserves particular attention is that the linkage between CDS spreads and stock markets is time-varying and subject to regime shifts, depending on global financial conditions such as the sovereign debt crisis. By providing insights into the inter-temporal causality of the co-movements of the CDS and stock markets, the paper has significant practical implications for risk management practices and regulatory policies under the different market conditions of European emerging economies.
(This article belongs to the Section Information Theory, Probability and Statistics)
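For readers who want to reproduce the core step, the sketch below fits a bivariate VAR and tests Granger causality in both directions with statsmodels; the simulated series, lag search, and column names are illustrative assumptions, not the authors' exact specification.

    # Hedged sketch: VAR-based Granger causality between CDS spread changes and
    # stock returns, in the spirit of the paper. Data and names are placeholders.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    n = 500
    stock = rng.normal(size=n)                           # stand-in for stock returns
    cds = 0.4 * np.roll(stock, 1) + rng.normal(size=n)   # CDS changes lag stocks here
    data = pd.DataFrame({"cds": cds, "stock": stock})

    res = VAR(data).fit(maxlags=10, ic="aic")            # lag order chosen by AIC
    print(res.test_causality("cds", ["stock"], kind="f").summary())
    print(res.test_causality("stock", ["cds"], kind="f").summary())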
Open Access Feature Paper Article
Design and Practical Stability of a New Class of Impulsive Fractional-Like Neural Networks
Entropy 2020, 22(3), 337; https://doi.org/10.3390/e22030337 - 15 Mar 2020
Abstract
In this paper, a new class of impulsive neural networks with fractional-like derivatives is defined, and the practical stability properties of the solutions are investigated. The stability analysis exploits a new type of Lyapunov-like function and its derivatives. Furthermore, the obtained results are applied to a bidirectional associative memory (BAM) neural network model with fractional-like derivatives. Some new results for the introduced neural network models with uncertain values of the parameters are also obtained.
(This article belongs to the Special Issue Dynamics in Complex Neural Networks)
Open Access Article
Magnetisation Processes in Geometrically Frustrated Spin Networks with Self-Assembled Cliques
Entropy 2020, 22(3), 336; https://doi.org/10.3390/e22030336 - 14 Mar 2020
Abstract
Functional designs of nanostructured materials seek to exploit the potential of complex morphologies and disorder. In this context, the spin dynamics in disordered antiferromagnetic materials present a significant challenge due to induced geometric frustration. Here we analyse the processes of magnetisation reversal driven by an external field in generalised spin networks with higher-order connectivity and antiferromagnetic defects. Using the model in (Tadić et al., arXiv:1912.02433), we grow nanonetworks with geometrically constrained self-assemblies of simplexes (cliques) of a given size n; with probability p, each simplex possesses a defect edge affecting its binding, leading to a tree-like pattern of defects. The Ising spins are attached to vertices and have ferromagnetic interactions, while antiferromagnetic couplings apply between pairs of spins along each defect edge. Thus, a defect edge induces n - 2 frustrated triangles per n-clique participating in a larger-scale complex. We determine several topological, entropic, and graph-theoretic measures to characterise the structures of these assemblies. Further, we show how the sizes of the simplexes building the aggregates with a given pattern of defects affect the magnetisation curves, the length of the domain walls and the shape of the hysteresis loop. The hysteresis shows a sequence of plateaus of fractional magnetisation and multiscale fluctuations in the passage between them. For fully antiferromagnetic interactions, the loop splits into two parts only in mono-disperse assemblies of cliques consisting of an odd number of vertices n, while remnant magnetisation occurs when n is even and in poly-disperse assemblies of cliques with n ∈ [2, 10]. These results shed light on spin dynamics in complex nanomagnetic assemblies in which geometric frustration arises in the interplay of higher-order connectivity and antiferromagnetic interactions.
(This article belongs to the Special Issue Dynamic Processes on Complex Networks)
Open Access Article
Weighted Mean Squared Deviation Feature Screening for Binary Features
Entropy 2020, 22(3), 335; https://doi.org/10.3390/e22030335 - 14 Mar 2020
Abstract
In this study, we propose a novel model-free feature screening method for ultrahigh-dimensional binary features in binary classification, called weighted mean squared deviation (WMSD). Compared to the chi-square statistic and mutual information, WMSD provides more opportunities for binary features with probabilities near 0.5 to be selected. In addition, the asymptotic properties of the proposed method are theoretically investigated under the assumption log p = o(n). The number of features is practically selected by a Pearson correlation coefficient method according to the property of the power-law distribution. Lastly, an empirical study of Chinese text classification illustrates that the proposed method performs well when the dimension of selected features is relatively small.
(This article belongs to the Special Issue Information Theoretic Feature Selection Methods for Big Data)
Open Access Feature Paper Article
Semantic and Generalized Entropy Loss Functions for Semi-Supervised Deep Learning
Entropy 2020, 22(3), 334; https://doi.org/10.3390/e22030334 - 14 Mar 2020
Abstract
The increasing size of modern datasets combined with the difficulty of obtaining real label information (e.g., class) has made semi-supervised learning a problem of considerable practical importance in modern data analysis. Semi-supervised learning is supervised learning with additional information on the distribution of the examples or, equivalently, an extension of unsupervised learning guided by some constraints. In this article we present a methodology that bridges artificial neural network output vectors and logical constraints. To do this, we present a semantic loss function and a generalized entropy loss function (Rényi entropy) that capture how close the neural network is to satisfying the constraints on its output. Our methods are intended to be generally applicable and compatible with any feedforward neural network; the semantic loss and generalized entropy loss are therefore simply regularization terms that can be directly plugged into an existing loss function. We evaluate our methodology on an artificially simulated dataset and two commonly used benchmark datasets, MNIST and Fashion-MNIST, to assess the relation between the analyzed loss functions and the influence of the various input and tuning parameters on the classification accuracy. The experimental evaluation shows that both losses effectively guide the learner to achieve (near-) state-of-the-art results on semi-supervised multiclass classification.
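As a minimal sketch of the generalized entropy idea, the snippet below implements a Rényi-entropy regularizer on unlabeled softmax outputs in PyTorch; the order alpha, the batch shapes, and how the term is mixed into a supervised loss are assumptions, not the authors' exact recipe.

    # Hedged sketch: Rényi entropy H_a(p) = log(sum_i p_i^a) / (1 - a), used as
    # a regularizer on unlabeled predictions; alpha and weighting are assumed.
    import torch

    def renyi_entropy_loss(logits: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
        p = torch.softmax(logits, dim=1).clamp_min(1e-12)
        h = torch.log(p.pow(alpha).sum(dim=1)) / (1.0 - alpha)
        return h.mean()  # minimizing entropy pushes predictions toward certainty

    logits_unlabeled = torch.randn(32, 10, requires_grad=True)
    loss = renyi_entropy_loss(logits_unlabeled)   # would be added to a supervised term
    loss.backward()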
Open Access Article
Assessing and Predicting the Water Resources Vulnerability under Various Climate-Change Scenarios: A Case Study of Huang-Huai-Hai River Basin, China
Entropy 2020, 22(3), 333; https://doi.org/10.3390/e22030333 - 14 Mar 2020
Abstract
The Huang-Huai-Hai River Basin plays an important strategic role in China’s economic development, but severe water resources problems restrict the development of the three basins. Most existing research focuses on the trends of single hydrological and meteorological indicators; however, there is a lack of research on the cause analysis and scenario prediction of water resources vulnerability (WRV) in the three basins, which is a very important foundation for the management of water resources. First, based on an analysis of the causes of water resources vulnerability, this article sets up an evaluation index system of water resources vulnerability from three aspects: water quantity, water quality and disaster. Then, we use the Improved Blind Deletion Rough Set (IBDRS) method to reduce the dimension of the index system, cutting the original 24 indexes down to 12 evaluation indexes. Third, by comparing the accuracy of random forest (RF) and artificial neural network (ANN) models, we choose the RF model, with its higher fitting accuracy, as the evaluation and prediction model. Finally, we use the 12 evaluation indexes and the RF model to analyze the trend and causes of water resources vulnerability in the three basins during 2000–2015, and further predict the scenarios in 2020 and 2030. The results show that the vulnerability level of water resources in the three basins improved during 2000–2015, and that the three river basins should follow the development of scenario 1 to ensure the safety of water resources. The research proves that the combination of IBDRS and an RF model is a very effective way to evaluate and forecast the vulnerability of water resources in the Huang-Huai-Hai River Basin.
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering II)
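A minimal sketch of the model-comparison step (RF versus ANN by cross-validated R², as the paper describes); the data, target, and hyperparameters below are random placeholders, not the Huang-Huai-Hai indicators.

    # Hedged sketch: compare random forest and ANN regressors by cross-validated
    # R^2, mirroring the paper's model-selection step. Data are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((160, 12))                             # 12 reduced evaluation indexes
    y = X @ rng.random(12) + 0.1 * rng.normal(size=160)   # vulnerability score stand-in

    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    print("RF  R^2:", cross_val_score(rf, X, y, cv=5).mean())
    print("ANN R^2:", cross_val_score(ann, X, y, cv=5).mean())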
Open Access Article
An Entropy-Based Approach to Portfolio Optimization
Entropy 2020, 22(3), 332; https://doi.org/10.3390/e22030332 - 14 Mar 2020
Abstract
This paper presents an improved method of applying entropy as a risk measure in portfolio optimization. A new family of portfolio optimization problems called return-entropy portfolio optimization (REPO) is introduced that simplifies the computation of portfolio entropy using a combinatorial approach. REPO addresses five main practical concerns with mean-variance portfolio optimization (MVPO). Pioneered by Harry Markowitz, MVPO revolutionized the financial industry as the first formal mathematical approach to risk-averse investing. REPO uses a mean-entropy objective function instead of the mean-variance objective function used in MVPO, and simplifies the portfolio entropy calculation by utilizing combinatorial generating functions in the optimization objective function. REPO and MVPO were compared by emulating competing portfolios over historical data, and REPO significantly outperformed MVPO in a strong majority of cases.
(This article belongs to the Section Information Theory, Probability and Statistics)
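A hedged sketch of a mean-entropy objective of this kind: the paper computes portfolio entropy exactly with combinatorial generating functions, whereas the snippet below substitutes a simple histogram estimate, and the return data and risk-aversion parameter lambda are placeholders.

    # Hedged sketch: maximize mean return minus an entropy penalty on portfolio
    # returns. The histogram entropy below is a simplification of the paper's
    # combinatorial computation; data and lambda are illustrative.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import entropy

    rng = np.random.default_rng(0)
    R = rng.normal(0.001, 0.02, size=(1000, 5))    # daily returns of 5 assets

    def neg_objective(w, lam=0.5):
        port = R @ w
        hist, _ = np.histogram(port, bins=20)
        return -(port.mean() - lam * entropy(hist / hist.sum()))

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(neg_objective, np.full(5, 0.2), bounds=[(0, 1)] * 5,
                   constraints=cons, method="SLSQP")
    print("weights:", np.round(res.x, 3))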
Open Access Article
Detrending the Waveforms of Steady-State Vowels
Entropy 2020, 22(3), 331; https://doi.org/10.3390/e22030331 - 13 Mar 2020
Abstract
Steady-state vowels are vowels that are uttered with a momentarily fixed vocal tract configuration and with steady vibration of the vocal folds. In this steady state, the vowel waveform appears as a quasi-periodic string of elementary units called pitch periods. Humans perceive this quasi-periodic regularity as a definite pitch; likewise, so-called pitch-synchronous methods exploit this regularity by using the duration of the pitch periods as a natural time scale for their analysis. In this work, we present a simple pitch-synchronous method using a Bayesian approach for estimating formants. It slightly generalizes the basic approach of modeling the pitch periods as a superposition of decaying sinusoids, one for each vowel formant, by explicitly taking into account the additional low-frequency content in the waveform which arises not from formants but rather from the glottal pulse. We model this low-frequency content in the time domain as a polynomial trend function that is added to the decaying sinusoids. The problem then reduces to a rather familiar one in macroeconomics: estimate the cycles (our decaying sinusoids) independently from the trend (our polynomial trend function); in other words, detrend the waveforms of steady-state vowels. We show how to do this efficiently.
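A minimal sketch of the underlying signal model, a polynomial trend plus decaying sinusoids fitted jointly by linear least squares; the sampling rate, formant frequencies, and bandwidths are fixed placeholder values here, whereas the paper infers the formant parameters within a Bayesian framework.

    # Hedged sketch: fit one pitch period as polynomial trend + decaying
    # sinusoids. Frequencies/decays are fixed for illustration only.
    import numpy as np

    fs = 16000.0
    t = np.arange(160) / fs                       # one ~10 ms pitch period
    formants = [(500.0, 60.0), (1500.0, 90.0)]    # (freq Hz, bandwidth Hz), assumed

    cols = [t**k for k in range(4)]               # cubic trend for glottal pulse
    for f, bw in formants:
        decay = np.exp(-np.pi * bw * t)
        cols += [decay * np.cos(2 * np.pi * f * t), decay * np.sin(2 * np.pi * f * t)]
    X = np.column_stack(cols)

    rng = np.random.default_rng(0)
    y = X @ rng.normal(size=X.shape[1]) + 0.01 * rng.normal(size=t.size)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    trend = X[:, :4] @ coef[:4]                   # detrended waveform: y - trend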
Open Access Article
Permutation Entropy as a Measure of Information Gain/Loss in the Different Symbolic Descriptions of Financial Data
Entropy 2020, 22(3), 330; https://doi.org/10.3390/e22030330 - 13 Mar 2020
Abstract
Financial markets offer a large number of trading opportunities. However, their complexity makes them very difficult for decision-makers to use effectively, and the volatility and noise present in the markets evoke a need to simplify the market picture derived for decision-makers. Symbolic representation fits this need and greatly reduces data complexity; at the same time, however, some information from the market is lost. Our motivation is to answer the question: what is the impact of introducing different data representations on the overall amount of information derived for the decision-maker? We concentrate on the possibility of using entropy as a measure of the information gain/loss for financial data, and as a basic form we assume permutation entropy with later modifications. We investigate different symbolic representations and compare them with the classical data representation in terms of entropy. Real-world data covering a time span of 10 years are used in the experiments. The results and the statistical verification show that extending the symbolic description of the time series does not affect the permutation entropy values.
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications II)
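For reference, a compact sketch of the standard permutation entropy used as the basic form; the paper's later modifications and symbolic market representations are not reproduced here, and the embedding dimension and delay below are the usual free parameters with illustrative values.

    # Hedged sketch: standard (normalized) permutation entropy of a series.
    import numpy as np
    from collections import Counter
    from math import log, factorial

    def permutation_entropy(x, m=3, delay=1, normalize=True):
        x = np.asarray(x)
        patterns = Counter(
            tuple(np.argsort(x[i:i + m * delay:delay]))
            for i in range(len(x) - (m - 1) * delay)
        )
        total = sum(patterns.values())
        h = -sum((c / total) * log(c / total) for c in patterns.values())
        return h / log(factorial(m)) if normalize else h

    rng = np.random.default_rng(0)
    print(permutation_entropy(rng.normal(size=2500)))   # close to 1 for white noise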
Open Access Article
A Two-Stage Mutual Information Based Bayesian Lasso Algorithm for Multi-Locus Genome-Wide Association Studies
Entropy 2020, 22(3), 329; https://doi.org/10.3390/e22030329 - 13 Mar 2020
Abstract
The genome-wide association study (GWAS) has become an essential technology for exploring the genetic mechanisms of complex traits. To reduce the complexity of computation, it is well accepted to remove unrelated single nucleotide polymorphisms (SNPs) before a GWAS, e.g., by using the iterative sure independence screening expectation-maximization Bayesian Lasso (ISIS EM-BLASSO) method. In this work, a modified version of ISIS EM-BLASSO is proposed, which first reduces the number of SNPs by a screening methodology based on Pearson correlation and mutual information, then estimates the effects via EM-Bayesian Lasso (EM-BLASSO), and finally detects the true quantitative trait nucleotides (QTNs) through a likelihood ratio test. We call our method two-stage mutual-information-based Bayesian Lasso (MBLASSO). Under three simulation scenarios, MBLASSO improves the statistical power and retains higher effect estimation accuracy compared with three other algorithms. Moreover, MBLASSO performs best in model fitting, the accuracy of detected associations is the highest, and 21 genes in Arabidopsis thaliana datasets can be detected only by MBLASSO.
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
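A hedged sketch of the first (screening) stage only: SNPs are ranked by absolute Pearson correlation and by mutual information with the trait, and a union of top-ranked sets is passed on. The cutoff k, the simulated genotypes, and the combination rule are illustrative assumptions, not the paper's exact procedure.

    # Hedged sketch of the screening stage; EM-BLASSO and the likelihood ratio
    # test would run downstream on the retained SNPs.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(0)
    snps = rng.integers(0, 3, size=(300, 2000)).astype(float)   # 0/1/2 genotypes
    trait = snps[:, 5] - 0.8 * snps[:, 42] + rng.normal(size=300)

    r = np.abs([np.corrcoef(snps[:, j], trait)[0, 1] for j in range(snps.shape[1])])
    mi = mutual_info_regression(snps, trait, random_state=0)

    k = 50
    keep = np.union1d(np.argsort(r)[-k:], np.argsort(mi)[-k:])
    print("SNPs passed to EM-BLASSO:", keep.size)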
Open Access Article
Multi-Level Image Thresholding Based on Modified Spherical Search Optimizer and Fuzzy Entropy
Entropy 2020, 22(3), 328; https://doi.org/10.3390/e22030328 - 12 Mar 2020
Abstract
Multi-level thresholding is one of the effective segmentation methods that have been applied in many applications. Traditional methods face challenges in determining suitable threshold values; therefore, metaheuristic (MH) methods have been adopted to solve this problem. In general, MH methods are designed by simulating the natural behavior of swarms of birds, animals, and other organisms. The current study proposes an alternative multi-level thresholding method based on a new MH method, a modified spherical search optimizer (SSO). This is done by using the operators of the sine cosine algorithm (SCA) to enhance the exploitation ability of the SSO. Moreover, fuzzy entropy is applied as the main fitness function to evaluate the quality of each solution inside the population of the proposed SSOSCA, since fuzzy entropy has an established record in the literature. Several images from the well-known Berkeley dataset were used to test and evaluate the proposed method. The evaluation outcomes show that SSOSCA performs better than several existing methods according to different image segmentation measures.
(This article belongs to the Special Issue Entropy in Metaheuristics and Bioinspired Algorithms)
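For context, a minimal sketch of the SCA position update that the paper grafts onto the optimizer; the population, bounds, and schedule are placeholders, and in the paper the best solution would be selected by the fuzzy entropy fitness, which is not implemented here.

    # Hedged sketch: sine cosine algorithm (SCA) update around the best-so-far
    # threshold vector; fitness evaluation (fuzzy entropy) is omitted.
    import numpy as np

    def sca_step(pop, best, t, t_max, lo=0.0, hi=255.0):
        r1 = 2.0 * (1.0 - t / t_max)                  # shrinking exploration radius
        r2 = np.random.uniform(0, 2 * np.pi, pop.shape)
        r3 = np.random.uniform(0, 2, pop.shape)
        r4 = np.random.uniform(0, 1, pop.shape)
        move = np.where(r4 < 0.5,
                        r1 * np.sin(r2) * np.abs(r3 * best - pop),
                        r1 * np.cos(r2) * np.abs(r3 * best - pop))
        return np.clip(pop + move, lo, hi)

    pop = np.random.uniform(0, 255, size=(30, 4))     # 30 candidates, 4 thresholds
    best = pop[0].copy()                              # would be chosen by fuzzy entropy
    pop = sca_step(pop, best, t=1, t_max=100)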
Open Access Article
The Effect of Adhesive Additives on Silica Gel Water Sorption Properties
Entropy 2020, 22(3), 327; https://doi.org/10.3390/e22030327 - 12 Mar 2020
Abstract
Adsorption chillers are characterized by low electricity consumption, a lack of moving parts, and high reliability. The disadvantage of these chillers is their large weight due to the low sorption capacity of the adsorbent. Therefore, attention has turned to finding a sorbent with a high water sorption capacity and enhanced thermal conductivity to increase chiller efficiency. The article discusses the impact of selected adhesives used in the production of an adsorption bed in order to improve heat exchange on its surface. Experiments were performed with silica gel and three commercial types of glue on metal plates representing a heat exchanger. The structure of the samples was observed under a microscope to determine the coverage of the adsorbent by glue. To determine the kinetics of free adsorption, the amounts of moisture adsorbed, and the desorption dynamics, the prepared samples of the coated bed on metal plates were moistened and dried in a moisture analyzer. Samples made of silica gel mixed with the adhesive 2-hydroxyethyl cellulose show high adsorption capacity, low adsorption dynamics, and medium desorption dynamics. Samples containing the adhesive poly(vinyl alcohol) adsorb less moisture, but free adsorption and desorption are more dynamic. Samples containing the adhesive hydroxyethyl cellulose show lower moisture capacity, relatively dynamic adsorption, and lower desorption dynamics.
Open Access Article
Improved Parsimonious Topic Modeling Based on the Bayesian Information Criterion
Entropy 2020, 22(3), 326; https://doi.org/10.3390/e22030326 - 12 Mar 2020
Abstract
In a previous work, a parsimonious topic model (PTM) was proposed for text corpora. In that work, unlike latent Dirichlet allocation (LDA), the modeling determined a subset of salient words for each topic, with topic-specific probabilities, and with the rest of the words in the dictionary explained by a universal shared model. Further, in LDA all topics are in principle present in every document; in contrast, PTM gives a sparse topic representation, determining the (small) subset of relevant topics for each document. A customized Bayesian information criterion (BIC) was derived, balancing model complexity and goodness of fit, with the BIC minimized to jointly determine the entire model—the topic-specific words, document-specific topics, all model parameter values, and the total number of topics—in a wholly unsupervised fashion. In the present work, several important modeling and algorithmic (parameter learning) extensions of PTM are proposed. First, we modify the BIC objective function using a lossless coding scheme with low modeling cost for describing words that are non-salient for all topics—such words are essentially identified as wholly noisy/uninformative. This approach increases the PTM’s model sparsity, which also allows model selection of more topics and with lower BIC cost than the original PTM. Second, in the original PTM model learning strategy, word switches were updated sequentially, which is myopic and susceptible to finding poor locally optimal solutions. Here, instead, we jointly optimize all the switches that correspond to the same word (across topics). This approach jointly optimizes many more parameters at each step than the original PTM, which in principle should be less susceptible to finding poor local minima. Results on several document datasets show that our proposed method outperforms the original PTM with respect to multiple performance measures, and gives a sparser topic model representation than the original PTM.
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
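For orientation, the generic Bayesian information criterion that the paper customizes, with k free parameters, n observations, and maximized likelihood; the paper's version adds topic-model-specific coding costs on top of this textbook form:

    \mathrm{BIC} = k \ln n - 2 \ln \hat{L}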
Open Access Feature Paper Article
Effects of Diffusion Coefficients and Permanent Charge on Reversal Potentials in Ionic Channels
Entropy 2020, 22(3), 325; https://doi.org/10.3390/e22030325 - 12 Mar 2020
Abstract
In this work, the dependence of reversal potentials and zero-current fluxes on diffusion coefficients is examined for ionic flows through membrane channels. The study is conducted for the setup of a simple structure defined by the profile of permanent charges with two mobile ion species, one positively charged (cation) and one negatively charged (anion). Numerical observations are obtained from analytical results established using geometric singular perturbation analysis of classical Poisson–Nernst–Planck models. For 1:1 ionic mixtures with arbitrary diffusion constants, Mofidi and Liu (arXiv:1909.01192) conducted a rigorous mathematical analysis and derived an equation for reversal potentials. We summarize and extend these results with numerical observations for biologically relevant situations. Numerical investigations of the profiles of the electrochemical potentials, ion concentrations, and electrical potential across ion channels are also presented for the zero-current case. Moreover, the dependence of currents and fluxes on voltages and permanent charges is investigated. In the opinion of the authors, many results in the paper are not intuitive, and it would be difficult, if not impossible, to reveal all cases without investigations of this type.
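As classical background (not the paper's generalized result), the textbook Nernst reversal potential for a single ion species of valence z is

    V_{\mathrm{rev}} = \frac{RT}{zF} \ln \frac{[C]_{\mathrm{out}}}{[C]_{\mathrm{in}}}

The paper's analysis concerns how such zero-current potentials change with diffusion coefficients and permanent charge in multi-species settings, where this simple one-species formula no longer applies.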
Open Access Article
An Efficient Alert Aggregation Method Based on Conditional Rough Entropy and Knowledge Granularity
Entropy 2020, 22(3), 324; https://doi.org/10.3390/e22030324 - 12 Mar 2020
Abstract
With the emergence of network security issues, various security devices that generate large numbers of logs and alerts are widely used. This paper proposes an alert aggregation scheme based on conditional rough entropy and knowledge granularity to solve the problem of repetitive and redundant alert information in network security devices. Firstly, we use conditional rough entropy and knowledge granularity to determine the attribute weights; this method can determine the important attributes, and their weights, separately for different types of attacks. Based on the results of attribute weighting, the similarity value of two alerts can then be calculated as a weighted sum over the attributes. Subsequently, a sliding time window is used to aggregate the alerts whose similarity value exceeds a threshold, which is set so as to reduce the redundant alerts. Finally, the proposed scheme is applied to the CIC-IDS 2018 dataset and the DARPA 98 dataset. The experimental results show that this method can effectively reduce redundant alerts and improve the efficiency of data processing, thus providing accurate and concise data for the next stage of alert fusion and analysis.
(This article belongs to the Section Information Theory, Probability and Statistics)
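A minimal sketch of the aggregation stage described above; the attribute weights here are fixed placeholders (in the paper they come from conditional rough entropy and knowledge granularity), and the equality-based attribute similarity, threshold, and window length are illustrative assumptions.

    # Hedged sketch: weighted attribute similarity + sliding time window.
    def alert_similarity(a, b, weights):
        return sum(w * (1.0 if a[k] == b[k] else 0.0) for k, w in weights.items())

    def aggregate(alerts, weights, sim_thr=0.7, window=60.0):
        kept = []
        for alert in sorted(alerts, key=lambda x: x["time"]):
            dup = any(alert["time"] - k["time"] <= window and
                      alert_similarity(alert, k, weights) >= sim_thr for k in kept)
            if not dup:
                kept.append(alert)
        return kept

    weights = {"src_ip": 0.4, "dst_ip": 0.3, "sig": 0.3}   # placeholder weights
    alerts = [{"time": 0.0, "src_ip": "a", "dst_ip": "b", "sig": "s1"},
              {"time": 5.0, "src_ip": "a", "dst_ip": "b", "sig": "s1"}]
    print(len(aggregate(alerts, weights)))   # -> 1 (duplicate suppressed)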
Open Access Article
Evaluation of Harmonic Contributions for Multi Harmonic Sources System Based on Mixed Entropy Screening and an Improved Independent Component Analysis Method
Entropy 2020, 22(3), 323; https://doi.org/10.3390/e22030323 - 12 Mar 2020
Abstract
Evaluating the harmonic contribution of each nonlinear customer is important for harmonic mitigation in a power system with diverse and complex harmonic sources. Existing evaluation methods have two shortcomings: (1) their calculation accuracy is easily affected by fluctuations in the background harmonics; and (2) they rely on Global Positioning System (GPS) measurements, which is not economical when widely applied. In this paper, based on the properties of asynchronous measurements, we propose a model for evaluating harmonic contributions without GPS technology. In addition, based on the Gaussianity of the measured harmonic data, a mixed entropy screening mechanism is proposed to assess the degree of fluctuation of the background harmonics for each data segment. Only the segments with relatively stable background harmonics are chosen for calculation, which reduces the impact of the background harmonics to a certain degree. Additionally, complex independent component analysis, a promising method for this field, is improved in this paper: during the calculation process, the sparseness of the mixing matrix is used to reduce the optimization dimension and enhance the evaluation accuracy. The validity and effectiveness of the proposed methods are verified through simulations and field case studies.
Open Access Article
Unification of the Nature’s Complexities via a Matrix Permanent—Critical Phenomena, Fractals, Quantum Computing, ♯P-Complexity
Entropy 2020, 22(3), 322; https://doi.org/10.3390/e22030322 - 12 Mar 2020
Abstract
We reveal the analytic relations between a matrix permanent and the major complexities of nature manifested in critical phenomena, fractal structures and chaos, quantum information processes in many-body physics, number-theoretic complexity in mathematics, and ♯P-complete problems in the theory of computational complexity. They follow from a reduction of the Ising model of critical phenomena to the permanent, and from four integral representations of the permanent based on (i) fractal Weierstrass-like functions, (ii) polynomials of complex variables, (iii) the Laplace integral, and (iv) the MacMahon master theorem.
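As standard background on the central object, a short sketch of Ryser's inclusion-exclusion formula for the permanent (exponential time, reflecting its ♯P-hardness); this is textbook material, not the paper's integral representations.

    # Hedged sketch: Ryser's O(2^n * n^2) formula for the matrix permanent.
    import numpy as np
    from itertools import combinations

    def permanent(A):
        n = A.shape[0]
        total = 0.0
        for size in range(1, n + 1):
            for S in combinations(range(n), size):
                total += (-1.0) ** size * np.prod(A[:, list(S)].sum(axis=1))
        return (-1.0) ** n * total

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(permanent(A))   # 1*4 + 2*3 = 10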
Open Access Editorial
Entropy and Information Inequalities
Entropy 2020, 22(3), 320; https://doi.org/10.3390/e22030320 - 12 Mar 2020
Abstract
Entropy and information inequalities are vitally important in many areas of mathematics and engineering [...]
(This article belongs to the Special Issue Entropy and Information Inequalities)
Open Access Article
Relative Distribution Entropy Loss Function in CNN Image Retrieval
Entropy 2020, 22(3), 321; https://doi.org/10.3390/e22030321 - 11 Mar 2020
Abstract
Convolutional neural networks (CNNs) are the most mainstream solution in the field of image retrieval. Deep metric learning has been introduced into this field, focusing on the construction of pair-based loss functions. However, most pair-based loss functions in metric learning merely take the common vector similarity (such as Euclidean distance) of the final image descriptors into consideration, while neglecting other distributional characteristics of these descriptors. In this work, we propose relative distribution entropy (RDE) to describe the internal distribution attributes of image descriptors. We combine relative distribution entropy with the Euclidean distance to obtain the relative distribution entropy weighted distance (RDE-distance). Moreover, the RDE-distance is fused with the contrastive loss and triplet loss to build relative distribution entropy loss functions. The experimental results demonstrate that our method attains state-of-the-art performance on most image retrieval benchmarks.
(This article belongs to the Special Issue Information Transfer in Multilayer/Deep Architectures)
Open Access Article
Augmentation of Dispersion Entropy for Handling Missing and Outlier Samples in Physiological Signal Monitoring
Entropy 2020, 22(3), 319; https://doi.org/10.3390/e22030319 - 11 Mar 2020
Abstract
Entropy quantification algorithms are becoming a prominent tool for the physiological monitoring of individuals through the effective measurement of irregularity in biological signals. However, to ensure their effective adoption in monitoring applications, the performance of these algorithms needs to be robust when analysing time-series containing missing and outlier samples, which are a common occurrence in physiological monitoring setups such as wearable devices and intensive care units. This paper focuses on augmenting Dispersion Entropy (DisEn) by introducing novel variations of the algorithm for improved performance in such applications. The original algorithm and its variations are tested under different experimental setups that are replicated across heart rate interval, electroencephalogram, and respiratory impedance time-series. Our results indicate that the algorithmic variations of DisEn achieve considerable improvements in performance, while our analysis signifies that, in agreement with previous research, outlier samples can have a major impact on the performance of entropy quantification algorithms. Consequently, the presented variations can aid the implementation of DisEn in physiological monitoring applications by mitigating the disruptive effect of missing and outlier samples.
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications II)
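For reference, a compact sketch of the original dispersion entropy; the paper's variations for handling missing and outlier samples are not reproduced here, and the class count c, embedding dimension m, and delay are the usual free parameters with illustrative values.

    # Hedged sketch: original dispersion entropy (normal-CDF mapping to c
    # classes, m-length dispersion patterns, normalized Shannon entropy).
    import numpy as np
    from collections import Counter
    from scipy.stats import norm

    def dispersion_entropy(x, c=6, m=3, delay=1):
        x = np.asarray(x, dtype=float)
        y = norm.cdf(x, loc=x.mean(), scale=x.std())           # map to (0, 1)
        z = np.clip(np.round(c * y + 0.5), 1, c).astype(int)   # c classes
        patterns = Counter(
            tuple(z[i:i + m * delay:delay])
            for i in range(len(z) - (m - 1) * delay)
        )
        p = np.array(list(patterns.values()), dtype=float)
        p /= p.sum()
        return -(p * np.log(p)).sum() / np.log(float(c) ** m)  # normalized

    rng = np.random.default_rng(0)
    print(dispersion_entropy(rng.normal(size=3000)))   # near 1 for white noise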
Open Access Article
Entropy Based Pythagorean Probabilistic Hesitant Fuzzy Decision Making Technique and Its Application for Fog-Haze Factor Assessment Problem
Entropy 2020, 22(3), 318; https://doi.org/10.3390/e22030318 - 11 Mar 2020
Abstract
The Pythagorean probabilistic hesitant fuzzy set (PyPHFS) is an effective, generalized and powerful tool for expressing fuzzy information, as it can cover more complex and more hesitant fuzzy evaluation information. Therefore, building on the advantages of PyPHFSs, this paper presents a new extended fuzzy TOPSIS method for dealing with uncertainty in the form of PyPHFSs in real-life problems. The paper is divided into three main parts. Firstly, a novel Pythagorean probabilistic hesitant fuzzy entropy measure is established using a generalized distance measure under PyPHFS information to find the unknown weights of the attributes. The second part presents the algorithm of the TOPSIS technique in the PyPHFS environment, where the weights of the criteria are completely unknown. Finally, in order to verify the efficiency and superiority of the proposed method, the paper applies it to practical examples of selecting the most critical fog-haze influence factor and makes a detailed comparison with other existing methods.
(This article belongs to the Section Information Theory, Probability and Statistics)
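For orientation, a sketch of the crisp TOPSIS backbone that the method extends; the paper runs the analogous steps on PyPHFS information with entropy-derived weights, while the decision matrix and weights below are dummies and all criteria are treated as benefit criteria.

    # Hedged sketch: classical (crisp) TOPSIS ranking.
    import numpy as np

    X = np.array([[7.0, 9.0, 9.0],
                  [8.0, 7.0, 8.0],
                  [9.0, 6.0, 8.0]])          # alternatives (rows) x criteria (cols)
    w = np.array([0.5, 0.3, 0.2])            # would come from the entropy measure

    V = w * X / np.linalg.norm(X, axis=0)    # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    closeness = d_minus / (d_plus + d_minus)
    print("ranking (best first):", np.argsort(-closeness))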
Open Access Article
Entropy in Heart Rate Dynamics Reflects How HRV-Biofeedback Training Improves Neurovisceral Complexity During Stress-Cognition Interactions
Entropy 2020, 22(3), 317; https://doi.org/10.3390/e22030317 - 11 Mar 2020
Abstract
Despite its considerable appeal, the growing appreciation that biosignal complexity reflects system complexity still needs additional support. A dynamically coordinated network of neurovisceral integration has been described that links prefrontal-subcortical inhibitory circuits to vagally-mediated heart rate variability (HRV). Chronic stress is known to alter network interactions by impairing amygdala functional connectivity, and HRV-biofeedback (HRVB) training can counteract stress defects. We hypothesized that an entropy-based approach to beat-to-beat biosignals would be of great value in illustrating how HRVB training restores neurovisceral complexity, which should be reflected in signal complexity. In thirteen moderately stressed participants, we obtained vagal tone markers and psychological indexes (state anxiety, cognitive workload, and the Perceived Stress Scale) before and after five weeks of daily HRVB training, at rest and during stressful cognitive tasking. Refined Composite Multiscale Entropy (RCMSE) was computed over short time scales as a marker of signal complexity. Heightened vagal tone at rest and during stressful tasking illustrates training benefits in the brain-to-heart circuitry. The entropy index reached the highest significance levels in both variance and ROC curve analyses. Restored vagal activity at rest correlated with the gain in entropy. We conclude that HRVB training is efficient in restoring healthy neurovisceral complexity and stress defense, which is reflected in HRV signal complexity. The very mechanisms involved in system complexity remain to be elucidated, despite the abundant literature on the role played by the amygdala in brain interconnections.
Open Access Article
Conditional Rényi Divergences and Horse Betting
Entropy 2020, 22(3), 316; https://doi.org/10.3390/e22030316 - 11 Mar 2020
Abstract
Motivated by a horse betting problem, a new conditional Rényi divergence is introduced. It is compared with the conditional Rényi divergences that appear in the definitions of the dependence measures by Csiszár and Sibson, and the properties of all three are studied with emphasis on their behavior under data processing. In the same way that Csiszár’s and Sibson’s conditional divergences lead to the respective dependence measures, so does the new conditional divergence lead to the Lapidoth–Pfister mutual information. Moreover, the new conditional divergence is also related to the Arimoto–Rényi conditional entropy and to Arimoto’s measure of dependence. In the second part of the paper, the horse betting problem is analyzed where, instead of Kelly’s expected log-wealth criterion, a more general family of power-mean utility functions is considered. The key role in the analysis is played by the Rényi divergence, and in the setting where the gambler has access to side information, the new conditional Rényi divergence is key; the setting with side information also provides another operational meaning to the Lapidoth–Pfister mutual information. Finally, a universal strategy for independent and identically distributed races is presented that—without knowing the winning probabilities or the parameter of the utility function—asymptotically maximizes the gambler’s utility function.
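For reference, the (unconditional) Rényi divergence of order α (α > 0, α ≠ 1) that underlies the three conditional versions discussed:

    D_\alpha(P \Vert Q) = \frac{1}{\alpha - 1} \log \sum_x P(x)^{\alpha} Q(x)^{1-\alpha}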
Open Access Article
Multivariate and Multiscale Complexity of Long-Range Correlated Cardiovascular and Respiratory Variability Series
Entropy 2020, 22(3), 315; https://doi.org/10.3390/e22030315 - 11 Mar 2020
Abstract
Assessing the dynamical complexity of biological time series represents an important topic with potential applications ranging from the characterization of physiological states and pathological conditions to the calculation of diagnostic parameters. In particular, cardiovascular time series exhibit a variability produced by different physiological control mechanisms coupled with each other, which involve several variables and operate across multiple time scales, resulting in the coexistence of short-term dynamics and long-range correlations. The most widely employed technique to evaluate the dynamical complexity of a time series at different time scales, the so-called multiscale entropy (MSE), has been proven unsuitable in the presence of short multivariate time series to be analyzed at long time scales. This work aims at overcoming these issues via the introduction of a new method for the assessment of the multiscale complexity of multivariate time series. The method first exploits vector autoregressive fractionally integrated (VARFI) models to yield a linear parametric representation of vector stochastic processes characterized by short- and long-range correlations. Then, it provides an analytical formulation, within the theory of state-space models, of how the VARFI parameters change when the processes are observed across multiple time scales, which is finally exploited to derive MSE measures relevant to the overall multivariate process or to one constituent scalar process. The proposed approach is applied to cardiovascular and respiratory time series to assess the complexity of heart period, systolic arterial pressure and respiration variability measured in a group of healthy subjects during conditions of postural and mental stress. Our results show that the proposed methodology can detect physiologically meaningful multiscale patterns of complexity reported previously, but can also capture significant variations in complexity that cannot be observed using standard methods that do not take long-range correlations into account.
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
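One concrete ingredient of VARFI models is the fractional differencing operator (1 - L)^d that generates long-range correlations; below is a minimal sketch of its expansion weights via the standard recursion (the order d is a placeholder value).

    # Hedged sketch: weights of (1 - L)^d via c_0 = 1,
    # c_k = c_{k-1} * (k - 1 - d) / k; slow hyperbolic decay -> long memory.
    import numpy as np

    def frac_diff_weights(d, n):
        c = np.empty(n)
        c[0] = 1.0
        for k in range(1, n):
            c[k] = c[k - 1] * (k - 1 - d) / k
        return c

    print(frac_diff_weights(0.3, 5))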
Open Access Article
Averaging Is Probably Not the Optimum Way of Aggregating Parameters in Federated Learning
Entropy 2020, 22(3), 314; https://doi.org/10.3390/e22030314 - 11 Mar 2020
Abstract
Federated learning is a decentralized topology of deep learning that trains a shared model through data distributed among clients (such as mobile phones and wearable devices), ensuring data privacy by avoiding the exposure of raw data in a data center (server). After each client computes new model parameters by stochastic gradient descent (SGD) on its own local data, these locally computed parameters are aggregated to generate an updated global model. Many current state-of-the-art studies aggregate the different client-computed parameters by averaging them, but none theoretically explains why averaging parameters is a good approach. In this paper, we treat each client-computed parameter as a random vector, because of the stochastic properties of SGD, and estimate the mutual information between two client-computed parameters at different training phases using two methods in two learning tasks. The results confirm the correlation between different clients and show an increasing trend of mutual information with the training iterations. However, when we further compute the distance between client-computed parameters, we find that the parameters become more correlated while not getting closer. This phenomenon suggests that averaging parameters may not be the optimum way of aggregating trained parameters.
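The aggregation rule under scrutiny is plain data-size-weighted parameter averaging, as in FedAvg; a minimal numpy sketch with placeholder clients follows.

    # Hedged sketch: FedAvg-style weighted averaging of client-computed
    # parameter vectors; values are placeholders.
    import numpy as np

    client_params = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])]
    client_sizes = np.array([100, 300, 50])           # local dataset sizes

    weights = client_sizes / client_sizes.sum()
    global_params = sum(w * p for w, p in zip(weights, client_params))
    print(global_params)   # the averaging step the paper argues may be suboptimal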
Open Access Article
Applications of Information Theory Methods for Evolutionary Optimization of Chemical Computers
Entropy 2020, 22(3), 313; https://doi.org/10.3390/e22030313 - 10 Mar 2020
Abstract
It is commonly believed that information processing in living organisms is based on chemical reactions. However, human achievements in constructing chemical information processing devices demonstrate that it is difficult to design such devices using a bottom-up strategy. Here I discuss the alternative top-down design of a network of chemical oscillators that performs a selected computing task. As an example, I consider a simple network of interacting chemical oscillators that operates as a comparator of two real numbers. The information on which of the two numbers is larger is coded in the number of excitations observed on the oscillators forming the network. The parameters of the network are optimized to perform this function with maximum accuracy. I discuss how information theory methods can be applied to obtain the optimum computing structure.
Open Access Article
A Maximum Entropy Method for the Prediction of Size Distributions
Entropy 2020, 22(3), 312; https://doi.org/10.3390/e22030312 - 10 Mar 2020
Abstract
We propose a method to derive the stationary size distributions of a system, and the degree distributions of networks, using maximisation of the Gibbs–Shannon entropy. We apply this to a preferential attachment-type algorithm for systems of constant size, which features the exit of balls and urns (or nodes and edges in the network case). Knowing the mean size (degree) and the turnover rate, the power-law exponent and exponential cutoff can be derived. Our results are confirmed by simulations and by computation of exact probabilities. We also apply this entropy method to reproduce existing results such as the Maxwell–Boltzmann distribution for the velocity of gas particles, the Barabási–Albert model and multiplicative noise systems.
(This article belongs to the Special Issue Entropy, Nonlinear Dynamics and Complexity)
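A hedged sketch of the entropy maximization itself: constraining the mean size E[k] and E[ln k] (an illustrative choice consistent with the power-law-plus-exponential-cutoff form) and solving for the Lagrange multipliers through the convex dual; the support and constraint values are placeholders.

    # Hedged sketch: MaxEnt with constraints on E[k] and E[ln k] gives
    # p_k ∝ k^(-g) * exp(-k/kappa); multipliers found by minimizing the dual.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    k = np.arange(1, 201)
    target = np.array([5.0, 1.2])      # assumed constraints: E[k], E[ln k]
    feats = np.stack([k, np.log(k)])   # feature functions, shape (2, 200)

    def dual(lam):                     # log-partition + lam . target (convex)
        return logsumexp(-lam @ feats) + lam @ target

    lam = minimize(dual, np.array([0.1, 0.5])).x
    p = np.exp(-lam @ feats - logsumexp(-lam @ feats))
    print("power-law exponent:", lam[1], "cutoff scale:", 1.0 / lam[0])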