
Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

16 pages, 7470 KiB  
Article
A New Deep Learning Based Multi-Spectral Image Fusion Method
by Jingchun Piao, Yunfan Chen and Hyunchul Shin
Entropy 2019, 21(6), 570; https://doi.org/10.3390/e21060570 - 5 Jun 2019
Cited by 54 | Viewed by 8270
Abstract
In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method using a deep neural network. In our method, a Siamese convolutional neural network (CNN) is applied to automatically generate a weight map that represents the saliency of each pixel for a pair of source images. The CNN automatically encodes an image into a feature domain for classification. With the proposed method, the two key problems in image fusion, activity-level measurement and fusion-rule design, can be solved in one step. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more perceptually natural to the human visual system. In addition, the qualitative visual effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that our method achieves competitive results in terms of both quantitative assessment and visual quality.
(This article belongs to the Section Information Theory, Probability and Statistics)

11 pages, 2231 KiB  
Article
Non-Thermal Quantum Engine in Transmon Qubits
by Cleverson Cherubim, Frederico Brito and Sebastian Deffner
Entropy 2019, 21(6), 545; https://doi.org/10.3390/e21060545 - 29 May 2019
Cited by 28 | Viewed by 6616
Abstract
The design and implementation of quantum technologies necessitates an understanding of thermodynamic processes in the quantum domain. In stark contrast to macroscopic thermodynamics, processes at the quantum scale generically operate far from equilibrium and are governed by fluctuations. Thus, experimental insight and empirical findings are indispensable in developing a comprehensive framework. To this end, we theoretically propose an experimentally realistic quantum engine that uses transmon qubits as its working substance. We solve the dynamics analytically and calculate the engine's efficiency.

37 pages, 801 KiB  
Review
Approximate Entropy and Sample Entropy: A Comprehensive Tutorial
by Alfonso Delgado-Bonal and Alexander Marshak
Entropy 2019, 21(6), 541; https://doi.org/10.3390/e21060541 - 28 May 2019
Cited by 490 | Viewed by 37911
Abstract
Approximate Entropy and Sample Entropy are two algorithms for determining the regularity of a series of data based on the existence of patterns. Despite their similarities, the theoretical ideas behind the two techniques are different but usually ignored. This paper aims to be a complete guide to the theory and application of the algorithms, intended to explain their characteristics in detail to researchers from different fields. While initially developed for physiological applications, both algorithms have since been used in other fields such as medicine, telecommunications, economics, and Earth sciences. In this paper, we explain the theoretical aspects involving Information Theory and Chaos Theory, provide simple source code for their computation, and illustrate the techniques with a step-by-step example of how to use the algorithms properly. This paper is not intended to be an exhaustive review of all previous applications of the algorithms, but rather a comprehensive tutorial in which no previous knowledge is required to understand the methodology.
(This article belongs to the Special Issue Approximate, Sample and Multiscale Entropy)
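As a companion to the tutorial's description, Sample Entropy can be sketched in a few lines. This is a minimal, unoptimized illustration using the common parameter choices m = 2 and r = 0.2·SD, not the authors' published code; boundary conventions (which templates of length m are counted) vary slightly across implementations.

```python
import math

def sample_entropy(series, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that
    sequences close (Chebyshev distance <= r) for m points remain
    close at m + 1 points, excluding self-matches."""
    n = len(series)
    if r is None:
        mean = sum(series) / n
        r = 0.2 * (sum((x - mean) ** 2 for x in series) / n) ** 0.5

    def matches(length):
        # count template pairs whose max coordinate difference is <= r
        t = [series[i:i + length] for i in range(n - length + 1)]
        return sum(
            max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
            for i in range(len(t)) for j in range(i + 1, len(t))
        )

    b, a = matches(m), matches(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)
```

A perfectly periodic series yields a value near zero, reflecting its high regularity.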

15 pages, 2696 KiB  
Article
EEG Characterization of the Alzheimer’s Disease Continuum by Means of Multiscale Entropies
by Aarón Maturana-Candelas, Carlos Gómez, Jesús Poza, Nadia Pinto and Roberto Hornero
Entropy 2019, 21(6), 544; https://doi.org/10.3390/e21060544 - 28 May 2019
Cited by 51 | Viewed by 6395
Abstract
Alzheimer’s disease (AD) is a neurodegenerative disorder with high prevalence, known for its highly disabling symptoms. The aim of this study was to characterize the alterations in the irregularity and the complexity of brain activity along the AD continuum. Both irregularity and complexity can be studied by applying entropy-based measures throughout multiple temporal scales. In this regard, multiscale sample entropy (MSE) and refined multiscale spectral entropy (rMSSE) were calculated from electroencephalographic (EEG) data. Five minutes of resting-state EEG activity were recorded from 51 healthy controls, 51 mild cognitive impairment (MCI) subjects, 51 mild AD patients (ADMIL), 50 moderate AD patients (ADMOD), and 50 severe AD patients (ADSEV). Our results show statistically significant differences (p-values < 0.05, FDR-corrected Kruskal–Wallis test) between the five groups at each temporal scale. Additionally, average slope values and areas under the MSE and rMSSE curves revealed significant changes in complexity, mainly for the controls vs. MCI, MCI vs. ADMIL, and ADMOD vs. ADSEV comparisons (p-values < 0.05, FDR-corrected Mann–Whitney U-test). These findings indicate that MSE and rMSSE reflect the neuronal disturbances associated with the development of dementia, and may contribute to the development of new tools to track AD progression.
(This article belongs to the Special Issue Entropy Applications in EEG/MEG)
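Multiscale entropy measures such as MSE first coarse-grain the signal at each temporal scale and then compute the entropy of each coarse-grained series. A minimal sketch of that coarse-graining step (a generic textbook operation, not the study's analysis code):

```python
def coarse_grain(series, scale):
    """Coarse-grained series used by multiscale entropy: the mean of
    consecutive, non-overlapping windows of length `scale`."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale
            for i in range(n)]
```

At scale 1 the series is unchanged; higher scales progressively smooth fast fluctuations before the entropy is computed.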

18 pages, 771 KiB  
Article
Is Independence Necessary for a Discontinuous Phase Transition within the q-Voter Model?
by Angelika Abramiuk, Jakub Pawłowski and Katarzyna Sznajd-Weron
Entropy 2019, 21(5), 521; https://doi.org/10.3390/e21050521 - 23 May 2019
Cited by 18 | Viewed by 4847
Abstract
We ask about the possibility of a discontinuous phase transition and the related social hysteresis within the q-voter model with anticonformity. Previously, it was claimed that within the q-voter model social hysteresis can emerge only because of independent behavior, and that for the model with anticonformity only continuous phase transitions are possible. However, this claim was derived from a model in which the size of the influence group needed for conformity was the same as the size of the group needed for anticonformity. Here, we abandon this assumption of equality between the two types of social response and introduce a generalized model in which the size of the influence group needed for conformity, q_c, and the size of the influence group needed for anticonformity, q_a, are independent variables, and in general q_c ≠ q_a. We investigate the model on the complete graph, similarly to what was done for the original q-voter model with anticonformity, and we show that such a generalized model displays both types of phase transitions depending on the parameters q_c and q_a.
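The model's ingredients can be sketched as a toy Monte Carlo simulation on a complete graph (an illustrative sketch only, not the authors' code: the unanimity rule, the anticonformity probability p_anti, and the parameter values are simplifying assumptions):

```python
import random

def qvoter_step(spins, q_c, q_a, p_anti, rng):
    """One update of a toy q-voter model with anticonformity on a
    complete graph. With probability p_anti the chosen agent consults
    a unanimous panel of size q_a and adopts the opposite opinion
    (anticonformity); otherwise a unanimous panel of size q_c makes
    the agent conform. Non-unanimous panels have no effect."""
    i = rng.randrange(len(spins))
    others = [j for j in range(len(spins)) if j != i]
    anti = rng.random() < p_anti
    panel = rng.sample(others, q_a if anti else q_c)
    opinions = {spins[j] for j in panel}
    if len(opinions) == 1:          # panel is unanimous
        s = opinions.pop()
        spins[i] = -s if anti else s

rng = random.Random(1)
spins = [rng.choice((-1, 1)) for _ in range(200)]
for _ in range(20000):
    qvoter_step(spins, q_c=4, q_a=2, p_anti=0.2, rng=rng)
magnetization = sum(spins) / len(spins)
```

Sweeping p_anti for fixed (q_c, q_a) and tracking the stationary magnetization is how continuous vs. discontinuous transitions would be probed numerically.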

16 pages, 868 KiB  
Review
Physical Layer Key Generation in 5G and Beyond Wireless Communications: Challenges and Opportunities
by Guyue Li, Chen Sun, Junqing Zhang, Eduard Jorswieck, Bin Xiao and Aiqun Hu
Entropy 2019, 21(5), 497; https://doi.org/10.3390/e21050497 - 15 May 2019
Cited by 90 | Viewed by 12261
Abstract
The fifth generation (5G) and beyond wireless communications will transform many exciting applications and trigger massive data connections with private, confidential, and sensitive information. The security of wireless communications is conventionally established by cryptographic schemes and protocols, in which secret key distribution is one of the essential primitives. However, traditional cryptography-based key distribution protocols might be challenged in 5G and beyond communications because of special features such as device-to-device and heterogeneous communications and ultra-low-latency requirements. Channel reciprocity-based key generation (CRKG) is an emerging physical-layer technique to establish secret keys between devices. This article reviews CRKG when the 5G and beyond networks employ three candidate technologies: duplex modes, massive multiple-input multiple-output (MIMO), and mmWave communications. We identify the opportunities and challenges for CRKG and provide corresponding solutions. To further demonstrate the feasibility of CRKG in practical communication systems, we overview existing prototypes with different IoT protocols and examine their performance in real-world environments. This article shows the feasibility and promising performance of CRKG, with the potential to be commercialized.
(This article belongs to the Special Issue Information-Theoretic Security II)
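The core CRKG idea — two parties quantize reciprocal but independently noisy channel measurements into matching key bits — can be illustrated with a toy one-bit quantizer. The Gaussian channel model, noise level, and median threshold below are illustrative assumptions, not a protocol from the article:

```python
import random

def quantize(samples):
    """One-bit quantizer: bit = 1 iff a sample exceeds the party's
    own median (each party thresholds independently)."""
    med = sorted(samples)[len(samples) // 2]
    return [1 if s > med else 0 for s in samples]

rng = random.Random(7)
gains = [rng.gauss(0.0, 1.0) for _ in range(128)]   # reciprocal channel
alice = [g + rng.gauss(0.0, 0.01) for g in gains]   # Alice's noisy probe
bob = [g + rng.gauss(0.0, 0.01) for g in gains]     # Bob's noisy probe
key_a, key_b = quantize(alice), quantize(bob)
agreement = sum(a == b for a, b in zip(key_a, key_b)) / len(key_a)
```

Real schemes follow quantization with information reconciliation and privacy amplification to remove the residual disagreements and leaked information.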

11 pages, 1052 KiB  
Article
Quantum Probes for Ohmic Environments at Thermal Equilibrium
by Fahimeh Salari Sehdaran, Matteo Bina, Claudia Benedetti and Matteo G. A. Paris
Entropy 2019, 21(5), 486; https://doi.org/10.3390/e21050486 - 12 May 2019
Cited by 25 | Viewed by 3975
Abstract
It is often the case that the environment of a quantum system may be described as a bath of oscillators with an ohmic density of states. In turn, the precise characterization of these classes of environments is a crucial tool to engineer decoherence or to tailor quantum information protocols. Recently, the use of quantum probes to characterize ohmic environments at zero temperature has been discussed, showing that a single qubit provides precise estimation of the cutoff frequency. On the other hand, thermal noise often spoils quantum probing schemes, and for this reason we here extend the analysis to a complex system at thermal equilibrium. In particular, we discuss the interplay between thermal fluctuations and time evolution in determining the precision attainable by quantum probes. Our results show that the presence of thermal fluctuations degrades the precision for low values of the cutoff frequency, i.e., values of the order of the temperature, ω_c ∼ T (in natural units). For larger values of ω_c, decoherence is mostly due to the structure of the environment, rather than to thermal fluctuations, such that quantum probing by a single qubit is still an effective estimation procedure.
(This article belongs to the Special Issue Open Quantum Systems (OQS) for Quantum Technologies)

18 pages, 10608 KiB  
Article
Solidification Microstructures of the Ingots Obtained by Arc Melting and Cold Crucible Levitation Melting in TiNbTaZr Medium-Entropy Alloy and TiNbTaZrX (X = V, Mo, W) High-Entropy Alloys
by Takeshi Nagase, Kiyoshi Mizuuchi and Takayoshi Nakano
Entropy 2019, 21(5), 483; https://doi.org/10.3390/e21050483 - 10 May 2019
Cited by 78 | Viewed by 9445
Abstract
The solidification microstructures of the TiNbTaZr medium-entropy alloy (MEA) and TiNbTaZrX (X = V, Mo, and W) high-entropy alloys (HEAs), including the TiNbTaZrMo bio-HEA, were investigated. Equiaxed dendrite structures were observed in the ingots prepared by arc melting, regardless of the position within the ingots and the alloy system. In addition, no significant difference in the solidification microstructure was observed in the TiNbTaZrMo bio-HEA between the arc-melted (AM) ingots and the cold crucible levitation melted (CCLM) ingots. A cold shut was observed in the AM ingots, but not in the CCLM ingots. The interdendrite regions tended to be enriched in Ti and Zr in the TiNbTaZr MEA and the TiNbTaZrX (X = V, Mo, and W) HEAs. The distribution coefficients during solidification, estimated by thermodynamic calculations, could explain the distribution of the constituent elements in the dendrite and interdendrite regions. The thermodynamic calculations indicated that an increase in the concentration of low-melting-temperature V (2183 K) leads to a monotonic decrease in the liquidus temperature (T_L), and that increases in the concentrations of high-melting-temperature Mo (2896 K) and W (3695 K) lead to a monotonic increase in T_L in the TiNbTaZrX_x (X = V, Mo, and W; x = 0–2) HEAs.
(This article belongs to the Special Issue High-Entropy Materials)

17 pages, 1899 KiB  
Article
3D CNN-Based Speech Emotion Recognition Using K-Means Clustering and Spectrograms
by Noushin Hajarolasvadi and Hasan Demirel
Entropy 2019, 21(5), 479; https://doi.org/10.3390/e21050479 - 8 May 2019
Cited by 137 | Viewed by 12217
Abstract
Detecting human intentions and emotions helps improve human–robot interactions. Emotion recognition has been a challenging research direction in the past decade. This paper proposes an emotion recognition system based on the analysis of speech signals. First, we split each speech signal into overlapping frames of the same length. Next, we extract an 88-dimensional vector of audio features, including Mel-Frequency Cepstral Coefficients (MFCC), pitch, and intensity, for each frame. In parallel, the spectrogram of each frame is generated. In the final preprocessing step, by applying k-means clustering to the extracted features of all frames of each audio signal, we select the k most discriminant frames, namely keyframes, to summarize the speech signal. Then, the sequence of spectrograms corresponding to the keyframes is encapsulated in a 3D tensor. These tensors are used to train and test a 3D Convolutional Neural Network using a 10-fold cross-validation approach. The proposed 3D CNN has two convolutional layers and one fully connected layer. Experiments are conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE), Ryerson Multimedia Laboratory (RML), and eNTERFACE’05 databases. The results are superior to the state-of-the-art methods reported in the literature.
(This article belongs to the Special Issue Statistical Machine Learning for Human Behaviour Analysis)
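The keyframe-selection step — cluster per-frame feature vectors with k-means, then keep the frame nearest each centroid — can be sketched with a small pure-Python Lloyd's algorithm. This is an illustrative reduction (2-D toy features instead of the paper's 88-dimensional vectors, generic k-means rather than the authors' pipeline):

```python
import random

def select_keyframes(features, k, iters=20, seed=0):
    """Cluster per-frame feature vectors with Lloyd's k-means and
    return the index of the frame nearest each centroid (the
    'keyframes' that summarize the signal)."""
    rng = random.Random(seed)
    centroids = [list(f) for f in rng.sample(features, k)]

    def d2(a, b):                       # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for f in features:
            clusters[min(range(k), key=lambda c: d2(f, centroids[c]))].append(f)
        for c, members in enumerate(clusters):
            if members:                 # keep old centroid if a cluster empties
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]

    return sorted({
        min(range(len(features)), key=lambda i: d2(features[i], centroids[c]))
        for c in range(k)
    })

frames = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]]
keys = select_keyframes(frames, 2)      # one representative per cluster
```

In the paper's pipeline, the spectrograms of the selected frames (not the raw feature vectors) are what gets stacked into the 3D tensor.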

27 pages, 432 KiB  
Article
Distributed Hypothesis Testing with Privacy Constraints
by Atefeh Gilani, Selma Belhadj Amor, Sadaf Salehkalaibar and Vincent Y. F. Tan
Entropy 2019, 21(5), 478; https://doi.org/10.3390/e21050478 - 7 May 2019
Cited by 16 | Viewed by 4130
Abstract
We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to make a decision on whether the null or alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example.
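The privacy constraint above bounds the mutual information between the raw and randomized observations. How such a quantity is evaluated from a joint distribution can be sketched in a few lines (a generic textbook computation, not the paper's code):

```python
from collections import defaultdict
from math import log2

def mutual_information(joint):
    """I(X;Z) in bits from a joint pmf given as {(x, z): probability}."""
    px, pz = defaultdict(float), defaultdict(float)
    for (x, z), p in joint.items():     # marginalize
        px[x] += p
        pz[z] += p
    return sum(p * log2(p / (px[x] * pz[z]))
               for (x, z), p in joint.items() if p > 0)
```

A sanitization mechanism satisfies a privacy level ε when I(raw; randomized) ≤ ε; the two extremes are a copy (I equals the source entropy) and an independent randomization (I = 0).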

48 pages, 1647 KiB  
Article
What Caused What? A Quantitative Account of Actual Causation Using Dynamical Causal Networks
by Larissa Albantakis, William Marshall, Erik Hoel and Giulio Tononi
Entropy 2019, 21(5), 459; https://doi.org/10.3390/e21050459 - 2 May 2019
Cited by 44 | Viewed by 14130
Abstract
Actual causation is concerned with the question: “What caused what?” Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system’s causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the “what caused what?” question. Counterfactual accounts of actual causation, based on graphical models paired with system interventions, have demonstrated initial success in addressing specific problem cases, in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements based on system interventions and partitions, which considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation.
(This article belongs to the Special Issue Integrated Information Theory)

13 pages, 3101 KiB  
Article
A Method for Diagnosing Gearboxes of Means of Transport Using Multi-Stage Filtering and Entropy
by Tomasz Figlus
Entropy 2019, 21(5), 441; https://doi.org/10.3390/e21050441 - 27 Apr 2019
Cited by 25 | Viewed by 3490
Abstract
The paper presents a method of processing vibration signals designed to detect damage to the gear wheels of transport gearboxes. This method uses entropy calculation and multi-stage filtering, computed by means of digital filters and the Walsh–Hadamard transform, to process signals. The presented method enables the extraction of the vibration symptoms of gear damage from a complex gearbox vibration signal. The combination of multi-stage filtering and entropy enables the elimination of fast-changing vibration impulses, which interfere with the damage diagnosis process, and makes it possible to obtain a synthetic signal that provides information about the state of the gearing. The paper demonstrates the usefulness of the developed method in the diagnosis of a gearbox in which two types of gearing damage were simulated: tooth chipping and damage to the working surface of the teeth. The research shows that applying the proposed vibration signal processing method enables observation of the qualitative and quantitative changes in the entropy signal after denoising, which are unambiguous symptoms of the diagnosed damage.
(This article belongs to the Section Signal and Data Analysis)
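The Walsh–Hadamard transform used in the multi-stage filtering can be computed with the standard in-place butterfly ("fast WHT"). This is the generic textbook algorithm, not the author's diagnostic pipeline:

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a sequence whose length is a
    power of two. Applying it twice returns len(a) times the input."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly
        h *= 2
    return a
```

Filtering in the Walsh domain amounts to zeroing selected sequency coefficients and transforming back (dividing by the length), analogously to frequency-domain filtering with the FFT.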

33 pages, 2684 KiB  
Article
Bounded Rational Decision-Making from Elementary Computations That Reduce Uncertainty
by Sebastian Gottwald and Daniel A. Braun
Entropy 2019, 21(4), 375; https://doi.org/10.3390/e21040375 - 6 Apr 2019
Cited by 26 | Viewed by 8452
Abstract
In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou–Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures.
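An elementary computation in the sense above, the inverse of a Pigou–Dalton transfer, moves probability from a less likely outcome to a more likely one, so any uncertainty measure consistent with majorization can only decrease. A toy numeric check (using Shannon entropy as an illustrative choice of uncertainty measure):

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(x * log2(x) for x in p if x > 0)

def inverse_pigou_dalton(p, i, j, eps):
    """Transfer probability eps from the less likely outcome j to the
    more likely outcome i, sharpening the distribution (the inverse
    of a Pigou-Dalton transfer)."""
    assert p[i] >= p[j] >= eps > 0
    q = list(p)
    q[i] += eps
    q[j] -= eps
    return q

p = [0.6, 0.4]
q = inverse_pigou_dalton(p, 0, 1, 0.1)   # sharper distribution
```

The resulting vector majorizes the original, which is exactly why order-preserving cost functions are monotonic in the uncertainty reduction.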

20 pages, 1486 KiB  
Article
Increase in Mutual Information During Interaction with the Environment Contributes to Perception
by Daya Shankar Gupta and Andreas Bahmer
Entropy 2019, 21(4), 365; https://doi.org/10.3390/e21040365 - 4 Apr 2019
Cited by 25 | Viewed by 5863
Abstract
Perception and motor interaction with physical surroundings can be analyzed through the changes in the probability laws governing two possible outcomes of neuronal activity, namely the presence or absence of spikes (binary states). Perception and motor interaction with the physical environment are partly accounted for by a reduction in entropy within the probability distributions of the binary states of neurons in distributed neural circuits, given the knowledge about the characteristics of stimuli in the physical surroundings. This reduction in the total entropy of multiple pairs of circuits in networks, by an amount equal to the increase of mutual information, occurs as sensory information is processed successively from lower to higher cortical areas, or between different areas at the same hierarchical level but belonging to different networks. The increase in mutual information is partly accounted for by temporal coupling as well as synaptic connections, as proposed by Bahmer and Gupta (Front. Neurosci. 2018). We propose that robust increases in mutual information, measuring the association between the characteristics of sensory inputs and the connectivity patterns of neural circuits, are partly responsible for perception and successful motor interactions with physical surroundings. The increase in mutual information, given the knowledge about environmental sensory stimuli and the type of motor response produced, is responsible for the coupling between action and perception. In addition, the processing of sensory inputs within neural circuits, with no prior knowledge of the occurrence of a sensory stimulus, increases Shannon information. Consequently, the increase in surprise serves to increase the evidence for the sensory model of the physical surroundings.
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)

29 pages, 17129 KiB  
Article
First-Stage Prostate Cancer Identification on Histopathological Images: Hand-Driven versus Automatic Learning
by Gabriel García, Adrián Colomer and Valery Naranjo
Entropy 2019, 21(4), 356; https://doi.org/10.3390/e21040356 - 2 Apr 2019
Cited by 19 | Viewed by 6377
Abstract
Analysis of histopathological images is the most reliable procedure to identify prostate cancer. Most studies try to develop computer-aided systems to address the Gleason grading problem. In contrast, we delve into the discrimination between healthy and cancerous tissue at its earliest stage, focusing only on the information contained in the automatically segmented gland candidates. We propose a hand-driven learning approach in which we perform an exhaustive hand-crafted feature extraction stage, combining in a novel way descriptors of the morphology, texture, fractals, and contextual information of the candidates under study. Then, we carry out an in-depth statistical analysis to select the most relevant features, which constitute the inputs to the optimised machine-learning classifiers. Additionally, we apply, for the first time on segmented prostate glands, deep-learning algorithms based on a modified version of the popular VGG19 neural network. We fine-tuned the last convolutional block of the architecture to provide the model with specific knowledge about the gland images. The hand-driven learning approach, using a nonlinear Support Vector Machine, slightly outperforms the other experiments, with a final multi-class accuracy of 0.876 ± 0.026 in the discrimination between false glands (artefacts), benign glands, and Gleason grade 3 glands.

12 pages, 320 KiB  
Article
Entanglement 25 Years after Quantum Teleportation: Testing Joint Measurements in Quantum Networks
by Nicolas Gisin
Entropy 2019, 21(3), 325; https://doi.org/10.3390/e21030325 - 26 Mar 2019
Cited by 62 | Viewed by 8300
Abstract
Twenty-five years after the invention of quantum teleportation, the concept of entanglement has gained enormous popularity. This is especially nice to those who remember that entanglement was not even taught at universities until the 1990s. Today, entanglement is often presented as a resource, the resource of quantum information science and technology. However, entanglement is exploited twice in quantum teleportation. First, entanglement is the “quantum teleportation channel”, i.e., entanglement between distant systems. Second, entanglement appears in the eigenvectors of the joint measurement that Alice, the sender, has to perform jointly on the quantum state to be teleported and her half of the “quantum teleportation channel”, i.e., entanglement enabling entirely new kinds of quantum measurements. I emphasize how poorly this second kind of entanglement is understood. In particular, I use quantum networks, in which each party connected to several nodes performs a joint measurement, to illustrate that the quantumness of such joint measurements remains elusive, escaping today’s available tools to detect and quantify it.

21 pages, 13252 KiB  
Article
Image Encryption Based on Pixel-Level Diffusion with Dynamic Filtering and DNA-Level Permutation with 3D Latin Cubes
by Taiyong Li, Jiayi Shi, Xinsheng Li, Jiang Wu and Fan Pan
Entropy 2019, 21(3), 319; https://doi.org/10.3390/e21030319 - 24 Mar 2019
Cited by 91 | Viewed by 6241
Abstract
Image encryption is one of the essential tasks in image security. In this paper, we propose a novel approach that integrates a hyperchaotic system, pixel-level Dynamic Filtering, DNA computing, and operations on 3D Latin Cubes, namely DFDLC, for image encryption. Specifically, the approach consists of five stages: (1) a newly proposed 5D hyperchaotic system with two positive Lyapunov exponents is applied to generate a pseudorandom sequence; (2) for each pixel in an image, a filtering operation with different templates, called dynamic filtering, is conducted to diffuse the image; (3) DNA encoding is applied to the diffused image, and the DNA-level image is then transformed into several 3D DNA-level cubes; (4) a Latin cube operation is applied to each DNA-level cube; and (5) all the DNA cubes are integrated and decoded into a 2D cipher image. Extensive experiments are conducted on public testing images, and the results show that the proposed DFDLC achieves state-of-the-art results in terms of several evaluation criteria.
(This article belongs to the Special Issue Entropy in Image Analysis)
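To make stage (3) concrete: DNA encoding maps each 8-bit pixel to four bases via its 2-bit pairs. The sketch below fixes one of the eight standard complementary rule tables (00→A, 01→C, 10→G, 11→T); the dynamic, chaos-driven rule selection used in schemes like DFDLC is omitted here, so treat this as an illustration of the general technique rather than the paper's exact construction.

```python
# Minimal sketch of DNA-level encoding/decoding of pixel bytes.
# One fixed rule table is used; DFDLC-style schemes pick the rule
# per pixel from the chaotic sequence (omitted in this sketch).
ENCODE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def dna_encode(pixel: int) -> str:
    """Map one 8-bit pixel to four DNA bases, most-significant pair first."""
    return "".join(ENCODE[(pixel >> shift) & 0b11] for shift in (6, 4, 2, 0))

def dna_decode(bases: str) -> int:
    """Inverse mapping: four bases back to one 8-bit pixel."""
    pixel = 0
    for b in bases:
        pixel = (pixel << 2) | DECODE[b]
    return pixel
```

Because the decode map exactly inverts the encode map, permutations performed at the DNA level can be undone during decryption.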
14 pages, 446 KiB  
Article
Using Permutations for Hierarchical Clustering of Time Series
by Jose S. Cánovas, Antonio Guillamón and María Carmen Ruiz-Abellón
Entropy 2019, 21(3), 306; https://doi.org/10.3390/e21030306 - 21 Mar 2019
Cited by 5 | Viewed by 4209
Abstract
Two distances based on permutations are considered to measure the similarity of two time series according to their strength of dependency. The distance measures are used together with different linkages to obtain hierarchical clustering methods for time series by dependency. We apply these distances to both simulated theoretical and real data series. For the simulated time series, the distances show good clustering results for both linear and non-linear dependencies. The effects of the embedding dimension and the linkage method are also analyzed. Finally, several real data series are properly clustered using the proposed method. Full article
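To make the idea concrete, here is a simplified, hypothetical stand-in for a permutation-based dependency distance (not the paper's exact definitions): each series is symbolized by the ordinal pattern of each length-m window, and the distance is one minus the normalized mutual information of the two symbol sequences.

```python
from collections import Counter
from math import log

def ordinal_symbols(x, m=3):
    """Symbolize a series: each length-m window is replaced by the
    permutation that sorts it (its ordinal pattern)."""
    return [tuple(sorted(range(m), key=lambda k: x[i + k]))
            for i in range(len(x) - m + 1)]

def dependency_distance(x, y, m=3):
    """Illustrative dependency distance in [0, 1]: one minus the normalized
    mutual information of the two ordinal-symbol sequences. 0 means the
    pattern sequences are fully dependent; values near 1 mean independence."""
    sx, sy = ordinal_symbols(x, m), ordinal_symbols(y, m)
    n = min(len(sx), len(sy))
    px, py = Counter(sx[:n]), Counter(sy[:n])
    pxy = Counter(zip(sx[:n], sy[:n]))
    mi = sum(c / n * log(c * n / (px[a] * py[b])) for (a, b), c in pxy.items())
    hx = -sum(c / n * log(c / n) for c in px.values())
    hy = -sum(c / n * log(c / n) for c in py.values())
    hmax = max(hx, hy)
    return 1.0 - (mi / hmax if hmax > 0 else 1.0)
```

Such a pairwise distance matrix can then be fed into any standard hierarchical linkage (single, complete, average) to cluster series by dependency.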
19 pages, 1090 KiB  
Article
Probability Distributions with Singularities
by Federico Corberi and Alessandro Sarracino
Entropy 2019, 21(3), 312; https://doi.org/10.3390/e21030312 - 21 Mar 2019
Cited by 11 | Viewed by 4351
Abstract
In this paper, we review some general properties of probability distributions which exhibit singular behavior. After introducing the matter with several examples based on various models of statistical mechanics, we discuss, with the help of such paradigms, the underlying mathematical mechanism producing the singularity and other topics, such as the condensation of fluctuations, the relationships with ordinary phase transitions, the giant response associated with anomalous fluctuations, and the interplay with fluctuation relations. Full article
19 pages, 878 KiB  
Article
Limiting Uncertainty Relations in Laser-Based Measurements of Position and Velocity Due to Quantum Shot Noise
by Andreas Fischer
Entropy 2019, 21(3), 264; https://doi.org/10.3390/e21030264 - 8 Mar 2019
Cited by 11 | Viewed by 4080
Abstract
With the ongoing progress of optoelectronic components, laser-based measurement systems allow measurements of position as well as displacement, strain and velocity with unbeatable speed and low measurement uncertainty. The performance limit is often studied for a single measurement setup, but a fundamental comparison of different measurement principles with respect to the ultimate limit due to quantum shot noise is rare. For this purpose, the Cramér-Rao bound is described as a universal information-theoretic tool to calculate the minimal achievable measurement uncertainty for different measurement techniques, and a review of the respective lower bounds for laser-based measurements of position, displacement, strain and velocity at particles and surfaces is presented. As a result, the calculated Cramér-Rao bounds of different measurement principles have similar forms for each measurand, including an inverse proportionality with respect to the number of photons and, in the case of the position measurement for instance, the wave number squared. Furthermore, an uncertainty principle between the position uncertainty and the wave vector uncertainty was identified, i.e., the measurement uncertainty is minimized by maximizing the wave vector uncertainty. Additionally, physically complementary measurement approaches such as interferometry and time-of-flight position measurements, as well as time-of-flight and Doppler particle velocity measurements, are shown to attain the same fundamental limit. Since most of the laser-based measurements perform similarly with respect to the quantum shot noise, the realized measurement systems behave differently only due to the available optoelectronic components for the concrete measurement task. Full article
(This article belongs to the Special Issue Entropic Uncertainty Relations and Their Applications)
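As a rough illustration of the scaling reported in the abstract, the shot-noise-limited bound for a position measurement takes the generic form below (a sketch only; the dimensionless prefactor C depends on the concrete optical setup and is an assumption here):

```latex
% Shot-noise-limited position variance: inversely proportional to the
% number of detected photons N and to the wave number k squared.
\sigma_z^2 \;\ge\; \frac{C}{N\,k^{2}}
\qquad\Longleftrightarrow\qquad
\sigma_z \;\ge\; \frac{\sqrt{C}}{k\sqrt{N}} .
```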
35 pages, 1951 KiB  
Article
Informed Weighted Non-Negative Matrix Factorization Using αβ-Divergence Applied to Source Apportionment
by Gilles Delmaire, Mahmoud Omidvar, Matthieu Puigt, Frédéric Ledoux, Abdelhakim Limem, Gilles Roussel and Dominique Courcot
Entropy 2019, 21(3), 253; https://doi.org/10.3390/e21030253 - 6 Mar 2019
Cited by 7 | Viewed by 5378
Abstract
In this paper, we propose informed weighted non-negative matrix factorization (NMF) methods using an αβ-divergence cost function. The available information comes from the exact knowledge/boundedness of some components of the factorization—which are used to structure the NMF parameterization—together with the row sum-to-one property of one matrix factor. In this contribution, we extend our previous work, which partly involved some of these aspects, to αβ-divergence cost functions. We derive new update rules which extend the previous ones and take into account the available information. Experiments conducted for several operating conditions on realistic simulated mixtures of particulate matter sources show the relevance of these approaches. Results from a real dataset campaign are also presented and validated with expert knowledge. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
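For orientation, the classical unweighted Frobenius-norm NMF multiplicative updates of Lee and Seung, which αβ-divergence rules generalize, can be sketched as follows. This is a baseline illustration only, not the informed weighted updates derived in the paper.

```python
import numpy as np

def nmf_frobenius(X, r, iters=200, eps=1e-9, seed=0):
    """Baseline multiplicative-update NMF minimizing ||X - W H||_F^2
    (Lee-Seung). Informed weighted alpha-beta-divergence NMF replaces
    these updates and adds structural constraints, omitted here."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(iters):
        # Multiplicative updates keep W and H nonnegative by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```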
28 pages, 5871 KiB  
Article
Bayesian Compressive Sensing of Sparse Signals with Unknown Clustering Patterns
by Mohammad Shekaramiz, Todd K. Moon and Jacob H. Gunther
Entropy 2019, 21(3), 247; https://doi.org/10.3390/e21030247 - 5 Mar 2019
Cited by 23 | Viewed by 5566
Abstract
We consider the sparse recovery problem of signals with an unknown clustering pattern in the context of multiple measurement vectors (MMVs) using the compressive sensing (CS) technique. For many MMVs in practice, the solution matrix exhibits some sort of clustered sparsity pattern, or clumpy behavior, along each column, as well as joint sparsity across the columns. In this paper, we propose a new sparse Bayesian learning (SBL) method that incorporates a total-variation-like prior as a measure of the overall clustering pattern in the solution. We further incorporate a parameter in this prior to account for the emphasis on the amount of clumpiness in the supports of the solution, to improve the recovery performance of sparse signals with an unknown clustering pattern. This parameter does not exist in the other existing algorithms and is learned via our hierarchical SBL algorithm. While the proposed algorithm is constructed for MMVs, it can also be applied to single measurement vector (SMV) problems. Simulation results show the effectiveness of our algorithm compared to other algorithms for both SMV and MMV problems. Full article
(This article belongs to the Section Signal and Data Analysis)
19 pages, 956 KiB  
Article
Centroid-Based Clustering with αβ-Divergences
by Auxiliadora Sarmiento, Irene Fondón, Iván Durán-Díaz and Sergio Cruces
Entropy 2019, 21(2), 196; https://doi.org/10.3390/e21020196 - 19 Feb 2019
Cited by 13 | Viewed by 4679
Abstract
Centroid-based clustering is a widely used technique within unsupervised learning algorithms in many research fields. The success of any centroid-based clustering relies on the choice of the similarity measure in use. In recent years, most studies focused on including several divergence measures in the traditional hard k-means algorithm. In this article, we consider the problem of centroid-based clustering using the family of αβ-divergences, which is governed by two parameters, α and β. We propose a new iterative algorithm, αβ-k-means, giving closed-form solutions for the computation of the sided centroids. The algorithm can be fine-tuned by means of this pair of values, yielding a wide range of the most frequently used divergences. Moreover, it is guaranteed to converge to local minima for a wide range of values of the pair (α, β). Our theoretical contribution has been validated by several experiments performed with synthetic and real data exploring the (α, β) plane. The numerical results obtained confirm the quality of the algorithm and its suitability for use in several practical applications. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
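As a point of reference, when the divergence reduces to the squared Euclidean distance, the sided centroids collapse to the arithmetic mean and the iteration becomes ordinary Lloyd k-means. Below is a minimal sketch of that special case; the αβ generalization replaces both the distance and the centroid formula.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd k-means: the centroid-based clustering special case in
    which the divergence is the squared Euclidean distance, so the
    closed-form centroid is the arithmetic mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Update step: closed-form centroid (mean); keep old one if empty.
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl))
                     if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters
```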
14 pages, 5804 KiB  
Article
Complex Dynamics in a Memcapacitor-Based Circuit
by Fang Yuan, Yuxia Li, Guangyi Wang, Gang Dou and Guanrong Chen
Entropy 2019, 21(2), 188; https://doi.org/10.3390/e21020188 - 16 Feb 2019
Cited by 42 | Viewed by 4865
Abstract
In this paper, a new memcapacitor model and its corresponding circuit emulator are proposed, based on which a chaotic oscillator is designed and the system's dynamic characteristics are investigated, both analytically and experimentally. Extreme multistability and coexisting attractors are observed in this complex system. The basins of attraction, multistability, bifurcations, Lyapunov exponents, and initial-condition-triggered similar bifurcation are analyzed. Finally, the memcapacitor-based chaotic oscillator is realized via circuit implementation, with experimental results presented. Full article
(This article belongs to the Section Complexity)
20 pages, 3649 KiB  
Article
Entropy Generation Rate Minimization for Methanol Synthesis via a CO2 Hydrogenation Reactor
by Penglei Li, Lingen Chen, Shaojun Xia and Lei Zhang
Entropy 2019, 21(2), 174; https://doi.org/10.3390/e21020174 - 13 Feb 2019
Cited by 41 | Viewed by 4873
Abstract
The methanol synthesis via CO2 hydrogenation (MSCH) reaction is a useful CO2 utilization strategy, and this synthesis path has also been widely applied commercially for many years. In this work, the performance of an MSCH reactor with the minimum entropy generation rate (EGR) as the objective function is optimized by using finite-time thermodynamics and optimal control theory. The exterior wall temperature (EWR) is taken as the control variable, and the fixed methanol yield and conservation equations are taken as the constraints in the optimization problem. Compared with the reference reactor with a constant EWR, the total EGR of the optimal reactor decreases by 20.5%, and the EGR caused by heat transfer decreases by 68.8%. In the optimal reactor, the total EGR is mainly distributed over the first 30% of the reactor length, and the EGR caused by the chemical reaction accounts for more than 84% of the total. The selectivity of CH3OH can be enhanced by increasing the inlet molar flow rate of CO, and the CO2 conversion rate can be enhanced by removing H2O from the reaction system. The results obtained herein are in favor of optimal designs of practical tubular MSCH reactors. Full article
(This article belongs to the Special Issue Entropy Generation Minimization II)
16 pages, 9764 KiB  
Article
The Optimized Multi-Scale Permutation Entropy and Its Application in Compound Fault Diagnosis of Rotating Machinery
by Xianzhi Wang, Shubin Si, Yu Wei and Yongbo Li
Entropy 2019, 21(2), 170; https://doi.org/10.3390/e21020170 - 12 Feb 2019
Cited by 23 | Viewed by 4358
Abstract
Multi-scale permutation entropy (MPE) is a statistical indicator for detecting nonlinear dynamic changes in time series, with the merits of high calculation efficiency, good robustness, and independence from prior knowledge. However, the performance of MPE depends on the selection of the embedding dimension and time delay. To automate the parameter selection of MPE, a novel parameter optimization strategy for MPE is proposed, namely optimized multi-scale permutation entropy (OMPE). In the OMPE method, an improved Cao method is proposed to adaptively select the embedding dimension, while the time delay is determined based on mutual information. To verify the effectiveness of the OMPE method, a simulated signal and two experimental signals are used for validation. The results demonstrate that the proposed OMPE method has better feature extraction ability than existing MPE methods. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
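The single-scale permutation entropy underlying MPE can be sketched as follows; in OMPE the embedding dimension m and delay tau would be selected automatically (improved Cao method and mutual information, respectively) rather than passed in as plain arguments.

```python
from collections import Counter
from math import log, factorial

def permutation_entropy(x, m=3, tau=1, normalize=True):
    """Permutation entropy of series x with embedding dimension m and time
    delay tau: Shannon entropy of the distribution of ordinal patterns,
    optionally normalized by log(m!) so the result lies in [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k * tau]))
        for i in range(len(x) - (m - 1) * tau)
    )
    n = sum(patterns.values())
    h = -sum(c / n * log(c / n) for c in patterns.values())
    return h / log(factorial(m)) if normalize else h
```

A monotone series yields a single pattern and entropy 0; an unpredictable series spreads mass over all m! patterns and approaches 1.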
20 pages, 333 KiB  
Review
Classical (Local and Contextual) Probability Model for Bohm–Bell Type Experiments: No-Signaling as Independence of Random Variables
by Andrei Khrennikov and Alexander Alodjants
Entropy 2019, 21(2), 157; https://doi.org/10.3390/e21020157 - 8 Feb 2019
Cited by 41 | Viewed by 5369
Abstract
We start with a review of classical probability representations of quantum states and observables. We show that the correlations of the observables involved in the Bohm–Bell type experiments can be expressed as correlations of classical random variables. The main part of the paper is devoted to the conditional probability model with conditioning on the selection of the pairs of experimental settings. From the viewpoint of quantum foundations, this is a local contextual hidden-variables model. Following the recent works of Dzhafarov and collaborators, we apply our conditional probability approach to characterize (no-)signaling. Consideration of the Bohm–Bell experimental scheme in the presence of signaling is important for applications outside quantum mechanics, e.g., in psychology and social science. The main message of this paper (rooted in Ballentine) is that quantum probabilities, and more generally probabilities related to the Bohm–Bell type experiments (not only in physics, but also in psychology, sociology, game theory, economics, and finance), can be classically represented as conditional probabilities. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))
15 pages, 3574 KiB  
Article
Entropy Analysis and Neural Network-Based Adaptive Control of a Non-Equilibrium Four-Dimensional Chaotic System with Hidden Attractors
by Hadi Jahanshahi, Maryam Shahriari-Kahkeshi, Raúl Alcaraz, Xiong Wang, Vijay P. Singh and Viet-Thanh Pham
Entropy 2019, 21(2), 156; https://doi.org/10.3390/e21020156 - 7 Feb 2019
Cited by 85 | Viewed by 5215
Abstract
Today, four-dimensional chaotic systems are attracting considerable attention because of their special characteristics. This paper presents a non-equilibrium four-dimensional chaotic system with hidden attractors and investigates its dynamical behavior using a bifurcation diagram, as well as three well-known entropy measures, namely approximate entropy, sample entropy, and fuzzy entropy. In order to stabilize the proposed chaotic system, an adaptive radial-basis function neural network (RBF-NN)–based control method is proposed to represent the model of the uncertain nonlinear dynamics of the system. The Lyapunov direct method-based stability analysis of the proposed approach guarantees that all of the closed-loop signals are semi-globally uniformly ultimately bounded. Also, adaptive learning laws are proposed to tune the weight coefficients of the RBF-NN. The proposed adaptive control approach requires neither prior information about the uncertain dynamics nor the parameter values of the considered system. Simulation results validate the performance of the proposed control method. Full article
18 pages, 1663 KiB  
Article
The Radial Propagation of Heat in Strongly Driven Non-Equilibrium Fusion Plasmas
by Boudewijn van Milligen, Benjamin Carreras, Luis García and Javier Nicolau
Entropy 2019, 21(2), 148; https://doi.org/10.3390/e21020148 - 5 Feb 2019
Cited by 13 | Viewed by 3892
Abstract
Heat transport is studied in strongly heated fusion plasmas, far from thermodynamic equilibrium. The radial propagation of perturbations is studied using a technique based on the transfer entropy. Three different magnetic confinement devices are studied, and similar results are obtained. “Minor transport barriers” are detected that tend to form near rational magnetic surfaces, thought to be associated with zonal flows. Occasionally, heat transport “jumps” over these barriers, and this “jumping” behavior seems to increase in intensity when the heating power is raised, suggesting an explanation for the ubiquitous phenomenon of “power degradation” observed in magnetically confined plasmas. Reinterpreting the analysis results in terms of a continuous-time random walk, “fast” and “slow” transport channels can be discerned. These results can partially be understood in the framework of a resistive magnetohydrodynamic model. The picture that emerges shows that plasma self-organization and competing transport mechanisms are essential ingredients for a fuller understanding of heat transport in fusion plasmas. Full article
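The transfer-entropy technique can be illustrated with a minimal plug-in estimator for symbolic (e.g. binarized) series with history length 1. This is a simplification for exposition; the published analyses use measured temperature signals with longer histories and amplitude binning.

```python
from collections import Counter
from math import log

def transfer_entropy(x, y):
    """Transfer entropy TE(X -> Y) with history length 1: how much knowing
    x_t reduces the uncertainty about y_{t+1} beyond what y_t already tells
    us. Inputs are symbolic series of equal length (in nats)."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    singles = Counter(y[:-1])                       # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        # p(y1|y0,x0) / p(y1|y0) expressed with raw counts.
        te += c / n * log((c * singles[y0]) /
                          (pairs_yy[(y1, y0)] * pairs_yx[(y0, x0)]))
    return te
```

For a driver-response pair (y simply copies x with a one-step lag), TE in the causal direction approaches the entropy rate of the driver, while the reverse direction stays near zero.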
17 pages, 269 KiB  
Article
Are Virtual Particles Less Real?
by Gregg Jaeger
Entropy 2019, 21(2), 141; https://doi.org/10.3390/e21020141 - 2 Feb 2019
Cited by 24 | Viewed by 12571
Abstract
The question of whether virtual quantum particles exist is considered here in light of previous critical analysis and under the assumption that there are particles in the world as described by quantum field theory. The relationship of the classification of particles to quantum-field-theoretic calculations and the diagrammatic aids that are often used in them is clarified. It is pointed out that the distinction between virtual particles and others, and therefore judgments regarding their reality, have been made on the basis of these methods rather than on their physical characteristics. As such, it has obscured the question of their existence. It is here argued that the most influential arguments against the existence of virtual particles, but not other particles, fail because they either are arguments against the existence of particles in general rather than virtual particles per se, or are dependent on the imposition of classical intuitions on quantum systems, or are simply beside the point. Several reasons are then provided for considering virtual particles real, such as their descriptive, explanatory, and predictive value, and a clearer characterization of virtuality—one in terms of intermediate states—that also applies beyond perturbation theory is provided. It is also pointed out that, in the role of force mediators, they serve to preclude action-at-a-distance between interacting particles. For these reasons, it is concluded that virtual particles are as real as other quantum particles. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))
21 pages, 12169 KiB  
Article
Entropy Generation Analysis and Thermodynamic Optimization of Jet Impingement Cooling Using Large Eddy Simulation
by Florian Ries, Yongxiang Li, Kaushal Nishad, Johannes Janicka and Amsini Sadiki
Entropy 2019, 21(2), 129; https://doi.org/10.3390/e21020129 - 30 Jan 2019
Cited by 34 | Viewed by 6453
Abstract
In this work, entropy generation analysis is applied to characterize and optimize a turbulent impinging jet on a heated solid surface. In particular, the influence of plate inclinations and Reynolds numbers on the turbulent heat and fluid flow properties, and its impact on the thermodynamic performance of such flow arrangements, are numerically investigated. For this purpose, novel model equations are derived in the framework of Large Eddy Simulation (LES) that allow calculation of local entropy generation rates in a post-processing phase, including the effect of unresolved subgrid-scale irreversibilities. From this LES-based study, distinctive features of the heat and flow dynamics of the impinging fluid are detected, and optimal operating designs for jet impingement cooling are identified. It turned out that (1) the location of the stagnation point and that of the maximal Nusselt number differ in the case of plate inclination; (2) the impinged wall predominantly acts as a strong source of irreversibility; and (3) a flow arrangement with a jet impinging normally on the heated surface allows the most efficient use of energy, which is associated with the lowest exergy loss. Furthermore, it is found that increasing the Reynolds number intensifies the heat transfer and upgrades the second-law efficiency of such thermal systems. Thereby, the thermal efficiency enhancement can outweigh the frictional exergy loss. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
21 pages, 544 KiB  
Article
PT Symmetry, Non-Gaussian Path Integrals, and the Quantum Black–Scholes Equation
by Will Hicks
Entropy 2019, 21(2), 105; https://doi.org/10.3390/e21020105 - 23 Jan 2019
Cited by 6 | Viewed by 4586
Abstract
The Accardi–Boukas quantum Black–Scholes framework provides a means by which one can apply the Hudson–Parthasarathy quantum stochastic calculus to problems in finance. Solutions to these equations can be modelled using nonlocal diffusion processes via a Kramers–Moyal expansion, and this provides useful tools to understand their behaviour. In this paper, we develop further links between quantum stochastic processes and nonlocal diffusions by inverting the question and showing how certain nonlocal diffusions can be written as quantum stochastic processes. We then go on to show how one can use the path integral formalism, and PT-symmetric quantum mechanics, to build a non-Gaussian kernel function for the Accardi–Boukas quantum Black–Scholes equation. Behaviours observed in the real market are a natural model output, rather than something that must be deliberately included. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
20 pages, 11617 KiB  
Article
Quantifying Data Dependencies with Rényi Mutual Information and Minimum Spanning Trees
by Anne Eggels and Daan Crommelin
Entropy 2019, 21(2), 100; https://doi.org/10.3390/e21020100 - 22 Jan 2019
Cited by 3 | Viewed by 3924
Abstract
In this study, we present a novel method for quantifying dependencies in multivariate datasets, based on estimating the Rényi mutual information by minimum spanning trees (MSTs). The extent to which random variables are dependent is an important question, e.g., for uncertainty quantification and sensitivity analysis. The latter is closely related to the question of how strongly the output of, e.g., a computer simulation depends on the individual random input variables. To estimate the Rényi mutual information from data, we use a method due to Hero et al. that relies on computing MSTs of the data and uses the length of the MST in an estimator for the entropy. To reduce the computational cost of constructing the exact MST for large datasets, we explore methods to compute approximations to the exact MST, and find the multilevel approach introduced recently by Zhong et al. (2015) to be the most accurate. Because the MST computation does not require knowledge (or estimation) of the distributions, our methodology is well suited for situations where only data are available. Furthermore, we show that, in the case where only the ranking of several dependencies is required rather than their exact values, it is not necessary to compute the Rényi divergence, but only an estimator derived from it. The main contributions of this paper are the introduction of this quantifier of dependency, as well as the novel combination of using approximate methods for MSTs with estimating the Rényi mutual information via MSTs. We applied our proposed method to an artificial test case based on the Ishigami function, as well as to a real-world test case involving an El Niño dataset. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
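The core geometric ingredient, the total length of the Euclidean MST, can be sketched with Prim's algorithm. The Hero-type entropy estimator built on top of it (with edge lengths raised to a power and a bias constant) is omitted here.

```python
def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree over a list
    of coordinate tuples, via Prim's algorithm in O(n^2). This tree length
    is the quantity that enters MST-based entropy estimators."""
    n = len(points)
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    in_tree = [False] * n
    best = [float("inf")] * n   # cheapest edge connecting each node to tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        # Grow the tree by the cheapest remaining attachment edge.
        j = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[j] = True
        total += best[j]
        for i in range(n):
            if not in_tree[i]:
                best[i] = min(best[i], dist(points[j], points[i]))
    return total
```

For large datasets, the approximate multilevel MST construction mentioned in the abstract would replace this exact O(n^2) computation.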
13 pages, 6061 KiB  
Article
Parallel Lives: A Local-Realistic Interpretation of “Nonlocal” Boxes
by Gilles Brassard and Paul Raymond-Robichaud
Entropy 2019, 21(1), 87; https://doi.org/10.3390/e21010087 - 18 Jan 2019
Cited by 21 | Viewed by 9822
Abstract
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk, in the simplest possible manner, the myth that equates local realism with local hidden variables. Along the way, we reinterpret the celebrated 1935 argument of Einstein, Podolsky and Rosen, and come to the conclusion that they were right in questioning the completeness of the Copenhagen version of quantum theory, provided one believes in a local-realistic universe. Throughout our journey, we strive to explain our views from first principles, without expecting mathematical sophistication or specialized prior knowledge from the reader. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
19 pages, 2236 KiB  
Article
Configurational Entropy in Multicomponent Alloys: Matrix Formulation from Ab Initio Based Hamiltonian and Application to the FCC Cr-Fe-Mn-Ni System
by Antonio Fernández-Caballero, Mark Fedorov, Jan S. Wróbel, Paul M. Mummery and Duc Nguyen-Manh
Entropy 2019, 21(1), 68; https://doi.org/10.3390/e21010068 - 15 Jan 2019
Cited by 28 | Viewed by 12431
Abstract
Configuration entropy is believed to stabilize disordered solid-solution phases in multicomponent systems at elevated temperatures over intermetallic compounds by lowering the Gibbs free energy. Traditionally, the increment of configuration entropy with temperature was computed by time-consuming thermodynamic integration methods. In this work, a new formalism based on a hybrid combination of the Cluster Expansion (CE) Hamiltonian and Monte Carlo simulations is developed to predict the configuration entropy as a function of temperature from multi-body cluster probabilities in a multi-component system with arbitrary average composition. The multi-body probabilities are worked out by explicit inversion and direct product of a matrix formulation within orthonormal sets of point functions in the clusters, obtained from symmetry-independent correlation functions. The matrix quantities are determined from semi-canonical Monte Carlo simulations with Effective Cluster Interactions (ECIs) derived from Density Functional Theory (DFT) calculations. The formalism is applied to analyze the 4-body cluster probabilities for the quaternary system Cr-Fe-Mn-Ni as a function of temperature and alloy concentration. It is shown that, for two specific compositions (Cr25Fe25Mn25Ni25 and Cr18Fe27Mn27Ni28), the high values of the probabilities for Cr-Fe-Fe-Fe and Mn-Mn-Ni-Ni are strongly correlated with the presence of the ordered phases L12-CrFe3 and L10-MnNi, respectively. These results are in excellent agreement with predictions of these ground-state structures by ab initio calculations. The general formalism is used to investigate the configuration entropy as a function of temperature for 285 different alloy compositions. It is found that our matrix formulation of cluster probabilities provides an efficient tool to compute the configuration entropy in multi-component alloys in comparison with the result obtained by the thermodynamic integration method. At high temperatures, it is shown that many-body cluster correlations still play an important role in understanding the configuration entropy before reaching the solid-solution limit of high-entropy alloys (HEAs). Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
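For orientation, the ideal-solution (infinite-temperature) limit of the configurational entropy discussed above is S = -R Σ x_i ln x_i; the paper's cluster-based formalism corrects this limit for short-range order at finite temperature. A minimal sketch of just that textbook limit, evaluated at the two compositions from the abstract (this is not the paper's method):

```python
import math

R = 8.314462618  # molar gas constant, J/(mol K)

def ideal_config_entropy(fractions):
    """Ideal-solution configurational entropy S = -R * sum(x_i * ln x_i).

    This is the fully disordered (infinite-temperature) limit; the
    cluster-probability formalism of the paper corrects it for
    short-range order at finite temperature.
    """
    assert abs(sum(fractions) - 1.0) < 1e-9, "mole fractions must sum to 1"
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

# Equiatomic Cr25Fe25Mn25Ni25: S = R ln 4, about 11.53 J/(mol K)
print(ideal_config_entropy([0.25, 0.25, 0.25, 0.25]))
# Off-equiatomic Cr18Fe27Mn27Ni28 lies slightly below the equiatomic maximum
print(ideal_config_entropy([0.18, 0.27, 0.27, 0.28]))
```

The equiatomic composition maximizes this ideal limit; any short-range order (such as the Cr-Fe-Fe-Fe and Mn-Mn-Ni-Ni clustering found in the paper) lowers the actual configuration entropy below it.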
13 pages, 1460 KiB  
Article
The Effect of Cognitive Resource Competition Due to Dual-Tasking on the Irregularity and Control of Postural Movement Components
by Thomas Haid and Peter Federolf
Entropy 2019, 21(1), 70; https://doi.org/10.3390/e21010070 - 15 Jan 2019
Cited by 14 | Viewed by 5551
Abstract
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to studying the control of movement components (PMs), and n-shapes have been found in measures of sway irregularity, we hypothesized (H1) that the irregularity of PMs and of their respective control, as well as the control tightness, would display the n-shape. Furthermore, according to the minimal intervention principle, (H2) different PMs should be affected differently. Finally, (H3) we expected stronger dual-tasking effects in the older population, due to limited cognitive resources. We measured the kinematics of forty-one healthy volunteers (23 aged 26 ± 3; 18 aged 59 ± 4) performing 80 s tandem stances in five conditions (single-task and auditory n-back task; n = 1–4), and computed sample entropies on PM time-series as well as two novel measures of control tightness. In the PM most critical for stability, the control tightness decreased steadily, and, in contrast to H3, decreased further in the younger group. Nevertheless, we found n-shapes in most variables, with differing magnitudes, supporting H1 and H2. These results suggest that control tightness may deteriorate steadily with increased cognitive load in critical movements, despite the otherwise prominent n-shaped relationship. Full article
(This article belongs to the Section Complexity)
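The irregularity measure computed on the PM time series above, sample entropy, can be sketched in a few lines. This is a plain, unoptimized implementation of the standard SampEn(m, r) definition, not the authors' code:

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B): B counts pairs of length-m templates
    within tolerance r * std (Chebyshev distance, self-matches excluded),
    and A counts the same for templates of length m + 1."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    tol = r * sd

    def match_count(length):
        templates = [x[i:i + length] for i in range(n - m)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol
        )

    b, a = match_count(m), match_count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(0)
regular = [math.sin(0.2 * i) for i in range(200)]  # predictable: low SampEn
noisy = [random.random() for _ in range(200)]      # irregular: high SampEn
assert sample_entropy(regular) < sample_entropy(noisy)
```

Higher values indicate a more irregular, less predictable signal, which is how the paper interprets the entropy of each movement component.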
13 pages, 4392 KiB  
Article
Double Entropy Joint Distribution Function and Its Application in Calculation of Design Wave Height
by Guilin Liu, Baiyu Chen, Song Jiang, Hanliang Fu, Liping Wang and Wei Jiang
Entropy 2019, 21(1), 64; https://doi.org/10.3390/e21010064 - 14 Jan 2019
Cited by 35 | Viewed by 4305
Abstract
Wave height and wave period are important oceanic environmental factors used to describe the randomness of a wave. Within the field of ocean engineering, the calculation of design wave height is of great significance. In this paper, a maximum entropy distribution function of wave period with four undetermined parameters is derived by means of coordinate transformation and by solving conditional variational problems. A double entropy joint distribution function of wave height and wave period is then derived from the maximum entropy wave height function and the maximum entropy wave period function, with the help of the structure of the Copula function. The double entropy joint distribution function of wave height and wave period is limited neither by weak nonlinearity, nor by the assumptions of a normal stochastic process or a narrow spectrum. Moreover, owing to the many undetermined parameters it contains, it can fit observed data more closely and is more widely applicable to nonlinear waves in various cases. The engineering cases show that the recurrence level derived from the double entropy joint distribution function is higher than that from the extreme value distribution using the single variables of wave height or wave period, and also higher than that from the traditional joint distribution function of wave height and wave period. Full article
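The Copula construction that couples the two maximum entropy marginals can be illustrated with the Gumbel-Hougaard family, a common Archimedean choice for positively dependent environmental variables. The family and the exponential marginals below are placeholders for illustration only, not the ones derived in the paper:

```python
import math

def gumbel_copula(u, v, theta=2.0):
    """Gumbel-Hougaard copula
    C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)).

    theta = 1 gives independence, C(u, v) = u * v; larger theta gives
    stronger upper-tail dependence.
    """
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-(s ** (1.0 / theta)))

# Placeholder exponential marginals standing in for the paper's
# maximum entropy wave height / wave period distributions.
F_H = lambda h: 1.0 - math.exp(-h / 2.0)
F_T = lambda t: 1.0 - math.exp(-t / 6.0)

# Joint non-exceedance probability P(H <= 4 m, T <= 9 s)
joint = gumbel_copula(F_H(4.0), F_T(9.0), theta=2.0)
```

Any valid copula reduces to each marginal on the boundary (C(u, 1) = u) and stays within the Fréchet-Hoeffding bounds, which is what makes it a legitimate way to build a joint distribution from two separately derived marginals.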
14 pages, 2837 KiB  
Article
Unfolding the Complexity of the Global Value Chain: Strength and Entropy in the Single-Layer, Multiplex, and Multi-Layer International Trade Networks
by Luiz G. A. Alves, Giuseppe Mangioni, Francisco A. Rodrigues, Pietro Panzarasa and Yamir Moreno
Entropy 2018, 20(12), 909; https://doi.org/10.3390/e20120909 - 28 Nov 2018
Cited by 34 | Viewed by 8330
Abstract
The worldwide trade network has been widely studied through different data sets and network representations with a view to better understanding interactions among countries and products. Here we investigate international trade through the lenses of the single-layer, multiplex, and multi-layer networks. We discuss differences among the three network frameworks in terms of their relative advantages in capturing salient topological features of trade. We draw on the World Input-Output Database to build the three networks. We then uncover sources of heterogeneity in the way strength is allocated among countries and transactions by computing the strength distribution and entropy in each network. Additionally, we trace how entropy evolved, and show how the observed peaks can be associated with the onset of the global economic downturn. Findings suggest how more complex representations of trade, such as the multi-layer network, enable us to disambiguate the distinct roles of intra- and cross-industry transactions in driving the evolution of entropy at a more aggregate level. We discuss our results and the implications of our comparative analysis of networks for research on international trade and other empirical domains across the natural and social sciences. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
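Per network layer, the entropy computation behind the analysis above reduces to the Shannon entropy of the normalized strength distribution. A minimal sketch with illustrative numbers, not World Input-Output data:

```python
import math

def strength_entropy(strengths):
    """Shannon entropy H = -sum(p_i * ln p_i) of the strength shares
    p_i = s_i / sum(s). H is maximal (ln n) when strength is spread
    evenly across nodes and near 0 when one node dominates."""
    total = sum(strengths)
    shares = [s / total for s in strengths if s > 0]
    return -sum(p * math.log(p) for p in shares)

even = strength_entropy([25.0, 25.0, 25.0, 25.0])  # = ln 4, homogeneous trade
skewed = strength_entropy([97.0, 1.0, 1.0, 1.0])   # one dominant country
assert skewed < even
```

Drops or peaks in this quantity over time are what the authors associate with structural events such as the onset of the global economic downturn.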
27 pages, 568 KiB  
Article
Entropic Steering Criteria: Applications to Bipartite and Tripartite Systems
by Ana C. S. Costa, Roope Uola and Otfried Gühne
Entropy 2018, 20(10), 763; https://doi.org/10.3390/e20100763 - 5 Oct 2018
Cited by 28 | Viewed by 5118
Abstract
The effect of quantum steering describes a possible action at a distance via local measurements. Whereas many attempts at characterizing steerability have been pursued, answering the question as to whether a given state is steerable or not remains a difficult task. Here, we investigate the applicability of a recently proposed method for building steering criteria from generalized entropic uncertainty relations. This method works for any entropy which satisfies the properties of (i) (pseudo-) additivity for independent distributions; (ii) a state-independent entropic uncertainty relation (EUR); and (iii) joint convexity of a corresponding relative entropy. Our study extends the former analysis to Tsallis and Rényi entropies on bipartite and tripartite systems. As examples, we investigate the steerability of the three-qubit GHZ and W states. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
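The two entropy families to which the criteria are extended are quick to state in code. This sketch gives only the classical definitions applied to outcome distributions; the steering criteria themselves involve entropic uncertainty relations not reproduced here:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); tends to the
    Shannon entropy as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(x * math.log(x) for x in p if x > 0)
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha = ln(sum p_i^alpha) / (1 - alpha); tends to
    the Shannon entropy as alpha -> 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(x * math.log(x) for x in p if x > 0)
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)

# For a uniform two-outcome (qubit measurement) distribution:
#   S_2 = (1 - 1/2) / (2 - 1) = 0.5 and H_2 = -ln(1/2) = ln 2
uniform = [0.5, 0.5]
```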
43 pages, 543 KiB  
Article
Symmetry, Outer Bounds, and Code Constructions: A Computer-Aided Investigation on the Fundamental Limits of Caching
by Chao Tian
Entropy 2018, 20(8), 603; https://doi.org/10.3390/e20080603 - 13 Aug 2018
Cited by 46 | Viewed by 5254
Abstract
We illustrate how computer-aided methods can be used to investigate the fundamental limits of caching systems, an approach significantly different from the conventional analytical approach usually seen in the information theory literature. The linear programming (LP) outer bound of the entropy space serves as the starting point of this approach; however, our effort goes significantly beyond using it to prove information inequalities. We first identify and formalize the symmetry structure in the problem, which enables us to show the existence of optimal symmetric solutions. A symmetry-reduced linear program is then used to identify the boundary of the memory-transmission-rate tradeoff for several small cases, for which we obtain a set of tight outer bounds. General hypotheses on the optimal tradeoff region are formed from these computed data, which are then analytically proven. This leads to a complete characterization of the optimal tradeoff for systems with only two users, and a partial characterization for systems with only two files. Next, we show that by carefully analyzing the joint entropy structure of the outer bounds for certain cases, a novel code construction can be reverse-engineered, which eventually leads to a general class of codes. Finally, we show that outer bounds can be computed by strategically relaxing the LP in different ways, which can be used to explore the problem computationally. This allows us, firstly, to deduce generic characteristics of the converse proof, and, secondly, to compute outer bounds for larger problem cases, despite the seemingly impossible computation scale. Full article
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)
80 pages, 11447 KiB  
Article
Conditional Gaussian Systems for Multiscale Nonlinear Stochastic Systems: Prediction, State Estimation and Uncertainty Quantification
by Nan Chen and Andrew J. Majda
Entropy 2018, 20(7), 509; https://doi.org/10.3390/e20070509 - 4 Jul 2018
Cited by 55 | Viewed by 6881
Abstract
A conditional Gaussian framework for understanding and predicting complex multiscale nonlinear stochastic systems is developed. Despite the conditional Gaussianity, such systems are nevertheless highly nonlinear and are able to capture the non-Gaussian features of nature. The special structure of the system allows closed analytical formulae for solving the conditional statistics and is thus computationally efficient. A rich gallery of examples of conditional Gaussian systems is presented here, which includes data-driven physics-constrained nonlinear stochastic models, stochastically coupled reaction–diffusion models in neuroscience and ecology, and large-scale dynamical models in turbulence, fluids and geophysical flows. Making use of the conditional Gaussian structure, efficient statistically accurate algorithms involving a novel hybrid strategy for different subspaces, a judicious block decomposition and statistical symmetry are developed for solving the Fokker–Planck equation in large dimensions. The conditional Gaussian framework is also applied to develop extremely cheap multiscale data assimilation schemes, such as the stochastic superparameterization, which use particle filters to capture the non-Gaussian statistics on the large-scale part whose dimension is small, whereas the statistics of the small-scale part are conditionally Gaussian given the large-scale part. Other topics of the conditional Gaussian systems studied here include designing new parameter estimation schemes and understanding model errors. Full article
(This article belongs to the Special Issue Information Theory and Stochastics for Multiscale Nonlinear Systems)
54 pages, 1965 KiB  
Article
The Gibbs Paradox: Early History and Solutions
by Olivier Darrigol
Entropy 2018, 20(6), 443; https://doi.org/10.3390/e20060443 - 6 Jun 2018
Cited by 16 | Viewed by 9267
Abstract
This article is a detailed history of the Gibbs paradox, with philosophical morals. It purports to explain the origins of the paradox, to describe and criticize solutions of the paradox from the early times to the present, to use the history of statistical mechanics as a reservoir of ideas for clarifying foundations and removing prejudices, and to relate the paradox to broad misunderstandings of the nature of physical theory. Full article
(This article belongs to the Special Issue Gibbs Paradox 2018)
24 pages, 377 KiB  
Article
Criterion of Existence of Power-Law Memory for Economic Processes
by Vasily E. Tarasov and Valentina V. Tarasova
Entropy 2018, 20(6), 414; https://doi.org/10.3390/e20060414 - 29 May 2018
Cited by 27 | Viewed by 4224
Abstract
In this paper, we propose criteria for the existence of power-law type (PLT) memory in economic processes. We give a criterion for the existence of power-law long-range dependence in time by using the analogy with the concept of long-range alpha-interaction. We also suggest a criterion for the existence of PLT memory in the frequency domain by using the concept of non-integer dimensions. For an economic process in which an endogenous variable is known to depend on an exogenous variable, the proposed criteria make it possible to identify the presence of PLT memory. The suggested criteria are illustrated in various examples. The use of the proposed criteria allows applying fractional calculus to construct dynamic models of economic processes. These criteria can also be used to identify the linear integro-differential operators that can be considered as fractional derivatives and integrals of non-integer orders. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
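In practice, the time-domain side of such criteria amounts to checking whether a response or correlation decays as a power t^(-beta) over a wide range of lags, i.e., whether its log-log plot is close to a straight line. A rough numerical check of that signature (a least-squares slope on synthetic data, not the paper's formal criterion):

```python
import math

def loglog_slope(lags, values):
    """Least-squares slope of ln(value) versus ln(lag).

    An approximately constant slope -beta over a wide lag range is the
    practical signature of power-law t^(-beta) decay, i.e., of
    power-law type memory."""
    xs = [math.log(t) for t in lags]
    ys = [math.log(v) for v in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

lags = list(range(1, 101))
slope = loglog_slope(lags, [t ** -0.5 for t in lags])  # exact power law: -0.5
```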
18 pages, 8649 KiB  
Article
Quantum Trajectories: Real or Surreal?
by Basil J. Hiley and Peter Van Reeth
Entropy 2018, 20(5), 353; https://doi.org/10.3390/e20050353 - 8 May 2018
Cited by 12 | Viewed by 8315
Abstract
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context of quantum mechanics. We show that the conclusion that the Bohm trajectories should be called “surreal” because they are at “variance with the actual observed track” of a particle is wrong as it is based on a false argument. We also present the results of a numerical investigation of a double Stern-Gerlach experiment which shows clearly the role of the spin within the Bohm formalism and discuss situations where the appearance of the quantum potential is open to direct experimental exploration. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
23 pages, 5735 KiB  
Review
Levitated Nanoparticles for Microscopic Thermodynamics—A Review
by Jan Gieseler and James Millen
Entropy 2018, 20(5), 326; https://doi.org/10.3390/e20050326 - 28 Apr 2018
Cited by 79 | Viewed by 10754
Abstract
Levitated nanoparticles have received much attention for their potential to perform quantum mechanical experiments even at room temperature. However, even in the regime where the particle dynamics are purely classical, there is a lot of interesting physics to explore. Here we review the application of levitated nanoparticles as a new experimental platform for exploring stochastic thermodynamics in small systems. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)
12 pages, 791 KiB  
Article
Password Security as a Game of Entropies
by Stefan Rass and Sandra König
Entropy 2018, 20(5), 312; https://doi.org/10.3390/e20050312 - 25 Apr 2018
Cited by 16 | Viewed by 6838
Abstract
We consider a formal model of password security, in which two actors engage in a competition of optimal password choice against potential attacks. The proposed model is a multi-objective two-person game. Player 1 seeks an optimal password choice policy, trading off the memorability of the password (measured by Shannon entropy), the difficulty for player 2 of guessing it (measured by min-entropy), and the cognitive effort for player 1 of changing the password (measured by relative entropy, i.e., Kullback–Leibler divergence). The contribution is thus twofold: (i) the model applies multi-objective game theory to the password security problem; and (ii) it introduces different concepts of entropy to measure the quality of a password choice process from different angles (and not of a given password itself, since a single password cannot be quality-assessed in terms of entropy). We illustrate our approach with an example from everyday life, namely an analysis of the password choices of employees. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
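The three entropy notions in the model are standard and easy to state in code. A toy illustration of why they pull in different directions, using a hypothetical four-password choice policy rather than the employee data of the paper:

```python
import math

def shannon_entropy(p):
    """Memorability proxy: Shannon entropy of the choice policy, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def min_entropy(p):
    """Guessing resistance: min-entropy -log2(max p), set by the
    attacker's single best guess."""
    return -math.log2(max(p))

def kl_divergence(p, q):
    """Switching-cost proxy: Kullback-Leibler divergence D(p || q) in bits."""
    return sum(a * math.log2(a / b) for a, b in zip(p, q) if a > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # choose among 4 passwords uniformly
biased = [0.70, 0.10, 0.10, 0.10]   # favour one memorable password

# Biasing the policy costs some Shannon entropy but far more min-entropy:
# the attacker's single best guess now succeeds 70% of the time.
```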
36 pages, 529 KiB  
Article
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
by Conor Finn and Joseph T. Lizier
Entropy 2018, 20(4), 297; https://doi.org/10.3390/e20040297 - 18 Apr 2018
Cited by 69 | Viewed by 9321
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
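The starting split used above, before any lattice construction, is elementary: pointwise mutual information separates into an unsigned specificity term and an unsigned ambiguity term. A sketch of just that split, with illustrative probabilities:

```python
import math

def specificity_ambiguity(p_s, p_s_given_t):
    """Split the pointwise mutual information i(s; t) = h(s) - h(s|t)
    into its two unsigned entropic parts: specificity h(s) = -log2 p(s)
    and ambiguity h(s|t) = -log2 p(s|t). Their difference recovers
    i(s; t), which may be negative (misinformative) even though both
    parts are non-negative."""
    specificity = -math.log2(p_s)
    ambiguity = -math.log2(p_s_given_t)
    return specificity, ambiguity, specificity - ambiguity

# Observing t raises p(s) from 1/4 to 1/2: i(s; t) = 2 - 1 = +1 bit
spec, amb, pmi = specificity_ambiguity(0.25, 0.5)
# Observing t lowers p(s) from 1/2 to 1/4: i(s; t) = 1 - 2 = -1 bit
_, _, neg_pmi = specificity_ambiguity(0.5, 0.25)
```

Working with the two unsigned parts separately is what lets the paper build a well-behaved redundancy lattice for each, sidestepping the sign problem of pointwise mutual information.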
15 pages, 1150 KiB  
Article
Polynomial-Time Algorithm for Learning Optimal BFS-Consistent Dynamic Bayesian Networks
by Margarida Sousa and Alexandra M. Carvalho
Entropy 2018, 20(4), 274; https://doi.org/10.3390/e20040274 - 12 Apr 2018
Cited by 5 | Viewed by 5089
Abstract
Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm for learning optimal DBNs consistent with a breadth-first search (BFS) order, named bcDBN. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach enlarges the search space of the state-of-the-art tDBN algorithm exponentially in the number of variables. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well across different experiments. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
12 pages, 809 KiB  
Article
Distance Entropy Cartography Characterises Centrality in Complex Networks
by Massimo Stella and Manlio De Domenico
Entropy 2018, 20(4), 268; https://doi.org/10.3390/e20040268 - 11 Apr 2018
Cited by 28 | Viewed by 6539
Abstract
We introduce distance entropy as a measure of homogeneity in the distribution of path lengths between a given node and its neighbours in a complex network. Distance entropy defines a new centrality measure whose properties are investigated for a variety of synthetic network models. By coupling distance entropy information with closeness centrality, we introduce a network cartography which allows one to reduce the degeneracy of ranking based on closeness alone. We apply this methodology to the empirical multiplex lexical network encoding the linguistic relationships known to English speaking toddlers. We show that the distance entropy cartography better predicts how children learn words compared to closeness centrality. Our results highlight the importance of distance entropy for gaining insights from distance patterns in complex networks. Full article
(This article belongs to the Special Issue Graph and Network Entropies)
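The definition lends itself to a compact sketch: compute BFS shortest-path distances from a node, then take the Shannon entropy of their distribution. The toy star graph below is illustrative only, not the toddler lexical network of the paper:

```python
import math
from collections import Counter, deque

def distance_entropy(adj, node):
    """Shannon entropy (bits) of the distribution of shortest-path
    lengths from `node` to every other reachable node. 0 means all
    nodes sit at the same distance (perfectly homogeneous); larger
    values mean more heterogeneous distances."""
    dist = {node: 0}
    queue = deque([node])
    while queue:  # breadth-first search for unweighted shortest paths
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    counts = Counter(d for n, d in dist.items() if n != node)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
assert distance_entropy(star, 0) == 0.0  # hub: every node at distance 1
assert distance_entropy(star, 1) > 0.0   # leaf: distances 1, 2, 2
```

Pairing this quantity with closeness centrality gives the two coordinates of the cartography described in the abstract.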
23 pages, 1755 KiB  
Article
Transductive Feature Selection Using Clustering-Based Sample Entropy for Temperature Prediction in Weather Forecasting
by Zahra Karevan and Johan A. K. Suykens
Entropy 2018, 20(4), 264; https://doi.org/10.3390/e20040264 - 10 Apr 2018
Cited by 15 | Viewed by 5134
Abstract
Entropy measures have been of major interest to researchers as a means of quantifying the information content of a dynamical system. One of the well-known methodologies is sample entropy, a model-free approach that can be deployed to measure the information transfer in time series. Sample entropy is based on the conditional entropy, where a major concern is the number of past delays in the conditional term. In this study, we deploy a lag-specific conditional entropy to identify the informative past values. Moreover, considering the seasonality structure of the data, we propose a clustering-based sample entropy to exploit the temporal information. Clustering-based sample entropy is based on the sample entropy definition while considering the clustering information of the training data and the membership of the test point to the clusters. In this study, we utilize the proposed method for transductive feature selection in black-box weather forecasting and conduct experiments on minimum and maximum temperature prediction in Brussels for 1–6 days ahead. The results reveal that considering the local structure of the data can improve the feature selection performance. In addition, despite the large reduction in the number of features, the performance is competitive with the case of using all features. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)