Entropy, Volume 21, Issue 11 (November 2019) – 105 articles

Cover Story: We applied two information-theoretic measures, the axiomatically proposed transfer entropy (TE) and the first-principles-based information flow (IF), to detect and quantify climate interactions. Because estimating TE is quite challenging, we applied various TE estimators to idealized test cases and measured their sensitivity to sample size. We propose the composite use of TE-kernel and TE-k-nearest-neighbour with parameter testing, in addition to TE-linear and IF-linear for linear systems. A realistic two-way Indo-Pacific coupling is detected; however, an unrealistic information exchange from European air temperatures to the NAO is also detected, which hints at a hidden driving process. Hence, the limitations of the estimators, the length of the time series, and the system at hand must be taken into account before drawing any conclusions from TE and IF-linear estimations. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
A Secure and Robust Image Hashing Scheme Using Gaussian Pyramids
Entropy 2019, 21(11), 1132; https://doi.org/10.3390/e21111132 - 19 Nov 2019
Cited by 11 | Viewed by 1661
Abstract
An image hash is an alternative to cryptographic hash functions for checking the integrity of digital images. Compared to cryptographic hash functions, an image hash or a Perceptual Hash Function (PHF) is resilient to content-preserving distortions and sensitive to malicious tampering. In this paper, a robust and secure image hashing technique using a Gaussian pyramid is proposed. A Gaussian pyramid decomposes an image into different resolution levels which can be utilized to obtain robust and compact hash features. These stable features have been utilized in the proposed work to construct a secure and robust image hash. The proposed scheme uses Laplacian of Gaussian (LoG) and disk filters to filter the low-resolution Gaussian-decomposed image. The filtered images are then subtracted and their difference is used as the hash. To make the hash secure, a key is introduced before feature extraction, thus making the entire feature space random. The proposed hashing scheme has been evaluated through a number of experiments involving cases of non-malicious distortions and malicious tampering. Experimental results reveal that the proposed hashing scheme is robust against non-malicious distortions and sensitive enough to detect minute malicious tampering. Moreover, False Positive Probability (FPP) and False Negative Probability (FNP) results demonstrate the effectiveness of the proposed scheme when compared to state-of-the-art image hashing algorithms proposed in the literature. Full article

Figure 1
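The pipeline described above (Gaussian pyramid decomposition, LoG and disk filtering, hashing the filter difference) can be sketched in a few lines. This is a minimal illustration with NumPy/SciPy, not the authors' scheme: the secret-key randomization is omitted, the disk filter is approximated by a mean filter, and the pyramid depth, sigma, and hash length are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace, uniform_filter

def gaussian_pyramid_hash(img, levels=3, bits=64):
    """Toy perceptual hash: smooth-and-decimate down to a low-resolution
    pyramid level, apply a LoG filter and a mean ("disk"-like) filter,
    then binarise their difference against its median."""
    x = img.astype(float)
    for _ in range(levels):                  # Gaussian pyramid: blur, then downsample
        x = gaussian_filter(x, sigma=1.0)[::2, ::2]
    log = gaussian_laplace(x, sigma=1.0)     # edge-sensitive response
    disk = uniform_filter(x, size=3)         # crude stand-in for a disk filter
    diff = (log - disk).ravel()
    n = min(bits, diff.size)
    return (diff[:n] > np.median(diff)).astype(np.uint8)
```

Identical inputs always map to identical bit strings; robustness to content-preserving distortions would be judged by the Hamming distance between hashes.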

Article
Measurement Based Quantum Heat Engine with Coupled Working Medium
Entropy 2019, 21(11), 1131; https://doi.org/10.3390/e21111131 - 19 Nov 2019
Cited by 9 | Viewed by 1285
Abstract
We consider a measurement-based single-temperature quantum heat engine without feedback control, introduced recently by Yi, Talkner and Kim [Phys. Rev. E 96, 022108 (2017)]. Taking the working medium of the engine to be a one-dimensional Heisenberg model of two spins, we calculate the efficiency of the engine undergoing a cyclic process. Starting with two spin-1/2 particles, we also investigate the scenario of higher spins. We show that, for this model of a coupled working medium, the efficiency can be higher than that of an uncoupled one. However, the relationship between the coupling constant and the efficiency of the engine is rather involved. We find that in the higher-spin scenario the efficiency can sometimes be negative (meaning that work has to be done to run the engine cycle) for a certain range of coupling constants, in contrast to the aforesaid work of Yi, Talkner and Kim, who showed that the extracted work is always positive in the absence of coupling. We provide arguments for this negative efficiency in higher-spin scenarios. Interestingly, this happens only in the asymmetric scenarios, where the two spins are different. Given these facts, for judiciously chosen conditions, an engine with a coupled working medium offers an efficiency advantage over an uncoupled one. Full article
(This article belongs to the Section Thermodynamics)

Figure 1
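The working medium above, a one-dimensional Heisenberg model of two spin-1/2 particles, is small enough to diagonalize directly. A NumPy sketch of the Hamiltonian H = J S1·S2 recovering the familiar singlet/triplet spectrum; this illustrates the medium only, not the measurement-based engine cycle itself.

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def heisenberg_two_spin(J):
    """H = J * S1 . S2 for two coupled spin-1/2 particles."""
    return J * sum(np.kron(s, s) for s in (sx, sy, sz))

evals = np.sort(np.linalg.eigvalsh(heisenberg_two_spin(1.0)))
# Singlet at -3J/4 and a threefold-degenerate triplet at +J/4
```

For higher spins, the same construction applies with (2s+1)-dimensional spin matrices, at the cost of a larger Hilbert space.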

Article
Dynamical Behavior of β-Lactamases and Penicillin-Binding Proteins in Different Functional States and Its Potential Role in Evolution
Entropy 2019, 21(11), 1130; https://doi.org/10.3390/e21111130 - 19 Nov 2019
Cited by 5 | Viewed by 1376
Abstract
β-Lactamases are enzymes produced by bacteria to hydrolyze β-lactam-based antibiotics, and they pose a serious threat to public health through the associated antibiotic resistance. Class A β-lactamases are structurally and functionally related to penicillin-binding proteins (PBPs). Despite extensive studies of the structures, catalytic mechanisms and dynamics of both β-lactamases and PBPs, the potentially different dynamical behaviors of these proteins in different functional states generally remain elusive. In this study, four evolutionarily related proteins, including TEM-1 and TOHO-1 as class A β-lactamases and PBP-A and DD-transpeptidase as two PBPs, are subjected to molecular dynamics simulations and various analyses to characterize their dynamical behaviors in different functional states. Penicillin G and its ring-opening product serve as common ligands for these four proteins of interest. The dynamic analyses of the overall structures, the active sites with penicillin G, and three catalytically important residues commonly shared by all four proteins reveal unexpected cross-similarities between class A β-lactamases and PBPs. These findings shed light on both the hidden relations among the dynamical behaviors of these proteins and the functional and evolutionary relations among class A β-lactamases and PBPs. Full article

Graphical abstract

Article
Solvability of the p-Adic Analogue of Navier–Stokes Equation via the Wavelet Theory
Entropy 2019, 21(11), 1129; https://doi.org/10.3390/e21111129 - 17 Nov 2019
Cited by 13 | Viewed by 1171
Abstract
p-Adic numbers serve as the simplest ultrametric model for the tree-like structures arising in various physical and biological phenomena. Recently, p-adic dynamical equations have started to be applied in geophysics, to model the propagation of fluids (oil, water, and oil-in-water and water-in-oil emulsions) in capillary networks in porous random media. In particular, a p-adic analog of the Navier–Stokes equation was derived starting with a system of differential equations respecting the hierarchic structure of a capillary tree. In this paper, using the Schauder fixed point theorem together with wavelet functions, we extend the study of the solvability of a p-adic field analog of the Navier–Stokes equation derived from a system of hierarchic equations for fluid flow in a capillary network in a porous medium. This equation describes the propagation of fluid flow through geo-conduits, consisting of a mixture of fractures (as well as fracture corridors) and capillary networks, detected by seismics as joint wave/mass conduits. Furthermore, applying the Adomian decomposition method, we formulate the solution of the p-adic analog of the Navier–Stokes equation in terms of a series in general form. This solution may help researchers come closer and find more facts, taking into consideration the scaling, hierarchies, and formal derivations imprinted from the analogous aspects of real-world phenomena. Full article
(This article belongs to the Section Multidisciplinary Applications)

Figure 1

Article
Complexity-Based Measures of Postural Sway during Walking at Different Speeds and Durations Using Multiscale Entropy
Entropy 2019, 21(11), 1128; https://doi.org/10.3390/e21111128 - 16 Nov 2019
Cited by 11 | Viewed by 1804
Abstract
Participation in various physical activities requires successful postural control in response to the changes in position of our body. It is important to assess postural control for early detection of falls and foot injuries. Walking at various speeds and for various durations is essential in daily physical activities. The purpose of this study was to evaluate the changes in complexity of the center of pressure (COP) during walking at different speeds and for different durations. In this study, a total of 12 participants were recruited for walking at two speeds (slow at 3 km/h and moderate at 6 km/h) for two durations (10 and 20 min). An insole-type plantar pressure measurement system was used to measure and calculate COP as participants walked on a treadmill. Multiscale entropy (MSE) was used to quantify the complexity of COP. Our results showed that the complexity of COP significantly decreased (p < 0.05) after 20 min of walking (complexity index, CI = −3.51) compared to 10 min of walking (CI = −3.20) while walking at 3 km/h, but not at 6 km/h. Our results also showed that the complexity index of COP indicated a significant difference (p < 0.05) between walking at speeds of 3 km/h (CI = −3.2) and 6 km/h (CI = −3.6) at the walking duration of 10 min, but not at 20 min. This study demonstrated an interaction between walking speeds and walking durations on the complexity of COP. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)

Figure 1
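Multiscale entropy as used in this study coarse-grains the series at each scale and computes sample entropy on the result. A compact NumPy sketch with a quadratic-time sample-entropy estimator; the tolerance r and embedding dimension m below are the conventional defaults, not necessarily the study's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn: -log of the conditional probability that sequences matching
    for m points (within tolerance r * std) also match for m + 1 points."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def match_pairs(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(templ)) / 2   # unordered pairs, i != j
    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4)):
    """Coarse-grain the series at each scale, then compute SampEn."""
    out = []
    for s in scales:
        n = (len(x) // s) * s
        cg = np.asarray(x[:n], float).reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(cg))
    return np.array(out)
```

The complexity index reported in such studies is typically a (signed) sum of the SampEn values across scales.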

Article
Coevolutionary Analysis of Protein Subfamilies by Sequence Reweighting
Entropy 2019, 21(11), 1127; https://doi.org/10.3390/e21111127 - 16 Nov 2019
Cited by 4 | Viewed by 1495
Abstract
Extracting structural information from sequence co-variation has become common computational biology practice in recent years, mainly due to the availability of large sequence alignments of protein families. However, identifying features that are specific to sub-classes and not shared by all members of the family using sequence-based approaches has remained an elusive problem. We here present a coevolutionary-based method to differentially analyze subfamily-specific structural features by a continuous sequence reweighting (SR) approach. We introduce the underlying principles and test its predictive capabilities on the Response Regulator family, whose subfamilies have been previously shown to display distinct, specific homo-dimerization patterns. Our results show that this reweighting scheme is effective in assigning structural features known a priori to subfamilies, even when sequence data are relatively scarce. Furthermore, sequence reweighting allows assessing whether individual structural contacts pertain to specific subfamilies, and it thus paves the way for the identification of specificity-determining contacts from sequence variation data. Full article

Figure 1

Article
Multisensor Estimation Fusion with Gaussian Process for Nonlinear Dynamic Systems
Entropy 2019, 21(11), 1126; https://doi.org/10.3390/e21111126 - 16 Nov 2019
Cited by 2 | Viewed by 1079
Abstract
The Gaussian process is gaining increasing importance in different areas such as signal processing, machine learning, robotics, control, and aerospace and electronic systems, since it can represent unknown system functions by a posterior probability. This paper investigates multisensor fusion in the setting of Gaussian process estimation for nonlinear dynamic systems. In order to overcome the difficulty caused by the unknown nonlinear system models, we associate the transition and measurement functions with Gaussian process regression models; then the advantages of the non-parametric feature of the Gaussian process can be fully exploited for state estimation. Next, based on the Gaussian process filters, we propose two different fusion methods, centralized estimation fusion and distributed estimation fusion, to utilize the multisensor measurement information. Furthermore, the equivalence of the two proposed fusion methods is established by rigorous analysis. Finally, numerical examples for nonlinear target tracking systems demonstrate the equivalence and show that multisensor estimation fusion performs better than a single sensor. Meanwhile, the proposed fusion methods outperform the convex combination method and the relaxed Chebyshev center covariance intersection fusion algorithm. Full article
(This article belongs to the Section Signal and Data Analysis)

Figure 1
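The non-parametric building block here, Gaussian process regression, reduces to a few linear-algebra steps. A minimal 1-D sketch of the posterior mean and variance with an RBF kernel; the fusion filters themselves are not reproduced, and the kernel and hyperparameters are assumptions.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel k(a, b) = sf^2 * exp(-|a-b|^2 / (2 ell^2))."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and pointwise variance of GP regression (1-D inputs)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

In a GP filter, regressions of this form replace the unknown transition and measurement functions.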

Article
Sub-Graph Regularization on Kernel Regression for Robust Semi-Supervised Dimensionality Reduction
Entropy 2019, 21(11), 1125; https://doi.org/10.3390/e21111125 - 15 Nov 2019
Viewed by 1050
Abstract
Dimensionality reduction has always been a major problem in handling huge-dimensionality datasets. Due to the utilization of labeled data, supervised dimensionality reduction methods such as Linear Discriminant Analysis tend to achieve better classification performance than unsupervised methods. However, supervised methods need sufficient labeled data in order to achieve satisfying results. Therefore, semi-supervised learning (SSL) methods can be a practical choice when labeled data are limited. In this paper, we develop a novel SSL method by extending anchor graph regularization (AGR) for dimensionality reduction. In detail, AGR is an accelerated semi-supervised learning method for propagating class labels to unlabeled data. However, it cannot handle new incoming samples. We thereby improve AGR by adding kernel regression to the basic objective function of AGR. Therefore, the proposed method can not only estimate the class labels of unlabeled data but also achieve dimensionality reduction. Extensive simulations on several benchmark datasets are conducted, and the simulation results verify the effectiveness of the proposed work. Full article
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)

Figure 1
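The kernel-regression term added to the AGR objective is what lets the extended method handle new incoming samples. As a rough illustration of that ingredient only, here is plain Nadaraya-Watson kernel regression with a Gaussian kernel; the kernel choice and bandwidth are assumptions, and this is not the paper's full objective function.

```python
import numpy as np

def kernel_regression(X_train, y_train, X_new, bandwidth=0.5):
    """Nadaraya-Watson kernel regression: predict targets for new samples
    as a kernel-weighted average of the training targets."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth**2))
    w /= w.sum(axis=1, keepdims=True)   # normalise weights per query point
    return w @ y_train
```

Applied to soft class-label vectors instead of scalar targets, the same formula extends label propagation to out-of-sample points.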

Article
Transfer Entropy between Communities in Complex Financial Networks
Entropy 2019, 21(11), 1124; https://doi.org/10.3390/e21111124 - 15 Nov 2019
Cited by 7 | Viewed by 1869
Abstract
In this paper, we analyze information flows between communities of financial markets, represented as complex networks. Each community, typically corresponding to a business sector, represents a significant part of the financial market, and the detection of interactions between communities is crucial in the analysis of risk spreading in financial markets. We show that transfer entropy provides a coherent description of information flows in and between communities, also capturing non-linear interactions. In particular, we focus on the information transfer of rare events, typically large drops which can spread through the network. These events can be analyzed by Rényi transfer entropy, which makes it possible to accentuate particular types of events. We analyze transfer entropies between communities of the five largest financial markets and compare the information flows with the correlation network of each market. From the transfer entropy picture, we can also identify the non-linear interactions, which are typical in the case of extreme events. The strongest flows can typically be observed between specific types of business sectors; the financial sector is the most significant example. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Figure 1
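Transfer entropy can be estimated, in its simplest binned form, directly from counts. A small NumPy sketch of lag-1 Shannon TE from X to Y in bits, with two equal-probability bins; the paper's Rényi variant and the community-level aggregation are not reproduced here.

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Binned transfer entropy TE(X -> Y) in bits at lag 1:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    def disc(s):  # discretise into equal-probability bins
        q = np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(s, q)
    xd, yd = disc(x), disc(y)
    y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(y1, y0, x0):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_y0x0 = joint.sum(axis=0)      # p(y_t, x_t)
    p_y1y0 = joint.sum(axis=2)      # p(y_t+1, y_t)
    p_y0 = joint.sum(axis=(0, 2))   # p(y_t)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p = joint[a, b, c]
                if p > 0:
                    te += p * np.log2(p * p_y0[b] / (p_y0x0[b, c] * p_y1y0[a, b]))
    return te
```

With a driven pair (y following lagged x), the estimated flow from X to Y should clearly exceed the reverse direction.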

Article
A Novel Residual Dense Pyramid Network for Image Dehazing
Entropy 2019, 21(11), 1123; https://doi.org/10.3390/e21111123 - 15 Nov 2019
Cited by 5 | Viewed by 1285
Abstract
Recently, convolutional neural networks (CNNs) based on the encoder-decoder structure have been successfully applied to image dehazing. However, these CNN-based dehazing methods have two limitations: First, the dehazing models are large in size with enormous numbers of parameters, which not only consumes much GPU memory but also makes them hard to train from scratch. Second, these models, which ignore the structural information at different resolutions of intermediate layers, cannot capture informative texture and edge information for dehazing by stacking more layers. In this paper, we propose a light-weight end-to-end network named the residual dense pyramid network (RDPN) to address the above problems. To fully exploit the structural information at different resolutions of intermediate layers, a new residual dense pyramid (RDP) is proposed as a building block. By introducing a dense information fusion layer and the residual learning module, the RDP can maximize the information flow and extract local features. Furthermore, the RDP learns the structural information from intermediate layers via a multiscale pyramid fusion mechanism. To reduce the number of network parameters and to ease the training process, we use one RDP in the encoder and two RDPs in the decoder, followed by a multilevel pyramid pooling layer for incorporating global context features before estimating the final result. Extensive experimental results on a synthetic dataset and real-world images demonstrate that the new RDPN achieves favourable performance compared with some state-of-the-art methods, e.g., the recent densely connected pyramid dehazing network, the all-in-one dehazing network, the enhanced pix2pix dehazing network, pixel-based alpha blending, artificial multi-exposure image fusion and the genetic programming estimator, in terms of accuracy, run time and number of parameters. To be specific, RDPN outperforms all of the above methods in terms of PSNR by at least 4.25 dB. The run time of the proposed method is 0.021 s, and the number of parameters is 1,534,799, only 6% of that used by the densely connected pyramid dehazing network. Full article
(This article belongs to the Section Signal and Data Analysis)

Figure 1

Article
An Improved Belief Entropy to Measure Uncertainty of Basic Probability Assignments Based on Deng Entropy and Belief Interval
Entropy 2019, 21(11), 1122; https://doi.org/10.3390/e21111122 - 15 Nov 2019
Cited by 7 | Viewed by 1187
Abstract
It is still an open issue to measure the uncertainty of a basic probability assignment function in the Dempster-Shafer theory framework, which is the foundation and preliminary work for conflict degree measurement and the combination of evidence. This paper proposes an improved belief entropy to measure the uncertainty of a basic probability assignment based on Deng entropy and the belief interval, which takes the belief function and the plausibility function as the lower bound and the upper bound, respectively. Specifically, the center and the span of the belief interval are employed to define the total uncertainty degree. It can be proved that the improved belief entropy degenerates to Shannon entropy when the basic probability assignment is Bayesian. The results of numerical examples and a case study show that its efficiency and flexibility are better than those of previous uncertainty measures. Full article
(This article belongs to the Section Entropy and Biology)

Figure 1
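The paper builds on Deng entropy, whose definition is simple enough to compute directly. A sketch that also checks the degeneration to Shannon entropy for a Bayesian BPA, the property quoted in the abstract; the improved belief-interval measure itself is not specified in the abstract and is therefore not reproduced.

```python
import math

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment.
    bpa maps focal elements (frozensets) to masses summing to 1:
    E_d = -sum m(A) * log2( m(A) / (2^|A| - 1) )."""
    return -sum(m * math.log2(m / (2 ** len(A) - 1))
                for A, m in bpa.items() if m > 0)

# For a Bayesian BPA (all focal elements are singletons), 2^|A| - 1 = 1,
# so Deng entropy reduces to the ordinary Shannon entropy.
bayesian = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.25, frozenset({'c'}): 0.25}
shannon = -sum(m * math.log2(m) for m in bayesian.values())
```

Masses on multi-element focal sets inflate the entropy through the 2^|A| - 1 factor, which is what makes the measure sensitive to non-specificity.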

Article
Distribution Structure Learning Loss (DSLL) Based on Deep Metric Learning for Image Retrieval
Entropy 2019, 21(11), 1121; https://doi.org/10.3390/e21111121 - 15 Nov 2019
Cited by 4 | Viewed by 1372
Abstract
The massive number of images demands highly efficient image retrieval tools. Deep distance metric learning (DDML) is proposed to learn image similarity metrics in an end-to-end manner based on the convolutional neural network, which has achieved encouraging results. The loss function is crucial in DDML frameworks. However, we found limitations to this model. When learning the similarity of positive and negative examples, the current methods aim to pull positive pairs as close as possible and to separate negative pairs by equal distances in the embedding space. Consequently, the data distribution might be neglected. In this work, we focus on a distribution structure learning loss (DSLL) algorithm that aims to preserve the geometric information of images. To achieve this, we first propose a metric distance learning for highly matching figures to preserve the similarity structure inside them. Second, we introduce an entropy weight-based structural distribution to set the weights of the representative negative samples. Third, we incorporate their weights into the process of learning to rank, so that the negative samples can preserve the consistency of their structural distribution. Finally, we present comprehensive experimental results on three popular landmark building datasets and demonstrate that our method achieves state-of-the-art performance. Full article

Figure 1

Article
Universal Sample Size Invariant Measures for Uncertainty Quantification in Density Estimation
Entropy 2019, 21(11), 1120; https://doi.org/10.3390/e21111120 - 15 Nov 2019
Cited by 3 | Viewed by 1320
Abstract
Previously, we developed a high throughput non-parametric maximum entropy method (PLOS ONE, 13(5): e0196937, 2018) that employs a log-likelihood scoring function to characterize uncertainty in trial probability density estimates through a scaled quantile residual (SQR). The SQR for the true probability density has universal sample size invariant properties equivalent to sampled uniform random data (SURD). Alternative scoring functions are considered that include the Anderson-Darling test. Scoring function effectiveness is evaluated using receiver operator characteristics to quantify efficacy in discriminating SURD from decoy-SURD, and by comparing overall performance characteristics during density estimation across a diverse test set of known probability distributions. Full article
(This article belongs to the Special Issue Data Science: Measuring Uncertainties)

Graphical abstract
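Among the alternative scoring functions considered is the Anderson-Darling test. Its computational form for testing uniformity, the relevant case when comparing against sampled uniform random data (SURD), fits in a few lines of NumPy; this is a sketch of the classic statistic only, not the authors' scoring pipeline.

```python
import numpy as np

def anderson_darling_uniform(u):
    """Anderson-Darling statistic A^2 for testing u ~ Uniform(0, 1):
    A^2 = -n - (1/n) * sum_i (2i - 1) * [ln u_(i) + ln(1 - u_(n+1-i))]."""
    u = np.sort(np.asarray(u, float))
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))
```

Genuinely uniform samples give small values of A^2, while samples that deviate from uniformity, especially in the tails, inflate the statistic sharply.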

Article
Uncovering the Dependence of Cascading Failures on Network Topology by Constructing Null Models
Entropy 2019, 21(11), 1119; https://doi.org/10.3390/e21111119 - 15 Nov 2019
Cited by 4 | Viewed by 1067
Abstract
Cascading failures are the significant cause of network breakdowns in a variety of complex infrastructure systems. Given such a system, uncovering the dependence of cascading failures on its underlying topology is essential but still not well explored in the field of complex networks. This study offers an original approach to systematically investigate the association between cascading failures and topological variation occurring in realistic complex networks by constructing different types of null models. As an example of its application, we study several standard Internet networks in detail. The null models first transform the original network into a series of randomized networks representing alternate realistic topologies, while taking its basic topological characteristics into account. Then considering the routing rule of shortest-path flow, it is sought to determine the implications of different topological circumstances, and the findings reveal the effects of micro-scale (such as degree distribution, assortativity, and transitivity) and meso-scale (such as rich-club and community structure) features on the cascade damage caused by deliberate node attacks. Our results demonstrate that the proposed method is suitable and promising to comprehensively analyze realistic influence of various topological properties, providing insight into designing the networks to make them more robust against cascading failures. Full article
(This article belongs to the Special Issue Computation in Complex Networks)

Figure 1
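The null models described above randomize a network while preserving chosen topological characteristics; the most basic example is a degree-preserving double-edge swap. A pure-Python sketch of that single ingredient, not the paper's full family of null models.

```python
import random

def double_edge_swap(edges, n_swaps, seed=0):
    """Degree-preserving null model: repeatedly pick two edges (a,b), (c,d)
    and rewire them to (a,d), (c,b) whenever this creates no self-loop or
    multi-edge. The degree sequence is left untouched."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    present = set(frozenset(e) for e in edges)
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue   # would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue   # would create a multi-edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

def degrees(edges):
    """Degree of each node in an edge list."""
    d = {}
    for a, b in edges:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d
```

Richer null models additionally constrain assortativity, transitivity, or community structure, typically by accepting or rejecting swaps according to those quantities.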

Article
A Note on Graphs with Prescribed Orbit Structure
Entropy 2019, 21(11), 1118; https://doi.org/10.3390/e21111118 - 15 Nov 2019
Cited by 2 | Viewed by 864
Abstract
This paper presents a proof of the existence of connected, undirected graphs with prescribed orbit structure, giving an explicit construction procedure for these graphs. Trees with prescribed orbit structure are also investigated. Full article
(This article belongs to the Section Multidisciplinary Applications)

Figure 1

Article
Dissipative Endoreversible Engine with Given Efficiency
Entropy 2019, 21(11), 1117; https://doi.org/10.3390/e21111117 - 15 Nov 2019
Cited by 15 | Viewed by 879
Abstract
Endoreversible thermodynamics is a finite time thermodynamics ansatz based on the assumption that reversible or equilibrated subsystems of a system interact via reversible or irreversible energy transfers. This gives a framework where irreversibilities and thus entropy production only occur in interactions, while subsystems (engines, for instance) act as reversible. In order to give an opportunity to incorporate dissipative engines with given efficiencies into an endoreversible model, we build a new dissipative engine setup. To do this, in the first step, we introduce a more general interaction type where energy loss not only results from different intensive quantities between the connected subsystems, which has been the standard in endoreversible thermodynamics up to now, but is also caused by an actual loss of the extensive quantity that is transferred via this interaction. On the one hand, this allows the modeling of leakages and friction losses, for instance, which can be represented as leaky particle or torque transfers. On the other hand, we can use it to build an endoreversible engine setup that is suitable to model engines with given efficiencies or efficiency maps and, among other things, gives an expression for their entropy production rates. By way of example, the modeling of an AC motor and its loss fluxes and entropy production rates are shown. Full article
(This article belongs to the Section Thermodynamics)

Figure 1

Article
Information Flow between Bitcoin and Other Investment Assets
Entropy 2019, 21(11), 1116; https://doi.org/10.3390/e21111116 - 14 Nov 2019
Cited by 14 | Viewed by 3088
Abstract
This paper studies the causal relationship between Bitcoin and other investment assets. We first test Granger causality and then calculate transfer entropy as an information-theoretic approach. Unlike the Granger causality test, transfer entropy clearly identifies causal interdependency between Bitcoin and other assets, including gold, stocks, and the U.S. dollar. Moreover, symbolic transfer entropy based on the dynamic rise–fall pattern in the return series shows an asymmetric information flow from other assets to Bitcoin. Our results imply that the Bitcoin market actively interacts with major asset markets, and that its long-term equilibrium, as a nascent market, gradually synchronizes with that of other investment assets. Full article
(This article belongs to the Section Multidisciplinary Applications)
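The direction-of-flow idea in the abstract above can be illustrated with a minimal plug-in transfer entropy estimator for discrete series. The coupled binary pair below is a made-up toy (`x` driving `y` with a one-step delay), not the paper's Bitcoin data, and the estimator is a didactic sketch, not the one used in the study:

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of the lag-1 transfer entropy TE_{X->Y}, in bits,
    for equal-length discrete-valued series x and y (didactic sketch)."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))    # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))          # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))           # (y_{t+1}, y_t)
    singles = Counter(y[:-1])                        # y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]             # p(y_{t+1} | y_t, x_t)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]   # p(y_{t+1} | y_t)
        te += (c / n) * log2(p_cond_full / p_cond_self)
    return te

# Toy coupled pair: y copies x with a one-step delay, so information
# should flow from X to Y and not the other way round.
random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

With this construction the X-to-Y estimate lands near 1 bit while the reverse direction stays near zero, which is the kind of asymmetry the paper reads as directional information flow.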

Article
Complex Chaotic Attractor via Fractal Transformation
Entropy 2019, 21(11), 1115; https://doi.org/10.3390/e21111115 - 14 Nov 2019
Cited by 8 | Viewed by 1359
Abstract
Based on the simplified Lorenz multiwing and Chua multiscroll chaotic systems, a rotation compound chaotic system is presented via transformation. Based on a binary fractal algorithm, a new ternary fractal algorithm is proposed. In the ternary fractal algorithm, the number of input sequences is extended from 2 to 3, which means that the chaotic attractor with fractal transformation can be presented in three-dimensional space. Taking the Lorenz system, the rotation Lorenz system, and the compound chaotic system as the seed chaotic systems, the dynamics of the complex chaotic attractors with fractal transformation are analyzed by means of bifurcation diagrams, complexity, and power spectra, and the results show that the chaotic sequences with fractal transformation have higher complexity. As an experimental verification, one kind of complex chaotic attractor is implemented on a DSP, and the result is consistent with that of the simulation, which verifies the feasibility of a digital circuit implementation. Full article
(This article belongs to the Section Complexity)

Article
Recognition of a Single Dynamic Gesture with the Segmentation Technique HS-ab and Principle Components Analysis (PCA)
Entropy 2019, 21(11), 1114; https://doi.org/10.3390/e21111114 - 14 Nov 2019
Cited by 5 | Viewed by 947
Abstract
A continuous path performed by the hand over a period of time is considered for the purpose of gesture recognition. Dynamic gesture recognition is a complex topic, since it spans from the conventional task of separating the hand from the surrounding environment to searching for the fingers and palm. This paper proposes a hand-recognition strategy using a PC webcam, a segmentation technique (HS-ab, which combines the HSV and CIELab color spaces), pre-processing of images to reduce noise, and a classifier based on Principal Component Analysis (PCA) for detecting and tracking the user's hand. The results show that the HS-ab segmentation technique and the PCA method are robust in the execution of the system, even under varying conditions of illumination and of movement speed and precision. It is for this reason that suitable feature extraction and classification allow the gesture to be located. The system was tested on the database of training images and achieved 94.74% accuracy. Full article
(This article belongs to the Special Issue Entropy-Based Algorithms for Signal Processing)
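As background for the PCA step mentioned above, here is a minimal sketch of fitting principal axes by eigendecomposition of the sample covariance. The two-feature toy data and the helper names (`pca_fit`, `pca_project`) are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def pca_fit(X, k):
    """Top-k principal axes of X (n_samples x n_features) via an
    eigendecomposition of the sample covariance matrix."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]    # pick the k largest
    return mu, vecs[:, order]

def pca_project(X, mu, W):
    """Project data onto the retained principal axes."""
    return (X - mu) @ W

# Toy two-feature data, stretched along the first feature direction
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
mu, W = pca_fit(X, 1)
Z = pca_project(X, mu, W)
# The single retained axis should capture most of the total variance
ratio = float(Z.var()) / float(X.var(axis=0).sum())
print(ratio)
```

In a recognition setting like the paper's, the projections `Z` (rather than raw pixels) would then be fed to the classification stage.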

Article
Information Theoretic Modeling of High Precision Disparity Data for Lossy Compression and Object Segmentation
Entropy 2019, 21(11), 1113; https://doi.org/10.3390/e21111113 - 13 Nov 2019
Viewed by 1447
Abstract
In this paper, we study the geometry data associated with disparity map or depth map images in order to extract easy-to-compress polynomial surface models at different bitrates, proposing an efficient mining strategy for geometry information. The segmentation, or partition of the image pixels, is viewed as a model structure selection problem, where the decisions are based on the implementable codelength of the model, akin to minimum description length for lossy representations. The intended usage of the extracted disparity map is, first, to provide the decoder with the geometry information at a very small fraction of what is required for a lossless compressed version and, second, to convey to the decoder a segmentation describing the contours of the objects in the scene. We first propose an algorithm for constructing a hierarchical segmentation based on the persistence of region contours in an iterative re-estimation algorithm. We then propose a second algorithm for constructing a new sequence of segmentations by selecting the order in which the persistent contours are included in the model, driven by decisions based on the descriptive codelength. We consider real disparity datasets in which the geometry information is given at high precision, in floating-point format, but for which encoding the raw information, at about 32 bits per pixel, is too expensive; we then demonstrate good approximations preserving the object structure of the scene, achieved at rates below 0.2 bits per pixel. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)

Article
On Integrating Size and Shape Distributions into a Spatio-Temporal Information Entropy Framework
Entropy 2019, 21(11), 1112; https://doi.org/10.3390/e21111112 - 13 Nov 2019
Cited by 4 | Viewed by 1604
Abstract
Understanding the structuration of spatio-temporal information is a common endeavour in many disciplines and application domains, e.g., geography, ecology, urban planning, and epidemiology. Revealing the processes involved, in relation to one or more phenomena, is often the first step before elaborating theories of spatial functioning and specific planning actions, e.g., epidemiological modelling or urban planning. To do so, the spatio-temporal distributions of variables that are meaningful from a decision-making viewpoint can be explored and analysed, separately or jointly, from an information viewpoint. Metrics based on entropy have long been used in these domains to quantify how uniform such distributions are; however, the spatio-temporal dimension is often only minimally embedded in the metrics used. This paper borrows the concept of patch size distribution from landscape ecology and the approach of permutation entropy used in biomedical signal processing to derive a spatio-temporal entropy analysis framework for categorical variables. The framework is based on a spatio-temporal structuration of the information that allows a decomposition of the Shannon entropy, one which can also embrace some existing spatial or temporal entropy indices to reinforce the spatio-temporal structuration. Multiway correspondence analysis is coupled with this entropy decomposition to propose further decomposition and entropy quantification of the spatio-temporal structuring information. The flexibility afforded by these different choices, including geographic scales, allows a range of domains to take the specifics of their data into account; some of these choices are explored on a dataset linked to climate change and the evolution of land cover types in Nordic areas. Full article
(This article belongs to the Special Issue Spatial Information Theory)

Article
Impact of Investor Behavior and Stock Market Liquidity: Evidence from China
Entropy 2019, 21(11), 1111; https://doi.org/10.3390/e21111111 - 13 Nov 2019
Cited by 1 | Viewed by 1493
Abstract
Investor behavior is one of the important factors affecting market liquidity, and it is of great interest to find out how. Changes in investor sentiment and in information-cognitive ability affect not only investors' expected returns but also market liquidity, through market behavior under short-selling constraints. This paper constructs a comprehensive index of investor sentiment based on the entropy method. From an empirical analysis of evidence from China, we obtain the following results: investor sentiment has a positive impact on market liquidity; the development of margin trading has curbed this positive impact; information-cognitive ability has a negative impact on market liquidity; and explosive information volume enhances market liquidity in bull markets, weakens it in bear markets, and has no significant impact during market shocks. Full article
(This article belongs to the Section Multidisciplinary Applications)

Article
Radiomics Analysis on Contrast-Enhanced Spectral Mammography Images for Breast Cancer Diagnosis: A Pilot Study
Entropy 2019, 21(11), 1110; https://doi.org/10.3390/e21111110 - 13 Nov 2019
Cited by 33 | Viewed by 1676
Abstract
Contrast-enhanced spectral mammography (CESM) is one of the latest diagnostic tools for breast care; consequently, the literature offers little radiomics image analysis that could drive the development of automatic diagnostic support systems. In this work, we propose a preliminary exploratory analysis to evaluate the impact of different sets of textural features on the discrimination of benign and malignant breast lesions. The analysis is performed on 55 ROIs extracted from 51 patients referred to the Istituto Tumori “Giovanni Paolo II” of Bari (Italy) from the breast cancer screening phase between March 2017 and June 2018. We extracted feature sets by calculating statistical measures on the original ROIs, on gradient images, on Haar decompositions of the same original ROIs, and on gray-level co-occurrence matrices (GLCMs) of each sub-ROI obtained by the Haar transform. First, we evaluated the overall impact of each feature set on the diagnosis through a principal component analysis, training a support vector machine (SVM) classifier. Then, in order to identify, for each set of features, a subset with higher diagnostic power, we performed a feature importance analysis by means of wrapper and embedded methods. Finally, we trained an SVM classifier on each subset of previously selected features to compare their classification performance with that of the overall set. We found a subset of significant features extracted from the original ROIs with a diagnostic accuracy greater than 80%. The features extracted from each sub-ROI decomposed by two levels of the Haar transform were predictive only when they were all used without any selection, reaching a best mean accuracy of about 80%. Moreover, most of the significant features calculated from the Haar decompositions and their GLCMs were extracted from recombined CESM images. Our pilot study suggests that textural features can provide complementary information for the characterization of breast lesions. In particular, we found a subset of significant features extracted from the original ROIs, the gradient ROI images, and the GLCMs calculated from each sub-ROI previously decomposed by the Haar transform. Full article
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)

Article
Stochastic Gradient Annealed Importance Sampling for Efficient Online Marginal Likelihood Estimation
Entropy 2019, 21(11), 1109; https://doi.org/10.3390/e21111109 - 12 Nov 2019
Cited by 1 | Viewed by 1355
Abstract
We consider estimating the marginal likelihood in settings with independent and identically distributed (i.i.d.) data. We propose estimating the predictive distributions in a sequential factorization of the marginal likelihood in such settings by using stochastic gradient Markov Chain Monte Carlo techniques. This approach is far more efficient than traditional marginal likelihood estimation techniques such as nested sampling and annealed importance sampling due to its use of mini-batches to approximate the likelihood. Stability of the estimates is provided by an adaptive annealing schedule. The resulting stochastic gradient annealed importance sampling (SGAIS) technique, which is the key contribution of our paper, enables us to estimate the marginal likelihood of a number of models considerably faster than traditional approaches, with no noticeable loss of accuracy. An important benefit of our approach is that the marginal likelihood is calculated in an online fashion as data becomes available, allowing the estimates to be used for applications such as online weighted model combination. Full article
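The sequential factorization that the approach above exploits, log p(x_{1:n}) = Σ_i log p(x_i | x_{1:i-1}), can be made concrete with a conjugate toy model whose predictive distributions are available in closed form. The Beta-Bernoulli example below is only a stand-in for the paper's SG-MCMC predictive estimates; all function names are illustrative:

```python
from math import log, lgamma

def online_log_marginal(data, a=1.0, b=1.0):
    """Online log marginal likelihood of 0/1 data under a Beta(a, b)
    prior, accumulated one observation at a time via the sequential
    factorization log p(x_1:n) = sum_i log p(x_i | x_1:i-1)."""
    logml = 0.0
    heads, tails = 0, 0
    for x in data:
        # Posterior-predictive probability of the next observation
        p_head = (a + heads) / (a + b + heads + tails)
        logml += log(p_head if x == 1 else 1.0 - p_head)
        heads += x
        tails += 1 - x
    return logml

def batch_log_marginal(data, a=1.0, b=1.0):
    """Exact Beta-Bernoulli log marginal likelihood, for comparison."""
    h = sum(data)
    t = len(data) - h
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + h) + lgamma(b + t) - lgamma(a + b + h + t))

data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
print(online_log_marginal(data), batch_log_marginal(data))  # should agree
```

The telescoping product of predictives recovers the batch marginal likelihood exactly here; SGAIS replaces the closed-form predictive with a mini-batch SG-MCMC estimate while keeping the same online structure.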

Article
Bayesian Maximum-A-Posteriori Approach with Global and Local Regularization to Image Reconstruction Problem in Medical Emission Tomography
Entropy 2019, 21(11), 1108; https://doi.org/10.3390/e21111108 - 12 Nov 2019
Cited by 3 | Viewed by 1222
Abstract
The Bayesian maximum a posteriori (MAP) approach provides a common basis for developing statistical methods for solving ill-posed image reconstruction problems. MAP solutions depend on the a priori model. Approaches developed in the literature are based on prior models that describe the properties of the expected image rather than the properties of the studied object. In this paper, such models are analyzed and it is shown that they lead to global regularization of the solution. Prior models based on the properties of the object under study are developed, and conditions for local and global regularization are obtained. A new reconstruction algorithm has been developed based on the method of local statistical regularization. Algorithms with global and local regularization were compared in numerical simulations. The simulations were performed under conditions close to a real oncologic single photon emission computed tomography (SPECT) study. It is shown that the approach with local regularization produces more accurate images of ‘hot spots’, which is especially important for tumor diagnostics in nuclear oncology. Full article

Article
Hypergraph Contextuality
Entropy 2019, 21(11), 1107; https://doi.org/10.3390/e21111107 - 12 Nov 2019
Cited by 4 | Viewed by 1231
Abstract
Quantum contextuality is a source of quantum computational power and a theoretical delimiter between classical and quantum structures. It has been substantiated by numerous experiments and has prompted the generation of state-independent contextual sets, that is, sets of quantum observables capable of revealing quantum contextuality for any quantum state of a given dimension. There are two major classes of state-independent contextual sets—the Kochen–Specker ones and the operator-based ones. In this paper, we present a third, hypergraph-based class of contextual sets. Hypergraph inequalities serve as a measure of contextuality. We limit ourselves to qutrits and obtain thousands of 3-dim contextual sets. The simplest of them involves only 5 quantum observables, thus enabling a straightforward implementation. They also enable the establishment of new entropic contextualities. Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)

Article
Application of a Novel Adaptive MED Fault Diagnosis Method in Gearboxes
Entropy 2019, 21(11), 1106; https://doi.org/10.3390/e21111106 - 12 Nov 2019
Cited by 2 | Viewed by 1042
Abstract
Minimum entropy deconvolution (MED) is not effective at extracting fault features in strong-noise environments, which can easily lead to misdiagnosis. Moreover, the noise reduction effect of MED is affected by the size of the filter, which cannot adapt to different vibration signals. In order to improve the efficiency of MED fault feature extraction, this paper proposes a firefly optimization algorithm (FA) to improve the MED fault diagnosis method. First, the original vibration signal is decomposed by white-noise-assisted singular spectral decomposition (SSD), and the resulting signal components are divided into residual components and noisy components by a detrended fluctuation analysis (DFA) algorithm; the noisy components are then preprocessed by an autoregressive (AR) model. Second, the envelope spectral entropy is proposed as the fitness function of the FA, and the filter size of MED is optimized by the FA. Finally, the preprocessed signal is denoised and its pulses enhanced with the proposed adaptive MED. The new method is validated by simulation experiments and practical engineering cases. The application results show that this method overcomes the shortcomings of MED and can extract fault features more effectively than the traditional MED method. Full article

Article
Modeling the Disorder of Closed System by Multi-Agent Based Simulation
Entropy 2019, 21(11), 1105; https://doi.org/10.3390/e21111105 - 12 Nov 2019
Cited by 1 | Viewed by 1183
Abstract
Mess, or disorder, carries many different meanings, most of them from the philosophical, social, and medical sciences. In this paper, we present an engineering perspective on the concept of disorder. We propose a mathematical model describing the effects and consequences of the process of making a mess. We use multi-agent modeling, with several independent agents capable of making decisions; each agent can communicate and perceive in pursuit of its own aim. We use an n × n square grid with objects that agents can move to other places. The degree of disorder of the system is measured by the value of its entropy. Using computer simulation, we investigate the time needed to find a desired item in an environment in which agents (in real life, people) co-exist and have different tendencies toward tidiness. The cost of mess is counted as the number of attempts to access an object in the analyzed system and the time needed to locate it. Full article
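One simple way to score the disorder of such a grid with Shannon entropy (a sketch of the idea, not necessarily the authors' exact measure) is to histogram object labels over coarse regions of the grid, so that grouped objects score low and scattered ones score high:

```python
from collections import Counter
from math import log2

def grid_entropy(grid):
    """Shannon entropy (bits) of object labels histogrammed over the
    four quadrants of an n x n grid -- one simple disorder score."""
    half = len(grid) // 2
    counts = Counter()
    for r, row in enumerate(grid):
        for c, label in enumerate(row):
            if label is not None:
                counts[(label, r >= half, c >= half)] += 1
    total = sum(counts.values())
    h = 0.0
    for count in counts.values():
        p = count / total
        h -= p * log2(p)
    return h

tidy = [['a', 'a', None, None],
        ['a', 'a', None, None],
        [None, None, 'b', 'b'],
        [None, None, 'b', 'b']]
messy = [['a', 'b', None, 'a'],
         [None, 'a', 'b', None],
         ['b', None, 'a', 'b'],
         ['a', None, 'b', None]]
print(grid_entropy(tidy), grid_entropy(messy))  # tidy scores lower
```

As agents scatter objects across the grid, a score like this rises, matching the intuition that searching the messy state takes longer.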

Article
An Entropy-Based Failure Prediction Model for the Creep and Fatigue of Metallic Materials
Entropy 2019, 21(11), 1104; https://doi.org/10.3390/e21111104 - 12 Nov 2019
Cited by 18 | Viewed by 1524
Abstract
It is well accepted that the second law of thermodynamics describes an irreversible process, which is reflected by an entropy increase. Irreversible creep and fatigue damage can likewise be represented by a gradually increasing damage parameter. In the current study, an entropy-based failure prediction model for creep and fatigue is proposed, based on Boltzmann's probabilistic entropy theory and continuum damage mechanics. A new method to determine the entropy increase rate for creep and fatigue processes is proposed. The relationship between the entropy increase rate during the creep process and the normalized creep failure time is developed and compared with experimental results. An empirical formula is proposed to describe the evolution law of the entropy increase rate with normalized creep time. An entropy-based model is developed to predict the change of creep strain during the damage process. Experimental results for metals and alloys at different stresses and temperatures are adopted to verify the proposed model. The theoretical predictions agree well with the experimental data. Full article

Article
Voronoi Decomposition of Cardiovascular Dependency Structures in Different Ambient Conditions: An Entropy Study
Entropy 2019, 21(11), 1103; https://doi.org/10.3390/e21111103 - 11 Nov 2019
Cited by 5 | Viewed by 1319
Abstract
This paper proposes a method that maps the coupling strength of an arbitrary number of signals D, D ≥ 2, into a single time series. It is motivated by the inability of multiscale entropy to jointly analyze more than two signals. The coupling strength is determined using the copula density defined over a [0, 1]^D copula domain. The copula domain is decomposed into Voronoi regions, with volumes inversely proportional to the dependency level (coupling strength) of the observed joint signals. A stream of dependency levels, ordered in time, creates a new time series that shows the fluctuation of the signals’ coupling strength along the time axis. The composite multiscale entropy (CMSE) is then applied to three signals, systolic blood pressure (SBP), pulse interval (PI), and body temperature (tB), simultaneously recorded from rats exposed to different ambient temperatures (tA). The obtained results are consistent with those of the classical studies, and the method itself offers more degrees of freedom than the classical analysis. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
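The "composite" part of CMSE refers to averaging an entropy estimate over all scale-shifted coarse-grainings of a series. The sketch below shows only that coarse-graining step, under the usual CMSE definition; the entropy estimate itself (e.g., sample entropy) is omitted:

```python
def coarse_grain(series, scale, offset=0):
    """One coarse-grained series: means of consecutive non-overlapping
    windows of length `scale`, starting at `offset`."""
    return [sum(series[s:s + scale]) / scale
            for s in range(offset, len(series) - scale + 1, scale)]

def composite_grains(series, scale):
    """All `scale` shifted coarse-grainings; CMSE averages an entropy
    estimate over these, which this sketch omits."""
    return [coarse_grain(series, scale, k) for k in range(scale)]

x = [1, 3, 5, 7, 9, 11]
print(coarse_grain(x, 2))       # non-overlapping pairwise means
print(composite_grains(x, 3))   # three shifted grainings at scale 3
```

Averaging over the shifted grainings, rather than using only the offset-0 one, is what stabilizes the entropy estimates at large scales on short records.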
