
Table of Contents

Entropy, Volume 21, Issue 11 (November 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Open Access Article
Solvability of the p-adic Analogue of Navier–Stokes Equation via the Wavelet Theory
Entropy 2019, 21(11), 1129; https://doi.org/10.3390/e21111129 - 17 Nov 2019
Abstract
P-adic numbers serve as the simplest ultrametric model for the tree-like structures arising in various physical and biological phenomena. Recently, p-adic dynamical equations have started to be applied in geophysics to model the propagation of fluids (oil, water, and oil-in-water and water-in-oil emulsions) in capillary networks in porous random media. In particular, a p-adic analog of the Navier–Stokes equation was derived starting from a system of differential equations respecting the hierarchic structure of a capillary tree. In this paper, using the Schauder fixed point theorem together with wavelet functions, we extend the study of the solvability of a p-adic field analog of the Navier–Stokes equation derived from a system of hierarchic equations for fluid flow in a capillary network in a porous medium. This equation describes the propagation of fluid flow through geo-conduits, consisting of a mixture of fractures (as well as fracture corridors) and capillary networks, detected by seismic methods as joint wave/mass conduits. Furthermore, applying the Adomian decomposition method, we formulate the solution of the p-adic analog of the Navier–Stokes equation in terms of a series in general form. This solution may help researchers come closer to the facts, taking into consideration the scaling, hierarchies, and formal derivations imprinted from the analogous aspects of real-world phenomena. Full article
(This article belongs to the Section Multidisciplinary Applications)
Open Access Article
Complexity-Based Measures of Postural Sway during Walking at Different Speeds and Durations Using Multiscale Entropy
Entropy 2019, 21(11), 1128; https://doi.org/10.3390/e21111128 - 16 Nov 2019
Abstract
Participation in various physical activities requires successful postural control in response to the changes in position of our body. It is important to assess postural control for early detection of falls and foot injuries. Walking at various speeds and for various durations is essential in daily physical activities. The purpose of this study was to evaluate the changes in complexity of the center of pressure (COP) during walking at different speeds and for different durations. In this study, a total of 12 participants were recruited for walking at two speeds (slow at 3 km/h and moderate at 6 km/h) for two durations (10 and 20 min). An insole-type plantar pressure measurement system was used to measure and calculate COP as participants walked on a treadmill. Multiscale entropy (MSE) was used to quantify the complexity of COP. Our results showed that the complexity of COP significantly decreased (p < 0.05) after 20 min of walking (complexity index, CI = −3.51) compared to 10 min of walking (CI = −3.20) while walking at 3 km/h, but not at 6 km/h. Our results also showed that the complexity index of COP indicated a significant difference (p < 0.05) between walking at speeds of 3 km/h (CI = −3.2) and 6 km/h (CI = −3.6) at the walking duration of 10 min, but not at 20 min. This study demonstrated an interaction between walking speeds and walking durations on the complexity of COP. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
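The complexity index (CI) reported above comes from multiscale entropy. As a rough orientation, the sketch below shows the textbook construction (coarse-graining followed by sample entropy at each scale, with CI taken as the area under the MSE curve); the parameters m = 2 and r = 0.15·SD, the scale range, and the random placeholder data are illustrative assumptions rather than the study's settings, whose sign convention for CI also appears to differ.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Naive O(N^2) sample entropy SampEn(m, r) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    def count_pairs(length):
        T = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(T)) / 2.0   # count template pairs, excluding self-matches
    b, a = count_pairs(m), count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2):
    """Coarse-grain the series at scales 1..max_scale and compute SampEn at each scale."""
    x = np.asarray(x, dtype=float)
    r = 0.15 * np.std(x)                          # tolerance fixed from the original series
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        mse.append(sample_entropy(coarse, m=m, r=r))
    return np.array(mse)

cop = np.random.randn(3000)        # placeholder for a measured COP displacement series
ci = multiscale_entropy(cop).sum() # complexity index as the area under the MSE curve
```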
Open Access Article
Coevolutionary Analysis of Protein Subfamilies by Sequence Reweighting
Entropy 2019, 21(11), 1127; https://doi.org/10.3390/e21111127 - 16 Nov 2019
Abstract
Extracting structural information from sequence co-variation has become a common computational biology practice in recent years, mainly due to the availability of large sequence alignments of protein families. However, identifying features that are specific to sub-classes and not shared by all members of the family using sequence-based approaches has remained an elusive problem. We here present a coevolutionary-based method to differentially analyze subfamily-specific structural features by a continuous sequence reweighting (SR) approach. We introduce the underlying principles and test its predictive capabilities on the Response Regulator family, whose subfamilies have been previously shown to display distinct, specific homo-dimerization patterns. Our results show that this reweighting scheme is effective in assigning structural features known a priori to subfamilies, even when sequence data is relatively scarce. Furthermore, sequence reweighting allows assessing whether individual structural contacts pertain to specific subfamilies, and it thus paves the way for the identification of specificity-determining contacts from sequence variation data. Full article
Open Access Article
Multisensor Estimation Fusion with Gaussian Process for Nonlinear Dynamic Systems
Entropy 2019, 21(11), 1126; https://doi.org/10.3390/e21111126 - 16 Nov 2019
Abstract
The Gaussian process is gaining increasing importance in different areas such as signal processing, machine learning, robotics, control and aerospace and electronic systems, since it can represent unknown system functions by posterior probability. This paper investigates multisensor fusion in the setting of Gaussian process estimation for nonlinear dynamic systems. In order to overcome the difficulty caused by the unknown nonlinear system models, we associate the transition and measurement functions with the Gaussian process regression models, then the advantages of the non-parametric feature of the Gaussian process can be fully extracted for state estimation. Next, based on the Gaussian process filters, we propose two different fusion methods, centralized estimation fusion and distributed estimation fusion, to utilize the multisensor measurement information. Furthermore, the equivalence of the two proposed fusion methods is established by rigorous analysis. Finally, numerical examples for nonlinear target tracking systems demonstrate the equivalence and show that the multisensor estimation fusion performs better than the single sensor. Meanwhile, the proposed fusion methods outperform the convex combination method and the relaxed Chebyshev center covariance intersection fusion algorithm. Full article
(This article belongs to the Section Signal and Data Analysis)
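As background for the non-parametric models mentioned above, the following is a minimal sketch of plain Gaussian process regression, the building block used to represent the unknown transition and measurement functions; the squared-exponential kernel, the hyperparameters, and the toy data are assumptions for illustration, and the GP filtering and fusion machinery of the paper is not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential kernel k(a, b) = s^2 exp(-||a - b||^2 / (2 l^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise_var=1e-2, **kern):
    """Posterior mean and variance of a GP regression model at the test inputs."""
    K = rbf_kernel(X_train, X_train, **kern) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test, **kern)
    Kss = rbf_kernel(X_test, X_test, **kern)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.diag(cov)

# learn the (unknown) transition function x_{k+1} = f(x_k) from recorded state pairs
X_prev = np.random.randn(50, 1)                      # hypothetical recorded states
X_next = np.sin(X_prev) + 0.1 * np.random.randn(50, 1)
mu, var = gp_predict(X_prev, X_next.ravel(), np.array([[0.3]]))
```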
Open Access Article
Sub-Graph Regularization on Kernel Regression for Robust Semi-Supervised Dimensionality Reduction
Entropy 2019, 21(11), 1125; https://doi.org/10.3390/e21111125 - 15 Nov 2019
Abstract
Dimensionality reduction has always been a major problem when handling high-dimensional datasets. Due to the utilization of labeled data, supervised dimensionality reduction methods such as Linear Discriminant Analysis tend to achieve better classification performance than unsupervised methods. However, supervised methods need sufficient labeled data in order to achieve satisfying results. Therefore, semi-supervised learning (SSL) methods, which can also exploit unlabeled data, can be a practical choice. In this paper, we develop a novel SSL method by extending anchor graph regularization (AGR) for dimensionality reduction. In detail, AGR is an accelerated semi-supervised learning method for propagating class labels to unlabeled data. However, it cannot handle new incoming samples. We thereby improve AGR by adding kernel regression to the basic objective function of AGR. Therefore, the proposed method can not only estimate the class labels of unlabeled data but also achieve dimensionality reduction. Extensive simulations on several benchmark datasets are conducted, and the simulation results verify the effectiveness of the proposed work. Full article
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
Open Access Article
Transfer Entropy between Communities in Complex Financial Networks
Entropy 2019, 21(11), 1124; https://doi.org/10.3390/e21111124 - 15 Nov 2019
Abstract
In this paper, we analyze information flows between communities of financial markets, represented as complex networks. Each community, typically corresponding to a business sector, represents a significant part of the financial market, and the detection of interactions between communities is crucial in the analysis of risk spreading in the financial markets. We show that the transfer entropy provides a coherent description of information flows in and between communities, also capturing non-linear interactions. In particular, we focus on information transfer of rare events—typically large drops which can spread in the network. These events can be analyzed by Rényi transfer entropy, which makes it possible to accentuate particular types of events. We analyze transfer entropies between communities of the five largest financial markets and compare the information flows with the correlation network of each market. From the transfer entropy picture, we can also identify the non-linear interactions, which are typical in the case of extreme events. The strongest flows can typically be observed between specific types of business sectors—the financial sector is the most significant example. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
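For readers who want the basic quantity behind this analysis, the sketch below estimates Shannon transfer entropy with one-step lags from binned histograms; the bin count, the lag, and the random placeholder series are illustrative assumptions, and the Rényi variant used for rare events simply replaces the Shannon terms with Rényi entropies of order q (not shown here).

```python
import numpy as np

def transfer_entropy(source, target, bins=8):
    """Histogram estimate of TE(source -> target) with one-step lags (in bits)."""
    s, t = np.asarray(source), np.asarray(target)
    x_next, x_now, y_now = t[1:], t[:-1], s[:-1]
    def disc(v):
        # discretize a series into equal-width bins indexed 0..bins-1
        edges = np.histogram_bin_edges(v, bins=bins)
        return np.clip(np.digitize(v, edges[1:-1]), 0, bins - 1)
    xn, xo, yo = disc(x_next), disc(x_now), disc(y_now)
    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(xn, xo, yo):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_xo_yo = joint.sum(axis=0)      # p(x_t, y_t)
    p_xn_xo = joint.sum(axis=2)      # p(x_{t+1}, x_t)
    p_xo = joint.sum(axis=(0, 2))    # p(x_t)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p = joint[a, b, c]
                if p > 0 and p_xn_xo[a, b] > 0 and p_xo_yo[b, c] > 0 and p_xo[b] > 0:
                    te += p * np.log2(p * p_xo[b] / (p_xn_xo[a, b] * p_xo_yo[b, c]))
    return te

# e.g. aggregate log-returns of two sector communities (placeholders)
te_ab = transfer_entropy(np.random.randn(5000), np.random.randn(5000))
```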
Open Access Article
A Novel Residual Dense Pyramid Network for Image Dehazing
Entropy 2019, 21(11), 1123; https://doi.org/10.3390/e21111123 - 15 Nov 2019
Abstract
Recently, convolutional neural networks (CNNs) based on the encoder-decoder structure have been successfully applied to image dehazing. However, these CNN-based dehazing methods have two limitations: First, these dehazing models are large in size with enormous parameters, which not only consume much GPU memory but are also hard to train from scratch. Second, these models, which ignore the structural information at different resolutions of intermediate layers, cannot capture informative texture and edge information for dehazing by stacking more layers. In this paper, we propose a lightweight end-to-end network named the residual dense pyramid network (RDPN) to address the above problems. To fully exploit the structural information at different resolutions of intermediate layers, a new residual dense pyramid (RDP) is proposed as a building block. By introducing a dense information fusion layer and the residual learning module, the RDP can maximize the information flow and extract local features. The RDP also learns the structural information from intermediate layers via a multiscale pyramid fusion mechanism. To reduce the number of network parameters and to ease the training process, we use one RDP in the encoder and two RDPs in the decoder, followed by a multilevel pyramid pooling layer for incorporating global context features before estimating the final result. The extensive experimental results on a synthetic dataset and real-world images demonstrate that the new RDPN achieves favourable performance compared with some state-of-the-art methods, e.g., the recent densely connected pyramid dehazing network, the all-in-one dehazing network, the enhanced pix2pix dehazing network, pixel-based alpha blending, artificial multi-exposure image fusion and the genetic programming estimator, in terms of accuracy, run time and number of parameters. To be specific, RDPN outperforms all of the above methods in terms of PSNR by at least 4.25 dB. The run time of the proposed method is 0.021 s, and the number of parameters is 1,534,799, only 6% of that used by the densely connected pyramid dehazing network. Full article
(This article belongs to the Section Signal and Data Analysis)
Open Access Article
An Improved Belief Entropy to Measure Uncertainty of Basic Probability Assignments Based on Deng Entropy and Belief Interval
Entropy 2019, 21(11), 1122; https://doi.org/10.3390/e21111122 - 15 Nov 2019
Abstract
Measuring the uncertainty of the basic probability assignment function under the Dempster-Shafer theory framework is still an open issue, and it is the foundation and preliminary work for conflict degree measurement and the combination of evidence. This paper proposes an improved belief entropy to measure the uncertainty of the basic probability assignment based on Deng entropy and the belief interval, which takes the belief function and the plausibility function as the lower bound and the upper bound, respectively. Specifically, the center and the span of the belief interval are employed to define the total uncertainty degree. It can be proved that the improved belief entropy degenerates to Shannon entropy when the basic probability assignment is Bayesian. The results of numerical examples and a case study show that it is more efficient and flexible than previous uncertainty measures. Full article
(This article belongs to the Section Entropy and Biology)
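The two ingredients named in the abstract, Deng entropy and the belief interval [Bel(A), Pl(A)], can be computed directly from a basic probability assignment, as in the sketch below; the example BPA is made up, and the paper's specific combination of the interval's center and span into the improved measure is not reproduced here.

```python
from itertools import combinations
import numpy as np

# A basic probability assignment (BPA) over the frame {a, b, c}: focal sets -> masses
bpa = {frozenset('a'): 0.5, frozenset({'b', 'c'}): 0.3, frozenset('abc'): 0.2}

def deng_entropy(m):
    """Deng entropy: -sum_A m(A) * log2( m(A) / (2^|A| - 1) )."""
    return -sum(v * np.log2(v / (2**len(A) - 1)) for A, v in m.items() if v > 0)

def belief_interval(m, A):
    """Return [Bel(A), Pl(A)] for a subset A of the frame of discernment."""
    A = frozenset(A)
    bel = sum(v for B, v in m.items() if B <= A)   # masses of subsets of A
    pl = sum(v for B, v in m.items() if B & A)     # masses of sets intersecting A
    return bel, pl

print(deng_entropy(bpa))
print(belief_interval(bpa, {'b'}))
```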
Open Access Article
Distribution Structure Learning Loss (DSLL) Based on Deep Metric Learning for Image Retrieval
Entropy 2019, 21(11), 1121; https://doi.org/10.3390/e21111121 - 15 Nov 2019
Abstract
The massive number of images demands highly efficient image retrieval tools. Deep distance metric learning (DDML) is proposed to learn image similarity metrics in an end-to-end manner based on the convolutional neural network, which has achieved encouraging results. The loss function is crucial in DDML frameworks. However, we found limitations in this model. When learning the similarity of positive and negative examples, the current methods aim to pull positive pairs as close as possible and separate negative pairs into equal distances in the embedding space. Consequently, the data distribution might be ignored. In this work, we focus on the distribution structure learning loss (DSLL) algorithm, which aims to preserve the geometric information of images. To achieve this, we first propose metric distance learning for highly matching figures to preserve the similarity structure inside them. Second, we introduce an entropy weight-based structural distribution to set the weight of the representative negative samples. Third, we incorporate their weights into the process of learning to rank, so that the negative samples can preserve the consistency of their structural distribution. Finally, we present comprehensive experimental results on three popular landmark building datasets and demonstrate that our method achieves state-of-the-art performance. Full article
Open Access Article
Universal Sample Size Invariant Measures for Uncertainty Quantification in Density Estimation
Entropy 2019, 21(11), 1120; https://doi.org/10.3390/e21111120 - 15 Nov 2019
Abstract
Previously, we developed a high throughput non-parametric maximum entropy method (PLOS ONE, 13(5): e0196937, 2018) that employs a log-likelihood scoring function to characterize uncertainty in trial probability density estimates through a scaled quantile residual (SQR). The SQR for the true probability density has universal sample size invariant properties equivalent to sampled uniform random data (SURD). Alternative scoring functions are considered that include the Anderson-Darling test. Scoring function effectiveness is evaluated using receiver operating characteristics to quantify efficacy in discriminating SURD from decoy-SURD, and by comparing overall performance characteristics during density estimation across a diverse test set of known probability distributions. Full article
(This article belongs to the Special Issue Data Science: Measuring Uncertainties)
Open Access Article
Uncovering the Dependence of Cascading Failures on Network Topology by Constructing Null Models
Entropy 2019, 21(11), 1119; https://doi.org/10.3390/e21111119 - 15 Nov 2019
Abstract
Cascading failures are a significant cause of network breakdowns in a variety of complex infrastructure systems. Given such a system, uncovering the dependence of cascading failures on its underlying topology is essential but still not well explored in the field of complex networks. This study offers an original approach to systematically investigate the association between cascading failures and topological variation occurring in realistic complex networks by constructing different types of null models. As an example of its application, we study several standard Internet networks in detail. The null models first transform the original network into a series of randomized networks representing alternate realistic topologies, while taking its basic topological characteristics into account. Then, considering the routing rule of shortest-path flow, we determine the implications of different topological circumstances; the findings reveal the effects of micro-scale (such as degree distribution, assortativity, and transitivity) and meso-scale (such as rich-club and community structure) features on the cascade damage caused by deliberate node attacks. Our results demonstrate that the proposed method is suitable and promising for comprehensively analyzing the realistic influence of various topological properties, providing insight into designing networks that are more robust against cascading failures. Full article
(This article belongs to the Special Issue Computation in Complex Networks)
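As a concrete illustration of the null-model idea, the sketch below builds one common randomized null model that preserves only the degree sequence (degree-preserving edge rewiring) and compares a topological property against the original graph; the generated graph, the swap budget, and the use of transitivity as the compared property are illustrative assumptions, not the paper's actual Internet datasets or its full family of null models.

```python
import networkx as nx

# original topology (illustrative; the paper uses real Internet networks)
G = nx.barabasi_albert_graph(500, 2, seed=1)

# degree-preserving null model: rewire edges while keeping every node's degree
null = G.copy()
nx.double_edge_swap(null, nswap=10 * null.number_of_edges(), max_tries=10**6)

# compare a structural property between the real network and its null model
print(nx.transitivity(G), nx.transitivity(null))
```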
Open Access Article
A Note on Graphs with Prescribed Orbit Structure
Entropy 2019, 21(11), 1118; https://doi.org/10.3390/e21111118 - 15 Nov 2019
Abstract
This paper presents a proof of the existence of connected, undirected graphs with prescribed orbit structure, giving an explicit construction procedure for these graphs. Trees with prescribed orbit structure are also investigated. Full article
(This article belongs to the Section Multidisciplinary Applications)
Open Access Article
Dissipative Endoreversible Engine with Given Efficiency
Entropy 2019, 21(11), 1117; https://doi.org/10.3390/e21111117 - 15 Nov 2019
Abstract
Endoreversible thermodynamics is a finite time thermodynamics ansatz based on the assumption that reversible or equilibrated subsystems of a system interact via reversible or irreversible energy transfers. This gives a framework where irreversibilities and thus entropy production only occur in interactions, while subsystems (engines, for instance) act as reversible. In order to give an opportunity to incorporate dissipative engines with given efficiencies into an endoreversible model, we build a new dissipative engine setup. To do this, in the first step, we introduce a more general interaction type where energy loss not only results from different intensive quantities between the connected subsystems, which has been the standard in endoreversible thermodynamics up to now, but is also caused by an actual loss of the extensive quantity that is transferred via this interaction. On the one hand, this allows the modeling of leakages and friction losses, for instance, which can be represented as leaky particle or torque transfers. On the other hand, we can use it to build an endoreversible engine setup that is suitable to model engines with given efficiencies or efficiency maps and, among other things, gives an expression for their entropy production rates. By way of example, the modeling of an AC motor and its loss fluxes and entropy production rates are shown. Full article
(This article belongs to the Section Thermodynamics)
Open Access Article
Information Flow between Bitcoin and Other Investment Assets
Entropy 2019, 21(11), 1116; https://doi.org/10.3390/e21111116 - 14 Nov 2019
Abstract
This paper studies the causal relationship between Bitcoin and other investment assets. We first test Granger causality and then calculate transfer entropy as an information-theoretic approach. Unlike the Granger causality test, we discover that transfer entropy clearly identifies causal interdependency between Bitcoin and other assets, including gold, stocks, and the U.S. dollar. However, for symbolic transfer entropy, the dynamic rise–fall pattern in return series shows an asymmetric information flow from other assets to Bitcoin. Our results imply that the Bitcoin market actively interacts with major asset markets, and its long-term equilibrium, as a nascent market, gradually synchronizes with that of other investment assets. Full article
(This article belongs to the Section Multidisciplinary Applications)
Open Access Article
Complex Chaotic Attractor via Fractal Transformation
Entropy 2019, 21(11), 1115; https://doi.org/10.3390/e21111115 - 14 Nov 2019
Abstract
Based on simplified Lorenz multiwing and Chua multiscroll chaotic systems, a rotation compound chaotic system is presented via transformation. Based on a binary fractal algorithm, a new ternary fractal algorithm is proposed. In the ternary fractal algorithm, the number of input sequences is extended from 2 to 3, which means the chaotic attractor with fractal transformation can be presented in three-dimensional space. Taking the Lorenz system, the rotation Lorenz system, and the compound chaotic system as the seed chaotic systems, the dynamics of the complex chaotic attractors with fractal transformation are analyzed by means of bifurcation diagrams, complexity, and power spectra, and the results show that the chaotic sequences with fractal transformation have higher complexity. As experimental verification, one kind of complex chaotic attractor is implemented on a DSP, and the result is consistent with that of the simulation, which verifies the feasibility of the digital circuit implementation. Full article
(This article belongs to the Section Complexity)
Open Access Article
Recognition of a Single Dynamic Gesture with the Segmentation Technique HS-ab and Principle Components Analysis (PCA)
Entropy 2019, 21(11), 1114; https://doi.org/10.3390/e21111114 - 14 Nov 2019
Abstract
A continuous path performed by the hand over a period of time is considered for the purpose of gesture recognition. Dynamic gesture recognition is a complex topic, since it spans from the conventional task of separating the hand from the surrounding environment to searching for the fingers and palm. This paper proposes a strategy of hand recognition using a PC webcam, a segmentation technique (HS-ab, which combines the HSV and CIELab color spaces), pre-processing of images to reduce noise, and a classifier such as Principle Components Analysis (PCA) for the detection and tracking of the user's hand. The results show that the segmentation technique HS-ab and the PCA method are robust in the execution of the system, even under varying conditions of illumination, speed, and precision of the movements. For this reason, suitable feature extraction and classification allow the gesture to be located. The system was tested against the database of training images and achieved 94.74% accuracy. Full article
(This article belongs to the Special Issue Entropy-Based Algorithms for Signal Processing)
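As a rough sketch of the recognition stage, the code below projects flattened (already segmented) hand images onto a PCA subspace and classifies a new frame by its nearest neighbour there; the image size, number of components, nearest-neighbour rule, and random placeholder data are assumptions for illustration, and the HS-ab segmentation itself is not reproduced.

```python
import numpy as np

def fit_pca(X, n_components=20):
    """PCA via SVD: return the mean and the top principal axes (as rows)."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, axes):
    return (X - mean) @ axes.T

# X_train: flattened, segmented hand images (one row per frame), with a gesture label each
X_train = np.random.rand(200, 64 * 64)           # placeholder training images
labels = np.random.randint(0, 5, 200)
mean, axes = fit_pca(X_train)
Z_train = project(X_train, mean, axes)

# classify a new frame by its nearest neighbour in the PCA subspace
x_new = np.random.rand(1, 64 * 64)
z_new = project(x_new, mean, axes)
pred = labels[np.argmin(np.linalg.norm(Z_train - z_new, axis=1))]
```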
Open Access Article
Information Theoretic Modeling of High Precision Disparity Data for Lossy Compression and Object Segmentation
Entropy 2019, 21(11), 1113; https://doi.org/10.3390/e21111113 - 13 Nov 2019
Abstract
In this paper, we study the geometry data associated with disparity map or depth map images in order to extract easy-to-compress polynomial surface models at different bitrates, proposing an efficient mining strategy for geometry information. The segmentation, or partition of the image pixels, is viewed as a model structure selection problem, where the decisions are based on the implementable codelength of the model, akin to minimum description length for lossy representations. The intended usage of the extracted disparity map is, first, to provide to the decoder the geometry information at a very small fraction of what is required for a losslessly compressed version, and secondly, to convey to the decoder a segmentation describing the contours of the objects from the scene. We propose first an algorithm for constructing a hierarchical segmentation based on the persistency of the contours of regions in an iterative re-estimation algorithm. Then, we propose a second algorithm for constructing a new sequence of segmentations, by selecting the order in which the persistent contours are included in the model, driven by decisions based on the descriptive codelength. We consider real disparity datasets which have the geometry information at a high precision, in floating point format, but for which encoding of the raw information, at about 32 bits per pixel, is too expensive, and we then demonstrate good approximations preserving the object structure of the scene, achieved for rates below 0.2 bits per pixel. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
Open Access Article
On Integrating Size and Shape Distributions into a Spatio-Temporal Information Entropy Framework
Entropy 2019, 21(11), 1112; https://doi.org/10.3390/e21111112 - 13 Nov 2019
Abstract
Understanding the structuration of spatio-temporal information is a common endeavour in many disciplines and application domains, e.g., geography, ecology, urban planning, and epidemiology. Revealing the processes involved, in relation to one or more phenomena, is often the first step before elaborating spatial functioning theories and specific planning actions, e.g., epidemiological modelling or urban planning. To do so, the spatio-temporal distributions of variables that are meaningful from a decision-making viewpoint can be explored and analysed, separately or jointly, from an information viewpoint. Metrics based on the measure of entropy have long been used in these domains with the aim of quantifying how uniform the distributions are. However, the level of embedding of the spatio-temporal dimension in the metrics used is often minimal. This paper borrows from the landscape ecology concept of patch size distribution and from the permutation entropy approach used in biomedical signal processing to derive a spatio-temporal entropy analysis framework for categorical variables. The framework is based on a spatio-temporal structuration of the information that allows a decomposition of the Shannon entropy, which can also embrace some existing spatial or temporal entropy indices to reinforce the spatio-temporal structuration. Multiway correspondence analysis is coupled with the entropy decomposition to propose further decomposition and entropy quantification of the spatio-temporal structuring information. The flexibility of these different choices, including geographic scales, allows a range of domains to take the specifics of their data into account; some of these choices are explored on a dataset linked to climate change and the evolution of land cover types in Nordic areas. Full article
(This article belongs to the Special Issue Spatial Information Theory)
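For reference, the permutation entropy that the framework borrows from biomedical signal processing is sketched below in its standard Bandt–Pompe form; the order, delay, and normalization are common defaults rather than the paper's settings, whose spatio-temporal decomposition goes well beyond this one-dimensional measure.

```python
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Shannon entropy of ordinal patterns of length `order` (Bandt-Pompe)."""
    x = np.asarray(x)
    patterns = [tuple(np.argsort(x[i:i + order * delay:delay]))
                for i in range(len(x) - (order - 1) * delay)]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    h = -np.sum(p * np.log(p))
    return h / np.log(factorial(order)) if normalize else h

print(permutation_entropy(np.random.randn(1000)))
```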
Open Access Article
Impact of Investor Behavior and Stock Market Liquidity: Evidence from China
Entropy 2019, 21(11), 1111; https://doi.org/10.3390/e21111111 - 13 Nov 2019
Abstract
Investor behavior is one of the important factors that affect market liquidity, and it is of great interest to find out how it does so. Changes in investor sentiment and investors' information cognitive ability affect not only their expected returns but also market liquidity through short-selling-constrained market behavior. This paper builds a comprehensive index of investor sentiment based on the entropy method. From the empirical analysis based on evidence from China, we obtain the following results: investor sentiment has a positive impact on market liquidity; the development of margin trading has curbed the positive impact of investor sentiment on market liquidity; information cognitive ability has a negative impact on market liquidity; and explosive information volume enhances market liquidity in bull markets, weakens it in bear markets, and has no significant impact during market shocks. Full article
(This article belongs to the Section Multidisciplinary Applications)
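The phrase "comprehensive index of investor sentiment based on the entropy method" usually refers to the entropy weight method; a minimal sketch is given below, assuming min-max-normalized sentiment proxies as columns. The proxies and data are placeholders rather than the variables actually used in the paper.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: columns are sentiment proxies, rows are time periods."""
    # min-max normalize each indicator to [0, 1], then convert to column-wise proportions
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    P = (Xn + 1e-12) / (Xn + 1e-12).sum(axis=0)
    n = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy of each indicator
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()                             # normalized weights

# hypothetical monthly sentiment proxies: turnover, new accounts, margin balance, ...
X = np.random.rand(120, 4)
w = entropy_weights(X)
sentiment_index = ((X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)) @ w
```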
Open Access Article
Radiomics Analysis on Contrast-Enhanced Spectral Mammography Images for Breast Cancer Diagnosis: A Pilot Study
Entropy 2019, 21(11), 1110; https://doi.org/10.3390/e21111110 - 13 Nov 2019
Abstract
Contrast-enhanced spectral mammography is one of the latest diagnostic tools for breast care; therefore, the literature on radiomics image analysis that could drive the development of automatic diagnostic support systems is still sparse. In this work, we propose a preliminary exploratory analysis to evaluate the impact of different sets of textural features in the discrimination of benign and malignant breast lesions. The analysis is performed on 55 ROIs extracted from 51 patients referred to the Istituto Tumori “Giovanni Paolo II” of Bari (Italy) from the breast cancer screening phase between March 2017 and June 2018. We extracted feature sets by calculating statistical measures on the original ROIs, gradiented images, Haar decompositions of the same original ROIs, and on gray-level co-occurrence matrices of each sub-ROI obtained by the Haar transform. First, we evaluated the overall impact of each feature set on the diagnosis through a principal component analysis by training a support vector machine classifier. Then, in order to identify a sub-set of each feature set with higher diagnostic power, we developed a feature importance analysis by means of wrapper and embedded methods. Finally, we trained an SVM classifier on each sub-set of previously selected features to compare their classification performances with those of the overall set. We found a sub-set of significant features extracted from the original ROIs with a diagnostic accuracy greater than 80%. The features extracted from each sub-ROI decomposed by two levels of the Haar transform were predictive only when they were all used without any selection, reaching the best mean accuracy of about 80%. Moreover, most of the significant features calculated by Haar decompositions and their GLCMs were extracted from recombined CESM images. Our pilot study suggested that textural features could provide complementary information about the characterization of breast lesions. In particular, we found a sub-set of significant features extracted from the original ROIs, gradiented ROI images, and GLCMs calculated from each sub-ROI previously decomposed by the Haar transform. Full article
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
Open Access Article
Stochastic Gradient Annealed Importance Sampling for Efficient Online Marginal Likelihood Estimation
Entropy 2019, 21(11), 1109; https://doi.org/10.3390/e21111109 - 12 Nov 2019
Abstract
We consider estimating the marginal likelihood in settings with independent and identically distributed (i.i.d.) data. We propose estimating the predictive distributions in a sequential factorization of the marginal likelihood in such settings by using stochastic gradient Markov Chain Monte Carlo techniques. This approach is far more efficient than traditional marginal likelihood estimation techniques such as nested sampling and annealed importance sampling due to its use of mini-batches to approximate the likelihood. Stability of the estimates is provided by an adaptive annealing schedule. The resulting stochastic gradient annealed importance sampling (SGAIS) technique, which is the key contribution of our paper, enables us to estimate the marginal likelihood of a number of models considerably faster than traditional approaches, with no noticeable loss of accuracy. An important benefit of our approach is that the marginal likelihood is calculated in an online fashion as data becomes available, allowing the estimates to be used for applications such as online weighted model combination. Full article
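The sequential factorization referred to above is the standard prequential decomposition of the marginal likelihood; written out, together with the Monte Carlo approximation of each predictive term that posterior samples (e.g. from SG-MCMC) would plug into, it reads as below. The number of samples S and the notation are ours, and the paper's adaptive annealing schedule is not shown.

```latex
\log p(y_{1:N}) \;=\; \sum_{i=1}^{N} \log p\!\left(y_i \mid y_{1:i-1}\right),
\qquad
p\!\left(y_i \mid y_{1:i-1}\right) \;\approx\; \frac{1}{S}\sum_{s=1}^{S} p\!\left(y_i \mid \theta^{(s)}\right),
\quad \theta^{(s)} \sim p\!\left(\theta \mid y_{1:i-1}\right).
```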
Open Access Article
Bayesian Maximum-A-Posteriori Approach with Global and Local Regularization to Image Reconstruction Problem in Medical Emission Tomography
Entropy 2019, 21(11), 1108; https://doi.org/10.3390/e21111108 - 12 Nov 2019
Abstract
The Bayesian maximum a posteriori (MAP) approach provides a common basis for developing statistical methods for solving ill-posed image reconstruction problems. MAP solutions depend on the a priori model. Approaches developed in the literature are based on prior models that describe the properties of the expected image rather than the properties of the studied object. In this paper, such models have been analyzed and it is shown that they lead to global regularization of the solution. Prior models that are based on the properties of the object under study are developed, and conditions for local and global regularization are obtained. A new reconstruction algorithm has been developed based on the method of local statistical regularization. Algorithms with global and local regularization were compared in numerical simulations. The simulations were performed close to a real oncologic single-photon emission computed tomography (SPECT) study. It is shown that the approach with local regularization produces more accurate images of ‘hot spots’, which is especially important for tumor diagnostics in nuclear oncology. Full article
Open Access Article
Hypergraph Contextuality
Entropy 2019, 21(11), 1107; https://doi.org/10.3390/e21111107 - 12 Nov 2019
Abstract
Quantum contextuality is a source of quantum computational power and a theoretical delimiter between classical and quantum structures. It has been substantiated by numerous experiments and has prompted the generation of state-independent contextual sets, that is, sets of quantum observables capable of revealing quantum contextuality for any quantum state of a given dimension. There are two major classes of state-independent contextual sets—the Kochen-Specker ones and the operator-based ones. In this paper, we present a third, hypergraph-based class of contextual sets. Hypergraph inequalities serve as a measure of contextuality. We limit ourselves to qutrits and obtain thousands of 3-dim contextual sets. The simplest of them involves only 5 quantum observables, thus enabling a straightforward implementation. They also enable the establishment of new entropic contextualities. Full article
(This article belongs to the Special Issue Entropy in Foundations of Quantum Physics)
Open Access Article
Application of a Novel Adaptive Med Fault Diagnosis Method in Gearboxes
Entropy 2019, 21(11), 1106; https://doi.org/10.3390/e21111106 - 12 Nov 2019
Abstract
Minimum entropy deconvolution (MED) is not effective in extracting fault features in strong noise environments, which can easily lead to misdiagnosis. Moreover, the noise reduction effect of MED is affected by the size of the filter. In the face of different vibration signals, the size of the filter is not adaptive. In order to improve the efficiency of MED fault feature extraction, this paper applies the firefly optimization algorithm (FA) to improve the MED fault diagnosis method. Firstly, the original vibration signal is stratified by white noise-assisted singular spectral decomposition (SSD), and the stratified signal components are divided into residual signal components and noisy signal components by a detrended fluctuation analysis (DFA) algorithm. Then, the noisy components are preprocessed by an autoregressive (AR) model. Secondly, the envelope spectral entropy is proposed as the fitness function of the FA algorithm, and the filter size of MED is optimized by the FA algorithm. Finally, the preprocessed signal is denoised and its impulses are enhanced with the proposed adaptive MED. The new method is validated by simulation experiments and practical engineering cases. The application results show that this method overcomes the shortcomings of MED and can extract fault features more effectively than the traditional MED method. Full article
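The envelope spectral entropy used as the fitness function can be written in a common form: take the Hilbert envelope of the signal, normalize its power spectrum into a probability distribution, and compute its Shannon entropy. The sketch below follows that recipe; the exact definition and normalization in the paper may differ, and the placeholder signal is random.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectral_entropy(x):
    """Shannon entropy of the normalized envelope spectrum (one common definition)."""
    envelope = np.abs(hilbert(x - np.mean(x)))            # Hilbert envelope of the signal
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))**2
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# a smaller value suggests more periodic impulses in the envelope (clearer fault signature);
# a firefly algorithm could search MED filter sizes for the one minimizing this value.
x = np.random.randn(4096)                                 # placeholder vibration signal
print(envelope_spectral_entropy(x))
```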
Open Access Article
Modeling the Disorder of Closed System by Multi-Agent Based Simulation
Entropy 2019, 21(11), 1105; https://doi.org/10.3390/e21111105 - 12 Nov 2019
Abstract
Mess (disorder) is a notion with many different meanings, the clear majority of which come from the philosophical, social, and medical sciences. In our paper, we try to present the engineering aspect of the concept of disorder. We propose a mathematical model which describes the effects and consequences of the process of making a mess. We use multi-agent modeling, where there are several independent agents with decision-making ability. Each agent has the ability to communicate and perceive in order to achieve its own aim. We use an n × n square grid with objects which can be moved by agents to other places. The degree of disorder of the system is examined through the value of entropy. Using computer simulation, we investigate the time needed to find a desired object in an environment in which agents (in real life, people) co-exist and have different tendencies toward tidiness. The cost of mess is counted as the number of attempts to access the object in the analyzed system and the time needed to locate the object. Full article
Open Access Article
An Entropy-Based Failure Prediction Model for the Creep and Fatigue of Metallic Materials
Entropy 2019, 21(11), 1104; https://doi.org/10.3390/e21111104 - 12 Nov 2019
Abstract
It is well accepted that the second law of thermodynamics describes an irreversible process, which can be reflected by the entropy increase. Irreversible creep and fatigue damage can also be represented by a gradually increasing damage parameter. In the current study, an entropy-based failure prediction model for creep and fatigue is proposed based on the Boltzmann probabilistic entropy theory and continuum damage mechanics. A new method to determine the entropy increment rate for creep and fatigue processes is proposed. The relationship between entropy increase rate during creep process and normalized creep failure time is developed and compared with the experimental results. An empirical formula is proposed to describe the evolution law of entropy increase rate and normalized creep time. An entropy-based model is developed to predict the change of creep strain during the damage process. Experimental results of metals and alloys with different stresses and at different temperatures are adopted to verify the proposed model. It shows that the theoretical predictions agree well with experimental data. Full article
Open Access Article
Voronoi Decomposition of Cardiovascular Dependency Structures in Different Ambient Conditions: An Entropy Study
Entropy 2019, 21(11), 1103; https://doi.org/10.3390/e21111103 - 11 Nov 2019
Abstract
This paper proposes a method that maps the coupling strength of an arbitrary number of signals D, D ≥ 2, into a single time series. It is motivated by the inability of multiscale entropy to jointly analyze more than two signals. The coupling strength is determined using the copula density defined over the [0, 1]^D copula domain. The copula domain is decomposed into Voronoi regions, with volumes inversely proportional to the dependency level (coupling strength) of the observed joint signals. A stream of dependency levels, ordered in time, creates a new time series that shows the fluctuation of the signals’ coupling strength along the time axis. The composite multiscale entropy (CMSE) is then applied to three signals, systolic blood pressure (SBP), pulse interval (PI), and body temperature (tB), simultaneously recorded from rats exposed to different ambient temperatures (tA). The obtained results are consistent with the results from the classical studies, and the method itself offers more degrees of freedom than the classical analysis. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
Open Access Article
Causal Analysis of Learning Performance Based on a Bayesian Network and Mutual Information
Entropy 2019, 21(11), 1102; https://doi.org/10.3390/e21111102 - 11 Nov 2019
Abstract
Over the past few years, online learning has exploded in popularity due to the potentially unlimited enrollment, lack of geographical limitations, and free accessibility of many courses. However, learners are prone to have poor performance due to the unconstrained learning environment, lack of academic pressure, and low interactivity. Personalized intervention design with the learners’ background and learning behavior factors in mind may improve the learners’ performance. Causality strictly distinguishes cause from outcome factors and plays an irreplaceable role in designing guiding interventions. The goal of this paper is to construct a Bayesian network to perform causal analysis and then provide personalized interventions for different learners to improve learning. This paper first constructs a Bayesian network based on background and learning behavior factors, combining expert knowledge and a structure learning algorithm. Then the important factors in the constructed network are selected using mutual information based on entropy. Finally, we identify learners with poor performance using inference and propose personalized interventions, which may help with successful applications in education. Experimental results verify the effectiveness of the proposed method and demonstrate the impact of factors on learning performance. Full article
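The factor-selection step relies on ordinary entropy-based mutual information between a (discretized) factor and the performance variable; a minimal sketch is below, with made-up categorical data, while the Bayesian-network construction and inference are not reproduced here.

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for two discrete variables, in bits."""
    def entropy(counts):
        p = counts / counts.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    xs = np.unique(x, return_inverse=True)[1]
    ys = np.unique(y, return_inverse=True)[1]
    joint = np.zeros((xs.max() + 1, ys.max() + 1))
    for a, b in zip(xs, ys):
        joint[a, b] += 1
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

# e.g. mutual information between a background factor and the performance label
factor = np.random.randint(0, 3, 500)
grade = np.random.randint(0, 2, 500)
print(mutual_information(factor, grade))
```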
Open Access Article
Intuitionistic Fuzzy Entropy for Group Decision Making of Water Engineering Project Delivery System Selection
Entropy 2019, 21(11), 1101; https://doi.org/10.3390/e21111101 - 11 Nov 2019
Abstract
The project delivery mode is an extremely important link in the life cycle of water engineering. Many cases show that increases in costs, construction periods, and claims in the course of the implementation of water engineering are related to the choice of project delivery mode in the early stages. Therefore, it is particularly important to choose a delivery mode that matches the water engineering project. On the basis of identifying the key factors that affect the decision on the project delivery system and establishing a set of indices, the comprehensive decision on the project delivery mode is essentially a fuzzy multi-attribute group decision. In this study, intuitionistic fuzzy entropy was used to determine the weights of the factors influencing the project delivery mode and, in turn, the weights of the decision experts. Thus, a comprehensive scheme-ranking model based on the intuitionistic fuzzy hybrid average (IFHA) operator and the intuitionistic fuzzy weighted average (IFWA) operator was established. Finally, a practical case analysis of a hydropower station further demonstrated the feasibility, objectivity, and scientific nature of the decision model. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering II)
Open Access Article
Unidimensional Continuous-Variable Quantum Key Distribution with Untrusted Detection under Realistic Conditions
Entropy 2019, 21(11), 1100; https://doi.org/10.3390/e21111100 - 11 Nov 2019
Abstract
A unidimensional continuous-variable quantum key distribution protocol with untrusted detection is proposed, where the two legitimate partners send unidimensional modulated or Gaussian-modulated coherent states to an untrusted third party, i.e., Charlie, to realize the measurement. Compared with the Gaussian-modulated coherent-state protocols, the unidimensional modulated protocols have the advantages of easy modulation, low cost, and requiring only a small number of random numbers. Security analysis shows that the proposed protocol not only defends against all detector side channels, but also achieves good performance under certain conditions. Specifically, three cases are discussed in detail, namely using unidimensional modulated coherent states on Alice’s side, on Bob’s side, and on both sides under realistic conditions. Under the three conditions, we derive the expressions of the secret key rate and give the optimal gain parameters. It is found that the optimal performance of the protocol is achieved by using unidimensional modulated coherent states on both Alice’s and Bob’s sides. The resulting protocol shows the potential for long-distance secure communication using the unidimensional quantum key distribution protocol with a simple modulation method and untrusted detection under realistic conditions. Full article
(This article belongs to the Special Issue Quantum Information Processing)