Table of Contents

Entropy, Volume 20, Issue 7 (July 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Displaying articles 1-53
Open Access Article From Identity to Uniqueness: The Emergence of Increasingly Higher Levels of Hierarchy in the Process of the Matter Evolution
Entropy 2018, 20(7), 533; https://doi.org/10.3390/e20070533
Received: 25 June 2018 / Revised: 7 July 2018 / Accepted: 16 July 2018 / Published: 17 July 2018
PDF Full-text (2888 KB) | HTML Full-text | XML Full-text
Abstract
This article focuses on several factors of complification that operated during the evolution of our Universe. During the early stages of that evolution, up to the Recombination Era, these were the laws of quantum mechanics; during the Dark Ages, gravitation; during the chemical evolution, diversification; and during the biological and human evolution, a process of distinctifying. The main events in the evolution of the Universe were the emergences of new levels of hierarchy, which together constitute the process of hierarchogenesis. This process contains 14 such events so far, and its dynamics is presented graphically by a very regular and smooth curve. The function the curve represents is odd, i.e., symmetric about its central point, due to the similarity between the pattern of deceleration during the cosmic/chemical evolution (the first half of the overall evolution) and that of acceleration during the biological/human evolution (its second half). The main driver of hierarchogenesis, as described by this odd function, is the counteraction and counterbalance of attraction and repulsion, which take various forms at the different hierarchical levels. The direction and pace of the irreversible and inevitable increase of the Universe's complexity, in accordance with the general law of complification, result from the consistent influence of all these factors. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open Access Editorial Thermodynamics in Material Science
Entropy 2018, 20(7), 532; https://doi.org/10.3390/e20070532
Received: 28 June 2018 / Accepted: 28 June 2018 / Published: 16 July 2018
PDF Full-text (156 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Thermodynamics in Material Science)
Open Access Article Automatic Analysis of Archimedes’ Spiral for Characterization of Genetic Essential Tremor Based on Shannon’s Entropy and Fractal Dimension
Entropy 2018, 20(7), 531; https://doi.org/10.3390/e20070531
Received: 20 May 2018 / Revised: 11 July 2018 / Accepted: 11 July 2018 / Published: 16 July 2018
PDF Full-text (3867 KB) | HTML Full-text | XML Full-text
Abstract
Among neural disorders related to movement, essential tremor has the highest prevalence; in fact, it is twenty times more common than Parkinson’s disease. Drawing the Archimedes’ spiral is the gold-standard test for distinguishing between the two pathologies. The aim of this paper is to select non-linear biomarkers based on the analysis of digital drawings. It belongs to a larger cross-sectional study for the early diagnosis of essential tremor that also includes genetic information. The proposed automatic analysis system is a hybrid solution: Machine Learning paradigms combined with automatic selection of features based on statistical tests using medical criteria. Moreover, the selected biomarkers comprise not only commonly used linear features (static and dynamic) but also non-linear ones: Shannon entropy and fractal dimension. The results are promising, and the developed tool can easily be adapted to users; from social and economic points of view, it could be very helpful in real, complex environments. Full article
(This article belongs to the Special Issue Selected Papers from IWOBI—Entropy-Based Applied Signal Processing)
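As a rough illustration of the two non-linear biomarkers named in the abstract, the sketch below computes the Shannon entropy and a Higuchi-style fractal dimension for a synthetic, tremor-like spiral trace. The signal, the bin count, and kmax are hypothetical stand-ins, not the paper's pipeline or parameters.

```python
import numpy as np

def shannon_entropy(signal, bins=32):
    """Shannon entropy (bits) of the empirical amplitude distribution."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def higuchi_fd(signal, kmax=8):
    """Higuchi fractal dimension: slope of log L(k) versus log(1/k)."""
    n = len(signal)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            # normalized curve length for this offset, per Higuchi (1988)
            dist = np.abs(np.diff(signal[idx])).sum()
            lengths.append(dist * (n - 1) / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

# Hypothetical digitized spiral: radius vs. angle plus a tremor-like oscillation
theta = np.linspace(0, 6 * np.pi, 2000)
r = 0.5 * theta + 0.05 * np.sin(35 * theta)
print(shannon_entropy(np.diff(r)), higuchi_fd(r))
```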
Open Access Article On Chaos in the Fractional-Order Discrete-Time Unified System and Its Control Synchronization
Entropy 2018, 20(7), 530; https://doi.org/10.3390/e20070530
Received: 9 June 2018 / Revised: 11 July 2018 / Accepted: 12 July 2018 / Published: 15 July 2018
PDF Full-text (2713 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a fractional map based on the integer-order unified map. The chaotic behavior of the proposed map is analyzed by means of bifurcation plots, and experimental bounds are placed on the parameters and the fractional order. Different control laws are proposed to force the states to zero asymptotically and to achieve complete synchronization of a pair of fractional unified maps with identical or nonidentical parameters. Numerical results are used throughout the paper to illustrate the findings. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
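The paper's fractional unified map is not reproduced here, but the following minimal sketch shows the generic machinery such maps share: a Caputo-like fractional difference whose update is a Gamma-kernel-weighted sum over the whole history, applied here to a logistic nonlinearity as a stand-in (mu, nu, and x0 are arbitrary demo values). At nu = 1 the weights collapse to 1 and the recursion telescopes to the classic integer-order map.

```python
import numpy as np
from scipy.special import gammaln

def fractional_logistic(x0=0.3, mu=2.0, nu=0.95, steps=60):
    """Caputo-like fractional difference map:
        x(n) = x(0) + sum_{j=1..n} w(n,j) * f(x(j-1)),
        w(n,j) = Gamma(n-j+nu) / (Gamma(n-j+1) * Gamma(nu)),
        f(x)   = mu*x*(1-x) - x.
    The Gamma kernel gives every past state a say: the map has memory."""
    x = [x0]
    for n in range(1, steps + 1):
        acc = 0.0
        for j in range(1, n + 1):
            # log-gamma keeps the slowly decaying kernel numerically stable
            w = np.exp(gammaln(n - j + nu) - gammaln(n - j + 1) - gammaln(nu))
            acc += w * (mu * x[j - 1] * (1 - x[j - 1]) - x[j - 1])
        x.append(x0 + acc)
    return np.array(x)

print(fractional_logistic()[-5:])  # tail of the trajectory
```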
Open Access Article Curvature Invariants of Statistical Submanifolds in Kenmotsu Statistical Manifolds of Constant ϕ-Sectional Curvature
Entropy 2018, 20(7), 529; https://doi.org/10.3390/e20070529
Received: 11 June 2018 / Revised: 6 July 2018 / Accepted: 11 July 2018 / Published: 14 July 2018
PDF Full-text (288 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we consider statistical submanifolds of Kenmotsu statistical manifolds of constant ϕ-sectional curvature. For such submanifolds, we investigate curvature properties. We establish some inequalities involving the normalized δ-Casorati curvatures (extrinsic invariants) and the scalar curvature (intrinsic invariant). Moreover, we prove that the equality cases of the inequalities hold if and only if the imbedding curvature tensors h and h* of the submanifold (associated with the dual connections) satisfy h = h*, i.e., the submanifold is totally geodesic with respect to the Levi–Civita connection. Full article
Open Access Article Free Final Time Input Design Problem for Robust Entropy-Like System Parameter Estimation
Entropy 2018, 20(7), 528; https://doi.org/10.3390/e20070528
Received: 3 June 2018 / Revised: 6 July 2018 / Accepted: 12 July 2018 / Published: 14 July 2018
PDF Full-text (1735 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a novel method is proposed to design a free-final-time input signal, which is then used in the robust system identification process. The solution of the constrained optimal input design problem is based on the minimization of an extra state variable representing the free final time scaling factor, formulated in the Bolza functional form, subject to a D-efficiency constraint as well as an input energy constraint. The objective function used for the system identification model provides robustness against outlying data and was constructed using the so-called entropy-like estimator. The perturbation time interval has a significant impact on the cost of a real-life system identification experiment. The contribution of this work is to examine the economic trade-off between the constraints imposed on the input signal design and the experiment duration when undertaking an identification experiment under real operating conditions. The methodology is applicable to a general class of systems and is supported by numerical examples. Illustrative examples of the Least Squares and entropy-like estimators for system parameter validation, where measurements include additive white noise, are compared using ellipsoidal confidence regions. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
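As a hedged illustration of the robustness idea, the sketch below contrasts ordinary least squares with one entropy-like criterion from the identification literature: normalized squared residuals are treated as a probability vector whose Shannon entropy is minimized, so the residual mass concentrates on a few outlying samples instead of dragging the fit. The linear model, noise levels, and optimizer are invented for the demo and may differ from the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_like_cost(theta, t, y):
    """Shannon entropy of normalized squared residuals. Minimizing it pushes
    the residual 'probability mass' onto a few samples (the outliers), so the
    bulk of the data is fitted tightly."""
    e2 = (y - theta[0] * t - theta[1]) ** 2
    p = e2 / e2.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Hypothetical experiment: y = 2t + 1 with white noise and two gross outliers
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(50)
y[[10, 40]] += 3.0

ls = np.polyfit(t, y, 1)  # ordinary least squares, dragged by the outliers
res = minimize(entropy_like_cost, x0=ls, args=(t, y), method="Nelder-Mead")
print("least squares:", ls, " entropy-like:", res.x)  # latter tends to sit nearer (2, 1)
```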
Open Access Article Category Structure and Categorical Perception Jointly Explained by Similarity-Based Information Theory
Entropy 2018, 20(7), 527; https://doi.org/10.3390/e20070527
Received: 27 April 2018 / Revised: 8 July 2018 / Accepted: 10 July 2018 / Published: 14 July 2018
PDF Full-text (1097 KB) | HTML Full-text | XML Full-text
Abstract
Categorization is a fundamental information processing phenomenon in the brain. It is critical for animals to compress an abundance of stimuli into groups in order to react quickly and efficiently. In addition to labels, categories possess an internal structure: the goodness measures how well any element belongs to a category. Interestingly, this categorization leads to an altered perception referred to as categorical perception: for a given physical distance, items within a category are perceived as closer than items in two different categories. A subtler effect is the perceptual magnet: discriminability is reduced close to the prototypes of a category and increased near its boundaries. Here, starting from predefined abstract categories, we naturally derive the internal structure of categories and the phenomenon of categorical perception, using an information-theoretical framework that involves both probabilities and pairwise similarities between items. Essentially, we suggest that pairwise similarities between items are tuned to render some predefined categories as well as possible. However, constraints on these pairwise similarities produce only an approximate matching, which explains concurrently the notion of goodness and the warping of perception. Overall, we demonstrate that similarity-based information theory may offer a global and unified principled understanding of categorization and categorical perception simultaneously. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
Open Access Article Universal Features in Phonological Neighbor Networks
Entropy 2018, 20(7), 526; https://doi.org/10.3390/e20070526
Received: 22 May 2018 / Revised: 29 June 2018 / Accepted: 10 July 2018 / Published: 12 July 2018
PDF Full-text (760 KB) | HTML Full-text | XML Full-text
Abstract
Human speech perception involves transforming a continuous acoustic signal into discrete linguistically meaningful units (phonemes) while simultaneously causing a listener to activate words that are similar to the spoken utterance and to each other. The Neighborhood Activation Model posits that phonological neighbors (two forms [words] that differ by one phoneme) compete significantly for recognition as a spoken word is heard. This definition of phonological similarity can be extended to an entire corpus of forms to produce a phonological neighbor network (PNN). We study PNNs for five languages: English, Spanish, French, Dutch, and German. Consistent with previous work, we find that the PNNs share a consistent set of topological features. Using an approach that generates random lexicons with increasing levels of phonological realism, we show that even random forms with minimal relationship to any real language, combined with only the empirical distribution of language-specific phonological form lengths, are sufficient to produce the topological properties observed in the real-language PNNs. The resulting pseudo-PNNs are insensitive to the level of linguistic realism in the random lexicons but quite sensitive to the shape of the form length distribution. We therefore conclude that “universal” features seen across multiple languages are really string universals, not language universals, and arise primarily due to limitations in the kinds of networks generated by the one-step neighbor definition. Taken together, our results indicate that caution is warranted when linking the dynamics of human spoken word recognition to the topological properties of PNNs, and that the investigation of alternative similarity metrics for phonological forms should be a priority. Full article
(This article belongs to the Section Complexity)
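The one-step neighbor definition in the abstract (two forms differing by a single substitution, insertion, or deletion) is easy to operationalize. A minimal sketch with a toy, hypothetical lexicon:

```python
import itertools
from collections import defaultdict

def one_phoneme_apart(a, b):
    """True if forms a and b differ by one substitution, insertion, or deletion."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):  # candidate substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)  # candidate insertion/deletion
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def build_pnn(lexicon):
    """Phonological neighbor network: nodes are forms, edges join neighbors."""
    adj = defaultdict(set)
    for a, b in itertools.combinations(lexicon, 2):
        if one_phoneme_apart(a, b):
            adj[a].add(b)
            adj[b].add(a)
    return adj

# Toy forms standing in for phoneme transcriptions
lexicon = ["kat", "bat", "at", "kart", "dog"]
print({w: sorted(n) for w, n in build_pnn(lexicon).items()})
```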
Open Access Article A New Hyperchaotic System-Based Design for Efficient Bijective Substitution-Boxes
Entropy 2018, 20(7), 525; https://doi.org/10.3390/e20070525
Received: 25 May 2018 / Revised: 28 June 2018 / Accepted: 9 July 2018 / Published: 12 July 2018
PDF Full-text (2351 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present a novel method to construct cryptographically strong bijective substitution-boxes based on the complicated dynamics of a new hyperchaotic system. The new hyperchaotic system was found to have good characteristics when compared with other systems utilized for S-box construction. The performance of the proposed S-box method was assessed against criteria such as high nonlinearity, a good avalanche effect, the bit independence criterion, and low differential uniformity. The proposed method was also analyzed for the batch generation of 8 × 8 S-boxes. The analyses found that, with the proposed purely chaos-based method, an 8 × 8 S-box with a maximum average nonlinearity of 108.5, or S-boxes with differential uniformity as low as 8, can be obtained. Moreover, small-sized S-boxes with high nonlinearity and low differential uniformity are also obtainable. A performance comparison of the proposed method with recent S-box proposals demonstrated its effectiveness for strong bijective S-box construction. Full article
(This article belongs to the Section Information Theory)
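A minimal sketch of the general chaos-to-S-box recipe: iterate a chaotic map and use the rank order of the trajectory as a permutation of 0..255, which is bijective by construction. The logistic map stands in for the paper's hyperchaotic system, so the differential uniformity printed here will typically be higher than the value of 8 the abstract reports for the paper's method.

```python
import numpy as np

def chaotic_sbox(x0=0.63, r=3.99, n=256):
    """Bijective 8x8 S-box: rank the trajectory of a chaotic map.
    argsort of n distinct reals is a permutation of 0..n-1, hence bijective."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)  # logistic map as a stand-in chaotic source
        xs.append(x)
    return np.argsort(xs)

def differential_uniformity(sbox):
    """Largest XOR-difference count over all nonzero input differences
    (lower is better for resisting differential cryptanalysis)."""
    n, worst = len(sbox), 0
    for dx in range(1, n):
        counts = np.zeros(n, dtype=int)
        for x in range(n):
            counts[sbox[x ^ dx] ^ sbox[x]] += 1
        worst = max(worst, counts.max())
    return worst

s = chaotic_sbox()
assert sorted(s) == list(range(256))  # bijectivity check
print("differential uniformity:", differential_uniformity(s))
```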
Open Access Article The Entropy of Deep Eutectic Solvent Formation
Entropy 2018, 20(7), 524; https://doi.org/10.3390/e20070524
Received: 29 May 2018 / Revised: 25 June 2018 / Accepted: 11 July 2018 / Published: 12 July 2018
PDF Full-text (196 KB) | HTML Full-text | XML Full-text
Abstract
The standard entropies S°298 of deep eutectic solvents (DESs), which are liquid binary mixtures of a hydrogen bond acceptor component and a hydrogen bond donor one, are calculated from their molecular volumes, derived from their densities or crystal structures. These values are compared with those of the components, pro-rated according to the DES composition, to obtain the standard entropies of DES formation, ΔfS. These quantities are positive, due to the increased number and kinds of hydrogen bonds present in the DESs relative to those in the components. The ΔfS values are also compared with the freezing point depressions of the DESs, ΔfusT/K, but no general conclusions on their mutual relationship could be drawn. Full article
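The pro-rating arithmetic described in the abstract is straightforward; here is a worked toy example (all entropy values are invented, not taken from the paper):

```python
# Hypothetical 1:2 hydrogen-bond-acceptor : hydrogen-bond-donor mixture,
# standard entropies in J/(K*mol).
S_DES = 400.0            # assumed S°298 of the liquid DES, per mole of HBA + 2 HBD
S_HBA, S_HBD = 170.0, 105.0
x_HBA, x_HBD = 1.0, 2.0

S_components = x_HBA * S_HBA + x_HBD * S_HBD   # pro-rated component entropies
delta_f_S = S_DES - S_components               # entropy of DES formation
print(f"DfS = {delta_f_S:+.1f} J/(K*mol)")     # positive, per the abstract's finding
```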
Open Access Article Generalized Grey Target Decision Method for Mixed Attributes Based on Kullback-Leibler Distance
Entropy 2018, 20(7), 523; https://doi.org/10.3390/e20070523
Received: 14 May 2018 / Revised: 22 June 2018 / Accepted: 6 July 2018 / Published: 12 July 2018
PDF Full-text (578 KB) | HTML Full-text | XML Full-text
Abstract
A novel generalized grey target decision method for mixed attributes, based on the Kullback-Leibler (K-L) distance, is proposed. The proposed approach involves the following steps: first, all indices are converted into index binary connection number vectors; second, the two-tuple (determinacy, uncertainty) numbers originating from the index binary connection number vectors are obtained; third, the positive and negative target centers of the two-tuple (determinacy, uncertainty) numbers are calculated; then, the K-L distances of all alternatives to their positive and negative target centers are integrated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method; the final decision is based on the integrated value, on a bigger-the-better basis. A case study exemplifies the proposed approach. Full article
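A compact sketch of the decision core described in the abstract: K-L distances from each alternative's (determinacy, uncertainty) pair to positive and negative target centers, integrated TOPSIS-style. The construction of the target centers and the input numbers are illustrative assumptions, not the paper's case-study data.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler distance between (normalized) nonnegative vectors."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical (determinacy, uncertainty) pairs, one row per alternative
alts = np.array([[0.8, 0.2], [0.6, 0.4], [0.9, 0.1]])
pos = np.array([alts[:, 0].max(), alts[:, 1].min()])  # high determinacy, low uncertainty
neg = np.array([alts[:, 0].min(), alts[:, 1].max()])

for i, a in enumerate(alts):
    d_pos, d_neg = kl(a, pos), kl(a, neg)
    closeness = d_neg / (d_pos + d_neg)  # TOPSIS-style, bigger-the-better
    print(f"alternative {i}: closeness = {closeness:.3f}")
```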
Open Access Article An Image Fusion Method Based on Sparse Representation and Sum Modified-Laplacian in NSCT Domain
Entropy 2018, 20(7), 522; https://doi.org/10.3390/e20070522
Received: 18 May 2018 / Revised: 24 June 2018 / Accepted: 9 July 2018 / Published: 11 July 2018
PDF Full-text (5390 KB) | HTML Full-text | XML Full-text
Abstract
Multi-modality image fusion provides more comprehensive and sophisticated information in modern medical diagnosis, remote sensing, video surveillance, etc. Traditional multi-scale transform (MST) based image fusion solutions have difficulties with the selection of the decomposition level and with contrast loss in the fused image. At the same time, traditional sparse-representation (SR) based image fusion methods suffer from the weak representation ability of a fixed dictionary. In order to overcome these deficiencies of MST- and SR-based methods, this paper proposes an image fusion framework which integrates the nonsubsampled contourlet transform (NSCT) into sparse representation. In this fusion framework, NSCT is applied to decompose the source images into corresponding low- and high-pass coefficients, which are fused using SR and the Sum Modified-Laplacian (SML), respectively. The inverse NSCT then transforms the fused coefficients to obtain the final fused image. In this framework, principal component analysis (PCA) is applied in dictionary training to reduce the dimension of the learned dictionary and the computation costs. A novel high-pass fusion rule based on the SML is applied to suppress pseudo-Gibbs phenomena around singularities in the fused image. Compared to three mainstream image fusion solutions, the proposed solution achieves better performance on structural similarity and detail preservation in the fused images. Full article
(This article belongs to the Special Issue Women in Information Theory 2018)
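The Sum Modified-Laplacian is a standard focus/activity measure, so a sketch of it and of a max-SML selection rule is given below; the NSCT decomposition and the SR fusion of the low-pass bands are omitted, and the window size and test arrays are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, step=1, win=3):
    """Sum Modified-Laplacian activity measure:
    ML = |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|,
    summed over a win x win neighborhood."""
    I = img.astype(float)
    ml = np.zeros_like(I)
    ml[step:-step, :] += np.abs(2 * I[step:-step, :] - I[:-2 * step, :] - I[2 * step:, :])
    ml[:, step:-step] += np.abs(2 * I[:, step:-step] - I[:, :-2 * step] - I[:, 2 * step:])
    return uniform_filter(ml, size=win) * win * win  # box mean -> box sum

# Max-SML selection between two hypothetical high-pass coefficient maps
rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
fused = np.where(sml(a) >= sml(b), a, b)
```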
Open Access Article A New Compound Fault Feature Extraction Method Based on Multipoint Kurtosis and Variational Mode Decomposition
Entropy 2018, 20(7), 521; https://doi.org/10.3390/e20070521
Received: 21 June 2018 / Revised: 6 July 2018 / Accepted: 10 July 2018 / Published: 10 July 2018
PDF Full-text (6392 KB) | HTML Full-text | XML Full-text
Abstract
Due to the weak entropy of the vibration signal in a strong-noise environment, it is very difficult to extract compound fault features. EMD (Empirical Mode Decomposition), EEMD (Ensemble Empirical Mode Decomposition) and LMD (Local Mean Decomposition) are widely used in compound fault feature extraction. Although they can decompose different characteristic components into separate IMFs (Intrinsic Mode Functions), serious mode mixing remains because of the noise. VMD (Variational Mode Decomposition) rests on a rigorous mathematical theory that can alleviate this mode mixing, and each characteristic component of VMD contains a unique center frequency; however, it is a parametric decomposition method, and an improper value of K leads to over-decomposition or under-decomposition. The number of decomposition levels of VMD therefore needs to be determined adaptively. The commonly used adaptive methods are particle swarm optimization and the ant colony algorithm, but they consume a lot of computing time. This paper proposes a compound fault feature extraction method based on Multipoint Kurtosis (MKurt)-VMD. Firstly, MED (Minimum Entropy Deconvolution) denoises the vibration signal in the strong-noise environment. Secondly, multipoint kurtosis extracts the periodic multiple faults, and a multi-periodic vector is further constructed to determine the number of impulse periods, which determines the K value of VMD. Thirdly, the noise-reduced signal is processed by VMD, and the fault features are further determined by FFT. The proposed compound fault feature extraction method alleviates mode mixing in comparison with EEMD. The validity of this method is further confirmed by processing a measured signal and extracting compound fault features such as gear spalling and a roller fault; their fault periods are 22.4 and 111.2, respectively, and the corresponding frequencies are 360 Hz and 72 Hz, respectively. Full article
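As a simplified stand-in for multipoint kurtosis (the paper's MKurt uses a target-vector formulation not reproduced here), the sketch below scores candidate impulse periods by comparing the energy sampled on a period comb against the total energy, then picks the best-scoring period on a synthetic impulsive signal.

```python
import numpy as np

def period_score(signal, period):
    """Energy on an impulse comb of the candidate period, relative to the
    total energy. Peaks when the comb lines up with the fault impulses."""
    comb = signal[::period] ** 2
    return comb.mean() / (signal ** 2).mean()

# Synthetic impulsive fault signal: an impulse every 112 samples plus noise
rng = np.random.default_rng(1)
sig = 0.3 * rng.standard_normal(4096)
sig[::112] += 3.0

candidates = range(20, 200)
best = max(candidates, key=lambda p: period_score(sig, p))
print("estimated impulse period:", best)  # expected: 112
```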
Open Access Article Interfacial Properties of Active-Passive Polymer Mixtures
Entropy 2018, 20(7), 520; https://doi.org/10.3390/e20070520
Received: 29 May 2018 / Revised: 3 July 2018 / Accepted: 8 July 2018 / Published: 10 July 2018
PDF Full-text (2522 KB) | HTML Full-text | XML Full-text
Abstract
Active matter consists of particles that dissipate energy, from their own sources, in the form of mechanical work on their surroundings. Recent interest in active-passive polymer mixtures has been driven by their relevance to the phase separation of (e.g., transcriptionally) active and inactive (transcriptionally silent) DNA strands in the nuclei of living cells. In this paper, we study the interfacial properties of the phase-separated steady states of active-passive polymer mixtures and compare them with equilibrium phase separation. We model the active constituents by assigning them stronger-than-thermal fluctuations. We demonstrate that the entropy production is an accurate indicator of the phase transition. We then construct phase diagrams and analyze kinetic properties of the particles as a function of the distance from the interface. Studying the interface fluctuations, we find that they follow the capillary wave spectrum. This allows us to establish a mechanistic definition of the interfacial stiffness and its dependence on the level of activity relative to the passive constituents. We show how the interfacial width depends on the activity ratio and comment on finite-size effects. Our results highlight similarities and differences between the non-equilibrium steady states and an equilibrium phase-separated polymer mixture with a lower critical solution temperature. We present several directions in which the non-equilibrium system can be studied further and point out interesting observations that indicate general principles behind non-equilibrium phase separation. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics of Interfaces)
Open Access Article Projected Affinity Values for Nyström Spectral Clustering
Entropy 2018, 20(7), 519; https://doi.org/10.3390/e20070519
Received: 19 May 2018 / Revised: 6 July 2018 / Accepted: 9 July 2018 / Published: 10 July 2018
PDF Full-text (887 KB) | HTML Full-text | XML Full-text
Abstract
In kernel methods, the Nyström approximation is a popular way of calculating out-of-sample extensions and can be further applied to large-scale data clustering and classification tasks. Given a new data point, Nyström employs its empirical affinity vector, k, for calculation. This vector is assumed to be a proper measurement of the similarity between the new point and the training set. In this paper, we suggest replacing the affinity vector by its projections on the leading eigenvectors learned from the training set, i.e., using k* = Σ_{i=1}^{c} (k^T u_i) u_i instead, where u_i is the i-th eigenvector of the training set and c is the number of eigenvectors used, which is typically equal to the number of classes designed by users. Our work is motivated by the constraints that, in kernel space, the kernel-mapped new point should (a) also lie on the unit sphere defined by the Gaussian kernel and (b) generate training set affinity values close to k. These two constraints define a Quadratic Optimization Over a Sphere (QOOS) problem. In this paper, we prove that the projection on the leading eigenvectors, rather than the original affinity vector, is the solution to the QOOS problem. The experimental results show that the proposed replacement of k by k* slightly improves the performance of the Nyström approximation. Compared with other affinity matrix modification methods, our k* obtains comparable or higher clustering performance in terms of accuracy and Normalized Mutual Information (NMI). Full article
(This article belongs to the Section Information Theory)
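The replacement the abstract proposes, k* = Σ_{i=1}^{c} (k^T u_i) u_i, is a projection onto the span of the c leading eigenvectors and takes a few lines of NumPy. The toy data and the kernel bandwidth below are assumptions for the demo:

```python
import numpy as np

def project_affinity(k, U, c):
    """k* = sum_{i=1..c} (k^T u_i) u_i: project the empirical affinity vector
    onto the span of the c leading eigenvectors of the training affinities."""
    Uc = U[:, :c]
    return Uc @ (Uc.T @ k)

# Toy training set: Gaussian (RBF) affinities between two 2-D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)

w, U = np.linalg.eigh(K)   # eigenvalues in ascending order
U = U[:, ::-1]             # reorder so leading eigenvectors come first

x_new = rng.normal(0, 0.3, 2)                  # out-of-sample point
k = np.exp(-((X - x_new) ** 2).sum(-1) / 2.0)  # empirical affinity vector
k_star = project_affinity(k, U, c=2)           # c = number of classes
print(np.linalg.norm(k - k_star))              # small: k is nearly in the span
```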