Table of Contents

Entropy, Volume 20, Issue 7 (July 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Variation in the transcriptional activity of DNA fibres has been hypothesized to promote their [...]
Displaying articles 1-64
Open Access Article: Residual Multiparticle Entropy for a Fractal Fluid of Hard Spheres
Entropy 2018, 20(7), 544; https://doi.org/10.3390/e20070544
Received: 1 July 2018 / Revised: 18 July 2018 / Accepted: 20 July 2018 / Published: 23 July 2018
PDF Full-text (440 KB) | HTML Full-text | XML Full-text
Abstract
The residual multiparticle entropy (RMPE) of a fluid is defined as the difference, Δs, between the excess entropy per particle (relative to an ideal gas with the same temperature and density), s_ex, and the pair-correlation contribution, s_2. Thus, the RMPE represents the net contribution to s_ex due to spatial correlations involving three, four, or more particles. A heuristic “ordering” criterion identifies the vanishing of the RMPE as an underlying signature of an impending structural or thermodynamic transition of the system from a less ordered to a more spatially organized condition (freezing is a typical example). Regardless of this, knowledge of the RMPE is important to assess the impact of non-pair multiparticle correlations on the entropy of the fluid. Recently, an accurate and simple approach to the thermodynamic and structural properties of a hard-sphere fluid in fractional dimension 1 < d < 3 has been proposed (Santos, A.; López de Haro, M. Phys. Rev. E 2016, 93, 062126). The aim of this work is to use this approach to evaluate the RMPE as a function of both d and the packing fraction ϕ. It is observed that, for any given dimensionality d, the RMPE takes negative values for small densities, reaches a negative minimum Δs_min at a packing fraction ϕ_min, and then rapidly increases, becoming positive beyond a certain packing fraction ϕ_0. Interestingly, while both ϕ_min and ϕ_0 monotonically decrease as dimensionality increases, the value of Δs_min exhibits a nonmonotonic behavior, reaching an absolute minimum at a fractional dimensionality d ≈ 2.38. A plot of the scaled RMPE Δs/|Δs_min| shows a quasiuniversal behavior in the region −0.14 ≲ ϕ − ϕ_0 ≲ 0.02. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
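The crossover packing fraction ϕ_0 described in the abstract is simply a root of Δs(ϕ) = s_ex(ϕ) − s_2(ϕ), so it can be located numerically once the two entropy contributions are available. The polynomial forms below are illustrative stand-ins, not the paper's hard-sphere expressions; only the sign-change and bisection logic carries over.

```python
def s_ex(phi):
    """Toy stand-in for the excess entropy per particle (NOT the paper's form)."""
    return -4.0 * phi - 6.0 * phi ** 2

def s_2(phi):
    """Toy stand-in for the pair-correlation contribution (NOT the paper's form)."""
    return -4.0 * phi - 5.0 * phi ** 2 - 8.0 * phi ** 3

def rmpe(phi):
    """Residual multiparticle entropy: Delta-s = s_ex - s_2."""
    return s_ex(phi) - s_2(phi)

def find_phi0(lo, hi, tol=1e-10):
    """Bisect for the packing fraction phi_0 where the RMPE changes sign."""
    assert rmpe(lo) < 0.0 < rmpe(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rmpe(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

phi0 = find_phi0(0.05, 0.5)  # 0.125 for these toy forms
```

With the paper's actual s_ex and s_2 substituted in, the same bisection would recover the d-dependent ϕ_0 reported there.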

Open Access Article: The Entropy Complexity of an Asymmetric Dual-Channel Supply Chain with Probabilistic Selling
Entropy 2018, 20(7), 543; https://doi.org/10.3390/e20070543
Received: 13 May 2018 / Revised: 16 July 2018 / Accepted: 17 July 2018 / Published: 23 July 2018
PDF Full-text (3427 KB) | HTML Full-text | XML Full-text
Abstract
Considering consumers’ attitudes to risk for probabilistic products and probabilistic selling, this paper develops a dynamic Stackelberg game model of a supply chain with an asymmetric dual-channel structure. Based on entropy theory and dynamic theory, we analyze and simulate the influences of decision variables and parameters on the stability and entropy of asymmetric dual-channel supply chain systems using bifurcation diagrams, entropy diagrams, parameter plot basins, attractors, etc. The results show that decision variables and parameters have great impacts on the stability of asymmetric dual-channel supply chains; as the system entropy increases, the supply chain system enters chaos through flip or Neimark–Sacker bifurcation, becoming more complex as it falls into a chaotic state. The stability of the system becomes more robust as the probability that product a becomes a probabilistic product increases, and it weakens as the risk preference of customers for probabilistic products and the relative bargaining power of the retailer increase. A manufacturer using the direct selling channel may obtain greater profit than one using traditional selling channels. Using parameter adjustment and feedback control, the entropy of the supply chain system declines and the system returns to a stable state. Therefore, in an actual probabilistic-selling market, manufacturers and retailers should pay attention to the parameters and the adjustment speed of prices to ensure the stability of the game process and the orderliness of the dual-channel supply chain. Full article
(This article belongs to the Section Complexity)

Open Access Article: The Principle of Least Action for Reversible Thermodynamic Processes and Cycles
Entropy 2018, 20(7), 542; https://doi.org/10.3390/e20070542
Received: 24 June 2018 / Revised: 19 July 2018 / Accepted: 20 July 2018 / Published: 21 July 2018
PDF Full-text (2318 KB) | HTML Full-text | XML Full-text
Abstract
The principle of least action, which is usually applied to natural phenomena, can also be used in optimization problems with manual intervention. Following a brief introduction to the brachistochrone problem in classical mechanics, the principle of least action was applied to the optimization of reversible thermodynamic processes and cycles in this study. Analyses indicated that the entropy variation per unit of heat exchanged is the mode of action for reversible heat absorption or heat release processes. Minimizing this action led to the optimization of heat absorption or heat release processes, and the corresponding optimal path was the first or second half of a Carnot cycle. Finally, the action of an entire reversible thermodynamic cycle was determined as the sum of the actions of the heat absorption and release processes. Minimizing this action led to a Carnot cycle. This implies that the Carnot cycle can also be derived using the principle of least action derived from the entropy concept. Full article
(This article belongs to the Section Thermodynamics)

Open Access Feature Paper Review: Random k-Body Ensembles for Chaos and Thermalization in Isolated Systems
Entropy 2018, 20(7), 541; https://doi.org/10.3390/e20070541
Received: 7 June 2018 / Revised: 13 July 2018 / Accepted: 16 July 2018 / Published: 20 July 2018
PDF Full-text (1491 KB) | HTML Full-text | XML Full-text
Abstract
Embedded ensembles, or random matrix ensembles generated by k-body interactions acting in many-particle spaces, are now well established as paradigmatic models for many-body chaos and thermalization in isolated finite quantum (fermion or boson) systems. This article briefly discusses (i) various embedded ensembles with Lie algebraic symmetries for fermion and boson systems and their extensions (for Majorana fermions, with point-group symmetries, etc.); (ii) results generated by these ensembles for various aspects of chaos, thermalization and statistical relaxation, including the role of q-Hermite polynomials in k-body ensembles; and (iii) analyses of numerical and experimental data for level fluctuations in trapped boson systems, together with results for statistical relaxation and decoherence in these systems that relate closely to results from embedded ensembles. Full article
(This article belongs to the Special Issue Thermalization in Isolated Quantum Systems)

Open Access Article: Information Guided Exploration of Scalar Values and Isocontours in Ensemble Datasets
Entropy 2018, 20(7), 540; https://doi.org/10.3390/e20070540
Received: 26 June 2018 / Revised: 16 July 2018 / Accepted: 18 July 2018 / Published: 20 July 2018
PDF Full-text (3621 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Uncertainty of scalar values in an ensemble dataset is often represented by the collection of their corresponding isocontours. Various techniques such as contour boxplots, contour variability plots, glyphs and probabilistic marching cubes have been proposed to analyze and visualize ensemble isocontours. All these techniques assume that a scalar value of interest is already known to the user. Not much work has been done on guiding users to select the scalar values for such uncertainty analysis. Moreover, analyzing and visualizing a large collection of ensemble isocontours for a selected scalar value has its own challenges, and interpreting the visualizations of such large collections of isocontours is also a difficult task. In this work, we propose a new information-theoretic approach to addressing these issues. Using specific information measures that estimate the predictability and surprise of specific scalar values, we evaluate the overall uncertainty associated with all the scalar values in an ensemble system. This helps the scientist to understand the effects of uncertainty on different data features. To understand in finer detail the contribution of individual members towards the uncertainty of the ensemble isocontours of a selected scalar value, we propose a conditional-entropy-based algorithm to quantify the individual contributions. This can help simplify analysis and visualization for systems with more members by identifying the members contributing the most towards overall uncertainty. We demonstrate the efficacy of our method by applying it to real-world datasets from material science, weather forecasting and ocean simulation experiments. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)
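One minimal way to score how uncertain an isovalue is across an ensemble is the per-cell Shannon entropy of the inside/outside indicator, summed over the grid. This is a simplified proxy sketched for illustration; it is not the paper's specific predictability/surprise measures or its conditional-entropy member-attribution algorithm, and the toy 1-D "fields" below are invented data.

```python
import math

def crossing_entropy(ensemble, isovalue):
    """Sum over grid cells of the binary entropy of the indicator
    'member value >= isovalue'.  ensemble: list of equal-length lists,
    one flattened scalar field per ensemble member."""
    n = len(ensemble)
    cells = len(ensemble[0])
    total = 0.0
    for c in range(cells):
        p = sum(1 for m in ensemble if m[c] >= isovalue) / n
        if 0.0 < p < 1.0:  # cells where all members agree contribute 0
            total += -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return total

# Toy ensemble of four tiny "fields": an isovalue on which members
# disagree scores higher than one on which they all agree.
ens = [[0.1, 0.4, 0.9], [0.2, 0.6, 0.8], [0.1, 0.5, 0.9], [0.3, 0.45, 0.7]]
print(crossing_entropy(ens, 0.5))   # members disagree only in the middle cell
print(crossing_entropy(ens, 0.05))  # every member is above 0.05 everywhere
```

Sweeping the isovalue and plotting this score would give a crude version of the guidance the paper describes: isovalues with high scores are the ones whose isocontours vary most across members.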

Open Access Article: Turbulence through the Spyglass of Bilocal Kinetics
Entropy 2018, 20(7), 539; https://doi.org/10.3390/e20070539
Received: 13 June 2018 / Revised: 13 July 2018 / Accepted: 16 July 2018 / Published: 20 July 2018
PDF Full-text (223 KB) | HTML Full-text | XML Full-text
Abstract
In two recent papers we introduced a generalization of Boltzmann’s assumption of molecular chaos based on a criterion of maximum entropy, which allowed setting up a bilocal version of Boltzmann’s kinetic equation. The present paper aims to investigate how the essentially non-local character of turbulent flows can be addressed through this bilocal kinetic description, instead of the more standard approach through the local Euler/Navier–Stokes equation. Balance equations appropriate to this kinetic scheme are derived and closed so as to provide bilocal hydrodynamical equations at the non-viscous order. These equations essentially consist of two copies of the usual local equations, but coupled through a bilocal pressure tensor. Interestingly, our formalism automatically produces a closed transport equation for this coupling term. Full article
Open Access Article: When Photons Are Lying about Where They Have Been
Entropy 2018, 20(7), 538; https://doi.org/10.3390/e20070538
Received: 1 June 2018 / Revised: 15 July 2018 / Accepted: 16 July 2018 / Published: 19 July 2018
PDF Full-text (849 KB) | HTML Full-text | XML Full-text
Abstract
The history of photons in a nested Mach–Zehnder interferometer with an inserted Dove prism is analyzed. It is argued that the Dove prism does not change the past of the photon. Alonso and Jordan correctly point out that an experiment by Danan et al. demonstrating the past of the photon in a nested interferometer will show different results when the Dove prism is inserted. The reason, however, is not that the past is changed, but that the experimental demonstration becomes incorrect. An explanation is given of the signal coming from a place where the photon was (almost) not present. The Bohmian trajectory of the photon is specified. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

Open Access Article: Thermal Characteristics of Staggered Double-Layer Microchannel Heat Sink
Entropy 2018, 20(7), 537; https://doi.org/10.3390/e20070537
Received: 14 June 2018 / Revised: 5 July 2018 / Accepted: 12 July 2018 / Published: 19 July 2018
PDF Full-text (1678 KB) | HTML Full-text | XML Full-text
Abstract
The present work numerically studies the thermal characteristics of a staggered double-layer microchannel heat sink (DLMCHS) with an offset between the upper and lower layers of microchannels in the width direction, and investigates the effects of inlet velocity and geometric parameters, including the offset between the two layers of microchannels, the vertical rib thickness and the microchannel aspect ratio, on the thermal resistance of the staggered DLMCHS. It is found that the thermal resistance of the staggered DLMCHS increases with increasing offset when the vertical rib thickness is small, but first decreases and then increases with increasing offset when the vertical rib thickness is large enough. Furthermore, the thermal resistance decreases with increasing offset when the aspect ratio is small, but increases with increasing offset when the aspect ratio is large enough. Thus, for a DLMCHS with a small microchannel aspect ratio and a large vertical rib thickness, offsetting the upper and lower layers of microchannels in the width direction is a potential method to reduce thermal resistance and improve thermal performance. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)

Open Access Article: Attacks against a Simplified Experimentally Feasible Semiquantum Key Distribution Protocol
Entropy 2018, 20(7), 536; https://doi.org/10.3390/e20070536
Received: 16 June 2018 / Revised: 10 July 2018 / Accepted: 16 July 2018 / Published: 18 July 2018
PDF Full-text (250 KB) | HTML Full-text | XML Full-text
Abstract
A semiquantum key distribution (SQKD) protocol makes it possible for a quantum party and a classical party to generate a secret shared key. However, many existing SQKD protocols are not experimentally feasible in a secure way using current technology. An experimentally feasible SQKD protocol, “classical Alice with a controllable mirror” (the “Mirror protocol”), has recently been presented and proved completely robust, but it is more complicated than other SQKD protocols. Here we prove a simpler variant of the Mirror protocol (the “simplified Mirror protocol”) to be completely non-robust by presenting two possible attacks against it. Our results show that the complexity of the Mirror protocol is at least partly necessary for achieving robustness. Full article
Open Access Article: A Simple Chaotic Map-Based Image Encryption System Using Both Plaintext Related Permutation and Diffusion
Entropy 2018, 20(7), 535; https://doi.org/10.3390/e20070535
Received: 11 June 2018 / Revised: 13 July 2018 / Accepted: 16 July 2018 / Published: 18 July 2018
Cited by 1 | PDF Full-text (19276 KB) | HTML Full-text | XML Full-text
Abstract
Recently, to overcome the security flaws of most non-plaintext-related chaos-based image cryptosystems, which cannot efficiently resist powerful chosen/known-plaintext attacks or differential attacks owing to low plaintext sensitivity, many plaintext-related chaos-based image cryptosystems have been developed. Most cryptosystems that adopt the traditional permutation–diffusion structure still have drawbacks and security flaws: (1) most plaintext-related image encryption schemes using only a plaintext-related confusion operation or only a plaintext-related diffusion operation relate to the plaintext inadequately and cannot achieve high plaintext sensitivity; (2) in some algorithms, the generation of the security key that needs to be sent to the receiver is determined by the original image, so these algorithms may not be applicable to real-time image encryption; (3) most plaintext-related image encryption schemes are less efficient because more than one round of permutation–diffusion operations is required to achieve high security. To obtain high security and efficiency, a simple chaos-based color image encryption system using both plaintext-related permutation and diffusion is presented in this paper. In our cryptosystem, the values of the parameters of the cat map used in the permutation stage are related to the plain image, and the parameters of the cat map are also influenced by the diffusion operation. Thus, both the permutation stage and the diffusion stage are related to the plain image, which yields high key sensitivity and plaintext sensitivity and efficiently resists chosen/known-plaintext attacks and differential attacks. Furthermore, only one round of plaintext-related permutation and diffusion is performed on the original image to obtain the cipher image, so the proposed scheme is highly efficient. Complete simulations are given, and the simulation results demonstrate the security and efficiency of the proposed scheme. Full article
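The permutation stage described above is built on the cat map. As a minimal sketch, the generalized Arnold cat map below scrambles pixel positions of an N×N image; in the paper the map's parameters are derived from the plain image and coupled to the diffusion stage, whereas here a and b are fixed, illustrative values, and the diffusion stage is omitted entirely.

```python
def cat_map_permute(img, a, b, rounds=1):
    """Permute an N x N image (list of lists) with the generalized Arnold
    cat map  (x, y) -> (x + a*y, b*x + (a*b + 1)*y)  mod N.
    The map matrix has determinant 1, so the permutation is invertible."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                nx = (x + a * y) % n
                ny = (b * x + (a * b + 1) * y) % n
                out[nx][ny] = img[x][y]
        img = out
    return img

def cat_map_invert(img, a, b, rounds=1):
    """Undo cat_map_permute using the inverse matrix [[ab+1, -a], [-b, 1]]."""
    n = len(img)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                nx = ((a * b + 1) * x - a * y) % n
                ny = (-b * x + y) % n
                out[nx][ny] = img[x][y]
        img = out
    return img

# Round trip on a toy 4x4 "image" of distinct pixel values.
img = [[4 * r + c for c in range(4)] for r in range(4)]
scrambled = cat_map_permute(img, 1, 1, rounds=2)
restored = cat_map_invert(scrambled, 1, 1, rounds=2)
```

Because the map is position-only, a real cryptosystem pairs it with a diffusion step that changes pixel values; the abstract's point is that both steps should depend on the plaintext.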

Open Access Article: Symmetry and Correspondence of Algorithmic Complexity over Geometric, Spatial and Topological Representations
Entropy 2018, 20(7), 534; https://doi.org/10.3390/e20070534
Received: 25 June 2018 / Revised: 12 July 2018 / Accepted: 14 July 2018 / Published: 18 July 2018
PDF Full-text (871 KB) | HTML Full-text | XML Full-text
Abstract
We introduce a definition of algorithmic symmetry in the context of geometric and spatial complexity, able to capture mathematical aspects of different objects, using polyominoes and polyhedral graphs as a case study. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov–Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity—both theoretical and numerical—with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize spatial, geometric, symmetric and topological properties of mathematical objects and graphs. Full article
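Of the two estimators the abstract mentions, the lossless-compression one is easy to sketch: the compressed size of an object's string encoding is a crude upper bound on its algorithmic complexity, so highly symmetric objects should compress better than irregular ones. The sketch below uses zlib on toy binary strings; the paper's AP-based estimator and its polyomino/graph encodings are not reproduced here.

```python
import random
import zlib

def compressed_size(s):
    """Crude upper-bound proxy for algorithmic complexity:
    length in bytes of the zlib-compressed encoding of s."""
    return len(zlib.compress(s.encode(), 9))

# A highly symmetric (periodic) string compresses far better than a
# pseudorandom one of the same length, mirroring the link between
# symmetry and low algorithmic complexity discussed in the abstract.
symmetric = "01" * 512
rng = random.Random(0)
noisy = "".join(rng.choice("01") for _ in range(1024))
print(compressed_size(symmetric), compressed_size(noisy))
```

Compression-based estimates are known to miss non-statistical regularities, which is precisely why the paper complements them with the Algorithmic Probability method.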

Open Access Article: From Identity to Uniqueness: The Emergence of Increasingly Higher Levels of Hierarchy in the Process of the Matter Evolution
Entropy 2018, 20(7), 533; https://doi.org/10.3390/e20070533
Received: 25 June 2018 / Revised: 7 July 2018 / Accepted: 16 July 2018 / Published: 17 July 2018
Cited by 1 | PDF Full-text (2888 KB) | HTML Full-text | XML Full-text
Abstract
This article focuses on several factors of complification that worked during the evolution of our Universe. During the early stages of this evolution, up to the Recombination Era, it was the laws of quantum mechanics; during the Dark Ages, gravitation; during the chemical evolution, diversification; and during the biological and human evolution, a process of distinctifying. The main event in the evolution of the Universe was the emergence of new levels of hierarchy, which together constitute the process of hierarchogenesis. This process contains 14 such events so far, and its dynamics is presented graphically by a very regular and smooth curve. The function that the curve presents is odd, i.e., symmetric about its central part, due to the similarity between the pattern of deceleration during the cosmic/chemical evolution (the first half of the general evolution) and that of acceleration during the biological/human evolution (its second half). The main driver of hierarchogenesis as described by this odd function is the counteraction and counterbalance of attraction and repulsion, which take various forms at the different hierarchical levels. The direction and pace of the irreversible and inevitable increase of the Universe’s complexity, in accordance with the general law of complification, result from the consistent influence of all these factors. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open Access Editorial: Thermodynamics in Material Science
Entropy 2018, 20(7), 532; https://doi.org/10.3390/e20070532
Received: 28 June 2018 / Accepted: 28 June 2018 / Published: 16 July 2018
PDF Full-text (156 KB) | HTML Full-text | XML Full-text
(This article belongs to the Special Issue Thermodynamics in Material Science)
Open Access Article: Automatic Analysis of Archimedes’ Spiral for Characterization of Genetic Essential Tremor Based on Shannon’s Entropy and Fractal Dimension
Entropy 2018, 20(7), 531; https://doi.org/10.3390/e20070531
Received: 20 May 2018 / Revised: 11 July 2018 / Accepted: 11 July 2018 / Published: 16 July 2018
PDF Full-text (3867 KB) | HTML Full-text | XML Full-text
Abstract
Among neural disorders related to movement, essential tremor has the highest prevalence; in fact, it is twenty times more common than Parkinson’s disease. Drawing the Archimedes’ spiral is the gold standard test to distinguish between the two pathologies. The aim of this paper is to select non-linear biomarkers based on the analysis of digital drawings. It belongs to a larger cross study for early diagnosis of essential tremor that also includes genetic information. The proposed automatic analysis system is a hybrid solution: Machine Learning paradigms combined with automatic selection of features based on statistical tests using medical criteria. Moreover, the selected biomarkers comprise not only commonly used linear features (static and dynamic), but also non-linear ones: Shannon entropy and fractal dimension. The results are promising, and the developed tool can easily be adapted to users; from social and economic points of view, it could be very helpful in real, complex environments. Full article
(This article belongs to the Special Issue Selected Papers from IWOBI—Entropy-Based Applied Signal Processing)

Open Access Article: On Chaos in the Fractional-Order Discrete-Time Unified System and Its Control Synchronization
Entropy 2018, 20(7), 530; https://doi.org/10.3390/e20070530
Received: 9 June 2018 / Revised: 11 July 2018 / Accepted: 12 July 2018 / Published: 15 July 2018
Cited by 1 | PDF Full-text (2770 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a fractional map based on the integer-order unified map. The chaotic behavior of the proposed map is analyzed by means of bifurcation plots, and experimental bounds are placed on the parameters and the fractional order. Different control laws are proposed to force the states to zero asymptotically and to achieve the complete synchronization of a pair of fractional unified maps with identical or nonidentical parameters. Numerical results are used throughout the paper to illustrate the findings. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)

Open Access Article: Curvature Invariants of Statistical Submanifolds in Kenmotsu Statistical Manifolds of Constant ϕ-Sectional Curvature
Entropy 2018, 20(7), 529; https://doi.org/10.3390/e20070529
Received: 11 June 2018 / Revised: 6 July 2018 / Accepted: 11 July 2018 / Published: 14 July 2018
Cited by 1 | PDF Full-text (288 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we consider statistical submanifolds of Kenmotsu statistical manifolds of constant ϕ-sectional curvature. For such submanifolds, we investigate curvature properties. We establish some inequalities involving the normalized δ-Casorati curvatures (extrinsic invariants) and the scalar curvature (intrinsic invariant). Moreover, we prove that the equality cases of the inequalities hold if and only if the imbedding curvature tensors h and h∗ of the submanifold (associated with the dual connections) satisfy h = h∗, i.e., the submanifold is totally geodesic with respect to the Levi–Civita connection. Full article
Open Access Article: Free Final Time Input Design Problem for Robust Entropy-Like System Parameter Estimation
Entropy 2018, 20(7), 528; https://doi.org/10.3390/e20070528
Received: 3 June 2018 / Revised: 6 July 2018 / Accepted: 12 July 2018 / Published: 14 July 2018
PDF Full-text (1735 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a novel method is proposed to design a free final time input signal, which is then used in the robust system identification process. The solution of the constrained optimal input design problem is based on the minimization of an extra state variable representing the free final time scaling factor, formulated in the Bolza functional form, subject to a D-efficiency constraint as well as an input energy constraint. The objective function used for the system identification model provides robustness against outlying data and was constructed using the so-called entropy-like estimator. The perturbation time interval has a significant impact on the cost of a real-life system identification experiment. The contribution of this work is to examine the economic trade-off between the constraints imposed on the input signal design and the experiment duration when undertaking an identification experiment under real operating conditions. The methodology is applicable to a general class of systems and is supported by numerical examples. Illustrative examples of the least squares and entropy-like estimators for system parameter validation, where measurements include additive white noise, are compared using ellipsoidal confidence regions. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open Access Article: Category Structure and Categorical Perception Jointly Explained by Similarity-Based Information Theory
Entropy 2018, 20(7), 527; https://doi.org/10.3390/e20070527
Received: 27 April 2018 / Revised: 8 July 2018 / Accepted: 10 July 2018 / Published: 14 July 2018
PDF Full-text (1097 KB) | HTML Full-text | XML Full-text
Abstract
Categorization is a fundamental information processing phenomenon in the brain. It is critical for animals to compress an abundance of stimulations into groups to react quickly and efficiently. In addition to labels, categories possess an internal structure: the goodness measures how well any element belongs to a category. Interestingly, this categorization leads to an altered perception referred to as categorical perception: for a given physical distance, items within a category are perceived closer than items in two different categories. A subtler effect is the perceptual magnet: discriminability is reduced close to the prototypes of a category and increased near its boundaries. Here, starting from predefined abstract categories, we naturally derive the internal structure of categories and the phenomenon of categorical perception, using an information theoretical framework that involves both probabilities and pairwise similarities between items. Essentially, we suggest that pairwise similarities between items are to be tuned to render some predefined categories as well as possible. However, constraints on these pairwise similarities only produce an approximate matching, which explains concurrently the notion of goodness and the warping of perception. Overall, we demonstrate that similarity-based information theory may offer a global and unified principled understanding of categorization and categorical perception simultaneously. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
Open AccessArticle Universal Features in Phonological Neighbor Networks
Entropy 2018, 20(7), 526; https://doi.org/10.3390/e20070526
Received: 22 May 2018 / Revised: 29 June 2018 / Accepted: 10 July 2018 / Published: 12 July 2018
PDF Full-text (760 KB) | HTML Full-text | XML Full-text
Abstract
Human speech perception involves transforming a continuous acoustic signal into discrete linguistically meaningful units (phonemes) while simultaneously causing a listener to activate words that are similar to the spoken utterance and to each other. The Neighborhood Activation Model posits that phonological neighbors (two
[...] Read more.
Human speech perception involves transforming a continuous acoustic signal into discrete linguistically meaningful units (phonemes) while simultaneously causing a listener to activate words that are similar to the spoken utterance and to each other. The Neighborhood Activation Model posits that phonological neighbors (two forms [words] that differ by one phoneme) compete significantly for recognition as a spoken word is heard. This definition of phonological similarity can be extended to an entire corpus of forms to produce a phonological neighbor network (PNN). We study PNNs for five languages: English, Spanish, French, Dutch, and German. Consistent with previous work, we find that the PNNs share a consistent set of topological features. Using an approach that generates random lexicons with increasing levels of phonological realism, we show that even random forms with minimal relationship to any real language, combined with only the empirical distribution of language-specific phonological form lengths, are sufficient to produce the topological properties observed in the real language PNNs. The resulting pseudo-PNNs are insensitive to the level of linguistic realism in the random lexicons but quite sensitive to the shape of the form length distribution. We therefore conclude that “universal” features seen across multiple languages are really string universals, not language universals, and arise primarily due to limitations in the kinds of networks generated by the one-step neighbor definition. Taken together, our results indicate that caution is warranted when linking the dynamics of human spoken word recognition to the topological properties of PNNs, and that the investigation of alternative similarity metrics for phonological forms should be a priority. Full article
(This article belongs to the Section Complexity)
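The one-step neighbor definition behind the PNNs above (two forms differing by a single phoneme substitution, insertion, or deletion) can be sketched as follows; the toy lexicon of phoneme tuples is hypothetical and not taken from the paper's corpora.

```python
from itertools import combinations

def is_neighbor(w1, w2):
    """One-step phonological neighbors: forms that differ by a single
    phoneme substitution, insertion, or deletion."""
    if w1 == w2:
        return False
    la, lb = len(w1), len(w2)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # same length: exactly one substitution
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if la > lb:   # make w1 the shorter form
        w1, w2 = w2, w1
    # insertion/deletion: dropping one phoneme of the longer form yields the shorter
    return any(w2[:i] + w2[i + 1:] == w1 for i in range(len(w2)))

def build_pnn(lexicon):
    """Adjacency sets of the phonological neighbor network over a lexicon
    of phoneme tuples."""
    edges = {w: set() for w in lexicon}
    for a, b in combinations(lexicon, 2):
        if is_neighbor(a, b):
            edges[a].add(b)
            edges[b].add(a)
    return edges
```

Because every pair of forms is compared, this naive construction is quadratic in lexicon size; real corpora would call for indexing by form length first.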
Open AccessArticle A New Hyperchaotic System-Based Design for Efficient Bijective Substitution-Boxes
Entropy 2018, 20(7), 525; https://doi.org/10.3390/e20070525
Received: 25 May 2018 / Revised: 28 June 2018 / Accepted: 9 July 2018 / Published: 12 July 2018
Cited by 1 | PDF Full-text (2351 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present a novel method to construct cryptographically strong bijective substitution-boxes based on the complicated dynamics of a new hyperchaotic system. The new hyperchaotic system was found to have good characteristics when compared with other systems utilized for S-box construction.
[...] Read more.
In this paper, we present a novel method to construct cryptographically strong bijective substitution-boxes based on the complicated dynamics of a new hyperchaotic system. The new hyperchaotic system was found to have good characteristics when compared with other systems utilized for S-box construction. The performance assessment of the proposed S-box method was carried out based on criteria such as high nonlinearity, a good avalanche effect, the bit independence criterion, and low differential uniformity. The proposed method was also analyzed for the batch-generation of 8 × 8 S-boxes. The analyses found that, through the proposed purely chaos-based method, an 8 × 8 S-box with a maximum average nonlinearity of 108.5, or S-boxes with differential uniformity as low as 8, can be retrieved. Moreover, small-sized S-boxes with high nonlinearity and low differential uniformity are also obtainable. A performance comparison of the anticipated method with recent S-box proposals proved its dominance and effectiveness for a strong bijective S-box construction. Full article
(This article belongs to the Section Information Theory)
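Of the S-box criteria listed above, differential uniformity is the most compact to state in code. The sketch below computes it for any bijective lookup-table S-box; it is a generic metric computation, not the paper's hyperchaotic construction.

```python
def differential_uniformity(sbox):
    """Differential uniformity of a lookup-table S-box over GF(2)^n:
    the maximum, over nonzero input differences a and all output
    differences b, of the number of x with S(x ^ a) ^ S(x) == b.
    Lower is better (the abstract reports values as low as 8 for 8x8)."""
    n = len(sbox)
    worst = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst
```

For example, the identity mapping is maximally weak (every x satisfies S(x ^ a) ^ S(x) = a), while a well-designed 4-bit S-box reaches the optimal value of 4.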
Open AccessArticle The Entropy of Deep Eutectic Solvent Formation
Entropy 2018, 20(7), 524; https://doi.org/10.3390/e20070524
Received: 29 May 2018 / Revised: 25 June 2018 / Accepted: 11 July 2018 / Published: 12 July 2018
PDF Full-text (196 KB) | HTML Full-text | XML Full-text
Abstract
The standard entropies S°298 of deep eutectic solvents (DESs), which are liquid binary mixtures of a hydrogen bond acceptor component and a hydrogen bond donor one, are calculated from their molecular volumes, derived from their densities or crystal structures. These values are
[...] Read more.
The standard entropies S°298 of deep eutectic solvents (DESs), which are liquid binary mixtures of a hydrogen bond acceptor component and a hydrogen bond donor one, are calculated from their molecular volumes, derived from their densities or crystal structures. These values are compared with those of the components—pro-rated according to the DES composition—to obtain the standard entropies of DES formation ΔfS. These quantities are positive, due to the increased number and kinds of hydrogen bonds present in the DESs relative to those in the components. The ΔfS values are also compared with the freezing point depressions of the DESs ΔfusT/K, but no general conclusions on their mutual relationship could be drawn. Full article
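The pro-rating of component entropies described above amounts to a mole-fraction-weighted difference. A minimal sketch, with purely hypothetical entropy values (J K⁻¹ mol⁻¹) standing in for the tabulated S°298 data:

```python
def entropy_of_des_formation(s_des, components):
    """ΔfS = S°298(DES) − Σ x_i · S°298(component i), where the sum runs
    over the HBA and HBD with their mole fractions x_i in the mixture."""
    return s_des - sum(x * s for x, s in components)

# Hypothetical 1:2 HBA:HBD mixture (values are illustrative only)
dfs = entropy_of_des_formation(400.0, [(1 / 3, 300.0), (2 / 3, 350.0)])
```

A positive result corresponds to the paper's finding that ΔfS > 0 for the DESs studied.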
Open AccessArticle Generalized Grey Target Decision Method for Mixed Attributes Based on Kullback-Leibler Distance
Entropy 2018, 20(7), 523; https://doi.org/10.3390/e20070523
Received: 14 May 2018 / Revised: 22 June 2018 / Accepted: 6 July 2018 / Published: 12 July 2018
PDF Full-text (578 KB) | HTML Full-text | XML Full-text
Abstract
A novel generalized grey target decision method for mixed attributes based on Kullback-Leibler (K-L) distance is proposed. The proposed approach involves the following steps: first, all indices are converted into index binary connection number vectors; second, the two-tuple (determinacy, uncertainty) numbers originated from
[...] Read more.
A novel generalized grey target decision method for mixed attributes based on Kullback-Leibler (K-L) distance is proposed. The proposed approach involves the following steps: first, all indices are converted into index binary connection number vectors; second, the two-tuple (determinacy, uncertainty) numbers originating from the index binary connection number vectors are obtained; third, the positive and negative target centers of the two-tuple (determinacy, uncertainty) numbers are calculated; fourth, the K-L distances of all alternatives to their positive and negative target centers are integrated by the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method; finally, the decision is based on the integrated value on a bigger-the-better basis. A case study exemplifies the proposed approach. Full article
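The distance-integration step above can be sketched as follows, assuming each alternative has already been reduced to a normalized (determinacy, uncertainty) distribution; the target centers and alternative values are hypothetical, and the closeness ratio is the standard TOPSIS relative closeness applied to K-L distances.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def rank_alternatives(alts, pos_center, neg_center):
    """Integrate K-L distances to the positive and negative target centers
    via TOPSIS relative closeness; larger closeness is better."""
    closeness = {}
    for name, dist in alts.items():
        d_pos = kl(dist, pos_center)   # distance to positive target center
        d_neg = kl(dist, neg_center)   # distance to negative target center
        closeness[name] = d_neg / (d_pos + d_neg)
    return sorted(closeness, key=closeness.get, reverse=True)

# Hypothetical two-tuple (determinacy, uncertainty) distributions
ranking = rank_alternatives(
    {"A": [0.85, 0.15], "B": [0.2, 0.8]},
    pos_center=[0.9, 0.1],
    neg_center=[0.1, 0.9],
)
```

Alternative A, being closer to the positive target center, ranks first on the bigger-the-better basis.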
Open AccessArticle An Image Fusion Method Based on Sparse Representation and Sum Modified-Laplacian in NSCT Domain
Entropy 2018, 20(7), 522; https://doi.org/10.3390/e20070522
Received: 18 May 2018 / Revised: 24 June 2018 / Accepted: 9 July 2018 / Published: 11 July 2018
PDF Full-text (5390 KB) | HTML Full-text | XML Full-text
Abstract
Multi-modality image fusion provides more comprehensive and sophisticated information in modern medical diagnosis, remote sensing, video surveillance, etc. Traditional multi-scale transform (MST) based image fusion solutions have difficulties in the selection of the decomposition level and suffer contrast loss in the fused image. At the
[...] Read more.
Multi-modality image fusion provides more comprehensive and sophisticated information in modern medical diagnosis, remote sensing, video surveillance, etc. Traditional multi-scale transform (MST) based image fusion solutions have difficulties in the selection of the decomposition level and suffer contrast loss in the fused image. At the same time, traditional sparse representation (SR) based image fusion methods suffer from the weak representation ability of a fixed dictionary. In order to overcome these deficiencies of MST- and SR-based methods, this paper proposes an image fusion framework which integrates the nonsubsampled contourlet transform (NSCT) into sparse representation. In this fusion framework, NSCT is applied to source image decomposition to obtain the corresponding low- and high-pass coefficients. It fuses the low- and high-pass coefficients by using SR and the Sum Modified-Laplacian (SML), respectively. NSCT inversely transforms the fused coefficients to obtain the final fused image. In this framework, a principal component analysis (PCA) is implemented in dictionary training to reduce the dimension of the learned dictionary and the computation costs. A novel high-pass fusion rule based on SML is applied to suppress pseudo-Gibbs phenomena around singularities of the fused image. Compared to three mainstream image fusion solutions, the proposed solution achieves better performance on structural similarity and detail preservation in fused images. Full article
(This article belongs to the Special Issue Women in Information Theory 2018)
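The SML-based high-pass fusion rule can be illustrated independently of the NSCT decomposition (which requires a dedicated toolbox). A minimal sketch with a per-pixel modified Laplacian, using a window of one pixel for brevity where the paper aggregates over a neighborhood:

```python
import numpy as np

def sml(band):
    """Per-pixel modified Laplacian of a sub-band:
    |2I(x,y) − I(x−1,y) − I(x+1,y)| + |2I(x,y) − I(x,y−1) − I(x,y+1)|."""
    p = np.pad(band.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
            + np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def fuse_highpass(band_a, band_b):
    """Choose, per coefficient, the sub-band value with the larger SML."""
    return np.where(sml(band_a) >= sml(band_b), band_a, band_b)
```

The rule keeps, at each location, the coefficient from whichever source band carries more local detail, which is what suppresses contrast loss in the fused high-pass layers.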
Open AccessArticle A New Compound Fault Feature Extraction Method Based on Multipoint Kurtosis and Variational Mode Decomposition
Entropy 2018, 20(7), 521; https://doi.org/10.3390/e20070521
Received: 21 June 2018 / Revised: 6 July 2018 / Accepted: 10 July 2018 / Published: 10 July 2018
Cited by 1 | PDF Full-text (6392 KB) | HTML Full-text | XML Full-text
Abstract
Due to the weak entropy of the vibration signal in the strong noise environment, it is very difficult to extract compound fault features. EMD (Empirical Mode Decomposition), EEMD (Ensemble Empirical Mode Decomposition) and LMD (Local Mean Decomposition) are widely used in compound fault
[...] Read more.
Due to the weak entropy of the vibration signal in a strong noise environment, it is very difficult to extract compound fault features. EMD (Empirical Mode Decomposition), EEMD (Ensemble Empirical Mode Decomposition) and LMD (Local Mean Decomposition) are widely used in compound fault feature extraction. Although they can decompose different characteristic components into each IMF (Intrinsic Mode Function), there is still serious mode mixing because of the noise. VMD (Variational Mode Decomposition) rests on a rigorous mathematical theory and can alleviate the mode mixing. Each characteristic component of VMD contains a unique center frequency, but VMD is a parametric decomposition method: an improper value of the mode number K leads to over-decomposition or under-decomposition, so the number of decomposition levels of VMD needs to be determined adaptively. The commonly used adaptive methods are particle swarm optimization and the ant colony algorithm, but they consume a lot of computing time. This paper proposes a compound fault feature extraction method based on Multipoint Kurtosis (MKurt)-VMD. Firstly, MED (Minimum Entropy Deconvolution) denoises the vibration signal in the strong noise environment. Secondly, multipoint kurtosis extracts the periodic multiple faults, and a multi-periodic vector is further constructed to determine the number of impulse periods, which determines the K value of VMD. Thirdly, the noise-reduced signal is processed by VMD and the fault features are further determined by FFT. Finally, the proposed compound fault feature extraction method alleviates the mode mixing in comparison with EEMD. The validity of this method is further confirmed by processing the measured signal and extracting the compound fault features, such as the gear spalling and the roller fault; their fault periods are 22.4 and 111.2, respectively, and the corresponding frequencies are 360 Hz and 72 Hz, respectively. Full article
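The idea of scoring candidate impulse periods to fix the mode number K can be illustrated with a crude comb-response score; this is a simplified stand-in for the multipoint kurtosis statistic, not MKurt itself, and the envelope signal in the usage example is synthetic.

```python
import numpy as np

def period_score(env, period):
    """Score a candidate impulse period by averaging the signal envelope
    at integer multiples of that period (comb response)."""
    idx = np.arange(period, len(env), period)
    return env[idx].mean()

def detect_periods(env, candidates, top=2):
    """Return the `top` candidate periods with the strongest comb response;
    the count of dominant periods can then serve as the VMD mode number K."""
    scores = {p: period_score(env, p) for p in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Synthetic envelope: impulses every 20 samples
env = np.zeros(200)
env[20::20] = 1.0
best = detect_periods(env, candidates=[15, 20, 27], top=1)
```

A true-period candidate lines up with every impulse, so its comb response dominates mismatched candidates.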
Open AccessArticle Interfacial Properties of Active-Passive Polymer Mixtures
Entropy 2018, 20(7), 520; https://doi.org/10.3390/e20070520
Received: 29 May 2018 / Revised: 3 July 2018 / Accepted: 8 July 2018 / Published: 10 July 2018
PDF Full-text (2522 KB) | HTML Full-text | XML Full-text
Abstract
Active matter consists of particles that dissipate energy, from their own sources, in the form of mechanical work on their surroundings. Recent interest in active-passive polymer mixtures has been driven by their relevance in phase separation of (e.g., transcriptionally) active and inactive (transcriptionally
[...] Read more.
Active matter consists of particles that dissipate energy, from their own sources, in the form of mechanical work on their surroundings. Recent interest in active-passive polymer mixtures has been driven by their relevance in phase separation of (e.g., transcriptionally) active and inactive (transcriptionally silent) DNA strands in nuclei of living cells. In this paper, we study the interfacial properties of the phase separated steady states of the active-passive polymer mixtures and compare them with equilibrium phase separation. We model the active constituents by assigning them stronger-than-thermal fluctuations. We demonstrate that the entropy production is an accurate indicator of the phase transition. We then construct phase diagrams and analyze kinetic properties of the particles as a function of the distance from the interface. Studying the interface fluctuations, we find that they follow the capillary waves spectrum. This allows us to establish a mechanistic definition of the interfacial stiffness and its dependence on the relative level of activity with respect to the passive constituents. We show how the interfacial width depends on the activity ratio and comment on the finite size effects. Our results highlight similarities and differences of the non-equilibrium steady states with an equilibrium phase separated polymer mixture with a lower critical solution temperature. We present several directions in which the non-equilibrium system can be studied further and point out interesting observations that indicate general principles behind the non-equilibrium phase separation. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics of Interfaces)
Open AccessArticle Projected Affinity Values for Nyström Spectral Clustering
Entropy 2018, 20(7), 519; https://doi.org/10.3390/e20070519
Received: 19 May 2018 / Revised: 6 July 2018 / Accepted: 9 July 2018 / Published: 10 July 2018
PDF Full-text (887 KB) | HTML Full-text | XML Full-text
Abstract
In kernel methods, Nyström approximation is a popular way of calculating out-of-sample extensions and can be further applied to large-scale data clustering and classification tasks. Given a new data point, Nyström employs its empirical affinity vector, k, for calculation. This vector is
[...] Read more.
In kernel methods, Nyström approximation is a popular way of calculating out-of-sample extensions and can be further applied to large-scale data clustering and classification tasks. Given a new data point, Nyström employs its empirical affinity vector, k, for calculation. This vector is assumed to be a proper measurement of the similarity between the new point and the training set. In this paper, we suggest replacing the affinity vector by its projections on the leading eigenvectors learned from the training set, i.e., using k* = ∑_{i=1}^{c} (k^T u_i) u_i instead, where u_i is the i-th eigenvector of the training set and c is the number of eigenvectors used, which is typically equal to the number of classes specified by the user. Our work is motivated by the constraints that, in kernel space, the kernel-mapped new point should (a) also lie on the unit sphere defined by the Gaussian kernel and (b) generate training set affinity values close to k. These two constraints define a Quadratic Optimization Over a Sphere (QOOS) problem. In this paper, we prove that the projection on the leading eigenvectors, rather than the original affinity vector, is the solution to the QOOS problem. The experimental results show that the proposed replacement of k by k* slightly improves the performance of the Nyström approximation. Compared with other affinity matrix modification methods, our k* obtains comparable or higher clustering performance in terms of accuracy and Normalized Mutual Information (NMI). Full article
(This article belongs to the Section Information Theory)
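The proposed replacement of k by its projection on the c leading eigenvectors is a few lines of linear algebra; the sketch below assumes the eigenvectors are supplied as columns sorted by decreasing eigenvalue of the training affinity matrix.

```python
import numpy as np

def project_affinity(k, U, c):
    """Project the empirical affinity vector k onto the c leading
    eigenvectors: k* = sum_{i=1}^{c} (k^T u_i) u_i."""
    Uc = U[:, :c]            # leading eigenvectors as columns
    return Uc @ (Uc.T @ k)   # orthogonal projection onto their span
```

If k already lies in the span of the leading eigenvectors the projection leaves it unchanged; any component along the trailing eigenvectors is discarded.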
Open AccessArticle Topographic Reconfiguration of Local and Shared Information in Anesthetic-Induced Unconsciousness
Entropy 2018, 20(7), 518; https://doi.org/10.3390/e20070518
Received: 18 May 2018 / Revised: 1 July 2018 / Accepted: 2 July 2018 / Published: 10 July 2018
PDF Full-text (6329 KB) | HTML Full-text | XML Full-text
Abstract
Theoretical consideration predicts that the alteration of local and shared information in the brain is a key element in the mechanism of anesthetic-induced unconsciousness. Ordinal pattern analyses, such as permutation entropy (PE) and symbolic mutual information (SMI), have been successful in quantifying local
[...] Read more.
Theoretical consideration predicts that the alteration of local and shared information in the brain is a key element in the mechanism of anesthetic-induced unconsciousness. Ordinal pattern analyses, such as permutation entropy (PE) and symbolic mutual information (SMI), have been successful in quantifying local and shared information in neurophysiological data; however, they have rarely been applied to altered states of consciousness, especially to data obtained with functional magnetic resonance imaging (fMRI). PE and SMI analysis, together with the superb spatial resolution of fMRI recording, enables us to explore the local information of specific brain areas, the shared information between the areas, and the relationship between the two. Given the spatially divergent action of anesthetics on regional brain activity, we hypothesized that anesthesia would differentially influence entropy (PE) and shared information (SMI) across various brain areas, which may represent fundamental, mechanistic indicators of loss of consciousness. fMRI data were collected from 15 healthy participants during four states: wakefulness (W), light (conscious) sedation (L), deep (unconscious) sedation (D), and recovery (R). Sedation was produced by the common, clinically used anesthetic, propofol. Firstly, we found that global PE decreased from W to D, and increased from D to R. The PE was differentially affected across the brain areas; specifically, the PE in the subcortical network was reduced more than in the cortical networks. Secondly, SMI was also differentially affected in different areas, as revealed by the reconfiguration of its spatial pattern (topographic structure). The topographic structures of SMI in the conscious states W, L, and R were distinctively different from that of the unconscious state D. Thirdly, PE and SMI were positively correlated in W, L, and R, whereas this correlation was disrupted in D. Lastly, PE changes occurred preferentially in highly connected hub regions. These findings advance our understanding of brain dynamics and information exchange, emphasizing the importance of topographic structure and the relationship of local and shared information in anesthetic-induced unconsciousness. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
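The permutation entropy used above has a compact definition: the Shannon entropy of the distribution of ordinal patterns of a given length in the series (Bandt–Pompe). A minimal sketch, with the normalization to [0, 1] that makes values comparable across pattern orders:

```python
import math
from collections import Counter

def permutation_entropy(x, order=3, normalize=True):
    """Permutation entropy of a time series: Shannon entropy of the
    distribution of length-`order` ordinal patterns."""
    n = len(x) - order + 1
    patterns = Counter(
        tuple(sorted(range(order), key=lambda j: x[i + j])) for i in range(n)
    )
    pe = -sum((c / n) * math.log2(c / n) for c in patterns.values())
    if normalize:
        pe /= math.log2(math.factorial(order))  # max entropy over order! patterns
    return pe
```

A monotone series produces a single ordinal pattern and hence zero entropy, while an irregular series approaches the normalized maximum of 1.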
Open AccessArticle Microstructure and Mechanical Properties of Particulate Reinforced NbMoCrTiAl High Entropy Based Composite
Entropy 2018, 20(7), 517; https://doi.org/10.3390/e20070517
Received: 13 June 2018 / Revised: 5 July 2018 / Accepted: 6 July 2018 / Published: 10 July 2018
PDF Full-text (2586 KB) | HTML Full-text | XML Full-text
Abstract
A novel metal matrix composite based on the NbMoCrTiAl high entropy alloy (HEA) was designed by the in-situ formation method. The microstructure, phase evolution, and compression mechanical properties at room temperature of the composite are investigated in detail. The results confirmed that the
[...] Read more.
A novel metal matrix composite based on the NbMoCrTiAl high entropy alloy (HEA) was designed by the in-situ formation method. The microstructure, phase evolution, and room-temperature compressive mechanical properties of the composite were investigated in detail. The results confirmed that the composite was primarily composed of a body-centered cubic solid solution with a small amount of titanium carbides and alumina. With the presence of approximately 7.0 vol. % Al2O3 and 32.2 vol. % TiC reinforcing particles, the compressive fracture strength of the composite (1542 MPa) was increased by approximately 50% compared with that of the as-cast NbMoCrTiAl HEA. In consideration of its superior oxidation resistance, the P/M NbMoCrTiAl high entropy alloy composite can be considered a promising high-temperature structural material. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
Open AccessArticle A Quantum Ruler for Magnetic Deflectometry
Entropy 2018, 20(7), 516; https://doi.org/10.3390/e20070516
Received: 15 June 2018 / Revised: 4 July 2018 / Accepted: 6 July 2018 / Published: 9 July 2018
PDF Full-text (1641 KB) | HTML Full-text | XML Full-text
Abstract
Matter-wave near-field interference can imprint a nano-scale fringe pattern onto a molecular beam, which allows observing its shifts in the presence of even very small external forces. Here we demonstrate quantum interference of the provitamin 7-dehydrocholesterol and discuss the conceptual challenges of magnetic
[...] Read more.
Matter-wave near-field interference can imprint a nano-scale fringe pattern onto a molecular beam, which allows observing its shifts in the presence of even very small external forces. Here we demonstrate quantum interference of the provitamin 7-dehydrocholesterol and discuss the conceptual challenges of magnetic deflectometry in a near-field interferometer as a tool to explore photochemical processes within molecules whose center of mass is quantum delocalized. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Open AccessArticle On the Importance of Electron Diffusion in a Bulk-Matter Test of the Pauli Exclusion Principle
Entropy 2018, 20(7), 515; https://doi.org/10.3390/e20070515
Received: 8 June 2018 / Revised: 3 July 2018 / Accepted: 6 July 2018 / Published: 9 July 2018
PDF Full-text (550 KB) | HTML Full-text | XML Full-text
Abstract
The VIolation of Pauli (VIP) experiment (and its upgraded version, VIP-2) uses the Ramberg and Snow (RS) method (Phys. Lett. B 1990, 238, 438) to search for violations of the Pauli exclusion principle in the Gran Sasso underground laboratory. The RS method
[...] Read more.
The VIolation of Pauli (VIP) experiment (and its upgraded version, VIP-2) uses the Ramberg and Snow (RS) method (Phys. Lett. B 1990, 238, 438) to search for violations of the Pauli exclusion principle in the Gran Sasso underground laboratory. The RS method consists of feeding a copper conductor with a high direct current, so that the large number of newly-injected conduction electrons can interact with the copper atoms and possibly cascade electromagnetically to an already occupied atomic ground state if their wavefunction has the wrong symmetry with respect to the atomic electrons, emitting characteristic X-rays as they do so. In their original data analysis, RS considered a very simple path for each electron, which is sure to return a bound, albeit a very weak one, because it ignores the meandering random walks of the electrons as they move from the entrance to the exit of the copper sample. These complex walks bring the electrons close to many more atoms than in the RS calculation. Here, we consider the full description of these walks and show that this leads to a nontrivial and nonlinear X-ray emission rate. Finally, we obtain an improved bound, which sets much tighter constraints on the violation of the Pauli exclusion principle for electrons. Full article
(This article belongs to the Section Quantum Information)