Table of Contents

Entropy, Volume 21, Issue 2 (February 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open the file with the free Adobe Reader.
Displaying articles 1-89
Open Access Article Complex Dynamics in a Memcapacitor-Based Circuit
Entropy 2019, 21(2), 188; https://doi.org/10.3390/e21020188
Received: 11 January 2019 / Revised: 11 February 2019 / Accepted: 14 February 2019 / Published: 16 February 2019
Viewed by 110 | PDF Full-text (3104 KB)
Abstract
In this paper, a new memcapacitor model and its corresponding circuit emulator are proposed. Based on these, a chaotic oscillator is designed, and the system's dynamic characteristics are investigated both analytically and experimentally. Extreme multistability and coexisting attractors are observed in this complex system. The basins of attraction, multistability, bifurcations, Lyapunov exponents, and initial-condition-triggered similar bifurcation are analyzed. Finally, the memcapacitor-based chaotic oscillator is realized via circuit implementation, with experimental results presented. Full article
(This article belongs to the Section Complexity)
Open Access Article Entropy Analysis of Soccer Dynamics
Entropy 2019, 21(2), 187; https://doi.org/10.3390/e21020187
Received: 27 January 2019 / Revised: 8 February 2019 / Accepted: 13 February 2019 / Published: 16 February 2019
Viewed by 91 | PDF Full-text (1022 KB) | HTML Full-text | XML Full-text
Abstract
This paper adopts the information and fractional calculus tools for studying the dynamics of a national soccer league. A soccer league season is treated as a complex system (CS) with a state observable at discrete time instants, that is, at the time of rounds. The CS state, consisting of the goals scored by the teams, is processed by means of different tools, namely entropy, mutual information and Jensen–Shannon divergence. The CS behavior is visualized in 3-D maps generated by multidimensional scaling. The points on the maps represent rounds and their relative positioning allows for a direct interpretation of the results. Full article
(This article belongs to the Special Issue The Fractional View of Complexity)
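One of the tools named in the abstract above, the Jensen–Shannon divergence, can be sketched concretely. A minimal illustration, assuming invented goal-count distributions for two rounds (the actual study works with the full vector of goals scored by each team per round):

```python
import numpy as np

def jensen_shannon_divergence(p, q):
    """Jensen-Shannon divergence (in nats) between two discrete
    distributions; symmetric, and bounded above by ln(2)."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Invented goal-count distributions for two rounds: entry k is the
# probability that a team scores k goals in that round.
round_a = [0.30, 0.40, 0.20, 0.10]
round_b = [0.10, 0.30, 0.35, 0.25]
jsd = jensen_shannon_divergence(round_a, round_b)
```

Because the divergence is symmetric and bounded, the pairwise round-to-round divergences form a well-behaved dissimilarity matrix, which is exactly what multidimensional scaling needs to produce the 3-D maps described in the abstract.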
Open Access Review Entropic Effects in Polymer Nanocomposites
Entropy 2019, 21(2), 186; https://doi.org/10.3390/e21020186
Received: 14 December 2018 / Revised: 31 January 2019 / Accepted: 11 February 2019 / Published: 15 February 2019
Viewed by 133 | PDF Full-text (3946 KB)
Abstract
Polymer nanocomposite materials, consisting of a polymer matrix embedded with nanoscale fillers or additives that reinforce the inherent properties of the matrix polymer, play a key role in many industrial applications. Understanding the relation between thermodynamic interactions and the macroscopic morphologies of the composites allows for the optimization of design and mechanical processing. This review article summarizes recent advances in various aspects of entropic effects in polymer nanocomposites, highlighting molecular methods used to perform numerical simulations, the morphologies and phase behaviors of polymer matrices and fillers, and the characteristic parameters that correlate significantly with entropic interactions in polymer nanocomposites. Experimental findings and insights obtained from theory and simulation are combined to understand how entropic effects are turned into effective interparticle interactions that can be harnessed for tailoring the nanostructures of polymer nanocomposites. Full article
(This article belongs to the Special Issue 20th Anniversary of Entropy—Review Papers Collection)
Open Access Article Bounding the Plausibility of Physical Theories in a Device-Independent Setting via Hypothesis Testing
Entropy 2019, 21(2), 185; https://doi.org/10.3390/e21020185
Received: 15 December 2018 / Revised: 12 February 2019 / Accepted: 12 February 2019 / Published: 15 February 2019
Viewed by 121 | PDF Full-text (316 KB)
Abstract
The device-independent approach to physics is one where conclusions about physical systems (and hence about Nature) are drawn directly and solely from the observed correlations between measurement outcomes. This operational approach arose as a byproduct of Bell’s seminal work to distinguish, via a Bell test, quantum correlations from the set of correlations allowed by local-hidden-variable theories. In practice, since one can only perform a finite number of experimental trials, deciding whether an empirical observation is compatible with some class of physical theories has to be carried out via hypothesis testing. In this paper, we show that the prediction-based-ratio method—initially developed for performing a hypothesis test of local-hidden-variable theories—can equally well be applied to test many other classes of physical theories, such as those constrained only by the nonsignaling principle, and those constrained to produce any of the outer approximations to the quantum set of correlations due to Navascués, Pironio, and Acín. We numerically simulate Bell tests using hypothetical nonlocal sources of correlations to illustrate the applicability of the method in both the independent and identically distributed (i.i.d.) scenario and the non-i.i.d. scenario. As a further application, we demonstrate how this method allows us to unveil an apparent violation of the nonsignaling conditions in certain experimental data collected in a Bell test. This, in turn, highlights the importance of the randomization of measurement settings, as well as of a consistency check of the nonsignaling conditions, in a Bell test. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
Open Access Feature Paper Article Bayesian Recurrent Neural Network Models for Forecasting and Quantifying Uncertainty in Spatial-Temporal Data
Entropy 2019, 21(2), 184; https://doi.org/10.3390/e21020184
Received: 29 December 2018 / Revised: 3 February 2019 / Accepted: 12 February 2019 / Published: 15 February 2019
Viewed by 127 | PDF Full-text (1329 KB) | HTML Full-text | XML Full-text
Abstract
Recurrent neural networks (RNNs) are nonlinear dynamical models commonly used in the machine learning and dynamical systems literature to represent complex dynamical or sequential relationships between variables. Recently, as deep learning models have become more common, RNNs have been used to forecast increasingly complicated systems. Dynamical spatio-temporal processes represent a class of complex systems that can potentially benefit from these types of models. Although the RNN literature is expansive and highly developed, uncertainty quantification is often ignored. Even when considered, the uncertainty is generally quantified without the use of a rigorous framework, such as a fully Bayesian setting. Here we attempt to quantify uncertainty in a more formal framework while maintaining the forecast accuracy that makes these models appealing, by presenting a Bayesian RNN model for nonlinear spatio-temporal forecasting. Additionally, we make simple modifications to the basic RNN to help accommodate the unique nature of nonlinear spatio-temporal data. The proposed model is applied to a Lorenz simulation and two real-world nonlinear spatio-temporal forecasting applications. Full article
(This article belongs to the Special Issue Spatial Information Theory)
Open Access Article Performance Evaluation of Fixed Sample Entropy in Myographic Signals for Inspiratory Muscle Activity Estimation
Entropy 2019, 21(2), 183; https://doi.org/10.3390/e21020183
Received: 15 January 2019 / Revised: 6 February 2019 / Accepted: 11 February 2019 / Published: 15 February 2019
Viewed by 99 | PDF Full-text (1816 KB) | HTML Full-text | XML Full-text
Abstract
Fixed sample entropy (fSampEn) has been successfully applied to myographic signals for inspiratory muscle activity estimation, attenuating interference from cardiac activity. However, several values have been suggested for fSampEn parameters depending on the application, and there is no consensus standard for optimum values. This study aimed to perform a thorough evaluation of the performance of the most relevant fSampEn parameters in myographic respiratory signals, and to propose, for the first time, a set of optimal general fSampEn parameters for a proper estimation of inspiratory muscle activity. Different combinations of fSampEn parameters were used to calculate fSampEn in both non-invasive and the gold standard invasive myographic respiratory signals. All signals were recorded in a heterogeneous population of healthy subjects and chronic obstructive pulmonary disease patients during loaded breathing, thus allowing the performance of fSampEn to be evaluated for a variety of inspiratory muscle activation levels. The performance of fSampEn was assessed by means of the cross-covariance of fSampEn time-series and both mouth and transdiaphragmatic pressures generated by inspiratory muscles. A set of optimal general fSampEn parameters was proposed, allowing fSampEn of different subjects to be compared and contributing to improving the assessment of inspiratory muscle activity in health and disease. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
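As a rough illustration of the quantity under study, the sketch below implements a fixed-tolerance sample entropy: unlike standard SampEn, the tolerance r is an absolute amplitude rather than a fraction of each window's standard deviation, which is what lets fSampEn track the muscle activity level. The parameter values here (m = 1, r = 0.3, window and step sizes) are illustrative assumptions, not the optimal values the paper proposes.

```python
import numpy as np

def fsampen(x, m=1, r=0.3):
    """Fixed sample entropy: like SampEn, but the tolerance r is an
    absolute amplitude (the signal is NOT normalized by its standard
    deviation), so the result tracks the signal's activity level."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(mm):
        # Number of template pairs of length mm within Chebyshev distance r.
        count = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if np.max(np.abs(x[i:i + mm] - x[j:j + mm])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return np.inf  # no matching templates: entropy unbounded
    return -np.log(a / b)

def fsampen_series(signal, win=200, step=100, m=1, r=0.3):
    """fSampEn over sliding windows, giving an activity envelope."""
    return np.array([fsampen(signal[s:s + win], m, r)
                     for s in range(0, len(signal) - win + 1, step)])
```

The resulting fSampEn time series is what the paper cross-correlates with mouth and transdiaphragmatic pressures to score each parameter combination.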
Open Access Article An Information Theory-Based Approach to Assessing Spatial Patterns in Complex Systems
Entropy 2019, 21(2), 182; https://doi.org/10.3390/e21020182
Received: 7 February 2019 / Accepted: 14 February 2019 / Published: 15 February 2019
Viewed by 106 | PDF Full-text (4823 KB) | HTML Full-text | XML Full-text
Abstract
Given the intensity and frequency of environmental change, the linked and cross-scale nature of social-ecological systems, and the proliferation of big data, methods that can help synthesize complex system behavior over a geographical area are of great value. Fisher information evaluates order in data and has been established as a robust and effective tool for capturing changes in system dynamics, including the detection of regimes and regime shifts. The methods developed to compute Fisher information can accommodate multivariate data of various types and require no a priori decisions about system drivers, making it a unique and powerful tool. However, the approach has primarily been used to evaluate temporal patterns. In its sole application to spatial data, Fisher information successfully detected regimes in terrestrial and aquatic systems over transects. Although the selection of adjacently positioned sampling stations provided a natural means of ordering the data, such an approach limits the types of questions that can be answered in a spatial context. Here, we expand the approach to develop a method for more fully capturing spatial dynamics. The results reflect changes in the index that correspond with geographical patterns and demonstrate the utility of the method in uncovering hidden spatial trends in complex systems. Full article
Open Access Review Chemical Kinetics Roots and Methods to Obtain the Probability Distribution Function Evolution of Reactants and Products in Chemical Networks Governed by a Master Equation
Entropy 2019, 21(2), 181; https://doi.org/10.3390/e21020181
Received: 24 January 2019 / Accepted: 11 February 2019 / Published: 14 February 2019
Viewed by 154 | PDF Full-text (618 KB)
Abstract
In this paper, we first review the physical roots of chemical reaction networks as Markov processes in a multidimensional vector space. We then study chemical reactions from a microscopic point of view to obtain expressions for the propensities of the different reactions that can occur in the network. These propensities at a given time depend only on the system state at that time, not on the state at earlier times, confirming that we are dealing with Markov processes. The Chemical Master Equation (CME) is then deduced for an arbitrary chemical network from a probability balance and expressed in terms of the reaction propensities; this CME governs the dynamics of the chemical system. Because this equation is difficult to solve, two methods are studied. The first is the probability generating function method, or z-transform, which yields the evolution of the factorial moments of the system with time in a straightforward way or, after some manipulation, the evolution of the polynomial moments. The second is the expansion of the CME in terms of an order parameter (the system volume). In this case, we first expand the CME using the propensities obtained previously, splitting the molecular concentration into a deterministic part and a random part; an expression in terms of multinomial coefficients is obtained for the evolution of the probability of the random part. We then study how to reconstruct the probability distribution from the moments using the maximum entropy principle. Finally, the previous methods are applied to simple chemical networks and their consistency is studied. Full article
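The propensity formulation reviewed above is the same one that underlies Gillespie's stochastic simulation algorithm, which samples exact trajectories of the master equation rather than solving it. A minimal sketch, with a hypothetical two-reaction network and invented rate constants:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie(x0, stoich, propensities, t_max):
    """Sample one exact trajectory of the chemical master equation
    (Gillespie's stochastic simulation algorithm).
    x0: initial molecule counts; stoich: state-change vectors;
    propensities: functions state -> reaction propensity."""
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = np.array([f(x) for f in propensities])
        a0 = a.sum()
        if a0 == 0:                               # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)            # waiting time to next event
        mu = rng.choice(len(propensities), p=a / a0)  # which reaction fires
        x += stoich[mu]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Hypothetical network (invented rates): A + B -> C, and C -> A + B.
k1, k2 = 0.01, 0.1
stoich = [np.array([-1, -1, +1]), np.array([+1, +1, -1])]
propensities = [lambda x: k1 * x[0] * x[1],  # proportional to #A-#B pairs
                lambda x: k2 * x[2]]
times, states = gillespie([50, 40, 0], stoich, propensities, t_max=10.0)
```

Averaging many such trajectories approximates the probability distribution that the CME evolves; the state dependence of the propensities is exactly the Markov property the review emphasizes.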
Open Access Article Entropy-Induced Self-Assembly of Colloidal Crystals with High Reflectivity and Narrow Reflection Bandwidth
Entropy 2019, 21(2), 180; https://doi.org/10.3390/e21020180
Received: 7 January 2019 / Revised: 10 February 2019 / Accepted: 13 February 2019 / Published: 14 February 2019
Viewed by 151 | PDF Full-text (3596 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Cracks and defects, which can result in lower reflectivity and a larger full width at half maximum (FWHM), are the major obstacles to obtaining highly ordered structures in colloidal crystals (CCs). High-quality CCs with high reflectivity (more than 90%) and a narrow FWHM of 9.2 nm were successfully fabricated using a fixed proportion of a soft-matter system composed of silica particles (SPs), polyethylene glycol diacrylate (PEGDA), and ethanol. The influences of the refractivity difference, volume fractions, and particle dimension on the FWHM were elucidated. Firstly, we clarified the influences of the planar interface and the bending interface on the self-assembly: CCs were successfully fabricated on the planar interface but gave unfavorable results on the bending interface. Secondly, a hard-sphere system consisting of SPs, PEGDA, and ethanol was established, and the entropy-driven phase transition mechanism of a polydisperse system was expounded. The FWHM and reflectivity of the CCs showed an increasing trend with increasing temperature. Consequently, high-quality CCs were obtained by adjusting temperatures (the ordered structure formed at 90 °C and solidified at 0 °C) based on the surface phase rule of the system. We acquired a profound understanding of the principle and process of self-assembly, which is significant for the preparation and application of CCs such as optical filters. Full article
Open Access Review Non-Equilibrium Liouville and Wigner Equations: Classical Statistical Mechanics and Chemical Reactions for Long Times
Entropy 2019, 21(2), 179; https://doi.org/10.3390/e21020179
Received: 26 November 2018 / Revised: 18 January 2019 / Accepted: 7 February 2019 / Published: 14 February 2019
Viewed by 141 | PDF Full-text (402 KB)
Abstract
We review and improve previous work on non-equilibrium classical and quantum statistical systems, subject to potentials, without ab initio dissipation. We treat classical closed three-dimensional many-particle interacting systems without any “heat bath” (hb), evolving through the Liouville equation for the non-equilibrium classical distribution W_c, with initial states describing thermal equilibrium at large distances but non-equilibrium at finite distances. We use Boltzmann's Gaussian classical equilibrium distribution W_{c,eq} as a weight function to generate orthogonal polynomials (H_n's) in momenta. The moments of W_c implied by the H_n's fulfill a non-equilibrium hierarchy. Under long-term approximations, the lowest moment dominates the evolution towards thermal equilibrium. A non-increasing Lyapunov function characterizes the long-term evolution towards equilibrium. Non-equilibrium chemical reactions involving two and three particles in an hb are studied classically and quantum-mechanically (by using Wigner functions W). Difficulties related to the non-positivity of W are bypassed. Equilibrium Wigner functions W_eq generate orthogonal polynomials, which yield non-equilibrium moments of W and hierarchies. In regimes typical of chemical reactions (short thermal wavelength and long times), non-equilibrium hierarchies yield approximate Smoluchowski-like equations displaying dissipation and quantum effects. The study of three-particle chemical reactions is new. Full article
(This article belongs to the Special Issue 20th Anniversary of Entropy—Review Papers Collection)
Open Access Article Entropic Approach to the Detection of Crucial Events
Entropy 2019, 21(2), 178; https://doi.org/10.3390/e21020178
Received: 22 December 2018 / Revised: 29 January 2019 / Accepted: 12 February 2019 / Published: 14 February 2019
Viewed by 125 | PDF Full-text (491 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we establish a clear distinction between two processes yielding anomalous diffusion and 1/f noise. The first process is called Stationary Fractional Brownian Motion (SFBM) and is characterized by the use of stationary correlation functions. The second process rests on the action of crucial events generating ergodicity breakdown and aging effects. We refer to the latter as Aging Fractional Brownian Motion (AFBM). To settle the confusion between these different forms of Fractional Brownian Motion (FBM), we use an entropic approach properly updated to incorporate recent advances of biology and psychology on cognition. We show that although the joint action of crucial and non-crucial events may have the effect of making the crucial events virtually invisible, the entropic approach allows us to detect their action. The results of this paper lead us to the conclusion that the communication between the heart and the brain is accomplished by AFBM processes. Full article
(This article belongs to the Special Issue Information Dynamics in Brain and Physiological Networks)
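A standard entropic diagnostic in this line of work is Diffusion Entropy Analysis (DEA): an increment series is summed into a diffusion trajectory, and a scaling exponent δ is read off from the growth of the entropy of the displacement distribution, S(l) ≈ A + δ ln l. The sketch below is a generic DEA estimate under that assumption, not the specific procedure of this paper; for ordinary Gaussian diffusion it should recover δ ≈ 0.5, while crucial-event dynamics would shift δ away from that value.

```python
import numpy as np

def diffusion_entropy(increments, window_sizes, bins=40):
    """Shannon entropy S(l) of the diffusion variable built by summing
    the increments; for a scaling pdf, S(l) = A + delta * ln(l)."""
    x = np.cumsum(np.asarray(increments, dtype=float))
    S = []
    for l in window_sizes:
        disp = x[l:] - x[:-l]                      # overlapping windows
        p, edges = np.histogram(disp, bins=bins, density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        S.append(-np.sum(p * np.log(p)) * dx)      # binned entropy estimate
    return np.array(S)

# Ordinary (Gaussian) diffusion: the fitted scaling exponent is near 0.5.
rng = np.random.default_rng(1)
steps = rng.normal(size=200_000)
ls = np.array([16, 32, 64, 128, 256])
S = diffusion_entropy(steps, ls)
delta = np.polyfit(np.log(ls), S, 1)[0]   # fitted scaling exponent
```

The exponent δ, rather than the variance scaling, is what makes the approach sensitive to the renewal statistics of crucial events even when non-crucial events dominate the signal.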
Open Access Article Information Thermodynamics for Time Series of Signal-Response Models
Entropy 2019, 21(2), 177; https://doi.org/10.3390/e21020177
Received: 17 December 2018 / Revised: 27 January 2019 / Accepted: 11 February 2019 / Published: 14 February 2019
Viewed by 135 | PDF Full-text (1098 KB) | HTML Full-text | XML Full-text
Abstract
The entropy production in stochastic dynamical systems is linked to the structure of their causal representation in terms of Bayesian networks. Such a connection was formalized for bipartite (or multipartite) systems with an integral fluctuation theorem in [Phys. Rev. Lett. 111, 180603 (2013)]. Here we introduce information thermodynamics for time series, which are non-bipartite in general, and we show that the link between irreversibility and information can only result from an incomplete causal representation. In particular, we consider a backward transfer entropy lower bound on the conditional time-series irreversibility that is induced by the absence of feedback in signal-response models. We study this relation in a linear signal-response model admitting analytical solutions, and in a nonlinear biological model of receptor-ligand systems, where the time-series irreversibility measures the signaling efficiency. Full article
Open Access Article Feature Extraction of Ship-Radiated Noise Based on Regenerated Phase-Shifted Sinusoid-Assisted EMD, Mutual Information, and Differential Symbolic Entropy
Entropy 2019, 21(2), 176; https://doi.org/10.3390/e21020176
Received: 1 January 2019 / Revised: 2 February 2019 / Accepted: 11 February 2019 / Published: 14 February 2019
Viewed by 152 | PDF Full-text (3621 KB) | HTML Full-text | XML Full-text
Abstract
To improve the recognition accuracy of ship-radiated noise, a feature extraction method based on regenerated phase-shifted sinusoid-assisted empirical mode decomposition (RPSEMD), mutual information (MI), and differential symbolic entropy (DSE) is proposed in this paper. RPSEMD is an improved empirical mode decomposition (EMD) that alleviates the mode-mixing problem of EMD. DSE is a new tool to quantify the complexity of nonlinear time series; it not only has high computational efficiency, but can also measure the nonlinear complexity of short time series. Firstly, the ship-radiated noise is decomposed into a series of intrinsic mode functions (IMFs) by RPSEMD, and the DSE of each IMF is calculated. Then, the MI between each IMF and the original signal is calculated and divided by the sum of all the MIs to obtain each normalized MI (norMI). Finally, each norMI is used as a weight coefficient for the corresponding DSE, yielding the weighted DSE (WDSE). The WDSEs are sent into a support vector machine (SVM) classifier to classify and recognize three types of ship-radiated noise. The experimental results demonstrate that the recognition rate of the proposed method reaches 98.3333%. Consequently, the proposed WDSE method can effectively achieve the classification of ships. Full article
(This article belongs to the Section Signal and Data Analysis)
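The weighting scheme in the abstract above (an entropy per IMF, weighted by that IMF's normalized mutual information with the original signal) can be sketched generically. Caveats: a histogram MI estimator stands in for whatever estimator the authors use, an ordinal-pattern symbolic entropy stands in for DSE (whose exact definition is not given here), and the "IMFs" are faked with fixed sine components rather than produced by RPSEMD.

```python
import numpy as np

def hist_mi(x, y, bins=16):
    """Histogram estimate of the mutual information (in nats) of x and y."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask]))

def symbolic_entropy(x, m=3):
    """Ordinal-pattern (permutation) entropy; a stand-in for DSE here."""
    patterns = {}
    for i in range(len(x) - m + 1):
        key = tuple(np.argsort(x[i:i + m]))
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

def weighted_feature(signal, imfs):
    """Entropy of each mode, weighted by its normalized MI with the signal."""
    mis = np.array([hist_mi(imf, signal) for imf in imfs])
    w = mis / mis.sum()                         # norMI weight coefficients
    ents = np.array([symbolic_entropy(imf) for imf in imfs])
    return np.sum(w * ents)

# Fake "IMFs": fixed sine components plus noise, standing in for RPSEMD output.
t = np.linspace(0.0, 1.0, 2000)
imfs = [np.sin(2 * np.pi * 50 * t),
        np.sin(2 * np.pi * 8 * t),
        0.3 * np.random.default_rng(0).normal(size=t.size)]
signal = sum(imfs)
feat = weighted_feature(signal, imfs)
```

The scalar feature (one per recording, or one per IMF if the sum is left unreduced) is what would be fed to the SVM classifier in the pipeline described above.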
Open Access Review Phase Transition in Frustrated Magnetic Thin Film—Physics at Phase Boundaries
Entropy 2019, 21(2), 175; https://doi.org/10.3390/e21020175
Received: 6 December 2018 / Revised: 3 February 2019 / Accepted: 10 February 2019 / Published: 13 February 2019
Viewed by 157 | PDF Full-text (12898 KB) | HTML Full-text | XML Full-text
Abstract
In this review, we outline some principal theoretical knowledge of the properties of frustrated spin systems and magnetic thin films. The two points we would like to emphasize are: (i) the physics in low dimensions, where exact solutions can be obtained; and (ii) the physics at phase boundaries, where interesting phenomena can occur due to the competing interactions of the two phases around the boundary. This competition causes frustration. We concentrate our attention on magnetic thin films and on phenomena occurring near the boundary of two phases of different symmetries. Two-dimensional (2D) systems are in fact the limiting case of thin films with a monolayer, so we treat this case at the beginning. We begin by defining frustration and giving examples of frustrated 2D Ising systems that we can solve exactly by transforming them into vertex models. We show that these simple systems already contain most of the striking features of frustrated systems, such as the high degeneracy of the ground state (GS), the many phases in the GS phase diagram in the space of interaction parameters, the reentrance occurring near the boundaries of these phases, the disorder lines in the paramagnetic phase, and the partial disorder coexisting with order at equilibrium. Thin films are then presented in their different aspects: surface elementary excitations (surface spin waves), surface phase transitions, and criticality. Several examples are shown and discussed. New results on skyrmions in thin films and superlattices are also displayed. Through the examples presented in this review, we show that frustration, when combined with surface effects in low dimensions, gives rise to striking phenomena observed in particular near phase boundaries. Full article
Open Access Article Entropy Generation Rate Minimization for Methanol Synthesis via a CO2 Hydrogenation Reactor
Entropy 2019, 21(2), 174; https://doi.org/10.3390/e21020174
Received: 12 December 2018 / Revised: 27 January 2019 / Accepted: 4 February 2019 / Published: 13 February 2019
Viewed by 147 | PDF Full-text (3649 KB) | HTML Full-text | XML Full-text
Abstract
The methanol synthesis via CO2 hydrogenation (MSCH) reaction is a useful CO2 utilization strategy, and this synthesis path has been applied commercially for many years. In this work, the performance of an MSCH reactor is optimized with the minimum entropy generation rate (EGR) as the objective function, using finite-time thermodynamics and optimal control theory. The exterior wall temperature (EWR) is taken as the control variable, and the fixed methanol yield and the conservation equations are taken as the constraints of the optimization problem. Compared with a reference reactor with a constant EWR, the total EGR of the optimal reactor decreases by 20.5%, and the EGR caused by heat transfer decreases by 68.8%. In the optimal reactor, the total EGR is mainly distributed within the first 30% of the reactor length, and the EGR caused by the chemical reaction accounts for more than 84% of the total. The selectivity of CH3OH can be enhanced by increasing the inlet molar flow rate of CO, and the CO2 conversion rate can be enhanced by removing H2O from the reaction system. The results obtained herein are of use for the optimal design of practical tubular MSCH reactors. Full article
(This article belongs to the Special Issue Entropy Generation Minimization II)
Open Access Article The Relevance of Foreshocks in Earthquake Triggering: A Statistical Study
Entropy 2019, 21(2), 173; https://doi.org/10.3390/e21020173
Received: 25 October 2018 / Revised: 30 January 2019 / Accepted: 1 February 2019 / Published: 13 February 2019
Viewed by 131 | PDF Full-text (393 KB) | HTML Full-text | XML Full-text
Abstract
An increase of seismic activity is often observed before large earthquakes. The events responsible for this increase are usually named foreshocks, and their occurrence probably represents the most reliable precursory pattern. Many statistical features of foreshocks can be interpreted in terms of the standard mainshock-to-aftershock triggering process and are recovered in the Epidemic Type Aftershock Sequence (ETAS) model. Here we present a statistical study of instrumental seismic catalogs from four different geographic regions. We focus on some common features of foreshocks in the four catalogs which cannot be reproduced by the ETAS model. In particular, we find in instrumental catalogs a significantly larger number of foreshocks than predicted by the ETAS model. We show that this foreshock excess cannot be attributed to catalog incompleteness. We therefore propose a generalized formulation of the ETAS model, the ETAFS model, which explicitly includes foreshock occurrence. Statistical features of aftershocks and foreshocks in the ETAFS model are in very good agreement with instrumental results. Full article
Open AccessArticle Thermodynamics of f(R) Gravity with Disformal Transformation
Entropy 2019, 21(2), 172; https://doi.org/10.3390/e21020172
Received: 19 December 2018 / Revised: 1 February 2019 / Accepted: 11 February 2019 / Published: 13 February 2019
Viewed by 146 | PDF Full-text (303 KB) | HTML Full-text | XML Full-text
Abstract
We study thermodynamics in f(R) gravity with the disformal transformation. The transformation applied to the matter Lagrangian has the form γ_{μν} = A(ϕ, X) g_{μν} + B(ϕ, X) ∂_μϕ ∂_νϕ, with the assumption of the Minkowski matter metric γ_{μν} = η_{μν}, where ϕ is the disformal scalar and X is the corresponding kinetic term of ϕ. We verify the generalized first and second laws of thermodynamics in this disformal type of f(R) gravity in the Friedmann-Lemaître-Robertson-Walker (FLRW) universe. In addition, we show that the Hubble parameter contains the disformally induced terms, which define the effectively varying equations of state for matter. Full article
(This article belongs to the Special Issue Modified Gravity: From Black Holes Entropy to Current Cosmology II)
Open AccessArticle Bell Inequalities with One Bit of Communication
Entropy 2019, 21(2), 171; https://doi.org/10.3390/e21020171
Received: 13 December 2018 / Revised: 28 January 2019 / Accepted: 6 February 2019 / Published: 13 February 2019
Viewed by 131 | PDF Full-text (357 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We study Bell scenarios with binary outcomes supplemented by one bit of classical communication. We develop a method to find facet inequalities for such scenarios even when direct facet enumeration is not possible, or at least difficult. Using this method, we partially solve the scenario in which Alice and Bob each choose between three inputs, finding a total of 668 inequivalent facet inequalities (with respect to relabelings of inputs and outputs). We also show that some of these inequalities are constructed from facet inequalities found in scenarios without communication, that is, the well-known Bell inequalities. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
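The no-communication baseline that such facet studies build on can be made concrete with a small illustrative computation (ours, not taken from the paper): the local bound of the familiar CHSH facet inequality, obtained by brute-force enumeration of Alice's and Bob's deterministic strategies.

```python
from itertools import product

def chsh_local_bound():
    """Local (deterministic, no-communication) bound of the CHSH
    expression E(0,0) + E(0,1) + E(1,0) - E(1,1), found by
    enumerating all deterministic +/-1 output assignments."""
    best = float("-inf")
    for a in product([-1, 1], repeat=2):      # Alice's output for input x
        for b in product([-1, 1], repeat=2):  # Bob's output for input y
            val = a[0]*b[0] + a[0]*b[1] + a[1]*b[0] - a[1]*b[1]
            best = max(best, val)
    return best
```

With one bit of communication this particular inequality becomes trivial (the algebraic maximum 4 is reachable, e.g. when Alice sends her input to Bob), which is precisely why new facet inequalities are needed for the communication-assisted polytope.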
Open AccessArticle The Optimized Multi-Scale Permutation Entropy and Its Application in Compound Fault Diagnosis of Rotating Machinery
Entropy 2019, 21(2), 170; https://doi.org/10.3390/e21020170
Received: 20 January 2019 / Revised: 2 February 2019 / Accepted: 3 February 2019 / Published: 12 February 2019
Viewed by 166 | PDF Full-text (9764 KB) | HTML Full-text | XML Full-text
Abstract
Multi-scale permutation entropy (MPE) is a statistical indicator for detecting nonlinear dynamic changes in time series, with the merits of high calculation efficiency, good robustness, and independence from prior knowledge. However, the performance of MPE depends on the selection of the embedding dimension and the time delay. To automate this parameter selection, a novel parameter optimization strategy for MPE is proposed, namely optimized multi-scale permutation entropy (OMPE). In the OMPE method, an improved Cao method is proposed to adaptively select the embedding dimension, while the time delay is determined based on mutual information. To verify the effectiveness of the OMPE method, a simulated signal and two experimental signals are used for validation. The results demonstrate that the proposed OMPE method has better feature extraction ability than existing MPE methods. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
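As background, a minimal sketch of plain MPE (assumptions: the standard ordinal-pattern definition of permutation entropy and mean coarse-graining for the multi-scale step; function names are ours, and OMPE's adaptive choices of the embedding dimension via the improved Cao method and of the delay via mutual information are not implemented here — m and tau are fixed inputs):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D series x with
    embedding dimension m and time delay tau (range [0, 1])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau          # number of embedded windows
    patterns = {}
    for i in range(n):
        window = x[i:i + m * tau:tau]
        key = tuple(np.argsort(window))  # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return -np.sum(p * np.log2(p)) / np.log2(factorial(m))

def multiscale_pe(x, scales, m=3, tau=1):
    """Coarse-grain x by non-overlapping means at each scale, then
    compute the permutation entropy of each coarse-grained series."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s], dtype=float).reshape(n, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m, tau))
    return out
```

A monotone series yields entropy 0 (a single ordinal pattern), while white noise approaches 1, which is the behavior any MPE variant must preserve.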
Open AccessArticle Microstructure and Mechanical Properties of Precipitate Strengthened High Entropy Alloy Al10Co25Cr8Fe15Ni36Ti6 with Additions of Hafnium and Molybdenum
Entropy 2019, 21(2), 169; https://doi.org/10.3390/e21020169
Received: 28 January 2019 / Revised: 8 February 2019 / Accepted: 8 February 2019 / Published: 12 February 2019
Viewed by 191 | PDF Full-text (5223 KB)
Abstract
High entropy or compositionally complex alloys provide opportunities for optimization towards new high-temperature materials. Improvements in the equiatomic alloy Al17Co17Cr17Cu17Fe17Ni17 (at.%) led to the base alloy of this work, with the chemical composition Al10Co25Cr8Fe15Ni36Ti6 (at.%). Characterization of the beneficial particle-strengthened microstructure by scanning electron microscopy (SEM) and the observation of good mechanical properties at elevated temperatures gave rise to the need for further optimization steps. For this purpose, the refractory metals hafnium and molybdenum were added in small amounts (0.5 and 1.0 at.%, respectively) because of their well-known positive effects on the mechanical properties of Ni-based superalloys. By correlating microstructural examinations using SEM with tensile tests from room temperature up to 900 °C, conclusions could be drawn for further optimization steps. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
Open AccessArticle Dense U-net Based on Patch-Based Learning for Retinal Vessel Segmentation
Entropy 2019, 21(2), 168; https://doi.org/10.3390/e21020168
Received: 21 November 2018 / Revised: 25 January 2019 / Accepted: 4 February 2019 / Published: 12 February 2019
Viewed by 181 | PDF Full-text (4216 KB) | HTML Full-text | XML Full-text
Abstract
Various retinal vessel segmentation methods based on convolutional neural networks have been proposed recently, and Dense U-net, as a new semantic segmentation network, has been successfully applied to scene segmentation. Retinal vessels are tiny, and their features can be learned effectively by a patch-based learning strategy. In this study, we propose a new retinal vessel segmentation framework based on Dense U-net and the patch-based learning strategy. During training, training patches are obtained by a random extraction strategy, Dense U-net is adopted as the training network, and random transformation is used for data augmentation. During testing, test images are divided into image patches, the patches are predicted by the trained model, and the segmentation result is reconstructed by an overlapping-patch sequential reconstruction strategy. The proposed method was applied to the public datasets DRIVE and STARE for retinal vessel segmentation. Sensitivity (Se), specificity (Sp), accuracy (Acc), and area under the curve (AUC) were adopted as evaluation metrics to verify its effectiveness. Compared with state-of-the-art methods, including unsupervised, supervised, and convolutional neural network (CNN) methods, the results demonstrate that our approach is competitive on these evaluation metrics. The method can obtain segmentation results better than those of specialists and has clinical application value. Full article
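The patch pipeline described in the abstract (extraction, per-patch prediction, overlapping-patch reconstruction) can be sketched in NumPy. This is an illustrative version with hypothetical helper names, not the authors' code; in particular, averaging overlapping predictions is our assumption for the reconstruction rule:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a size x size window over img with the given stride;
    also return each patch's top-left coordinate for reconstruction."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
            coords.append((y, x))
    return np.stack(patches), coords

def reconstruct(patches, coords, shape, size):
    """Average overlapping patch predictions back into a full-size map."""
    acc = np.zeros(shape, dtype=float)
    cnt = np.zeros(shape, dtype=float)
    for p, (y, x) in zip(patches, coords):
        acc[y:y + size, x:x + size] += p
        cnt[y:y + size, x:x + size] += 1
    return acc / np.maximum(cnt, 1)
```

A useful sanity check is the round trip: feeding the extracted patches straight back into `reconstruct` (i.e., an identity "model") must recover the original image wherever the sliding window covers it.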
Open AccessArticle Thermodynamics in the Universe Described by the Emergence of Space and the Energy Balance Relation
Entropy 2019, 21(2), 167; https://doi.org/10.3390/e21020167
Received: 9 January 2019 / Revised: 31 January 2019 / Accepted: 7 February 2019 / Published: 11 February 2019
Viewed by 180 | PDF Full-text (782 KB) | HTML Full-text | XML Full-text
Abstract
It has previously been shown that the evolution of the universe can be described in terms of the emergence of space and the energy balance relation. Here we investigate the thermodynamic properties of the universe described by such a model. We show that the first law of thermodynamics and the generalized second law of thermodynamics (GSLT) are both satisfied, and that the weak energy condition is also fulfilled for two typical examples. Finally, we examine the physical consistency of the present model. The results show that there exists a good thermodynamic description for such a universe. Full article
(This article belongs to the Special Issue Modified Gravity: From Black Holes Entropy to Current Cosmology II)
Open AccessConcept Paper Learning Entropy as a Learning-Based Information Concept
Entropy 2019, 21(2), 166; https://doi.org/10.3390/e21020166
Received: 30 December 2018 / Revised: 28 January 2019 / Accepted: 5 February 2019 / Published: 11 February 2019
Viewed by 233 | PDF Full-text (1662 KB)
Abstract
Recently, a novel concept of a non-probabilistic novelty detection measure, based on a multi-scale quantification of unusually large learning efforts of machine learning systems, was introduced as learning entropy (LE). The key finding with LE is that the learning effort of learning systems is quantifiable as a novelty measure for each individually observed data point of otherwise complex dynamic systems, while model accuracy is not a necessary requirement for novelty detection. This brief paper extends the explanation of LE from an informatics approach towards a cognitive (learning-based) information measure, emphasizing the distinction from Shannon's concept of probabilistic information. Fundamental derivations of learning entropy and of its practical estimations are recalled and further extended. The potentials, limitations, and thus the current challenges of LE are discussed. Full article
Open AccessArticle Quality-Oriented Perceptual HEVC Based on the Spatiotemporal Saliency Detection Model
Entropy 2019, 21(2), 165; https://doi.org/10.3390/e21020165
Received: 20 January 2019 / Revised: 4 February 2019 / Accepted: 10 February 2019 / Published: 11 February 2019
Viewed by 204 | PDF Full-text (1186 KB) | HTML Full-text | XML Full-text
Abstract
Perceptual video coding (PVC) can provide a lower bitrate at the same visual quality compared with traditional H.265/high efficiency video coding (HEVC). In this work, a novel H.265/HEVC-compliant PVC framework is proposed based on a video saliency model. Firstly, an effective and efficient spatiotemporal saliency model is used to generate a video saliency map. Secondly, a perceptual coding scheme is developed based on the saliency map, in which a saliency-based quantization control algorithm is proposed to reduce the bitrate. Finally, simulation results demonstrate that the proposed perceptual coding scheme is superior in both objective and subjective tests, achieving up to a 9.46% bitrate reduction with negligible subjective and objective quality loss. The advantage of the proposed method is its high quality, well suited to high-definition video applications. Full article
(This article belongs to the Special Issue Application of Information Theory in Biomedical Data Mining)
Open AccessArticle Effect of Ti/Ni Coating of Diamond Particles on Microstructure and Properties of High-Entropy Alloy/Diamond Composites
Entropy 2019, 21(2), 164; https://doi.org/10.3390/e21020164
Received: 31 December 2018 / Revised: 27 January 2019 / Accepted: 5 February 2019 / Published: 10 February 2019
Viewed by 229 | PDF Full-text (3822 KB)
Abstract
In this study, an effective way of applying a Ti/Ni coating to the surface of diamond single-crystal particles by magnetron sputtering was proposed, and novel high-entropy alloy (HEA)/diamond composites were prepared by spark plasma sintering (SPS). The results show that the interfacial bonding state of the coated-diamond composite is obviously better than that of the uncoated-diamond composite. The corresponding mechanical properties, such as hardness, density, transverse fracture strength, and friction properties, of the coated-diamond composite were also found to be better than those of the uncoated-diamond composite. The effects of interface structure and defects on the mechanical properties of HEA/diamond composites were investigated, and research directions for further improving the structure and properties of HEA/diamond composites were proposed. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
Open AccessArticle A Novel Belief Entropy for Measuring Uncertainty in Dempster-Shafer Evidence Theory Framework Based on Plausibility Transformation and Weighted Hartley Entropy
Entropy 2019, 21(2), 163; https://doi.org/10.3390/e21020163
Received: 22 January 2019 / Revised: 5 February 2019 / Accepted: 7 February 2019 / Published: 10 February 2019
Viewed by 214 | PDF Full-text (866 KB)
Abstract
Dempster-Shafer evidence theory (DST) has shown great advantages in tackling uncertainty in a wide variety of applications. However, how to quantify the information-based uncertainty of a basic probability assignment (BPA) with belief entropy in the DST framework is still an open issue. The main work of this study is to define a new belief entropy for measuring the uncertainty of a BPA. The proposed belief entropy has two components. The first component is based on the summation of the probability mass function (PMF) of single events contained in each BPA, obtained using the plausibility transformation. The second component is the same as the weighted Hartley entropy. The two components effectively measure, respectively, the discord uncertainty and the non-specificity uncertainty found in the DST framework. The proposed belief entropy is proved to satisfy the majority of the desired properties for an uncertainty measure in the DST framework. In addition, when the BPA is a probability distribution, the proposed method degenerates to Shannon entropy. The feasibility and superiority of the new belief entropy are verified by the results of numerical experiments. Full article
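A sketch of one such two-component measure (our reading of the abstract, not the paper's exact formula): the Shannon entropy of the plausibility-transformed PMF as the discord part, plus the weighted Hartley entropy, the sum of m(A) log2 |A|, as the non-specificity part. This combination does degenerate to Shannon entropy when all mass sits on singletons, matching the property claimed above.

```python
from math import log2

def belief_entropy(bpa, frame):
    """Two-part belief entropy sketch for a BPA given as a dict
    {frozenset: mass} over the frame of discernment `frame`.
    Part 1 (discord): Shannon entropy of the plausibility transform.
    Part 2 (non-specificity): weighted Hartley entropy.
    The paper's exact combination may differ."""
    # Plausibility of each singleton: Pl(x) = sum of m(A) for all A containing x
    pl = {x: sum(m for A, m in bpa.items() if x in A) for x in frame}
    z = sum(pl.values())
    pmf = {x: v / z for x, v in pl.items()}          # plausibility transformation
    discord = -sum(p * log2(p) for p in pmf.values() if p > 0)
    nonspec = sum(m * log2(len(A)) for A, m in bpa.items() if m > 0)
    return discord + nonspec
```

For a Bayesian BPA (all singletons) the non-specificity term vanishes and the result equals the Shannon entropy; for the vacuous BPA m(Θ) = 1 both components are maximal.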
Open AccessArticle A Robust Adaptive Filter for a Complex Hammerstein System
Entropy 2019, 21(2), 162; https://doi.org/10.3390/e21020162
Received: 22 December 2018 / Revised: 2 February 2019 / Accepted: 6 February 2019 / Published: 9 February 2019
Viewed by 190 | PDF Full-text (887 KB)
Abstract
The Hammerstein adaptive filter using the maximum correntropy criterion (MCC) has been shown to be more robust to outliers than those using the traditional mean square error (MSE) criterion. As there is no report on robust Hammerstein adaptive filters in the complex domain, in this paper we extend the robust Hammerstein adaptive filter under MCC to the complex domain and propose the Hammerstein maximum complex correntropy criterion (HMCCC) algorithm. The new Hammerstein adaptive filter can thus directly handle complex-valued data. Additionally, we analyze the stability and steady-state mean square performance of HMCCC. Simulations illustrate that the proposed HMCCC algorithm is convergent in an impulsive noise environment, and achieves higher accuracy and a faster convergence speed than the Hammerstein complex least mean square (HCLMS) algorithm. Full article
(This article belongs to the Special Issue Information Theory in Complex Systems)
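The core MCC mechanism in the complex domain can be illustrated with a simplified linear (non-Hammerstein) filter: each LMS-style update is scaled by a Gaussian kernel of the error, so an impulsive outlier produces a near-zero kernel weight and barely perturbs the filter. This is a hedged sketch of the general idea, not the HMCCC algorithm itself:

```python
import numpy as np

def mcc_lms(x, d, order=4, mu=0.05, sigma=1.0):
    """Maximum-correntropy LMS sketch for complex-valued data.
    Model: y[n] = w^H u[n]; the usual complex LMS step is scaled by
    g = exp(-|e|^2 / (2 sigma^2)), which suppresses impulsive outliers."""
    w = np.zeros(order, dtype=complex)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                      # regressor vector
        e = d[n] - np.vdot(w, u)                      # a-priori error (vdot conjugates w)
        g = np.exp(-abs(e) ** 2 / (2 * sigma ** 2))   # Gaussian kernel weight
        w = w + mu * g * np.conj(e) * u               # kernel-weighted update
    return w
```

When |e| is small, g is close to 1 and the step reduces to ordinary complex LMS; when an impulse makes |e| large, g collapses towards 0, which is the source of the robustness discussed above.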
Open AccessArticle Incentive Contract Design for the Water-Rail-Road Intermodal Transportation with Travel Time Uncertainty: A Stackelberg Game Approach
Entropy 2019, 21(2), 161; https://doi.org/10.3390/e21020161
Received: 26 December 2018 / Revised: 6 February 2019 / Accepted: 7 February 2019 / Published: 9 February 2019
Viewed by 226 | PDF Full-text (492 KB)
Abstract
In the management of intermodal transportation, the incentive contract design problem has a significant impact on the benefit of a multimodal transport operator (MTO). In this paper, we analyze a typical water-rail-road (WRR) intermodal transportation system composed of three serial transportation stages: water, rail, and road. In particular, the entire transportation process is planned, organized, and funded by an MTO that outsources the transportation task at each stage to independent carriers (subcontractors). Due to the variability of transportation conditions, the travel time of each transportation stage, which depends on the respective carrier's effort level, is unknown (asymmetric information) and is characterized as an uncertain variable via experts' estimations. Considering the decentralized decision-making process, we interpret the incentive contract design problem for WRR intermodal transportation as a Stackelberg game in which the risk-neutral MTO serves as the leader and the risk-averse carriers serve as the followers. Within the framework of uncertainty theory, we formulate an uncertain bi-level programming model for the incentive contract design problem under expectation and entropy decision criteria. Subsequently, we provide analytical results for the proposed model and analyze the optimal time-based incentive contracts by developing a hybrid solution method that combines a decomposition approach and an iterative algorithm. Finally, we give a simulation example to investigate the impact of asymmetric information on the optimal time-based incentive contracts and to identify the value of information for WRR intermodal transportation. Full article
Open AccessArticle A Comparison Study on Criteria to Select the Most Adequate Weighting Matrix
Entropy 2019, 21(2), 160; https://doi.org/10.3390/e21020160
Received: 13 January 2019 / Revised: 6 February 2019 / Accepted: 6 February 2019 / Published: 8 February 2019
Viewed by 323 | PDF Full-text (445 KB)
Abstract
The practice of spatial econometrics revolves around a weighting matrix, which is often supplied by the user based on prior knowledge. This is the so-called W issue. The aprioristic approach is probably not the best solution, although at present there are few alternatives for the user. Our contribution focuses on the problem of selecting a W matrix from among a finite set of matrices, all of them considered appropriate for the case at hand. We develop a new and simple method based on the entropy corresponding to the distribution of probability estimated for the data. Other alternatives, which are common in current applied work, are also reviewed. The paper includes a large Monte Carlo study to calibrate the effectiveness of our approach compared to others. A well-known case study is also included. Full article
Open AccessEditorial Special Issue “Quantum Foundations: 90 Years of Uncertainty”
Entropy 2019, 21(2), 159; https://doi.org/10.3390/e21020159
Received: 2 February 2019 / Accepted: 6 February 2019 / Published: 8 February 2019
Viewed by 232 | PDF Full-text (176 KB) | HTML Full-text | XML Full-text
Abstract
The VII Conference on Quantum Foundations: 90 years of uncertainty (https://sites [...] Full article
(This article belongs to the Special Issue Quantum Foundations: 90 Years of Uncertainty)
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland