Table of Contents

Entropy, Volume 16, Issue 5 (May 2014), Pages 2384-2903

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Displaying articles 1-27

Research


Open Access Article Measuring the Complexity of Self-Organizing Traffic Lights
Entropy 2014, 16(5), 2384-2407; doi:10.3390/e16052384
Received: 1 February 2014 / Revised: 15 April 2014 / Accepted: 17 April 2014 / Published: 25 April 2014
Cited by 10 | PDF Full-text (549 KB) | HTML Full-text | XML Full-text
Abstract
We apply measures of complexity, emergence, and self-organization to an urban traffic model in order to compare a traditional traffic-light coordination method with a self-organizing method in two scenarios: cyclic boundaries and non-orientable boundaries. We show that the measures are useful for identifying and characterizing different dynamical phases. It becomes clear that different operation regimes are required for different traffic demands. Thus, not only is traffic a non-stationary problem, requiring controllers to adapt constantly; controllers must also drastically change the complexity of their behavior depending on the demand. Based on our measures, and extending Ashby’s law of requisite variety, we can say that the self-organizing method achieves an adaptability level comparable to that of a living system. Full article
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
Open Access Article General H-theorem and Entropies that Violate the Second Law
Entropy 2014, 16(5), 2408-2432; doi:10.3390/e16052408
Received: 9 March 2014 / Revised: 15 April 2014 / Accepted: 24 April 2014 / Published: 29 April 2014
Cited by 3 | PDF Full-text (364 KB) | HTML Full-text | XML Full-text
Abstract
The H-theorem states that the entropy production is nonnegative and, therefore, the entropy of a closed system should change monotonically in time. In information processing, the entropy production is positive for random transformation of signals (the information processing lemma). Originally, the H-theorem and the information processing lemma were proved for the classical Boltzmann-Gibbs-Shannon entropy and for the corresponding divergence (the relative entropy). Many new entropies and divergences have been proposed during the last decades, and for all of them the H-theorem is needed. This note proposes a simple and general criterion to check whether the H-theorem is valid for a convex divergence H and demonstrates that some popular divergences obey no H-theorem. We consider systems with n states Ai that obey first-order kinetics (the master equation). A convex function H is a Lyapunov function for all master equations with a given equilibrium if and only if its conditional minima properly describe the equilibria of pair transitions Ai ⇌ Aj. This theorem does not depend on the principle of detailed balance and is valid for general Markov kinetics. Elementary analysis of pair equilibria demonstrates that popular Bregman divergences, such as the Euclidean distance or the Itakura-Saito distance in the space of distributions, cannot be universal Lyapunov functions for first-order kinetics and can increase in Markov processes. Therefore, they violate the second law and the information processing lemma. In particular, for these measures of information (divergences), random manipulation of data may add information to the data. The main results are extended to nonlinear generalized mass action law kinetic equations. Full article
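The Lyapunov property the abstract discusses can be checked numerically for the classical relative entropy, which does obey an H-theorem: along any Markov chain with a given equilibrium, the divergence from that equilibrium never increases. The sketch below is not from the paper; the three-state chain and its equilibrium are arbitrary illustrative choices.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q); terms with p_i = 0 contribute 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# A row-stochastic matrix constructed to have equilibrium pi:
# each row is 0.5*e_i + 0.5*pi, so pi @ P = 0.5*pi + 0.5*pi = pi.
pi = np.array([0.5, 0.3, 0.2])
P = 0.5 * np.eye(3) + 0.5 * np.tile(pi, (3, 1))

# Iterate the discrete-time master equation p_{t+1} = p_t P and
# record the divergence from equilibrium at each step.
p = np.array([1.0, 0.0, 0.0])
divs = []
for _ in range(5):
    divs.append(kl(p, pi))
    p = p @ P

# H-theorem for the classical relative entropy: divs is non-increasing.
assert all(a >= b - 1e-12 for a, b in zip(divs, divs[1:]))
```

The same loop with a Bregman divergence in place of `kl` is exactly the kind of counterexample search the paper's criterion makes unnecessary.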
Open Access Article Optimization of Biomass-Fuelled Combined Cooling, Heating and Power (CCHP) Systems Integrated with Subcritical or Transcritical Organic Rankine Cycles (ORCs)
Entropy 2014, 16(5), 2433-2453; doi:10.3390/e16052433
Received: 28 February 2014 / Revised: 14 April 2014 / Accepted: 25 April 2014 / Published: 30 April 2014
Cited by 8 | PDF Full-text (1512 KB) | HTML Full-text | XML Full-text
Abstract
This work is focused on the thermodynamic optimization of Organic Rankine Cycles (ORCs), coupled with absorption or adsorption cooling units, for combined cooling heating and power (CCHP) generation from biomass combustion. Results were obtained by modelling with the main aim of providing optimization guidelines for the operating conditions of these types of systems, specifically the subcritical or transcritical ORC, when integrated in a CCHP system to supply typical heating and cooling demands in the tertiary sector. The thermodynamic approach was complemented, to avoid its possible limitations, by the technological constraints of the expander, the heat exchangers and the pump of the ORC. The working fluids considered are: n-pentane, n-heptane, octamethyltrisiloxane, toluene and dodecamethylcyclohexasiloxane. In addition, the energy and environmental performance of the different optimal CCHP plants was investigated. The optimal plant from the energy and environmental point of view is the one integrated by a toluene recuperative ORC, although it is limited to a development with a turbine type expander. Also, the trigeneration plant could be developed in an energy and environmental efficient way with an n-pentane recuperative ORC and a volumetric type expander. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
Open Access Article Computational Information Geometry in Statistics: Theory and Practice
Entropy 2014, 16(5), 2454-2471; doi:10.3390/e16052454
Received: 27 March 2014 / Revised: 25 April 2014 / Accepted: 29 April 2014 / Published: 2 May 2014
Cited by 2 | PDF Full-text (735 KB) | HTML Full-text | XML Full-text
Abstract
A broad view of the nature and potential of computational information geometry in statistics is offered. This new area suitably extends the manifold-based approach of classical information geometry to a simplicial setting, in order to obtain an operational universal model space. Additional underlying theory and illustrative real examples are presented. In the infinite-dimensional case, challenges inherent in this ambitious overall agenda are highlighted and promising new methodologies indicated. Full article
(This article belongs to the Special Issue Information Geometry)
Open Access Article F-Geometry and Amari’s α-Geometry on a Statistical Manifold
Entropy 2014, 16(5), 2472-2487; doi:10.3390/e16052472
Received: 13 December 2013 / Revised: 21 April 2014 / Accepted: 25 April 2014 / Published: 6 May 2014
Cited by 6 | PDF Full-text (220 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we introduce a geometry called F-geometry on a statistical manifold S using an embedding F of S into the space RX of random variables. Amari’s α-geometry is a special case of F-geometry. Then, using the embedding F and a positive smooth function G, we introduce the (F,G)-metric and (F,G)-connections, which enable one to consider weighted Fisher information metrics and weighted connections. The necessary and sufficient condition for two (F,G)-connections to be dual with respect to the (F,G)-metric is obtained. We then show that Amari’s 0-connection is the only self-dual F-connection with respect to the Fisher information metric. Invariance properties of the geometric structures are discussed, and it is proved that Amari’s α-connections are the only F-connections invariant under smooth one-to-one transformations of the random variables. Full article
(This article belongs to the Special Issue Information Geometry)
Open Access Article Three Methods for Estimating the Entropy Parameter M Based on a Decreasing Number of Velocity Measurements in a River Cross-Section
Entropy 2014, 16(5), 2512-2529; doi:10.3390/e16052512
Received: 27 February 2014 / Revised: 15 April 2014 / Accepted: 7 May 2014 / Published: 9 May 2014
Cited by 7 | PDF Full-text (471 KB) | HTML Full-text | XML Full-text
Abstract
This paper illustrates the theoretical development and practical application of three new methods for estimating the entropy parameter M used within the framework of the entropy method proposed by Chiu in the 1980s as a valid alternative to the velocity-area method for measuring the discharge in a river. The first method is based on reproducing the cumulative velocity distribution function associated with a flood event and requires measurements over the entire cross-section, whereas, in the second and third methods, the estimate of M is based on reproducing the cross-sectional mean velocity by following two different procedures. Both rely on the entropy parameter M alone and look for the value of M that brings two different M-dependent estimates of the mean velocity as close as possible. From an operational viewpoint, the acquisition of velocity data becomes increasingly simple going from the first to the third approach, which uses only one surface velocity measurement. The proposed procedures are applied in a case study based on the Ponte Nuovo hydrometric station on the Tiber River in central Italy. Full article
(This article belongs to the Special Issue Entropy in Hydrology)
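The abstract does not reproduce the formulas, but in Chiu's entropy method the parameter M is commonly tied to the ratio of cross-sectional mean to maximum velocity by the relation ū/u_max = e^M/(e^M − 1) − 1/M. A minimal sketch of estimating M from that ratio by bisection; the function names and bracketing interval are illustrative choices, not the paper's procedures.

```python
import math

def phi(M):
    """Mean-to-maximum velocity ratio as a function of Chiu's entropy parameter M."""
    return math.exp(M) / (math.exp(M) - 1.0) - 1.0 / M

def estimate_M(ratio, lo=1e-6, hi=50.0, tol=1e-12):
    """Invert phi by bisection (phi is monotonically increasing on (0, inf))."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: generate a synthetic mean/max velocity ratio from a known M,
# then recover M from it.
M_true = 2.0
ratio = phi(M_true)
M_est = estimate_M(ratio)
```

In practice the ratio ū/u_max would come from gauged velocity data rather than a synthetic M, which is where the three methods in the paper differ.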
Open Access Article Measuring Instantaneous and Spectral Information Entropies by Shannon Entropy of Choi-Williams Distribution in the Context of Electroencephalography
Entropy 2014, 16(5), 2530-2548; doi:10.3390/e16052530
Received: 6 December 2013 / Revised: 30 April 2014 / Accepted: 5 May 2014 / Published: 9 May 2014
Cited by 1 | PDF Full-text (1220 KB) | HTML Full-text | XML Full-text
Abstract
The theory of Shannon entropy was applied to the Choi-Williams time-frequency distribution (CWD) of time series in order to extract entropy information in both time and frequency domains. In this way, four novel indexes were defined: (1) partial instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function at each time instant taken independently; (2) partial spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of each frequency value taken independently; (3) complete instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function of the entire CWD; (4) complete spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of the entire CWD. These indexes were tested on synthetic time series with different behavior (periodic, chaotic and random) and on a dataset of electroencephalographic (EEG) signals recorded in different states (eyes-open, eyes-closed, ictal and non-ictal activity). The results show that the values of these indexes tend to decrease, in different proportions, when the behavior of the synthetic signals evolves from chaos or randomness to periodicity. Statistical differences (p-value < 0.0005) were found between values of these measures comparing eyes-open and eyes-closed states and between ictal and non-ictal states in the traditional EEG frequency bands. Finally, this paper demonstrates that the proposed measures can be useful tools to quantify the different periodic, chaotic and random components in EEG signals. Full article
(This article belongs to the Special Issue Advances in Information Theory)
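Read literally, indexes (1) and (3) differ only in which probability mass function normalizes the time-frequency distribution before the entropy is taken. The sketch below illustrates this on a toy nonnegative matrix standing in for the CWD; the exact normalization and smoothing used in the paper may differ.

```python
import numpy as np

def _entropy_terms(p):
    # Elementwise -p*log2(p), treating 0*log2(0) as 0.
    safe = np.where(p > 0, p, 1.0)
    return -p * np.log2(safe)

def partial_instantaneous_entropy(C):
    """Entropy over frequency at each time, normalizing each column on its own."""
    return _entropy_terms(C / C.sum(axis=0, keepdims=True)).sum(axis=0)

def complete_instantaneous_entropy(C):
    """Entropy over frequency at each time, using the pmf of the entire distribution."""
    return _entropy_terms(C / C.sum()).sum(axis=0)

# Toy nonnegative time-frequency matrix (4 frequencies x 3 time instants)
# standing in for the Choi-Williams distribution of a short signal.
C = np.array([[4.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

H_partial = partial_instantaneous_entropy(C)   # one value per time instant
H_complete = complete_instantaneous_entropy(C)
# A column concentrated in one frequency has zero partial entropy;
# a uniform column has log2(4) = 2 bits.
```

The spectral counterparts, indexes (2) and (4), follow by summing over `axis=1` instead.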
Open Access Article Exergy Analysis of Flat Plate Solar Collectors
Entropy 2014, 16(5), 2549-2567; doi:10.3390/e16052549
Received: 19 November 2013 / Revised: 1 April 2014 / Accepted: 5 May 2014 / Published: 9 May 2014
Cited by 4 | PDF Full-text (549 KB) | HTML Full-text | XML Full-text
Abstract
This study proposes the concept of the local heat loss coefficient and examines the calculation method for the average heat loss coefficient and the average absorber plate temperature. It also presents an exergy analysis model of flat plate collectors, considering non-uniformity in temperature distribution along the absorber plate. The computation results agree well with experimental data. The effects of ambient temperature, solar irradiance, fluid inlet temperature, and fluid mass flow rate on useful heat rate, useful exergy rate, and exergy loss rate are examined. An optimal fluid inlet temperature exists for obtaining the maximum useful exergy rate. The calculated optimal fluid inlet temperature is 69 °C, and the maximum useful exergy rate is 101.6 W. Exergy rate distribution is analyzed when ambient temperature, solar irradiance, fluid mass flow rate, and fluid inlet temperature are set to 20 °C, 800 W/m2, 0.05 kg/s, and 50 °C, respectively. The exergy efficiency is 5.96%, and the largest exergy loss is caused by the temperature difference between the absorber plate surface and the sun, accounting for 72.86% of the total exergy rate. Full article
Open Access Article A Relevancy, Hierarchical and Contextual Maximum Entropy Framework for a Data-Driven 3D Scene Generation
Entropy 2014, 16(5), 2568-2591; doi:10.3390/e16052568
Received: 16 January 2014 / Revised: 16 April 2014 / Accepted: 4 May 2014 / Published: 9 May 2014
PDF Full-text (2852 KB) | HTML Full-text | XML Full-text
Abstract
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects’ relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects’ existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations) and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Guided Self-Organization in a Dynamic Embodied System Based on Attractor Selection Mechanism
Entropy 2014, 16(5), 2592-2610; doi:10.3390/e16052592
Received: 10 February 2014 / Revised: 17 April 2014 / Accepted: 22 April 2014 / Published: 13 May 2014
Cited by 7 | PDF Full-text (921 KB) | HTML Full-text | XML Full-text
Abstract
Guided self-organization can be regarded as a paradigm proposed to understand how to guide a self-organizing system towards desirable behaviors, while maintaining its non-deterministic dynamics with emergent features. It is, however, not a trivial problem to guide the self-organizing behavior of physically embodied systems like robots, as the behavioral dynamics result from interactions among the controller, the mechanical dynamics of the body, and the environment. This paper presents a guided self-organization approach for dynamic robots based on a coupling between the system’s mechanical dynamics and an internal control structure known as the attractor selection mechanism. The mechanism enables the robot to gracefully shift between random and deterministic behaviors, represented by a number of attractors, depending on internally generated stochastic perturbation and sensory input. The robot used in this paper is a simulated curved-beam hopping robot: a system with a variety of mechanical dynamics that depend on its actuation frequencies. Despite the simplicity of the approach, it will be shown how the approach regulates the probability that the robot reaches a goal through the interplay among the sensory input, the level of inherent stochastic perturbation, i.e., noise, and the mechanical dynamics. Full article
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
Open Access Article Scale-Invariant Divergences for Density Functions
Entropy 2014, 16(5), 2611-2628; doi:10.3390/e16052611
Received: 6 January 2014 / Revised: 26 April 2014 / Accepted: 28 April 2014 / Published: 13 May 2014
Cited by 1 | PDF Full-text (214 KB) | HTML Full-text | XML Full-text
Abstract
Divergence is a discrepancy measure between two objects, such as functions, vectors, matrices, and so forth. In particular, divergences defined on probability distributions are widely employed in probabilistic forecasting. As a dissimilarity measure, a divergence should satisfy some conditions. In this paper, we consider two: the first is the scale-invariance property, and the second is that the divergence is approximated by the sample mean of a loss function. The first requirement is an important feature for dissimilarity measures, since in general a divergence depends on the system of measurements used to measure the objects; a scale-invariant divergence is transformed in a consistent way when the system of measurements is changed. The second requirement is formalized such that the divergence is expressed by using the so-called composite score. We study the relation between composite scores and scale-invariant divergences, and we propose a new class of divergences, called Hölder divergences, that satisfies both conditions. We present some theoretical properties of Hölder divergence and show that it unifies existing divergences from the viewpoint of scale-invariance. Full article
Open Access Article Using Neighbor Diversity to Detect Fraudsters in Online Auctions
Entropy 2014, 16(5), 2629-2641; doi:10.3390/e16052629
Received: 24 February 2014 / Revised: 6 May 2014 / Accepted: 9 May 2014 / Published: 14 May 2014
Cited by 3 | PDF Full-text (281 KB) | HTML Full-text | XML Full-text
Abstract
Online auctions attract not only legitimate businesses trying to sell their products but also fraudsters wishing to commit fraudulent transactions. Consequently, fraudster detection is crucial to ensure the continued success of online auctions. This paper proposes an approach to detect fraudsters based on the concept of neighbor diversity. The neighbor diversity of an auction account quantifies the diversity of all traders that have transactions with this account. Based on four different features of each trader (i.e., the number of received ratings, the number of cancelled transactions, k-core, and the joined date), four measurements of neighbor diversity are proposed to discern fraudsters from legitimate traders. An experiment is conducted using data gathered from a real world auction website. The results show that, although the use of neighbor diversity on k-core or on the joined date shows little or no improvement in detecting fraudsters, both the neighbor diversity on the number of received ratings and the neighbor diversity on the number of cancelled transactions improve classification accuracy, compared to the state-of-the-art methods that use k-core and center weight. Full article
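The abstract does not specify how neighbor diversity is quantified; one natural reading is the Shannon entropy of a binned neighbor feature, sketched below with hypothetical accounts and an arbitrary bin width (the paper's actual measure may differ).

```python
import math
from collections import Counter

def neighbor_diversity(feature_values, bin_width=10):
    """Shannon entropy (bits) of the binned feature values of an account's neighbors."""
    bins = Counter(v // bin_width for v in feature_values)
    n = len(feature_values)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# Ratings received by the trading partners of two hypothetical accounts:
legit_neighbors = [3, 25, 118, 47, 260, 9, 74, 132]  # a varied neighborhood
fraud_neighbors = [2, 4, 1, 3, 2, 5, 1, 3]           # clustered low-rating accounts

d_legit = neighbor_diversity(legit_neighbors)
d_fraud = neighbor_diversity(fraud_neighbors)
# Accomplice-style neighborhoods concentrate in few bins, giving low diversity.
```

The same function applied to cancelled-transaction counts, k-core values, or joined dates would yield the other three diversity measurements the abstract describes.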
Open Access Article The Impact of the Prior Density on a Minimum Relative Entropy Density: A Case Study with SPX Option Data
Entropy 2014, 16(5), 2642-2668; doi:10.3390/e16052642
Received: 31 March 2014 / Revised: 7 May 2014 / Accepted: 9 May 2014 / Published: 14 May 2014
PDF Full-text (469 KB) | HTML Full-text | XML Full-text
Abstract
We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance 2013) to find the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&P500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative) Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. Then we show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Open Access Article Equivalent Temperature-Enthalpy Diagram for the Study of Ejector Refrigeration Systems
Entropy 2014, 16(5), 2669-2685; doi:10.3390/e16052669
Received: 13 January 2014 / Revised: 17 April 2014 / Accepted: 4 May 2014 / Published: 14 May 2014
Cited by 2 | PDF Full-text (463 KB) | HTML Full-text | XML Full-text
Abstract
The Carnot factor versus enthalpy variation (heat) diagram has been used extensively for the second-law analysis of heat transfer processes. With enthalpy variation (heat) as the abscissa and the Carnot factor as the ordinate, the area between the curves representing the heat exchanging media on this diagram illustrates the exergy losses due to the transfer. It is also possible to draw the paths of working fluids in steady-state, steady-flow thermodynamic cycles on this diagram using the definition of “the equivalent temperature” as the ratio between the variations of enthalpy and entropy in the analyzed process. Despite the usefulness of this approach, two important shortcomings should be emphasized. First, the approach is not applicable to the processes of expansion and compression, particularly the isenthalpic processes taking place in expansion valves. Second, from the point of view of rigorous thermodynamics, the proposed ratio has the dimension of temperature for isobaric processes only. The present paper proposes to overcome these shortcomings by replacing the actual processes of expansion and compression with combinations of two thermodynamic paths: isentropic and isobaric. As a result, actual (not ideal) refrigeration and power cycles can be presented on equivalent temperature versus enthalpy variation diagrams. All the exergy losses taking place in different pieces of equipment, such as pumps, turbines, compressors, expansion valves, condensers and evaporators, are then clearly visualized. Moreover, the exergies consumed and produced in each component of these cycles are also presented, which makes it possible to analyze the exergy efficiencies of the components. The proposed diagram is finally applied to the second-law analysis of an ejector-based refrigeration system. Full article
(This article belongs to the Special Issue Entropy and the Second Law of Thermodynamics)
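A small sketch of the "equivalent temperature" defined in the abstract (the ratio of enthalpy to entropy variations), using an isobaric heating process with constant specific heat; the fluid properties and dead-state temperature are illustrative assumptions, not values from the paper.

```python
import math

T0 = 298.15  # assumed dead-state (ambient) temperature in kelvin

def equivalent_temperature(dh, ds):
    """Equivalent temperature of a process: enthalpy change over entropy change."""
    return dh / ds

# Isobaric heating of a stream with constant cp from T1 to T2:
# dh = cp*(T2 - T1) and ds = cp*ln(T2/T1), so T_eq reduces to the
# logarithmic-mean temperature of the process.
cp, T1, T2 = 4.18, 300.0, 400.0          # kJ/(kg*K), K, K
dh = cp * (T2 - T1)                      # kJ/kg
ds = cp * math.log(T2 / T1)              # kJ/(kg*K)
T_eq = equivalent_temperature(dh, ds)

# The Carnot factor evaluated at T_eq gives the exergy received per unit
# enthalpy, so (1 - T0/T_eq) * dh equals the exergy change dh - T0*ds.
exergy_change = (1.0 - T0 / T_eq) * dh
```

Plotting 1 − T0/T_eq against cumulative enthalpy for each stream reproduces the kind of diagram the paper analyzes, with the area between the curves as the transfer exergy loss.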
Open Access Article Model Selection Criteria Using Divergences
Entropy 2014, 16(5), 2686-2698; doi:10.3390/e16052686
Received: 1 April 2014 / Revised: 12 May 2014 / Accepted: 13 May 2014 / Published: 14 May 2014
Cited by 8 | PDF Full-text (229 KB) | HTML Full-text | XML Full-text
Abstract
In this note we introduce some divergence-based model selection criteria. These criteria are defined by estimators of the expected overall discrepancy between the true unknown model and the candidate model, using dual representations of divergences and associated minimum divergence estimators. It is shown that the proposed criteria are asymptotically unbiased. The influence functions of these criteria are also derived and some comments on robustness are provided. Full article
Open Access Article Action-Amplitude Approach to Controlled Entropic Self-Organization
Entropy 2014, 16(5), 2699-2712; doi:10.3390/e16052699
Received: 25 January 2014 / Accepted: 12 May 2014 / Published: 14 May 2014
Cited by 2 | PDF Full-text (1000 KB) | HTML Full-text | XML Full-text
Abstract
Motivated by the notion of perceptual error, as a core concept of the perceptual control theory, we propose an action-amplitude model for controlled entropic self-organization (CESO). We present several aspects of this development that illustrate its explanatory power: (i) a physical view of partition functions and path integrals, as well as entropy and phase transitions; (ii) a global view of functional compositions and commutative diagrams; (iii) a local geometric view of the Kähler–Ricci flow and time-evolution of entropic action; and (iv) a computational view using various path-integral approximations. Full article
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
Open Access Article Non-Extensive Entropy Econometrics: New Statistical Features of Constant Elasticity of Substitution-Related Models
Entropy 2014, 16(5), 2713-2728; doi:10.3390/e16052713
Received: 23 February 2014 / Revised: 18 April 2014 / Accepted: 12 May 2014 / Published: 16 May 2014
Cited by 3 | PDF Full-text (503 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Power-law (PL) formalism is known to provide an appropriate framework for canonical modeling of nonlinear systems. We estimated three stochastically distinct models of constant elasticity of substitution (CES) class functions as a non-linear inverse problem and showed that these PL-related functions should have a closed form. The first model is related to an aggregator production function, the second to an aggregator utility function (the Armington) and the third to an aggregator technical transformation function. A q-generalization of the Kullback-Leibler information divergence criterion function with a priori consistency constraints is proposed. Related inferential statistical indices are computed. The approach leads to robust estimation and to new findings about the true stochastic nature of this class of nonlinear, until now analytically intractable, functions. Outputs from traditional econometric techniques (Shannon entropy, NLLS, GMM, ML) are also presented. Full article
Open AccessArticle Transitional Intermittency Exponents Through Deterministic Boundary-Layer Structures and Empirical Entropic Indices
Entropy 2014, 16(5), 2729-2755; doi:10.3390/e16052729
Received: 1 March 2014 / Revised: 28 April 2014 / Accepted: 9 May 2014 / Published: 16 May 2014
Cited by 4 | PDF Full-text (666 KB) | HTML Full-text | XML Full-text
Abstract
A computational procedure is developed to determine initial instabilities within a three-dimensional laminar boundary layer and to follow these instabilities in the streamwise direction through to the resulting intermittency exponents within a fully developed turbulent flow. The fluctuating velocity wave vector component equations are arranged into a Lorenz-type system of equations. The nonlinear time series solution of these equations at the fifth station downstream of the initial instabilities indicates a sequential outward burst process, while the results for the eleventh station predict a strong sequential inward sweep process. The results for the thirteenth station indicate a return to the original instability autogeneration process. The nonlinear time series solutions indicate regions of order and disorder within the solutions. Empirical entropies are defined from decomposition modes obtained from singular value decomposition techniques applied to the nonlinear time series solutions. Empirical entropic indices are obtained from the empirical entropies for two streamwise stations. The intermittency exponents are then obtained from the entropic indices for these streamwise stations that indicate the burst and autogeneration processes. Full article
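The "Lorenz-type system" mentioned above can be illustrated with the textbook Lorenz equations; the sketch below uses the classic parameter values (σ = 10, ρ = 28, β = 8/3) and a simple explicit-Euler integrator, not the paper's boundary-layer coefficients or solver:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the classic Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def lorenz_series(n, state=(1.0, 1.0, 1.0)):
    """Generate an n-point nonlinear time series by repeated stepping,
    the kind of trajectory that SVD-based mode decompositions act on."""
    out = [state]
    for _ in range(n - 1):
        state = lorenz_step(state)
        out.append(state)
    return out
```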
Open AccessArticle Long-Range Atomic Order and Entropy Change at the Martensitic Transformation in a Ni-Mn-In-Co Metamagnetic Shape Memory Alloy
Entropy 2014, 16(5), 2756-2767; doi:10.3390/e16052756
Received: 4 April 2014 / Revised: 28 April 2014 / Accepted: 14 May 2014 / Published: 19 May 2014
Cited by 4 | PDF Full-text (598 KB) | HTML Full-text | XML Full-text
Abstract
The influence of atomic order on the martensitic transformation entropy change has been studied in a Ni-Mn-In-Co metamagnetic shape memory alloy through the evolution of the transformation temperatures under high-temperature quenching and post-quench annealing thermal treatments. It is confirmed that the entropy change evolves as a consequence of variations in the degree of L21 atomic order brought about by thermal treatments, though, contrary to what occurs in ternary Ni-Mn-In, post-quench aging appears to be the most effective way to modify the transformation entropy in Ni-Mn-In-Co. It is also shown that any entropy change value between around 5 and 40 J/(kg·K) can be achieved in a controllable way for a single alloy under the appropriate aging treatment, thus opening up the possibility of properly tuning the magnetocaloric effect. Full article
(This article belongs to the Special Issue Entropy in Shape Memory Alloys)
Open AccessArticle Market Efficiency, Roughness and Long Memory in PSI20 Index Returns: Wavelet and Entropy Analysis
Entropy 2014, 16(5), 2768-2788; doi:10.3390/e16052768
Received: 21 March 2014 / Revised: 8 May 2014 / Accepted: 9 May 2014 / Published: 19 May 2014
Cited by 2 | PDF Full-text (417 KB) | HTML Full-text | XML Full-text
Abstract
In this study, features of the financial returns of the PSI-20 index, related to market efficiency, are captured using wavelet- and entropy-based techniques. This characterization includes the following points. First, the detection of long memory, associated with low frequencies, and a global measure of the time series: the Hurst exponent, estimated by several methods, including wavelets. Second, the degree of roughness, or regularity variation, associated with the Hölder exponent, fractal dimension and estimation based on the multifractal spectrum. Finally, the degree of the unpredictability of the series, estimated by approximate entropy. These aspects may also be studied through the concepts of non-extensive entropy and distribution using, for instance, the Tsallis q-triplet. They allow one to study the existence of efficiency in the financial market. On the other hand, the study of local roughness is performed by considering wavelet leader-based entropy. In fact, the wavelet coefficients are computed from a multiresolution analysis, and the wavelet leaders are defined by the local suprema of these coefficients near the point under consideration. The resulting entropy is more accurate in that detection than the Hölder exponent. These procedures enhance the capacity to identify the occurrence of financial crashes. Full article
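Of the measures listed, approximate entropy is the most compact to sketch. The following is a plain implementation of Pincus' ApEn with assumed parameters m = 2 and r = 0.2; the article's tolerance and embedding choices may differ:

```python
import math

def approx_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series (Pincus, 1991).
    Counts template matches under the Chebyshev distance; self-matches
    are included, so every count is >= 1 and the logs stay finite."""
    n = len(x)
    def phi(k):
        templates = [x[i:i + k] for i in range(n - k + 1)]
        log_sum = 0.0
        for t1 in templates:
            count = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            log_sum += math.log(count / len(templates))
        return log_sum / len(templates)
    return phi(m) - phi(m + 1)
```

A perfectly regular (e.g., constant) series scores zero, while noisy, unpredictable series score higher, which is what makes ApEn usable as an (in)efficiency proxy.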
Open AccessArticle Changing the Environment Based on Empowerment as Intrinsic Motivation
Entropy 2014, 16(5), 2789-2819; doi:10.3390/e16052789
Received: 28 February 2014 / Revised: 28 April 2014 / Accepted: 4 May 2014 / Published: 21 May 2014
Cited by 4 | PDF Full-text (1413 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
One aspect of intelligence is the ability to restructure your own environment so that the world you live in becomes more beneficial to you. In this paper we investigate how the information-theoretic measure of agent empowerment can provide a task-independent, intrinsic motivation to restructure the world. We show how changes in embodiment and in the environment change the resulting behaviour of the agent and the artefacts left in the world. For this purpose, we introduce an approximation of the established empowerment formalism based on sparse sampling, which is simpler and significantly faster to compute for deterministic dynamics. Sparse sampling also introduces a degree of randomness into the decision-making process, which turns out to be beneficial in some cases. We then utilize the measure to generate agent behaviour for different agent embodiments in a Minecraft-inspired three-dimensional block world. The paradigmatic results demonstrate that empowerment can be used as a suitable generic intrinsic motivation not only to generate actions in given static environments, as shown in the past, but also to modify existing environmental conditions. In doing so, the emerging strategies to modify an agent's environment turn out to be meaningful to the specific agent capabilities, i.e., de facto to its embodiment. Full article
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
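For deterministic dynamics, the quantity being approximated reduces to a simple form: n-step empowerment is the logarithm of the number of distinct states reachable by length-n action sequences. A minimal sketch, using exhaustive enumeration rather than the paper's sparse sampling, with a hypothetical 1-D grid world as the transition function:

```python
from itertools import product
from math import log2

def empowerment_deterministic(state, actions, step, n):
    """n-step empowerment for deterministic dynamics: log2 of the number
    of distinct states reachable via all length-n action sequences.
    (Exhaustive enumeration; sparse sampling would subsample `product`.)"""
    reachable = set()
    for seq in product(actions, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

def grid_step(s, a):
    """Hypothetical 1-D grid world: positions 0..4, moves -1/0/+1, clamped."""
    return min(4, max(0, s + a))
```

In this toy world an agent at the centre (5 reachable cells, log2 5 bits) is more empowered than one pinned against a wall, which is the intuition the intrinsic-motivation scheme exploits.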
Open AccessArticle Randomized Binary Consensus with Faulty Agents
Entropy 2014, 16(5), 2820-2838; doi:10.3390/e16052820
Received: 30 January 2014 / Revised: 7 May 2014 / Accepted: 13 May 2014 / Published: 21 May 2014
Cited by 3 | PDF Full-text (712 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates self-organizing binary majority consensus disturbed by faulty nodes with random and persistent failure. We study consensus in ordered and random networks with noise, message loss and delays. Using computer simulations, we show that: (1) explicit randomization by noise, message loss and topology can increase robustness towards faulty nodes; (2) commonly-used faulty nodes with random failure inhibit consensus less than faulty nodes with persistent failure; and (3) in some cases, such randomly failing faulty nodes can even promote agreement. Full article
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
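A stripped-down cousin of the systems simulated here is synchronous local-majority voting on a ring. The sketch below is illustrative only (the function name and parameters are hypothetical, and the article's failure models and topologies are richer); it shows how such dynamics can freeze short of consensus without randomization:

```python
import random

def majority_consensus(state, rounds=200, noise=0.0, rng=None):
    """Synchronous local-majority rule on a ring: each node adopts the
    majority of itself and its two neighbours; with probability `noise`
    it flips to the opposite value instead (explicit randomization).
    Returns True if global agreement is reached within `rounds`."""
    rng = rng or random.Random(0)
    state = list(state)
    n = len(state)
    for _ in range(rounds):
        if len(set(state)) == 1:
            return True
        nxt = []
        for i in range(n):
            votes = state[i - 1] + state[i] + state[(i + 1) % n]
            s = 1 if votes >= 2 else 0
            if noise > 0 and rng.random() < noise:
                s = 1 - s
            nxt.append(s)
        state = nxt
    return len(set(state)) == 1
```

With noise = 0 this rule locks into frozen opposing blocks (e.g., `[0,0,0,1,1,1]`), illustrating why injected randomness can promote rather than hinder agreement.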
Open AccessArticle Exact Test of Independence Using Mutual Information
Entropy 2014, 16(5), 2839-2849; doi:10.3390/e16052839
Received: 18 February 2014 / Revised: 15 May 2014 / Accepted: 20 May 2014 / Published: 23 May 2014
Cited by 4 | PDF Full-text (126 KB) | HTML Full-text | XML Full-text
Abstract
Using a recently discovered method for producing random symbol sequences with prescribed transition counts, we present an exact null hypothesis significance test (NHST) for mutual information between two random variables, the null hypothesis being that the mutual information is zero (i.e., independence). The exact tests reported in the literature assume that data samples for each variable are sequentially independent and identically distributed (iid). In general, time series data have dependencies (Markov structure) that violate this condition. The algorithm given in this paper is the first exact significance test of mutual information that takes into account the Markov structure. When the Markov order is not known or indefinite, an exact test is used to determine an effective Markov order. Full article
(This article belongs to the Special Issue Information in Dynamical Systems and Complex Systems)
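The simpler iid cousin of the exact test described above is a permutation NHST for I(X;Y) = 0. The sketch below shuffles one variable, which preserves marginal counts but, unlike the article's method, not Markov transition counts:

```python
import random
from collections import Counter
from math import log

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def mi_permutation_test(xs, ys, n_perm=999, seed=0):
    """One-sided permutation NHST of I(X;Y) = 0 under an iid assumption:
    shuffling ys preserves marginals but destroys any Markov structure."""
    rng = random.Random(seed)
    observed = mutual_information(xs, ys)
    ys = list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if mutual_information(xs, ys) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # permutation p-value
```

On serially dependent data this shuffle-based null is exactly what inflates false positives, which motivates the transition-count-preserving surrogates of the article.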
Open AccessArticle A Probabilistic Description of the Configurational Entropy of Mixing
Entropy 2014, 16(5), 2850-2868; doi:10.3390/e16052850
Received: 27 February 2014 / Revised: 16 May 2014 / Accepted: 20 May 2014 / Published: 23 May 2014
Cited by 1 | PDF Full-text (346 KB) | HTML Full-text | XML Full-text
Abstract
This work presents a formalism to calculate the configurational entropy of mixing based on the identification of non-interacting atomic complexes in the mixture and the calculation of their respective probabilities, instead of computing the number of atomic configurations in a lattice. The methodology is applied to develop a general analytical expression for the configurational entropy of mixing of interstitial solutions. The expression is valid for any interstitial concentration, is suitable for the treatment of interstitial short-range order (SRO) and can be applied to tetrahedral or octahedral interstitial solutions in any crystal lattice. The effect of the SRO of H on the structural properties of the Nb-H and bcc Zr-H solid solutions is studied using an accurate description of the configurational entropy. The methodology can also be applied to systems with no translational symmetry, such as liquids and amorphous materials. An expression for the configurational entropy of a granular system composed of equal-sized hard spheres is deduced. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
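The non-interacting baseline that any configurational-entropy formalism must recover is the ideal random-mixing expression S = −R Σ x_i ln x_i. A one-function sketch (per mole of mixing sites; not the article's SRO-corrected result):

```python
from math import log

R_GAS = 8.314462618  # gas constant, J/(mol*K)

def ideal_mixing_entropy(fractions):
    """Ideal (random-mixing) configurational entropy per mole of sites,
    S = -R * sum(x_i * ln x_i), for mole fractions summing to 1.
    Short-range order lowers this value by correlating occupancies."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return -R_GAS * sum(x * log(x) for x in fractions if x > 0)
```

For an equimolar binary mixture this gives R ln 2 ≈ 5.76 J/(mol·K), and zero for a pure component.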
Open AccessArticle A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates
Entropy 2014, 16(5), 2869-2889; doi:10.3390/e16052869
Received: 22 March 2014 / Revised: 12 May 2014 / Accepted: 21 May 2014 / Published: 23 May 2014
Cited by 2 | PDF Full-text (4820 KB) | HTML Full-text | XML Full-text
Abstract
Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided and data is processed in a period of time that is comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data of an aluminum honeycomb panel under different damage scenarios. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
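The maximum-entropy principle underlying such inference models can be illustrated on a toy problem: the max-ent distribution over a finite set of values subject to a prescribed mean is the exponential family p_i ∝ exp(λx_i). The sketch below (not the authors' algorithm) finds λ by bisection:

```python
from math import exp

def maxent_distribution(values, mean):
    """Maximum-entropy distribution over `values` with a prescribed mean:
    p_i proportional to exp(lam * values[i]). The multiplier `lam` is
    found by bisection, since the mean increases monotonically with lam."""
    def moment(lam):
        w = [exp(lam * v) for v in values]
        z = sum(w)
        return sum(wi * v for wi, v in zip(w, values)) / z
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if moment(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

When the prescribed mean already matches the unconstrained average, λ = 0 and the result is uniform, i.e., the least-committed distribution consistent with the data.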
Open AccessArticle Maximum Power of Thermally and Electrically Coupled Thermoelectric Generators
Entropy 2014, 16(5), 2890-2903; doi:10.3390/e16052890
Received: 26 February 2014 / Revised: 2 April 2014 / Accepted: 20 May 2014 / Published: 23 May 2014
Cited by 4 | PDF Full-text (549 KB) | HTML Full-text | XML Full-text
Abstract
In a recent work, we have reported a study on the figure of merit of a thermoelectric system composed of thermoelectric generators connected electrically and thermally in different configurations. In this work, we are interested in analyzing the output power delivered by a thermoelectric system for different arrays of thermoelectric materials in each configuration. Our study shows the impact of the array of thermoelectric materials on the output power of the composite system. We evaluate numerically the corresponding maximum output power for each configuration and determine the optimum array and configuration for maximum power. We compare our results with other recently reported studies. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
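At the level of a single generator, the maximum-power condition is the matched-load result P_max = (αΔT)²/(4R). A sketch with hypothetical Seebeck coefficient and resistance values; the article treats thermally and electrically coupled arrays, which this single-device model does not capture:

```python
def output_power(alpha, d_t, r_int, r_load):
    """Power delivered by one TEG modeled as a Seebeck voltage source
    V = alpha * dT behind an internal resistance, into a resistive load."""
    current = alpha * d_t / (r_int + r_load)
    return current ** 2 * r_load

def max_power(alpha, d_t, r_int):
    """Matched-load maximum, P_max = (alpha * dT)**2 / (4 * r_int)."""
    return (alpha * d_t) ** 2 / (4.0 * r_int)
```

For example, with α = 200 µV/K, ΔT = 100 K and R = 1 Ω, the matched load delivers 0.1 mW; any mismatched load delivers less.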
Review

Jump to: Research

Open AccessReview Recent Theoretical Approaches to Minimal Artificial Cells
Entropy 2014, 16(5), 2488-2511; doi:10.3390/e16052488
Received: 17 January 2014 / Revised: 27 February 2014 / Accepted: 22 April 2014 / Published: 8 May 2014
Cited by 3 | PDF Full-text (1412 KB) | HTML Full-text | XML Full-text
Abstract
Minimal artificial cells (MACs) are self-assembled chemical systems able to mimic the behavior of living cells at a minimal level, i.e., to exhibit self-maintenance, self-reproduction and the capability of evolution. The bottom-up approach to the construction of MACs is mainly based on the encapsulation of chemical reacting systems inside lipid vesicles, i.e., chemical systems enclosed (compartmentalized) by a double-layered lipid membrane. Several researchers are currently interested in synthesizing such simple cellular models for biotechnological purposes or for investigating origin of life scenarios. Within this context, the properties of lipid vesicles (e.g., their stability, permeability, growth dynamics, potential to host reactions or undergo division processes…) play a central role, in combination with the dynamics of the encapsulated chemical or biochemical networks. Thus, from a theoretical standpoint, it is very important to develop kinetic equations in order to explore first—and specify later—the conditions that allow the robust implementation of these complex chemically reacting systems, as well as their controlled reproduction. Because they are compartmentalized in small volumes, the populations of reacting species can comprise very few molecules, and their behavior therefore becomes highly affected by stochastic effects, both in the time course of reactions and in the occupancy distribution among the vesicle population. In this short review we report our mathematical approaches to model artificial cell systems in this complex scenario, giving a summary of three recent simulation studies on the topic of primitive cell (protocell) systems. Full article
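Stochastic kinetics at low molecule numbers are typically simulated with Gillespie's stochastic simulation algorithm. A minimal sketch for a single first-order decay reaction inside a vesicle (illustrative only, not the models from the reviewed studies):

```python
import random
from math import log

def gillespie_decay(n0, k, t_max, rng=None):
    """Gillespie SSA trajectory for first-order decay A -> 0 at rate k,
    starting from n0 molecules. Returns a list of (time, count) pairs.
    With few encapsulated molecules, single trajectories fluctuate
    strongly around the deterministic exponential decay."""
    rng = rng or random.Random(0)
    t, n = 0.0, n0
    traj = [(t, n)]
    while n > 0:
        a = k * n                            # total propensity
        t += -log(1.0 - rng.random()) / a    # exponential waiting time
        if t > t_max:
            break
        n -= 1                               # one reaction event fires
        traj.append((t, n))
    return traj
```

Repeating the run over many "vesicles" (different seeds) yields the kind of occupancy distributions across a vesicle population that the review discusses.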
