Table of Contents

Entropy, Volume 17, Issue 7 (July 2015), Pages 4485-5144

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-36

Research


Open Access Article: On Monotone Embedding in Information Geometry
Entropy 2015, 17(7), 4485-4499; doi:10.3390/e17074485
Received: 27 January 2015 / Revised: 28 February 2015 / Accepted: 17 March 2015 / Published: 25 June 2015
Cited by 6 | PDF Full-text (273 KB) | HTML Full-text | XML Full-text
Abstract
A paper was published (Harsha and Subrahamanian Moosath, 2014) in which the authors claimed to have discovered an extension to Amari's \(\alpha\)-geometry through a general monotone embedding function. It is pointed out here that this so-called \((F, G)\)-geometry (which includes \(F\)-geometry as a special case) is identical to Zhang's (2004) extension to the \(\alpha\)-geometry, in which the pair of monotone embedding functions was denoted \(\rho\) and \(\tau\) rather than the \(F\) and \(H\) used in Harsha and Subrahamanian Moosath (2014). Their weighting function \(G\) for the Riemannian metric arises merely from rewriting the score function in log-representation as opposed to the \((\rho, \tau)\)-representation of Zhang (2004). It is further shown here that the metric and \(\alpha\)-connections obtained by Zhang (2004) through arbitrary monotone embeddings form a unique extension of the \(\alpha\)-geometric structure. As a special case, Naudts' (2004) \(\phi\)-logarithm embedding (using the so-called \(\log_\phi\) function) is recovered with the identification \(\rho=\phi, \, \tau=\log_\phi\), with the \(\phi\)-exponential \(\exp_\phi\) given by the associated convex function linking the two representations. Full article
Open Access Article: Clausius’ Disgregation: A Conceptual Relic that Sheds Light on the Second Law
Entropy 2015, 17(7), 4500-4518; doi:10.3390/e17074500
Received: 5 May 2015 / Revised: 11 June 2015 / Accepted: 17 June 2015 / Published: 25 June 2015
Cited by 1 | PDF Full-text (988 KB) | HTML Full-text | XML Full-text
Abstract
The present work analyzes the cognitive process that led Clausius to translate the Second Law of Thermodynamics into mathematical expressions. We show that Clausius’ original formal expression of the Second Law was achieved by making extensive use of the concept of disgregation, a quantity that has since disappeared from the language of thermodynamics. Our analysis demonstrates that disgregation was a crucial logical step in this process and sheds light on the meaning of this fundamental relation. The introduction of entropy, which occurred three years after the first formalization of the Second Law, was aimed at making the Second Law exploitable in practical contexts. The reasons for the disappearance of disgregation, as well as of other “pre-modern” quantities, from the language of thermodynamics are discussed. Full article
(This article belongs to the Section Thermodynamics)
Open Access Article: Estimating Portfolio Value at Risk in the Electricity Markets Using an Entropy Optimized BEMD Approach
Entropy 2015, 17(7), 4519-4532; doi:10.3390/e17074519
Received: 27 January 2015 / Revised: 18 June 2015 / Accepted: 23 June 2015 / Published: 26 June 2015
PDF Full-text (222 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a new entropy-optimized bivariate empirical mode decomposition (BEMD)-based model for estimating portfolio value at risk (PVaR). The BEMD model reveals and analyzes different components of the price fluctuation, decomposed and distinguished by their behavioral patterns and fluctuation ranges. Entropy theory is introduced to identify the model parameters during the modeling process. The decomposed bivariate data components are then modeled with DCC-GARCH models. Empirical studies suggest that the proposed model outperforms the benchmark multivariate exponentially weighted moving average (MEWMA) and DCC-GARCH models in terms of conventional out-of-sample performance evaluation criteria for model accuracy. Full article
Open Access Article: Reliability Analysis Based on a Jump Diffusion Model with Two Wiener Processes for Cloud Computing with Big Data
Entropy 2015, 17(7), 4533-4546; doi:10.3390/e17074533
Received: 16 April 2015 / Revised: 16 June 2015 / Accepted: 24 June 2015 / Published: 26 June 2015
Cited by 4 | PDF Full-text (164 KB) | HTML Full-text | XML Full-text
Abstract
At present, many cloud services are managed by using open source software, such as OpenStack and Eucalyptus, because of unified data management, cost reduction, quick delivery and labor savings. The operation phase of cloud computing has unique features, such as the provisioning processes, the network-based operation and the diversity of data, because it changes depending on many external factors. We propose a jump diffusion model with two-dimensional Wiener processes in order to capture the effects of network traffic and big data on cloud computing. In particular, we assess the stability of cloud software by using sample paths obtained from the jump diffusion model with two-dimensional Wiener processes. Moreover, we discuss the optimal maintenance problem based on the proposed jump diffusion model. Furthermore, we analyze actual data to show numerical examples of dependability optimization based on the software maintenance cost, considering big data on cloud computing. Full article
(This article belongs to the Special Issue Dynamical Equations and Causal Structures from Observations)
Open Access Article: Noiseless Linear Amplifiers in Entanglement-Based Continuous-Variable Quantum Key Distribution
Entropy 2015, 17(7), 4547-4562; doi:10.3390/e17074547
Received: 27 March 2015 / Revised: 4 June 2015 / Accepted: 23 June 2015 / Published: 26 June 2015
Cited by 4 | PDF Full-text (1138 KB) | HTML Full-text | XML Full-text
Abstract
We propose a method to improve the performance of two entanglement-based continuous-variable quantum key distribution protocols using noiseless linear amplifiers. The two entanglement-based schemes consist of an entanglement distribution protocol with an untrusted source and an entanglement swapping protocol with an untrusted relay. Simulation results show that the noiseless linear amplifiers can improve the performance of these two protocols, in terms of maximal transmission distances, when we consider small amounts of entanglement, as typical in realistic setups. Full article
(This article belongs to the Special Issue Quantum Cryptography)
Open Access Article: A Possible Cosmological Application of Some Thermodynamic Properties of the Black Body Radiation in n-Dimensional Euclidean Spaces
Entropy 2015, 17(7), 4563-4581; doi:10.3390/e17074563
Received: 18 March 2015 / Revised: 25 May 2015 / Accepted: 16 June 2015 / Published: 29 June 2015
Cited by 1 | PDF Full-text (1728 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we present the generalization of some thermodynamic properties of the black body radiation (BBR) towards an n-dimensional Euclidean space. For this case, the Planck function and the Stefan–Boltzmann law have already been given by Landsberg and de Vos and some adjustments by Menon and Agrawal. However, since then, not much more has been done on this subject, and we believe there are some relevant aspects yet to explore. In addition to the results previously found, we calculate the thermodynamic potentials, the efficiency of the Carnot engine, the law for adiabatic processes and the heat capacity at constant volume. There is a region at which an interesting behavior of the thermodynamic potentials arises: maxima and minima appear for the n-dimensional BBR system at very high temperatures and low dimensionality, suggesting a possible application to cosmology. Finally, we propose that an optimality criterion in a thermodynamic framework could be related to the three-dimensional nature of the universe. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics)
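For orientation, the kind of generalization involved can be sketched from standard photon-gas thermodynamics. The following relations are textbook forms consistent with the Landsberg–de Vos scaling, not formulas quoted from the paper (the normalization constant \(\sigma_n\) is an assumed notation):

```latex
% Black body radiation in n spatial dimensions (photon gas):
% energy density, equation of state, and the adiabatic law
u(T) = \sigma_n\, T^{\,n+1}, \qquad
p = \frac{u}{n}, \qquad
T\, V^{1/n} = \mathrm{const.}\ \ \text{(adiabatic processes)}
% For n = 3 these reduce to the familiar u = \sigma T^4,
% p = u/3 and T V^{1/3} = const.
```

The Carnot efficiency \(1 - T_c/T_h\) is dimension-independent, which is why the interesting dimensional effects in the abstract appear in the potentials and the adiabats rather than in the engine efficiency itself.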
Open Access Article: Applications of the Fuzzy Sumudu Transform for the Solution of First Order Fuzzy Differential Equations
Entropy 2015, 17(7), 4582-4601; doi:10.3390/e17074582
Received: 20 April 2015 / Revised: 1 June 2015 / Accepted: 3 June 2015 / Published: 1 July 2015
Cited by 4 | PDF Full-text (522 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we study the classical Sumudu transform in a fuzzy environment, referred to as the fuzzy Sumudu transform (FST). We also propose some results on the properties of the FST, such as linearity, preserving, the fuzzy derivative, shifting and the convolution theorem. In order to show the capability of the FST, we provide a detailed procedure for solving fuzzy differential equations (FDEs). A numerical example is provided to illustrate the usage of the FST. Full article
(This article belongs to the Special Issue Dynamical Equations and Causal Structures from Observations)
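For context, the classical (crisp) Sumudu transform that the FST generalizes has the standard definition below; this is background material, not an excerpt from the paper:

```latex
% Classical Sumudu transform (Watugala), for exponentially bounded f:
S[f](u) = \int_0^{\infty} f(ut)\, e^{-t}\, \mathrm{d}t,
\qquad u \in (-\tau_1, \tau_2)
% Duality with the Laplace transform F(s) = \int_0^\infty f(t)\,e^{-st}\,dt:
%   S[f](u) = \tfrac{1}{u}\, F\!\left(\tfrac{1}{u}\right)
```

Its appeal for differential equations is unit preservation: the transform of a function carries the same dimensions as the function itself, which the fuzzy extension inherits level-set by level-set.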
Open Access Article: A New Robust Regression Method Based on Minimization of Geodesic Distances on a Probabilistic Manifold: Application to Power Laws
Entropy 2015, 17(7), 4602-4626; doi:10.3390/e17074602
Received: 3 April 2015 / Revised: 20 June 2015 / Accepted: 25 June 2015 / Published: 1 July 2015
Cited by 4 | PDF Full-text (2403 KB) | HTML Full-text | XML Full-text
Abstract
In regression analysis for deriving scaling laws that occur in various scientific disciplines, standard regression methods are usually applied, of which ordinary least squares (OLS) is the most popular. In many situations, the assumptions underlying OLS are not fulfilled, and several other approaches have been proposed. However, most techniques address only part of the shortcomings of OLS. Here we discuss a new and more general regression method, which we call geodesic least squares regression (GLS). The method is based on minimization of the Rao geodesic distance on a probabilistic manifold. For the case of a power law, we demonstrate the robustness of the method on synthetic data in the presence of significant uncertainty in both the data and the regression model. We then show the good performance of the method in an application to a scaling law in magnetic confinement fusion. Full article
Open Access Article: Differentiating Interictal and Ictal States in Childhood Absence Epilepsy through Permutation Rényi Entropy
Entropy 2015, 17(7), 4627-4643; doi:10.3390/e17074627
Received: 31 March 2015 / Revised: 18 June 2015 / Accepted: 25 June 2015 / Published: 2 July 2015
Cited by 12 | PDF Full-text (1470 KB) | HTML Full-text | XML Full-text
Abstract
Permutation entropy (PE) has been widely exploited to measure the complexity of the electroencephalogram (EEG), especially when complexity is linked to diagnostic information embedded in the EEG. Recently, the authors proposed a spatial-temporal analysis of the EEG recordings of absence epilepsy patients based on PE. The goal here is to improve the ability of PE to discriminate interictal states from ictal states in absence seizure EEG. For this purpose, a parametric definition of permutation entropy is introduced to the field of epileptic EEG analysis: the permutation Rényi entropy (PEr). PEr has been extensively tested against PE by tuning the involved parameters (order, delay time and alpha). The results demonstrate that PEr outperforms PE: the gap between the PEr levels in the interictal states and those in the ictal states is wider, in a statistically significant way, than the corresponding gap for PE. PEr also outperformed PE as the input to a classifier aimed at discriminating interictal from ictal states. Full article
(This article belongs to the Special Issue Entropy in Human Brain Networks)
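As a rough illustration of the quantities being compared, here is a minimal sketch of permutation entropy and its Rényi generalization. Function and parameter names are my own, and this is not the authors' implementation (tie handling, normalization and parameter ranges may differ from the paper):

```python
import math
import random
from collections import Counter

def ordinal_distribution(x, order=3, delay=1):
    """Relative frequencies of the ordinal patterns found in series x."""
    n = len(x) - (order - 1) * delay
    counts = Counter(
        # rank the embedded vector (x[i], x[i+delay], ...) to get its pattern
        tuple(sorted(range(order), key=lambda k: x[i + k * delay]))
        for i in range(n)
    )
    return [c / n for c in counts.values()]

def permutation_entropy(x, order=3, delay=1):
    """Shannon permutation entropy, normalized to [0, 1]."""
    p = ordinal_distribution(x, order, delay)
    h = -sum(pi * math.log(pi) for pi in p)
    return h / math.log(math.factorial(order))

def permutation_renyi_entropy(x, alpha=2.0, order=3, delay=1):
    """Renyi-alpha permutation entropy (PEr); tends to PE as alpha -> 1."""
    p = ordinal_distribution(x, order, delay)
    h = math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)
    return h / math.log(math.factorial(order))

# A monotone series uses a single ordinal pattern (entropy 0), while
# white noise spreads mass over all order! patterns (entropy near 1).
random.seed(0)
noise = [random.random() for _ in range(3000)]
print(permutation_entropy(noise), permutation_renyi_entropy(noise, alpha=2.0))
```

The parameter alpha gives a tunable weighting of frequent versus rare patterns; a wider interictal/ictal gap under PEr than under PE is the kind of effect the study quantifies.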
Open Access Article: Quantifying Redundant Information in Predicting a Target Random Variable
Entropy 2015, 17(7), 4644-4653; doi:10.3390/e17074644
Received: 18 March 2015 / Revised: 24 June 2015 / Accepted: 26 June 2015 / Published: 2 July 2015
Cited by 5 | PDF Full-text (3093 KB) | HTML Full-text | XML Full-text
Abstract
We consider the problem of defining a measure of redundant information that quantifies how much common information two or more random variables specify about a target random variable. We discuss desired properties of such a measure and propose new measures with some desirable properties. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
Open Access Article: Continuous Variable Quantum Key Distribution with a Noisy Laser
Entropy 2015, 17(7), 4654-4663; doi:10.3390/e17074654
Received: 12 May 2015 / Revised: 30 June 2015 / Accepted: 1 July 2015 / Published: 3 July 2015
Cited by 4 | PDF Full-text (1350 KB) | HTML Full-text | XML Full-text | Correction
Abstract
Existing experimental implementations of continuous-variable quantum key distribution require shot-noise limited operation, achieved with shot-noise limited lasers. However, loosening this requirement on the laser source would allow for cheaper, potentially integrated systems. Here, we implement a theoretically proposed prepare-and-measure continuous-variable protocol and experimentally demonstrate its robustness against preparation noise stemming, for instance, from technical laser noise. Provided that direct reconciliation techniques are used in the post-processing, we show that for small distances large amounts of preparation noise can be tolerated, in contrast to reverse reconciliation, where the key rate quickly drops to zero. Our experiment thereby demonstrates that quantum key distribution with non-shot-noise limited laser diodes might be feasible. Full article
(This article belongs to the Special Issue Quantum Cryptography)
Open Access Article: A New Feature Extraction Method Based on the Information Fusion of Entropy Matrix and Covariance Matrix and Its Application in Face Recognition
Entropy 2015, 17(7), 4664-4683; doi:10.3390/e17074664
Received: 3 April 2015 / Revised: 29 May 2015 / Accepted: 23 June 2015 / Published: 3 July 2015
Cited by 1 | PDF Full-text (1278 KB) | HTML Full-text | XML Full-text
Abstract
The classic principal components analysis (PCA), kernel PCA (KPCA) and linear discriminant analysis (LDA) feature extraction methods evaluate the importance of components according to their covariance contribution, without considering the entropy contribution, which is important supplementary information for the covariance. To further improve covariance-based methods such as PCA (or KPCA), this paper first proposes an entropy matrix that loads the uncertainty information of random variables, analogous to the covariance matrix loading the variation information in PCA. An entropy-difference matrix is then used as a weighting matrix for transforming the original training images. This entropy-difference weighting (EW) matrix not only makes good use of the local information of the training samples, in contrast to the global method of PCA, but also considers the category information, similar to the LDA idea. The EW method is then integrated with PCA (or KPCA) to form a new feature extraction method, which is applied to face recognition with the nearest neighbor classifier. The experimental results on the ORL and Yale databases show that the proposed method with proper threshold parameters reaches higher recognition rates than the usual PCA (or KPCA) methods. Full article
Open Access Article: Geometric Interpretation of Surface Tension Equilibrium in Superhydrophobic Systems
Entropy 2015, 17(7), 4684-4700; doi:10.3390/e17074684
Received: 18 June 2015 / Revised: 24 June 2015 / Accepted: 30 June 2015 / Published: 6 July 2015
Cited by 4 | PDF Full-text (1597 KB) | HTML Full-text | XML Full-text
Abstract
Surface tension and surface energy are closely related, although not identical concepts. Surface tension is a generalized force; unlike a conventional mechanical force, it is not applied to any particular body or point. Using this notion, we suggest a simple geometric interpretation of the Young, Wenzel, Cassie, Antonoff and Girifalco–Good equations for the equilibrium during wetting. This approach extends the traditional concept of Neumann’s triangle. Substances are presented as points, while tensions are vectors connecting the points, and the equations and inequalities of wetting equilibrium obtain simple geometric meaning with the surface roughness effect interpreted as stretching of corresponding vectors; surface heterogeneity is their linear combination, and contact angle hysteresis is rotation. We discuss energy dissipation mechanisms during wetting due to contact angle hysteresis, the superhydrophobicity and the possible entropic nature of the surface tension. Full article
(This article belongs to the Special Issue Geometry in Thermodynamics)
Open Access Article: Asymptotic Description of Neural Networks with Correlated Synaptic Weights
Entropy 2015, 17(7), 4701-4743; doi:10.3390/e17074701
Received: 13 February 2015 / Revised: 23 May 2015 / Accepted: 23 June 2015 / Published: 6 July 2015
PDF Full-text (399 KB) | HTML Full-text | XML Full-text
Abstract
We study the asymptotic law of a network of interacting neurons when the number of neurons becomes infinite. Given a completely connected network of neurons in which the synaptic weights are Gaussian correlated random variables, we describe the asymptotic law of the network when the number of neurons goes to infinity. We introduce the process-level empirical measure of the trajectories of the solutions to the equations of the finite network of neurons and the averaged law (with respect to the synaptic weights) of the trajectories of the solutions to the equations of the network of neurons. The main result of this article is that the image law through the empirical measure satisfies a large deviation principle with a good rate function which is shown to have a unique global minimum. Our analysis of the rate function allows us also to characterize the limit measure as the image of a stationary Gaussian measure defined on a transformed set of trajectories. Full article
(This article belongs to the Special Issue Entropic Aspects in Statistical Physics of Complex Systems)
Open Access Article: Energetic and Exergetic Analysis of an Ejector-Expansion Refrigeration Cycle Using the Working Fluid R32
Entropy 2015, 17(7), 4744-4761; doi:10.3390/e17074744
Received: 23 May 2015 / Revised: 22 June 2015 / Accepted: 30 June 2015 / Published: 6 July 2015
Cited by 2 | PDF Full-text (1677 KB) | HTML Full-text | XML Full-text
Abstract
The performance characteristics of an ejector-expansion refrigeration cycle (EEC) using R32 have been investigated in comparison with one using R134a. The coefficient of performance (COP), the exergy destruction, the exergy efficiency and the suction nozzle pressure drop (SNPD) are discussed. The results show that the application of an ejector instead of a throttle valve in the R32 cycle decreases the cycle’s total exergy destruction by 8.84%–15.84% in comparison with the basic cycle (BC). The R32 EEC provides 5.22%–13.77% COP improvement and 5.13%–13.83% exergy efficiency improvement, respectively, over the BC for the given ranges of evaporating and condensing temperatures. There exists an optimum SNPD which gives a maximum system COP and volumetric cooling capacity (VCC) under a specified condition. The value of the optimum SNPD mainly depends on the efficiencies of the ejector components, but is virtually independent of the evaporating and condensing temperatures. In addition, improving the component efficiencies, especially those of the diffusion nozzle and the motive nozzle, can enhance the EEC performance. Full article
Open Access Article: H∞ Control for Markov Jump Systems with Nonlinear Noise Intensity Function and Uncertain Transition Rates
Entropy 2015, 17(7), 4762-4774; doi:10.3390/e17074762
Received: 30 April 2015 / Revised: 19 June 2015 / Accepted: 1 July 2015 / Published: 6 July 2015
Cited by 1 | PDF Full-text (436 KB) | HTML Full-text | XML Full-text
Abstract
The problem of robust H∞ control is investigated for Markov jump systems with nonlinear noise intensity function and uncertain transition rates. A robust H∞ performance criterion is developed for the given systems for the first time. Based on the developed performance criterion, the desired H∞ state-feedback controller is also designed, which guarantees the robust H∞ performance of the closed-loop system. All the conditions are in terms of linear matrix inequalities (LMIs), and hence they can be readily solved by any LMI solver. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed methods. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
Open Access Article: Fractional Differential Texture Descriptors Based on the Machado Entropy for Image Splicing Detection
Entropy 2015, 17(7), 4775-4785; doi:10.3390/e17074775
Received: 19 May 2015 / Revised: 2 July 2015 / Accepted: 3 July 2015 / Published: 8 July 2015
Cited by 8 | PDF Full-text (872 KB) | HTML Full-text | XML Full-text
Abstract
Image splicing is a common operation in image forgery. Different techniques of image splicing detection have been utilized to regain people’s trust. This study introduces a texture enhancement technique involving the use of fractional differential masks based on the Machado entropy. The masks slide over the tampered image, and each pixel of the tampered image is convolved with the fractional mask weight window in eight directions. The fractional differential texture descriptors are then extracted using the gray-level co-occurrence matrix for image splicing detection. A support vector machine is used as a classifier to distinguish between authentic and spliced images. The results show that the improvements achieved by the proposed algorithm are comparable with those of other splicing detection methods. Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
Open Access Article: Modeling and Analysis of Entropy Generation in Light Heating of Nanoscaled Silicon and Germanium Thin Films
Entropy 2015, 17(7), 4786-4808; doi:10.3390/e17074786
Received: 26 March 2015 / Revised: 13 June 2015 / Accepted: 25 June 2015 / Published: 9 July 2015
PDF Full-text (1046 KB) | HTML Full-text | XML Full-text
Abstract
In this work, the irreversible processes in light heating of silicon (Si) and germanium (Ge) thin films are examined. Each film is exposed to light irradiation with radiative and convective boundary conditions. Heat, electron and hole transport and generation-recombination processes of electron-hole pairs are studied in terms of a phenomenological model obtained from basic principles of irreversible thermodynamics. We present an analysis of the contributions to the entropy production in the stationary state due to the dissipative effects associated with electron and hole transport, generation-recombination of electron-hole pairs and heat transport. The most significant contribution to the entropy production comes from the interaction of light with the medium in both Si and Ge. This interaction includes two processes, namely, the generation of electron-hole pairs and the transfer of energy from the absorbed light to the lattice. In Si, the next largest contribution comes from heat transport; in Ge, all the remaining contributions to the entropy production have nearly the same order of magnitude. The results are compared and explained by addressing the differences in the magnitude of the thermodynamic forces, Onsager coefficients and transport properties of Si and Ge. Full article
(This article belongs to the Special Issue Entropy Generation in Thermal Systems and Processes 2015)
Open Access Article: Power-Type Functions of Prediction Error of Sea Level Time Series
Entropy 2015, 17(7), 4809-4837; doi:10.3390/e17074809
Received: 27 April 2015 / Revised: 21 June 2015 / Accepted: 3 July 2015 / Published: 9 July 2015
Cited by 6 | PDF Full-text (852 KB) | HTML Full-text | XML Full-text
Abstract
This paper gives the quantitative relationship between prediction error and the given past sample size in our research on sea level time series. The present result exhibits that the prediction error of sea level time series, as a function of the given past sample size, follows a decaying power function, providing a quantitative guideline for the quality control of sea level prediction. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory)
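A decaying power-law relationship of this kind can be recovered from data with an ordinary log-log fit. The sketch below assumes the generic model e(m) = a·m^(−b) and is not the authors' procedure (the functional form in the paper may differ, e.g. include offsets):

```python
import math

def fit_decaying_power_law(sizes, errors):
    """Least-squares fit of e(m) = a * m**(-b) in log-log coordinates.

    sizes:  past sample sizes m (positive)
    errors: corresponding prediction errors e(m) (positive)
    Returns the pair (a, b).
    """
    xs = [math.log(m) for m in sizes]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    b = -slope                      # log e = log a - b * log m
    a = math.exp(my + b * mx)       # intercept recovered at the mean point
    return a, b

# Synthetic check: data generated from e(m) = 2 * m**-0.5 is recovered.
sizes = [1, 2, 4, 8, 16, 32]
errors = [2.0 * m ** -0.5 for m in sizes]
print(fit_decaying_power_law(sizes, errors))
```

Fitting in log-log space turns the power law into a straight line, so the exponent b is simply the negated slope; this is the standard way such decay exponents are estimated.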
Open Access Article: Faster Together: Collective Quantum Search
Entropy 2015, 17(7), 4838-4862; doi:10.3390/e17074838
Received: 8 May 2015 / Revised: 9 May 2015 / Accepted: 30 June 2015 / Published: 10 July 2015
Cited by 2 | PDF Full-text (268 KB) | HTML Full-text | XML Full-text
Abstract
Joining independent quantum searches provides novel collective modes of quantum search (merging) by utilizing the algorithm’s underlying algebraic structure. If n quantum searches, each targeting a single item, join the domains of their classical oracle functions and sum their Hilbert spaces (merging), instead of acting independently (concatenation), then they achieve a reduction of the search complexity by a factor of O(√n). Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
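The √n factor can be made intuitive with the textbook Grover iteration count, roughly (π/4)√(N/k) for k marked items among N. This sketch uses only that standard estimate, not the paper's exact merging construction: marking n targets in the joint domain needs √n times fewer iterations than a single-target search over the same domain.

```python
import math

def grover_iterations(N, k=1):
    """Standard textbook estimate of Grover iterations, ~ (pi/4)*sqrt(N/k)
    for k marked items among N (not the paper's exact accounting)."""
    return (math.pi / 4) * math.sqrt(N / k)

n, N = 16, 2 ** 20
joint = grover_iterations(n * N, k=n)   # n merged single-item searches
single = grover_iterations(n * N, k=1)  # one target in the joint domain
print(round(single / joint, 2))  # → 4.0  (= sqrt(n))
```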
Open AccessArticle Entropy, Information and Complexity or Which Aims the Arrow of Time?
Entropy 2015, 17(7), 4863-4890; doi:10.3390/e17074863
Received: 16 May 2015 / Revised: 22 June 2015 / Accepted: 29 June 2015 / Published: 10 July 2015
Cited by 2 | PDF Full-text (582 KB) | HTML Full-text | XML Full-text
Abstract
In this article, we analyze the interrelationships among such notions as entropy, information, complexity, order and chaos and show, using the theory of categories, how to generalize the second law of thermodynamics as a law of increasing generalized entropy, or a general law
[...] Read more.
In this article, we analyze the interrelationships among such notions as entropy, information, complexity, order and chaos and show, using the theory of categories, how to generalize the second law of thermodynamics as a law of increasing generalized entropy, or a general law of complification. This law can be applied to any system with morphisms, including our entire universe and its subsystems. We discuss how such a general law, together with other laws of nature, drives the evolution of the universe, including physicochemical and biological evolution. In addition, we identify eliminating selection in physicochemical evolution as an extremely simplified prototype of natural selection. By generating structures, the laws of nature keep complexity and entropy from reaching their maximal values; one could therefore consider these laws a kind of “breeder” for such selection. Full article
Open AccessArticle Informational and Causal Architecture of Discrete-Time Renewal Processes
Entropy 2015, 17(7), 4891-4917; doi:10.3390/e17074891
Received: 17 March 2015 / Revised: 1 July 2015 / Accepted: 9 July 2015 / Published: 13 July 2015
Cited by 6 | PDF Full-text (2005 KB) | HTML Full-text | XML Full-text
Abstract
Renewal processes are broadly used to model stochastic behavior consisting of isolated events separated by periods of quiescence, whose durations are specified by a given probability law. Here, we identify the minimal sufficient statistic for their prediction (the set of causal states), calculate
[...] Read more.
Renewal processes are broadly used to model stochastic behavior consisting of isolated events separated by periods of quiescence, whose durations are specified by a given probability law. Here, we identify the minimal sufficient statistic for their prediction (the set of causal states), calculate the historical memory capacity required to store those states (statistical complexity), delineate what information is predictable (excess entropy), and decompose the entropy of a single measurement into that shared with the past, future, or both. The causal state equivalence relation defines a new subclass of renewal processes with a finite number of causal states despite having an unbounded interevent count distribution. We use the resulting formulae to analyze the output of the parametrized Simple Nonunifilar Source, generated by a simple two-state hidden Markov model, but with an infinite-state machine presentation. All in all, the results lay the groundwork for analyzing more complex processes with infinite statistical complexity and infinite excess entropy. Full article
(This article belongs to the Section Statistical Mechanics)
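A standard identity in this setting is that the entropy rate of a renewal process equals the entropy of the interevent distribution F divided by its mean. The sketch below self-checks it with a geometric F, which makes the process Bernoulli(p); this is an illustration, not the paper's causal-state analysis.

```python
import math

def entropy(probs):
    # Shannon entropy in bits, skipping zero-probability terms.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Interevent distribution: geometric, P(T = t) = (1-p)**(t-1) * p.
p = 0.3
T = range(1, 200)  # truncate the unbounded support (tail mass ~1e-31)
F = [(1 - p) ** (t - 1) * p for t in T]
H_F = entropy(F)                        # entropy of interevent times
mu = sum(t * f for t, f in zip(T, F))   # mean interevent time

# Renewal-process identity: entropy rate = H(F) / mean(F).
h = H_F / mu

# Sanity check: geometric interevent times make the symbol sequence
# Bernoulli(p), whose entropy rate is the binary entropy of p.
h_bernoulli = entropy([p, 1 - p])
print(round(h, 4), round(h_bernoulli, 4))  # → 0.8813 0.8813
```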
Open AccessArticle Fisher Information Properties
Entropy 2015, 17(7), 4918-4939; doi:10.3390/e17074918
Received: 18 June 2015 / Accepted: 10 July 2015 / Published: 13 July 2015
Cited by 7 | PDF Full-text (283 KB) | HTML Full-text | XML Full-text
Abstract
A set of Fisher information properties is presented in order to draw a parallel with similar properties of Shannon differential entropy. Already known properties are presented together with new ones, which include: (i) a generalization of mutual information for Fisher information; (ii) a
[...] Read more.
A set of Fisher information properties is presented in order to draw a parallel with similar properties of Shannon differential entropy. Already known properties are presented together with new ones, which include: (i) a generalization of mutual information for Fisher information; (ii) a new proof that Fisher information increases under conditioning; (iii) a proof that Fisher information decreases in Markov chains; and (iv) a bound on the estimation error based on Fisher information. This last result is especially important because it complements Fano’s inequality, i.e., a lower bound for the estimation error, showing that Fisher information can be used to define an upper bound for this error. In this way, it is shown that Shannon’s differential entropy, which quantifies the behavior of the random variable, and the Fisher information, which quantifies the internal structure of the density function that defines the random variable, can both be used to characterize the estimation error. Full article
(This article belongs to the Section Information Theory)
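The link between Fisher information and estimation error can be illustrated with the textbook Gaussian case (this is the classical Cramér–Rao setting, not the paper's new bounds): for N(θ, σ²), the Fisher information about θ is 1/σ², and the variance of the sample mean attains the bound 1/(nI) = σ²/n.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, trials = 2.0, 50, 20000

# Fisher information of N(theta, sigma^2) about theta is 1/sigma^2,
# so the Cramer-Rao lower bound for n i.i.d. samples is sigma^2 / n.
fisher = 1.0 / sigma ** 2
crlb = 1.0 / (n * fisher)

# The sample mean attains the bound: its variance is sigma^2 / n.
estimates = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
print(round(crlb, 3), round(estimates.var(), 3))
```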
Open AccessArticle Identity Authentication over Noisy Channels
Entropy 2015, 17(7), 4940-4958; doi:10.3390/e17074940
Received: 7 April 2015 / Revised: 14 June 2015 / Accepted: 9 July 2015 / Published: 14 July 2015
Cited by 2 | PDF Full-text (2084 KB) | HTML Full-text | XML Full-text
Abstract
Identity authentication is the process of verifying users’ validity. Unlike classical key-based authentication, which is built on noiseless channels, this paper introduces a general analysis and design framework for identity authentication over noisy channels. Specifically, single-time and multiple-time
[...] Read more.
Identity authentication is the process of verifying users’ validity. Unlike classical key-based authentication, which is built on noiseless channels, this paper introduces a general analysis and design framework for identity authentication over noisy channels. Specifically, single-time and multiple-time authentication scenarios are investigated. For each scenario, the lower bound on the opponent’s success probability is derived and shown to be smaller than that of classical identity authentication. Moreover, this bound can remain unchanged even if the secret key is reused. Remarkably, the Cartesian authentication code proves helpful for hiding the secret key so as to maximize the secrecy performance. Finally, we show a potential application of this authentication technique. Full article
Open AccessArticle Broad Niche Overlap between Invasive Nile Tilapia Oreochromis niloticus and Indigenous Congenerics in Southern Africa: Should We be Concerned?
Entropy 2015, 17(7), 4959-4973; doi:10.3390/e17074959
Received: 26 February 2015 / Revised: 7 July 2015 / Accepted: 8 July 2015 / Published: 14 July 2015
Cited by 2 | PDF Full-text (2868 KB) | HTML Full-text | XML Full-text
Abstract
This study developed niche models for the native ranges of Oreochromis andersonii, O. mortimeri, and O. mossambicus, and assessed how much of their range is climatically suitable for the establishment of O. niloticus, and then reviewed the conservation implications
[...] Read more.
This study developed niche models for the native ranges of Oreochromis andersonii, O. mortimeri, and O. mossambicus, and assessed how much of their range is climatically suitable for the establishment of O. niloticus, and then reviewed the conservation implications for indigenous congenerics as a result of overlap with O. niloticus based on documented congeneric interactions. The predicted potential geographical range of O. niloticus reveals a broad climatic suitability over most of southern Africa and overlaps with all the endemic congenerics. This is of major conservation concern because six of the eight river systems predicted to be suitable for O. niloticus have already been invaded and now support established populations. Oreochromis niloticus has been implicated in reducing the abundance of indigenous species through competitive exclusion and hybridisation. Despite these well-documented adverse ecological effects, O. niloticus remains one of the most widely cultured and propagated fish species in aquaculture and stock enhancements in the southern Africa sub-region. Aquaculture is perceived as a means of protein security, poverty alleviation, and economic development and, as such, any future decisions on its introduction will be based on the trade-off between socio-economic benefits and potential adverse ecological effects. Full article
(This article belongs to the Special Issue Entropy in Hydrology)
Open AccessArticle Lag Synchronization of Complex Lorenz System with Applications to Communication
Entropy 2015, 17(7), 4974-4985; doi:10.3390/e17074974
Received: 25 March 2015 / Revised: 26 June 2015 / Accepted: 2 July 2015 / Published: 15 July 2015
Cited by 6 | PDF Full-text (458 KB) | HTML Full-text | XML Full-text
Abstract
In communication, owing to the transmission delay, the signal at the receiver at time t is the signal sent from the transmitter at time t − τ (τ ≥ 0 is the lag time). Therefore, lag synchronization (LS) is
[...] Read more.
In communication, owing to the transmission delay, the signal at the receiver at time t is the signal sent from the transmitter at time t − τ (τ ≥ 0 is the lag time). Therefore, lag synchronization (LS) is more accurate than complete synchronization for designing communication schemes. Taking the complex Lorenz system as an example, we design the LS controller based on error feedback. Using chaotic masking, we propose a communication scheme based on LS and independent component analysis (ICA). It can transmit multiple messages of various amplitudes and is robust against noise. Numerical simulations verify the feasibility and effectiveness of the proposed schemes. Full article
(This article belongs to the Section Information Theory)
Open AccessArticle Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data
Entropy 2015, 17(7), 4986-4999; doi:10.3390/e17074986
Received: 5 May 2015 / Revised: 1 July 2015 / Accepted: 3 July 2015 / Published: 15 July 2015
PDF Full-text (1078 KB) | HTML Full-text | XML Full-text
Abstract
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (of dimension n) and Y (of dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships.
[...] Read more.
A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (of dimension n) and Y (of dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples. Full article
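The abstract's m = 2, n = 1 example can be checked by simulation (an illustration of the well-defined direction, not of the paper's MaxEnt method): the density of X = Y1 + Y2 is the triangular distribution on [0, 2], so for instance P(X ≤ 0.5) = 0.5²/2 = 0.125 and E[X] = 1.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.uniform(0, 1, size=(2, 1_000_000))
x = y.sum(axis=0)  # X = Y1 + Y2, the abstract's m = 2, n = 1 example

# Exact triangular-density predictions: P(X <= 0.5) = 0.125, E[X] = 1.
print(round((x <= 0.5).mean(), 3), round(x.mean(), 3))
```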
Open AccessArticle A Novel Method for Seismogenic Zoning Based on Triclustering: Application to the Iberian Peninsula
Entropy 2015, 17(7), 5000-5021; doi:10.3390/e17075000
Received: 23 June 2015 / Revised: 3 July 2015 / Accepted: 5 July 2015 / Published: 16 July 2015
Cited by 3 | PDF Full-text (2851 KB) | HTML Full-text | XML Full-text
Abstract
A prior definition of seismogenic zones is required to perform a probabilistic seismic hazard analysis for areas of diffuse and low seismic activity. Traditional zoning methods are based on the available seismic catalog and the geological structures. It is accepted that thermal and
[...] Read more.
A prior definition of seismogenic zones is required to perform a probabilistic seismic hazard analysis for areas of diffuse and low seismic activity. Traditional zoning methods are based on the available seismic catalog and the geological structures. It is accepted that the thermal and strength parameters of the crust provide better criteria for zoning. Nonetheless, working out the rheological profiles entails great uncertainty. This has generated inconsistencies, as different zones have been proposed for the same area. A new method for seismogenic zoning by means of triclustering is proposed in this research. Its main advantage is that it is based solely on seismic data. Almost no human decision is made, and therefore the method is nearly unbiased. To assess its performance, the method has been applied to the Iberian Peninsula, which is characterized by the occurrence of small to moderate magnitude earthquakes. The catalog of the National Geographic Institute of Spain has been used. The output map is checked for validity against the geology. Moreover, a geographic information system has been used for two purposes: first, to depict the obtained zones; second, to calculate the seismic parameters (b-value, annual rate) from the data. Finally, the results have been compared to Kohonen’s self-organizing maps. Full article
(This article belongs to the Section Information Theory)
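As a sketch of the seismic-parameter step, the b-value of the Gutenberg–Richter law can be estimated with Aki's classical maximum-likelihood formula; the magnitudes below are synthetic, not drawn from the Spanish catalog the paper uses.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gutenberg-Richter: log10 N(>=M) = a - b*M, so magnitudes above a
# completeness threshold Mmin are exponential with rate b*ln(10).
b_true, m_min = 1.0, 2.0
mags = m_min + rng.exponential(1.0 / (b_true * np.log(10)), size=50_000)

# Aki's maximum-likelihood estimator: b = log10(e) / (mean(M) - Mmin).
b_est = np.log10(np.e) / (mags.mean() - m_min)
print(round(b_est, 2))  # → 1.0
```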
Open AccessArticle The Critical Point Entanglement and Chaos in the Dicke Model
Entropy 2015, 17(7), 5022-5042; doi:10.3390/e17075022
Received: 24 April 2015 / Revised: 28 June 2015 / Accepted: 14 July 2015 / Published: 16 July 2015
Cited by 4 | PDF Full-text (3877 KB) | HTML Full-text | XML Full-text
Abstract
Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). The particle number statistics, the entanglement measure, and the Shannon information entropy at the resonance point are calculated
[...] Read more.
Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). The particle number statistics, the entanglement measure, and the Shannon information entropy at the resonance point are calculated as functions of the coupling parameter for finite numbers of atoms. It is shown that the entanglement measure, defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms, reaches its maximum value at the critical point of the quantum phase transition, where the system is most chaotic. A noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion, as shown previously, but also in the ground-state fluctuations of these two quantities, especially the atomic inversion fluctuation. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
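The entanglement measure named here, the von Neumann entropy of a reduced density matrix, can be shown in miniature. The example below evaluates it on a two-qubit Bell state rather than the Dicke model itself: the reduced state of one qubit is maximally mixed, giving the maximal value of 1 (in bits).

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>)/sqrt(2), standing in for the
# atom-field entanglement quantified in the paper.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Reduced density matrix of the first qubit: partial trace over the second.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_a = np.einsum('ikjk->ij', rho)

# von Neumann entropy S = -sum(lambda * log2(lambda)) over eigenvalues.
evals = np.linalg.eigvalsh(rho_a)
S = -sum(l * np.log2(l) for l in evals if l > 1e-12)
print(round(S, 6))  # → 1.0
```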
Open AccessArticle Evaluation of the Atmospheric Chemical Entropy Production of Mars
Entropy 2015, 17(7), 5047-5062; doi:10.3390/e17075047
Received: 3 February 2015 / Revised: 11 July 2015 / Accepted: 14 July 2015 / Published: 20 July 2015
PDF Full-text (302 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Thermodynamic disequilibrium is a necessary condition in a system in which complex emergent structures are created and maintained. It is known that most of the chemical disequilibrium, a particular type of thermodynamic disequilibrium, in Earth’s atmosphere is a consequence of life. We have
[...] Read more.
Thermodynamic disequilibrium is a necessary condition in a system in which complex emergent structures are created and maintained. It is known that most of the chemical disequilibrium, a particular type of thermodynamic disequilibrium, in Earth’s atmosphere is a consequence of life. We have developed a thermochemical model of the Martian atmosphere to analyze the disequilibrium caused by chemical reactions by calculating the entropy production. Comparison with the Earth’s atmosphere shows that the entropy produced by the recombination reaction forming O3 (O + O2 + CO2 ⇌ O3 + CO2) in the atmosphere of the Earth is larger than the entropy produced by the dominant set of chemical reactions considered for Mars, as a consequence of the low density and limited variety of species in the Martian atmosphere. If disequilibrium is needed to create and maintain self-organizing structures in a system, we conclude that the current Martian atmosphere is unable to support large physico-chemical structures, such as those created on Earth. Full article
(This article belongs to the Section Thermodynamics)
Open AccessArticle Minimal Rényi–Ingarden–Urbanik Entropy of Multipartite Quantum States
Entropy 2015, 17(7), 5063-5084; doi:10.3390/e17075063
Received: 15 June 2015 / Accepted: 10 July 2015 / Published: 20 July 2015
Cited by 4 | PDF Full-text (1675 KB) | HTML Full-text | XML Full-text
Abstract
We study the entanglement of a pure state of a composite quantum system consisting of several subsystems with d levels each. It can be described by the Rényi–Ingarden–Urbanik entropy Sq of a decomposition of the state in a product basis, minimized over
[...] Read more.
We study the entanglement of a pure state of a composite quantum system consisting of several subsystems with d levels each. It can be described by the Rényi–Ingarden–Urbanik entropy Sq of a decomposition of the state in a product basis, minimized over all local unitary transformations. In the case q = 0, this quantity becomes a function of the rank of the tensor representing the state, while in the limit q → ∞, the entropy becomes related to the overlap with the closest separable state and the geometric measure of entanglement. For any bipartite system, the entropy S1 coincides with the standard entanglement entropy. We analyze the distribution of the minimal entropy for random states of three- and four-qubit systems. In the former case, the distribution of the three-tangle is studied and some of its moments are evaluated, while in the latter case, we analyze the distribution of the hyperdeterminant. The behavior of the maximum overlap of a three-qudit system with the closest separable state is also investigated in the asymptotic limit. Full article
(This article belongs to the Special Issue Quantum Computation and Information: Multi-Particle Aspects)
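A small sketch of the Rényi entropy S_q = ln(Σ p_i^q)/(1 − q) applied to a state's product-basis probabilities. Note the hedge: without the minimization over local unitaries described in the abstract, this only upper-bounds the RIU entropy; the GHZ and product states are illustrative inputs.

```python
import numpy as np

def renyi(probs, q):
    """Renyi entropy S_q = ln(sum p^q) / (1 - q) in nats; Shannon at q -> 1."""
    probs = probs[probs > 1e-15]
    if abs(q - 1.0) < 1e-9:
        return float(-np.sum(probs * np.log(probs)))
    return float(np.log(np.sum(probs ** q)) / (1.0 - q))

# Product-basis probabilities (no minimization over local unitaries here,
# so these values only upper-bound the minimal RIU entropy).
ghz = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5])  # 3-qubit GHZ
product = np.array([1.0] + [0.0] * 7)                      # |000>

print(renyi(ghz, 2), renyi(product, 2))
```

For the GHZ amplitudes the value is ln 2 for every q, while a product state gives 0, matching the intuition that the measure vanishes exactly on separable basis states.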
Open AccessArticle Averaged Extended Tree Augmented Naive Classifier
Entropy 2015, 17(7), 5085-5100; doi:10.3390/e17075085
Received: 8 June 2015 / Revised: 10 June 2015 / Accepted: 17 June 2015 / Published: 21 July 2015
Cited by 1 | PDF Full-text (915 KB) | HTML Full-text | XML Full-text
Abstract
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of
[...] Read more.
This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN), which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN) and Averaged One-Dependence Estimator (AODE) classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required. Full article
(This article belongs to the Special Issue Inductive Statistical Methods)
Open AccessArticle Analytic Exact Upper Bound for the Lyapunov Dimension of the Shimizu–Morioka System
Entropy 2015, 17(7), 5101-5116; doi:10.3390/e17075101
Received: 26 May 2015 / Revised: 15 July 2015 / Accepted: 17 July 2015 / Published: 22 July 2015
Cited by 6 | PDF Full-text (3223 KB) | HTML Full-text | XML Full-text
Abstract
In applied investigations, the invariance of the Lyapunov dimension under a diffeomorphism is often used. However, in the case of irregular linearization, this fact was not strictly considered in the classical works. In the present work, the invariance of the Lyapunov dimension under
[...] Read more.
In applied investigations, the invariance of the Lyapunov dimension under a diffeomorphism is often used. However, in the case of irregular linearization, this fact was not strictly considered in the classical works. In the present work, the invariance of the Lyapunov dimension under diffeomorphism is demonstrated in the general case. This fact is used to obtain the analytic exact upper bound of the Lyapunov dimension of an attractor of the Shimizu–Morioka system. Full article
(This article belongs to the Special Issue Recent Advances in Chaos Theory and Complex Networks)
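For context, the Lyapunov (Kaplan–Yorke) dimension is computed from the ordered Lyapunov exponents as D = j + (λ1 + … + λj)/|λ_{j+1}|, with j the largest index whose partial sum is still non-negative. The exponents below are commonly quoted values for the classical Lorenz attractor, used purely as an illustration; the paper's analytic bound concerns the Shimizu–Morioka system.

```python
def kaplan_yorke(exponents):
    """Lyapunov (Kaplan-Yorke) dimension from Lyapunov exponents:
    D = j + (l1 + ... + lj) / |l_{j+1}|, with j the largest index
    whose partial sum of ordered exponents is still non-negative."""
    lam = sorted(exponents, reverse=True)
    s, j = 0.0, 0
    for l in lam:
        if s + l < 0:
            break
        s += l
        j += 1
    if j == len(lam):
        return float(j)
    return j + s / abs(lam[j])

# Commonly quoted Lorenz-attractor exponents (illustrative values).
print(round(kaplan_yorke([0.906, 0.0, -14.572]), 3))  # → 2.062
```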
Open AccessArticle An Entropy-Based Approach to Path Analysis of Structural Generalized Linear Models: A Basic Idea
Entropy 2015, 17(7), 5117-5132; doi:10.3390/e17075117
Received: 11 May 2015 / Revised: 7 July 2015 / Accepted: 17 July 2015 / Published: 22 July 2015
Cited by 1 | PDF Full-text (737 KB) | HTML Full-text | XML Full-text
Abstract
A path analysis method for causal systems based on generalized linear models is proposed by using entropy. A practical example is introduced, and a brief explanation of the entropy coefficient of determination is given. Direct and indirect effects of explanatory variables are discussed
[...] Read more.
A path analysis method for causal systems based on generalized linear models is proposed by using entropy. A practical example is introduced, and a brief explanation of the entropy coefficient of determination is given. Direct and indirect effects of explanatory variables are discussed as log odds ratios, i.e., relative information, and a method for summarizing the effects is proposed. The example dataset is re-analyzed by using the method. Full article
(This article belongs to the Special Issue Entropy, Utility, and Logical Reasoning)
Open AccessArticle Setting Diverging Colors for a Large-Scale Hypsometric Lunar Map Based on Entropy
Entropy 2015, 17(7), 5133-5144; doi:10.3390/e17075133
Received: 8 May 2015 / Revised: 14 July 2015 / Accepted: 17 July 2015 / Published: 22 July 2015
PDF Full-text (1803 KB) | HTML Full-text | XML Full-text
Abstract
A hypsometric map is a type of map that represents topographic characteristics by filling map areas with diverging colors. Setting appropriate diverging colors is essential for the map to reveal topographic details. When real lunar environment exploration programs are
[...] Read more.
A hypsometric map is a type of map that represents topographic characteristics by filling map areas with diverging colors. Setting appropriate diverging colors is essential for the map to reveal topographic details. When real lunar environment exploration programs are performed, large-scale hypsometric maps with high resolution and greater topographic detail are helpful. Compared with the situation on Earth, fewer lunar exploration objects are available, and the topographic waviness is smaller at a large scale, so presenting topographic details using traditional hypsometric map-making methods may be difficult. To solve this problem, we employed the Chang’E2 (CE2) topographic and imagery data with a resolution of 7 m and developed a new hypsometric map-making method that sets the diverging colors based on information entropy. The resulting map shows that this method is suitable for presenting topographic details and might be useful for developing a better understanding of the lunar surface environment. Full article
(This article belongs to the Special Issue Entropy, Utility, and Logical Reasoning)
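One simple entropy-based criterion for choosing color class breaks, sketched here as an illustration and not necessarily the paper's exact method: quantile breaks equalize class frequencies and therefore maximize the entropy of the class distribution (upper bound log2 k for k classes), whereas equal-interval breaks on skewed or peaked terrain waste color classes. The elevation data below are hypothetical.

```python
import numpy as np

def class_entropy(values, breaks):
    # Shannon entropy (bits) of the class-membership distribution.
    counts = np.histogram(values, bins=breaks)[0]
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
elev = rng.normal(0, 300, 10_000)  # hypothetical elevation samples (m)

k = 8
equal_interval = np.linspace(elev.min(), elev.max(), k + 1)
quantile = np.quantile(elev, np.linspace(0, 1, k + 1))

# Quantile breaks approach the maximum entropy log2(8) = 3 bits;
# equal-interval breaks fall short on this bell-shaped sample.
print(round(class_entropy(elev, equal_interval), 3),
      round(class_entropy(elev, quantile), 3))
```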

Other

Jump to: Research

Open AccessReply Reply to C. Tsallis’ “Conceptual Inadequacy of the Shore and Johnson Axioms for Wide Classes of Complex Systems”
Entropy 2015, 17(7), 5043-5046; doi:10.3390/e17075043
Received: 7 June 2015 / Revised: 14 July 2015 / Accepted: 14 July 2015 / Published: 17 July 2015
Cited by 1 | PDF Full-text (141 KB) | HTML Full-text | XML Full-text
Abstract
In a recent PRL (2013, 111, 180604), we invoked the Shore and Johnson axioms which demonstrate that the least-biased way to infer probability distributions {pi} from data is to maximize the Boltzmann-Gibbs entropy. We then showed which biases are introduced in models obtained
[...] Read more.
In a recent PRL (2013, 111, 180604), we invoked the Shore and Johnson axioms which demonstrate that the least-biased way to infer probability distributions {pi} from data is to maximize the Boltzmann-Gibbs entropy. We then showed which biases are introduced in models obtained by maximizing nonadditive entropies. A rebuttal of our work appears in Entropy (2015, 17, 2853) and argues that the Shore and Johnson axioms are inapplicable to a wide class of complex systems. Here we highlight the errors in this reasoning. Full article
(This article belongs to the Section Complexity)

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18
Editorial Board