Proceedings, 2020, ECEA-5

The 5th International Electronic Conference on Entropy and Its Applications

Online | 18–30 November 2019

Volume Editor: Geert Verdoolaege, Ghent University, Belgium

Number of Papers: 31
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Cover Story: The conference aims to bring together researchers to present and discuss their recent contributions without the need for travel. This e-conference is hosted on the MDPI Sciforum platform, which [...]

Other

7 pages, 517 KiB  
Proceeding Paper
Reverse Weighted-Permutation Entropy: A Novel Complexity Metric Incorporating Distance and Amplitude Information
by Yuxing Li
Proceedings 2020, 46(1), 1; https://doi.org/10.3390/ecea-5-06688 - 17 Nov 2019
Cited by 6 | Viewed by 1365
Abstract
Permutation entropy (PE), one of the effective metrics for representing the complexity of time series, has the merits of simple and efficient calculation. In view of the limitations of PE, weighted-permutation entropy (WPE) and reverse permutation entropy (RPE) were proposed to improve its performance. WPE introduces amplitude information to weight each arrangement pattern; it not only better reveals the complexity of time series with sudden changes of amplitude, but also has better robustness to noise. By introducing distance information, RPE is defined as the distance to white noise; it has the reverse trend to traditional PE and better stability for time series of different lengths. In this paper, we propose a novel complexity metric incorporating distance and amplitude information, named reverse weighted-permutation entropy (RWPE), which combines the advantages of both WPE and RPE. Three simulation experiments were conducted: mutation signal detection testing, robustness testing to noise based on complexity, and complexity testing of time series with various lengths. The simulation results show that RWPE can be used as a complexity metric that accurately detects abrupt amplitude changes in time series and has better robustness to noise. Moreover, it also shows greater stability than the other three kinds of PE for time series of various lengths. Full article
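As an informal illustration of the ordinal-pattern entropies discussed above, the sketch below computes standard and weighted permutation entropy; the variance-based weights follow the usual WPE construction, while the exact combination of distance and amplitude information defining RWPE is given in the paper and is not reproduced here.

```python
import math
from collections import defaultdict
import numpy as np

def permutation_entropy(x, m=3, tau=1, weighted=False):
    """Normalized (weighted) permutation entropy in [0, 1] from Bandt-Pompe ordinal patterns."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    totals, norm = defaultdict(float), 0.0
    for i in range(n):
        v = x[i:i + (m - 1) * tau + 1:tau]
        w = np.var(v) if weighted else 1.0   # WPE weights each pattern by the subsequence variance
        totals[tuple(np.argsort(v))] += w
        norm += w
    p = np.array(list(totals.values())) / norm
    return -np.sum(p * np.log(p)) / np.log(math.factorial(m))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(5000), m=4, weighted=True))  # close to 1 for white noise
```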

7 pages, 6483 KiB  
Proceeding Paper
New Explanation for the Mpemba Effect
by Ilias J. Tyrovolas
Proceedings 2020, 46(1), 2; https://doi.org/10.3390/ecea-5-06658 - 17 Nov 2019
Viewed by 4367
Abstract
The purpose of this study is to examine the involvement of entropy in the Mpemba effect. Several water samples were cooled until frozen in order to probe whether preheating affects the cooling duration. We found that when the water sample was preheated, the cooling duration was reduced. Given this, we theoretically show that water gains more entropy when warmed and re-cooled to the original temperature. Full article

8 pages, 1620 KiB  
Proceeding Paper
Spin Waves and Skyrmions in Magneto-Ferroelectric Superlattices: Theory and Simulation
by Hung T. Diep and Ildus F. Sharafullin
Proceedings 2020, 46(1), 3; https://doi.org/10.3390/ecea-5-06662 - 17 Nov 2019
Viewed by 1312
Abstract
We present in this paper the effects of Dzyaloshinskii–Moriya (DM) magnetoelectric coupling between ferroelectric and magnetic layers in a superlattice formed by alternate magnetic and ferroelectric films. The magnetic films are simple cubic lattices of Heisenberg spins interacting with each other via an exchange J and via a DM interaction with the ferroelectric interface. Electrical polarizations of ±1 are assigned at simple cubic lattice sites in the ferroelectric films. We determine the ground-state (GS) spin configuration in the magnetic film. In zero field, the GS is periodically non-collinear (helical structure), and in an applied field H perpendicular to the layers, it shows the existence of skyrmions at the interface. Using the Green's function method, we study the spin waves (SW) excited in a monolayer and also in a bilayer sandwiched between ferroelectric films, in zero field. We show that the DM interaction strongly affects the long-wavelength SW mode. We also calculate the magnetization at low temperatures. We next use Monte Carlo simulations to calculate various physical quantities at finite temperatures, such as the critical temperature, the layer magnetization and the layer polarization, as functions of the magnetoelectric DM coupling and the applied magnetic field. The phase transition to the disordered phase is studied. Full article

9 pages, 2342 KiB  
Proceeding Paper
Social Conflicts Studied by Statistical Physics Approach and Monte Carlo Simulations
by Hung T. Diep, Miron Kaufman and Sanda Kaufman
Proceedings 2020, 46(1), 4; https://doi.org/10.3390/ecea-5-06661 - 17 Nov 2019
Viewed by 1149
Abstract
Statistical physics models of social systems with a large number of members, each interacting with a subset of others, have been used in very diverse domains such as culture dynamics, crowd behavior, information dissemination and social conflicts. We observe that such models rely on the fact that large societal groups display surprising regularities despite individual agency. Unlike physics phenomena that obey Newton’s third law, in the world of humans the magnitudes of action and reaction are not necessarily equal. The effect of the actions of group n on group m can differ from the effect of group m on group n. We thus use the spin language to describe humans with this observation in mind. Note that particular individual behaviors do not survive in statistical averages. Only common characteristics remain. We have studied two-group conflicts as well as three-group conflicts. We have used time-dependent Mean-Field Theory and Monte Carlo simulations. Each group is defined by two parameters which express the intra-group strength of interaction among members and its attitude toward negotiations. The interaction with the other group is parameterized by a constant which expresses an attraction or a repulsion to other group average attitude. The model includes a social temperature T which acts on each group and quantifies the social noise. One of the most striking features is the periodic oscillation of the attitudes toward negotiation or conflict for certain ranges of parameter values. Other striking results include chaotic behavior, namely intractable, unpredictable conflict outcomes. Full article

8 pages, 795 KiB  
Proceeding Paper
Fast Tuning of Topic Models: An Application of Rényi Entropy and Renormalization Theory
by Sergei Koltcov, Vera Ignatenko and Sergei Pashakhin
Proceedings 2020, 46(1), 5; https://doi.org/10.3390/ecea-5-06674 - 17 Nov 2019
Cited by 1 | Viewed by 1194
Abstract
In practice, the critical step in building machine learning models for big data (BD), the parameter-tuning procedure based on grid search, is costly in terms of time and computing resources. Due to their size, BD are comparable to mesoscopic physical systems. Hence, methods of statistical physics can be applied to BD. The paper shows that topic modeling demonstrates self-similar behavior under the condition of a varying number of clusters. Such behavior allows using a renormalization technique. The combination of a renormalization procedure with the Rényi entropy approach allows for fast searching of the optimal number of clusters. In this paper, the renormalization procedure is developed for the Latent Dirichlet Allocation (LDA) model with a variational Expectation-Maximization algorithm. The experiments were conducted on two document collections, in two languages, with a known number of clusters. The paper presents results for three versions of the renormalization procedure: (1) renormalization with random merging of clusters, (2) renormalization based on minimal values of Kullback–Leibler divergence and (3) renormalization merging clusters with minimal values of Rényi entropy. The paper shows that the renormalization procedure allows finding the optimal number of topics 26 times faster than grid search without significant loss of quality. Full article
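A minimal sketch of what one renormalization step might look like for variant (2) above: the two topics whose word distributions are closest in symmetric Kullback–Leibler divergence are merged. The LDA training, the Rényi-entropy stopping criterion, and the function names here are assumptions for illustration only.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two word distributions."""
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

def merge_closest_topics(phi):
    """One renormalization step: phi has shape (T, V), each row a topic-word distribution.
    The pair of topics with minimal symmetric KL divergence is merged and renormalized."""
    T = phi.shape[0]
    best, pair = np.inf, (0, 1)
    for i in range(T):
        for j in range(i + 1, T):
            d = sym_kl(phi[i], phi[j])
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    merged = 0.5 * (phi[i] + phi[j])                  # merged topic, still sums to 1
    keep = [k for k in range(T) if k not in pair]
    return np.vstack([phi[keep], merged])             # (T - 1, V) topic matrix
```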

5 pages, 892 KiB  
Proceeding Paper
Computer Simulation of Magnetic Skyrmions
by Vitalii Kapitan, Egor Vasiliev and Alexander Perzhu
Proceedings 2020, 46(1), 6; https://doi.org/10.3390/ecea-5-06678 - 17 Nov 2019
Cited by 1 | Viewed by 2004
Abstract
In this paper, we present the results of a numerical simulation of the thermodynamics of an array of classical Heisenberg spins placed on a 2D square lattice, which effectively represents the behaviour of a single layer. Using the Metropolis algorithm, we show the temperature behaviour of a system with competing Heisenberg and Dzyaloshinskii–Moriya interactions (DMI), in contrast with the classical Heisenberg system. We show the process of nucleation of skyrmions depending on the value of the external magnetic field. We also propose a method for controlling the movement of skyrmions. Full article
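A hedged sketch of a single-layer Metropolis update with competing Heisenberg exchange, interfacial DMI and a perpendicular field; the parameter values, lattice size and the exact form of the DMI vectors are illustrative assumptions, not the paper's settings.

```python
import numpy as np

J, D, B = 1.0, 0.3, 0.1            # exchange, DMI strength, field along z (illustrative values)
L = 32
rng = np.random.default_rng(1)

def random_spin():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

spins = np.array([[random_spin() for _ in range(L)] for _ in range(L)])   # shape (L, L, 3)

# Interfacial DMI vectors D_ij = D * (z_hat x r_ij) for bonds along +x and +y.
D_x = D * np.array([0.0, 1.0, 0.0])
D_y = D * np.array([-1.0, 0.0, 0.0])

def site_energy(spins, x, y, s):
    """Energy of spin s at (x, y) with its four neighbours (periodic boundaries) and the field."""
    e = -B * s[2]
    for (dx, dy), dvec in (((1, 0), D_x), ((0, 1), D_y), ((-1, 0), -D_x), ((0, -1), -D_y)):
        n = spins[(x + dx) % L, (y + dy) % L]
        e += -J * np.dot(s, n) - np.dot(dvec, np.cross(s, n))
    return e

def metropolis_sweep(spins, T):
    """One Monte Carlo sweep at temperature T (in units of J/k_B)."""
    for _ in range(L * L):
        x, y = rng.integers(L, size=2)
        old, new = spins[x, y], random_spin()
        dE = site_energy(spins, x, y, new) - site_energy(spins, x, y, old)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[x, y] = new
```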

9 pages, 305 KiB  
Proceeding Paper
The Measurement of Statistical Evidence as the Basis for Statistical Reasoning
by Michael Evans
Proceedings 2020, 46(1), 7; https://doi.org/10.3390/ecea-5-06682 - 17 Nov 2019
Cited by 1 | Viewed by 1557
Abstract
There are various approaches to the problem of how one is supposed to conduct a statistical analysis. Different analyses can lead to contradictory conclusions in some problems so this is not a satisfactory state of affairs. It seems that all approaches make reference to the evidence in the data concerning questions of interest as a justification for the methodology employed. It is fair to say, however, that none of the most commonly used methodologies is absolutely explicit about how statistical evidence is to be characterized and measured. We will discuss the general problem of statistical reasoning and the development of a theory for this that is based on being precise about statistical evidence. This will be shown to lead to the resolution of a number of problems. Full article
8 pages, 699 KiB  
Proceeding Paper
Detection of Arrhythmic Cardiac Signals from ECG Recordings Using the Entropy–Complexity Plane
by Pablo Martinez Coq, Walter Legnani and Ricardo Armentano
Proceedings 2020, 46(1), 8; https://doi.org/10.3390/ecea-5-06693 - 18 Nov 2019
Cited by 1 | Viewed by 1082
Abstract
The aim of this work was to analyze, in the entropy–complexity plane (H×C), time series coming from ECG, with the objective of discriminating recordings from two different groups of patients: normal sinus rhythm and cardiac arrhythmias. The H×C plane used in this study has normalized Shannon entropy on one axis and statistical complexity on the other. To compute the entropy, the probability distribution function (PDF) of the observed data was obtained using the methodology proposed by Bandt and Pompe (2002). The database used in the present study consisted of ECG recordings obtained from PhysioNet: 47 long-term signals from patients with diagnosed cardiac arrhythmias and 18 long-term signals from normal sinus rhythm patients were processed. Average values of statistical complexity and normalized Shannon entropy were calculated and analyzed in the H×C plane for each time series. The average complexity values of the ECGs of patients with diagnosed arrhythmias were higher than those of the normal sinus rhythm group. On the other hand, the average Shannon entropy values for arrhythmia patients were lower than those of the normal sinus rhythm group. This characteristic made it possible to discriminate the positions of the two groups of signals in the H×C plane. The results were analyzed through a multivariate statistical hypothesis test. The proposed methodology has remarkable conceptual simplicity and shows promising efficiency in the detection of cardiovascular pathologies. Full article
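A minimal sketch of the Bandt–Pompe ordinal PDF and the resulting entropy–complexity pair, using the standard Jensen–Shannon based statistical complexity; the embedding dimension and any ECG preprocessing are assumptions, not the paper's exact settings.

```python
import math
from collections import Counter
import numpy as np

def bandt_pompe_pdf(x, m=6, tau=1):
    """Probability distribution over the m! ordinal patterns (Bandt & Pompe, 2002)."""
    n = len(x) - (m - 1) * tau
    counts = Counter(tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau])) for i in range(n))
    p = np.zeros(math.factorial(m))
    for k, c in enumerate(counts.values()):
        p[k] = c / n                      # unobserved patterns keep probability 0
    return p

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_complexity(x, m=6, tau=1):
    """Normalized Shannon entropy H and statistical complexity C = Q_J * H."""
    p = bandt_pompe_pdf(np.asarray(x, float), m, tau)
    N = len(p)
    pe = np.full(N, 1.0 / N)              # uniform (white-noise) reference
    H = shannon(p) / np.log(N)
    JS = shannon(0.5 * (p + pe)) - 0.5 * shannon(p) - 0.5 * shannon(pe)
    JS_max = -0.5 * ((N + 1) / N * np.log(N + 1) - 2 * np.log(2 * N) + np.log(N))
    return H, (JS / JS_max) * H
```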

9 pages, 470 KiB  
Proceeding Paper
Graph Entropy Associated with Multilevel Atomic Excitation
by Abu Mohamed Alhasan
Proceedings 2020, 46(1), 9; https://doi.org/10.3390/ecea-5-06675 - 17 Nov 2019
Viewed by 1434
Abstract
A graph-model is presented to describe multilevel atomic structure. As an example, we take the double Λ configuration in alkali-metal atoms with hyperfine structure and nuclear spin I = 3 / 2 , as a graph with four vertices. Links are treated as coherence. We introduce the transition matrix which describes the connectivity matrix in static graph-model. In general, the transition matrix describes spatiotemporal behavior of the dynamic graph-model. Furthermore, it describes multiple connections and self-looping of vertices. The atomic excitation is made by short pulses, in order that the hyperfine structure is well resolved. Entropy associated with the proposed dynamic graph-model is used to identify transitions as well as local stabilization in the system without invoking the energy concept of the propagated pulses. Full article

8 pages, 428 KiB  
Proceeding Paper
Information Length as a New Diagnostic of Stochastic Resonance
by Eun-jin Kim and Rainer Hollerbach
Proceedings 2020, 46(1), 10; https://doi.org/10.3390/ecea-5-06667 - 17 Nov 2019
Viewed by 999
Abstract
Stochastic resonance is a subtle, yet powerful phenomenon in which noise plays an interesting role of amplifying a signal instead of attenuating it. It has attracted great attention with a vast number of applications in physics, chemistry, biology, etc. Popular measures to study stochastic resonance include signal-to-noise ratios, residence time distributions, and different information theoretic measures. Here, we show that the information length provides a novel method to capture stochastic resonance. The information length measures the total number of statistically different states along the path of a system. Specifically, we consider the classical double-well model of stochastic resonance in which a particle in a potential V(x, t) = −x²/2 + x⁴/4 − A sin(ωt) x is subject to an additional stochastic forcing that causes it to occasionally jump between the two wells at x ≈ ±1. We present direct numerical solutions of the Fokker–Planck equation for the probability density function p(x, t) for ω = 10⁻² to 10⁻⁶ and A ∈ [0, 0.2], and show that the information length shows a very clear signal of the resonance. That is, stochastic resonance is reflected in the total number of different statistical states that a system passes through. Full article
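For readers unfamiliar with the diagnostic, a small numerical sketch of the information length L = ∫ sqrt(E(t)) dt with E(t) = ∫ dx (∂_t p)²/p, applied here to a drifting Gaussian rather than to the double-well Fokker–Planck solution used in the paper.

```python
import numpy as np

def information_length(p, dt, dx):
    """L = integral of sqrt(E(t)) dt, with E(t) = integral of (d_t p)^2 / p dx.
    p has shape (n_times, n_x); each row is a PDF on the same x grid."""
    p = np.clip(p, 1e-300, None)
    dpdt = np.gradient(p, dt, axis=0)
    E = np.sum(dpdt ** 2 / p, axis=1) * dx
    return np.sum(np.sqrt(E)) * dt

# Illustration: a Gaussian (sigma = 0.5) whose mean drifts from 0 to 2 over the time window.
x = np.linspace(-5.0, 5.0, 2001)
t = np.linspace(0.0, 1.0, 500)
p = np.exp(-(x[None, :] - 2.0 * t[:, None]) ** 2 / (2 * 0.5 ** 2))
p /= p.sum(axis=1, keepdims=True) * (x[1] - x[0])
print(information_length(p, dt=t[1] - t[0], dx=x[1] - x[0]))   # about |delta mean| / sigma = 4
```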

8 pages, 571 KiB  
Proceeding Paper
Entropy Production and the Maximum Entropy of the Universe
by Vihan M. Patel and Charles Lineweaver
Proceedings 2020, 46(1), 11; https://doi.org/10.3390/ecea-5-06672 - 17 Nov 2019
Cited by 1 | Viewed by 2481
Abstract
The entropy of the observable universe has been calculated as S_uni ~ 10^104 k and is dominated by the entropy of supermassive black holes. Irreversible processes in the universe can only happen if there is an entropy gap ΔS between the entropy of the observable universe S_uni and its maximum entropy S_max: ΔS = S_max − S_uni. Thus, the entropy gap ΔS is a measure of the remaining potentially available free energy in the observable universe. To compute ΔS, one needs to know the value of S_max. There is no consensus on whether S_max is a constant or is time-dependent. A time-dependent S_max(t) has been used to represent instantaneous upper limits on entropy growth. However, if we define S_max as a constant equal to the final entropy of the observable universe at its heat death, S_max ≡ S_max,HD, we can interpret T ΔS as a measure of the remaining potentially available (but not instantaneously available) free energy of the observable universe. The time-dependent slope dS_uni/dt then becomes the best estimate of current entropy production and T dS_uni/dt is the upper limit to free energy extraction. Full article

15 pages, 373 KiB  
Proceeding Paper
Inheritance is a Surjection: Description and Consequence
by Paul Ballonoff
Proceedings 2020, 46(1), 12; https://doi.org/10.3390/ecea-5-06659 - 17 Nov 2019
Viewed by 1059
Abstract
Consider an evolutionary process. In genetic inheritance and in human cultural systems, each new offspring is assigned to be produced by a specific pair of the previous population. This form of mathematical arrangement is called a surjection. We have thus briefly described the mechanics of genetics—physical mechanics describes the possible forms of loci, and normal genetic statistics describe the results as viability of offspring in actual use. However, we have also described much of the mechanics of mathematical anthropology. Understanding that what we know as inheritance is the result of finding surjections and their consequences is useful in understanding, and perhaps predicting, biological—as well as human—evolution. Full article

8 pages, 243 KiB  
Proceeding Paper
Symplectic/Contact Geometry Related to Bayesian Statistics
by Atsuhide Mori
Proceedings 2020, 46(1), 13; https://doi.org/10.3390/ecea-5-06665 - 17 Nov 2019
Viewed by 919
Abstract
In previous work, the author gave the following symplectic/contact geometric description of the Bayesian inference of normal means: The space H of normal distributions is an upper half-plane which admits two operations, namely, the convolution product and the normalized pointwise product of two probability density functions. There is a diffeomorphism F of H that interchanges these operations and sends any e-geodesic to an e-geodesic. The product of two copies of H carries positive and negative symplectic structures and a bi-contact hypersurface N. The graph of F is Lagrangian with respect to the negative symplectic structure. It is contained in the bi-contact hypersurface N. Further, it is preserved under a bi-contact Hamiltonian flow with respect to a single function. The restriction of the flow to the graph of F then presents the inference of means. The author showed that this also works for the Student t-inference of smoothly moving means and enables us to consider the smoothness of data smoothing. In this presentation, the space of multivariate normal distributions is foliated by means of the Cholesky decomposition of the covariance matrix. This provides a pair of regular Poisson structures and generalizes the above symplectic/contact description to the multivariate case. Most of the ideas presented here have been described at length in a later article by the author. Full article
17 pages, 339 KiB  
Proceeding Paper
Comparative Examination of Nonequilibrium Thermodynamic Models of Thermodiffusion in Liquids
by Semen N. Semenov and Martin E. Schimpf
Proceedings 2020, 46(1), 14; https://doi.org/10.3390/ecea-5-06680 - 17 Nov 2019
Cited by 2 | Viewed by 1193
Abstract
We analyze existing models for material transport in non-isothermal non-electrolyte liquid mixtures that utilize non-equilibrium thermodynamics. Many different sets of equations for material transport have been derived that, while based on the same fundamental expression of entropy production, utilize different terms of the temperature- and concentration-induced gradients in the chemical potential to express the material flux. We reason that only by establishing a system of transport equations that satisfies the following three requirements can we obtain a valid thermodynamic model of thermodiffusion based on entropy production, and understand the underlying physical mechanism: (1) maintenance of mechanical equilibrium in a closed steady-state system, expressed by a form of the Gibbs–Duhem equation that accounts for all the relevant gradients in concentration, temperature, and pressure and the respective thermodynamic forces; (2) thermodiffusion (thermophoresis) is zero in pure unbounded liquids (i.e., in the absence of wall effects); (3) invariance in the derived concentrations of components in a mixture, regardless of which concentration or material flux is considered to be the dependent versus independent variable in an overdetermined system of material transport equations. The analysis shows that thermodiffusion in liquids is based on the entropic mechanism. Full article
8 pages, 442 KiB  
Proceeding Paper
Performance of Portfolios Based on the Expected Utility-Entropy Fund Rating Approach
by Daniel Chiew, Judy Qiu, Sirimon Treepongkaruna, Jiping Yang and Chenxiao Shi
Proceedings 2020, 46(1), 15; https://doi.org/10.3390/ecea-5-06679 - 17 Nov 2019
Viewed by 1109
Abstract
Yang and Qiu proposed and reframed an expected utility-entropy (EU-E) based decision model; later on, similar numerical representation for a risky choice was axiomatically developed by Luce et al. under the condition of segregation. Recently, we established a fund rating approach based on the EU-E decision model and Morningstar ratings. In this paper, we apply the approach to US mutual funds and construct portfolios using the best rated funds. Furthermore, we evaluate the performance of the fund ratings based on EU-E decision model against Morningstar ratings by examining the performance of the three models in portfolio selection. The conclusions show that portfolios constructed using the ratings based on the EU-E models with moderate tradeoff coefficients perform better than those constructed using Morningstar. The conclusion is robust to different rebalancing intervals. Full article

8 pages, 547 KiB  
Proceeding Paper
A Novel Improved Feature Extraction Technique for Ship-radiated Noise Based on Improved Intrinsic Time-Scale Decomposition and Multiscale Dispersion Entropy
by Zhaoxi Li, Yaan Li, Kai Zhang and Jianli Guo
Proceedings 2020, 46(1), 16; https://doi.org/10.3390/ecea-5-06687 - 17 Nov 2019
Viewed by 1090
Abstract
Entropy feature analysis is an important tool for the classification and identification of different types of ships. In order to overcome the limitations of traditional feature extraction of ship-radiated noise in complex marine environments, we propose a novel feature extraction method for ship-radiated noise based on improved intrinsic time-scale decomposition (IITD) and multiscale dispersion entropy (MDE). The proposed feature extraction technique is named IITD-MDE. IITD, as an improved algorithm, has more reliable performance than intrinsic time-scale decomposition (ITD). First, five types of ship-radiated noise signals are decomposed into a series of intrinsic scale components (ISCs) by IITD. Then, we select the ISC carrying the main information through correlation analysis and calculate its MDE value as a feature vector. Finally, the feature vector is input into a support vector machine (SVM) classifier for classification. The experimental results demonstrate that the recognition rate of the proposed technique reaches 86% accuracy. Therefore, compared with other feature extraction methods, the proposed method is able to classify the different types of ships effectively. Full article
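A compact sketch of single-scale dispersion entropy and the coarse-graining used for its multiscale extension; the IITD decomposition, the correlation-based ISC selection and the SVM stage are not shown, and the parameter choices are assumptions.

```python
from collections import Counter
from math import erf, sqrt
import numpy as np

def dispersion_entropy(x, m=3, c=6, d=1):
    """Normalized dispersion entropy using the usual normal-CDF mapping into c classes."""
    x = np.asarray(x, float)
    mu, sigma = x.mean(), x.std()
    y = np.array([0.5 * (1 + erf((v - mu) / (sigma * sqrt(2)))) for v in x])   # map to (0, 1)
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)                       # classes 1..c
    n = len(z) - (m - 1) * d
    patterns = Counter(tuple(z[i:i + (m - 1) * d + 1:d]) for i in range(n))
    p = np.array(list(patterns.values()), float) / n
    return -np.sum(p * np.log(p)) / np.log(c ** m)

def coarse_grain(x, scale):
    """Non-overlapping averaging used for the multiscale extension."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], float).reshape(-1, scale).mean(axis=1)

# MDE feature vector over a few scales for a selected component isc (hypothetical input):
# mde = [dispersion_entropy(coarse_grain(isc, s)) for s in range(1, 6)]
```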

7 pages, 1165 KiB  
Proceeding Paper
Entropy-Based Approach for the Analysis of Spatio-Temporal Urban Growth Dynamics
by Garima Nautiyal, Sandeep Maithani, Ashutosh Bhardwaj and Archana Sharma
Proceedings 2020, 46(1), 17; https://doi.org/10.3390/ecea-5-06670 - 17 Nov 2019
Viewed by 1193
Abstract
Relative Entropy (RE) is defined as the measure of the degree of randomness of any geographical variable (i.e., urban growth). It is an effective indicator to evaluate the patterns of urban growth, whether compact or dispersed. In the present study, RE has been used to evaluate the urban growth of Dehradun city. Dehradun, the capital of Uttarakhand, is situated in the foothills of the Himalayas and has undergone rapid urbanization. Landsat satellite data for the years 2000, 2010 and 2019 have been used in the study. Built-up cover outside municipal limits and within municipal limits was classified for the given time period. The road network and city center of the study area were also delineated using satellite data. RE was calculated for the periods 2000–2010 and 2010–2019 with respect to the road network and city center. High values of RE indicate higher levels of urban sprawl, whereas lower values indicate compactness. The urban growth pattern over a period of 19 years was examined with the help of RE. Full article
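A minimal sketch of the relative (normalized) entropy measure of sprawl, assuming the built-up area is aggregated over n zones (e.g., buffers around the city centre or road network); the zone areas below are hypothetical, and the Landsat classification step is not shown.

```python
import numpy as np

def relative_entropy(built_up_by_zone):
    """Relative (normalized) entropy of built-up area across n zones.
    Values near 1 indicate dispersed growth (sprawl); values near 0 indicate compact growth."""
    a = np.asarray(built_up_by_zone, float)
    p = a / a.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(len(a))

# Hypothetical built-up areas (km^2) in concentric buffers around the city centre:
print(relative_entropy([12.0, 9.5, 7.1, 4.0, 1.2]))   # relatively compact growth
print(relative_entropy([6.0, 6.3, 5.9, 6.1, 6.2]))    # dispersed growth, close to 1
```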

10 pages, 345 KiB  
Proceeding Paper
Information Theoretic Objective Function for Genetic Software Clustering
by Habib Izadkhah and Mahjoubeh Tajgardan
Proceedings 2020, 46(1), 18; https://doi.org/10.3390/ecea-5-06681 - 17 Nov 2019
Cited by 5 | Viewed by 1256
Abstract
Software clustering is usually used for program comprehension. Since it is an NP-complete problem, several genetic algorithms have been proposed to solve it. In the literature, there exist some objective functions (i.e., fitness functions) which are used by genetic algorithms for clustering. These objective functions determine the quality of each clustering obtained in the evolutionary process of the genetic algorithm in terms of cohesion and coupling. The major drawbacks of these objective functions are the inability (1) to consider utility artifacts, and (2) to apply to other software graphs such as the artifact feature dependency graph. To overcome the limitations of the existing objective functions, this paper presents a new objective function. The new objective function is based on information theory, aiming to produce a clustering in which information loss is minimized. To apply the proposed objective function, we have developed a genetic algorithm aiming to maximize it. The proposed genetic algorithm, named ILOF, has been compared to some other well-known genetic algorithms. The results obtained confirm the high performance of the proposed algorithm on nine software systems. The performance achieved is quite satisfactory and promising for the tested benchmarks. Full article

11 pages, 3034 KiB  
Proceeding Paper
Quantifying Total Influence between Variables with Information Theoretic and Machine Learning Techniques
by Andrea Murari, Riccardo Rossi, Michele Lungaroni, Pasquale Gaudio and Michela Gelfusa
Proceedings 2020, 46(1), 19; https://doi.org/10.3390/ecea-5-06666 - 17 Nov 2019
Viewed by 1108
Abstract
The increasingly sophisticated investigations of complex systems require more robust estimates of the correlations between the measured quantities. The traditional Pearson Correlation Coefficient is easy to calculate but is sensitive only to linear correlations. The total influence between quantities is therefore often expressed in terms of the Mutual Information, which also takes into account the nonlinear effects but is not normalised. To compare data from different experiments, the Information Quality Ratio is therefore in many cases easier to interpret. On the other hand, both Mutual Information and Information Quality Ratio are always positive and therefore cannot provide information about the sign of the influence between quantities. Moreover, they require an accurate determination of the probability distribution functions of the variables involved. Since the quality and amount of available data are not always sufficient to grant an accurate estimation of the probability distribution functions, it has been investigated whether neural computational tools can help and complement the aforementioned indicators. Specific encoders and autoencoders have been developed for the task of determining the total correlation between quantities related by a functional dependence, including information about the sign of their mutual influence. Both their accuracy and computational efficiency have been addressed in detail, with extensive numerical tests using synthetic data. A careful analysis of the robustness against noise has also been performed. The neural computational tools typically outperform the traditional indicators in practically every respect. Full article
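As a point of reference for the traditional indicators discussed above, the sketch below estimates Mutual Information and the Information Quality Ratio (taken here as I(X;Y)/H(X,Y)) from a 2D histogram; the paper's neural encoders and autoencoders are not shown. Note that both values are non-negative, which is exactly the sign limitation the paper addresses.

```python
import numpy as np

def mutual_info_and_iqr(x, y, bins=64):
    """Histogram estimates of mutual information I(X;Y) and IQR = I(X;Y) / H(X,Y), in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    Hxy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    Hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    Hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    I = Hx + Hy - Hxy
    return I, I / Hxy

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = -x ** 3 + 0.1 * rng.standard_normal(100_000)      # strong nonlinear (and negative) dependence
print(mutual_info_and_iqr(x, y))
```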

13 pages, 472 KiB  
Proceeding Paper
Photochemical Dissipative Structuring of the Fundamental Molecules of Life
by Karo Michaelian
Proceedings 2020, 46(1), 20; https://doi.org/10.3390/ecea-5-06692 - 18 Nov 2019
Cited by 2 | Viewed by 1135
Abstract
It has been conjectured that the origin of the fundamental molecules of life, their proliferation over the surface of Earth, and their complexation through time are examples of photochemical dissipative structuring, dissipative proliferation, and dissipative selection, respectively, arising out of the nonequilibrium conditions created on Earth's surface by the solar photon spectrum. Here I describe the nonequilibrium thermodynamics and the photochemical mechanisms involved in the synthesis and evolution of the fundamental molecules of life from simpler, more common precursor molecules under the long-wavelength UVC and UVB solar photons prevailing at Earth's surface during the Archean. Dissipative structuring through photochemical mechanisms leads to carbon-based UVC pigments with peaked conical intersections which endow them with a large photon dissipative capacity (broad wavelength absorption and rapid radiationless de-excitation). Dissipative proliferation occurs when the photochemical dissipative structuring becomes autocatalytic. Dissipative selection arises when fluctuations lead the system to new stationary states (corresponding to different molecular concentration profiles) of greater dissipative capacity, as predicted by the universal evolution criterion of Classical Irreversible Thermodynamic theory established by Onsager, Glansdorff, and Prigogine. An example of the UV photochemical dissipative structuring, proliferation, and selection of the nucleobase adenine from an aqueous solution of HCN under UVC light is given. Full article

8 pages, 4737 KiB  
Proceeding Paper
The Potential of L-Band UAVSAR Data for the Extraction of Mangrove Land Cover Using Entropy and Anisotropy Based Classification
by Ojasvi Saini, Ashutosh Bhardwaj and R. S. Chatterjee
Proceedings 2020, 46(1), 21; https://doi.org/10.3390/ecea-5-06673 - 17 Nov 2019
Cited by 1 | Viewed by 1328
Abstract
Mangrove forests serve as an ecosystem stabilizer since they play an important role in providing habitats for many terrestrial and aquatic species, along with a huge capability of carbon sequestration and absorbing greenhouse gases. The process of conversion of carbon dioxide into biomass is very rapid in mangrove forests. Mangroves play a crucial role in protecting human settlement and arresting shoreline erosion by reducing wave height to a great extent, as they form a natural barricade against high sea tides and windstorms. In most cases, human settlement in the vicinity of mangrove forests has affected the ecosystem of the forests and placed them under environmental pressure. Since continuous mapping, monitoring, and preservation of coastal mangroves may help in climate resilience, a mangrove land cover extraction method using remotely sensed L-band full-pol UAVSAR data (acquired on 25 February 2016) based on Entropy (H) and Anisotropy (A) concepts has therefore been proposed in this study. k-means clustering has been applied to the subsetted (1 − Entropy) × (Anisotropy) image generated by the H/A/Alpha decomposition in PolSARpro_v5.0 software. The mangrove land cover of the study area was extracted as 116.07 km² using k-means clustering and validated against the mangrove land cover area provided by Global Mangrove Watch (GMW) data. Full article
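A hedged sketch of the clustering step: k-means applied to the (1 − Entropy) × Anisotropy image. Reading the UAVSAR data, the H/A/Alpha decomposition itself and the GMW validation are outside this snippet, and the number of clusters is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def mangrove_clusters(entropy_img, anisotropy_img, n_clusters=4):
    """entropy_img and anisotropy_img are the H and A bands of the decomposition,
    as 2-D float arrays in [0, 1]. Returns per-pixel k-means labels on (1 - H) * A;
    the label class corresponding to mangrove is then identified against reference data."""
    feature = (1.0 - entropy_img) * anisotropy_img
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        feature.reshape(-1, 1))
    return labels.reshape(feature.shape), feature
```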

7 pages, 436 KiB  
Proceeding Paper
On the Relationship between City Mobility and Blocks Uniformity
by Eric K. Tokuda, Cesar H. Comin, Roberto M. Cesar and Luciano da F. Costa
Proceedings 2020, 46(1), 22; https://doi.org/10.3390/ecea-5-06669 - 17 Nov 2019
Viewed by 1275
Abstract
The spatial organization and the topological organization of cities have a great influence on the lives of their inhabitants, including mobility efficiency. Entropy has been often adopted for the characterization of diverse natural and human-made systems and structures. In this work, we apply the exponential of entropy (evenness) to characterize the uniformity of city blocks. It is suggested that this measurement is related to several properties of real cities, such as mobility. We consider several real-world cities, from which the logarithm of the average shortest path length is also calculated and compared with the evenness of the city blocks. Several interesting results have been found, as discussed in the article. Full article
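A small sketch of the evenness measure, assuming it is computed as the exponential of the Shannon entropy of the block-area distribution; the street-network shortest-path computation the paper compares against is not shown.

```python
import numpy as np

def evenness(block_areas):
    """Exponential of the Shannon entropy of the block-area distribution.
    Equal-sized blocks give evenness = number of blocks; unequal blocks give a smaller value."""
    a = np.asarray(block_areas, float)
    p = a / a.sum()
    p = p[p > 0]
    return np.exp(-np.sum(p * np.log(p)))

print(evenness([1, 1, 1, 1]))      # 4.0: perfectly uniform blocks
print(evenness([10, 1, 1, 1]))     # < 4: uneven blocks
# The paper relates this uniformity to log(average shortest path length) on each city's street network.
```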

7 pages, 1534 KiB  
Proceeding Paper
A New Perspective on the Kauzmann Entropy Paradox: A Crystal/Glass Critical Point in Four- and Three-Dimensions
by Caroline S. Gorham and David E. Laughlin
Proceedings 2020, 46(1), 23; https://doi.org/10.3390/ecea-5-06677 - 17 Nov 2019
Viewed by 1350
Abstract
In this article, a new perspective on the Kauzmann point is presented. The “ideal glass transition” that occurs at the Kauzmann temperature is the point at which the configurational entropy of an undercooled metastable liquid equals that of its crystalline counterpart. We model solidifying liquids by using a quaternion orientational order parameter and find that the Kauzmann point is a critical point that exists to separate crystalline and non-crystalline solid states. We identify the Kauzmann point as a first-order critical point, and suggest that it belongs to quaternion ordered systems that exist in four- or three-dimensions. This “Kauzmann critical point” can be considered to be a higher-dimensional analogue to the superfluid-to-Mott insulator quantum phase transition that occurs in two- and one-dimensional complex ordered systems. Such critical points are driven by tuning a non-thermal frustration parameter, and result due to characteristic softening of a `Higgs’ type mode that corresponds to amplitude fluctuations of the order parameter. The first-order nature of the finite temperature Kauzmann critical point is a consequence of the discrete change of the topology of the ground state manifold of the quaternion order parameter field that applies to crystalline and non-crystalline solids. Full article

7 pages, 369 KiB  
Proceeding Paper
Identifying Systemic Risks and Policy-Induced Shocks in Stock Markets by Relative Entropy
by Feiyan Liu, Jianbo Gao and Yunfei Hou
Proceedings 2020, 46(1), 24; https://doi.org/10.3390/ecea-5-06689 - 17 Nov 2019
Viewed by 1263
Abstract
Systemic risks have to be vigilantly guarded against at all times in order to prevent their contagion across stock markets. New policies may also not work as desired and may even induce shocks to markets, especially emerging ones. Therefore, timely detection of systemic risks and policy-induced shocks is crucial to safeguarding the health of stock markets. In this paper, we show that the relative entropy or Kullback–Leibler divergence can be used to identify systemic risks and policy-induced shocks in stock markets. Concretely, we analyzed the minutely data of two stock indices, the Dow Jones Industrial Average (DJIA) and the Shanghai Stock Exchange (SSE) Composite Index, and examined the temporal variation of relative entropy for them. We show that clustered peaks in relative entropy curves can accurately identify the timing of the 2007–2008 global financial crisis and its precursors, and the 2015 stock crashes in China. Moreover, a sharp needle-like peak in the relative entropy curves, especially for the SSE market, always served as a precursor of an unusual market, a strong bull market or a bubble, thus possessing a certain forewarning ability. Full article
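A minimal sketch of a sliding-window relative entropy (Kullback–Leibler divergence) of recent returns against a longer reference window; the window lengths, binning and the use of minutely index returns are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def relative_entropy_series(returns, window=240, ref_window=2400, bins=50, eps=1e-10):
    """KL divergence of the return distribution in each recent window against a
    longer trailing reference window, evaluated on a sliding basis."""
    returns = np.asarray(returns, float)
    edges = np.linspace(np.percentile(returns, 0.5), np.percentile(returns, 99.5), bins + 1)
    out = []
    for t in range(ref_window + window, len(returns)):
        ref = returns[t - ref_window - window:t - window]
        cur = returns[t - window:t]
        q, _ = np.histogram(ref, bins=edges)
        p, _ = np.histogram(cur, bins=edges)
        q = (q + eps) / (q + eps).sum()
        p = (p + eps) / (p + eps).sum()
        out.append(np.sum(p * np.log(p / q)))     # peaks flag statistically unusual regimes
    return np.array(out)
```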

6 pages, 663 KiB  
Proceeding Paper
Nonequilibrium Thermodynamics and Entropy Production in Simulation of Electrical Tree Growth
by Adrián César Razzitte, Luciano Enciso, Marcelo Gun and María Sol Ruiz
Proceedings 2020, 46(1), 25; https://doi.org/10.3390/ecea-5-06683 - 17 Nov 2019
Viewed by 1137
Abstract
In the present work we applied nonequilibrium thermodynamic theory to the analysis of the dielectric breakdown (DB) process. As the tree channel front moves, the intense field near the front moves electrons and ions irreversibly in the region beyond the tree channel tips, where electromechanical, thermal and chemical effects cause irreversible damage and, from the nonequilibrium thermodynamic viewpoint, entropy production. From the nonequilibrium thermodynamics analysis, the entropy production is given by the product of fluxes J_i and conjugated forces X_i: σ = ∑_i J_i X_i ≥ 0. We consider that the coupling between fluxes can describe the dielectric breakdown in solids as a phenomenon of transport of heat, mass and electric charge. Full article
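A toy numerical illustration of the flux–force picture: with linear coupling J = L·X and a symmetric positive-definite Onsager matrix, the entropy production σ = ∑ᵢ JᵢXᵢ is non-negative. The coefficients below are purely illustrative assumptions, not values from the tree-growth simulation.

```python
import numpy as np

# Symmetric, positive-definite Onsager matrix coupling heat, mass and charge transport (illustrative).
L_onsager = np.array([[2.0, 0.4, 0.1],
                      [0.4, 1.5, 0.2],
                      [0.1, 0.2, 1.0]])
X = np.array([0.3, -0.1, 0.05])        # conjugated thermodynamic forces near the tree channel tip
J = L_onsager @ X                      # coupled fluxes of heat, mass and electric charge
sigma = float(J @ X)                   # entropy production; non-negative for positive-definite L
print(J, sigma)
```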

7 pages, 806 KiB  
Proceeding Paper
Quantum Genetic Terrain Algorithm (Q-GTA): A Technique to Study the Evolution of the Earth Using Quantum Genetic Algorithm
by Pranjal Sharma, Ankit Agarwal and Bhawna Chaudhary
Proceedings 2020, 46(1), 26; https://doi.org/10.3390/ecea-5-06685 - 17 Nov 2019
Viewed by 1237
Abstract
In recent years, geologists have put considerable effort into studying the evolution of the Earth using different techniques that examine rocks, gases, and water at different channels such as the mantle, lithosphere, and atmosphere. Some of the methods include estimation of the heat flux between the atmosphere and sea ice, modeling of global temperature changes, and groundwater monitoring networks. That being said, algorithms for studying the Earth's evolution have been a debated topic for decades. In addition, there is distinct research on the mantle, lithosphere, and atmosphere using isotopic fractionation, which this paper takes into consideration to form genes at the former stage. This factor of isotopic fractionation can be molded into a quantum genetic algorithm (QGA) to study the Earth's evolution. We combine these factors because the gases containing these isotopes move from the mantle to the lithosphere or atmosphere through gaps or volcanic eruptions. We intend to use the Rb/Sr and Sm/Nd ratios to study the evolution of these channels. This paper, in general, provides the idea of gathering information about temperature changes by using isotopic ratios as chromosomes; in a QGA, the chromosomes depict the characteristics of a generation. Here these ratios depict the temperature characteristics, and the other steps of the QGA are molded to study these ratios in the form of temperature changes, which further signify the evolution of the Earth, based on the observation that temperature changes with changes in isotopic ratios. This paper collects these distinct studies and embeds them into an upgraded quantum genetic algorithm called the Quantum Genetic Terrain Algorithm (Q-GTA). Full article

7 pages, 388 KiB  
Proceeding Paper
Systematic Coarse-Grained Models for Molecular Systems Using Entropy
by Evangelia Kalligiannaki, Vagelis Harmandaris and Markos Katsoulakis
Proceedings 2020, 46(1), 27; https://doi.org/10.3390/ecea-5-06710 - 22 Nov 2019
Viewed by 1481
Abstract
The development of systematic coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of different methods for obtaining optimal parametrized coarse-grained models, starting from detailed atomistic representation for high dimensional molecular systems. We focus on methods based on information theory, such as relative entropy, showing that they provide parameterizations of coarse-grained models at equilibrium by minimizing a fitting functional over a parameter space. We also connect them with structural-based (inverse Boltzmann) and force matching methods. All the methods mentioned in principle are employed to approximate a many-body potential, the (n-body) potential of mean force, describing the equilibrium distribution of coarse-grained sites observed in simulations of atomically detailed models. We also present in a mathematically consistent way the entropy and force matching methods and their equivalence, which we derive for general nonlinear coarse-graining maps. We apply, and compare, the above-described methodologies in several molecular systems: A simple fluid (methane), water and a polymer (polyethylene) bulk system. Finally, for the latter we also provide reliable confidence intervals using a statistical analysis resampling technique, the bootstrap method. Full article

7 pages, 465 KiB  
Proceeding Paper
The New Method Using Shannon Entropy to Decide the Power Exponents on JMAK Equation
by Hirokazu Maruoka
Proceedings 2020, 46(1), 28; https://doi.org/10.3390/ecea-5-06660 - 17 Nov 2019
Cited by 1 | Viewed by 1211
Abstract
The JMAK (Johnson–Mehl–Avrami–Kolmogorov) equation is an exponential equation with power-law behavior in its argument, and is widely utilized to describe the relaxation process, the nucleation process, the deformation of materials and so on. Theoretically, the power exponent is occasionally associated with the geometrical factor of the nucleus, which gives an integral power exponent. However, non-integral power exponents occasionally appear, and they are sometimes considered as phenomenological in experiments. On the other hand, the power exponent decides the distribution of step times when the equation is considered as a superposition of step functions. This work intends to extend the interpretation of the power exponent by a new method that associates the Shannon entropy of the distribution of step times with the method of Lagrange multipliers, in which cumulants or moments obtained from the distribution function are preserved. This method decides the distribution of step times through the power exponent while certain statistical values are fixed. The Shannon entropy to which the second cumulant is introduced gives fractional power exponents, revealing a symmetrical distribution function that can be compared with experimental results. Various power exponents in which another statistical value is fixed are discussed with physical interpretation. This work gives new insight into the JMAK function and the method of Shannon entropy in general. Full article

8 pages, 663 KiB  
Proceeding Paper
Exposing Face-Swap Images Based on Deep Learning and ELA Detection
by Weiguo Zhang and Chenggang Zhao
Proceedings 2020, 46(1), 29; https://doi.org/10.3390/ecea-5-06684 - 17 Nov 2019
Cited by 8 | Viewed by 2678
Abstract
New developments in artificial intelligence (AI) have significantly improved the quality and efficiency of generating fake face images; for example, the face manipulations produced by DeepFake are so realistic that it is difficult to distinguish their authenticity, either automatically or by humans. In order to enhance the efficiency of distinguishing facial images generated by AI from real facial images, a novel model has been developed based on deep learning and error level analysis (ELA) detection, which is related to entropy and information theory through, for example, the cross-entropy loss function in the final Softmax layer, normalized mutual information in image preprocessing, and some applications of an encoder based on information theory. Due to the limitations of computing resources and production time, the DeepFake algorithm can only generate limited resolutions, resulting in two different image compression ratios between the fake face area as the foreground and the original area as the background, which leaves distinctive artifacts. By using the error level analysis detection method, we can detect the presence or absence of different image compression ratios and then use a convolutional neural network (CNN) to detect whether the image is fake. Experiments show that the training efficiency of the CNN model can be significantly improved by using the ELA method, and that the detection accuracy can reach more than 97% with the CNN architecture of this method. Compared to state-of-the-art models, the proposed model has advantages such as fewer layers, shorter training time, and higher efficiency. Full article
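A minimal sketch of the ELA step, assuming the common re-save-and-difference formulation with an illustrative JPEG quality of 90; the CNN that consumes the ELA map is not shown, and the paper's exact quality setting may differ.

```python
import io
import numpy as np
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG at a fixed quality and return the amplified pixel-wise
    difference; regions compressed differently (e.g., a spliced face) tend to stand out."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    resaved = Image.open(buf)
    diff = np.asarray(ImageChops.difference(original, resaved), dtype=np.float32)
    return np.clip(diff * (255.0 / max(diff.max(), 1.0)), 0, 255).astype(np.uint8)

# ela_map = error_level_analysis("face.jpg")   # fed alongside (or stacked with) the RGB image into the CNN
```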

774 KiB  
Proceeding Paper
Interpreting the High Energy Consumption of the Brain at Rest
by Alejandro Chinea Manrique de Lara
Proceedings 2020, 46(1), 30; https://doi.org/10.3390/ecea-5-06694 - 18 Nov 2019
Cited by 1 | Viewed by 1494
Abstract
The notion that the brain has a resting-state mode of functioning has received increasing attention in recent years. The idea derives from experimental observations that showed a relatively spatially and temporally uniform high level of neuronal activity when no explicit task was being performed. Surprisingly, the total energy consumption supporting neuronal firing in this conscious awake baseline state is orders of magnitude larger than the energy changes during stimulation. This paper presents a novel and counterintuitive explanation of the high energy consumption of the brain at rest, obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally, serving as the foundation from which intelligence can emerge and contributing to the efficiency of the brain's computations. The high energy consumption of the brain at rest is shown to be the result of the energetics associated with the most probable state of a statistical physics model aimed at capturing the behavior of a system constrained by power consumption and evolutionarily designed to minimize metabolic demands. Full article

6 pages, 421 KiB  
Proceeding Paper
Quantum Gravity Strategy for the Production of Dark Matter Using Cavitation by Minimum Entropy
by Edward Jiménez and Esteban E. Jimenez
Proceedings 2020, 46(1), 31; https://doi.org/10.3390/ecea-5-06664 - 17 Nov 2019
Viewed by 1306
Abstract
The minimum entropy is responsible for the formation of dark matter bubbles in a black hole, while the variation in the density of dark matter allows these bubbles to leave the event horizon. Some experimental evidence supports the dark matter production model in the inner vicinity of the border of a black hole. The principle of minimum entropy explains how cavitation occurs on the event horizon, which in turn complies with the 3D Navier–Stokes equations. Moreover, current works show in an axiomatic way that on the event horizon Einstein's equations are equivalent to the Navier–Stokes equations. Thus, the solutions of Einstein's equations combined with the boundary conditions establish a one-to-one correspondence with solutions of the incompressible Navier–Stokes equations, and in the near-horizon limit this provides a precise mathematical sense in which horizons are always incompressible fluids. It is also essential to understand that cavitation by minimum entropy is the production of dark matter bubbles by variation of the pressure inside or on the horizon of a black hole, in general Δp = p_{n+1} − p_n = (σ_n/σ_{n+1} − 1) p_n, or in particular Δp = (1 − P) p_0, where ∂P/∂t = (Δp/ρ_0) P. Finally, fluctuations in the density of dark matter can facilitate its escape from a black hole, if and only if there is previously dark matter produced by cavitation inside or on the horizon of a black hole and ρ_DM < ρ_B. Full article
