Special Issue "Entropy: From Physics to Information Sciences and Geometry"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 June 2018)

Special Issue Editors

Guest Editor
Prof. Dr. Ali Mohammad-Djafari

Laboratoire des Signaux et Systèmes, UMR 8506 CNRS-SUPELEC-UNIV PARIS SUD, Gif-sur-Yvette, France
Interests: inference, inverse problems, Bayesian computation, information and maximum entropy, knowledge extraction
Guest Editor
Prof. Dr. Miguel Rubi

Secció de Física Estadística i Interdisciplinària - Departament de Física de la Matèria Condensada, Facultat de Física, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona, Spain
Phone: +34-934021162
Interests: statistical physics, thermodynamics, biophysics

Special Issue Information

Dear Colleagues,

“Entropy” is one of the most frequently used scientific words. The reason is that it belongs to two major scientific domains: physics and information theory. Its origin goes back to the beginnings of physics (thermodynamics), but since Shannon it has also become central to information theory.

The main topics of the special issue include:

  • Physics: classical thermodynamics and quantum
  • Statistical physics and Bayesian computation
  • Geometrical science of information, topology and metrics
  • Maximum entropy principle and inference
  • Kullback and Bayes or information theory and Bayesian inference
  • Entropy in action (applications)

Interdisciplinary contributions from both theoretical and applied perspectives are very welcome, including papers addressing conceptual and methodological developments as well as new applications of entropy and information theory.

This Special Issue will publish extended versions of papers presented at the Entropy 2018 conference, but it is not limited to these. It is also open to other contributions related to the topics of the Entropy 2018 conference.

Prof. Dr. Ali Mohammad-Djafari
Prof. Dr. Miguel Rubi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (38 papers)


Research


Open AccessArticle From Physics to Bioengineering: Microbial Cultivation Process Design and Feeding Rate Control Based on Relative Entropy Using Nuisance Time
Entropy 2018, 20(10), 779; https://doi.org/10.3390/e20100779
Received: 15 August 2018 / Revised: 8 October 2018 / Accepted: 10 October 2018 / Published: 11 October 2018
Abstract
For historic reasons, industrial knowledge of reproducibility and restrictions imposed by regulations, open-loop feeding control approaches dominate in industrial fed-batch cultivation processes. In this study, a generic gray box biomass modeling procedure uses relative entropy as a key to approach the posterior similarly to how prior distribution approaches the posterior distribution by the multivariate path of Lagrange multipliers, for which a description of a nuisance time is introduced. The ultimate purpose of this study was to develop a numerical semi-global convex optimization procedure that is dedicated to the calculation of feeding rate time profiles during the fed-batch cultivation processes. The proposed numerical semi-global convex optimization of relative entropy is neither restricted to the gray box model nor to the bioengineering application. From the bioengineering application perspective, the proposed bioprocess design technique has benefits for both the regular feed-forward control and the advanced adaptive control systems, in which the model for biomass growth prediction is compulsory. After identification of the gray box model parameters, the options and alternatives in controllable industrial biotechnological processes are described. The main aim of this work is to achieve high reproducibility, controllability, and desired process performance. Glucose concentration measurements, which were used for the development of the model, become unnecessary for the development of the desired microbial cultivation process. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle A Novel Algorithm Based on the Pixel-Entropy for Automatic Detection of Number of Lanes, Lane Centers, and Lane Division Lines Formation
Entropy 2018, 20(10), 725; https://doi.org/10.3390/e20100725
Received: 30 July 2018 / Revised: 13 September 2018 / Accepted: 17 September 2018 / Published: 21 September 2018
Abstract
Lane detection for traffic surveillance in intelligent transportation systems is a challenge for vision-based systems. In this paper, a novel pixel-entropy based algorithm for the automatic detection of the number of lanes and their centers, as well as the formation of their division lines is proposed. Using as input a video from a static camera, each pixel behavior in the gray color space is modeled by a time series; then, for a time period τ , its histogram followed by its entropy are calculated. Three different types of theoretical pixel-entropy behaviors can be distinguished: (1) the pixel-entropy at the lane center shows a high value; (2) the pixel-entropy at the lane division line shows a low value; and (3) a pixel not belonging to the road has an entropy value close to zero. From the road video, several small rectangle areas are captured, each with only a few full rows of pixels. For each pixel of these areas, the entropy is calculated, then for each area or row an entropy curve is produced, which, when smoothed, has as many local maxima as lanes and one more local minima than lane division lines. For the purpose of testing, several real traffic scenarios under different weather conditions with other moving objects were used. However, these background objects, which are out of road, were filtered out. Our algorithm, compared to others based on trajectories of vehicles, shows the following advantages: (1) the lowest computational time for lane detection (only 32 s with a traffic flow of one vehicle/s per-lane); and (2) better results under high traffic flow with congestion and vehicle occlusion. Instead of detecting road markings, it forms lane-dividing lines. Here, the entropies of Shannon and Tsallis were used, but the entropy of Tsallis for a selected q of a finite set achieved the best results. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
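
The per-pixel entropy idea described in this abstract can be illustrated with a small sketch. The following Python snippet is not the authors' implementation: it simply models each pixel's gray-level time series by a histogram over a window of frames and compares Shannon and Tsallis entropies; the window length, bin count, and Tsallis index q are placeholder choices.

```python
import numpy as np

def pixel_entropies(frames, bins=32, q=2.0):
    """Per-pixel Shannon and Tsallis entropies of gray-level time series.

    frames : array of shape (T, H, W) with gray values in [0, 255].
    Returns two (H, W) maps; high values are expected at lane centers,
    low values at lane-division lines, near-zero off the road.
    """
    T, H, W = frames.shape
    shannon = np.zeros((H, W))
    tsallis = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            hist, _ = np.histogram(frames[:, i, j], bins=bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]
            shannon[i, j] = -np.sum(p * np.log(p))
            tsallis[i, j] = (1.0 - np.sum(p ** q)) / (q - 1.0)
    return shannon, tsallis

# Toy usage: random "video" standing in for a static-camera road sequence.
frames = np.random.randint(0, 256, size=(200, 40, 60)).astype(float)
H_shannon, H_tsallis = pixel_entropies(frames)
```

Smoothing a row of either entropy map and locating its local maxima and minima would then give candidate lane centers and division lines, in the spirit of the procedure the abstract describes.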

Open AccessArticle Representation and Characterization of Nonstationary Processes by Dilation Operators and Induced Shape Space Manifolds
Entropy 2018, 20(9), 717; https://doi.org/10.3390/e20090717
Received: 21 July 2018 / Revised: 29 August 2018 / Accepted: 7 September 2018 / Published: 19 September 2018
Abstract
We proposed in this work the introduction of a new vision of stochastic processes through geometry induced by dilation. The dilation matrices of a given process are obtained by a composition of rotation matrices built in with respect to partial correlation coefficients. Particularly interesting is the fact that the obtention of dilation matrices is regardless of the stationarity of the underlying process. When the process is stationary, only one dilation matrix is obtained and it corresponds therefore to Naimark dilation. When the process is nonstationary, a set of dilation matrices is obtained. They correspond to Kolmogorov decomposition. In this work, the nonstationary class of periodically correlated processes was of interest. The underlying periodicity of correlation coefficients is then transmitted to the set of dilation matrices. Because this set lives on the Lie group of rotation matrices, we can see them as points of a closed curve on the Lie group. Geometrical aspects can then be investigated through the shape of the obtained curves, and to give a complete insight into the space of curves, a metric and the derived geodesic equations are provided. The general results are adapted to the more specific case where the base manifold is the Lie group of rotation matrices, and because the metric in the space of curve naturally extends to the space of shapes; this enables a comparison between curves’ shapes and allows then the classification of random processes’ measures. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Harmonic Sierpinski Gasket and Applications
Entropy 2018, 20(9), 714; https://doi.org/10.3390/e20090714
Received: 14 August 2018 / Revised: 8 September 2018 / Accepted: 14 September 2018 / Published: 17 September 2018
Cited by 1
Abstract
The aim of this paper is to investigate the generalization of the Sierpinski gasket through the harmonic metric. In particular, this work presents an antenna based on such a generalization. In fact, the harmonic Sierpinski gasket is used as a geometric configuration of small antennas. As with fractal antennas and Rényi entropy, their performance is characterized by the associated entropy that is studied and discussed here. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Change-Point Detection Using the Conditional Entropy of Ordinal Patterns
Entropy 2018, 20(9), 709; https://doi.org/10.3390/e20090709
Received: 27 July 2018 / Revised: 3 September 2018 / Accepted: 12 September 2018 / Published: 14 September 2018
Abstract
This paper is devoted to change-point detection using only the ordinal structure of a time series. A statistic based on the conditional entropy of ordinal patterns characterizing the local up and down in a time series is introduced and investigated. The statistic requires only minimal a priori information on given data and shows good performance in numerical experiments. By the nature of ordinal patterns, the proposed method does not detect pure level changes but changes in the intrinsic pattern structure of a time series and so it could be interesting in combination with other methods. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
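
The statistic in this paper is built on the conditional entropy of ordinal patterns. The sketch below only computes that conditional entropy for a given segment (order-3 patterns, ties ignored), so it can be compared across sliding windows; the pattern order, window split, and toy signal are placeholder choices, not the authors' settings.

```python
import numpy as np
from itertools import permutations
from collections import Counter

def ordinal_patterns(x, order=3):
    """Map a time series to a sequence of ordinal-pattern indices."""
    pats = {p: k for k, p in enumerate(permutations(range(order)))}
    return np.array([pats[tuple(np.argsort(x[t:t + order]))]
                     for t in range(len(x) - order + 1)])

def conditional_entropy(patterns):
    """Empirical entropy of the next pattern given the current one (nats)."""
    pairs = Counter(zip(patterns[:-1], patterns[1:]))
    singles = Counter(patterns[:-1])
    n = len(patterns) - 1
    h = 0.0
    for (a, b), c in pairs.items():
        h -= (c / n) * np.log(c / singles[a])
    return h

# Toy usage: ordinal-transition entropy before and after a change in dynamics.
rng = np.random.default_rng(0)
x = np.concatenate([np.sin(0.3 * np.arange(500)) + 0.1 * rng.normal(size=500),
                    rng.normal(size=500)])
pats = ordinal_patterns(x)
print(conditional_entropy(pats[:499]), conditional_entropy(pats[499:]))
```

A pronounced jump of this quantity between windows is the kind of signature a change-point statistic of this type looks for.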

Open AccessArticle Combining Entropy Measures for Anomaly Detection
Entropy 2018, 20(9), 698; https://doi.org/10.3390/e20090698
Received: 31 July 2018 / Revised: 7 September 2018 / Accepted: 10 September 2018 / Published: 12 September 2018
Abstract
The combination of different sources of information is a problem that arises in several situations, for instance, when data are analysed using different similarity measures. Often, each source of information is given as a similarity, distance, or a kernel matrix. In this paper, we propose a new class of methods which consists of producing, for anomaly detection purposes, a single Mercer kernel (that acts as a similarity measure) from a set of local entropy kernels and, at the same time, avoids the task of model selection. This kernel is used to build an embedding of data in a variety that will allow the use of a (modified) one-class Support Vector Machine to detect outliers. We study several information combination schemes and their limiting behaviour when the data sample size increases within an Information Geometry context. In particular, we study the variety of the given positive definite kernel matrices to obtain the desired kernel combination as belonging to that variety. The proposed methodology has been evaluated on several real and artificial problems. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
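
The skeleton of the approach, combining several kernel matrices into a single Mercer kernel and feeding it to a precomputed-kernel one-class SVM, can be sketched as follows. This is not the paper's entropy kernels or its Information Geometry combination scheme; the RBF kernels, their scales, and the unweighted average are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics.pairwise import rbf_kernel

# Several information sources, mimicked here by RBF kernels at different scales;
# in the paper these would be entropy-based Mercer kernels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),        # bulk data
               rng.normal(6, 0.3, size=(5, 2))])       # a few planted anomalies
kernels = [rbf_kernel(X, X, gamma=g) for g in (0.1, 1.0, 10.0)]

# Simplest combination: the unweighted average of the kernel matrices,
# which is again a valid Mercer kernel; the paper studies geometric alternatives.
K = sum(kernels) / len(kernels)

detector = OneClassSVM(kernel="precomputed", nu=0.05).fit(K)
scores = detector.decision_function(K)                 # low scores = likely outliers
print(np.argsort(scores)[:5])                          # indices of the most anomalous points
```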

Open AccessArticle Chart for Thermoelectric Systems Operation Based on a Ternary Diagram for Bithermal Systems
Entropy 2018, 20(9), 666; https://doi.org/10.3390/e20090666
Received: 27 July 2018 / Revised: 27 August 2018 / Accepted: 29 August 2018 / Published: 3 September 2018
Abstract
Thermoelectric system’s operation needs careful attention to ensure optimal power conversion depending on the application aims. As a ternary diagram of bithermal systems allows a synthetic graphical analysis of the performance attainable by any work-heat conversion system, thermoelectric systems operation is plotted as a parametric curve function of the operating conditions (electric current and reservoirs’ temperature), based on the standard model of Ioffe. The threshold of each operating mode (heat engine, heat pump, thermal dissipation, and forced thermal transfer), along with the optimal efficiencies and powers of the heat pump and heat engine modes, are characterized graphically and analytically as a function of the material properties and the operating conditions. The sensibility of the performance aims (maximum efficiency vs. maximum power) with the operating conditions is, thus, highlighted. In addition, the specific contributions of each phenomenon involved in the semiconductor (reversible Seebeck effect, irreversible heat leakage by conduction and irreversible thermal dissipation by Joule effect) are discussed in terms of entropy generation. Finally, the impact of the exo-irreversibilities on the performance is analyzed by taking the external thermal resistances into account. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Entropy Production Associated with Aggregation into Granules in a Subdiffusive Environment
Entropy 2018, 20(9), 651; https://doi.org/10.3390/e20090651
Received: 7 August 2018 / Revised: 23 August 2018 / Accepted: 29 August 2018 / Published: 30 August 2018
Abstract
We study the entropy production that is associated with the growing or shrinking of a small granule in, for instance, a colloidal suspension or in an aggregating polymer chain. A granule will fluctuate in size when the energy of binding is comparable to k B T , which is the “quantum” of Brownian energy. Especially for polymers, the conformational energy landscape is often rough and has been commonly modeled as being self-similar in its structure. The subdiffusion that emerges in such a high-dimensional, fractal environment leads to a Fokker–Planck Equation with a fractional time derivative. We set up such a so-called fractional Fokker–Planck Equation for the aggregation into granules. From that Fokker–Planck Equation, we derive an expression for the entropy production of a growing granule. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open AccessArticle Random Spacing between Metal Tree Electrodeposits in Linear DLA Arrays
Entropy 2018, 20(9), 643; https://doi.org/10.3390/e20090643
Received: 6 July 2018 / Revised: 28 July 2018 / Accepted: 30 July 2018 / Published: 28 August 2018
Abstract
When we examine the random growth of trees along a linear alley in a rural area, we wonder what governs the location of those trees, and hence the distance between adjacent ones. The same question arises when we observe the growth of metal electro-deposition trees along a linear cathode in a rectangular film of solution. We carry out different sets of experiments wherein zinc trees are grown by electrolysis from a linear graphite cathode in a 2D film of zinc sulfate solution toward a thick zinc metal anode. We measure the distance between adjacent trees, calculate the average for each set, and correlate the latter with probability and entropy. We also obtain a computational image of the grown trees as a function of parameters such as the cell size, number of particles, and sticking probability. The dependence of average distance on concentration is studied and assessed. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessFeature PaperArticle Rényi Entropy Power Inequalities via Normal Transport and Rotation
Entropy 2018, 20(9), 641; https://doi.org/10.3390/e20090641
Received: 7 July 2018 / Revised: 22 August 2018 / Accepted: 23 August 2018 / Published: 26 August 2018
Abstract
Following a recent proof of Shannon’s entropy power inequality (EPI), a comprehensive framework for deriving various EPIs for the Rényi entropy is presented that uses transport arguments from normal densities and a change of variable by rotation. Simple arguments are given to recover the previously known Rényi EPIs and derive new ones, by unifying a multiplicative form with constant c and a modification with exponent α of previous works. In particular, for log-concave densities, we obtain a simple transportation proof of a sharp varentropy bound. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
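
For reference, the objects involved (standard definitions only; the specific constants c and exponents α of the new inequalities are established in the paper itself): the Rényi entropy of order α of a density f on R^n, the associated entropy power, and Shannon's classical EPI, which the Rényi versions generalize.

```latex
h_\alpha(X) = \frac{1}{1-\alpha}\,\log \int_{\mathbb{R}^n} f^\alpha(x)\,dx ,
\qquad
N_\alpha(X) = \exp\!\Big(\tfrac{2}{n}\, h_\alpha(X)\Big)
\qquad (\alpha>0,\ \alpha\neq 1),
```
```latex
N(X_1 + \cdots + X_m) \;\ge\; \sum_{k=1}^{m} N(X_k)
\quad \text{for independent } X_1,\dots,X_m \quad (\text{Shannon EPI, } \alpha \to 1).
```

The Rényi EPIs discussed in the abstract modify the Shannon form with a multiplicative constant c or an extra exponent α on the entropy powers.
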
Open AccessArticle 1-D versus 2-D Entropy Velocity Law for Water Discharge Assessment in a Rough Ditch
Entropy 2018, 20(9), 638; https://doi.org/10.3390/e20090638
Received: 27 June 2018 / Revised: 6 August 2018 / Accepted: 20 August 2018 / Published: 25 August 2018
Abstract
Water discharge assessment in open channel flow is one of the most crucial issues for hydraulic engineers in the fields of water resource management, river dynamics, ecohydraulics, irrigation, and hydraulic structure design, among others. Recent studies state that the entropy velocity law allows expeditive methodology for discharge estimation and rating curve development due to the simple mathematical formulation and implementation. Many works have been developed based on the one-dimensional (1-D) formulation of the entropy velocity profile, supporting measurements in the lab and the field for rating curve assessment, but in recent years, the two-dimensional (2-D) formulation was proposed and applied in studies of regular ditch flow, showing good performance. The present work deals with a comparison between the 1-D and 2-D approaches in order to give a general framework of threats and opportunities related to the robust operational application of such laws. The analysis was carried out on a laboratory ditch with regular roughness, under controlled boundary conditions, and in different stages, generating an exhaustive dashboard for better appraisal of the approaches. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
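
As background from the hydraulics literature (not a result of this paper), a commonly used form of Chiu's 1-D entropy velocity law ties the velocity profile and the mean-to-maximum velocity ratio to a single entropy parameter M; here ξ = y/D is the simplest choice of normalized depth coordinate.

```latex
u(y) \;=\; \frac{u_{\max}}{M}\,\ln\!\Big[\,1+\big(e^{M}-1\big)\,\frac{y}{D}\,\Big],
\qquad
\frac{\bar{u}}{u_{\max}} \;=\; \Phi(M) \;=\; \frac{e^{M}}{e^{M}-1}-\frac{1}{M}.
```

Because a gauged pair (ū, u_max) plus M yields the discharge, this is what makes the entropy velocity approach attractive for expeditive rating-curve development, in both its 1-D and 2-D formulations.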

Open AccessArticle Comparative Performance Analysis of a Simplified Curzon-Ahlborn Engine
Entropy 2018, 20(9), 637; https://doi.org/10.3390/e20090637
Received: 17 July 2018 / Revised: 21 August 2018 / Accepted: 22 August 2018 / Published: 25 August 2018
Abstract
This paper presents a finite-time thermodynamic optimization based on three different optimization criteria: Maximum Power Output (MP), Maximum Efficient Power (MEP), and Maximum Power Density (MPD), for a simplified Curzon-Ahlborn engine that was first proposed by Agrawal. The results obtained for the MP are compared with those obtained using MEP and MPD criteria. The results show that when a Newton heat transfer law is used, the efficiency values of the engine working in the MP regime are lower than the efficiency values obtained with the MEP and MPD regimes for all values of the parameter τ = T2/T1, where T1 and T2 are the hot and cold temperatures of the engine reservoirs (T2 < T1), respectively. However, when a Dulong-Petit heat transfer law is used, the efficiency values of the engine working at MEP are larger than those obtained with the MP and the MPD regimes for all values of τ. Notably, when 0 < τ < 0.68, the efficiency values for the MP regime are larger than those obtained with the MPD regime. Also, when 0.68 < τ < 1, the efficiency values for the aforementioned regimes are similar. Importantly, the parameter τ plays a crucial role in the engine performance, providing guidance during the design of real power plants. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
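
For orientation, two standard finite-time-thermodynamics results (quoted here as background, not taken from the paper): with Newtonian heat transfer, the efficiency at maximum power of a Curzon-Ahlborn engine and the Carnot bound are

```latex
\eta_{\mathrm{MP}} \;=\; 1-\sqrt{\tau}\;=\;1-\sqrt{\frac{T_2}{T_1}},
\qquad
\eta_{\mathrm{C}} \;=\; 1-\tau\;=\;1-\frac{T_2}{T_1},
\qquad 0<\tau<1 ,
```

so for every τ the maximum-power efficiency lies below the Carnot limit; these are the baselines against which the MEP and MPD regimes are compared in the abstract.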

Open AccessArticle Entropic Equilibria Selection of Stationary Extrema in Finite Populations
Entropy 2018, 20(9), 631; https://doi.org/10.3390/e20090631
Received: 29 June 2018 / Revised: 16 August 2018 / Accepted: 20 August 2018 / Published: 24 August 2018
Abstract
We propose the entropy of random Markov trajectories originating and terminating at the same state as a measure of the stability of a state of a Markov process. These entropies can be computed in terms of the entropy rates and stationary distributions of Markov processes. We apply this definition of stability to local maxima and minima of the stationary distribution of the Moran process with mutation and show that variations in population size, mutation rate, and strength of selection all affect the stability of the stationary extrema. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
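
One classical identity consistent with the abstract's remark that these entropies follow from entropy rates and stationary distributions is the Ekroot-Cover trajectory-entropy formula: the entropy of a random trajectory from state i back to i equals the chain's entropy rate divided by the stationary probability of i. A minimal numerical sketch (the transition matrix is a toy example; the paper's Moran-process application is not reproduced here):

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of the transition matrix P for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def entropy_rate(P, pi):
    """Entropy rate H = -sum_i pi_i sum_j P_ij log P_ij (nats)."""
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi[:, None] * P * logP)

def return_trajectory_entropy(P):
    """Entropy of a random trajectory from state i back to i: H_ii = H / pi_i."""
    pi = stationary_distribution(P)
    return entropy_rate(P, pi) / pi

# Toy usage: a 3-state chain; a larger H_ii corresponds to a less "stable" state
# in the sense proposed in the abstract.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])
print(return_trajectory_entropy(P))
```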

Open AccessArticle On the Use of Transfer Entropy to Investigate the Time Horizon of Causal Influences between Signals
Entropy 2018, 20(9), 627; https://doi.org/10.3390/e20090627
Received: 28 June 2018 / Revised: 14 August 2018 / Accepted: 15 August 2018 / Published: 22 August 2018
Abstract
Understanding the details of the correlation between time series is an essential step on the route to assessing the causal relation between systems. Traditional statistical indicators, such as the Pearson correlation coefficient and the mutual information, have some significant limitations. More recently, transfer entropy has been proposed as a powerful tool to understand the flow of information between signals. In this paper, the comparative advantages of transfer entropy, for determining the time horizon of causal influence, are illustrated with the help of synthetic data. The technique has been specifically revised for the analysis of synchronization experiments. The investigation of experimental data from thermonuclear plasma diagnostics proves the potential and limitations of the developed approach. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
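
In its simplest binned approximation, the transfer entropy from Y to X at a candidate time horizon is the reduction in uncertainty about the next sample of X obtained by conditioning on a past sample of Y in addition to the present of X. The rough sketch below (coarse histogram estimator, first-order embedding, arbitrary bin count) is only illustrative and is not the estimator used in the paper.

```python
import numpy as np
from collections import Counter

def _cond_entropy(joint_counts, cond_index):
    """H(first symbol | symbols at cond_index) from a Counter of tuples (nats)."""
    n = sum(joint_counts.values())
    cond = Counter()
    for key, c in joint_counts.items():
        cond[tuple(key[i] for i in cond_index)] += c
    h = 0.0
    for key, c in joint_counts.items():
        h -= (c / n) * np.log(c / cond[tuple(key[i] for i in cond_index)])
    return h

def transfer_entropy(x, y, lag, bins=8):
    """Binned estimate of TE_{Y->X}, with Y observed `lag` samples before X_{t+1}."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    t = np.arange(lag, len(x) - 1)
    triples = Counter(zip(xd[t + 1], xd[t], yd[t + 1 - lag]))
    pairs = Counter(zip(xd[t + 1], xd[t]))
    # H(X_{t+1} | X_t) - H(X_{t+1} | X_t, Y_{t+1-lag})
    return _cond_entropy(pairs, (1,)) - _cond_entropy(triples, (1, 2))

# Toy usage: y drives x with a delay of 5 samples, so TE should peak near lag 5.
rng = np.random.default_rng(1)
y = rng.normal(size=3000)
x = np.roll(y, 5) + 0.5 * rng.normal(size=3000)
print([round(transfer_entropy(x, y, k), 3) for k in (1, 3, 5, 7)])
```

Scanning the lag and locating the maximum of the estimate is the basic mechanism behind determining the time horizon of causal influence discussed above.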

Open AccessArticle Approximate Bayesian Computation for Estimating Parameters of Data-Consistent Forbush Decrease Model
Entropy 2018, 20(8), 622; https://doi.org/10.3390/e20080622
Received: 20 July 2018 / Revised: 14 August 2018 / Accepted: 20 August 2018 / Published: 20 August 2018
Abstract
Realistic modeling of complex physical phenomena is always quite a challenging task. The main problem usually concerns the uncertainties surrounding model input parameters, especially when not all information about a modeled phenomenon is known. In such cases, Approximate Bayesian Computation (ABC) methodology may be helpful. The ABC is based on a comparison of the model output data with the experimental data, to estimate the best set of input parameters of the particular model. In this paper, we present a framework applying the ABC methodology to estimate the parameters of the model of Forbush decrease (Fd) of the galactic cosmic ray intensity. The Fd is modeled by the numerical solution of the Fokker–Planck equation in five-dimensional space (three spatial variables, the time and particles energy). The most problematic in Fd modeling is the lack of detailed knowledge about the spatial and temporal profiles of the parameters responsible for the creation of the Fd. Among these parameters, the diffusion coefficient plays a central role. We employ the ABC Sequential Monte Carlo algorithm, scanning the space of the diffusion coefficient parameters within the region of the heliosphere where the Fd is created. Assessment of the correctness of the proposed parameters is done by comparing the model output data with the experimental data of the galactic cosmic ray intensity. The particular attention is put on the rigidity dependence of the rigidity spectrum exponent. The proposed framework is adopted to create the model of the Fd observed by the neutron monitors and ground muon telescope in November 2004. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
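
The ABC machinery in the paper (an ABC Sequential Monte Carlo scan over diffusion-coefficient parameters of a five-dimensional Fokker-Planck model) is far heavier than anything that fits here, but its core idea, accepting parameter draws whose simulated output lies close to the observed data, can be shown with a plain rejection sampler on a toy model. Everything below (the toy forward model, prior ranges, summary statistics, tolerance) is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_model(theta, n=200):
    """Toy stand-in for an expensive simulation: Gaussian data with mean/std theta."""
    mu, sigma = theta
    return rng.normal(mu, sigma, size=n)

def summary(data):
    """Summary statistics compared between simulation and observation."""
    return np.array([data.mean(), data.std()])

def abc_rejection(observed, prior_sampler, n_draws=20000, tol=0.1):
    """Keep parameter draws whose simulated summaries lie within `tol` of the data."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if np.linalg.norm(summary(forward_model(theta)) - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

observed = rng.normal(1.5, 0.7, size=200)           # pretend "experimental" data
prior = lambda: (rng.uniform(0, 3), rng.uniform(0.1, 2))
posterior = abc_rejection(observed, prior)
print(posterior.mean(axis=0), len(posterior))       # crude posterior mean and yield
```

An ABC-SMC scheme of the kind used in the paper refines this by iteratively shrinking the tolerance and reweighting the accepted particles instead of sampling the prior from scratch.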

Open AccessArticle The Maximum Entropy Method in Ultrasonic Non-Destructive Testing—Increasing the Resolution, Image Noise Reduction and Echo Acquisition Rate
Entropy 2018, 20(8), 621; https://doi.org/10.3390/e20080621
Received: 26 June 2018 / Revised: 27 July 2018 / Accepted: 17 August 2018 / Published: 20 August 2018
Abstract
The use of linear methods, for example, the Combined Synthetic Aperture Focusing Technique (C–SAFT), does not allow one to obtain images with high resolution and low noise, especially structural noise in all cases. Non-linear methods should improve the quality of the reconstructed image. Several examples of the application of the maximum entropy (ME) method for ultrasonic echo processing in order to reconstruct the image of reflectors with Rayleigh super-resolution and a high signal-to-noise ratio are considered in the article. The use of the complex phase-shifted Barker code signal as a probe pulse and the compression of measured echoes by the ME method made it possible to increase the signal-to-noise ratio by more than 20 dB for the image of a flat-bottom hole with a diameter of 1 mm in a model experiment. A modification of the ME method for restoring the reflector image by the time-of-flight diffraction (TOFD) method is considered, taking into account the change of the echo signal shape, depending on the depth of the reflector. Using the ME method, 2.5D-images of models of dangling cracks in a pipeline with a diameter of 800 mm were obtained, which make it possible to determine their dimensions. In the object with structural noise, using the ME method, it was possible to increase the signal-to-noise ratio of the reflector image by more than 12 dB. To accelerate the acquisition of echoes in the dual scan mode, it is proposed to use code division multiple access (CDMA) technology based on simultaneous emission by all elements of the array of pseudo-orthogonal signals. The model experiment showed the effectiveness of applying the ME method. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Information Geometry of Randomized Quantum State Tomography
Entropy 2018, 20(8), 609; https://doi.org/10.3390/e20080609
Received: 29 June 2018 / Revised: 5 August 2018 / Accepted: 13 August 2018 / Published: 16 August 2018
Abstract
Suppose that a d-dimensional Hilbert space H ≅ C^d admits a full set of mutually unbiased bases {|1^(a)⟩, …, |d^(a)⟩}, where a = 1, …, d + 1. A randomized quantum state tomography is a scheme for estimating an unknown quantum state on H through iterative applications of measurements M^(a) = {|1^(a)⟩⟨1^(a)|, …, |d^(a)⟩⟨d^(a)|} for a = 1, …, d + 1, where the numbers of applications of these measurements are random variables. We show that the space of the resulting probability distributions enjoys a mutually orthogonal dualistic foliation structure, which provides us with a simple geometrical insight into the maximum likelihood method for the quantum state tomography. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Thermodynamic Analysis of Irreversible Desiccant Systems
Entropy 2018, 20(8), 595; https://doi.org/10.3390/e20080595
Received: 8 June 2018 / Revised: 2 August 2018 / Accepted: 6 August 2018 / Published: 9 August 2018
Abstract
A new general thermodynamic mapping of desiccant systems’ performance is conducted to estimate the potentiality and determine the proper application field of the technology. This targets certain room conditions and given outdoor temperature and humidity prior to the selection of the specific desiccant material and technical details of the system configuration. This allows the choice of the operative state of the system to be independent from the limitations of the specific design and working fluid. An expression of the entropy balance suitable for describing the operability of a desiccant system at steady state is obtained by applying a control volume approach, defining sensible and latent effectiveness parameters, and assuming ideal gas behaviour of the air-vapour mixture. This formulation, together with mass and energy balances, is used to conduct a general screening of the system performance. The theoretical advantage and limitation of desiccant dehumidification air conditioning, maximum efficiency for given conditions constraints, least irreversible configuration for a given operative target, and characteristics of the system for a target efficiency can be obtained from this thermodynamic mapping. Once the thermo-physical properties and the thermodynamic equilibrium relationship of the liquid desiccant mixture or solid coating material are known, this method can be applied to a specific technical case to select the most appropriate working medium and guide the specific system design to achieve the target performance. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle The Duality of Entropy/Extropy, and Completion of the Kullback Information Complex
Entropy 2018, 20(8), 593; https://doi.org/10.3390/e20080593
Received: 3 July 2018 / Revised: 26 July 2018 / Accepted: 6 August 2018 / Published: 9 August 2018
Abstract
The refinement axiom for entropy has been provocative in providing foundations of information theory, recognised as thoughtworthy in the writings of both Shannon and Jaynes. A resolution to their concerns has been provided recently by the discovery that the entropy measure of a probability distribution has a dual measure, a complementary companion designated as “extropy”. We report here the main results that identify this fact, specifying the dual equations and exhibiting some of their structure. The duality extends beyond a simple assessment of entropy, to the formulation of relative entropy and the Kullback symmetric distance between two forecasting distributions. This is defined by the sum of a pair of directed divergences. Examining the defining equation, we notice that this symmetric measure can be generated by two other explicable pairs of functions as well, neither of which is a Bregman divergence. The Kullback information complex is constituted by the symmetric measure of entropy/extropy along with one of each of these three function pairs. It is intimately related to the total logarithmic score of two distinct forecasting distributions for a quantity under consideration, this being a complete proper score. The information complex is isomorphic to the expectations that the two forecasting distributions assess for their achieved scores, each for its own score and for the score achieved by the other. Analysis of the scoring problem exposes a Pareto optimal exchange of the forecasters’ scores that both are willing to engage. Both would support its evaluation for assessing the relative quality of the information they provide regarding the observation of an unknown quantity of interest. We present our results without proofs, as these appear in source articles that are referenced. The focus here is on their content, unhindered. The mathematical syntax of probability we employ relies upon the operational subjective constructions of Bruno de Finetti. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
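
For a discrete forecasting distribution, the entropy/extropy pair mentioned above has a simple closed form (extropy in the sense of Lad, Sanfilippo, and Agrò). The short numerical sketch below is not taken from the paper; it only shows the two dual quantities side by side.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i (nats)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

def extropy(p):
    """Dual measure J(p) = -sum (1 - p_i) log(1 - p_i) (nats)."""
    p = np.asarray(p, dtype=float)
    q = 1.0 - p
    return -np.sum(q[q > 0] * np.log(q[q > 0]))

# For binary distributions entropy and extropy coincide; they differ for n > 2.
print(entropy([0.3, 0.7]), extropy([0.3, 0.7]))
print(entropy([0.2, 0.3, 0.5]), extropy([0.2, 0.3, 0.5]))
```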

Open AccessArticle Entropic Stabilization of Cas4 Protein SSO0001 Predicted with Popcoen
Entropy 2018, 20(8), 580; https://doi.org/10.3390/e20080580
Received: 28 June 2018 / Revised: 27 July 2018 / Accepted: 28 July 2018 / Published: 7 August 2018
Abstract
Popcoen is a method for configurational entropy estimation of proteins based on machine-learning. Entropy is predicted with an artificial neural network which was trained on simulation trajectories of a large set of representative proteins. Popcoen is extremely fast compared to other approaches based on the sampling of a multitude of microstates. Consequently, Popcoen can be incorporated into a large class of protein software which currently neglects configurational entropy for performance reasons. Here, we apply Popcoen to various conformations of the Cas4 protein SSO0001 of Sulfolobus solfataricus, a protein that assembles to a decamer of known toroidal shape. We provide numerical evidence that the native state (NAT) of a SSO0001 monomer has a similar structure to the protomers of the oligomer, where NAT of the monomer is stabilized mainly entropically. Due to its large amount of configurational entropy, NAT has lower free energy than alternative conformations of very low enthalpy and solvation free-energy. Hence, SSO0001 serves as an example case where neglecting configurational entropy leads to incorrect conclusion. Our results imply that no refolding of the subunits is required during oligomerization which suggests that configurational entropy is employed by nature to largely enhance the rate of assembly. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Spherical Minimum Description Length
Entropy 2018, 20(8), 575; https://doi.org/10.3390/e20080575
Received: 8 June 2018 / Revised: 28 July 2018 / Accepted: 30 July 2018 / Published: 3 August 2018
Abstract
We consider the problem of model selection using the Minimum Description Length (MDL) criterion for distributions with parameters on the hypersphere. Model selection algorithms aim to find a compromise between goodness of fit and model complexity. Variables often considered for complexity penalties involve number of parameters, sample size and shape of the parameter space, with the penalty term often referred to as stochastic complexity. Current model selection criteria either ignore the shape of the parameter space or incorrectly penalize the complexity of the model, largely because typical Laplace approximation techniques yield inaccurate results for curved spaces. We demonstrate how the use of a constrained Laplace approximation on the hypersphere yields a novel complexity measure that more accurately reflects the geometry of these spherical parameters spaces. We refer to this modified model selection criterion as spherical MDL. As proof of concept, spherical MDL is used for bin selection in histogram density estimation, performing favorably against other model selection criteria. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Investigating Information Geometry in Classical and Quantum Systems through Information Length
Entropy 2018, 20(8), 574; https://doi.org/10.3390/e20080574
Received: 19 July 2018 / Revised: 1 August 2018 / Accepted: 1 August 2018 / Published: 3 August 2018
Cited by 1
Abstract
Stochastic processes are ubiquitous in nature and laboratories, and play a major role across traditional disciplinary boundaries. These stochastic processes are described by different variables and are thus very system-specific. In order to elucidate underlying principles governing different phenomena, it is extremely valuable to utilise a mathematical tool that is not specific to a particular system. We provide such a tool based on information geometry by quantifying the similarity and disparity between Probability Density Functions (PDFs) by a metric such that the distance between two PDFs increases with the disparity between them. Specifically, we invoke the information length L(t) to quantify the information change associated with a time-dependent PDF. L(t) is uniquely defined as a function of time for a given initial condition. We demonstrate the utility of L(t) in understanding information change and attractor structure in classical and quantum systems. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
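
A discretized version of the information length, L(t) = ∫₀ᵗ [∫ (∂p/∂t')² / p dx]^{1/2} dt', is easy to evaluate numerically once the time-dependent PDF is tabulated on a grid. The sketch below uses a toy Gaussian relaxation; grid sizes and the test PDF are arbitrary and are not taken from the paper.

```python
import numpy as np

def information_length(pdfs, dx, dt):
    """Discretized L(t) = int_0^t sqrt( int (d_t p)^2 / p dx ) dt'.

    pdfs : array (n_times, n_x) of PDFs sampled on a uniform x grid.
    Returns L evaluated on the same time grid.
    """
    dpdt = np.gradient(pdfs, dt, axis=0)
    rate2 = np.sum(np.where(pdfs > 0, dpdt**2 / pdfs, 0.0), axis=1) * dx
    return np.concatenate(([0.0], np.cumsum(np.sqrt(rate2[1:]) * dt)))

# Toy usage: unit-width Gaussian whose mean relaxes toward 0 (O-U-like behaviour).
x = np.linspace(-10, 10, 1001)
t = np.linspace(0, 5, 500)
mean = 4.0 * np.exp(-t)
pdfs = np.exp(-(x[None, :] - mean[:, None])**2 / 2) / np.sqrt(2 * np.pi)
L = information_length(pdfs, dx=x[1] - x[0], dt=t[1] - t[0])
print(L[-1])   # dimensionless number of statistical states traversed
```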

Open AccessArticle A New Underwater Acoustic Signal Denoising Technique Based on CEEMDAN, Mutual Information, Permutation Entropy, and Wavelet Threshold Denoising
Entropy 2018, 20(8), 563; https://doi.org/10.3390/e20080563
Received: 1 July 2018 / Revised: 22 July 2018 / Accepted: 25 July 2018 / Published: 28 July 2018
Abstract
Owing to the complexity of the ocean background noise, underwater acoustic signal denoising is one of the hotspot problems in the field of underwater acoustic signal processing. In this paper, we propose a new technique for underwater acoustic signal denoising based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), mutual information (MI), permutation entropy (PE), and wavelet threshold denoising. CEEMDAN is an improved algorithm of empirical mode decomposition (EMD) and ensemble EMD (EEMD). First, CEEMDAN is employed to decompose noisy signals into many intrinsic mode functions (IMFs). IMFs can be divided into three parts: noise IMFs, noise-dominant IMFs, and real IMFs. Then, the noise IMFs can be identified on the basis of MIs of adjacent IMFs; the other two parts of IMFs can be distinguished based on the values of PE. Finally, noise IMFs were removed, and wavelet threshold denoising is applied to noise-dominant IMFs; we can obtain the final denoised signal by combining real IMFs and denoised noise-dominant IMFs. Simulation experiments were conducted by using simulated data, chaotic signals, and real underwater acoustic signals; the proposed denoising technique performs better than other existing denoising techniques, which is beneficial to the feature extraction of underwater acoustic signal. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
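
A rough skeleton of the pipeline described above is sketched below under several assumptions: the PyEMD package's CEEMDAN class is assumed to be callable on a 1-D NumPy array (adjust to its documented interface if needed), PyWavelets supplies the wavelet transform, and the MI/PE thresholds and the rule locating the noise / noise-dominant / real IMF boundary are arbitrary placeholders, not the criteria derived in the paper.

```python
import math
import numpy as np
import pywt                    # PyWavelets (assumed installed)
from PyEMD import CEEMDAN      # PyEMD package (pip install EMD-signal); API assumed

def mutual_info(a, b, bins=32):
    """Histogram estimate of the mutual information between two signals (nats)."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def permutation_entropy(x, order=4):
    """Normalized permutation entropy, used here to grade how noise-like an IMF is."""
    pats = [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)]
    _, counts = np.unique(pats, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(order)))

def wavelet_denoise(x, wavelet="db4", level=3):
    """Soft wavelet thresholding with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def ceemdan_denoise(signal, mi_thr=0.05, pe_thr=0.6):
    """Placeholder pipeline: drop noise IMFs, wavelet-denoise noise-dominant IMFs."""
    imfs = CEEMDAN()(signal)                       # high-frequency IMFs come first
    mis = [mutual_info(imfs[i], imfs[i + 1]) for i in range(len(imfs) - 1)]
    k = next((i + 1 for i, v in enumerate(mis) if v > mi_thr), 1)
    out = np.zeros_like(signal, dtype=float)
    for i, imf in enumerate(imfs):
        if i < k:                                  # treated as pure-noise IMFs
            continue
        out += wavelet_denoise(imf) if permutation_entropy(imf) > pe_thr else imf
    return out
```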

Open AccessArticle Magnetocaloric Effect in Non-Interactive Electron Systems: “The Landau Problem” and Its Extension to Quantum Dots
Entropy 2018, 20(8), 557; https://doi.org/10.3390/e20080557
Received: 29 June 2018 / Revised: 23 July 2018 / Accepted: 24 July 2018 / Published: 27 July 2018
Abstract
In this work, we report the magnetocaloric effect (MCE) in two systems of non-interactive particles: the first corresponds to the Landau problem case and the second the case of an electron in a quantum dot subjected to a parabolic confinement potential. In the first scenario, we realize that the effect is totally different from what happens when the degeneracy of a single electron confined in a magnetic field is not taken into account. In particular, when the degeneracy of the system is negligible, the magnetocaloric effect cools the system, while in the other case, when the degeneracy is strong, the system heats up. For the second case, we study the competition between the characteristic frequency of the potential trap and the cyclotron frequency to find the optimal region that maximizes the ΔT of the magnetocaloric effect, and due to the strong degeneracy of this problem, the results are in coherence with those obtained for the Landau problem. Finally, we consider the case of a transition from a normal MCE to an inverse one and back to normal as a function of temperature. This is due to the competition between the diamagnetic and paramagnetic response when the electron spin in the formulation is included. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle On the van der Waals Gas, Contact Geometry and the Toda Chain
Entropy 2018, 20(8), 554; https://doi.org/10.3390/e20080554
Received: 7 June 2018 / Revised: 11 July 2018 / Accepted: 16 July 2018 / Published: 26 July 2018
Abstract
A Toda–chain symmetry is shown to underlie the van der Waals gas and its close cousin, the ideal gas. Links to contact geometry are explored. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open AccessArticle Hierarchical Structure of Generalized Thermodynamic and Informational Entropy
Entropy 2018, 20(8), 553; https://doi.org/10.3390/e20080553
Received: 26 June 2018 / Revised: 23 July 2018 / Accepted: 23 July 2018 / Published: 25 July 2018
Cited by 1
Abstract
The present research aimed at discussing the thermodynamic and informational aspects of entropy concept to propose a unitary perspective of its definitions as an inherent property of any system in any state. The dualism and the relation between physical nature of information and the informational content of physical states of matter and phenomena play a fundamental role in the description of multi-scale systems characterized by hierarchical configurations. A method is proposed to generalize thermodynamic and informational entropy property and characterize the hierarchical structure of its canonical definition at macroscopic and microscopic levels of a system described in the domain of classical and quantum physics. The conceptual schema is based on dualisms and symmetries inherent to the geometric and kinematic configurations and interactions occurring in many-particle and few-particle thermodynamic systems. The hierarchical configuration of particles and sub-particles, representing the constitutive elements of physical systems, breaks down into levels characterized by particle masses subdivision, implying positions and velocities degrees of freedom multiplication. This hierarchy accommodates the allocation of phenomena and processes from higher to lower levels in the respect of the equipartition theorem of energy. However, the opposite and reversible process, from lower to higher level, is impossible by virtue of the Second Law, expressed as impossibility of Perpetual Motion Machine of the Second Kind (PMM2) remaining valid at all hierarchical levels, and the non-existence of Maxwell’s demon. Based on the generalized definition of entropy property, the hierarchical structure of entropy contribution and production balance, determined by degrees of freedom and constraints of systems configuration, is established. Moreover, as a consequence of the Second Law, the non-equipartition theorem of entropy is enunciated, which would be complementary to the equipartition theorem of energy derived from the First Law. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Information Geometry of Nonlinear Stochastic Systems
Entropy 2018, 20(8), 550; https://doi.org/10.3390/e20080550
Received: 28 June 2018 / Revised: 20 July 2018 / Accepted: 23 July 2018 / Published: 25 July 2018
Cited by 1
Abstract
We elucidate the effect of different deterministic nonlinear forces on geometric structure of stochastic processes by investigating the transient relaxation of initial PDFs of a stochastic variable x under forces proportional to −x^n (n = 3, 5, 7) and different strength D of δ-correlated stochastic noise. We identify the three main stages consisting of nondiffusive evolution, quasi-linear Gaussian evolution and settling into stationary PDFs. The strength of stochastic noise is shown to play a crucial role in determining these timescales as well as the peak amplitude and width of PDFs. From time-evolution of PDFs, we compute the rate of information change for a given initial PDF and uniquely determine the information length L(t) as a function of time that represents the number of different statistical states that a system evolves through in time. We identify a robust geodesic (where the information changes at a constant rate) in the initial stage, and map out geometric structure of an attractor as L(t) ∝ μ^m, where μ is the position of an initial Gaussian PDF. The scaling exponent m increases with n, and also varies with D (although to a lesser extent). Our results highlight ubiquitous power-laws and multi-scalings of information geometry due to nonlinear interaction. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open AccessArticle Entropy-Based Feature Extraction for Electromagnetic Discharges Classification in High-Voltage Power Generation
Entropy 2018, 20(8), 549; https://doi.org/10.3390/e20080549
Received: 22 May 2018 / Revised: 19 July 2018 / Accepted: 20 July 2018 / Published: 25 July 2018
Abstract
This work exploits four entropy measures known as Sample, Permutation, Weighted Permutation, and Dispersion Entropy to extract relevant information from Electromagnetic Interference (EMI) discharge signals that are useful in fault diagnosis of High-Voltage (HV) equipment. Multi-class classification algorithms are used to classify or distinguish between various discharge sources such as Partial Discharges (PD), Exciter, Arcing, micro Sparking and Random Noise. The signals were measured and recorded on different sites followed by EMI expert’s data analysis in order to identify and label the discharge source type contained within the signal. The classification was performed both within each site and across all sites. The system performs well for both cases with extremely high classification accuracy within site. This work demonstrates the ability to extract relevant entropy-based features from EMI discharge sources from time-resolved signals requiring minimal computation making the system ideal for a potential application to online condition monitoring based on EMI. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
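Among the four features, permutation entropy is the quickest to sketch: embed the signal into ordinal patterns and take the Shannon entropy of the pattern frequencies. The sketch below illustrates this on synthetic signals; the embedding order, delay, and signals are illustrative choices, not the paper's EMI settings.

```python
import numpy as np
from collections import Counter
from math import log, factorial

def permutation_entropy(signal, order=3, delay=1, normalize=True):
    """Shannon entropy of ordinal patterns (Bandt-Pompe permutation entropy)."""
    n = len(signal)
    patterns = Counter()
    for i in range(n - (order - 1) * delay):
        window = signal[i : i + order * delay : delay]       # 'order' samples, spaced by 'delay'
        patterns[tuple(np.argsort(window))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * log(c / total) for c in patterns.values())
    return h / log(factorial(order)) if normalize else h

# Illustrative signals (not measured EMI data): broadband noise vs. a clean sinusoid.
rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)
tone = np.sin(2 * np.pi * 0.01 * np.arange(5000))

print("PE(noise)    ~", round(permutation_entropy(noise, order=4), 3))   # close to 1
print("PE(sinusoid) ~", round(permutation_entropy(tone, order=4), 3))    # much lower
```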

Open Access Article Photons Probe Entropic Potential Variation during Molecular Confinement in Nanocavities
Entropy 2018, 20(8), 545; https://doi.org/10.3390/e20080545
Received: 12 July 2018 / Revised: 20 July 2018 / Accepted: 21 July 2018 / Published: 24 July 2018
PDF Full-text (12888 KB) | HTML Full-text | XML Full-text
Abstract
In thin polymeric layers, external molecular analytes may well be confined within tiny surface nano/microcavities, or they may be attached to ligand adhesion binding sites via electrical dipole forces. Even though molecular trapping is followed by a variation of the entropic potential, experimental evidence of entropic energy variation from molecular confinement is scarce, because tiny differences in thermodynamic energy density can be tracked only via sub-nm surface strain. Here, it is shown that water confinement within photon-induced nanocavities in poly(2-hydroxyethyl methacrylate) (PHEMA) layers can be trailed by an entropic potential variation that competes with a thermodynamic potential from electric dipole attachment of molecular adsorbates in polymeric ligands. The nano/microcavities and the ligands were fabricated on a PHEMA matrix by vacuum ultraviolet laser photons at 157 nm. The entropic energy variation during confinement of water analytes on the photon-processed PHEMA layer was monitored via sub-nm surface strain by applying white light reflectance spectroscopy, nanoindentation, contact angle measurements, Atomic Force Microscopy (AFM) imaging, and surface and fractal analysis. The methodology has the potency to identify entropic energy density variations of less than 1 pJ m^-3 and to monitor dipole and entropic fields on biosurfaces. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open Access Article Residual Multiparticle Entropy for a Fractal Fluid of Hard Spheres
Entropy 2018, 20(7), 544; https://doi.org/10.3390/e20070544
Received: 1 July 2018 / Revised: 18 July 2018 / Accepted: 20 July 2018 / Published: 23 July 2018
PDF Full-text (440 KB) | HTML Full-text | XML Full-text
Abstract
The residual multiparticle entropy (RMPE) of a fluid is defined as the difference, Δs, between the excess entropy per particle (relative to an ideal gas with the same temperature and density), s_ex, and the pair-correlation contribution, s_2. Thus, the RMPE represents the net contribution to s_ex due to spatial correlations involving three, four, or more particles. A heuristic “ordering” criterion identifies the vanishing of the RMPE as an underlying signature of an impending structural or thermodynamic transition of the system from a less ordered to a more spatially organized condition (freezing is a typical example). Regardless of this, knowledge of the RMPE is important to assess the impact of non-pair multiparticle correlations on the entropy of the fluid. Recently, an accurate and simple proposal for the thermodynamic and structural properties of a hard-sphere fluid in fractional dimension 1 < d < 3 has been made (Santos, A.; López de Haro, M. Phys. Rev. E 2016, 93, 062126). The aim of this work is to use this approach to evaluate the RMPE as a function of both d and the packing fraction ϕ. It is observed that, for any given dimensionality d, the RMPE takes negative values for small densities, reaches a negative minimum Δs_min at a packing fraction ϕ_min, and then rapidly increases, becoming positive beyond a certain packing fraction ϕ_0. Interestingly, while both ϕ_min and ϕ_0 monotonically decrease as the dimensionality increases, the value of Δs_min exhibits a nonmonotonic behavior, reaching an absolute minimum at a fractional dimensionality d ≃ 2.38. A plot of the scaled RMPE Δs/|Δs_min| shows a quasiuniversal behavior in the region −0.14 ≲ ϕ − ϕ_0 ≲ 0.02. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
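Since Δs(ϕ) = s_ex(ϕ) − s_2(ϕ) by definition, locating ϕ_min and the zero crossing ϕ_0 reduces to a one-dimensional scan once both entropy branches are known. The sketch below shows that scan; the two lambdas are toy placeholders, not the fractal hard-sphere expressions of Santos and López de Haro.

```python
import numpy as np

# Schematic scan for the RMPE minimum and zero crossing.  The two lambdas are
# *placeholders* for s_ex(phi) and s_2(phi) of the fractal hard-sphere fluid;
# the actual Santos / Lopez de Haro expressions would be substituted here.
s_ex = lambda phi: -(4 * phi - 3 * phi**2) / (1 - phi) ** 2   # Carnahan-Starling-like toy
s_2  = lambda phi: s_ex(phi) - 0.5 * phi**2 * (phi - 0.3)     # toy pair contribution

phi = np.linspace(1e-3, 0.45, 2000)
delta_s = s_ex(phi) - s_2(phi)            # RMPE: Delta s = s_ex - s_2

i_min = np.argmin(delta_s)
sign_change = np.where(np.diff(np.sign(delta_s)) > 0)[0]      # negative -> positive crossing
phi_0 = phi[sign_change[0]] if sign_change.size else float("nan")

print(f"phi_min ~ {phi[i_min]:.3f}  Delta_s_min ~ {delta_s[i_min]:.4f}  phi_0 ~ {phi_0:.3f}")
```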

Open Access Article From Identity to Uniqueness: The Emergence of Increasingly Higher Levels of Hierarchy in the Process of the Matter Evolution
Entropy 2018, 20(7), 533; https://doi.org/10.3390/e20070533
Received: 25 June 2018 / Revised: 7 July 2018 / Accepted: 16 July 2018 / Published: 17 July 2018
Cited by 1 | PDF Full-text (2888 KB) | HTML Full-text | XML Full-text
Abstract
This article focuses on several factors of complification that worked during the evolution of our Universe. During the early stages of this evolution, up to the Recombination Era, it was the laws of quantum mechanics; during the Dark Ages, it was gravitation; during the chemical evolution, it was diversification; and during the biological and human evolution, a process of distinctifying. The main event in the evolution of the Universe was the emergence of new levels of hierarchy, which together constitute the process of hierarchogenesis. This process contains 14 such events so far, and its dynamics is presented graphically by a very regular and smooth curve. The function that the curve presents is odd, i.e., symmetric about its central part, owing to the similarity of the patterns of deceleration during the cosmic/chemical evolution (the first half of the general evolution) and acceleration during the biological/human evolution (its second half). The main driver of hierarchogenesis, as described by this odd function, is the counteraction and counterbalance of attraction and repulsion, which take various forms at the different hierarchical levels. The direction and pace of the irreversible and inevitable increase of the Universe's complexity, in accordance with the general law of complification, result from the consistent influence of all these factors. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Open Access Article Free Final Time Input Design Problem for Robust Entropy-Like System Parameter Estimation
Entropy 2018, 20(7), 528; https://doi.org/10.3390/e20070528
Received: 3 June 2018 / Revised: 6 July 2018 / Accepted: 12 July 2018 / Published: 14 July 2018
PDF Full-text (1735 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a novel method is proposed to design a free final time input signal, which is then used in the robust system identification process. The solution of the constrained optimal input design problem is based on the minimization of an extra state variable representing the free final time scaling factor, formulated in the Bolza functional form, subject to a D-efficiency constraint as well as an input energy constraint. The objective function used for the system identification model provides robustness against outlying data and was constructed using the so-called Entropy-Like estimator. The perturbation time interval has a significant impact on the cost of a real-life system identification experiment. The contribution of this work is to examine the economic trade-off between the constraints imposed on the input signal design and the experiment duration when undertaking an identification experiment under real operating conditions. The methodology is applicable to a general class of systems and is supported by numerical examples. Illustrative examples of the Least Squares and the Entropy-Like estimators for system parameter validation, where the measurements include additive white noise, are compared using ellipsoidal confidence regions. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
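For a two-parameter model, the ellipsoidal confidence regions used in the comparison are level sets of the quadratic form defined by the estimated parameter covariance. A sketch of that step follows, with made-up covariance matrices rather than the paper's identification results.

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(cov, center, level=0.95, n_points=200):
    """Boundary of {theta : (theta - c)^T cov^{-1} (theta - c) <= chi-square quantile}."""
    radius2 = chi2.ppf(level, df=len(center))
    eigval, eigvec = np.linalg.eigh(cov)
    angles = np.linspace(0.0, 2.0 * np.pi, n_points)
    circle = np.stack([np.cos(angles), np.sin(angles)])
    # scale the unit circle by the semi-axes and rotate into parameter space
    boundary = eigvec @ (np.sqrt(radius2 * eigval)[:, None] * circle)
    return boundary + np.asarray(center, dtype=float)[:, None]

# Illustrative parameter covariances (not the paper's results): a wider region,
# as one might get from Least Squares with outliers, and a tighter robust one.
cov_ls     = np.array([[0.040, 0.012], [0.012, 0.025]])
cov_robust = np.array([[0.015, 0.004], [0.004, 0.010]])
theta_hat  = [1.0, 0.5]

boundary_ls = confidence_ellipse(cov_ls, theta_hat)        # 2 x 200 array of boundary points
area = lambda cov: np.pi * chi2.ppf(0.95, df=2) * np.sqrt(np.linalg.det(cov))
print(f"95% ellipse area  LS: {area(cov_ls):.4f}   robust: {area(cov_robust):.4f}")
```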

Open Access Article Relating Vertex and Global Graph Entropy in Randomly Generated Graphs
Entropy 2018, 20(7), 481; https://doi.org/10.3390/e20070481
Received: 22 May 2018 / Revised: 14 June 2018 / Accepted: 17 June 2018 / Published: 21 June 2018
PDF Full-text (709 KB) | HTML Full-text | XML Full-text
Abstract
Combinatoric measures of entropy capture the complexity of a graph but rely upon the calculation of its independent sets, or collections of non-adjacent vertices. This decomposition of the vertex set is a known NP-complete problem and, for most real-world graphs, is an inaccessible calculation. Recent work by Dehmer et al. and Tee et al. identified a number of vertex-level measures that do not suffer from this pathological computational complexity, but that can be shown to be effective at quantifying graph complexity. In this paper, we consider whether these local measures are fundamentally equivalent to global entropy measures. Specifically, we investigate the existence of a correlation between vertex-level and global measures of entropy for a narrow subset of random graphs. We use the greedy algorithm approximation for calculating the chromatic information and, therefore, the Körner entropy. We are able to demonstrate strong correlation for this subset of graphs and outline how this may arise theoretically. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
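The computational core of such a comparison can be reproduced in a few lines: generate a random graph, run a greedy colouring, and take the Shannon entropy of the colour-class sizes as the chromatic-information approximation, alongside a simple degree-based entropy as a stand-in for the Dehmer/Tee vertex-level measures. A sketch with illustrative parameters:

```python
import numpy as np
import networkx as nx

def chromatic_entropy(G):
    """Shannon entropy (bits) of colour-class sizes from a greedy colouring."""
    colouring = nx.greedy_color(G, strategy="largest_first")
    sizes = np.bincount(list(colouring.values())).astype(float)
    p = sizes / sizes.sum()
    return -np.sum(p * np.log2(p))

def degree_entropy(G):
    """Degree-distribution Shannon entropy: a simple vertex-level stand-in
    for the Dehmer/Tee vertex measures."""
    deg = np.array([d for _, d in G.degree()], dtype=float)
    p = deg / deg.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Illustrative Erdos-Renyi graphs; size and edge probabilities are arbitrary choices.
for p_edge in (0.05, 0.10, 0.20):
    G = nx.erdos_renyi_graph(200, p_edge, seed=42)
    print(f"p = {p_edge:.2f}  chromatic H ~ {chromatic_entropy(G):.3f} bits"
          f"  degree H ~ {degree_entropy(G):.3f} bits")
```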

Open Access Article Quantum Statistical Manifolds
Entropy 2018, 20(6), 472; https://doi.org/10.3390/e20060472
Received: 26 May 2018 / Revised: 15 June 2018 / Accepted: 15 June 2018 / Published: 17 June 2018
Cited by 1 | PDF Full-text (293 KB) | HTML Full-text | XML Full-text
Abstract
Quantum information geometry studies families of quantum states by means of differential geometry. A new approach is followed with the intention to facilitate the introduction of a more general theory in subsequent work. To this purpose, the emphasis is shifted from a manifold of strictly positive density matrices to a manifold of faithful quantum states on the C*-algebra of bounded linear operators. In addition, ideas from the parameter-free approach to information geometry are adopted. The underlying Hilbert space is assumed to be finite-dimensional. In this way, technicalities are avoided so that strong results are obtained, which one can hope to prove later on in a more general context. Two different atlases are introduced, one in which it is straightforward to show that the quantum states form a Banach manifold, the other which is compatible with the inner product of Bogoliubov and which yields affine coordinates for the exponential connection. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
Open Access Article Divergence from, and Convergence to, Uniformity of Probability Density Quantiles
Entropy 2018, 20(5), 317; https://doi.org/10.3390/e20050317
Received: 7 March 2018 / Revised: 10 April 2018 / Accepted: 19 April 2018 / Published: 25 April 2018
PDF Full-text (995 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We demonstrate that questions of convergence and divergence regarding shapes of distributions can be carried out in a location- and scale-free environment. This environment is the class of probability density quantiles (pdQs), obtained by normalizing the composition of the density with the associated quantile function. It has earlier been shown that the pdQ is representative of a location-scale family and carries essential information regarding the shape and tail behavior of the family. The pdQs are densities of continuous distributions with a common domain, the unit interval, facilitating metric and semi-metric comparisons. The Kullback–Leibler divergences from uniformity of these pdQs are mapped to illustrate their relative positions with respect to uniformity. To gain more insight into the information that is conserved under the pdQ mapping, we repeatedly apply the pdQ mapping and find that further applications of it are quite generally entropy increasing, so convergence to the uniform distribution is investigated. New fixed point theorems are established with elementary probabilistic arguments and illustrated by examples. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
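Numerically, a pdQ is obtained by composing the density with its quantile function and renormalising to unit mass on the unit interval; the divergence from uniformity is then the Kullback–Leibler integral of that density against the uniform one. A sketch for two standard distributions, chosen only for illustration:

```python
import numpy as np
from scipy import stats

def pdQ(dist, n=20001):
    """Probability density quantile: f(Q(u)), renormalised to unit mass on (0, 1)."""
    u = np.linspace(1e-6, 1.0 - 1e-6, n)
    du = u[1] - u[0]
    g = dist.pdf(dist.ppf(u))          # density composed with its quantile function
    g = g / (g.sum() * du)             # renormalise on the unit interval
    return g, du

def kl_from_uniform(g, du):
    """Kullback-Leibler divergence of the pdQ from the uniform density on (0, 1)."""
    mask = g > 0
    return np.sum(g[mask] * np.log(g[mask])) * du

# Distributions chosen purely for illustration: light vs. heavy tails.
for name, dist in [("normal", stats.norm()), ("Cauchy", stats.cauchy())]:
    g, du = pdQ(dist)
    print(f"{name:7s}  KL(pdQ || uniform) ~ {kl_from_uniform(g, du):.4f}")
```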

Open Access Article On Normalized Mutual Information: Measure Derivations and Properties
Entropy 2017, 19(11), 631; https://doi.org/10.3390/e19110631
Received: 26 October 2017 / Revised: 12 November 2017 / Accepted: 20 November 2017 / Published: 22 November 2017
Cited by 1 | PDF Full-text (278 KB) | HTML Full-text | XML Full-text
Abstract
Starting with a new formulation for the mutual information (MI) between a pair of events, this paper derives alternative upper bounds and extends those to the case of two discrete random variables. Normalized mutual information (NMI) measures are then obtained from those bounds, emphasizing the use of least upper bounds. Conditional NMI measures are also derived for three different events and three different random variables. Since the MI formulation for a pair of events is always nonnegative, it can properly be extended to include weighted MI and NMI measures for pairs of events or for random variables that are analogous to the well-known weighted entropy. This weighted MI is generalized to the case of continuous random variables. Such weighted measures have the advantage over previously proposed measures of always being nonnegative. A simple transformation is derived for the NMI, such that the transformed measures have the value-validity property necessary for making various appropriate comparisons between values of those measures. A numerical example is provided. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
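For two discrete random variables, the generic recipe is to compute I(X;Y) from the joint probability table and divide by an upper bound; min(H(X), H(Y)) is one standard least upper bound on MI. The sketch below uses that bound only for illustration; the paper derives and compares its own measures, which are not reproduced here.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table P(x, y)."""
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Illustrative joint distribution of two discrete variables (rows: X, columns: Y).
joint = np.array([[0.20, 0.05, 0.05],
                  [0.05, 0.30, 0.05],
                  [0.05, 0.05, 0.20]])

mi = mutual_information(joint)
nmi = mi / min(entropy(joint.sum(axis=1)), entropy(joint.sum(axis=0)))   # MI <= min(H(X), H(Y))
print(f"I(X;Y) = {mi:.3f} bits,  NMI = {nmi:.3f}")
```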

Review

Open Access Review Entropy, or Information, Unifies Ecology and Evolution and Beyond
Entropy 2018, 20(10), 727; https://doi.org/10.3390/e20100727
Received: 6 July 2018 / Revised: 18 August 2018 / Accepted: 11 September 2018 / Published: 21 September 2018
PDF Full-text (946 KB) | HTML Full-text | XML Full-text
Abstract
This article discusses how entropy/information methods are well suited to analyzing and forecasting the four processes of innovation, transmission, movement, and adaptation, which are the common basis of ecology and evolution. Macroecologists study assemblages of differing species, whereas micro-evolutionary biologists study variants of heritable information within species, such as DNA and epigenetic modifications. These two different modes of variation are driven by the same four basic processes, but approaches to these processes sometimes differ considerably. For example, macroecology often documents patterns without modeling the underlying processes, with some notable exceptions. On the other hand, evolutionary biologists have a long history of deriving and testing mathematical genetic forecasts, previously focusing on entropies such as heterozygosity. Macroecology calls this measure Gini–Simpson, and has borrowed the genetic predictions, but this measure sometimes has shortcomings. It is therefore important to note that predictive equations have now been derived for molecular diversity based on Shannon entropy and mutual information. As a result, we can now forecast all major types of entropy/information, creating a general predictive approach for the four basic processes in ecology and evolution. Additionally, the use of these methods will allow seamless integration with other studies, such as of the physical environment, and may even extend to assisting with evolutionary algorithms. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
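Both diversity measures mentioned are simple functions of the relative abundances p_i: Shannon entropy H = −Σ p_i ln p_i and Gini–Simpson (heterozygosity) 1 − Σ p_i². A sketch with made-up abundance counts:

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon diversity (nats) from raw abundance counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def gini_simpson(counts):
    """Gini-Simpson index: probability that two random draws differ."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

# Made-up species (or allele) counts for two communities.
even_community   = [25, 25, 25, 25]
skewed_community = [85, 10, 3, 2]

for name, counts in [("even", even_community), ("skewed", skewed_community)]:
    print(f"{name:7s} Shannon = {shannon_entropy(counts):.3f}"
          f"  Gini-Simpson = {gini_simpson(counts):.3f}")
```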

Open Access Review Beyond Moments: Extending the Maximum Entropy Principle to Feature Distribution Constraints
Entropy 2018, 20(9), 650; https://doi.org/10.3390/e20090650
Received: 27 June 2018 / Revised: 2 August 2018 / Accepted: 20 August 2018 / Published: 30 August 2018
PDF Full-text (347 KB) | HTML Full-text | XML Full-text
Abstract
The maximum entropy principle introduced by Jaynes proposes that a data distribution should maximize the entropy subject to constraints imposed by the available knowledge. Jaynes provided a solution for the case when constraints were imposed on the expected value of a set of scalar functions of the data. These expected values are typically moments of the distribution. This paper describes how the method of maximum entropy PDF projection can be used to generalize the maximum entropy principle to constraints on the joint distribution of this set of functions. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
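Jaynes's moment-constrained case has the familiar exponential-family solution p_i ∝ exp(−λ f(x_i)), with λ fixed by the constraint. The sketch below solves the classic die-with-prescribed-mean example; it illustrates only this moment-constrained baseline, not the paper's feature-distribution extension.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)            # faces of a die
target_mean = 4.5              # constraint: E[X] = 4.5 (a fair die gives 3.5)

def mean_residual(lam):
    """Constraint residual for the maximum-entropy form p_i proportional to exp(-lam * x_i)."""
    w = np.exp(-lam * x)
    p = w / w.sum()
    return np.dot(p, x) - target_mean

lam = brentq(mean_residual, -5.0, 5.0)     # solve the moment constraint for lambda
p = np.exp(-lam * x)
p /= p.sum()

print("lambda =", round(lam, 4))
print("maximum-entropy probabilities:", np.round(p, 4))
print("check mean:", round(float(np.dot(p, x)), 4))
```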