
Table of Contents

Entropy, Volume 22, Issue 4 (April 2020) – 36 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Energy Conservation in Absorption Refrigeration Cycles Using DES as a New Generation of Green Absorbents
Entropy 2020, 22(4), 409; https://doi.org/10.3390/e22040409 - 03 Apr 2020
Abstract
Deep eutectic solvents (DESs) are emerging green solvents with unique characteristics. Their contribution to atmospheric pollution is negligible, and they can be “designed” for desired properties. In this study, the feasibility of applying DESs (Reline, Ethaline, or Glyceline) as absorbents in absorption refrigeration cycles was investigated. The sophisticated cubic-plus-association (CPA) equation of state, which accounts for the strong intermolecular interactions of such complex systems, was used to estimate the thermodynamic properties. At a fixed set of base-case operating conditions, the coefficients of performance were calculated to be 0.705, 0.713, and 0.716 for the Reline/water, Ethaline/water, and Glyceline/water systems, respectively, with corresponding mass flow rate ratios of 33.73, 11.53, and 16.06. Furthermore, the optimum operating conditions of each system were estimated. To verify feasibility, the results were compared to literature systems, including LiBr/water and various ionic liquid/water systems. The results indicate that DES/water working fluids have the potential to be used in such cycles. Since DESs can be tuned (designed) to desired properties, including their solvent power and their enthalpies of absorption, much further research needs to be done to propose new DESs with higher energy efficiencies. Full article
(This article belongs to the Section Thermodynamics)
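To make the reported figures concrete, the sketch below evaluates a coefficient of performance and a solution circulation (mass flow rate) ratio from assumed single-effect-cycle heat duties and absorbent mass fractions; all numbers and the flow-ratio convention are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: figures of merit for a single-effect absorption
# refrigeration cycle. All inputs are illustrative assumptions.

def cop(q_evaporator_kw, q_generator_kw, w_pump_kw=0.0):
    """Coefficient of performance: cooling delivered per unit of driving heat (plus pump work)."""
    return q_evaporator_kw / (q_generator_kw + w_pump_kw)

def circulation_ratio(x_strong, x_weak):
    """Pumped-solution-to-refrigerant mass flow ratio from an absorbent mass
    balance, where x is the absorbent mass fraction in the strong/weak solution."""
    return x_strong / (x_strong - x_weak)

print(cop(100.0, 140.0, 1.0))         # ~0.709, the order of the COPs quoted above
print(circulation_ratio(0.60, 0.55))  # 12.0, the order of the flow ratios quoted above
```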
Open Access Review
How Incomputable Is Kolmogorov Complexity?
Entropy 2020, 22(4), 408; https://doi.org/10.3390/e22040408 - 03 Apr 2020
Abstract
Kolmogorov complexity is the length of the ultimately compressed version of a file (i.e., anything which can be put in a computer). Formally, it is the length of a shortest program from which the file can be reconstructed. We discuss the incomputability of Kolmogorov complexity, which formal loopholes this leaves us with, recent approaches to compute or approximate Kolmogorov complexity, which approaches are problematic, and which approaches are viable. Full article
(This article belongs to the Special Issue Review Papers for Entropy)
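The approximation approaches discussed in this review can be illustrated with an ordinary compressor: any lossless compressor certifies an upper bound on Kolmogorov complexity, never the exact (incomputable) value. A minimal sketch using Python's zlib:

```python
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length of the zlib-compressed data: an upper bound on K(data).
    Kolmogorov complexity itself is incomputable; a compressor can only
    certify 'K is at most this', never 'K equals this'."""
    return len(zlib.compress(data, 9))

print(complexity_upper_bound(b"ab" * 1000))           # highly regular: small bound
print(complexity_upper_bound(bytes(range(256)) * 8))  # less regular: larger bound
```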
Open Access Article
Particle Swarm Contour Search Algorithm
Entropy 2020, 22(4), 407; https://doi.org/10.3390/e22040407 - 02 Apr 2020
Abstract
In this article, we present a new algorithm called Particle Swarm Contour Search (PSCS)—a Particle Swarm Optimisation-inspired algorithm to find object contours in 2D environments. Currently, most contour-finding algorithms are based on image processing and require a complete overview of the search space in which the contour is to be found. However, for real-world applications this would require complete knowledge of the search space, which may not always be feasible. The proposed algorithm removes this requirement and relies only on the local information of the particles to accurately identify a contour. Particles search for the contour of an object and then traverse along it using their known information about positions inside and outside of the object. Our experiments show that the proposed PSCS algorithm delivers results comparable to the state of the art. Full article
(This article belongs to the Special Issue Unconventional Methods for Particle Swarm Optimization)
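The core primitive such a contour search relies on is a purely local inside/outside test. As a hedged illustration of that primitive (not the authors' PSCS algorithm itself), the sketch below bisects between an inside and an outside sample to pin down one contour point of an assumed object:

```python
# Minimal sketch of the local inside/outside primitive a contour search can
# build on: bisect between a point known to be inside the object and one known
# to be outside until a contour point is bracketed. The indicator function
# (a unit disc) is an illustrative assumption, not the paper's test scenario.

def inside(p):
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def contour_point(p_in, p_out, tol=1e-6):
    """Return a point on the object's contour between an inside and an outside sample."""
    while max(abs(a - b) for a, b in zip(p_in, p_out)) > tol:
        mid = tuple((a + b) / 2.0 for a, b in zip(p_in, p_out))
        if inside(mid):
            p_in = mid
        else:
            p_out = mid
    return p_in

print(contour_point((0.0, 0.0), (2.0, 0.0)))  # ~(1.0, 0.0), on the unit circle
```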
Open Access Article
Use of Entropy in Developing SDG-based Indices for Assessing Regional Sustainable Development: A Provincial Case Study of China
Entropy 2020, 22(4), 406; https://doi.org/10.3390/e22040406 - 02 Apr 2020
Abstract
Sustainable development appears to be the theme of our time. To assess progress toward sustainable development, a simple but comprehensive index is of great use. To this end, a multivariate index of sustainable development was developed in this study based on indicators of the United Nations Sustainable Development Goals (SDGs). To demonstrate the usability of the developed index, we applied it to Fujian Province, China. According to the China SDGs indicators and the situation in Fujian, we divided the SDGs into three dimensions and selected indicators for each dimension. We calculated the weights and two indices with the entropy weight coefficient method, based on data collected and processed for 2007 to 2017. We assessed and analyzed the sustainable development of Fujian with the two indices and drew three main conclusions. First, from 2007 to 2017, the development index of Fujian showed an increasing trend while the coordination index of Fujian fluctuated. Second, it is difficult to smoothly improve the coordination index of Fujian because the development speeds of Goal 3 (Good Health and Well-being) and Goal 16 (Peace, Justice, and Strong Institutions) were low. Third, the coordination index of Fujian changed from strong coordination to medium coordination from 2011 to 2012 because the development speed of the environmental dimension suddenly improved, and again from 2015 to 2016 because the values of the development index of the social dimension were decreasing. To the best of our knowledge, these are the first SDGs-based multivariate indices of sustainable development for a region of China. These indices are applicable to different regions. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology II)
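The entropy weight coefficient method used above is a standard construction: indicators whose values vary more across the years carry more information and receive larger weights. A minimal sketch with an illustrative, made-up data matrix:

```python
# Minimal sketch of the entropy weight coefficient method used to weight
# indicators. The data matrix is an illustrative assumption.
import numpy as np

def entropy_weights(x):
    """x: (n_years, m_indicators), positive-oriented indicator values."""
    p = x / x.sum(axis=0)                      # share of each year per indicator
    k = 1.0 / np.log(x.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    e = -k * plogp.sum(axis=0)                 # entropy of each indicator, in [0, 1]
    d = 1.0 - e                                # degree of diversification
    return d / d.sum()                         # normalized weights

data = np.array([[3.0, 120.0], [3.5, 121.0], [4.2, 119.5]])  # 3 years, 2 indicators
print(entropy_weights(data))  # the more variable indicator gets the larger weight
```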
Open Access Article
Changes of Conformation in Albumin with Temperature by Molecular Dynamics Simulations
Entropy 2020, 22(4), 405; https://doi.org/10.3390/e22040405 - 01 Apr 2020
Abstract
This work presents an analysis of the conformation of albumin in the temperature range of 300 K–312 K, i.e., in the physiological range. Using molecular dynamics simulations, we calculate values of the backbone and dihedral angles for this molecule. We analyze the global dynamic properties of albumin treated as a chain. In this range of temperature, we study parameters of the molecule and the conformational entropy derived from two angles that reflect global dynamics in the conformational space. A thorough rationalization, based on the scaling theory, for the subdiffusion Flory–De Gennes type exponent of 0.4 unfolds in conjunction with picking up the most appreciable fluctuations of the corresponding statistical-test parameter. These fluctuations coincide adequately with entropy fluctuations, namely the oscillations out of thermodynamic equilibrium. Using Fisher’s test, we investigate the conformational entropy over time and suggest its oscillatory properties in the corresponding time domain. Using the Kruskal–Wallis test, we also analyze differences between the root mean square displacement of the molecule at various temperatures. Here we show that its values in the range of 306 K–309 K differ from those at other temperatures. Using the Kullback–Leibler theory, we investigate differences between the distributions of the root mean square displacement for each temperature and time window. Full article
Open Access Article
The Fisher–Rao Distance between Multivariate Normal Distributions: Special Cases, Bounds and Applications
Entropy 2020, 22(4), 404; https://doi.org/10.3390/e22040404 - 01 Apr 2020
Abstract
The Fisher–Rao distance is a measure of dissimilarity between probability distributions which, under certain regularity conditions of the statistical model, is, up to a scaling factor, the unique Riemannian metric invariant under Markov morphisms. It is related to the Shannon entropy and has been used to enlarge the perspective of analysis in a wide variety of domains such as image processing, radar systems, and morphological classification. Here, we consider this metric in the statistical model of multivariate normal probability distributions, for which there is no explicit expression in general, by gathering known results (closed forms for submanifolds and bounds) and by deriving expressions for the distance between distributions with the same covariance matrix and between distributions with mirrored covariance matrices. An application of the Fisher–Rao distance to the simplification of Gaussian mixtures using the hierarchical clustering algorithm is also presented. Full article
(This article belongs to the Special Issue Information Geometry III)
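For intuition about why closed forms are prized here: in the univariate submanifold the Fisher–Rao geometry of N(μ, σ²) is hyperbolic and the distance has a classical closed form. The sketch below implements that textbook special case, independently of the paper's multivariate derivations:

```python
# Minimal sketch: the closed-form Fisher-Rao distance between two univariate
# normals N(mu1, s1^2) and N(mu2, s2^2). The Fisher metric
# ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2 is hyperbolic, so the geodesic
# distance is a scaled Poincare half-plane distance. Standard textbook result.
import math

def fisher_rao_normal(mu1, s1, mu2, s2):
    num = (mu2 - mu1) ** 2 / 2.0 + (s2 - s1) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (2.0 * s1 * s2))

print(fisher_rao_normal(0.0, 1.0, 0.0, 1.0))  # 0.0: identical distributions
print(fisher_rao_normal(0.0, 1.0, 1.0, 1.0))  # > 0: shifted mean
```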
Open Access Article
Rate Adaption for Secure HARQ-CC System with Multiple Eavesdroppers
Entropy 2020, 22(4), 403; https://doi.org/10.3390/e22040403 - 31 Mar 2020
Abstract
In this paper, we studied the secure transmission of a hybrid automatic repeat request with chase combining (HARQ-CC) system in the presence of multiple eavesdroppers and limited latency. First, we analyzed some critical performance metrics, including the connection outage probability (COP), secrecy outage probability (SOP) and effective secrecy throughput (EST). Then, to maximize the EST, three rate-adaption optimization problems were discussed: (i) optimizing the code rate with a given secrecy redundancy rate by a parameterized closed-form solution; (ii) optimizing the secrecy redundancy rate with a given code rate by a fixed-point method; and (iii) optimizing both the code rate and the secrecy redundancy rate by an iterative optimization algorithm. COP and SOP constraints were also considered in these problems, and the corresponding solutions were deduced. Finally, numerical and simulation results verified our conclusions: the approximated SOP matches Monte Carlo simulation well under a strict reliability constraint, and the optimized transmission rate enhances the EST efficiently with multiple eavesdroppers and retransmissions. Moreover, the influence of the number of eavesdroppers on secrecy performance was analyzed; briefly, secrecy performance inevitably deteriorates with an increasing number of eavesdroppers due to increased information leakage. Full article
Open Access Article
Constraining LQG Graph with Light Surfaces: Properties of BH Thermodynamics for Mini-Super-Space, Semi-Classical Polymeric BH
Entropy 2020, 22(4), 402; https://doi.org/10.3390/e22040402 - 31 Mar 2020
Abstract
This work contributes to the search for potential observational evidence of quantum effects on geometry in a black hole astrophysical context. We consider properties of a family of loop quantum corrected regular black hole (BH) solutions and their horizons, focusing on the geometry symmetries. We study a recently developed model in which the geometry is determined by a metric quantum modification outside the horizon: a regular static spherical solution of the mini-super-space BH metric with Loop Quantum Gravity (LQG) corrections. The solutions are characterized by delineating certain polymeric functions on the basis of the properties of the horizons and the emergence of a singularity in the limiting case of the Schwarzschild geometry. We discuss particular metric solutions, based on the parameters of the polymeric model, related to similar properties of structures, the metric Killing bundles (or metric bundles, MBs), related to the BH horizons’ properties. A comparison is made with the Reissner–Nordström geometry and the Kerr geometry, with which analogies exist in terms of their respective MB properties. The analysis provides a way to recognize these geometries and detect their main distinctive phenomenological evidence of LQG origin, on the basis of the detection of stationary/static observers and the properties of light-like orbits, within the analysis of the (conformally invariant) MBs related to the (local) causal structure. This approach could be applied to other quantum corrected BH solutions, constraining the characteristics of the underlying LQG graph, such as the minimal loop area, through the analysis of the null-like orbits and photon detection. The study of light surfaces, associated with a diversified and wide range of BH phenomenology and grounding the MB definition, provides a channel to search for possible astrophysical evidence. The main BH thermodynamic characteristics, namely luminosity, surface gravity, and temperature, are studied. Ultimately, the application of this method to this spherically symmetric approximate solution provides a way to clarify some formal aspects of MBs in the presence of static, spherically symmetric spacetimes. Full article
(This article belongs to the Special Issue Entropy in Covariant Quantum Gravity)
Open Access Article
Heat Transfer Enhancement in Unsteady MHD Natural Convective Flow of CNTs Oldroyd-B Nanofluid Under Ramped Wall Velocity and Ramped Wall Temperature
Entropy 2020, 22(4), 401; https://doi.org/10.3390/e22040401 - 31 Mar 2020
Abstract
This article analyzes heat transfer enhancement in incompressible, time-dependent magnetohydrodynamic (MHD) convective flow of an Oldroyd-B nanofluid with carbon nanotubes (CNTs). Single-wall carbon nanotubes (SWCNTs) and multi-wall carbon nanotubes (MWCNTs) are immersed in sodium alginate as the base fluid. The flow is restricted to an infinite vertical plate saturated in a porous material, incorporating the generalized Darcy’s law and heat suction/injection. The governing equations for momentum, shear stress and energy are modelled as partial differential equations with ramped wall temperature and ramped wall velocity boundary conditions. The Laplace transformation is applied to convert the governing partial differential equations to ordinary differential equations, and the resulting complex multivalued functions of the Laplace parameter are handled with numerical inversion to obtain the solutions in the real time domain. An expression for the Nusselt number is also obtained to clearly examine the difference in the rate of heat transfer. A comparison between the isothermal wall condition and the ramped wall condition is also made to analyze the difference between the two profiles. A graphical study is conducted to analyze how the fluid profiles are significantly affected by several pertinent parameters. The rate of heat transfer increases with increasing nanoparticle volume fraction, while shear stress reduces with elevation in retardation time. Moreover, the flow accelerates with an increase in the Grashof number and the porosity parameter. For every parameter, a comparison between the solutions for SWCNTs and MWCNTs is also presented. Full article
(This article belongs to the Special Issue Heat Transfer in Nanofluids and Porous Media)
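The numerical Laplace inversion step described above can be carried out with several classical schemes; the paper does not state which one it uses, so the sketch below illustrates the idea with the Gaver–Stehfest algorithm and checks it on a transform with a known inverse:

```python
# Minimal sketch of Gaver-Stehfest numerical inversion of a Laplace transform
# F(s) back to f(t). Illustrative only: the paper does not specify its
# inversion scheme.
from math import factorial, log

def stehfest_coefficients(n):
    """Stehfest weights V_k for an even number of terms n."""
    h = n // 2
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            s += (j ** h * factorial(2 * j)) / (
                factorial(h - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        v.append((-1) ** (k + h) * s)
    return v

def laplace_invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    v = stehfest_coefficients(n)
    a = log(2.0) / t
    return a * sum(v[k - 1] * F(k * a) for k in range(1, n + 1))

# Check against a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t).
print(laplace_invert(lambda s: 1.0 / (s + 1.0), 1.0))  # ~0.3679
```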
Open Access Article
Hard-Decision Coded Modulation for High-Throughput Short-Reach Optical Interconnect
Entropy 2020, 22(4), 400; https://doi.org/10.3390/e22040400 - 31 Mar 2020
Abstract
Coded modulation (CM), a combination of forward error correction (FEC) and high order modulation formats, has become a key part of modern optical communication systems. Designing CM schemes with strict complexity requirements for optical communications (e.g., data center interconnects) is still challenging, mainly because of the expected low latency, low overhead, and stringent high data rate requirements. In this paper, we propose a CM scheme with bit-wise hard-decision FEC and geometric shaping. In particular, we propose to combine the recently introduced soft-aided bit-marking decoding algorithm for staircase codes (SCCs) with geometrically-shaped constellations. The main goal of this CM scheme is to jointly boost the coding gain and provide shaping gain, while keeping the complexity low. When compared to existing CM systems based on M-ary quadrature-amplitude modulation (MQAM, M = 64, 128, 256) and conventional decoding of SCCs, the proposed scheme shows improvements of up to 0.83 dB at a bit-error rate of 10^-6 in the additive white Gaussian noise channel. For a nonlinear optical fiber system, simulation results show up to a 24% reach increase. In addition, the proposed CM scheme enables rate adaptivity in single-wavelength systems, offering six different data rates between 450 Gbit/s and 666 Gbit/s. Full article
(This article belongs to the Special Issue Information Theory of Optical Fiber)
Open Access Article
Robust Regression with Density Power Divergence: Theory, Comparisons, and Data Analysis
Entropy 2020, 22(4), 399; https://doi.org/10.3390/e22040399 - 31 Mar 2020
Abstract
Minimum density power divergence estimation provides a general framework for robust statistics, depending on a parameter α, which determines the robustness properties of the method. The usual estimation method is numerical minimization of the power divergence. The paper considers the special case of linear regression. We developed an alternative estimation procedure using the methods of S-estimation. The rho function so obtained is proportional to one minus a suitably scaled normal density raised to the power α. We used the theory of S-estimation to determine the asymptotic efficiency and breakdown point for this new form of S-estimation. Two sets of comparisons were made. In one, S power divergence is compared with other S-estimators using four distinct rho functions. Plots of efficiency against breakdown point show that the properties of S power divergence are close to those of Tukey’s biweight. The second set of comparisons is between S power divergence estimation and numerical minimization. Monitoring these two procedures in terms of breakdown point shows that the numerical minimization yields a procedure with larger robust residuals and a lower empirical breakdown point, thus providing an estimate of α leading to more efficient parameter estimates. Full article
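Reading "one minus a suitably scaled normal density raised to the power α" literally, and up to the scaling the paper leaves to its derivation, the rho function has the bounded form sketched below; the scaling here is an assumption for illustration:

```python
# Minimal sketch of a density-power-divergence-style rho function,
# rho_alpha(r) = 1 - exp(-alpha * r^2 / 2). The exact scaling used in the
# paper is not reproduced here; this is illustrative.
import math

def rho(r, alpha):
    return 1.0 - math.exp(-alpha * r * r / 2.0)

# Bounded like Tukey's biweight: large residuals saturate instead of
# growing without bound, which is what yields robustness to outliers.
for r in (0.0, 1.0, 3.0, 10.0):
    print(r, rho(r, alpha=0.5))
```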
Open Access Article
Completeness of Classical Thermodynamics: The Ideal Gas, the Unconventional Systems, the Rubber Band, the Paramagnetic Solid and the Kelly Plasma
Entropy 2020, 22(4), 398; https://doi.org/10.3390/e22040398 - 31 Mar 2020
Abstract
A method is developed to complete an incomplete set of equations of state of a thermodynamic system. Once the complete set of equations is found, the Hessian and entropy methods are presented in order to verify the thermodynamic validity of the system. An original approach, called the completeness method, is presented in order to complete all the information about the thermodynamic system. The Hessian method is improved by developing a procedure to calculate the Hessian when an expression for the internal energy as a fundamental equation is not available. The entropy method is improved by showing how to prove the first-degree homogeneity of the entropy without a fundamental expression for it. The completeness method is developed to give a total study of the thermodynamic system by obtaining the set of independent TdS equations and a recipe for obtaining all the thermodynamic identities. To show the viability of the methods, they are applied to a typical thermodynamic system, the ideal gas. Some well-known and some lesser-known thermodynamic identities are deduced. We also analyze a set of nonphysical equations of state, showing that they can represent a thermodynamic system, but in an unstable manner. The rubber band, the paramagnetic solid and the Kelly equation of state for a plasma are corrected using our methods. In each case, a comparison is made between the three methods, showing that all three are complementary to the understanding of a thermodynamic system. Full article
(This article belongs to the Section Thermodynamics)
Open Access Article
Performance of Universal Reciprocating Heat-Engine Cycle with Variable Specific Heats Ratio of Working Fluid
Entropy 2020, 22(4), 397; https://doi.org/10.3390/e22040397 - 31 Mar 2020
Abstract
Considering the finite-time characteristic, heat transfer loss, friction loss and internal irreversibility loss, an air-standard reciprocating heat-engine cycle model is established using finite-time thermodynamics. The cycle model, which consists of two endothermic processes, two exothermic processes and two adiabatic processes, is well generalized. The performance parameters, including the power output and efficiency (PAE), are obtained. The PAE versus compression ratio relations are obtained by numerical computation. The impacts of a variable specific heats ratio (SHR) of the working fluid (WF) on the universal cycle performance are analyzed and various special cycles are also discussed. The results include the PAE performance characteristics of various special cycles (including Miller, Dual, Atkinson, Brayton, Diesel and Otto cycles) when the SHR of the WF is constant and variable (with the SHR varying as a linear function (LF) or nonlinear function (NLF) of WF temperature). The maximum power outputs and the corresponding optimal compression ratios, as well as the maximum efficiencies and the corresponding optimal compression ratios, for the various special cycles with the three SHR models are compared. Full article
Open Access Article
On Geometry of Information Flow for Causal Inference
Entropy 2020, 22(4), 396; https://doi.org/10.3390/e22040396 - 30 Mar 2020
Abstract
Causal inference is perhaps one of the most fundamental concepts in science, beginning originally from the works of some of the ancient philosophers, through today, but also weaved strongly in current work from statisticians, machine learning experts, and scientists from many other fields. This paper takes the perspective of information flow, which includes the Nobel prize winning work on Granger-causality, and the recently highly popular transfer entropy, these being probabilistic in nature. Our main contribution will be to develop analysis tools that will allow a geometric interpretation of information flow as a causal inference indicated by positive transfer entropy. We will describe the effective dimensionality of an underlying manifold as projected into the outcome space that summarizes information flow. Therefore, contrasting the probabilistic and geometric perspectives, we will introduce a new measure of causal inference based on the fractal correlation dimension conditionally applied to competing explanations of future forecasts, which we will write GeoC_{y→x}. This avoids some of the boundedness issues that we show exist for the transfer entropy, T_{y→x}. We will highlight our discussions with data developed from synthetic models of successively more complex nature: these include the Hénon map example, and finally a real physiological example relating breathing and heart rate function. Full article
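The fractal correlation dimension that underlies the proposed measure is commonly estimated with the Grassberger–Procaccia correlation sum; a minimal sketch of that generic estimator (not the paper's conditional GeoC construction):

```python
# Minimal sketch: Grassberger-Procaccia estimate of the fractal correlation
# dimension. C(r) ~ r^D for small r, so D is the slope of log C(r) vs log r.
# Generic estimator for illustration; not the paper's conditional GeoC measure.
import numpy as np

def correlation_sum(points, r):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    return (np.sum(d < r) - n) / (n * (n - 1))   # exclude self-pairs

def correlation_dimension(points, r1, r2):
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))                    # points filling the unit square
print(correlation_dimension(pts, 0.05, 0.1))   # ~2, the dimension of the square
```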
Open Access Article
Generalized Term Similarity for Feature Selection in Text Classification Using Quadratic Programming
Entropy 2020, 22(4), 395; https://doi.org/10.3390/e22040395 - 30 Mar 2020
Abstract
The rapid growth of Internet technologies has led to an enormous increase in the number of electronic documents used worldwide. To organize and manage big data for unstructured documents effectively and efficiently, text categorization has been employed in recent decades. To conduct text categorization tasks, documents are usually represented using the bag-of-words model, owing to its simplicity. In this representation for text classification, feature selection becomes essential because all terms in the vocabulary induce an enormous feature space corresponding to the documents. In this paper, we propose a new feature selection method that considers term similarity to avoid the selection of redundant terms. Term similarity is measured using a general method such as mutual information, and serves as a second measure in feature selection in addition to term ranking. To balance term ranking and term similarity in feature selection, we use a quadratic programming-based numerical optimization approach. Experimental results demonstrate that considering term similarity is effective and achieves higher accuracy than conventional methods. The source code of the proposed method can be downloaded from ‘http://ai.cau.ac.kr/?f=softwares&m=tcqp’. Full article
(This article belongs to the Special Issue Information Theoretic Feature Selection Methods for Big Data)
Open Access Article
Analyzing the Influence of Hyper-parameters and Regularizers of Topic Modeling in Terms of Renyi Entropy
Entropy 2020, 22(4), 394; https://doi.org/10.3390/e22040394 - 30 Mar 2020
Abstract
Topic modeling is a popular technique for clustering large collections of text documents. A variety of regularization types are implemented in topic modeling. In this paper, we propose a novel approach for analyzing the influence of different regularization types on the results of topic modeling. Based on Renyi entropy, this approach is inspired by concepts from statistical physics, where an inferred topical structure of a collection can be considered an information statistical system residing in a non-equilibrium state. By testing our approach on four models—Probabilistic Latent Semantic Analysis (pLSA), Additive Regularization of Topic Models (BigARTM), Latent Dirichlet Allocation (LDA) with Gibbs sampling, and LDA with variational inference (VLDA)—we first show that the minimum of Renyi entropy coincides with the “true” number of topics, as determined in two labelled collections. Simultaneously, we find that the Hierarchical Dirichlet Process (HDP) model, a well-known approach for topic number optimization, fails to detect such an optimum. Next, we demonstrate that large values of the regularization coefficient in BigARTM significantly shift the minimum of entropy away from the optimal topic number, an effect that is not observed for hyper-parameters in LDA with Gibbs sampling. We conclude that regularization may introduce unpredictable distortions into topic models, which needs further research. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
Open Access Article
Spatial-Temporal Characteristic Analysis of Ethnic Toponyms Based on Spatial Information Entropy at the Rural Level in Northeast China
Entropy 2020, 22(4), 393; https://doi.org/10.3390/e22040393 - 30 Mar 2020
Abstract
As a symbolic language, toponyms have inherited unique local historical culture over the long course of historical development. As the birthplace of the Manchu, Northeast China has many toponyms originating from multiple ethnic groups (e.g., Manchu, Mongol, Korean, Hui, and Xibe) that possess unique cultural connotations. This study aimed to (1) establish a spatial-temporal database of toponyms in Northeast China using a multi-source data set, and identify their ethnic types and origin times; and (2) explore the geographical distribution characteristics of ethnic toponyms and the evolution of rural settlements by comparing spatial analysis and spatial information entropy methods. The results show that toponyms reflect not only the spatial distribution characteristics of the density and direction of ethnic groups, but also the migration patterns of rural settlements. The results also confirm that toponyms contain unique cultural connotations and provide a theoretical basis for protecting and promoting the cultural connotations of toponyms. This research provides an entropic perspective and method for exploring the spatial-temporal evolutionary characteristics of ethnic groups and toponym mapping. Full article
(This article belongs to the Special Issue Information Theory for Human and Social Processes)
Open Access Article
Photon Detection as a Process of Information Gain
Entropy 2020, 22(4), 392; https://doi.org/10.3390/e22040392 - 30 Mar 2020
Abstract
Making use of the equivalence between information and entropy, we have shown in a recent paper that particles moving with a kinetic energy ε carry potential information i_pot(ε, T) = (1/ln 2) · ε/(k_B T) relative to a heat reservoir of temperature T. In this paper we build on this result and consider in more detail the process of information gain in photon detection. Considering photons of energy E_ph and a photo-ionization detector operated at a temperature T_D, we evaluate the signal-to-noise ratio SN(E_ph, T_D) for different detector designs and detector operation conditions and show that the information gain realized upon detection, i_real(E_ph, T_D), always remains smaller than the potential information i_pot(E_ph, T_D) carried by the photons themselves, i.e., i_real(E_ph, T_D) = (1/ln 2) · ln(SN(E_ph, T_D)) ≤ i_pot(E_ph, T_D) = (1/ln 2) · E_ph/(k_B T_D). This result is shown to be generally valid for all kinds of technical photon detectors, which shows that i_pot(E_ph, T_D) can indeed be regarded as an intrinsic information content that is carried with the photons themselves. Overall, our results suggest that photon detectors perform as thermodynamic engines that incompletely convert potential information into realized information, with an efficiency that is limited by the second law of thermodynamics and the Landauer energy bounds on information gain and information erasure. Full article
(This article belongs to the Special Issue The Landauer Principle: Meaning, Physical Roots and Applications)
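As a worked number under the final formula above: a green photon of roughly 2.3 eV detected at room temperature carries on the order of a hundred bits of potential information. The photon energy and detector temperature below are illustrative choices:

```python
# Worked example for i_pot(E_ph, T_D) = E_ph / (k_B * T_D) / ln(2).
# Photon energy and detector temperature are illustrative assumptions.
import math

k_B = 8.617333e-5   # Boltzmann constant in eV/K
E_ph = 2.3          # eV, roughly a green photon
T_D = 300.0         # K, room-temperature detector

i_pot = E_ph / (k_B * T_D) / math.log(2.0)
print(f"i_pot = {i_pot:.1f} bits")  # ~128 bits
```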
Open Access Article
CDE++: Learning Categorical Data Embedding by Enhancing Heterogeneous Feature Value Coupling Relationships
Entropy 2020, 22(4), 391; https://doi.org/10.3390/e22040391 - 29 Mar 2020
Abstract
Categorical data are ubiquitous in machine learning tasks, and the representation of categorical data plays an important role in learning performance. The heterogeneous coupling relationships between features and feature values reflect the characteristics of real-world categorical data, and these need to be captured in the representations. This paper proposes an enhanced categorical data embedding method, CDE++, which captures the heterogeneous feature value coupling relationships in the representations. Based on information theory and the hierarchical couplings defined in our previous work CDE (Categorical Data Embedding by learning hierarchical value coupling), CDE++ adopts mutual information and margin entropy to capture feature couplings and designs a hybrid clustering strategy to capture multiple types of feature value clusters. Moreover, an autoencoder is used to learn non-linear couplings between features and value clusters. The categorical data embeddings generated by CDE++ are low-dimensional numerical vectors which can be directly applied to clustering and classification, and they achieve the best performance compared with other categorical representation learning methods. Parameter sensitivity and scalability tests are also conducted to demonstrate the superiority of CDE++. Full article
(This article belongs to the Special Issue Information Theoretic Feature Selection Methods for Big Data)
Open Access Article
Quaternion Valued Risk Diversification
Entropy 2020, 22(4), 390; https://doi.org/10.3390/e22040390 - 29 Mar 2020
Abstract
Risk diversification is an important topic for portfolio managers. Various portfolio optimization algorithms have been developed to minimize portfolio risk under certain constraints. As an extension of the complex risk diversification portfolio proposed by Uchiyama, Kadoya, and Nakagawa in January 2019 (Uchiyama et al., Entropy 2019, 21, 119), we propose a risk diversification portfolio construction method which incorporates quaternion risk. We show that the proposed method outperforms the conventional complex risk diversification portfolio method. Full article
Open Access Article
Information Bottleneck for Estimating Treatment Effects with Systematically Missing Covariates
Entropy 2020, 22(4), 389; https://doi.org/10.3390/e22040389 - 29 Mar 2020
Abstract
Estimating the effects of an intervention from high-dimensional observational data is a challenging problem due to the existence of confounding. The task is often further complicated in healthcare applications where a set of observations may be entirely missing for certain patients at test time, thereby prohibiting accurate inference. In this paper, we address this issue using an approach based on the information bottleneck to reason about the effects of interventions. To this end, we first train an information bottleneck to perform a low-dimensional compression of covariates by explicitly considering the relevance of information for treatment effects. As a second step, we subsequently use the compressed covariates to perform a transfer of relevant information to cases where data are missing during testing. In doing so, we can reliably and accurately estimate treatment effects even in the absence of a full set of covariate information at test time. Our results on two causal inference benchmarks and a real application for treating sepsis show that our method achieves state-of-the-art performance, without compromising interpretability. Full article
(This article belongs to the Special Issue The Information Bottleneck in Deep Learning)
Open Access Article
A Low Complexity Near-Optimal Iterative Linear Detector for Massive MIMO in Realistic Radio Channels of 5G Communication Systems
Entropy 2020, 22(4), 388; https://doi.org/10.3390/e22040388 - 28 Mar 2020
Abstract
Massive multiple-input multiple-output (M-MIMO) is a substantial pillar of fifth generation (5G) mobile communication systems. Although the maximum likelihood (ML) detector attains the optimum performance, it has exponential complexity. Linear detectors are one of the substitutes and they are comparatively simple to implement. Unfortunately, they sustain a considerable performance loss in highly loaded systems. They also involve a matrix inversion, which is not hardware-friendly. In addition, if the channel matrix is singular or nearly singular, the system is classified as ill-conditioned and the signal cannot be equalized. To overcome the inherent noise enhancement, iterative matrix inversion methods are used in detector design, where an approximate matrix inversion replaces the exact computation. In this paper, we study a linear detector based on iterative matrix inversion methods in realistic radio channels generated with the QUAsi Deterministic RadIo channel GenerAtor (QuaDRiGa) package. Numerical results illustrate that the conjugate-gradient (CG) method is numerically robust and obtains the best performance with the lowest number of multiplications. In the QuaDRiGa environment, iterative methods require a large number of iterations (n) to obtain satisfactory performance. This paper also shows that when the ratio between the user antennas and base station (BS) antennas (β) is close to 1, iterative matrix inversion methods do not attain good detector performance. Full article
(This article belongs to the Special Issue Information Theory and 5G/6G Mobile Communications)
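The conjugate-gradient detector favoured by the paper's results can be sketched generically: solve the MMSE normal equations iteratively instead of inverting the Gram matrix. Dimensions, noise level and iteration count below are illustrative assumptions:

```python
# Minimal sketch: MMSE detection for massive MIMO via the conjugate gradient
# (CG) method instead of an explicit matrix inversion. It solves
# (H^H H + sigma^2 I) x = H^H y. All parameters are illustrative assumptions.
import numpy as np

def cg_detect(H, y, sigma2, iters=8):
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r.conj() @ r) / (p.conj() @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new.conj() @ r_new) / (r.conj() @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(1)
H = (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))) / np.sqrt(2)
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=8)  # QPSK symbols
y = H @ s + 0.1 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
print(np.round(cg_detect(H, y, sigma2=0.02), 2))  # close to the sent symbols
```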
Open Access Article
Statistical Approaches for the Analysis of Dependency Among Neurons Under Noise
Entropy 2020, 22(4), 387; https://doi.org/10.3390/e22040387 - 28 Mar 2020
Abstract
Neuronal noise is a major factor affecting the communication between coupled neurons. In this work, we propose a statistical toolset to infer the coupling between two neurons under noise. We estimate these statistical dependencies from data generated by a coupled Hodgkin–Huxley (HH) model with additive noise. To infer the coupling from observation data, we employ copulas and information-theoretic quantities, such as the mutual information (MI) and the transfer entropy (TE). Copulas and MI between two variables are symmetric quantities, whereas TE is asymmetric. We demonstrate the performance of copulas and MI as functions of different noise levels and show that they are effective in identifying the interactions due to coupling and noise. Moreover, we analyze the inference of TE values between neurons as a function of noise and conclude that TE is an effective tool for determining the direction of coupling between neurons under the effects of noise. Full article
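As a generic illustration of the asymmetry that makes TE useful here, the sketch below is a plug-in estimator on synthetic binary series (not the paper's HH-model analysis):

```python
# Minimal sketch: transfer entropy TE(X -> Y) for binary time series with
# history length 1: TE = sum p(y1, y0, x0) * log[ p(y1 | y0, x0) / p(y1 | y0) ].
# Plug-in estimator on synthetic data; illustrative only.
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)                 # y copies x with one step of delay
print(transfer_entropy(x, y))     # ~1 bit: strong X -> Y flow
print(transfer_entropy(y, x))     # ~0 bits: no flow in reverse
```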
Open Access Article
Analysis of Solidarity Effect for Entropy, Pareto, and Gini Indices on Two-Class Society Using Kinetic Wealth Exchange Model
Entropy 2020, 22(4), 386; https://doi.org/10.3390/e22040386 - 28 Mar 2020
Abstract
It is well known that two different underlying dynamics lead to different patterns of income/wealth distribution: the Boltzmann–Gibbs form at the lower end and the Pareto-like power-law form at the higher end. The Boltzmann–Gibbs distribution is naturally derived from maximizing the entropy of random interactions among agents, whereas the Pareto distribution requires a rational approach of economics dependent on the wealth level. More interestingly, the Pareto regime is very dynamic, whereas the Boltzmann–Gibbs regime is stable over time. There are also cases in which the distributions of income/wealth are bimodal or polymodal. In order to incorporate the dynamic aspects of the Pareto regime and the polymodal forms of income/wealth distribution into one stochastic model, we present a modified agent-based model based on classical kinetic wealth exchange models. First, we adopt a simple two-class society consisting of the rich and the poor, where agents in the same class engage in random exchanges while agents in different classes perform wealth-dependent winner-takes-all trading. This modification leads the system to an extremely polarized society while preserving the Pareto exponent. Second, we incorporate solidarity formation among agents belonging to the lower class, in order to confront a super-rich agent. This modification leads the system to a drastically bimodal distribution of wealth with a Pareto exponent that varies with the solidarity parameter: the Pareto regime becomes narrower and the Pareto exponent larger as the solidarity parameter increases. We argue that solidarity formation is the key ingredient in the varying Pareto exponent and the polymodal distribution. Lastly, we take two approaches to evaluating the level of wealth inequality: Gini coefficients and the entropy measure. According to the numerical results, an increasing solidarity parameter decreases the Gini coefficient nonlinearly rather than linearly, whereas the entropy measure is robust over varying solidarity parameters, implying that there is a trade-off between the intermediate party and the high end. Full article
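The Gini coefficient used above has a compact estimator from a sorted wealth sample; a minimal sketch with illustrative data:

```python
# Minimal sketch: Gini coefficient of a wealth sample, equivalent to the
# mean absolute difference divided by twice the mean. Data are illustrative.
import numpy as np

def gini(wealth):
    w = np.sort(np.asarray(wealth, dtype=float))
    n = len(w)
    index = np.arange(1, n + 1)
    return (2 * np.sum(index * w) / (n * np.sum(w))) - (n + 1) / n

print(gini(np.full(1000, 5.0)))        # 0.0: perfect equality
rng = np.random.default_rng(3)
print(gini(rng.pareto(1.5, 100000)))   # heavy-tailed sample: high inequality
```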
Open Access Article
Can Transfer Entropy Infer Information Flow in Neuronal Circuits for Cognitive Processing?
Entropy 2020, 22(4), 385; https://doi.org/10.3390/e22040385 - 28 Mar 2020
Abstract
How cognitive neural systems process information is largely unknown, in part because of how difficult it is to accurately follow the flow of information from sensors via neurons to actuators. Measuring the flow of information is different from measuring correlations between firing neurons, for which several measures are available, foremost among them the Shannon information, which is an undirected measure. Several information-theoretic notions of “directed information” have been used to successfully detect the flow of information in some systems, in particular in the neuroscience community. However, recent work has shown that directed information measures such as transfer entropy can sometimes inadequately estimate information flow, or even fail to identify manifest directed influences, especially if neurons contribute in a cryptographic manner to influence the effector neuron. Because it is unclear how often such cryptic influences emerge in cognitive systems, the usefulness of transfer entropy measures to reconstruct information flow is unknown. Here, we test how often cryptographic logic emerges in an evolutionary process that generates artificial neural circuits for two fundamental cognitive tasks (motion detection and sound localization). Besides counting the frequency of problematic logic gates, we also test whether transfer entropy applied to an activity time-series recorded from behaving digital brains can infer information flow, compared to a ground-truth model of direct influence constructed from connectivity and circuit logic. Our results suggest that transfer entropy will sometimes fail to infer directed information when it exists, and sometimes suggest a causal connection when there is none. However, the extent of incorrect inference strongly depends on the cognitive task considered. These results emphasize the importance of understanding the fundamental logic processes that contribute to information flow in cognitive processing, and quantifying their relevance in any given nervous system. Full article
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
Open Access Article
Channels’ Confirmation and Predictions’ Confirmation: From the Medical Test to the Raven Paradox
Entropy 2020, 22(4), 384; https://doi.org/10.3390/e22040384 - 26 Mar 2020
Abstract
After long arguments between positivism and falsificationism, the verification of universal hypotheses was replaced with the confirmation of uncertain major premises. Unfortunately, Hempel proposed the Raven Paradox. Then, Carnap used the increment of logical probability as the confirmation measure. So far, many confirmation measures have been proposed. Among them, measure F, proposed by Kemeny and Oppenheim, possesses the symmetries and asymmetries proposed by Eells and Fitelson, the monotonicity proposed by Greco et al., and the normalizing property suggested by many researchers. Based on the semantic information theory, a measure b* similar to F is derived from the medical test. Like the likelihood ratio, measures b* and F can only indicate the quality of channels or testing means, not the quality of probability predictions. Furthermore, it is still not easy to use b*, F, or another measure to clarify the Raven Paradox. For this reason, measure c*, similar to the correct rate, is derived. Measure c* supports the Nicod Criterion and undermines the Equivalence Condition, and hence can be used to eliminate the Raven Paradox. An example indicates that measures F and b* are helpful for diagnosing Novel Coronavirus infection, whereas the most popular confirmation measures are not. Another example reveals that none of the popular confirmation measures can explain why a black raven confirms “Ravens are black” more strongly than a piece of chalk does. Measures F, b*, and c* indicate that the existence of fewer counterexamples is more important than the existence of more positive examples, and hence are compatible with Popper’s falsification thought. Full article
(This article belongs to the Special Issue Data Science: Measuring Uncertainties)
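The Kemeny–Oppenheim measure F discussed above has a simple closed form in the medical-test setting, computable from a test's sensitivity and false positive rate; the numbers below are illustrative:

```python
# Minimal sketch: the Kemeny-Oppenheim confirmation measure
# F(h, e) = [P(e|h) - P(e|~h)] / [P(e|h) + P(e|~h)] for a medical test.
# The test characteristics are illustrative assumptions.
def kemeny_oppenheim_f(p_e_given_h, p_e_given_not_h):
    return (p_e_given_h - p_e_given_not_h) / (p_e_given_h + p_e_given_not_h)

sensitivity = 0.9           # P(positive test | infected)
false_positive_rate = 0.05  # P(positive test | not infected)
print(kemeny_oppenheim_f(sensitivity, false_positive_rate))  # ~0.895
```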
Open Access Article
Endoreversible Modeling of a Hydraulic Recuperation System
Entropy 2020, 22(4), 383; https://doi.org/10.3390/e22040383 - 26 Mar 2020
Abstract
Hybrid drive systems able to recover and reuse a vehicle's braking energy can reduce fuel consumption, air pollution and operating costs. Among them, hydraulic recuperation systems are particularly suitable for commercial vehicles, especially if they are already equipped with a hydraulic system. Thus far, the investigation of such systems has been limited to individual components or to optimizing their control. In this paper, we focus on thermodynamic effects and their impact on the overall system's energy saving potential, using endoreversible thermodynamics as the framework for modeling. The dynamical behavior of the hydraulic recuperation system as well as the energy savings are estimated using real data from a vehicle suitable for the application. Energy savings of around 10% when accelerating the vehicle and a reduction of around 58% in the energy transferred to the conventional disc brakes are predicted. We further vary certain design and loss parameters—such as accumulator volume, displacement of the hydraulic unit, heat transfer coefficients or pipe diameter—and discuss their influence on the energy saving potential of the system. It turns out that the heat transfer coefficients and pipe diameter are of less importance than the accumulator volume and the displacement of the hydraulic unit. Full article
(This article belongs to the Section Thermodynamics)
Open Access Article
Spectral Structure and Many-Body Dynamics of Ultracold Bosons in a Double-Well
Entropy 2020, 22(4), 382; https://doi.org/10.3390/e22040382 - 26 Mar 2020
Abstract
We examine the spectral structure and many-body dynamics of two and three repulsively interacting bosons trapped in a one-dimensional double-well, for variable barrier height, inter-particle interaction strength, and initial conditions. By exact diagonalization of the many-particle Hamiltonian, we specifically explore the dynamical behavior of the particles launched either at the single-particle ground state or saddle-point energy, in a time-independent potential. We complement these results by a characterization of the cross-over from diabatic to quasi-adiabatic evolution under finite-time switching of the potential barrier, via the associated time evolution of a single particle’s von Neumann entropy. This is achieved with the help of the multiconfigurational time-dependent Hartree method for indistinguishable particles (MCTDH-X)—which also allows us to extrapolate our results for increasing particle numbers. Full article
(This article belongs to the Special Issue Quantum Entropies and Complexity)
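The single-particle von Neumann entropy used to characterize the diabatic-to-quasi-adiabatic cross-over is computed from the eigenvalues of a reduced density matrix; a generic sketch (the example matrices are illustrative, not MCTDH-X output):

```python
# Minimal sketch: von Neumann entropy S = -Tr(rho ln rho) from the eigenvalues
# of a (reduced) density matrix. Example matrices are illustrative.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # pure state
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # maximally mixed qubit
print(von_neumann_entropy(pure))   # 0.0
print(von_neumann_entropy(mixed))  # ln 2 ~ 0.693
```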
Open Access Article
Calculating the Wasserstein Metric-Based Boltzmann Entropy of a Landscape Mosaic
Entropy 2020, 22(4), 381; https://doi.org/10.3390/e22040381 - 26 Mar 2020
Abstract
Shannon entropy is currently the most popular method for quantifying the disorder or information of a spatial data set, such as a landscape pattern or a cartographic map. However, its drawback when applied to spatial data is well documented: it is incapable of capturing configurational disorder. In addition, it has recently been criticized as thermodynamically irrelevant. Therefore, Boltzmann entropy has been revisited, and methods have been developed for its calculation with landscape patterns. The latest method was developed based on the Wasserstein metric. This method incorporates spatial repetitiveness, leading to a Wasserstein metric-based Boltzmann entropy that is capable of capturing the configurational disorder of a landscape mosaic. However, the numerical work required to calculate this entropy is beyond what can practically be achieved by hand. This study developed a new software tool for conveniently calculating the Wasserstein metric-based Boltzmann entropy. The tool provides a user-friendly human–computer interface and many functions, including multi-format data file import, calculation, and data clearing and copying. This study outlines several essential technical implementations of the tool and reports an evaluation of the software and a case study. Experimental results demonstrate that the software tool is both efficient and convenient. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology II)
Open Access Feature Paper Article
Geometrical Aspects in the Analysis of Microcanonical Phase-Transitions
Entropy 2020, 22(4), 380; https://doi.org/10.3390/e22040380 - 26 Mar 2020
Abstract
In the present work, we discuss how the functional form of thermodynamic observables can be deduced from the geometric properties of subsets of phase space. The geometric quantities taken into account are mainly extrinsic curvatures of the energy level sets of the Hamiltonian of the system under investigation. In particular, it turns out that peculiar behaviours of thermodynamic observables at a phase transition point are rooted in more fundamental changes of the geometry of the energy level sets in phase space. More specifically, we discuss how microcanonical and geometrical descriptions of phase transitions are shaped in the special case of ϕ^4 models with either nearest-neighbour or mean-field interactions. Full article
(This article belongs to the Special Issue The Ubiquity of Entropy)