Editor's Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of recently published articles that they believe will be particularly interesting to readers or important in the respective research field. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.


Research

Article
Generalised Geometric Brownian Motion: Theory and Applications to Option Pricing
Entropy 2020, 22(12), 1432; https://doi.org/10.3390/e22121432 - 18 Dec 2020
Cited by 3
Abstract
Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggests that a simple GBM trajectory is not an adequate representation for asset dynamics, due to irregularities found when comparing its properties with empirical distributions. As a solution, we investigate a generalisation of GBM where the introduction of a memory kernel critically determines the behaviour of the stochastic process. We find the general expressions for the moments, log-moments, and the expectation of the periodic log returns, and then obtain the corresponding probability density functions using the subordination approach. In particular, we consider subdiffusive GBM (sGBM), tempered sGBM, a mix of GBM and sGBM, and a mix of sGBMs. We utilise the resulting generalised GBM (gGBM) in order to examine the empirical performance of a selected group of kernels in the pricing of European call options. Our results indicate that the performance of a kernel ultimately depends on the maturity of the option and its moneyness. Full article
(This article belongs to the Special Issue New Trends in Random Walks)
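As context for the kernel comparison above, the baseline is plain GBM. A minimal Monte Carlo sketch of European call pricing under standard GBM (not the authors' memory-kernel gGBM; the parameter values are illustrative) might look like:

```python
import math
import random

def gbm_call_price(s0, strike, r, sigma, maturity, n_paths=100_000, seed=0):
    """Monte Carlo price of a European call under plain GBM:
    S_T = S0 * exp((r - sigma^2 / 2) * T + sigma * sqrt(T) * Z)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * maturity
    vol = sigma * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(s_t - strike, 0.0)
    return math.exp(-r * maturity) * payoff_sum / n_paths

# At-the-money call: S0 = K = 100, r = 5%, sigma = 20%, T = 1 year.
price = gbm_call_price(100.0, 100.0, 0.05, 0.2, 1.0)  # ~10.45 (Black-Scholes)
```

Replacing the Gaussian terminal distribution with a subordinated one is where the memory kernels of the paper enter.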

Article
Examining the Causal Structures of Deep Neural Networks Using Information Theory
Entropy 2020, 22(12), 1429; https://doi.org/10.3390/e22121429 - 18 Dec 2020
Abstract
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN’s causal structure as it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training. Specifically, we introduce the effective information (EI) of a feedforward DNN, which is the mutual information between layer input and output following a maximum-entropy perturbation. The EI can be used to assess the degree of causal influence nodes and edges have over their downstream targets in each layer. We show that the EI can be further decomposed in order to examine the sensitivity of a layer (measured by how well edges transmit perturbations) and the degeneracy of a layer (measured by how edge overlap interferes with transmission), along with estimates of the amount of integrated information of a layer. Together, these properties define where each layer lies in the “causal plane”, which can be used to visualize how layer connectivity becomes more sensitive or degenerate over time, and how integration changes during training, revealing how the layer-by-layer causal structure differentiates. These results may help in understanding the generalization capabilities of DNNs and provide foundational tools for making DNNs both more generalizable and more explainable. Full article
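The effective information of a layer is the mutual information between input and output under a maximum-entropy (uniform) perturbation. A toy sketch for a deterministic map, where I(X; Y) reduces to H(Y), could be written as follows (the function and names are illustrative, not the authors' code):

```python
import math
from collections import Counter

def effective_information(layer, n_inputs):
    """EI of a deterministic 'layer': mutual information between a uniformly
    perturbed input and the resulting output.  For a deterministic map,
    I(X; Y) = H(Y), since H(Y | X) = 0."""
    counts = Counter(layer(x) for x in range(n_inputs))
    return -sum((c / n_inputs) * math.log2(c / n_inputs)
                for c in counts.values())

# A 2-bit AND gate as the 'layer': output 1 only for input 3 (binary 11),
# so the output distribution is (3/4, 1/4) and EI = H(3/4, 1/4).
ei_and = effective_information(lambda x: int(x == 3), 4)   # ~0.811 bits
ei_identity = effective_information(lambda x: x, 4)        # 2.0 bits
```

For noisy layers the conditional term H(Y | X) no longer vanishes and a full joint-distribution estimate is needed.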

Article
A Comprehensive Framework for Uncovering Non-Linearity and Chaos in Financial Markets: Empirical Evidence for Four Major Stock Market Indices
Entropy 2020, 22(12), 1435; https://doi.org/10.3390/e22121435 - 18 Dec 2020
Cited by 1
Abstract
The presence of chaos in financial markets has been the subject of a great number of studies, but the results have been contradictory and inconclusive. This research tests for the existence of nonlinear patterns and chaotic dynamics in four major stock market indices, namely the Dow Jones Industrial Average, the Ibex 35, the Nasdaq-100, and the Nikkei 225. To this end, a comprehensive framework has been adopted encompassing a wide range of techniques and the most suitable methods for the analysis of noisy time series. Using daily closing values from January 1992 to July 2013, this study employs twelve techniques and tools, of which five are specific to detecting chaos. The findings show no clear evidence of chaos, suggesting that the behavior of financial markets is nonlinear and stochastic. Full article
(This article belongs to the Special Issue Complexity in Economic and Social Systems)
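Among the chaos-specific tools such a framework typically includes is the largest Lyapunov exponent, whose sign separates chaotic from stable dynamics. A minimal sketch on the logistic map (an illustration of the diagnostic, not the stock-index analysis) is:

```python
import math

def lyapunov_logistic(r, n=100_000, burn_in=1_000, x0=0.4):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the time average of log|f'(x)| = log|r*(1 - 2x)|.
    A positive exponent is the classic signature of deterministic chaos."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n

lam_chaotic = lyapunov_logistic(4.0)  # ~ln 2 > 0: chaotic regime
lam_stable = lyapunov_logistic(2.5)   # < 0: stable fixed point
```

For observed market data, where the map is unknown, the exponent must instead be estimated from delay-embedded time series, which is far noisier.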

Article
Foundations of the Quaternion Quantum Mechanics
Entropy 2020, 22(12), 1424; https://doi.org/10.3390/e22121424 - 17 Dec 2020
Cited by 2
Abstract
We show that quaternion quantum mechanics has well-founded mathematical roots and can be derived from the model of the elastic continuum by the French mathematician Augustin Cauchy, i.e., it can be regarded as representing the physical reality of the elastic continuum. Starting from the Cauchy theory (classical balance equations for isotropic Cauchy-elastic material) and using the Hamilton quaternion algebra, we present a rigorous derivation of the quaternion form of the non-relativistic and relativistic wave equations. The family of wave equations and the Poisson equation are a straightforward consequence of the quaternion representation of the Cauchy model of the elastic continuum. This is the most general kind of quantum mechanics possessing the same kind of calculus of assertions as conventional quantum mechanics. The problem of the Schrödinger equation, of where the imaginary ‘i’ should emerge, is solved. This interpretation is a serious attempt to describe the ontology of quantum mechanics and demonstrates that, besides Bohmian mechanics, a complete ontological interpretation of quantum theory exists. The model can be generalized and falsified. To allow the theory to be tested, we specify problems whose investigation could expose its falsity. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
Article
Artificial Intelligence for Modeling Real Estate Price Using Call Detail Records and Hybrid Machine Learning Approach
Entropy 2020, 22(12), 1421; https://doi.org/10.3390/e22121421 - 16 Dec 2020
Cited by 4
Abstract
Development of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to significant uncertainties and dynamic variables, real estate has been modeled as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDRs) provide excellent opportunities for in-depth investigation of mobility characteristics. This study explores the potential of CDRs for predicting real estate prices with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, workers’ entropy, worker gyration, dwellers’ work distance, and workers’ home distance, are used as input variables. The prediction model is developed using the machine learning method of the multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using the mean square error (MSE), sustainability index (SI), and Willmott’s index (WI). The proposed model showed promising results, revealing that the workers’ entropy and the dwellers’ work distances directly influence the real estate price, whereas dweller gyration, dweller entropy, workers’ gyration, and workers’ home distance had minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices. Full article
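The mobility entropy factors above are, in spirit, Shannon entropies of location-visit distributions extracted from CDRs. A hedged sketch, assuming dweller entropy is the entropy of a resident's visit frequencies (the paper's exact definition may differ):

```python
import math

def mobility_entropy(visit_counts):
    """Shannon entropy (in bits) of a location-visit distribution.
    `visit_counts` maps a zone/cell-tower id to the number of CDR events
    recorded there; higher entropy means activity spread over more places."""
    total = sum(visit_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in visit_counts.values() if c)

focused = mobility_entropy({"home": 90, "office": 10})            # low entropy
roaming = mobility_entropy({"a": 25, "b": 25, "c": 25, "d": 25})  # 2.0 bits
```

Features of this kind would then feed the MLP as inputs; the keys and counts here are purely hypothetical.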

Article
Statistical Features in High-Frequency Bands of Interictal iEEG Work Efficiently in Identifying the Seizure Onset Zone in Patients with Focal Epilepsy
Entropy 2020, 22(12), 1415; https://doi.org/10.3390/e22121415 - 15 Dec 2020
Abstract
The design of a computer-aided system for identifying the seizure onset zone (SOZ) from interictal and ictal electroencephalograms (EEGs) is desired by epileptologists. This study aims to introduce statistical features of high-frequency components (HFCs) in interictal intracranial electroencephalograms (iEEGs) to identify possible SOZ channels. It is known that the activity of HFCs in interictal iEEGs, including the ripple and fast ripple bands, is associated with epileptic seizures. This paper proposes to decompose multi-channel interictal iEEG signals into a number of subbands. For every 20 s segment, twelve features are computed from each subband. A mutual information (MI)-based method with grid search was applied to select the most prominent bands and features. A gradient-boosting decision tree-based algorithm called LightGBM was used to score each segment of the channels, and these scores were averaged to achieve a final score for each channel. The possible SOZ channels were localized based on the channels with higher scores. Experiments with eleven epilepsy patients were conducted to assess the efficiency of the proposed design compared with state-of-the-art methods. Full article
(This article belongs to the Section Signal and Data Analysis)

Article
Diffusion Limitations and Translocation Barriers in Atomically Thin Biomimetic Pores
Entropy 2020, 22(11), 1326; https://doi.org/10.3390/e22111326 - 20 Nov 2020
Cited by 1
Abstract
Ionic transport in nano- to sub-nano-scale pores is highly dependent on translocation barriers and potential wells. These features in the free-energy landscape are primarily the result of ion dehydration and electrostatic interactions. For pores in atomically thin membranes, such as graphene, other factors come into play. Ion dynamics both inside and outside the geometric volume of the pore can be critical in determining the transport properties of the channel due to several commensurate length scales, such as the effective membrane thickness, radii of the first and the second hydration layers, pore radius, and Debye length. In particular, for biomimetic pores, such as the graphene crown ether we examine here, there are regimes where transport is highly sensitive to the pore size due to the interplay of dehydration and interaction with pore charge. Picometer changes in the size, e.g., due to a minute strain, can lead to a large change in conductance. Outside of these regimes, the small pore size itself gives a large resistance, even when electrostatic factors and dehydration compensate each other to give a relatively flat—e.g., near barrierless—free energy landscape. The permeability, though, can still be large and ions will translocate rapidly after they arrive within the capture radius of the pore. This, in turn, leads to diffusion and drift effects dominating the conductance. The current thus plateaus and becomes effectively independent of pore-free energy characteristics. Measurement of this effect will give an estimate of the magnitude of kinetically limiting features, and experimentally constrain the local electromechanical conditions. Full article

Article
Entropy Ratio and Entropy Concentration Coefficient, with Application to the COVID-19 Pandemic
Entropy 2020, 22(11), 1315; https://doi.org/10.3390/e22111315 - 18 Nov 2020
Cited by 3
Abstract
In order to study the spread of an epidemic over a region as a function of time, we introduce an entropy ratio U describing the uniformity of infections over various states and their districts, and an entropy concentration coefficient C = 1 − U. The latter is a multiplicative version of the Kullback–Leibler distance, with values between 0 and 1. For product measures and self-similar phenomena, it does not depend on the measurement level. Hence, C is an alternative to Gini’s concentration coefficient for measures with variation on different levels. Simple examples concern population density and gross domestic product. Application to time series patterns is indicated with a Markov chain. For the COVID-19 pandemic, entropy ratios indicate a homogeneous distribution of infections and the potential of local action when compared to measures for a whole region. Full article
(This article belongs to the Special Issue Information Theory and Symbolic Analysis: Theory and Applications)
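Assuming U is the observed Shannon entropy of the case distribution normalized by its maximum log n (a common normalization; the paper's exact construction via the Kullback–Leibler distance may differ in detail), the pair (U, C) can be sketched as:

```python
import math

def entropy_ratio(cases):
    """Entropy ratio U = H(p) / log(n) over n districts and the
    concentration coefficient C = 1 - U.  U = 1 (C = 0) for a perfectly
    uniform case distribution; C grows toward 1 as cases concentrate."""
    total = sum(cases)
    h = -sum((c / total) * math.log(c / total) for c in cases if c)
    u = h / math.log(len(cases))
    return u, 1.0 - u

u_uniform, c_uniform = entropy_ratio([100, 100, 100, 100])  # U = 1, C = 0
u_conc, c_conc = entropy_ratio([970, 10, 10, 10])           # concentrated
```

Tracking C over time for each state versus the whole region is the kind of comparison the abstract describes.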

Article
Coherence and Entanglement Dynamics in Training Variational Quantum Perceptron
Entropy 2020, 22(11), 1277; https://doi.org/10.3390/e22111277 - 11 Nov 2020
Cited by 1
Abstract
What contributes to the supremacy of quantum computation? One candidate is quantum coherence, because it is a resource used in various quantum algorithms. We reveal that quantum coherence contributes to the training of the variational quantum perceptron proposed by Y. Du et al., arXiv:1809.06056 (2018). In detail, we show that in the first part of the training of the variational quantum perceptron, the quantum coherence of the total system is concentrated in the index register, and in the second part, the Grover algorithm consumes the quantum coherence in the index register. This implies that quantum coherence distribution and quantum coherence depletion are required in the training of the variational quantum perceptron. In addition, we investigate the behavior of entanglement during the training. We show that the bipartite concurrence between the feature and index registers decreases, since the Grover operation is performed only on the index register. We also reveal that the concurrence between the two qubits of the index register increases as the variational quantum perceptron is trained. Full article
(This article belongs to the Special Issue Physical Information and the Physical Foundations of Computation)

Article
Quantum Finite-Time Thermodynamics: Insight from a Single Qubit Engine
Entropy 2020, 22(11), 1255; https://doi.org/10.3390/e22111255 - 04 Nov 2020
Cited by 14
Abstract
Incorporating time into thermodynamics allows for addressing the tradeoff between efficiency and power. A qubit engine serves as a toy model in order to study this tradeoff from first principles, based on the quantum theory of open systems. We study the quantum origin of irreversibility, originating from heat transport, quantum friction, and thermalization in the presence of external driving. We construct various finite-time engine cycles that are based on the Otto and Carnot templates. Our analysis highlights the role of coherence and the quantum origin of entropy production. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
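A classical illustration of the efficiency-power tradeoff that finite-time thermodynamics addresses is the Curzon-Ahlborn efficiency at maximum power, 1 − sqrt(Tc/Th), versus the zero-power Carnot bound (a textbook benchmark, not the paper's qubit-engine result):

```python
import math

def carnot_efficiency(t_cold, t_hot):
    """Reversible (zero-power) upper bound: eta_C = 1 - Tc/Th."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_cold, t_hot):
    """Efficiency at maximum power of an endoreversible engine:
    eta_CA = 1 - sqrt(Tc/Th), the textbook finite-time benchmark."""
    return 1.0 - math.sqrt(t_cold / t_hot)

eta_c = carnot_efficiency(300.0, 600.0)            # 0.5
eta_ca = curzon_ahlborn_efficiency(300.0, 600.0)   # ~0.293
```

The gap between the two values quantifies the efficiency sacrificed when the engine must deliver power in finite time.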

Article
Entropy Production in Exactly Solvable Systems
Entropy 2020, 22(11), 1252; https://doi.org/10.3390/e22111252 - 03 Nov 2020
Cited by 2
Abstract
The rate of entropy production by a stochastic process quantifies how far it is from thermodynamic equilibrium. Equivalently, entropy production captures the degree to which global detailed balance and time-reversal symmetry are broken. Despite abundant references to entropy production in the literature and its many applications in the study of non-equilibrium stochastic particle systems, a comprehensive list of typical examples illustrating the fundamentals of entropy production is lacking. Here, we present a brief, self-contained review of entropy production and calculate it from first principles in a catalogue of exactly solvable setups, encompassing both discrete- and continuous-state Markov processes, as well as single- and multiple-particle systems. The examples covered in this work provide a stepping stone for further studies on entropy production of more complex systems, such as many-particle active matter, as well as a benchmark for the development of alternative mathematical formalisms. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)
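The discrete-time formula for the steady-state entropy production rate, Σ_ij π_i P_ij ln(π_i P_ij / π_j P_ji), can be illustrated on one of the simplest exactly solvable setups, a driven three-state ring (an illustrative example in the spirit of the catalogue, not taken from the paper):

```python
import math

def entropy_production_rate(pi, P):
    """Steady-state entropy production rate of a discrete-time Markov chain:
    sum over i != j of pi_i P_ij * ln( (pi_i P_ij) / (pi_j P_ji) ).
    It vanishes exactly when detailed balance holds."""
    ep = 0.0
    n = len(pi)
    for i in range(n):
        for j in range(n):
            if i != j and P[i][j] > 0.0:
                ep += pi[i] * P[i][j] * math.log(
                    (pi[i] * P[i][j]) / (pi[j] * P[j][i]))
    return ep

# Three-state ring driven clockwise (p = 0.5) against counter-clockwise
# (q = 0.2); the net cycle current makes the chain irreversible.
p, q = 0.5, 0.2
P = [[0.3, p, q],
     [q, 0.3, p],
     [p, q, 0.3]]
pi = [1.0 / 3.0] * 3                      # uniform stationary distribution
ep_ring = entropy_production_rate(pi, P)  # analytic value: (p - q) ln(p/q)
```

For the uniform stationary state the sum collapses to (p − q) ln(p/q), one of the exactly solvable results such a catalogue would contain.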

Article
Quantum Work Statistics with Initial Coherence
Entropy 2020, 22(11), 1223; https://doi.org/10.3390/e22111223 - 27 Oct 2020
Cited by 1
Abstract
The two-point measurement scheme for computing the thermodynamic work performed on a system requires it to be initially in equilibrium. The Margenau–Hill scheme, among others, extends the previous approach to allow for a non-equilibrium initial state. We establish a quantitative comparison between both schemes in terms of the amount of coherence present in the initial state of the system, as quantified by the l1-coherence measure. We show that the difference between the two first moments of work, the variances of work, and the average entropy production obtained in both schemes can be cast in terms of such initial coherence. Moreover, we prove that the average entropy production can take negative values in the Margenau–Hill framework. Full article
(This article belongs to the Special Issue Thermodynamics of Quantum Information)
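The l1-coherence measure used for the comparison is simply the sum of the absolute off-diagonal entries of the density matrix in the reference basis. A minimal sketch:

```python
def l1_coherence(rho):
    """l1-norm coherence: sum of the absolute values of the off-diagonal
    density-matrix elements in the chosen (incoherent) basis."""
    n = len(rho)
    return sum(abs(rho[i][j]) for i in range(n) for j in range(n) if i != j)

plus = [[0.5, 0.5], [0.5, 0.5]]   # |+><+| : maximally coherent qubit state
mixed = [[0.5, 0.0], [0.0, 0.5]]  # maximally mixed state: zero coherence
```

States with zero l1-coherence are exactly those for which the two-point measurement and Margenau–Hill statistics coincide at first order, which is the regime the abstract quantifies.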

Article
Numerical Investigation into the Development Performance of Gas Hydrate by Depressurization Based on Heat Transfer and Entropy Generation Analyses
Entropy 2020, 22(11), 1212; https://doi.org/10.3390/e22111212 - 26 Oct 2020
Cited by 9
Abstract
The purpose of this study is to analyze the dynamic properties of gas hydrate development from a large hydrate simulator through numerical simulation. A mathematical model of heat transfer and entropy production of methane hydrate dissociation by depressurization has been established, and the change behaviors of various heat flows and entropy generations have been evaluated. Simulation results show that most of the heat supplied from outside is assimilated by the methane hydrate. The energy loss caused by fluid production is insignificant in comparison to the heat assimilation of the hydrate reservoir. The entropy generation of the gas hydrate can be regarded as the entropy flow from the ambient environment to the hydrate particles, and it is favorable from the perspective of efficient hydrate exploitation. In contrast, the undesirable entropy generations of water, gas, and quartz sand are induced by irreversible heat conduction and thermal convection under a notable temperature gradient in the deposit. Although a lower production pressure will lead to larger entropy production of the whole system, the irreversible energy loss is always extremely limited when compared with the amount of thermal energy utilized by the methane hydrate. The production pressure should be set as low as possible for the purpose of enhancing exploitation efficiency, as the entropy production rate is not sensitive to the energy recovery rate under depressurization. Full article

Article
Thermodynamic Curvature of the Binary van der Waals Fluid
Entropy 2020, 22(11), 1208; https://doi.org/10.3390/e22111208 - 26 Oct 2020
Cited by 2
Abstract
The thermodynamic Ricci curvature scalar R has been applied in a number of contexts, mostly for systems characterized by 2D thermodynamic geometries. Calculations of R in thermodynamic geometries of dimension three or greater have been very few, especially in the fluid regime. In this paper, we calculate R for two examples involving binary fluid mixtures: a binary mixture of a van der Waals (vdW) fluid with only repulsive interactions, and a binary vdW mixture with attractive interactions added. In both of these examples, we evaluate R for full 3D thermodynamic geometries. Our finding is that basic physical patterns found for R in the pure fluid are reproduced to a large extent for the binary fluid. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)

Article
The Heisenberg Indeterminacy Principle in the Context of Covariant Quantum Gravity
Entropy 2020, 22(11), 1209; https://doi.org/10.3390/e22111209 - 26 Oct 2020
Cited by 3
Abstract
This paper deals with the mathematical formulation of the Heisenberg Indeterminacy Principle in the framework of Quantum Gravity. The starting point is the establishment of the so-called time-conjugate momentum inequalities holding for non-relativistic and relativistic Quantum Mechanics. The validity of analogous Heisenberg inequalities in quantum gravity, which must be based on strictly physically observable quantities (i.e., necessarily either 4-scalar or 4-vector in nature), is shown to require the adoption of a manifestly covariant and unitary quantum theory of the gravitational field. Based on the prescription of a suitable notion of Hilbert space scalar product, the relevant Heisenberg inequalities are established. Besides the coordinate-conjugate momentum inequalities, these include a novel proper-time-conjugate extended momentum inequality. Physical implications and the connection with the deterministic limit recovering General Relativity are investigated. Full article
(This article belongs to the Special Issue Axiomatic Approaches to Quantum Mechanics)
Article
The World as a Neural Network
Entropy 2020, 22(11), 1210; https://doi.org/10.3390/e22111210 - 26 Oct 2020
Cited by 6
Abstract
We discuss the possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons). We first consider the stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with free energy representing the phase) and further away from equilibrium by Hamilton–Jacobi equations (with free energy representing Hamilton’s principal function). This shows that the trainable variables can indeed exhibit classical and quantum behaviors, with the state vector of neurons representing the hidden variables. We then study the stochastic evolution of the hidden variables by considering D non-interacting subsystems with average state vectors x̄_1, …, x̄_D and an overall average state vector x̄_0. In the limit when the weight matrix is a permutation matrix, the dynamics of x̄_μ can be described in terms of relativistic strings in an emergent (D+1)-dimensional Minkowski space-time. If the subsystems are minimally interacting, with interactions described by a metric tensor, then the emergent space-time becomes curved. We argue that the entropy production in such a system is a local function of the metric tensor, which should be determined by the symmetries of the Onsager tensor. It turns out that a very simple and highly symmetric Onsager tensor leads to entropy production described by the Einstein–Hilbert term. This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors that are described by both quantum mechanics and general relativity. We also discuss the possibility that the two descriptions are holographic duals of each other. Full article
(This article belongs to the Section Statistical Physics)
Article
Dynamic Topology Reconfiguration of Boltzmann Machines on Quantum Annealers
Entropy 2020, 22(11), 1202; https://doi.org/10.3390/e22111202 - 24 Oct 2020
Cited by 2
Abstract
Boltzmann machines have useful roles in deep learning applications, such as generative data modeling, initializing weights for other types of networks, or extracting efficient representations from high-dimensional data. Most Boltzmann machines use restricted topologies that exclude looping connectivity, as such connectivity creates complex distributions that are difficult to sample. We have used an open-system quantum annealer to sample from complex distributions and implement Boltzmann machines with looping connectivity. Further, we have created policies mapping Boltzmann machine variables to the quantum bits of an annealer. These policies, based on correlation and entropy metrics, dynamically reconfigure the topology of Boltzmann machines during training and improve performance. Full article
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))

Article
Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences
Entropy 2020, 22(10), 1186; https://doi.org/10.3390/e22101186 - 21 Oct 2020
Cited by 1
Abstract
Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done to improve the accuracy of face liveness detection, the best current approaches use a two-step process of first applying nonlinear anisotropic diffusion to the incoming image and then using a deep network for the final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions where nonlinear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image, which enhances the edges and surface texture, and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and the Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack and Replay-Mobile datasets. The entire architecture is created in such a manner that, once trained, the face liveness detection can be accomplished in real-time. We achieve promising results of 96.03% and 96.21% face liveness detection accuracy with the SCNN, and 94.77% and 95.53% accuracy with the Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses the diffusion of images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) to classify the video sequence as real or fake. Even though the use of CNN followed by LSTM is not new, combining it with diffusion (which has proven to be the best approach for single image liveness detection) is novel.
Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and 5.28% HTER. Full article
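
The edge-preserving diffusion step can be illustrated with a simple explicit Perona-Malik scheme; note this is a hypothetical simplification for intuition only, as the paper uses the semi-implicit additive-operator-splitting variant, which remains stable for large time steps:

```python
import random

def perona_malik(img, n_iter=20, kappa=0.2, dt=0.2):
    """Explicit Perona-Malik nonlinear diffusion on a 2D grid: smooths
    low-amplitude noise while the edge-stopping weight suppresses
    diffusion across strong edges (boundary pixels are left fixed)."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(n_iter):
        new = [row[:] for row in u]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                flux = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    g = u[i + di][j + dj] - u[i][j]       # gradient toward neighbour
                    flux += g / (1.0 + (g / kappa) ** 2)  # edge-stopping weight
                new[i][j] = u[i][j] + dt * flux
        u = new
    return u

rng = random.Random(0)
noisy = [[0.5 + rng.uniform(-0.05, 0.05) for _ in range(12)] for _ in range(12)]
smoothed = perona_malik(noisy, n_iter=30)
```

After a few iterations the texture in flat regions is flattened while step edges between regions survive, which is the property the liveness networks exploit.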

Article
Two-Qubit Entanglement Generation through Non-Hermitian Hamiltonians Induced by Repeated Measurements on an Ancilla
Entropy 2020, 22(10), 1184; https://doi.org/10.3390/e22101184 - 20 Oct 2020
Cited by 1
Abstract
In contrast to classical systems, actual implementation of non-Hermitian Hamiltonian dynamics for quantum systems is a challenge because the processes of energy gain and dissipation are based on the underlying Hermitian system–environment dynamics, which are trace preserving. Recently, a scheme for engineering non-Hermitian Hamiltonians as a result of repetitive measurements on an ancillary qubit has been proposed. The induced conditional dynamics of the main system is described by the effective non-Hermitian Hamiltonian arising from the procedure. In this paper, we demonstrate the effectiveness of such a protocol by applying it to physically relevant multi-spin models, showing that the effective non-Hermitian Hamiltonian drives the system to a maximally entangled stationary state. In addition, we report a new recipe to construct a physical scenario where the quantum dynamics of a physical system represented by a given non-Hermitian Hamiltonian model may be simulated. The physical implications and the broad scope potential applications of such a scheme are highlighted. Full article
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)
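
As a toy illustration of how an anti-Hermitian term drives conditional dynamics toward a stationary state, the sketch below evolves a hypothetical single qubit (not the paper's multi-spin models) under an effective Hamiltonian whose |1⟩ level decays, with renormalisation standing in for conditioning on the measurement record:

```python
def step(psi, heff, dt):
    """One Euler step |psi> -> (1 - i*H_eff*dt)|psi>, renormalised so the
    state stays a valid (conditional) quantum state."""
    dim = len(psi)
    new = [psi[k] - 1j * dt * sum(heff[k][l] * psi[l] for l in range(dim))
           for k in range(dim)]
    norm = sum(abs(a) ** 2 for a in new) ** 0.5
    return [a / norm for a in new]

gamma = 1.0
heff = [[0.0, 0.0],
        [0.0, -0.5j * gamma]]        # anti-Hermitian part: |1> decays
psi = [0.5 ** 0.5, 0.5 ** 0.5]       # equal superposition
for _ in range(2000):
    psi = step(psi, heff, 0.01)
```

The population funnels entirely into the non-decaying state |0⟩, analogous to how the paper's effective non-Hermitian Hamiltonian drives the two-qubit system to a maximally entangled stationary state.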

Article
The Smoluchowski Ensemble—Statistical Mechanics of Aggregation
Entropy 2020, 22(10), 1181; https://doi.org/10.3390/e22101181 - 20 Oct 2020
Abstract
We present a rigorous thermodynamic treatment of irreversible binary aggregation. We construct the Smoluchowski ensemble as the set of discrete finite distributions that are reached in a fixed number of merging events and define a probability measure on this ensemble, such that the mean distribution in the mean-field approximation is governed by the Smoluchowski equation. In the scaling limit this ensemble gives rise to a set of relationships identical to those of familiar statistical thermodynamics. The central element of the thermodynamic treatment is the selection functional, a functional of feasible distributions that connects the probability of a distribution to the details of the aggregation model. We obtain scaling expressions for general kernels and closed-form results for the special cases of the constant, sum, and product kernels. We study the stability of the most probable distribution, provide criteria for the sol-gel transition and obtain the distribution in the post-gel region by simple thermodynamic arguments. Full article
(This article belongs to the Special Issue Generalized Statistical Thermodynamics)
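
The ensemble idea for the simplest case, the constant kernel, can be sampled with a minimal Monte Carlo sketch (a hypothetical simplification of the paper's construction, with all cluster pairs merging at equal rates):

```python
import random

def smoluchowski_mc(n_monomers, n_events, seed=0):
    """Constant-kernel Monte Carlo aggregation: each merging event picks
    two clusters uniformly at random and joins them, so the trajectory
    visits distributions reached in exactly n_events merging events.
    Returns the cluster-size distribution {size: count}."""
    rng = random.Random(seed)
    clusters = [1] * n_monomers          # start from monomers
    for _ in range(n_events):
        i, j = rng.sample(range(len(clusters)), 2)
        clusters[i] += clusters[j]       # merge cluster j into cluster i
        clusters.pop(j)
    dist = {}
    for size in clusters:
        dist[size] = dist.get(size, 0) + 1
    return dist

sizes = smoluchowski_mc(1000, 500)
```

Each event reduces the cluster count by one and conserves total mass, the two invariants that define the ensemble of feasible distributions.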

Article
Unifying Aspects of Generalized Calculus
Entropy 2020, 22(10), 1180; https://doi.org/10.3390/e22101180 - 19 Oct 2020
Cited by 3
Abstract
Non-Newtonian calculus naturally unifies various ideas that have occurred over the years in the field of generalized thermostatistics, or in the borderland between classical and quantum information theory. The formalism, being very general, is as simple as the calculus we know from undergraduate courses of mathematics. Its theoretical potential is huge, and yet it remains unknown or unappreciated. Full article
(This article belongs to the Special Issue The Statistical Foundations of Entropy)

Article
Segmentation of High Dimensional Time-Series Data Using Mixture of Sparse Principal Component Regression Model with Information Complexity
Entropy 2020, 22(10), 1170; https://doi.org/10.3390/e22101170 - 17 Oct 2020
Cited by 4
Abstract
This paper presents a novel hybrid modeling method for the segmentation of high dimensional time-series data using the mixture of the sparse principal components regression (MIX-SPCR) model with the information complexity (ICOMP) criterion as the fitness function. Our approach encompasses dimension reduction in high dimensional time-series data and, at the same time, determines the number of component clusters (i.e., the number of segments across the time-series data) and selects the best subset of predictors. A large-scale Monte Carlo simulation is performed to show the capability of the MIX-SPCR model to identify the correct structure of the time-series data successfully. The MIX-SPCR model is also applied to high dimensional Standard & Poor’s 500 (S&P 500) index data to uncover the hidden structure of the time series and identify the structure change points. The approach presented in this paper determines both the relationships among the predictor variables and how various predictor variables contribute to the explanatory power of the response variable through cluster-wise sparsity settings. Full article

Article
Non-Equilibrium Living Polymers
Entropy 2020, 22(10), 1130; https://doi.org/10.3390/e22101130 - 06 Oct 2020
Cited by 1
Abstract
Systems of “living” polymers are ubiquitous in industry and are traditionally realised using surfactants. Here I first review the theoretical state-of-the-art of living polymers and then discuss non-equilibrium extensions that may be realised with advanced synthetic chemistry or DNA functionalised by proteins. These systems are not only interesting for realising novel “living” soft matter but can also offer insight into how genomes are (topologically) regulated in vivo. Full article
(This article belongs to the Special Issue Statistical Physics of Living Systems)

Article
Exploration of Outliers in If-Then Rule-Based Knowledge Bases
Entropy 2020, 22(10), 1096; https://doi.org/10.3390/e22101096 - 29 Sep 2020
Cited by 1
Abstract
The article presents both methods of clustering and outlier detection in complex data, such as rule-based knowledge bases. What distinguishes this work from others is, first, the application of clustering algorithms to rules in domain knowledge bases, and secondly, the use of outlier detection algorithms to detect unusual rules in knowledge bases. The aim of the paper is the analysis of four algorithms for outlier detection in rule-based knowledge bases: Local Outlier Factor (LOF), Connectivity-based Outlier Factor (COF), K-MEANS, and SMALLCLUSTERS. The subject of outlier mining is very important nowadays. Outliers in If-Then rules are unusual rules, which are rare in comparison to others and should be explored by the domain expert as soon as possible. In the research, the authors use the outlier detection methods to find a given number of outliers in rules (1%, 5%, 10%), while in small groups, the number of outliers covers no more than 5% of the rule cluster. Subsequently, the authors analyze which of seven quality indices, computed for all rules and again after removing selected outliers, improve the quality of rule clusters. In the experimental stage, the authors use six different knowledge bases. The best results (cluster quality was improved most often) are achieved for two outlier detection algorithms: LOF and COF. Full article
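
Of the four algorithms, LOF is the easiest to sketch. A compact brute-force version (a hypothetical illustration on toy 2D points standing in for rule feature vectors, not the paper's implementation):

```python
import math

def knn(points, i, k):
    """Sorted (distance, index) pairs of the k nearest neighbours of point i."""
    d = sorted((math.dist(points[i], points[j]), j)
               for j in range(len(points)) if j != i)
    return d[:k]

def lof(points, k=3):
    """Local Outlier Factor: ratio of the average local reachability
    density of a point's neighbours to the point's own density;
    scores well above 1 flag outliers."""
    n = len(points)
    neigh = [knn(points, i, k) for i in range(n)]
    kdist = [neigh[i][-1][0] for i in range(n)]     # distance to k-th neighbour
    def lrd(i):
        reach = [max(kdist[j], d) for d, j in neigh[i]]
        return k / sum(reach)
    dens = [lrd(i) for i in range(n)]
    return [sum(dens[j] for _, j in neigh[i]) / (k * dens[i]) for i in range(n)]

rules = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5), (10, 10)]  # last point is unusual
scores = lof(rules, k=3)
```

The isolated point receives a score far above 1, while points inside the cluster score near 1; the same ranking principle marks unusual If-Then rules for expert review.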

Article
Modified Distribution Entropy as a Complexity Measure of Heart Rate Variability (HRV) Signal
Entropy 2020, 22(10), 1077; https://doi.org/10.3390/e22101077 - 24 Sep 2020
Cited by 1
Abstract
The complexity of a heart rate variability (HRV) signal is considered an important nonlinear feature to detect cardiac abnormalities. This work aims at explaining the physiological meaning of a recently developed complexity measurement method, namely, distribution entropy (DistEn), in the context of HRV signal analysis. We thereby propose modified distribution entropy (mDistEn) to remove the physiological discrepancy involved in the computation of DistEn. The proposed method generates a distance matrix that is devoid of over-exerted multi-lag signal changes. Restricted element selection in the distance matrix makes “mDistEn” a computationally inexpensive and physiologically more relevant complexity measure in comparison to DistEn. Full article
(This article belongs to the Special Issue Entropy in Data Analysis)
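
For reference, the baseline DistEn that the paper modifies can be sketched compactly (a hypothetical implementation assuming Chebyshev distances between embedding vectors and a fixed-bin histogram):

```python
import math
import random
from itertools import combinations

def dist_en(signal, m=2, bins=64):
    """Distribution entropy (DistEn): Shannon entropy of the histogram of
    all pairwise Chebyshev distances between m-dimensional embedding
    vectors, normalised by log2(bins) so the result lies in [0, 1]."""
    vecs = [signal[i:i + m] for i in range(len(signal) - m + 1)]
    dists = [max(abs(a - b) for a, b in zip(u, v))
             for u, v in combinations(vecs, 2)]
    lo, hi = min(dists), max(dists)
    hist = [0] * bins
    for d in dists:
        idx = min(int((d - lo) / (hi - lo + 1e-12) * bins), bins - 1)
        hist[idx] += 1
    probs = [h / len(dists) for h in hist if h]
    return -sum(p * math.log2(p) for p in probs) / math.log2(bins)

periodic = [float(i % 2) for i in range(100)]    # regular stand-in for low-complexity HRV
rng = random.Random(0)
irregular = [rng.random() for _ in range(100)]   # irregular stand-in
```

A regular signal concentrates its distance histogram in a few bins and scores low, while an irregular signal spreads across many bins and scores high; the mDistEn proposal restricts which distance-matrix elements enter this histogram.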

Article
The Quantum Friction and Optimal Finite-Time Performance of the Quantum Otto Cycle
Entropy 2020, 22(9), 1060; https://doi.org/10.3390/e22091060 - 22 Sep 2020
Cited by 7
Abstract
In this work we considered the quantum Otto cycle within an optimization framework. The goal was maximizing the power for a heat engine or maximizing the cooling power for a refrigerator. In the field of finite-time quantum thermodynamics it is common to consider frictionless trajectories since these have been shown to maximize the work extraction during the adiabatic processes. Furthermore, for frictionless cycles, the energy of the system decouples from the other degrees of freedom, thereby simplifying the mathematical treatment. Instead, we considered general limit cycles and we used analytical techniques to compute the derivative of the work production over the whole cycle with respect to the time allocated for each of the adiabatic processes. By doing so, we were able to directly show that the frictionless cycle maximizes the work production, implying that the optimal power production must necessarily allow for some friction generation so that the duration of the cycle is reduced. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)

Article
Surface-Codes-Based Quantum Communication Networks
Entropy 2020, 22(9), 1059; https://doi.org/10.3390/e22091059 - 22 Sep 2020
Abstract
In this paper, we propose the surface codes (SCs)-based multipartite quantum communication networks (QCNs). We describe an approach that enables us to simultaneously entangle multiple nodes in an arbitrary network topology based on the SCs. We also describe how to extend the transmission distance between two arbitrary nodes by using the SCs. The numerical results indicate that the transmission distance between nodes can be extended to beyond 1000 km by employing simple syndrome decoding. Finally, we describe how to operate the proposed QCN by employing the software-defined networking (SDN) concept. Full article

Article
TMEA: A Thermodynamically Motivated Framework for Functional Characterization of Biological Responses to System Acclimation
Entropy 2020, 22(9), 1030; https://doi.org/10.3390/e22091030 - 15 Sep 2020
Cited by 1
Abstract
The objective of gene set enrichment analysis (GSEA) in modern biological studies is to identify functional profiles in huge sets of biomolecules generated by high-throughput measurements of genes, transcripts, metabolites, and proteins. GSEA is based on a two-stage process using classical statistical analysis to score the input data and subsequent testing for overrepresentation of the enrichment score within a given functional coherent set. However, enrichment scores computed by different methods are merely statistically motivated and often elusive to direct biological interpretation. Here, we propose a novel approach, called Thermodynamically Motivated Enrichment Analysis (TMEA), to account for the energy investment in biologically relevant processes. Therefore, TMEA is based on surprisal analysis, which offers a thermodynamic free-energy-based representation of the biological steady state and of the biological change. The contribution of each biomolecule underlying the changes in free energy is used in a Monte Carlo resampling procedure resulting in a functional characterization directly coupled to the thermodynamic characterization of biological responses to system perturbations. To illustrate the utility of our method on real experimental data, we benchmark our approach on plant acclimation to high light and compare the performance of TMEA with the most frequently used method for GSEA. Full article
(This article belongs to the Special Issue Thermodynamics and Information Theory of Living Systems)

Article
Life’s Energy and Information: Contrasting Evolution of Volume- versus Surface-Specific Rates of Energy Consumption
Entropy 2020, 22(9), 1025; https://doi.org/10.3390/e22091025 - 13 Sep 2020
Cited by 2
Abstract
As humanity struggles to find a path to resilience amidst global change vagaries, understanding organizing principles of living systems as the pillar for human existence is rapidly growing in importance. However, finding quantitative definitions for order, complexity, information and functionality of living systems remains a challenge. Here, we review and develop insights into this problem from the concept of the biotic regulation of the environment developed by Victor Gorshkov (1935–2019). Life’s extraordinary persistence—despite being a strongly non-equilibrium process—requires a quantum-classical duality: the program of life is written in molecules and thus can be copied without information loss, while life’s interaction with its non-equilibrium environment is performed by macroscopic classical objects (living individuals) that age. Life’s key energetic parameter, the volume-specific rate of energy consumption, is maintained within universal limits by most life forms. Contrary to previous suggestions, it cannot serve as a proxy for “evolutionary progress”. In contrast, ecosystem-level surface-specific energy consumption declines with growing animal body size in stable ecosystems. High consumption by big animals is associated with instability. We suggest that the evolutionary increase in body size may represent a spontaneous loss of information about environmental regulation, a manifestation of life’s algorithm ageing as a whole. Full article
(This article belongs to the Special Issue Evolution and Thermodynamics)

Article
Investigation of Forced Convection Enhancement and Entropy Generation of Nanofluid Flow through a Corrugated Minichannel Filled with a Porous Media
Entropy 2020, 22(9), 1008; https://doi.org/10.3390/e22091008 - 09 Sep 2020
Cited by 5
Abstract
Corrugating the channel wall is considered to be an efficient procedure for achieving improved heat transfer. Further enhancement can be obtained through the utilization of nanofluids and porous media with high thermal conductivity. This paper presents the effect of geometrical parameters for the determination of an appropriate configuration. Furthermore, the optimization of forced convective heat transfer and fluid/nanofluid flow through a sinusoidal wavy channel inside a porous medium is performed through the optimization of entropy generation. The fluid flow in porous media is considered to be laminar and the Darcy–Brinkman–Forchheimer model has been utilized. The obtained results were compared with the corresponding numerical data in order to ensure the accuracy and reliability of the numerical procedure. As a result, increasing the Darcy number leads to an increased portion of thermal entropy generation as well as a decreased portion of frictional entropy generation in all configurations. Moreover, the configuration with a wavelength of 10 mm, amplitude of 0.5 mm and phase shift of 60° was selected as an optimum geometry for further investigations on the addition of nanoparticles. Additionally, an increasing trend of the average Nusselt number and friction factor, besides a decreasing trend of the performance evaluation criteria (PEC) index, was observed with increasing volume fraction of the nanoparticles (Al2O3 and CuO). Full article
(This article belongs to the Special Issue Thermal Radiation and Entropy Analysis)

Article
Using Matrix-Product States for Open Quantum Many-Body Systems: Efficient Algorithms for Markovian and Non-Markovian Time-Evolution
Entropy 2020, 22(9), 984; https://doi.org/10.3390/e22090984 - 04 Sep 2020
Cited by 4
Abstract
This paper presents an efficient algorithm for the time evolution of open quantum many-body systems using matrix-product states (MPS), proposing a convenient structure of the MPS architecture, which exploits the initial state of system and reservoir. By doing so, numerically expensive re-ordering protocols are circumvented. It is applicable to systems with a Markovian type of interaction, where only the present state of the reservoir needs to be taken into account. Its adaption to a non-Markovian type of interaction between the many-body system and the reservoir is demonstrated, where the information backflow from the reservoir needs to be included in the computation. Also, the derivation of the basis in the quantum stochastic Schrödinger picture is shown. As a paradigmatic model, the Heisenberg spin chain with nearest-neighbor interaction is used. It is demonstrated that the algorithm allows access to large system sizes. As an example of a non-Markovian type of interaction, the generation of highly unusual steady states in the many-body system with coherent feedback control is demonstrated for a chain length of N=30. Full article
(This article belongs to the Special Issue Open Quantum Systems (OQS) for Quantum Technologies)

Article
Strong Coupling and Nonextensive Thermodynamics
Entropy 2020, 22(9), 975; https://doi.org/10.3390/e22090975 - 01 Sep 2020
Cited by 1
Abstract
We propose a Hamiltonian-based approach to the nonextensive thermodynamics of small systems, where small is a relative term comparing the size of the system to the size of the effective interaction region around it. We show that the effective Hamiltonian approach gives easy accessibility to the thermodynamic properties of systems strongly coupled to their surroundings. The theory does not rely on the classical concept of dividing surface to characterize the system’s interaction with the environment. Instead, it defines an effective interaction region over which a system exchanges extensive quantities with its surroundings, easily producing laws recently shown to be valid at the nanoscale. Full article
(This article belongs to the Section Thermodynamics)

Article
On Products of Random Matrices
Entropy 2020, 22(9), 972; https://doi.org/10.3390/e22090972 - 31 Aug 2020
Cited by 4
Abstract
We introduce a family of models, which we name matrix models associated with children’s drawings—the so-called dessin d’enfant. Dessins d’enfant are graphs of a special kind drawn on a closed connected orientable surface (in the sky). The vertices of such a graph are small disks that we call stars. We attach random matrices to the edges of the graph and get multimatrix models. Additionally, to the stars we attach source matrices. They play the role of free parameters or model coupling constants. The answers for our integrals are expressed through quantities that we call the “spectrum of stars”. The answers may also include some combinatorial numbers, such as Hurwitz numbers or characters from group representation theory. Full article
(This article belongs to the Special Issue Random Matrix Approaches in Classical and Quantum Information Theory)

Article
Fractional Lotka-Volterra-Type Cooperation Models: Impulsive Control on Their Stability Behavior
Entropy 2020, 22(9), 970; https://doi.org/10.3390/e22090970 - 31 Aug 2020
Cited by 5
Abstract
We present a biological fractional n-species delayed cooperation model of Lotka-Volterra type. The considered fractional derivatives are in the Caputo sense. Impulsive control strategies are applied for several stability properties of the states, namely Mittag-Leffler stability, practical stability and stability with respect to sets. The proposed results extend the existing stability results for integer-order n-species delayed Lotka-Volterra cooperation models to the fractional-order case under impulsive control. Full article
(This article belongs to the Special Issue Dynamics in Complex Neural Networks)
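
Caputo derivatives of the kind used in the model can be approximated numerically; a hypothetical Grünwald-Letnikov sketch (which coincides with the Caputo derivative for functions vanishing at the origin), checked against the analytic half-derivative of f(t) = t:

```python
import math

def gl_derivative(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov fractional derivative of order 0 < alpha < 1;
    equals the Caputo derivative when f(0) = 0."""
    n = int(round(t / h))
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k      # recurrence for (-1)^k * binom(alpha, k)
        acc += w * f(t - k * h)
    return acc / h ** alpha

# analytic result: D^{1/2} t = t^{1/2} / Gamma(3/2) = 2*sqrt(t/pi) at t = 1
numeric = gl_derivative(lambda t: t, 0.5, 1.0)
analytic = 2.0 / math.pi ** 0.5
```

This first-order scheme is the usual starting point for simulating fractional-order population dynamics between impulsive control instants.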

Article
A New Look on Financial Markets Co-Movement through Cooperative Dynamics in Many-Body Physics
Entropy 2020, 22(9), 954; https://doi.org/10.3390/e22090954 - 29 Aug 2020
Cited by 3
Abstract
One of the main contributions of the Capital Assets Pricing Model (CAPM) to portfolio theory was to explain the correlation between assets through its relationship with the market index. According to this approach, the market index is expected to explain the co-movement between two different stocks to a great extent. In this paper, we try to verify this hypothesis using a sample of 3,000 stocks of the US market (selected according to liquidity, capitalization, and free-float criteria) by using some functions inspired by cooperative dynamics in physical particle systems. We will show that the co-movement among the stocks is completely explained by the market, even without considering the market beta of the stocks. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
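
The CAPM hypothesis being tested can be illustrated with simulated data under a one-factor model (a hypothetical sketch, not the paper's cooperative-dynamics functions): once the market component is regressed out, pairwise stock correlation collapses toward zero.

```python
import random
import statistics as st

def corr(x, y):
    """Pearson correlation (population formulas)."""
    mx, my = st.fmean(x), st.fmean(y)
    cov = st.fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return cov / (st.pstdev(x) * st.pstdev(y))

rng = random.Random(42)
T = 5000
market = [rng.gauss(0, 1) for _ in range(T)]
# one-factor model: each stock = beta * market + idiosyncratic noise
stock_a = [1.2 * m + rng.gauss(0, 1) for m in market]
stock_b = [0.8 * m + rng.gauss(0, 1) for m in market]

raw = corr(stock_a, stock_b)
# regress out the market component (beta = cov/var) and correlate residuals
beta_a = corr(stock_a, market) * st.pstdev(stock_a) / st.pstdev(market)
beta_b = corr(stock_b, market) * st.pstdev(stock_b) / st.pstdev(market)
resid_a = [x - beta_a * m for x, m in zip(stock_a, market)]
resid_b = [x - beta_b * m for x, m in zip(stock_b, market)]
residual = corr(resid_a, resid_b)
```

Here `raw` is substantially positive while `residual` is statistically indistinguishable from zero, which is exactly the pattern the paper reports for real stocks.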

Article
Effect of Machine Entropy Production on the Optimal Performance of a Refrigerator
Entropy 2020, 22(9), 913; https://doi.org/10.3390/e22090913 - 20 Aug 2020
Cited by 7
Abstract
The need for cooling is more and more important in current applications, as environmental constraints become more and more restrictive. Therefore, the optimization of reverse cycle machines is currently required. This optimization could be split into two parts, namely, (1) the design optimization, leading to an optimal dimensioning to fulfill the specific demand (static or nominal steady state optimization); and (2) the dynamic optimization, where the demand fluctuates, and the system must be continuously adapted. Thus, the variability of the system load (with or without storage) implies its careful control-command. This paper is concerned with part (1) and proposes a novel and more complete modeling of an irreversible Carnot refrigerator that involves the coupling between sink (source) and machine through a heat transfer constraint. Moreover, it induces the choice of a reference heat transfer entropy, which is the heat transfer entropy at the source of a Carnot irreversible refrigerator. The thermodynamic optimization of the refrigerator provides new results regarding the optimal allocation of heat transfer conductances and minimum energy consumption with associated coefficient of performance (COP) when various forms of entropy production owing to internal irreversibility are considered. The reported results and their consequences represent a new fundamental step forward regarding the performance upper bound of the irreversible Carnot refrigerator. Full article
(This article belongs to the Special Issue Thermodynamics of Heat Pump and Refrigeration Cycles)

Article
Rate of Entropy Production in Evolving Interfaces and Membranes under Astigmatic Kinematics: Shape Evolution in Geometric-Dissipation Landscapes
Entropy 2020, 22(9), 909; https://doi.org/10.3390/e22090909 - 19 Aug 2020
Abstract
This paper presents theory and simulation of viscous dissipation in evolving interfaces and membranes under kinematic conditions, known as astigmatic flow, ubiquitous during growth processes in nature. The essential aim is to characterize and explain the underlying connections between curvedness and shape evolution and the rate of entropy production due to viscous bending and torsion rates. The membrane dissipation model used here is known as the Boussinesq-Scriven fluid model. Since the standard approaches in morphological evolution are based on the average, Gaussian and deviatoric curvatures, which comingle shape with curvedness, this paper introduces a novel decoupled approach whereby shape is independent of curvedness. In this curvedness-shape landscape, the entropy production surface under constant homogeneous normal velocity decays with growth but oscillates with shape changes. Saddles and spheres are minima while cylindrical patches are maxima. The astigmatic flow trajectories on the entropy production surface show that only cylinders and spheres grow under constant shape. Small deviations from cylindrical shapes evolve towards spheres or saddles depending on the initial condition, where dissipation rates decrease. Taken together, the results and analysis provide novel and significant relations between shape evolution and viscous dissipation in deforming viscous membranes and surfaces. Full article
(This article belongs to the Special Issue Statistical Physics of Soft Matter and Complex Systems)

Article
Finite-Time Thermodynamics in Economics
Entropy 2020, 22(8), 891; https://doi.org/10.3390/e22080891 - 13 Aug 2020
Cited by 9
Abstract
In this paper, we consider optimal trading processes in economic systems. The analysis is based on accounting for irreversibility factors using the wealth function concept. The existence of the welfare function is proved, the concept of capital dissipation is introduced as a measure of the irreversibility of processes in the microeconomic system, and the economic balances are recorded, including capital dissipation. Problems in the form of kinetic equations leading to given conditions of minimal dissipation are considered. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)

Article
Optimization of a New Design of Molten Salt-to-CO2 Heat Exchanger Using Exergy Destruction Minimization
Entropy 2020, 22(8), 883; https://doi.org/10.3390/e22080883 - 12 Aug 2020
Abstract
One way to make electricity from concentrated solar thermal energy cost-competitive is to increase the thermoelectric conversion efficiency. To achieve this objective, the most promising scheme is a molten salt central receiver coupled to a supercritical carbon dioxide cycle. A key element to be developed in this scheme is the molten salt-to-CO2 heat exchanger. This paper presents a heat exchanger design that avoids molten salt plugging and the mechanical stress due to the high pressure of the CO2, while improving the heat transfer of the supercritical phase thanks to its compactness and high heat transfer area. The design is based on a honeycomb-like configuration, in which a thermal unit consists of a circular channel for the molten salt surrounded by six smaller trapezoidal ducts for the CO2. Furthermore, an optimization based on exergy destruction minimization has been carried out, yielding the best working conditions for this heat exchanger: a temperature approach of 50 °C between the two streams and a CO2 pressure drop of 2.7 bar. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Complex Energy Systems)

Article
Direct and Indirect Effects—An Information Theoretic Perspective
Entropy 2020, 22(8), 854; https://doi.org/10.3390/e22080854 - 31 Jul 2020
Cited by 3
Abstract
Information theoretic (IT) approaches to quantifying causal influences have experienced some popularity in the literature, in both theoretical and applied (e.g., neuroscience and climate science) domains. While these causal measures are desirable in that they are model agnostic and can capture non-linear interactions, they are fundamentally different from common statistical notions of causal influence in that they (1) compare distributions over the effect rather than values of the effect and (2) are defined with respect to random variables representing a cause rather than specific values of a cause. We here present IT measures of direct, indirect, and total causal effects. The proposed measures are unlike existing IT techniques in that they enable measuring causal effects that are defined with respect to specific values of a cause while still offering the flexibility and general applicability of IT techniques. We provide an identifiability result and demonstrate application of the proposed measures in estimating the causal effect of the El Niño–Southern Oscillation on temperature anomalies in the North American Pacific Northwest. Full article
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)

Article
Forecasting Bitcoin Trends Using Algorithmic Learning Systems
Entropy 2020, 22(8), 838; https://doi.org/10.3390/e22080838 - 30 Jul 2020
Cited by 3
Abstract
This research examines the ability of two forecasting methods to forecast Bitcoin’s price trends. The research is based on Bitcoin–US dollar prices from the beginning of 2012 until the end of March 2020. Such a long period, which includes volatile periods with strong uptrends and downtrends, poses challenges to any forecasting system. We use particle swarm optimization to find the best combinations of forecasting setups. Results show that Bitcoin’s price changes do not follow the “random walk” efficient market hypothesis and that both the Darvas Box and Linear Regression techniques can help traders predict Bitcoin’s price trends. We also find that both methodologies work better at predicting an uptrend than a downtrend. The best setup for the Darvas Box strategy is six days of formation. A Darvas Box uptrend signal was found to predict four sequential daily returns efficiently, while a downtrend signal faded after two days on average. The best setup for the Linear Regression model is 42 days with one standard deviation. Full article
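As an illustration of the box logic named above, a minimal Darvas-box signal detector with the six-day formation window mentioned in the abstract might look as follows; the rules (and the function name `darvas_signals`) are a generic sketch, not the paper's exact trading system:

```python
def darvas_signals(highs, lows, closes, formation=6):
    """Flag breakouts from a Darvas box.

    The box is the highest high and lowest low over the previous
    `formation` days; a close above the box top is an upside breakout
    (+1), a close below the box bottom a breakdown (-1), otherwise 0.
    Generic sketch only -- the paper's exact rules are not stated in
    the abstract.
    """
    signals = []
    for t in range(formation, len(closes)):
        box_top = max(highs[t - formation:t])
        box_bottom = min(lows[t - formation:t])
        if closes[t] > box_top:
            signals.append(1)
        elif closes[t] < box_bottom:
            signals.append(-1)
        else:
            signals.append(0)
    return signals
```

With daily bars, `sum(1 for s in signals if s == 1)` then counts upside breakouts over the sample.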

Article
Deep Learning for Stock Market Prediction
Entropy 2020, 22(8), 840; https://doi.org/10.3390/e22080840 - 30 Jul 2020
Cited by 30
Abstract
The prediction of stock group values has always been attractive and challenging for shareholders due to its inherent dynamics, non-linearity, and complex nature. This paper concentrates on the future prediction of stock market groups. Four groups, named diversified financials, petroleum, non-metallic minerals, and basic metals, from the Tehran stock exchange were chosen for experimental evaluation. Data were collected for the groups based on 10 years of historical records. Value predictions are made for 1, 2, 5, 10, 15, 20, and 30 days in advance. Various machine learning algorithms were utilized to predict the future values of the stock market groups: decision tree, bagging, random forest, adaptive boosting (AdaBoost), gradient boosting, and extreme gradient boosting (XGBoost), as well as artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM). Ten technical indicators were selected as inputs to each of the prediction models. Finally, the prediction results were presented for each technique based on four metrics. Among all the algorithms used in this paper, LSTM shows the most accurate results, with the highest model-fitting ability. In addition, among the tree-based models, there is often intense competition between AdaBoost, gradient boosting, and XGBoost. Full article
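The abstract does not list its ten technical indicators, but two common ones, a simple moving average and the relative strength index, can sketch how such model inputs are computed from a price series (generic textbook definitions, not necessarily the paper's):

```python
def sma(prices, window):
    """Trailing simple moving average; one value per full window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def rsi(prices, window=14):
    """Relative Strength Index (basic simple-average form)."""
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        gains.append(max(cur - prev, 0.0))
        losses.append(max(prev - cur, 0.0))
    values = []
    for i in range(window - 1, len(gains)):
        avg_gain = sum(gains[i - window + 1:i + 1]) / window
        avg_loss = sum(losses[i - window + 1:i + 1]) / window
        if avg_loss == 0:
            values.append(100.0)  # pure uptrend over the window
        else:
            rs = avg_gain / avg_loss
            values.append(100.0 - 100.0 / (1.0 + rs))
    return values
```

Stacking several such indicator series column-wise yields the feature matrix fed to the learners.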
(This article belongs to the Special Issue Information Transfer in Multilayer/Deep Architectures)

Article
Hybrid Quantum-Classical Neural Network for Calculating Ground State Energies of Molecules
Entropy 2020, 22(8), 828; https://doi.org/10.3390/e22080828 - 29 Jul 2020
Cited by 3
Abstract
We present a hybrid quantum-classical neural network that can be trained to perform electronic structure calculation and generate potential energy curves of simple molecules. The method is based on the combination of parameterized quantum circuits and measurements. With unsupervised training, the neural network can generate electronic potential energy curves based on training at certain bond lengths. To demonstrate the power of the proposed new method, we present the results of using the quantum-classical hybrid neural network to calculate ground state potential energy curves of simple molecules such as H2, LiH, and BeH2. The results are very accurate and the approach could potentially be used to generate complex molecular potential energy surfaces. Full article
(This article belongs to the Special Issue Noisy Intermediate-Scale Quantum Technologies (NISQ))

Article
An Efficient Method Based on Framelets for Solving Fractional Volterra Integral Equations
Entropy 2020, 22(8), 824; https://doi.org/10.3390/e22080824 - 28 Jul 2020
Cited by 7
Abstract
This paper is devoted to shedding some light on the advantages of using tight frame systems for solving some types of fractional Volterra integral equations (FVIEs) involving the Caputo fractional-order derivative. A tight frame, or simply framelet, is a generalization of an orthonormal basis. Many applications are modeled by non-negative functions; taking this into account, in this paper we consider framelet systems generated using refinable non-negative functions, namely B-splines. The FVIEs considered were reduced to a linear system of equations and solved numerically based on a collocation discretization technique. We present many important examples of FVIEs for which accurate and efficient numerical solutions have been accomplished, with numerical results that converge very rapidly to the exact ones. Full article
(This article belongs to the Special Issue Entropy and Its Applications across Disciplines II)

Article
Thermodynamic Analysis of Working Fluids for a New “Heat from Cold” Cycle
Entropy 2020, 22(8), 808; https://doi.org/10.3390/e22080808 - 23 Jul 2020
Cited by 2
Abstract
Adsorptive heat transformation systems are at the interface between thermal and chemical engineering. Their study and development need a thorough thermodynamic analysis aimed at the smart choice of an adsorbent-adsorptive pair and its fitting to a particular heat transformation cycle. This paper addresses such an analysis for a new “Heat from Cold” cycle proposed for amplification of the ambient heat in cold countries. A comparison of four working fluids is made in terms of the useful heat per cycle and the temperature lift. The useful heat increases in the order water > ammonia ≥ methanol > hydrofluorocarbon R32. A threshold mass of exchanged adsorbate, below which the useful heat equals zero, rises in the same sequence. The most promising adsorbents for this cycle are the activated carbons Maxsorb III and SRD 1352/2. For all the adsorptives studied, a linear relationship F = A·ΔT is found between the Dubinin adsorption potential and the driving temperature difference ΔT between the two natural thermal baths. It allows the maximum temperature lift during the heat generation stage to be assessed. Thus, a larger ΔT-value promotes the removal of the more strongly bound adsorbate. Full article
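For reference, the Dubinin adsorption potential F that enters the linear relationship above is conventionally defined from the saturation pressure P_s and the equilibrium pressure P at temperature T (a textbook definition; A and ΔT are as in the abstract):

```latex
F = R\,T \,\ln\frac{P_s}{P}, \qquad F = A\,\Delta T .
```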
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)

Article
Relative Entropy as a Measure of Difference between Hermitian and Non-Hermitian Systems
Entropy 2020, 22(8), 809; https://doi.org/10.3390/e22080809 - 23 Jul 2020
Abstract
We employ the relative entropy as a measure to quantify the difference in eigenmodes between Hermitian and non-Hermitian systems in elliptic optical microcavities. We have found that the average value of the relative entropy is large in the range of the collective Lamb shift, while it is small in the range of the self-energy. Furthermore, the weak and strong interactions in the non-Hermitian system exhibit rather different behaviors in terms of the relative entropy, which thus reveals an obvious exchange of eigenmodes in the elliptic microcavity. Full article
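The relative entropy used as the difference measure above is the Kullback–Leibler divergence; a minimal estimator for discretized distributions (e.g. normalized mode intensities on a spatial grid) might read as follows. This is a generic sketch, not the authors' code:

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in nats between two
    discrete distributions (normalized internally), e.g. spatially
    discretized eigenmode intensities. Assumes q[i] > 0 wherever
    p[i] > 0."""
    zp, zq = sum(p), sum(q)
    return sum((pi / zp) * math.log((pi / zp) * zq / qi)
               for pi, qi in zip(p, q) if pi > 0)
```

Identical mode profiles give zero; the more the two profiles differ, the larger the divergence.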
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)

Article
Interacting Particle Solutions of Fokker–Planck Equations Through Gradient–Log–Density Estimation
Entropy 2020, 22(8), 802; https://doi.org/10.3390/e22080802 - 22 Jul 2020
Cited by 2
Abstract
Fokker–Planck equations are extensively employed in various scientific fields as they characterise the behaviour of stochastic systems at the level of probability density functions. Although broadly used, they allow for analytical treatment only in limited settings, and often it is inevitable to resort to numerical solutions. Here, we develop a computational approach for simulating the time evolution of Fokker–Planck solutions in terms of a mean field limit of an interacting particle system. The interactions between particles are determined by the gradient of the logarithm of the particle density, approximated here by a novel statistical estimator. The performance of our method shows promising results, with more accurate and less fluctuating statistics compared to direct stochastic simulations of comparable particle number. Taken together, our framework allows for effortless and reliable particle-based simulations of Fokker–Planck equations in low and moderate dimensions. The proposed gradient–log–density estimator is also of independent interest, for example, in the context of optimal control. Full article
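The particle scheme described above can be sketched for the one-dimensional Ornstein–Uhlenbeck case: each particle follows the deterministic flow dx/dt = −x − D∇log p(x), with the gradient–log–density estimated from the particles themselves. The kernel estimator below is a generic KDE-based score estimate, not the paper's novel statistical estimator, and all parameter values are illustrative:

```python
import numpy as np

def kernel_score(xs, h):
    """Gaussian-KDE estimate of grad log p at every particle position.
    (A generic estimator; the paper proposes its own statistical one.)"""
    diff = xs[None, :] - xs[:, None]          # diff[i, j] = x_j - x_i
    w = np.exp(-diff ** 2 / (2.0 * h * h))
    return (w * diff).sum(axis=1) / (h * h * w.sum(axis=1))

def simulate_ou(n=400, dt=0.01, steps=300, d_coef=1.0, h=0.3, seed=1):
    """Deterministic interacting-particle solution of the 1D
    Ornstein-Uhlenbeck Fokker-Planck equation: each particle follows
    dx/dt = -x - D * grad log p(x), so no noise is injected."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(0.0, 2.0, size=n)         # broad initial density
    for _ in range(steps):
        xs = xs + dt * (-xs - d_coef * kernel_score(xs, h))
    return xs
```

For D = 1 the particle cloud relaxes close to the standard-normal stationary density (the bandwidth h introduces a small bias of order h²), with visibly less statistical fluctuation than a direct stochastic simulation of the same particle number.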

Article
Power Conversion and Its Efficiency in Thermoelectric Materials
Entropy 2020, 22(8), 803; https://doi.org/10.3390/e22080803 - 22 Jul 2020
Cited by 4
Abstract
The basic principles of thermoelectrics rely on the coupling of entropy and electric charge. However, the long-standing dispute of energetics versus entropy has long paralysed the field. Herein, it is shown that treating entropy and electric charge in a symmetric manner enables a simple transport equation to be obtained and the power conversion and its efficiency to be deduced for a single thermoelectric material, apart from a device. The material’s performance in both generator mode (thermo-electric) and entropy pump mode (electro-thermal) is discussed on a single voltage-electrical current curve, which is presented in a generalized manner by relating it to the open-circuit voltage and the short-circuit electrical current. The electrical and thermal power in entropy pump mode are related to the maximum electrical power in generator mode, which depends on the material’s power factor. Particular working points on the material’s voltage-electrical current curve are deduced, namely the electrical open circuit, electrical short circuit, maximum electrical power, maximum power conversion efficiency, and entropy conductivity inversion. Optimizing a thermoelectric material for different working points is discussed with respect to its figure of merit zT and power factor. The importance of the results to state-of-the-art and emerging materials is emphasized. Full article
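In the standard linear (constant-property) picture consistent with the working points listed above, the voltage-current curve and the electrical power in generator mode read (a textbook sketch, not the paper's full treatment):

```latex
V(I) = V_{\mathrm{oc}}\left(1-\frac{I}{I_{\mathrm{sc}}}\right), \qquad
P_{\mathrm{el}} = V(I)\,I, \qquad
P_{\mathrm{el}}^{\max} = \frac{V_{\mathrm{oc}} I_{\mathrm{sc}}}{4}
\ \ \text{at}\ \ I = \frac{I_{\mathrm{sc}}}{2},
```

so the maximum-electrical-power working point sits midway between the open-circuit and short-circuit points.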
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)

Article
Evolution Equations for Quantum Semi-Markov Dynamics
Entropy 2020, 22(7), 796; https://doi.org/10.3390/e22070796 - 21 Jul 2020
Cited by 4
Abstract
Using a newly introduced connection between the local and non-local description of open quantum system dynamics, we investigate the relationship between these two characterisations in the case of quantum semi-Markov processes. This class of quantum evolutions, which is a direct generalisation of the corresponding classical concept, guarantees mathematically well-defined master equations, while accounting for a wide range of phenomena, possibly in the non-Markovian regime. In particular, we analyse the emergence of a dephasing term when moving from one type of master equation to the other, by means of several examples. We also investigate the corresponding Redfield-like approximated dynamics, which are obtained after a coarse graining in time. Relying on general properties of the associated classical random process, we conclude that such an approximation always leads to a Markovian evolution for the considered class of dynamics. Full article
(This article belongs to the Special Issue Open Quantum Systems (OQS) for Quantum Technologies)

Article
Blind Witnesses Quench Quantum Interference without Transfer of Which-Path Information
Entropy 2020, 22(7), 776; https://doi.org/10.3390/e22070776 - 16 Jul 2020
Abstract
Quantum computation is often limited by environmentally induced decoherence. We examine the loss of coherence for a two-branch quantum interference device in the presence of multiple witnesses, representing an idealized environment. Interference oscillations are visible in the output as the magnetic flux through the branches is varied. Quantum double-dot witnesses are field-coupled and symmetrically attached to each branch. The global system—device and witnesses—undergoes unitary time evolution with no increase in entropy. Witness states entangle with the device state, but for these blind witnesses, which-path information cannot be transferred to the quantum state of the witnesses—they cannot “see” or make a record of which branch is traversed. The system’s which-path information leaves no imprint on the environment. Yet, the presence of a multiplicity of witnesses rapidly quenches quantum interference. Full article
(This article belongs to the Special Issue Physical Information and the Physical Foundations of Computation)

Article
Quantum-Gravity Stochastic Effects on the de Sitter Event Horizon
Entropy 2020, 22(6), 696; https://doi.org/10.3390/e22060696 - 22 Jun 2020
Cited by 3
Abstract
The stochastic character of the cosmological constant arising from the non-linear quantum-vacuum Bohm interaction in the framework of the manifestly-covariant theory of quantum gravity (CQG theory) is pointed out. This feature is shown to be consistent with the axiomatic formulation of quantum gravity based on the hydrodynamic representation of the same CQG theory developed recently. The conclusion follows by investigating the indeterminacy properties of the probability density function and its representation associated with the quantum gravity state, which corresponds to a hydrodynamic continuity equation that satisfies the unitarity principle. As a result, the corresponding form of stochastic quantum-modified Einstein field equations is obtained and shown to admit a stochastic cosmological de Sitter solution for the space-time metric tensor. Analytical calculations of the stochastic averages of relevant physical observables are presented. These include, in particular, the radius of the de Sitter sphere fixing the location of the event horizon and the expression of the Hawking temperature associated with the related particle tunneling effect. Theoretical implications for cosmology and field theories are pointed out. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)

Article
Mathematical Models of Consciousness
Entropy 2020, 22(6), 609; https://doi.org/10.3390/e22060609 - 30 May 2020
Cited by 3
Abstract
In recent years, promising mathematical models have been proposed that aim to describe conscious experience and its relation to the physical domain. Whereas the axioms and metaphysical ideas of these theories have been carefully motivated, their mathematical formalism has not. In this article, we aim to remedy this situation. We give an account of what warrants mathematical representation of phenomenal experience, derive a general mathematical framework that takes into account consciousness’ epistemic context, and study which mathematical structures some of the key characteristics of conscious experience imply, showing precisely where mathematical approaches allow one to go beyond what the standard methodology can do. The result is a general mathematical framework for models of consciousness that can be employed in the theory-building process. Full article
(This article belongs to the Special Issue Models of Consciousness)
Article
Differential Parametric Formalism for the Evolution of Gaussian States: Nonunitary Evolution and Invariant States
Entropy 2020, 22(5), 586; https://doi.org/10.3390/e22050586 - 23 May 2020
Cited by 5
Abstract
In the differential approach elaborated, we study the evolution of the parameters of Gaussian, mixed, continuous variable density matrices, whose dynamics are given by Hermitian Hamiltonians expressed as quadratic forms of the position and momentum operators or quadrature components. Specifically, we obtain in generic form the differential equations for the covariance matrix, the mean values, and the density matrix parameters of a multipartite Gaussian state, unitarily evolving according to a Hamiltonian Ĥ. We also present the corresponding differential equations, which describe the nonunitary evolution of the subsystems. The resulting nonlinear equations are used to solve the dynamics of the system instead of the Schrödinger equation. The formalism elaborated allows us to define new specific invariant and quasi-invariant states, as well as states with invariant covariance matrices, i.e., states where only the mean values evolve according to the classical Hamilton equations. By using density matrices in the position and in the tomographic-probability representations, we study examples of these properties. As examples, we present novel invariant states for the two-mode frequency converter and quasi-invariant states for the bipartite parametric amplifier. Full article
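For a single mode with a quadratic Hamiltonian, the covariance matrix obeys a linear equation of the form dΣ/dt = AΣ + ΣAᵀ while the means follow the classical Hamilton equations; for a harmonic oscillator (ω = ħ = m = 1 assumed) the solution is a phase-space rotation, which the sketch below implements. It illustrates the general statement only, not the paper's multipartite formalism:

```python
import math

def evolve_covariance(sigma0, t):
    """Covariance matrix of a one-mode Gaussian state after unitary
    evolution under a harmonic-oscillator Hamiltonian (omega = hbar =
    m = 1): Sigma(t) = R(t) Sigma0 R(t)^T with R(t) a rotation, which
    solves dSigma/dt = A Sigma + Sigma A^T for A = [[0, 1], [-1, 0]]."""
    c, s = math.cos(t), math.sin(t)
    r = [[c, s], [-s, c]]
    rs = [[sum(r[i][k] * sigma0[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(rs[i][k] * r[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Since tr A = 0, det Σ (and hence the state's purity) is conserved, one of the invariants such parameter equations make explicit.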
(This article belongs to the Special Issue Quantum Probability and Randomness II)

Article
A Method to Present and Analyze Ensembles of Information Sources
Entropy 2020, 22(5), 580; https://doi.org/10.3390/e22050580 - 21 May 2020
Abstract
Information theory is a powerful tool for analyzing complex systems. In many areas of neuroscience, it is now possible to gather data from large ensembles of neural variables (e.g., data from many neurons, genes, or voxels). The individual variables can be analyzed with information theory to provide estimates of information shared between variables (forming a network between variables), or between neural variables and other variables (e.g., behavior or sensory stimuli). However, it can be difficult to (1) evaluate if the ensemble is significantly different from what would be expected in a purely noisy system and (2) determine if two ensembles are different. Herein, we introduce relatively simple methods to address these problems by analyzing ensembles of information sources. We demonstrate how an ensemble built of mutual information connections can be compared to null surrogate data to determine if the ensemble is significantly different from noise. Next, we show how two ensembles can be compared using a randomization process to determine if the sources in one contain more information than the other. All code necessary to carry out these analyses and demonstrations is provided. Full article
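The two ingredients described above, estimating mutual information between variables and comparing it against shuffle-null surrogates, can be sketched for discrete data as follows; this is a generic plug-in estimator with a permutation test, not the authors' released code:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in mutual information (nats) between two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def surrogate_p_value(xs, ys, n_surrogates=200, seed=0):
    """Fraction of shuffle surrogates whose MI reaches the observed MI;
    small values suggest the connection is not pure noise."""
    rng = random.Random(seed)
    observed = mutual_information(xs, ys)
    ys = list(ys)  # local copy; shuffled in place below
    hits = 0
    for _ in range(n_surrogates):
        rng.shuffle(ys)
        if mutual_information(xs, ys) >= observed:
            hits += 1
    return hits / n_surrogates
```

Repeating the test over every connection in an ensemble gives the noise-comparison described in the abstract.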
(This article belongs to the Special Issue Information Theory in Computational Neuroscience)

Article
Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
Entropy 2020, 22(5), 564; https://doi.org/10.3390/e22050564 - 18 May 2020
Cited by 7
Abstract
It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, there are generalization issues with high degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low dimensional latent state space representing probabilistic structures extracted from well habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories. Full article

Article
Re-Thinking the World with Neutral Monism: Removing the Boundaries Between Mind, Matter, and Spacetime
Entropy 2020, 22(5), 551; https://doi.org/10.3390/e22050551 - 14 May 2020
Cited by 1
Abstract
Herein we are not interested in merely using dynamical systems theory, graph theory, information theory, etc., to model the relationship between brain dynamics and networks, and various states and degrees of conscious processes. We are interested in the question of how phenomenal conscious experience and fundamental physics are most deeply related. Any attempt to mathematically and formally model conscious experience and its relationship to physics must begin with some metaphysical assumption in mind about the nature of conscious experience, the nature of matter and the nature of the relationship between them. These days the most prominent metaphysical fixed points are strong emergence or some variant of panpsychism. In this paper we will detail another distinct metaphysical starting point known as neutral monism. In particular, we will focus on a variant of the neutral monism of William James and Bertrand Russell. Rather than starting with physics as fundamental, as both strong emergence and panpsychism do in their own way, our goal is to suggest how one might derive fundamental physics from neutral monism. Thus, starting with two axioms grounded in our characterization of neutral monism, we will sketch out a derivation of and explanation for some key features of relativity and quantum mechanics that suggest a unity between those two theories that is generally unappreciated. Our mode of explanation throughout will be of the principle as opposed to constructive variety in something like Einstein’s sense of those terms. We will argue throughout that a bias towards property dualism and a bias toward reductive dynamical and constructive explanation lead to the hard problem and the explanatory gap in consciousness studies, and lead to serious unresolved problems in fundamental physics, such as the measurement problem and the mystery of entanglement in quantum mechanics, and a lack of progress in producing an empirically well-grounded theory of quantum gravity.
We hope to show that given our take on neutral monism and all that follows from it, the aforementioned problems can be satisfactorily resolved leaving us with a far more intuitive and commonsense model of the relationship between conscious experience and physics. Full article
(This article belongs to the Special Issue Models of Consciousness)

Article
Prediction and Variable Selection in High-Dimensional Misspecified Binary Classification
Entropy 2020, 22(5), 543; https://doi.org/10.3390/e22050543 - 13 May 2020
Cited by 2
Abstract
In this paper, we consider prediction and variable selection in misspecified binary classification models under the high-dimensional scenario. We focus on two approaches to classification, which are computationally efficient but lead to model misspecification. The first is to apply penalized logistic regression to classification data that possibly do not follow the logistic model. The second method is even more radical: we just treat the class labels of objects as if they were numbers and apply penalized linear regression. In this paper, we investigate these two approaches thoroughly and provide conditions that guarantee that they are successful in prediction and variable selection. Our results hold even if the number of predictors is much larger than the sample size. The paper is completed by experimental results. Full article
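The second, "radical" approach above, treating ±1 class labels as numbers and fitting penalized linear regression, can be sketched with a ridge penalty (chosen here only because it has a closed form; the paper's high-dimensional variable-selection setting would typically use an l1 penalty instead):

```python
import numpy as np

def fit_labels_as_numbers(X, y, lam=1.0):
    """Penalized linear regression on +/-1 labels: returns the w
    minimizing ||X w - y||^2 + lam * ||w||^2 (ridge closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict_labels(X, w):
    """Classify by the sign of the fitted linear score."""
    return np.sign(X @ w)
```

Even though the linear model is misspecified for binary labels, the sign of the fitted score can classify well, which is the phenomenon the paper analyzes.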
Article
Inferring What to Do (And What Not to)
Entropy 2020, 22(5), 536; https://doi.org/10.3390/e22050536 - 11 May 2020
Cited by 3
Abstract
In recent years, the “planning as inference” paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about “how I am going to act”. This paper provides an overview of [...] Read more.
In recent years, the “planning as inference” paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about “how I am going to act”. This paper provides an overview of the factors that contribute to this prior. These are rooted in optimal experimental design, information theory, and statistical decision making. We unpack how these factors imply a functional architecture for motivated behaviour. This raises an important question: how can we put this architecture to work in the service of understanding observed neurobiological structure? To answer this question, we draw from established techniques in experimental studies of behaviour. Typically, these examine perturbations of the nervous system, which include pathological insults or optogenetic manipulations, to see their influence on behaviour. Here, we argue that the message passing that emerges from inferring what to do can be similarly perturbed. If a given perturbation elicits the same behaviours as a focal brain lesion, this provides a functional interpretation of empirical findings and an anatomical grounding for theoretical results. We highlight examples of this approach that influence different sorts of goal-directed behaviour, active learning, and decision making. Finally, we summarise their implications for the neuroanatomy of inferring what to do (and what not to). Full article
Article
Non-Hermitian Hamiltonians and Quantum Transport in Multi-Terminal Conductors
Entropy 2020, 22(4), 459; https://doi.org/10.3390/e22040459 - 17 Apr 2020
Abstract
We study the transport properties of multi-terminal Hermitian structures within the non-equilibrium Green’s function formalism in a tight-binding approximation. We show that non-Hermitian Hamiltonians naturally appear in the description of coherent tunneling and are indispensable for the derivation of a general compact expression [...] Read more.
We study the transport properties of multi-terminal Hermitian structures within the non-equilibrium Green’s function formalism in a tight-binding approximation. We show that non-Hermitian Hamiltonians naturally appear in the description of coherent tunneling and are indispensable for the derivation of a general compact expression for the lead-to-lead transmission coefficients of an arbitrary multi-terminal system. This expression can be easily analyzed, and a robust set of conditions for finding zero and unity transmissions (even in the presence of extra electrodes) can be formulated. Using the proposed formalism, a detailed comparison between three- and two-terminal systems is performed, and it is shown, in particular, that transmission at bound states in the continuum does not change with the third electrode insertion. The main conclusions are illustratively exemplified by some three-terminal toy models. For instance, the influence of the tunneling coupling to the gate electrode is discussed for a model of a quantum interference transistor. The results of this paper will be of high interest, in particular, within the field of quantum design of molecular electronic devices. Full article
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)
Article
Towards a Unified Theory of Learning and Information
Entropy 2020, 22(4), 438; https://doi.org/10.3390/e22040438 - 13 Apr 2020
Cited by 1
Abstract
In this paper, we introduce the notion of “learning capacity” for algorithms that learn from data, which is analogous to the Shannon channel capacity for communication systems. We show how “learning capacity” bridges the gap between statistical learning theory and information theory, and [...] Read more.
In this paper, we introduce the notion of “learning capacity” for algorithms that learn from data, which is analogous to the Shannon channel capacity for communication systems. We show how “learning capacity” bridges the gap between statistical learning theory and information theory, and we will use it to derive generalization bounds for finite hypothesis spaces, differential privacy, and countable domains, among others. Moreover, we prove that under the Axiom of Choice, the existence of an empirical risk minimization (ERM) rule that has a vanishing learning capacity is equivalent to the assertion that the hypothesis space has a finite Vapnik–Chervonenkis (VC) dimension, thus establishing an equivalence relation between two of the most fundamental concepts in statistical learning theory and information theory. In addition, we show how the learning capacity of an algorithm provides important qualitative results, such as on the relation between generalization and algorithmic stability, information leakage, and data processing. Finally, we conclude by listing some open problems and suggesting future directions of research. Full article
Article
Measuring the Tangle of Three-Qubit States
Entropy 2020, 22(4), 436; https://doi.org/10.3390/e22040436 - 11 Apr 2020
Cited by 4
Abstract
We present a quantum circuit that transforms an unknown three-qubit state into its canonical form, up to relative phases, given many copies of the original state. The circuit is made of three single-qubit parametrized quantum gates, and the optimal values for the parameters [...] Read more.
We present a quantum circuit that transforms an unknown three-qubit state into its canonical form, up to relative phases, given many copies of the original state. The circuit is made of three single-qubit parametrized quantum gates, and the optimal values for the parameters are learned in a variational fashion. Once this transformation is achieved, direct measurement of outcome probabilities in the computational basis provides an estimate of the tangle, which quantifies genuine tripartite entanglement. We perform simulations on a set of random states under different noise conditions to assess the validity of the method. Full article
(This article belongs to the Section Quantum Information)
Article
Sensitivity Analysis of Selected Parameters in the Order Picking Process Simulation Model, with Randomly Generated Orders
Entropy 2020, 22(4), 423; https://doi.org/10.3390/e22040423 - 09 Apr 2020
Cited by 8
Abstract
Sensitivity analysis of selected parameters in simulation models of logistics facilities is one of the key aspects in the functioning of self-conscious and efficient management. In order to develop simulation models adequate to the processes of real logistics facilities, it is important to input actual data [...] Read more.
Sensitivity analysis of selected parameters in simulation models of logistics facilities is one of the key aspects in the functioning of self-conscious and efficient management. In order to develop simulation models adequate to the processes of real logistics facilities, it is important to input actual data connected to material flows on entry to models, whereas most models assume unified load units as default. To provide such data, pseudorandom number generators (PRNGs) are used. The original generator described in the paper was employed to generate picking lists for the order picking process (OPP). This makes it possible to build a hypothetical, yet close-to-reality, process in terms of unpredictable customers’ orders. Models with applied PRNGs ensure a more detailed and more understandable representation of OPPs in comparison to analytical models. Therefore, the author’s motivation was to present the original model as a tool for enterprise managers who control the OPP and the devices and means of transport employed therein. The outcomes and implications of the contribution concern selected possibilities in OPP analyses, which might be developed and solved within the model. The presented model has some limitations. One of them is the assumption that only one means of transport per aisle is taken into consideration. Another is the indirect randomization of certain model parameters. Full article
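A minimal sketch of what such a picking-list generator might look like (hypothetical: the function name, the uniform draws, and the parameters are illustrative stand-ins for the paper's generator, whose distributions are its own):

```python
import random

def generate_picking_lists(n_orders, n_skus, max_lines=10, max_qty=5, seed=42):
    """Draw random customer orders: each order is a list of (sku, qty) lines.
    Uniform draws for line count and quantity are placeholders for whatever
    empirical distributions a real warehouse would supply."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_orders):
        lines = rng.randint(1, max_lines)
        skus = rng.sample(range(n_skus), lines)   # no duplicate SKU per order
        orders.append([(s, rng.randint(1, max_qty)) for s in skus])
    return orders

for order in generate_picking_lists(n_orders=3, n_skus=100):
    print(order)
```

Feeding such randomized orders into the simulation is what makes the modeled OPP "close to reality in terms of unpredictable customers' orders" rather than a stream of unified load units.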
Article
On Geometry of Information Flow for Causal Inference
Entropy 2020, 22(4), 396; https://doi.org/10.3390/e22040396 - 30 Mar 2020
Cited by 2
Abstract
Causal inference is perhaps one of the most fundamental concepts in science, beginning with the works of some of the ancient philosophers and continuing through today, and woven strongly into current work from statisticians, machine learning experts, and scientists from many other fields. [...] Read more.
Causal inference is perhaps one of the most fundamental concepts in science, beginning with the works of some of the ancient philosophers and continuing through today, and woven strongly into current work from statisticians, machine learning experts, and scientists from many other fields. This paper takes the perspective of information flow, which includes the Nobel-prize-winning work on Granger causality and the recently highly popular transfer entropy, both probabilistic in nature. Our main contribution will be to develop analysis tools that allow a geometric interpretation of information flow as a causal inference indicated by positive transfer entropy. We will describe the effective dimensionality of an underlying manifold as projected into the outcome space that summarizes information flow. Therefore, contrasting the probabilistic and geometric perspectives, we will introduce a new measure of causal inference based on the fractal correlation dimension conditionally applied to competing explanations of future forecasts, which we will write GeoC_{y→x}. This avoids some of the boundedness issues that we show exist for the transfer entropy, T_{y→x}. We will highlight our discussions with data developed from synthetic models of successively more complex nature: these include the Hénon map example, and finally a real physiological example relating breathing and heart rate function. Full article
Article
Spectral Structure and Many-Body Dynamics of Ultracold Bosons in a Double-Well
Entropy 2020, 22(4), 382; https://doi.org/10.3390/e22040382 - 26 Mar 2020
Cited by 2
Abstract
We examine the spectral structure and many-body dynamics of two and three repulsively interacting bosons trapped in a one-dimensional double-well, for variable barrier height, inter-particle interaction strength, and initial conditions. By exact diagonalization of the many-particle Hamiltonian, we specifically explore the dynamical behavior [...] Read more.
We examine the spectral structure and many-body dynamics of two and three repulsively interacting bosons trapped in a one-dimensional double-well, for variable barrier height, inter-particle interaction strength, and initial conditions. By exact diagonalization of the many-particle Hamiltonian, we specifically explore the dynamical behavior of the particles launched either at the single-particle ground state or saddle-point energy, in a time-independent potential. We complement these results by a characterization of the cross-over from diabatic to quasi-adiabatic evolution under finite-time switching of the potential barrier, via the associated time evolution of a single particle’s von Neumann entropy. This is achieved with the help of the multiconfigurational time-dependent Hartree method for indistinguishable particles (MCTDH-X)—which also allows us to extrapolate our results for increasing particle numbers. Full article
(This article belongs to the Special Issue Quantum Entropies and Complexity)
Article
Endoreversible Modeling of a Hydraulic Recuperation System
Entropy 2020, 22(4), 383; https://doi.org/10.3390/e22040383 - 26 Mar 2020
Cited by 17
Abstract
Hybrid drive systems able to recover and reuse braking energy of the vehicle can reduce fuel consumption, air pollution and operating costs. Among them, hydraulic recuperation systems are particularly suitable for commercial vehicles, especially if they are already equipped with a hydraulic system. [...] Read more.
Hybrid drive systems able to recover and reuse braking energy of the vehicle can reduce fuel consumption, air pollution and operating costs. Among them, hydraulic recuperation systems are particularly suitable for commercial vehicles, especially if they are already equipped with a hydraulic system. Thus far, the investigation of such systems has been limited to individual components or to optimizing their control. In this paper, we focus on thermodynamic effects and their impact on the overall system's energy-saving potential, using endoreversible thermodynamics as the ideal framework for modeling. The dynamical behavior of the hydraulic recuperation system as well as the energy savings are estimated using real data of a vehicle suitable for application. Here, energy savings of around 10% when accelerating the vehicle and a reduction of around 58% in the energy transferred to the conventional disc brakes are predicted. We further vary certain design and loss parameters, such as accumulator volume, displacement of the hydraulic unit, heat transfer coefficients, or pipe diameter, and discuss their influence on the energy-saving potential of the system. It turns out that heat transfer coefficients and pipe diameter are of less importance than accumulator volume and displacement of the hydraulic unit. Full article
(This article belongs to the Section Thermodynamics)
Article
Theory, Analysis, and Applications of the Entropic Lattice Boltzmann Model for Compressible Flows
Entropy 2020, 22(3), 370; https://doi.org/10.3390/e22030370 - 24 Mar 2020
Cited by 3
Abstract
The entropic lattice Boltzmann method for the simulation of compressible flows is studied in detail and new opportunities for extending operating range are explored. We address limitations on the maximum Mach number and temperature range allowed for a given lattice. Solutions to both [...] Read more.
The entropic lattice Boltzmann method for the simulation of compressible flows is studied in detail and new opportunities for extending operating range are explored. We address limitations on the maximum Mach number and temperature range allowed for a given lattice. Solutions to both these problems are presented by modifying the original lattices without increasing the number of discrete velocities and without altering the numerical algorithm. In order to increase the Mach number, we employ shifted lattices, while the magnitude of lattice speeds is increased in order to extend the temperature range. Accuracy and efficiency of the shifted lattices are demonstrated with simulations of the supersonic flow field around diamond-shaped and NACA0012 airfoils, the subsonic, transonic, and supersonic flow fields around the Busemann biplane, and the interaction of vortices with a planar shock wave. For the lattices with extended temperature range, the model is validated with the simulation of the Richtmyer–Meshkov instability. We also discuss some key ideas of how to reduce the number of discrete speeds in three-dimensional simulations by pruning of the higher-order lattices, and introduce a new construction of the corresponding guided equilibrium by entropy minimization. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)
Article
BiEntropy, TriEntropy and Primality
Entropy 2020, 22(3), 311; https://doi.org/10.3390/e22030311 - 10 Mar 2020
Abstract
The order and disorder of binary representations of the natural numbers < 2^8 is measured using the BiEntropy function. Significant differences are detected between the primes and the non-primes. The BiEntropic prime density is shown to be quadratic with a very small [...] Read more.
The order and disorder of binary representations of the natural numbers < 2^8 is measured using the BiEntropy function. Significant differences are detected between the primes and the non-primes. The BiEntropic prime density is shown to be quadratic with a very small Gaussian distributed error. The work is repeated in binary using a Monte Carlo simulation of a sample of natural numbers < 2^32 and in trinary for all natural numbers < 3^9 with similar but cubic results. We found a significant relationship between BiEntropy and TriEntropy such that we can discriminate between the primes and numbers divisible by six. We discuss the theoretical basis of these results and show how they generalise to give a tight bound on the variance of π(x) − Li(x) for all x. This bound is much tighter than the bound given by Von Koch in 1901 as an equivalence for proof of the Riemann Hypothesis. Since the primes are Gaussian due to a simple induction on the binary derivative, this implies that the twin primes conjecture is true. We also provide absolutely convergent asymptotes for the numbers of Fermat and Mersenne primes in the appendices. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
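For readers who want to experiment, here is a sketch of the BiEntropy function as it is commonly formulated (Shannon entropies of successive binary derivatives, weighted exponentially toward the later derivatives; consult the paper for the exact variant it uses):

```python
from math import log2

def binary_entropy(p):
    """Shannon entropy of a Bernoulli(p) bit, with 0*log(0) = 0."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bientropy(bits):
    """BiEntropy of a bit string of length n >= 2: entropies of the
    binary derivatives d^0..d^(n-2), weighted by 2^k and normalized."""
    s = [int(b) for b in bits]
    n = len(s)
    total = 0.0
    for k in range(n - 1):
        p = sum(s) / len(s)
        total += binary_entropy(p) * 2 ** k
        s = [a ^ b for a, b in zip(s, s[1:])]   # next binary derivative
    return total / (2 ** (n - 1) - 1)

print(bientropy("0000"))          # constant string: exactly 0
print(bientropy("0101010101"))    # perfectly periodic: near 0
print(round(bientropy(format(97, "08b")), 3))  # 97 (a prime) in binary
```

Low BiEntropy flags strings whose binary derivatives quickly become constant (ordered numbers); the paper's observation is that primes systematically avoid this regime.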
Article
Two Faced Janus of Quantum Nonlocality
Entropy 2020, 22(3), 303; https://doi.org/10.3390/e22030303 - 06 Mar 2020
Cited by 13
Abstract
This paper is a new step towards understanding why “quantum nonlocality” is a misleading concept. Metaphorically speaking, “quantum nonlocality” is Janus faced. One face is an apparent nonlocality of the Lüders projection and another face is Bell nonlocality (a wrong conclusion that the [...] Read more.
This paper is a new step towards understanding why “quantum nonlocality” is a misleading concept. Metaphorically speaking, “quantum nonlocality” is Janus faced. One face is an apparent nonlocality of the Lüders projection and another face is Bell nonlocality (a wrong conclusion that the violation of Bell-type inequalities implies the existence of mysterious instantaneous influences between distant physical systems). According to the Lüders projection postulate, a quantum measurement performed on one of two distant entangled physical systems modifies their compound quantum state instantaneously. Therefore, if the quantum state is considered to be an attribute of the individual physical system and if one assumes that experimental outcomes are produced in a perfectly random way, one quickly arrives at a contradiction. This is a primary source of speculation about spooky action at a distance. Bell nonlocality as defined above was explained and rejected by several authors; thus, we concentrate in this paper on the apparent nonlocality of the Lüders projection. As already pointed out by Einstein, the quantum paradoxes disappear if one adopts the purely statistical interpretation of quantum mechanics (QM). In the statistical interpretation of QM, if probabilities are considered to be objective properties of random experiments, we show that the Lüders projection corresponds to the passage from joint probabilities describing the full set of data to marginal conditional probabilities describing particular subsets of data. If one adopts a subjective interpretation of probabilities, such as QBism, then the Lüders projection corresponds to standard Bayesian updating of the probabilities. The latter represent the degrees of belief of local agents about the outcomes of individual measurements that are, or will be, performed at distant locations. In both approaches, the probability transformation does not happen in physical space, but only in information space.
Thus, all speculations about spooky interactions or spooky predictions at a distance are simply misleading. Coming back to Bell nonlocality, we recall that in a recent paper we demonstrated, using exclusively the quantum formalism, that CHSH inequalities may be violated for some quantum states only because of the incompatibility of quantum observables and Bohr’s complementarity. Finally, we explain that our criticism of quantum nonlocality is in the spirit of Hertz-Boltzmann methodology of scientific theories. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)
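The statistical reading of the Lüders projection described above, the passage from a joint distribution to a conditional one, can be illustrated with a toy probability table for perfectly anti-correlated outcomes (a singlet-like pair measured in one fixed basis; the numbers are illustrative, not from the paper):

```python
# Joint outcome probabilities for two distant measurements A and B:
# perfectly anti-correlated, each marginal 50/50.
joint = {(+1, -1): 0.5, (-1, +1): 0.5, (+1, +1): 0.0, (-1, -1): 0.0}

def marginal_b(joint):
    """B's outcome distribution before any conditioning on A."""
    pb = {}
    for (a, b), p in joint.items():
        pb[b] = pb.get(b, 0.0) + p
    return pb

def conditional_b_given_a(joint, a_obs):
    """The 'update' after A's outcome is plain conditioning on the joint
    table: a move in the information space, not in the physical space."""
    pa = sum(p for (a, b), p in joint.items() if a == a_obs)
    return {b: p / pa for (a, b), p in joint.items() if a == a_obs}

print(marginal_b(joint))                 # B alone: 50/50
print(conditional_b_given_a(joint, +1))  # given A = +1: B = -1 with certainty
```

Nothing is transmitted to B's location; the conditional table simply describes the subset of experimental runs in which A registered +1, which is the abstract's point.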
Article
Exploring the Phase Space of Multi-Principal-Element Alloys and Predicting the Formation of Bulk Metallic Glasses
Entropy 2020, 22(3), 292; https://doi.org/10.3390/e22030292 - 02 Mar 2020
Cited by 1
Abstract
Multi-principal-element alloys share a set of thermodynamic and structural parameters that, in their range of adopted values, correlate to the tendency of the alloys to assume a solid solution, whether as a crystalline or an amorphous phase. Based on empirical correlations, this work [...] Read more.
Multi-principal-element alloys share a set of thermodynamic and structural parameters that, in their range of adopted values, correlate to the tendency of the alloys to assume a solid solution, whether as a crystalline or an amorphous phase. Based on empirical correlations, this work presents a computational method for the prediction of possible glass-forming compositions for a chosen alloy system as well as the calculation of their critical cooling rates. The obtained results compare well to experimental data for Pd-Ni-P, micro-alloyed Pd-Ni-P, Cu-Mg-Ca, and Cu-Zr-Ti. Furthermore, a random-number-generator-based algorithm is employed to explore glass-forming candidate alloys with a minimum critical cooling rate, reducing the number of datapoints necessary to find suitable glass-forming compositions. A comparison with experimental results for the quaternary Ti-Zr-Cu-Ni system shows a promising overlap of calculation and experiment, implying that it is a reasonable method to find candidates for glass-forming alloys with a sufficiently low critical cooling rate to allow the formation of bulk metallic glasses. Full article
(This article belongs to the Special Issue Crystallization Thermodynamics)
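The random-sampling step of such a search can be sketched generically (everything here is hypothetical: the composition generator and the smooth stand-in cost replace the paper's empirical correlations for the critical cooling rate, which are not reproduced):

```python
import random

def random_composition(rng, n_elements=3):
    """Uniform random point on the composition simplex (fractions sum to 1),
    via stick-breaking with sorted uniform cuts."""
    cuts = sorted(rng.random() for _ in range(n_elements - 1))
    edges = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(edges, edges[1:])]

def search_min_rc(critical_cooling_rate, n_samples=1000, seed=7):
    """Random sampling of compositions, keeping the lowest critical cooling
    rate found. `critical_cooling_rate` is a placeholder callable."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_samples):
        x = random_composition(rng)
        rc = critical_cooling_rate(x)
        if best is None or rc < best[0]:
            best = (rc, x)
    return best

# Hypothetical smooth stand-in: Rc minimal near one off-center composition.
toy_rc = lambda x: 1e6 * sum((xi - ti) ** 2 for xi, ti in zip(x, [0.4, 0.4, 0.2]))
rc, comp = search_min_rc(toy_rc)
print(round(rc, 1), [round(c, 2) for c in comp])
```

The point of the random search, in the paper as in this sketch, is that far fewer cost evaluations are needed than a full grid over the composition space would require.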
Article
Deng Entropy Weighted Risk Priority Number Model for Failure Mode and Effects Analysis
Entropy 2020, 22(3), 280; https://doi.org/10.3390/e22030280 - 28 Feb 2020
Cited by 11
Abstract
Failure mode and effects analysis (FMEA), as a commonly used risk management method, has been extensively applied to the engineering domain. A vital parameter in FMEA is the risk priority number (RPN), which is the product of occurrence (O), severity (S), and detection [...] Read more.
Failure mode and effects analysis (FMEA), as a commonly used risk management method, has been extensively applied to the engineering domain. A vital parameter in FMEA is the risk priority number (RPN), which is the product of occurrence (O), severity (S), and detection (D) of a failure mode. To deal with the uncertainty in the assessments given by domain experts, a novel Deng entropy weighted risk priority number (DEWRPN) for FMEA is proposed in the framework of Dempster–Shafer evidence theory (DST). DEWRPN takes into consideration the relative importance of both the risk factors and the FMEA experts. The degree of uncertainty in the assessments coming from experts is measured by the Deng entropy. An expert’s weight is composed of the three risk factors’ weights, obtained independently from the expert’s assessments. In DEWRPN, the strategy of assigning a weight to each expert is flexible and compatible with real decision-making situations. The entropy-based relative weight symbolizes the relative importance: the higher the degree of uncertainty of a risk factor from an expert, the lower the weight of the corresponding risk factor, and vice versa. We utilize the Deng entropy to construct the exponential weight of each risk factor as well as an expert’s relative importance on an FMEA item. A case study is adopted to verify the practicability and effectiveness of the proposed model. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
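The Deng entropy at the heart of the weighting scheme is straightforward to compute from a basic belief assignment; a minimal sketch (the example masses are illustrative, not taken from the case study):

```python
from math import log2

def deng_entropy(bba):
    """Deng entropy of a basic belief assignment.
    bba maps frozensets (focal elements) to masses summing to 1:
        E_d(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) )."""
    return -sum(m * log2(m / (2 ** len(A) - 1))
                for A, m in bba.items() if m > 0)

# On singleton-only assignments, Deng entropy reduces to Shannon entropy:
print(deng_entropy({frozenset("a"): 0.5, frozenset("b"): 0.5}))  # 1.0
# Mass on a multi-element focal set adds non-specificity on top:
print(deng_entropy({frozenset("ab"): 1.0}))  # log2(3) ≈ 1.585
```

In the DEWRPN scheme, larger values of this entropy for a given risk factor signal a more uncertain assessment and hence, per the abstract, a smaller weight for that factor.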
Article
Maxwell’s Demon in Quantum Mechanics
Entropy 2020, 22(3), 269; https://doi.org/10.3390/e22030269 - 27 Feb 2020
Cited by 1
Abstract
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counter-example. Since the Second Law is taken by many to provide an arrow [...] Read more.
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counter-example. Since the Second Law is taken by many to provide an arrow of time, the threat to its universality threatens the account of temporal directionality as well. Various attempts to “exorcise” the Demon, by proving that it is impossible for one reason or another, have been made throughout the years, but none of them were successful. We have shown (in a number of publications) by a general state-space argument that Maxwell’s Demon is compatible with classical mechanics, and that the most recent solutions, based on Landauer’s thesis, are not general. In this paper we demonstrate that Maxwell’s Demon is also compatible with quantum mechanics. We do so by analyzing a particular (but highly idealized) experimental setup and proving that it violates the Second Law. Our discussion is in the framework of standard quantum mechanics; we give two separate arguments in the framework of quantum mechanics with and without the projection postulate. We address in our analysis the connection between measurement and erasure interactions and we show how these notions are applicable in the microscopic quantum mechanical structure. We discuss what might be the quantum mechanical counterpart of the classical notion of “macrostates”, thus explaining why our Quantum Demon setup works not only at the micro level but also at the macro level, properly understood. One implication of our analysis is that the Second Law cannot provide a universal lawlike basis for an account of the arrow of time; this account has to be sought elsewhere. Full article
(This article belongs to the Special Issue Time and Entropy)
Article
Entropy of Conduction Electrons from Transport Experiments
Entropy 2020, 22(2), 244; https://doi.org/10.3390/e22020244 - 21 Feb 2020
Cited by 1
Abstract
The entropy of conduction electrons was evaluated utilizing the thermodynamic definition of the Seebeck coefficient as a tool. This analysis was applied to two different kinds of scientific questions that can—if at all—be only partially addressed by other methods. These are the field-dependence [...] Read more.
The entropy of conduction electrons was evaluated utilizing the thermodynamic definition of the Seebeck coefficient as a tool. This analysis was applied to two different kinds of scientific questions that can—if at all—be only partially addressed by other methods. These are the field-dependence of meta-magnetic phase transitions and the electronic structure in strongly disordered materials, such as alloys. We showed that the electronic entropy change in meta-magnetic transitions is not constant with the applied magnetic field, as is usually assumed. Furthermore, we traced the evolution of the electronic entropy with respect to the chemical composition of an alloy series. Insights about the strength and kind of interactions appearing in the exemplary materials can be identified in the experiments. Full article
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)
Article
Thermodynamic and Transport Properties of Equilibrium Debye Plasmas
Entropy 2020, 22(2), 237; https://doi.org/10.3390/e22020237 - 20 Feb 2020
Cited by 2
Abstract
The thermodynamic and transport properties of weakly non-ideal, high-density partially ionized hydrogen plasma are investigated, accounting for quantum effects due to the change in the energy spectrum of atomic hydrogen when the electron–proton interaction is considered embedded in the surrounding particles. The complexity [...] Read more.
The thermodynamic and transport properties of weakly non-ideal, high-density partially ionized hydrogen plasma are investigated, accounting for quantum effects due to the change in the energy spectrum of atomic hydrogen when the electron–proton interaction is considered embedded in the surrounding particles. The complexity of the rigorous approach led to the development of simplified models, able to include the neighbor-effects on the isolated system while remaining consistent with the traditional thermodynamic approach. High-density conditions have been simulated assuming particle interactions described by a screened Coulomb potential. Full article
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)
Article
Global Geometry of Bayesian Statistics
Entropy 2020, 22(2), 240; https://doi.org/10.3390/e22020240 - 20 Feb 2020
Abstract
In the previous work of the author, a non-trivial symmetry of the relative entropy in the information geometry of normal distributions was discovered. The same symmetry also appears in the symplectic/contact geometry of Hilbert modular cusps. Further, it was observed that a contact [...] Read more.
In the previous work of the author, a non-trivial symmetry of the relative entropy in the information geometry of normal distributions was discovered. The same symmetry also appears in the symplectic/contact geometry of Hilbert modular cusps. Further, it was observed that a contact Hamiltonian flow presents a certain Bayesian inference on normal distributions. In this paper, we describe Bayesian statistics and the information geometry in the language of current geometry in order to spread interest in statistics among general geometers and topologists. Then, we foliate the space of multivariate normal distributions by symplectic leaves to generalize the above result of the author. This foliation arises from the Cholesky decomposition of the covariance matrices. Full article
(This article belongs to the Special Issue Information Geometry III)
Article
Entropy, Information, and Symmetry; Ordered Is Symmetrical, II: System of Spins in the Magnetic Field
Entropy 2020, 22(2), 235; https://doi.org/10.3390/e22020235 - 19 Feb 2020
Cited by 2
Abstract
The second part of this paper develops an approach suggested in Entropy 2020, 22(1), 11, which relates ordering in physical systems to symmetrizing. Entropy is frequently interpreted as a quantitative measure of “chaos” or “disorder”. However, the notions of “chaos” and [...] Read more.
The second part of this paper develops an approach suggested in Entropy 2020, 22(1), 11, which relates ordering in physical systems to symmetrizing. Entropy is frequently interpreted as a quantitative measure of “chaos” or “disorder”. However, the notions of “chaos” and “disorder” are vague and subjective, to a great extent. This leads to numerous misinterpretations of entropy. We propose that disorder be viewed as an absence of symmetry and identify “ordering” with the symmetrizing of a physical system; in other words, with introducing elements of symmetry into an initially disordered physical system. We explore the initially disordered system of elementary magnets subjected to the external magnetic field H. Imposing symmetry restrictions diminishes the entropy of the system and decreases its temperature. The general case of the system of elementary magnets demonstrating j-fold symmetry is studied. The interrelation T_j = T/j takes place, where T and T_j are the temperatures of the non-symmetrized and j-fold-symmetrized systems of the magnets, correspondingly. Full article
(This article belongs to the Section Statistical Physics)

Communication
The Brevity Law as a Scaling Law, and a Possible Origin of Zipf’s Law for Word Frequencies
Entropy 2020, 22(2), 224; https://doi.org/10.3390/e22020224 - 17 Feb 2020
Cited by 5
Abstract
An important body of quantitative linguistics is constituted by a series of statistical laws about language usage. Despite the importance of these linguistic laws, some of them are poorly formulated, and, more importantly, there is no unified framework that encompasses all of them. This paper presents a new perspective from which to establish connections between different statistical linguistic laws. Characterizing each word type by two random variables, length (in number of characters) and absolute frequency, we show that the corresponding bivariate joint probability distribution shows a rich and precise phenomenology, with the type-length and type-frequency distributions as its two marginals, and with the conditional distribution of frequency at fixed length providing a clear formulation of the brevity-frequency phenomenon. The type-length distribution turns out to be well fitted by a gamma distribution (much better than by the previously proposed lognormal), and the conditional frequency distributions at fixed length display power-law-decay behavior with a fixed exponent α ≈ 1.4 and a characteristic-frequency crossover that scales as an inverse power δ ≈ 2.8 of length, which implies the fulfillment of a scaling law analogous to those found in the thermodynamics of critical phenomena. As a by-product, we find a possible model-free explanation for the origin of Zipf’s law, which should arise as a mixture of conditional frequency distributions governed by the crossover length-dependent frequency. Full article
(This article belongs to the Special Issue Information Theory and Language)
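The scaling law alluded to can be sketched in the form familiar from critical phenomena (our schematic notation, not necessarily the paper's exact one):

```latex
P(f \mid l) \;\sim\; f^{-\alpha}\, G\!\bigl(f / f_c(l)\bigr),
\qquad f_c(l) \propto l^{-\delta},
\qquad \alpha \approx 1.4,\quad \delta \approx 2.8,
```

where G is a scaling function that cuts off the power-law decay above the length-dependent crossover frequency f_c(l).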

Article
Generalised Measures of Multivariate Information Content
Entropy 2020, 22(2), 216; https://doi.org/10.3390/e22020216 - 14 Feb 2020
Cited by 4
Abstract
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party correspond to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then independently derived by combining the algebraic structures of joint and shared information content. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
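The free distributive lattice on n generators is a concrete, countable object: its elements correspond to the non-constant monotone Boolean functions of n variables, so their number is the n-th Dedekind number minus two. A brute-force enumeration (illustrative only, not code from the paper) confirms the small cases:

```python
from itertools import product

def monotone_functions(n):
    """Enumerate all monotone Boolean functions of n variables by brute force."""
    inputs = list(product([0, 1], repeat=n))
    fns = []
    for bits in product([0, 1], repeat=len(inputs)):
        f = dict(zip(inputs, bits))
        # f is monotone if x <= y pointwise implies f(x) <= f(y)
        if all(f[x] <= f[y]
               for x in inputs for y in inputs
               if all(a <= b for a, b in zip(x, y))):
            fns.append(bits)
    return fns

# Dedekind numbers: 6 monotone functions of 2 variables, 20 of 3 variables.
# Dropping the two constant functions leaves the free distributive lattice
# on n generators: 4 elements for n = 2, 18 for n = 3.
print(len(monotone_functions(2)))   # 6
print(len(monotone_functions(3)))   # 20
```

For three observers this already gives 18 distinct ways of sharing information with a third party, and the count grows super-exponentially with n.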

Article
Finite-Time Thermodynamic Model for Evaluating Heat Engines in Ocean Thermal Energy Conversion
Entropy 2020, 22(2), 211; https://doi.org/10.3390/e22020211 - 13 Feb 2020
Cited by 20
Abstract
Ocean thermal energy conversion (OTEC) converts the thermal energy stored in the ocean temperature difference between warm surface seawater and cold deep seawater into electricity. The temperature difference available to drive OTEC heat engines is only 15–25 K, which theoretically implies a low thermal efficiency. Research has been conducted to propose unique systems that can increase this thermal efficiency. Thermal efficiency is generally used as the system performance metric, and researchers have focused on exploiting the higher available temperature difference of heat engines to improve it, without considering the finite flow rate and sensible heat of seawater. In this study, we present a new thermodynamic concept for OTEC. The first step is to define the transferable thermal energy in OTEC by taking the equilibrium state of the seawater streams, rather than the atmospheric condition, as the dead state. Second, the model gives the available maximum work, a new concept of exergy, by minimizing the entropy generation while considering external heat loss. The maximum thermal energy and exergy allow the normalization of the first-law and second-law thermal efficiencies. These evaluation methods can be applied to optimized OTEC systems, and their effectiveness is confirmed. Full article
(This article belongs to the Special Issue Entropy in Renewable Energy Systems)
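The "theoretically low" efficiency follows directly from the Carnot bound: for a temperature difference of about 20 K on a warm-surface temperature of roughly 300 K,

```latex
\eta_{\max} \;=\; 1 - \frac{T_c}{T_h} \;\approx\; \frac{\Delta T}{T_h}
 \;=\; \frac{20\ \mathrm{K}}{300\ \mathrm{K}} \;\approx\; 6.7\%,
```

which is why the finite flow rate and sensible heat of the seawater streams, rather than the Carnot limit alone, become the decisive design constraints.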

Article
On the Irrationality of Being in Two Minds
Entropy 2020, 22(2), 174; https://doi.org/10.3390/e22020174 - 04 Feb 2020
Cited by 2
Abstract
This article presents a general framework that allows irrational decision making to be theoretically investigated and simulated. Rationality in human decision making under uncertainty is normatively prescribed by the axioms of probability theory in order to maximize utility. However, a substantial literature from psychology and cognitive science shows that human decisions regularly deviate from these axioms. Bistable probabilities are proposed as a principled and straightforward means for modeling (ir)rational decision making, which occurs when a decision maker is in “two minds”. We show that bistable probabilities can be formalized by positive-operator-valued projections in quantum mechanics. We find that (1) irrational decision making necessarily involves a wider spectrum of causal relationships than rational decision making, (2) the accessible information turns out to be greater in irrational decision making than in rational decision making, and (3) irrational decision making is quantum-like because it violates the Bell–Wigner polytope. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
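The Bell–Wigner polytope mentioned in (3) is delimited by Wigner-type inequalities. A minimal sketch (the standard spin-singlet version, not the paper's decision-theoretic construction) shows how quantum probabilities can violate one such facet:

```python
import math

def p_joint(theta_deg):
    """Quantum probability of ++ outcomes on a spin singlet for analyser
    angle difference theta: P = (1/2) sin^2(theta/2)."""
    return 0.5 * math.sin(math.radians(theta_deg) / 2) ** 2

# Bell-Wigner inequality: any local hidden-variable model must satisfy
# P(a+, b+) <= P(a+, c+) + P(c+, b+). Take angles a=0, c=30, b=60 degrees.
lhs = p_joint(60)                 # P(a+, b+)
rhs = p_joint(30) + p_joint(30)   # P(a+, c+) + P(c+, b+)
print(lhs, rhs, lhs > rhs)        # the quantum prediction violates the bound
```

Probability assignments that violate such an inequality lie outside the classical polytope, which is the sense in which the paper calls irrational decision making "quantum-like".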

Article
Nonlinear Fokker–Planck Equation Approach to Systems of Interacting Particles: Thermostatistical Features Related to the Range of the Interactions
Entropy 2020, 22(2), 163; https://doi.org/10.3390/e22020163 - 31 Jan 2020
Cited by 3
Abstract
Nonlinear Fokker–Planck equations (NLFPEs) constitute useful effective descriptions of some interacting many-body systems. Important instances of these nonlinear evolution equations are closely related to the thermostatistics based on the S_q power-law entropic functionals. Most applications of the connection between NLFPEs and the S_q entropies have focused on systems interacting through short-range forces. In the present contribution we revisit the NLFPE approach to interacting systems in order to clarify the role played by the range of the interactions, and to explore the possibility of developing similar treatments for systems with long-range interactions, such as those corresponding to Newtonian gravitation. In particular, we consider a system of particles interacting via forces following the inverse-square law and performing overdamped motion, described by a density obeying an integro-differential evolution equation that admits exact time-dependent solutions of the q-Gaussian form. These q-Gaussian solutions, which constitute a signature of S_q-thermostatistics, evolve in a similar but not identical way to the solutions of an appropriate nonlinear, power-law Fokker–Planck equation. Full article
(This article belongs to the Special Issue Entropy and Gravitation)
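The q-Gaussian form referred to is the standard one of S_q-thermostatistics,

```latex
G_q(x) \;\propto\; \bigl[\,1 - (1-q)\,\beta\,x^{2}\,\bigr]_{+}^{\frac{1}{1-q}},
```

which reduces to the ordinary Gaussian in the limit q → 1.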

Article
Evolution of Neuroaesthetic Variables in Portrait Paintings throughout the Renaissance
Entropy 2020, 22(2), 146; https://doi.org/10.3390/e22020146 - 26 Jan 2020
Cited by 3
Abstract
To compose art, artists rely on a set of sensory evaluations performed fluently by the brain. The outcomes of these evaluations, which we call neuroaesthetic variables, help to compose art with high aesthetic value. In this study, we probed whether these variables varied across art periods despite relatively unvaried neural function. We measured several neuroaesthetic variables in portrait paintings from the Early and High Renaissance, and from Mannerism. The variables included symmetry, balance, and contrast (chiaroscuro), as well as intensity and spatial complexities measured by two forms of normalized entropy. The results showed that the degree of symmetry remained relatively constant during the Renaissance. However, the balance of portraits decayed abruptly at the end of the Early Renaissance, that is, at the close of the 15th century. The intensity and spatial complexities, and thus entropies, of portraits also fell around the same time. Our data also showed that the decline of complexity and entropy can be attributed to the rise of chiaroscuro. With few exceptions, the values of the aesthetic variables for the top artists of the Renaissance resembled those of their peers. We conclude that neuroaesthetic variables have the flexibility to change in the brains of artists (and observers). Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
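As an illustration of the entropy-based complexity measures, a normalized intensity entropy can be computed from the grey-level histogram of a digitized painting. This is a minimal sketch assuming 8-bit grey levels, our reconstruction rather than the authors' exact pipeline:

```python
import numpy as np

def normalized_intensity_entropy(image, levels=256):
    """Shannon entropy of the grey-level histogram, normalized to [0, 1]."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(levels))

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128)            # constant canvas: no intensity variety
noisy = rng.integers(0, 256, (64, 64))   # uniform noise: maximal variety
print(normalized_intensity_entropy(flat))   # zero entropy for the flat canvas
print(normalized_intensity_entropy(noisy))  # close to 1 for uniform noise
```

A strong chiaroscuro concentrates mass in a few grey levels and therefore lowers this entropy, consistent with the trend the authors report.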

Article
Adapting Logic to Physics: The Quantum-Like Eigenlogic Program
Entropy 2020, 22(2), 139; https://doi.org/10.3390/e22020139 - 24 Jan 2020
Cited by 4
Abstract
Considering links between logic and physics is important because of the fast development of quantum information technologies in our everyday life. This paper discusses a new method in logic inspired by quantum theory using operators, named Eigenlogic. It expresses logical propositions using linear algebra. Logical functions are represented by operators, and logical truth tables correspond to the eigenvalue structure. The method extends the possibilities of classical logic by changing the semantics from the Boolean binary alphabet {0, 1}, using projection operators, to the binary alphabet {+1, −1}, employing reversible involution operators. Many-valued logical operators are also synthesized, for any alphabet, using operator methods based on Lagrange interpolation and on the Cayley–Hamilton theorem. Considering a superposition of logical input states, one obtains a fuzzy logic representation in which the fuzzy membership function is the quantum probability given by the Born rule. Historical parallels from Boole, Post, Poincaré and combinatory logic are presented in relation to probability theory, non-commutative quaternion algebra and Turing machines. An extension to first-order logic is proposed, inspired by Grover’s algorithm. Eigenlogic is essentially a logic of operators, and its truth-table logical semantics is provided by the eigenvalue structure, which is shown to be related to the universality of logical quantum gates, with a fundamental role played by non-commutativity and entanglement. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
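A minimal sketch of the operator semantics over the {0, 1} alphabet (our illustration of the general idea, not code from the paper): the truth table of a connective appears as the eigenvalue spectrum of a diagonal logical operator built from the projector Π = diag(0, 1).

```python
import numpy as np

# Seed projector for one logical input over the {0, 1} alphabet.
Pi = np.diag([0.0, 1.0])

# Conjunction of two inputs is the tensor product of the projectors;
# disjunction follows by inclusion-exclusion on operators.
AND = np.kron(Pi, Pi)
OR = np.kron(Pi, np.eye(2)) + np.kron(np.eye(2), Pi) - AND

# The eigenvalues (diagonal entries) reproduce the classical truth tables.
print(np.diag(AND))  # [0. 0. 0. 1.]  -> truth table of AND
print(np.diag(OR))   # [0. 1. 1. 1.]  -> truth table of OR
```

Applying these operators to a superposed input state then yields the fuzzy/Born-rule reading described in the abstract.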

Article
Electric Double Layers with Surface Charge Regulation Using Density Functional Theory
Entropy 2020, 22(2), 132; https://doi.org/10.3390/e22020132 - 22 Jan 2020
Cited by 5
Abstract
Surprisingly, the local structure of electrolyte solutions in electric double layers is primarily determined by the solvent. This is initially unexpected, as the solvent is usually a neutral species not subject to dominant Coulombic interactions. Part of the solvent's dominance in determining the local structure is simply due to the much larger number of solvent molecules in a typical electrolyte solution. The dominant local packing of the solvent then determines the space left for the charged species. Our classical density functional theory work demonstrates that the solvent structural effect strongly couples to the surface chemistry, which governs the charge and potential. In this article we address some outstanding questions relating to double-layer modeling. First, we address the role of ion-ion correlations that go beyond mean-field correlations. Second, we consider the effects of a density-dependent dielectric constant, which is crucial in the description of an electrolyte-vapor interface. Full article

Article
Eigenvalues of Two-State Quantum Walks Induced by the Hadamard Walk
Entropy 2020, 22(1), 127; https://doi.org/10.3390/e22010127 - 20 Jan 2020
Cited by 3
Abstract
The existence of eigenvalues of discrete-time quantum walks is deeply related to the localization of the walks. We reveal, for the first time, the distributions of the eigenvalues, obtained by the splitted generating function method (the SGF method), of the space-inhomogeneous quantum walks in one dimension treated in our previous studies. In particular, we clarify the characteristic parameter dependence of the eigenvalue distributions with the aid of numerical simulation. Full article
(This article belongs to the Special Issue Quantum Walks and Related Issues)

Article
Energy and Exergy Evaluation of a Two-Stage Axial Vapour Compressor on the LNG Carrier
Entropy 2020, 22(1), 115; https://doi.org/10.3390/e22010115 - 17 Jan 2020
Cited by 4
Abstract
Data from a two-stage axial vapor cryogenic compressor on a dual-fuel diesel–electric (DFDE) liquefied natural gas (LNG) carrier were measured and analyzed to investigate compressor energy and exergy efficiency under real operating conditions. The running parameters of the two-stage compressor were collected while changing the rpm of the main propeller shafts. As the compressor supply of vaporized gas to the main engines increases, so do the load and rpm of the propulsion electric motors, and vice versa. The results show that, as the main propulsion shaft speed varied from 46 to 56 rpm, the increased mass flow rate of vaporized LNG through the two-stage compressor influenced compressor performance. Average compressor energy efficiency is around 50%, while the exergy efficiency of the compressor is significantly lower over the whole measured range, on average around 34%. A change in ambient temperature from 0 to 50 °C also influences the compressor's exergy efficiency: higher exergy efficiency is achieved at lower ambient temperatures. As the temperature increases, overall compressor exergy efficiency decreases by about 7% on average over the whole analyzed range. A new energy-saving concept for increasing compressor efficiency, based on pre-cooling of the compressor's second stage, is also analyzed. The temperature at the second stage was varied in the range from 0 to −50 °C, which yields power savings of up to 26 kW for optimal running regimes. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
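For reference, the two figures of merit can be sketched in textbook form (a schematic definition; the paper's exact formulation may differ):

```latex
\eta_{\mathrm{en}} \;=\; \frac{w_{s}}{w},
\qquad
\eta_{\mathrm{ex}} \;=\; \frac{\Delta h \;-\; T_{0}\,\Delta s}{w},
```

where w is the actual specific compressor work, w_s the ideal isentropic work, Δh and Δs the enthalpy and entropy changes of the gas, and T_0 the ambient (dead-state) temperature; the explicit T_0 in the exergy efficiency is why it degrades as the ambient temperature rises.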

Article
Determining the Bulk Parameters of Plasma Electrons from Pitch-Angle Distribution Measurements
Entropy 2020, 22(1), 103; https://doi.org/10.3390/e22010103 - 16 Jan 2020
Cited by 7
Abstract
Electrostatic analysers measure the flux of plasma particles in velocity space and determine their velocity distribution function. There are occasions when science objectives require high time-resolution measurements, and the instrument operates in short measurement cycles, sampling only a portion of the velocity distribution function. One such high-resolution measurement strategy consists of sampling the two-dimensional pitch-angle distributions of the plasma particles, which describe the velocities of the particles with respect to the local magnetic field direction. Here, we investigate the accuracy of plasma bulk parameters derived from such high-resolution measurements. We simulate electron observations from the Solar Wind Analyser's (SWA) Electron Analyser System (EAS) on board Solar Orbiter. We show that fitting analysis of the synthetic datasets determines the plasma temperature and the kappa index of the distribution to within 10% of their actual values, even at large heliocentric distances where the expected solar wind flux is very low. Interestingly, we show that although measurement points with zero counts are not statistically significant, they provide information about the particle distribution function that becomes important when the particle flux is low. We also examine the convergence of the fitting algorithm for expected plasma conditions and discuss the sources of statistical and systematic uncertainties. Full article
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)
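The kappa distribution being fitted has the standard isotropic form

```latex
f(v) \;\propto\; \left[\,1 + \frac{v^{2}}{\kappa\,\theta^{2}}\,\right]^{-(\kappa+1)},
```

with θ a thermal speed; it develops a power-law suprathermal tail for small κ and recovers a Maxwellian in the limit κ → ∞.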

Article
Quantifying Athermality and Quantum Induced Deviations from Classical Fluctuation Relations
Entropy 2020, 22(1), 111; https://doi.org/10.3390/e22010111 - 16 Jan 2020
Abstract
In recent years, a quantum information theoretic framework has emerged for incorporating non-classical phenomena into fluctuation relations. Here, we elucidate this framework by exploring deviations from classical fluctuation relations resulting from the athermality of the initial thermal system and the quantum coherence of the system's energy supply. In particular, we develop Crooks-like equalities for an oscillator system prepared in either photon-added or photon-subtracted thermal states, and we derive a Jarzynski-like equality for average work extraction. We use these equalities to discuss the extent to which adding or subtracting a photon increases the informational content of a state, thereby amplifying the suppression of free-energy-increasing processes. We go on to derive a Crooks-like equality for an energy supply prepared in a pure binomial state, leading to a non-trivial contribution from energy and coherence to the resulting irreversibility. We show how the binomial-state equality relates to a previously derived coherent-state equality and offers a richer feature set. Full article
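The classical baselines from which these deviations are measured are the Crooks and Jarzynski relations,

```latex
\frac{P_{F}(W)}{P_{R}(-W)} \;=\; e^{\beta (W - \Delta F)},
\qquad
\bigl\langle e^{-\beta W} \bigr\rangle \;=\; e^{-\beta \Delta F};
```

the athermality and coherence corrections derived in the paper appear as modifications of these equalities.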

Article
Learning in Feedforward Neural Networks Accelerated by Transfer Entropy
Entropy 2020, 22(1), 102; https://doi.org/10.3390/e22010102 - 16 Jan 2020
Cited by 2
Abstract
Current neural network architectures are increasingly hard to train because of the growing size and complexity of the datasets used. Our objective is to design more efficient training algorithms that exploit causal relationships inferred from neural networks. Transfer entropy (TE) was initially introduced as an information transfer measure used to quantify the statistical coherence between events (time series). It was later related to causality, even though the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
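Transfer entropy itself is straightforward to estimate for discrete time series. A minimal plug-in estimator with history length 1 (an illustrative sketch, not the paper's implementation):

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of TE from x to y with history length 1:
    TE = sum p(y1, y0, x0) * log[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p = c / n
        cond_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        cond_hist = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p * log2(cond_full / cond_hist)
    return te

random.seed(1)
x = [random.randint(0, 1) for _ in range(20000)]
y = [0] + x[:-1]                 # y copies x with one step of delay
print(transfer_entropy(x, y))    # close to 1 bit: x fully determines y's future
print(transfer_entropy(y, x))    # close to 0: no information flows back
```

The asymmetry between the two directions is what lets TE serve as a (directional) relevance score for connections in a network.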

Article
The Convex Information Bottleneck Lagrangian
Entropy 2020, 22(1), 98; https://doi.org/10.3390/e22010098 - 14 Jan 2020
Cited by 4
Abstract
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian, L_IB(T;β) = I(T;Y) − βI(X;T), for many values of β ∈ [0,1]. The curve of maximal I(T;Y) for a given I(X;T) is then drawn, and a representation with the desired predictability and compression is selected. It is known that, when Y is a deterministic function of X, the IB curve cannot be explored in this way, and another Lagrangian has been proposed to tackle the problem, the squared IB Lagrangian: L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show that we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier; that is, we prove that we can solve the original constrained problem with a single optimization. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
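The family of Lagrangians described can be sketched as follows (schematic notation, with u any suitable monotonically increasing, strictly convex function):

```latex
\mathcal{L}^{u}_{\mathrm{IB}}(T;\beta) \;=\; I(T;Y) \;-\; \beta\, u\bigl(I(X;T)\bigr),
```

with u(x) = x recovering the original IB Lagrangian and u(x) = x² the squared variant.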

Article
On Heat Transfer Performance of Cooling Systems Using Nanofluid for Electric Motor Applications
Entropy 2020, 22(1), 99; https://doi.org/10.3390/e22010099 - 14 Jan 2020
Cited by 3
Abstract
This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling systems of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. To solve the governing equations, computational fluid dynamics and 3D fluid motion analysis are used. The base fluid is water with a laminar flow. The fluid Reynolds number and the turn-number of the spiral channels are the evaluation parameters. The effect of the nanoparticle volume fraction in the base fluid on the heat transfer performance of the cooling system is studied. Increasing the volume fraction of nanoparticles improves the heat transfer performance of the cooling system. On the other hand, a high volume fraction of nanofluid increases the pressure drop of the coolant and the required pumping power. This paper aims to find a trade-off among these parameters by studying both the fluid flow and the heat transfer characteristics of the nanofluid. Full article

Article
On Unitary t-Designs from Relaxed Seeds
Entropy 2020, 22(1), 92; https://doi.org/10.3390/e22010092 - 12 Jan 2020
Cited by 3
Abstract
The capacity to pick a unitary at random from the whole unitary group is a powerful tool across physics and quantum information. A unitary t-design tackles this challenge in an efficient way, yet constructions to date rely on heavy constraints. In particular, they are composed of ensembles of unitaries which, for technical reasons, must contain inverses and whose entries are algebraic. In this work, we reduce the requirements for generating an ε-approximate unitary t-design. To do so, we first construct a specific n-qubit random quantum circuit composed of a sequence of randomly chosen 2-qubit gates, drawn from a set of unitaries which is approximately universal on U(4), yet need not contain unitaries and their inverses nor be composed, in general, of unitaries whose entries are algebraic; we dub this a relaxed seed. We then show that this relaxed seed, when used as a basis for our construction, efficiently gives rise to an ε-approximate unitary t-design, where the depth of our random circuit scales as poly(n, t, log(1/ε)), thereby overcoming the two requirements which limited previous constructions. We suspect the result found here is not optimal and can be improved, particularly because the number of gates in the relaxed seeds introduced here grows with n and t. We conjecture that constant-sized seeds, such as those usually present in the literature, are sufficient. Full article
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)
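For reference, an ε-approximate unitary t-design is an ensemble ν whose t-th moment operator is ε-close to that of the Haar measure, for example in diamond norm (one common formulation):

```latex
\Bigl\|\, \mathcal{G}^{(t)}_{\nu} - \mathcal{G}^{(t)}_{\mathrm{Haar}} \,\Bigr\|_{\diamond} \;\le\; \varepsilon,
\qquad
\mathcal{G}^{(t)}_{\mu}(\rho) \;=\; \mathbb{E}_{U\sim\mu}\!\left[\, U^{\otimes t}\, \rho \,(U^{\dagger})^{\otimes t} \right].
```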

Article
Development of Novel Lightweight Dual-Phase Al-Ti-Cr-Mn-V Medium-Entropy Alloys with High Strength and Ductility
Entropy 2020, 22(1), 74; https://doi.org/10.3390/e22010074 - 06 Jan 2020
Cited by 1
Abstract
A novel lightweight Al-Ti-Cr-Mn-V medium-entropy alloy (MEA) system was developed using a non-equiatomic approach, and the alloys were produced through arc melting and drop casting. These alloys comprise a body-centered cubic (BCC) and face-centered cubic (FCC) dual phase with a density of approximately 4.5 g/cm³; the fraction of the BCC phase and the morphology of the FCC phase can be controlled by incorporating other elements. The results of compression tests indicated that these Al-Ti-Cr-Mn-V alloys exhibit a prominent compressive strength (~1940 MPa) and ductility (~30%). Moreover, homogenized samples maintain a high compressive strength of 1900 MPa and similar ductility (30%). Owing to their high specific compressive strength (0.433 GPa·cm³/g) and excellent combination of strength and ductility, the cast lightweight Al-Ti-Cr-Mn-V MEAs are a promising alloy system for applications in the transportation and energy industries. Full article
(This article belongs to the Special Issue High-Entropy Materials)
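The quoted specific strength is simply the compressive strength divided by the density:

```latex
\frac{\sigma}{\rho} \;\approx\; \frac{1.94\ \mathrm{GPa}}{4.5\ \mathrm{g\,cm^{-3}}}
 \;\approx\; 0.43\ \mathrm{GPa \cdot cm^{3}\, g^{-1}},
```

consistent with the quoted 0.433 GPa·cm³/g for a measured density slightly below 4.5 g/cm³.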

Article
Energy Disaggregation Using Elastic Matching Algorithms
Entropy 2020, 22(1), 71; https://doi.org/10.3390/e22010071 - 06 Jan 2020
Cited by 8
Abstract
In this article an energy disaggregation architecture using elastic matching algorithms is presented. The architecture uses a database of reference energy consumption signatures and compares them with incoming energy consumption frames using template matching. In contrast to machine-learning-based approaches, which require a significant amount of data to train a model, elastic matching-based approaches have no model training process and perform recognition using template matching directly. Five different elastic matching algorithms were evaluated across different datasets, and the experimental results showed that the minimum variance matching algorithm outperforms all the other evaluated matching algorithms. The best-performing minimum variance matching algorithm improved the energy disaggregation accuracy by 2.7% when compared with the baseline dynamic time warping algorithm. Full article
(This article belongs to the Section Signal and Data Analysis)
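As an illustration of the template-matching idea, here is a minimal sketch (assumed, not the authors' implementation) that classifies an incoming energy-consumption frame by dynamic time warping, the baseline algorithm mentioned in the abstract; the appliance signatures are hypothetical:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def disaggregate(frame, signatures):
    """Return the label of the reference signature closest to the frame."""
    return min(signatures, key=lambda k: dtw_distance(frame, signatures[k]))

# Hypothetical reference signatures (watts) for two appliances.
signatures = {
    "kettle": np.array([0, 2000, 2000, 2000, 0], dtype=float),
    "fridge": np.array([0, 120, 120, 120, 120, 0], dtype=float),
}
frame = np.array([0, 1950, 2010, 1990, 0], dtype=float)
print(disaggregate(frame, signatures))  # kettle
```

A minimum-variance matcher, the paper's best performer, would replace `dtw_distance` with a cost that penalizes the variance of the per-step differences along the warping path.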

Article
Bounds on Mixed State Entanglement
Entropy 2020, 22(1), 62; https://doi.org/10.3390/e22010062 - 01 Jan 2020
Cited by 2
Abstract
In the general framework of d1 × d2 mixed states, we derive an explicit bound for bipartite negative partial transpose (NPT) entanglement based on the mixedness characterization of the physical system. The derived result is very general, being based only on the assumption of finite dimensionality. In addition, it turns out to be of experimental interest, since some purity-measuring protocols are known. Exploiting the bound in the particular case of thermal entanglement, a way to connect thermodynamic features to the monogamy of quantum correlations is suggested, and some recent results on the subject are given a physically clear explanation. Full article
(This article belongs to the Section Quantum Information)
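For readers who want to experiment with NPT entanglement numerically, the following sketch (not from the paper) computes the negativity and purity of a two-qubit Werner state via the partial transpose; the state and dimensions are illustrative assumptions:

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite state."""
    d1, d2 = dims
    r = rho.reshape(d1, d2, d1, d2)
    return r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

def negativity(rho, dims=(2, 2)):
    """Sum of absolute values of negative eigenvalues of the partial transpose."""
    ev = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return float(-ev[ev < 0].sum())

def purity(rho):
    """Tr(rho^2), the mixedness characterization used in purity measurements."""
    return float(np.trace(rho @ rho).real)

# Two-qubit Werner state: NPT-entangled exactly when p > 1/3.
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
def werner(p):
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

print(round(negativity(werner(1.0)), 3), round(purity(werner(1.0)), 3))  # 0.5 1.0
```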

Article
Nonlinear Information Bottleneck
Entropy 2019, 21(12), 1181; https://doi.org/10.3390/e21121181 - 30 Nov 2019
Cited by 15
Abstract
Information bottleneck (IB) is a technique for extracting the information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed “bottleneck” random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently has been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case the optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently proposed “variational IB” method on several real-world datasets. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)

Article
OTEC Maximum Net Power Output Using Carnot Cycle and Application to Simplify Heat Exchanger Selection
Entropy 2019, 21(12), 1143; https://doi.org/10.3390/e21121143 - 22 Nov 2019
Cited by 18
Abstract
Ocean thermal energy conversion (OTEC) uses the natural thermal gradient in the sea. It has been investigated to make it competitive with conventional power plants, as it has huge potential and can produce energy steadily throughout the year. This has been done mostly by focusing on improving cycle performances or central elements of OTEC, such as heat exchangers. It is difficult to choose a suitable heat exchanger for OTEC with the separate evaluations of the heat transfer coefficient and pressure drop that are usually found in the literature. Accordingly, this paper presents a method to evaluate heat exchangers for OTEC. On the basis of finite-time thermodynamics, the maximum net power output for different heat exchangers, using both heat transfer performance and pressure drop, was assessed and compared. This method was successfully applied to three heat exchangers. The most suitable heat exchanger was found to lead to a maximum net power output 158% higher than that of the least suitable heat exchanger. For a difference of 3.7% in the net power output, a difference of 22% in the Reynolds numbers was found. Therefore, the Reynolds numbers also play a significant role in the choice of heat exchangers, as they affect the pumping power required to circulate the seawater. A sensitivity analysis showed that seawater temperature does not affect the choice of heat exchangers, even though the net power output was found to decrease by up to 10% for every 1 °C drop in the temperature difference. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
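The finite-time-thermodynamics setting can be illustrated with the classic Curzon–Ahlborn efficiency at maximum power, 1 − √(Tc/Th), which bounds what any endoreversible OTEC cycle can deliver; the seawater temperatures below are typical assumed values, not figures from the paper:

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Reversible (zero-power) upper bound on efficiency."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency at maximum power for an endoreversible (finite-time) engine."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# Assumed OTEC temperatures: 25 C surface water, 5 C deep water (in kelvin).
th, tc = 298.15, 278.15
print(f"Carnot:          {carnot_efficiency(th, tc):.4f}")  # ~0.0671
print(f"Max-power (CA):  {curzon_ahlborn_efficiency(th, tc):.4f}")  # ~0.0341
```

The small temperature gradient is why heat-exchanger performance and seawater pumping power dominate the net output of an OTEC plant.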

Article
Topological Information Data Analysis
Entropy 2019, 21(9), 869; https://doi.org/10.3390/e21090869 - 06 Sep 2019
Cited by 13
Abstract
This paper presents methods that quantify the structure of statistical interactions within a given data set, and were applied in a previous article. It establishes new results on the k-multivariate mutual information (I_k) inspired by the topological formulation of information introduced in a series of studies. In particular, we show that the vanishing of all I_k for 2 ≤ k ≤ n of n random variables is equivalent to their statistical independence. Pursuing the work of Hu Kuo Ting and Te Sun Han, we show that information functions provide coordinates for binary variables, and that they are analytically independent from the probability simplex for any set of finite variables. The maximal positive I_k identifies the variables that co-vary the most in the population, whereas the minimal negative I_k identifies synergistic clusters and the variables that differentiate and segregate the most in the population. Finite data size effects and estimation biases severely constrain the effective computation of the information topology on data, and we provide simple statistical tests for the undersampling bias and the k-dependences. We give an example of the application of these methods to genetic expression and unsupervised cell-type classification. The methods unravel biologically relevant subtypes, with a sample size of 41 genes and with few errors. This establishes generic basic methods to quantify the epigenetic information storage and a unified epigenetic unsupervised learning formalism. We propose that higher-order statistical interactions and non-identically distributed variables are constitutive characteristics of biological systems that should be estimated in order to unravel their significant statistical structure and diversity. The topological information data analysis presented here allows for precisely estimating this higher-order structure characteristic of biological systems. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
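A small numeric illustration (assumed, not from the article) of a negative I_k identifying synergy: for the XOR triple Z = X xor Y with independent fair bits, the alternating-sum form of the triple mutual information gives −1 bit:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) distribution."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def interaction_information(joint):
    """I_3 = sum of single-variable entropies - pairwise entropies + triple entropy."""
    h1 = sum(entropy(joint.sum(axis=ax)) for ax in [(1, 2), (0, 2), (0, 1)])
    h2 = sum(entropy(joint.sum(axis=ax)) for ax in [2, 1, 0])
    return h1 - h2 + entropy(joint)

# XOR triple: Z = X xor Y with X, Y fair independent bits -> synergistic cluster.
joint = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        joint[x, y, x ^ y] = 0.25
print(interaction_information(joint))  # -1.0
```

The sign convention varies across the literature; the alternating sum above matches the convention in which synergy is negative, as in the abstract.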

Article
Image Encryption Scheme with Compressed Sensing Based on New Three-Dimensional Chaotic System
Entropy 2019, 21(9), 819; https://doi.org/10.3390/e21090819 - 22 Aug 2019
Cited by 21
Abstract
In this paper, a new three-dimensional chaotic system is proposed for image encryption. The core of the encryption algorithm is the combination of a chaotic system and compressed sensing, which can complete image encryption and compression at the same time. The Lyapunov exponents, bifurcation diagram and complexity of the new three-dimensional chaotic system are analyzed. The performance analysis shows that the chaotic system has two positive Lyapunov exponents and high complexity. In the encryption scheme, the new chaotic system is used to generate the measurement matrix for compressed sensing, and the Arnold transform is used to further scramble the image. The proposed method has better reconstruction ability within the compressible range of the algorithm compared with other methods. The experimental results show that the proposed encryption scheme has a good encryption effect and image compression capability. Full article
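As a sketch of the scrambling step, the Arnold cat map (x, y) → (x + y, x + 2y) mod N permutes the pixels of a square image and is exactly invertible; this toy implementation is illustrative, not the paper's code:

```python
import numpy as np

def arnold_scramble(img, rounds=1):
    """Arnold cat map (x, y) -> (x + y, x + 2y) mod N applied to a square image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, rounds=1):
    """Inverse map (x, y) -> (2x - y, y - x) mod N (the matrix has determinant 1)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(rounds):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8))
assert np.array_equal(arnold_unscramble(arnold_scramble(img, 3), 3), img)
```

In a full scheme like the one described above, scrambling is combined with a chaos-generated measurement matrix, so that compression and encryption happen in one sensing step.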

Article
Distinguishing between Clausius, Boltzmann and Pauling Entropies of Frozen Non-Equilibrium States
Entropy 2019, 21(8), 799; https://doi.org/10.3390/e21080799 - 15 Aug 2019
Cited by 3
Abstract
In conventional textbook thermodynamics, entropy is a quantity that may be calculated by different methods, for example experimentally from heat capacities (following Clausius) or statistically from numbers of microscopic quantum states (following Boltzmann and Planck). It turned out that these methods do not necessarily provide mutually consistent results, and for equilibrium systems their difference was explained by introducing a residual zero-point entropy (following Pauling), apparently violating the Nernst theorem. At finite temperatures, associated statistical entropies, which count microstates that do not contribute to a body’s heat capacity, differ systematically from the Clausius entropy, and are of particular relevance as measures for metastable, frozen-in non-equilibrium structures and for symbolic information processing (following Shannon). In this paper, it is suggested that the Clausius, Boltzmann, Pauling and Shannon entropies be considered as distinct, though related, physical quantities with different key properties, in order to avoid the confusion caused by loosely speaking about just “entropy” while actually referring to different kinds of it. For instance, zero-point entropy exclusively belongs to the Boltzmann rather than the Clausius entropy, while the Nernst theorem holds rigorously for the Clausius rather than the Boltzmann entropy. The discussion of these terms is underpinned by a brief historical review of the emergence of the corresponding fundamental thermodynamic concepts. Full article
(This article belongs to the Special Issue Crystallization Thermodynamics)

Article
Comparing Information Metrics for a Coupled Ornstein–Uhlenbeck Process
Entropy 2019, 21(8), 775; https://doi.org/10.3390/e21080775 - 08 Aug 2019
Cited by 8
Abstract
It is often the case when studying complex dynamical systems that a statistical formulation can provide the greatest insight into the underlying dynamics. When discussing the behavior of such a system as it evolves in time, it is useful to have a notion of a metric between two given states. A popular measure of information change in a system under perturbation has been the relative entropy of the states, as this notion allows us to quantify the difference between states of a system at different times. In this paper, we investigate the relaxation problem given by single and coupled Ornstein–Uhlenbeck (O-U) processes and compare the information length with entropy-based metrics (relative entropy, Jensen divergence) as well as others. By measuring the total information length in the long time limit, we show that it is only the information length that preserves the linear geometry of the O-U process. In the coupled O-U process, the information length is shown to be capable of detecting changes in both components of the system even when other metrics would detect almost nothing in one of the components. We show in detail that the information length is sensitive to the evolution of subsystems. Full article
(This article belongs to the Special Issue Statistical Mechanics and Mathematical Physics)
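A minimal numeric sketch (with assumed parameters, not the paper's setup) of one of the compared metrics: the relative entropy between the Gaussian state of a single O-U process at time t and its equilibrium state, using the closed-form Gaussian KL divergence:

```python
import math

def ou_moments(mu0, var0, gamma, D, t):
    """Mean and variance of an O-U process dx = -gamma*x dt + sqrt(2D) dW at time t."""
    mu = mu0 * math.exp(-gamma * t)
    var_eq = D / gamma
    var = var_eq + (var0 - var_eq) * math.exp(-2 * gamma * t)
    return mu, var

def kl_gauss(mu1, var1, mu2, var2):
    """Relative entropy D(N(mu1, var1) || N(mu2, var2)) in nats."""
    return 0.5 * (math.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

gamma, D = 1.0, 0.5
mu_t, var_t = ou_moments(mu0=2.0, var0=0.1, gamma=gamma, D=D, t=1.0)
# Distance from the state at t = 1 to the equilibrium state N(0, D/gamma):
print(round(kl_gauss(mu_t, var_t, 0.0, D / gamma), 4))  # 0.5445
```

The information length, by contrast, integrates infinitesimal statistical changes along the whole relaxation path rather than comparing only the two endpoint states.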

Article
Power, Efficiency and Fluctuations in a Quantum Point Contact as Steady-State Thermoelectric Heat Engine
Entropy 2019, 21(8), 777; https://doi.org/10.3390/e21080777 - 08 Aug 2019
Cited by 16
Abstract
The trade-off between large power output, high efficiency and small fluctuations in the operation of heat engines has recently received interest in the context of thermodynamic uncertainty relations (TURs). Here we provide a concrete illustration of this trade-off by theoretically investigating the operation of a quantum point contact (QPC) with an energy-dependent transmission function as a steady-state thermoelectric heat engine. As a starting point, we review and extend previous analysis of the power production and efficiency. Thereafter the power fluctuations and the bound jointly imposed on the power, efficiency, and fluctuations by the TURs are analyzed as additional performance quantifiers. We allow for arbitrary smoothness of the transmission probability of the QPC, which exhibits a close to step-like dependence in energy, and consider both the linear and the non-linear regime of operation. It is found that for a broad range of parameters, the power production reaches nearly its theoretical maximum value, with efficiencies more than half of the Carnot efficiency and at the same time with rather small fluctuations. Moreover, we show that by demanding a non-zero power production, in the linear regime a stronger TUR can be formulated in terms of the thermoelectric figure of merit. Interestingly, this bound holds also in a wide parameter regime beyond linear response for our QPC device. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)

Article
Two Measures of Dependence
Entropy 2019, 21(8), 778; https://doi.org/10.3390/e21080778 - 08 Aug 2019
Cited by 8
Abstract
Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon’s mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding. Full article
(This article belongs to the Special Issue Information Measures with Applications)
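The reduction to Shannon's measures at α = 1 can be checked numerically; the sketch below (illustrative, not the paper's definitions verbatim) evaluates the Rényi divergence of order α between discrete distributions and compares it with the KL divergence:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha in nats; alpha -> 1 recovers KL divergence."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if abs(alpha - 1.0) < 1e-12:
        return float(np.sum(p * np.log(p / q)))  # KL limit
    return float(np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
kl = renyi_divergence(p, q, 1.0)
near = renyi_divergence(p, q, 1.0 + 1e-6)
print(abs(kl - near) < 1e-5)  # True
```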

Article
Electron Traversal Times in Disordered Graphene Nanoribbons
Entropy 2019, 21(8), 737; https://doi.org/10.3390/e21080737 - 27 Jul 2019
Cited by 7
Abstract
Using the partition-free time-dependent Landauer–Büttiker formalism for transient current correlations, we study the traversal times taken for electrons to cross graphene nanoribbon (GNR) molecular junctions. We demonstrate electron traversal signatures that vary with disorder and orientation of the GNR. These findings can be related to operational frequencies of GNR-based devices and their consequent rational design. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)

Article
Empirical Estimation of Information Measures: A Literature Guide
Entropy 2019, 21(8), 720; https://doi.org/10.3390/e21080720 - 24 Jul 2019
Cited by 17
Abstract
We give a brief survey of the literature on the empirical estimation of entropy, differential entropy, relative entropy, mutual information and related information measures. While those quantities are of central importance in information theory, universal algorithms for their estimation are increasingly important in data science, machine learning, biology, neuroscience, economics, language, and other experimental sciences. Full article
(This article belongs to the Special Issue Information Measures with Applications)
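Two standard estimators from this literature, the plug-in (maximum-likelihood) estimate and its Miller–Madow bias correction, can be sketched as follows (synthetic data, illustrative only):

```python
import numpy as np

def plugin_entropy(samples):
    """Maximum-likelihood (plug-in) entropy estimate in nats."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def miller_madow_entropy(samples):
    """Plug-in estimate plus the first-order (K - 1)/(2n) bias correction."""
    k = len(np.unique(samples))
    n = len(samples)
    return plugin_entropy(samples) + (k - 1) / (2.0 * n)

rng = np.random.default_rng(1)
samples = rng.integers(0, 8, size=200)  # uniform over 8 symbols; true H = ln 8
print(round(plugin_entropy(samples), 3), round(miller_madow_entropy(samples), 3))
```

The plug-in estimate systematically underestimates entropy at small sample sizes; the Miller–Madow term corrects the leading order of that bias.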

Article
Dynamic Maximum Entropy Reduction
Entropy 2019, 21(7), 715; https://doi.org/10.3390/e21070715 - 22 Jul 2019
Cited by 11
Abstract
Any physical system can be regarded on different levels of description varying by how detailed the description is. We propose a method called Dynamic MaxEnt (DynMaxEnt) that provides a passage from the more detailed evolution equations to equations for the less detailed state variables. The method is based on explicit recognition of the state and conjugate variables, which can relax towards the respective quasi-equilibria in different ways. Detailed state variables are reduced using the usual principle of maximum entropy (MaxEnt), whereas relaxation of conjugate variables guarantees that the reduced equations are closed. Moreover, an infinite chain of consecutive DynMaxEnt approximations can be constructed. The method is demonstrated on a particle with friction, complex fluids (equipped with conformation and Reynolds stress tensors), hyperbolic heat conduction and magnetohydrodynamics. Full article
(This article belongs to the Special Issue Entropy and Non-Equilibrium Statistical Mechanics)

Communication
Derivations of the Core Functions of the Maximum Entropy Theory of Ecology
Entropy 2019, 21(7), 712; https://doi.org/10.3390/e21070712 - 21 Jul 2019
Cited by 11
Abstract
The Maximum Entropy Theory of Ecology (METE) is a theoretical framework of macroecology that makes a variety of realistic ecological predictions about how species richness, abundance of species, metabolic rate distributions, and spatial aggregation of species interrelate in a given region. In the METE framework, “ecological state variables” (representing total area, total species richness, total abundance, and total metabolic energy) describe macroecological properties of an ecosystem. METE incorporates these state variables into constraints on underlying probability distributions. The method of Lagrange multipliers and maximization of information entropy (MaxEnt) lead to the predicted functional forms of distributions of interest. We demonstrate how information entropy is maximized for the general case of a distribution in which empirical information provides constraints on the overall predictions. We then show how METE’s two core functions are derived. These functions, called the “Spatial Structure Function” and the “Ecosystem Structure Function”, are the core pieces of the theory, from which all the predictions of METE follow (including the Species Area Relationship, the Species Abundance Distribution, and various metabolic distributions). Primarily, we consider the discrete distributions predicted by METE. We also explore the parameter space defined by METE’s state variables and Lagrange multipliers. We aim to provide a comprehensive resource for ecologists who want to understand the derivations and assumptions of the basic mathematical structure of METE. Full article
(This article belongs to the Special Issue Information Theory Applications in Biology)
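The MaxEnt machinery behind METE can be illustrated on the simplest case: maximizing entropy on {0, ..., n_max} subject to a fixed mean yields p_n proportional to exp(-lam*n), with the Lagrange multiplier lam found numerically. The sketch below is a generic toy, not METE's actual structure functions, and assumes a target mean below the uniform mean n_max/2:

```python
import numpy as np

def maxent_abundance(n_max, mean_target, tol=1e-10):
    """MaxEnt distribution on {0, ..., n_max} with a fixed mean: p_n ~ exp(-lam * n).
    The Lagrange multiplier lam is found by bisection (assumes mean_target < n_max / 2)."""
    n = np.arange(n_max + 1)
    def mean_of(lam):
        w = np.exp(-lam * n)
        p = w / w.sum()
        return float((p * n).sum()), p
    lo, hi = 0.0, 50.0  # mean_of is decreasing in lam on this interval
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_of(mid)[0] > mean_target:
            lo = mid
        else:
            hi = mid
    return mean_of(0.5 * (lo + hi))[1]

p = maxent_abundance(100, mean_target=5.0)
print(round(float((p * np.arange(101)).sum()), 6))  # 5.0
```

METE's core functions arise the same way, but with several simultaneous constraints (area, richness, abundance, energy) and hence several coupled Lagrange multipliers.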

Article
Rateless Codes-Based Secure Communication Employing Transmit Antenna Selection and Harvest-To-Jam under Joint Effect of Interference and Hardware Impairments
Entropy 2019, 21(7), 700; https://doi.org/10.3390/e21070700 - 16 Jul 2019
Cited by 5
Abstract
In this paper, we propose a rateless codes-based communication protocol to provide security for wireless systems. In the proposed protocol, a source uses the transmit antenna selection (TAS) technique to transmit Fountain-encoded packets to a destination in the presence of an eavesdropper. Moreover, a cooperative jammer node harvests energy from the radio frequency (RF) signals of the source and the interference sources to generate jamming noise at the eavesdropper. The data transmission terminates as soon as the destination has received a sufficient number of encoded packets to decode the original data of the source. To obtain secure communication, the destination must receive sufficient encoded packets before the eavesdropper does. The combination of the TAS and harvest-to-jam techniques achieves security and energy efficiency by reducing the number of data transmissions, increasing the quality of the data channel, decreasing the quality of the eavesdropping channel, and supplying energy to the jammer. The main contribution of this paper is to derive exact closed-form expressions for the outage probability (OP), the probability of successful and secure communication (SS), the intercept probability (IP) and the average number of time slots used by the source over a Rayleigh fading channel under the joint impact of co-channel interference and hardware impairments. Monte Carlo simulations are then presented to verify the theoretical results. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
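As a sanity check on the kind of closed-form expression derived in such work, the textbook outage probability of a single Rayleigh-fading link, OP = 1 − exp(−γth/γ̄), can be verified by Monte Carlo simulation; this generic single-link setup is an assumption, not the paper's full protocol:

```python
import numpy as np

def outage_probability_closed_form(snr_avg, snr_th):
    """Rayleigh fading: the instantaneous SNR is exponential, so OP = 1 - exp(-th/avg)."""
    return 1.0 - np.exp(-snr_th / snr_avg)

def outage_probability_monte_carlo(snr_avg, snr_th, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    h2 = rng.exponential(scale=1.0, size=n)  # |h|^2 for a Rayleigh-fading channel
    return float(np.mean(snr_avg * h2 < snr_th))

snr_avg, snr_th = 10.0, 2.0
cf = outage_probability_closed_form(snr_avg, snr_th)
mc = outage_probability_monte_carlo(snr_avg, snr_th)
print(abs(cf - mc) < 0.01)  # True
```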

Article
Multiobjective Optimization of a Plate Heat Exchanger in a Waste Heat Recovery Organic Rankine Cycle System for Natural Gas Engines
Entropy 2019, 21(7), 655; https://doi.org/10.3390/e21070655 - 03 Jul 2019
Cited by 33
Abstract
A multiobjective optimization of an organic Rankine cycle (ORC) evaporator, operating with toluene as the working fluid, is presented in this paper for waste heat recovery (WHR) from the exhaust gases of a 2 MW Jenbacher JMS 612 GS-N.L. gas internal combustion engine. Indirect evaporation between the exhaust gas and the organic fluid in the parallel plate heat exchanger (ITC2) implied irreversible heat transfer and high investment costs, which were considered as the objective functions to be minimized. Energy and exergy balances were applied to the system components, in addition to the phenomenological equations in the ITC2, to calculate global energy indicators, such as the thermal efficiency of the configuration, the heat recovery efficiency, the overall energy conversion efficiency, the absolute increase of engine thermal efficiency, and the reduction of the brake-specific fuel consumption of the system integrated with the gas engine. The results allowed calculation of the plate spacing, plate height, plate width, and chevron angle that minimized the investment cost and entropy generation of the equipment, reaching 22.04 m2 in the heat transfer area, 693.87 kW in the energy transfer by heat recovery from the exhaust gas, and 41.6% in the overall thermal efficiency of the ORC as a bottoming cycle for the engine. These results contribute to the inclusion of this technology in the industrial sector as a consequence of the improvement in thermal efficiency and economic viability. Full article
(This article belongs to the Special Issue Thermodynamic Optimization)

Article
Energy and New Economic Approach for Nearly Zero Energy Hotels
Entropy 2019, 21(7), 639; https://doi.org/10.3390/e21070639 - 28 Jun 2019
Cited by 13
Abstract
The paper addresses an important long-standing question regarding the energy efficiency renovation of existing buildings, in this case hotels, towards nearly zero-energy building (nZEB) status. The renovation of existing hotels to achieve nZEB performance is one of the forefront goals of the EU’s energy policy for 2050. The achievement of the nZEB target for hotels is necessary not only to comply with changing regulations and legislation, but also to foster competitiveness and to secure new funding. Indeed, nZEB hotel status allows for the reduction of operating costs and the increase of energy security, meeting the market’s and guests’ expectations. At present, there is no set national nZEB value for hotels to be attained, despite the fact that hotels are among the most energy-intensive buildings. This paper presents the case study of the energy retrofit of an existing historical hotel located in southern Italy (Syracuse) in order to achieve nZEB status. Starting from the energy audit, the paper proposes a step-by-step approach to nZEB performance, with a perspective on the costs, in order to identify the most effective energy solutions. Such an approach allows useful insights regarding energy and economic–financial strategies for achieving nZEB standards to be highlighted. Moreover, the results of this paper provide stakeholders with useful information for quantifying the technical convenience and economic profitability of reaching an nZEB target, in order to anticipate the expenses required by future energy retrofit programs. Full article

Article
Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples
Entropy 2019, 21(6), 623; https://doi.org/10.3390/e21060623 - 25 Jun 2019
Cited by 2
Abstract
Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables—the one with minimal entropy—is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences. Full article
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
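The undersampling bias discussed above is easy to reproduce: the naive plug-in MI estimate between two independent variables comes out spuriously positive when the large-entropy variable is undersampled. The sketch below is generic, not the authors' Bayesian estimator:

```python
import numpy as np

def plugin_mutual_information(x, y):
    """Plug-in (maximum-likelihood) MI estimate in bits from paired samples."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
x = rng.integers(0, 20, size=50)  # "large-entropy" variable, undersampled
y = rng.integers(0, 2, size=50)   # independent low-entropy variable
print(round(plugin_mutual_information(x, y), 3))
```

For independent variables, the classical bias of this estimator is roughly (Kx − 1)(Ky − 1)/(2N ln 2) bits, here about 0.27 bits, even though the true MI is zero.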

Article
Changed Temporal Structure of Neuromuscular Control, Rather Than Changed Intersegment Coordination, Explains Altered Stabilographic Regularity after a Moderate Perturbation of the Postural Control System
Entropy 2019, 21(6), 614; https://doi.org/10.3390/e21060614 - 21 Jun 2019
Cited by 6
Abstract
Sample entropy (SaEn) applied to center-of-pressure (COP) data provides a measure of the regularity of human postural control. Two mechanisms could contribute to altered COP regularity: first, an altered temporal structure (temporal regularity) of postural movements (H1); or second, altered coordination between segment movements (coordinative complexity; H2). The current study used rapid, voluntary head-shaking to perturb the postural control system, thus producing changes in COP regularity, in order to assess the two hypotheses. Sixteen healthy participants (age 26.5 ± 3.5 years; seven females), whose postural movements were tracked via 39 reflective markers, performed trials in which they first stood quietly on a force plate for 30 s, then shook their head for 10 s, and finally stood quietly for another 90 s. A principal component analysis (PCA) performed on the kinematic data extracted the main postural movement components. Temporal regularity was determined by calculating SaEn on the time series of these movement components. Coordinative complexity was determined by assessing the relative explained variance of the first five components. H1 was supported, but H2 was not. These results suggest that moderate perturbations of the postural control system produce altered temporal structures of the main postural movement components, but do not necessarily change the coordinative structure of intersegment movements. Full article
(This article belongs to the Section Complexity)
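Sample entropy, the regularity measure used in the study above, can be computed in a few lines. The function below is an illustrative sketch (not the authors' code), using the common defaults of template length m = 2 and a tolerance r expressed in the units of the series:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series (illustrative sketch).

    Counts template pairs of length m (B) and length m + 1 (A) whose
    Chebyshev distance stays below r, excluding self-matches, and
    returns SampEn = -ln(A / B).  Lower values mean more regularity.
    """
    n = len(x)

    def count(k):
        # Number of pairs of length-k templates that match within r.
        c = 0
        for i in range(n - k):
            for j in range(i + 1, n - k):
                if max(abs(x[i + t] - x[j + t]) for t in range(k)) < r:
                    c += 1
        return c

    b = count(m)
    a = count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A strictly periodic series yields a SampEn near zero, while an irregular series of the same length yields a markedly larger value, which is what makes the measure usable as a regularity index for COP data.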
Article
Structural Characteristics of Two-Sender Index Coding
Entropy 2019, 21(6), 615; https://doi.org/10.3390/e21060615 - 21 Jun 2019
Cited by 4
Abstract
This paper studies index coding with two senders. In this setup, source messages are distributed among the senders, possibly with common messages. In addition, there are multiple receivers, with each receiver having some messages a priori, known as side-information, and requesting one unique message such that each message is requested by only one receiver. Index coding in this setup is called two-sender unicast index coding (TSUIC). The main goal is to find the shortest aggregate normalized codelength, which is expressed as the optimal broadcast rate. In this work, for a given TSUIC problem, we first form three independent sub-problems, each consisting only of those messages that are available either in just one of the senders or in both senders. Then, we express the optimal broadcast rate of the TSUIC problem as a function of the optimal broadcast rates of those independent sub-problems. In this way, we discover the structural characteristics of TSUIC. For the proofs of our results, we utilize confusion graphs and coding techniques used in single-sender index coding. To adapt the confusion graph technique to TSUIC, we introduce a new graph-coloring approach that is different from normal graph coloring, which we call two-sender graph coloring, and propose a way of grouping the vertices to analyze the number of colors used. We further determine a class of TSUIC instances where a certain type of side-information can be removed without affecting their optimal broadcast rates. Finally, we generalize the results of a class of TSUIC problems to multiple senders. Full article
(This article belongs to the Special Issue Multiuser Information Theory II)
Article
Machine Learning Techniques to Identify Antimicrobial Resistance in the Intensive Care Unit
Entropy 2019, 21(6), 603; https://doi.org/10.3390/e21060603 - 18 Jun 2019
Cited by 8
Abstract
The presence of bacteria with resistance to specific antibiotics is one of the greatest threats to the global health system. According to the World Health Organization, antimicrobial resistance has already reached alarming levels in many parts of the world, involving a social and economic burden for the patient, for the system, and for society in general. Because of the critical health status of patients in the intensive care unit (ICU), time is crucial for identifying bacteria and their resistance to antibiotics. Since common antibiotic resistance tests require between 24 and 48 h after the culture is collected, we propose to apply machine learning (ML) techniques to determine whether a bacterium will be resistant to different families of antimicrobials. For this purpose, clinical and demographic features from the patient, as well as data from cultures and antibiograms, are considered. From a population point of view, we also show graphically the relationship between different bacteria and families of antimicrobials by performing correspondence analysis. Results of the ML techniques reveal non-linear relationships that help to identify antimicrobial resistance in the ICU, with performance dependent on the family of antimicrobials. A change in the trend of antimicrobial resistance is also evidenced. Full article
Article
A New Deep Learning Based Multi-Spectral Image Fusion Method
Entropy 2019, 21(6), 570; https://doi.org/10.3390/e21060570 - 05 Jun 2019
Cited by 11
Abstract
In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method based on a deep neural network. In our method, a Siamese convolutional neural network (CNN) is applied to automatically generate a weight map which represents the saliency of each pixel for a pair of source images. The CNN automatically encodes an image into a feature domain for classification. By applying the proposed method, the key problems in image fusion, namely activity-level measurement and fusion-rule design, can be addressed in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more perceptually natural to the human visual system. In addition, the visual qualitative effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that our proposed method achieves competitive results in terms of both quantitative assessment and visual quality. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Article
Non-Thermal Quantum Engine in Transmon Qubits
Entropy 2019, 21(6), 545; https://doi.org/10.3390/e21060545 - 29 May 2019
Cited by 16
Abstract
The design and implementation of quantum technologies necessitates the understanding of thermodynamic processes in the quantum domain. In stark contrast to macroscopic thermodynamics, at the quantum scale processes generically operate far from equilibrium and are governed by fluctuations. Thus, experimental insight and empirical findings are indispensable in developing a comprehensive framework. To this end, we theoretically propose an experimentally realistic quantum engine that uses transmon qubits as working substance. We solve the dynamics analytically and calculate its efficiency. Full article
Article
EEG Characterization of the Alzheimer’s Disease Continuum by Means of Multiscale Entropies
Entropy 2019, 21(6), 544; https://doi.org/10.3390/e21060544 - 28 May 2019
Cited by 18
Abstract
Alzheimer’s disease (AD) is a neurodegenerative disorder with high prevalence, known for its highly disabling symptoms. The aim of this study was to characterize the alterations in the irregularity and the complexity of the brain activity along the AD continuum. Both irregularity and complexity can be studied by applying entropy-based measures throughout multiple temporal scales. In this regard, multiscale sample entropy (MSE) and refined multiscale spectral entropy (rMSSE) were calculated from electroencephalographic (EEG) data. Five minutes of resting-state EEG activity were recorded from 51 healthy controls, 51 subjects with mild cognitive impairment (MCI), 51 mild AD patients (ADMIL), 50 moderate AD patients (ADMOD), and 50 severe AD patients (ADSEV). Our results show statistically significant differences (p-values < 0.05, FDR-corrected Kruskal–Wallis test) between the five groups at each temporal scale. Additionally, average slope values and areas under the MSE and rMSSE curves revealed significant changes in complexity, mainly for the controls vs. MCI, MCI vs. ADMIL and ADMOD vs. ADSEV comparisons (p-values < 0.05, FDR-corrected Mann–Whitney U-test). These findings indicate that MSE and rMSSE reflect the neuronal disturbances associated with the development of dementia, and may contribute to the development of new tools to track the progression of AD. Full article
(This article belongs to the Special Issue Entropy Applications in EEG/MEG)
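The multiscale step underlying MSE is plain coarse-graining: the series is averaged over non-overlapping windows whose length equals the scale, and an entropy measure is then computed on each coarse-grained series. A minimal sketch of that step (illustrative, not the study's code):

```python
def coarse_grain(x, scale):
    """MSE coarse-graining: average non-overlapping windows of length
    `scale`.  Scale 1 returns (a copy of) the original series; larger
    scales shorten and smooth it.  An entropy measure such as sample
    entropy is then computed on the result, giving one value per scale."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]
```

For example, `coarse_grain([1, 2, 3, 4, 5, 6], 2)` yields `[1.5, 3.5, 5.5]`; a trailing remainder shorter than the window is discarded, as is conventional.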
Article
Is Independence Necessary for a Discontinuous Phase Transition within the q-Voter Model?
Entropy 2019, 21(5), 521; https://doi.org/10.3390/e21050521 - 23 May 2019
Cited by 10
Abstract
We ask about the possibility of a discontinuous phase transition and the related social hysteresis within the q-voter model with anticonformity. Previously, it was claimed that within the q-voter model the social hysteresis can emerge only because of an independent behavior, and that for the model with anticonformity only continuous phase transitions are possible. However, this claim was derived from the model in which the size of the influence group needed for conformity was the same as the size of the group needed for anticonformity. Here, we abandon this assumption on the equality of the two types of social response and introduce a generalized model, in which the size of the influence group needed for conformity, q_c, and the size of the influence group needed for anticonformity, q_a, are independent variables and, in general, q_c ≠ q_a. We investigate the model on the complete graph, similarly to what was done for the original q-voter model with anticonformity, and we show that such a generalized model displays both types of phase transitions depending on the parameters q_c and q_a. Full article
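A single Monte Carlo update of a generalized q-voter model with anticonformity on a complete graph can be sketched as follows. This is an illustrative reading of the setup, not the authors' code; in particular, the parameter `p` (probability of acting as an anticonformist) is an assumption of the sketch:

```python
import random

def qvoter_step(spins, qc, qa, p):
    """One update of a generalized q-voter model with anticonformity on a
    complete graph (illustrative sketch).

    Conformity: if qc randomly chosen neighbours are unanimous, the agent
    adopts their state.  Anticonformity: if qa chosen neighbours are
    unanimous, the agent takes the opposite state.  `p` is the assumed
    probability of responding as an anticonformist.
    """
    i = random.randrange(len(spins))
    others = [j for j in range(len(spins)) if j != i]
    if random.random() < p:                       # anticonformity response
        group = random.sample(others, qa)
        if len({spins[j] for j in group}) == 1:
            spins[i] = -spins[group[0]]
    else:                                         # conformity response
        group = random.sample(others, qc)
        if len({spins[j] for j in group}) == 1:
            spins[i] = spins[group[0]]
```

With p = 0 the dynamics reduces to pure conformity, which preserves consensus; any p > 0 lets anticonformity break consensus, and the phase behaviour then depends on how q_c and q_a are chosen.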
Article
Quantum Probes for Ohmic Environments at Thermal Equilibrium
Entropy 2019, 21(5), 486; https://doi.org/10.3390/e21050486 - 12 May 2019
Cited by 9
Abstract
It is often the case that the environment of a quantum system may be described as a bath of oscillators with an ohmic density of states. In turn, the precise characterization of these classes of environments is a crucial tool to engineer decoherence or to tailor quantum information protocols. Recently, the use of quantum probes in characterizing ohmic environments at zero temperature has been discussed, showing that a single qubit provides precise estimation of the cutoff frequency. On the other hand, thermal noise often spoils quantum probing schemes, and for this reason we here extend the analysis to a complex system at thermal equilibrium. In particular, we discuss the interplay between thermal fluctuations and time evolution in determining the precision attainable by quantum probes. Our results show that the presence of thermal fluctuations degrades the precision for low values of the cutoff frequency, i.e., values of the order ω_c ∼ T (in natural units). For larger values of ω_c, decoherence is mostly due to the structure of the environment, rather than to thermal fluctuations, such that quantum probing by a single qubit is still an effective estimation procedure. Full article
(This article belongs to the Special Issue Open Quantum Systems (OQS) for Quantum Technologies)
Article
Solidification Microstructures of the Ingots Obtained by Arc Melting and Cold Crucible Levitation Melting in TiNbTaZr Medium-Entropy Alloy and TiNbTaZrX (X = V, Mo, W) High-Entropy Alloys
Entropy 2019, 21(5), 483; https://doi.org/10.3390/e21050483 - 10 May 2019
Cited by 18
Abstract
The solidification microstructures of the TiNbTaZr medium-entropy alloy (MEA) and TiNbTaZrX (X = V, Mo, and W) high-entropy alloys (HEAs), including the TiNbTaZrMo bio-HEA, were investigated. Equiaxed dendrite structures were observed in the ingots that were prepared by arc melting, regardless of the position in the ingots and the alloy system. In addition, no significant difference in the solidification microstructure was observed in the TiNbTaZrMo bio-HEAs between the arc-melted (AM) ingots and the cold crucible levitation melted (CCLM) ingots. A cold shut was observed in the AM ingots, but not in the CCLM ingots. The interdendrite regions tended to be enriched in Ti and Zr in the TiNbTaZr MEA and the TiNbTaZrX (X = V, Mo, and W) HEAs. The distribution coefficients during solidification, which were estimated by thermodynamic calculations, could explain the distribution of the constituent elements in the dendrite and interdendrite regions. The thermodynamic calculations indicated that an increase in the concentration of the low-melting-temperature V (2183 K) leads to a monotonic decrease in the liquidus temperature (TL), and that increases in the concentrations of the high-melting-temperature Mo (2896 K) and W (3695 K) lead to a monotonic increase in TL in the TiNbTaZrX_x (X = V, Mo, and W; x = 0–2) HEAs. Full article
(This article belongs to the Special Issue High-Entropy Materials)
Article
3D CNN-Based Speech Emotion Recognition Using K-Means Clustering and Spectrograms
Entropy 2019, 21(5), 479; https://doi.org/10.3390/e21050479 - 08 May 2019
Cited by 34
Abstract
Detecting human intentions and emotions helps improve human–robot interactions. Emotion recognition has been a challenging research direction in the past decade. This paper proposes an emotion recognition system based on the analysis of speech signals. First, we split each speech signal into overlapping frames of the same length. Next, we extract an 88-dimensional vector of audio features, including Mel-Frequency Cepstral Coefficients (MFCC), pitch, and intensity, for each of the respective frames. In parallel, the spectrogram of each frame is generated. In the final preprocessing step, by applying k-means clustering to the extracted features of all frames of each audio signal, we select the k most discriminant frames, namely keyframes, to summarize the speech signal. Then, the sequence of the corresponding spectrograms of the keyframes is encapsulated in a 3D tensor. These tensors are used to train and test a 3D Convolutional Neural Network (CNN) using a 10-fold cross-validation approach. The proposed 3D CNN has two convolutional layers and one fully connected layer. Experiments are conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE), Ryerson Multimedia Laboratory (RML), and eNTERFACE’05 databases. The results are superior to the state-of-the-art methods reported in the literature. Full article
(This article belongs to the Special Issue Statistical Machine Learning for Human Behaviour Analysis)
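The keyframe-selection step described above — cluster the per-frame feature vectors and keep one representative frame per cluster — might be sketched as below. The helper name, the deterministic evenly spaced initialisation, and the squared-Euclidean metric are assumptions of this sketch, not the paper's exact procedure:

```python
def select_keyframes(features, k, iters=20):
    """Run k-means over per-frame feature vectors and return the indices
    of the frames nearest to the final centroids ('keyframes')."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    # Deterministic, evenly spaced initialisation (an assumption).
    step = max(1, len(features) // k)
    centroids = [list(features[i * step]) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in features:                      # assignment step
            groups[min(range(k), key=lambda j: d2(f, centroids[j]))].append(f)
        for j, g in enumerate(groups):          # update step
            if g:
                centroids[j] = [sum(col) / len(g) for col in zip(*g)]
    return sorted({min(range(len(features)), key=lambda i: d2(features[i], c))
                   for c in centroids})
```

Returning the frame nearest each centroid (rather than the centroid itself) keeps the summary made of real frames, so their spectrograms can be stacked directly into the 3D tensor.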
Article
Distributed Hypothesis Testing with Privacy Constraints
Entropy 2019, 21(5), 478; https://doi.org/10.3390/e21050478 - 07 May 2019
Cited by 7
Abstract
We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to make a decision on whether the null or alternative hypothesis is in effect. We first provide a general lower bound on the type-II exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example. Full article
Article
What Caused What? A Quantitative Account of Actual Causation Using Dynamical Causal Networks
Entropy 2019, 21(5), 459; https://doi.org/10.3390/e21050459 - 02 May 2019
Cited by 17
Abstract
Actual causation is concerned with the question: “What caused what?” Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system’s causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the “what caused what?” question. Counterfactual accounts of actual causation, based on graphical models paired with system interventions, have demonstrated initial success in addressing specific problem cases, in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation, that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements based on system interventions and partitions, which considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation. Full article
(This article belongs to the Special Issue Integrated Information Theory)
Article
A Method for Diagnosing Gearboxes of Means of Transport Using Multi-Stage Filtering and Entropy
Entropy 2019, 21(5), 441; https://doi.org/10.3390/e21050441 - 27 Apr 2019
Cited by 15
Abstract
The paper presents a method of processing vibration signals designed to detect damage to wheels of gearboxes for means of transport. This method uses entropy calculation and multi-stage filtering, computed by means of digital filters and the Walsh–Hadamard transform, to process signals. The presented method enables the extraction of vibration symptoms of gear damage from a complex vibration signal of a gearbox. The combination of multi-stage filtering and entropy enables the elimination of fast-changing vibration impulses, which interfere with the damage diagnosis process, and makes it possible to obtain a synthetic signal that provides information about the state of the gearing. The paper demonstrates the usefulness of the developed method in the diagnosis of a gearbox in which two types of gearing damage were simulated: tooth chipping and damage to the working surface of the teeth. The research shows that the application of the proposed method of vibration signal processing enables observation of the qualitative and quantitative changes in the entropy signal after denoising, which are unambiguous symptoms of the diagnosed damage. Full article
(This article belongs to the Section Signal and Data Analysis)
Article
Bounded Rational Decision-Making from Elementary Computations That Reduce Uncertainty
Entropy 2019, 21(4), 375; https://doi.org/10.3390/e21040375 - 06 Apr 2019
Cited by 8
Abstract
In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou–Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures. Full article
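The inverse Pigou–Dalton transfer at the heart of an "elementary computation" can be made concrete: moving probability mass from a less likely outcome to a more likely one never increases Shannon entropy. A minimal sketch (the function names and natural-log convention are choices of this sketch, not the authors' notation):

```python
import math

def shannon_entropy(p):
    """Shannon entropy of a probability vector (natural logarithm)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def elementary_computation(p, i, j, eps):
    """One inverse Pigou-Dalton transfer: move mass eps from the less
    likely outcome j to the more likely outcome i (requires p[i] >= p[j]).
    Such a transfer can only decrease the entropy, i.e., it reduces
    uncertainty, which is what makes cost functions that respect the
    induced preorder monotonic in the uncertainty reduction."""
    assert p[i] >= p[j] and 0 <= eps <= p[j]
    q = list(p)
    q[i] += eps
    q[j] -= eps
    return q
```

For example, applying the transfer to p = [0.5, 0.3, 0.2] with eps = 0.1 from outcome 2 to outcome 0 yields a distribution of strictly lower entropy, while the total mass stays 1.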
Article
Increase in Mutual Information During Interaction with the Environment Contributes to Perception
Entropy 2019, 21(4), 365; https://doi.org/10.3390/e21040365 - 04 Apr 2019
Cited by 8
Abstract
Perception and motor interaction with physical surroundings can be analyzed through the changes in probability laws governing two possible outcomes of neuronal activity, namely the presence or absence of spikes (binary states). Perception and motor interaction with the physical environment are partly accounted for by a reduction in entropy within the probability distributions of binary states of neurons in distributed neural circuits, given the knowledge about the characteristics of stimuli in the physical surroundings. This reduction in the total entropy of multiple pairs of circuits in networks, by an amount equal to the increase of mutual information, occurs as sensory information is processed successively from lower to higher cortical areas or between different areas at the same hierarchical level, but belonging to different networks. The increase in mutual information is partly accounted for by temporal coupling as well as synaptic connections, as proposed by Bahmer and Gupta (Front. Neurosci. 2018). We propose that robust increases in mutual information, measuring the association between the characteristics of sensory inputs and the connectivity patterns of neural circuits, are partly responsible for perception and successful motor interactions with physical surroundings. The increase in mutual information, given the knowledge about environmental sensory stimuli and the type of motor response produced, is responsible for the coupling between action and perception. In addition, the processing of sensory inputs within neural circuits, with no prior knowledge of the occurrence of a sensory stimulus, increases Shannon information. Consequently, the increase in surprise serves to increase the evidence for the sensory model of physical surroundings. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
Article
First-Stage Prostate Cancer Identification on Histopathological Images: Hand-Driven versus Automatic Learning
Entropy 2019, 21(4), 356; https://doi.org/10.3390/e21040356 - 02 Apr 2019
Cited by 7
Abstract
Analysis of histopathological images is the most reliable procedure for identifying prostate cancer. Most studies try to develop computer-aided systems to address the Gleason grading problem. In contrast, we delve into the discrimination between healthy and cancerous tissues at its earliest stage, focusing only on the information contained in the automatically segmented gland candidates. We propose a hand-driven learning approach, in which we perform an exhaustive hand-crafted feature extraction stage, combining in a novel way descriptors of morphology, texture, fractals and contextual information of the candidates under study. Then, we carry out an in-depth statistical analysis to select the most relevant features that constitute the inputs to the optimised machine-learning classifiers. Additionally, we apply deep-learning algorithms, for the first time on segmented prostate glands, by modifying the popular VGG19 neural network. We fine-tuned the last convolutional block of the architecture to provide the model with specific knowledge about the gland images. The hand-driven learning approach, using a nonlinear Support Vector Machine, slightly outperforms the rest of the experiments, with a final multi-class accuracy of 0.876 ± 0.026 in the discrimination between false glands (artefacts), benign glands and Gleason grade 3 glands. Full article
Article
Entanglement 25 Years after Quantum Teleportation: Testing Joint Measurements in Quantum Networks
Entropy 2019, 21(3), 325; https://doi.org/10.3390/e21030325 - 26 Mar 2019
Cited by 24
Abstract
Twenty-five years after the invention of quantum teleportation, the concept of entanglement has gained enormous popularity. This is especially nice to those who remember that entanglement was not even taught at universities until the 1990s. Today, entanglement is often presented as a resource, the resource of quantum information science and technology. However, entanglement is exploited twice in quantum teleportation. First, entanglement is the “quantum teleportation channel”, i.e., entanglement between distant systems. Second, entanglement appears in the eigenvectors of the joint measurement that Alice, the sender, has to perform jointly on the quantum state to be teleported and her half of the “quantum teleportation channel”, i.e., entanglement enabling entirely new kinds of quantum measurements. I emphasize how poorly this second kind of entanglement is understood. In particular, I use quantum networks in which each party connected to several nodes performs a joint measurement to illustrate that the quantumness of such joint measurements remains elusive, escaping today’s available tools to detect and quantify it. Full article
Article
Image Encryption Based on Pixel-Level Diffusion with Dynamic Filtering and DNA-Level Permutation with 3D Latin Cubes
Entropy 2019, 21(3), 319; https://doi.org/10.3390/e21030319 - 24 Mar 2019
Cited by 48
Abstract
Image encryption is one of the essential tasks in image security. In this paper, we propose a novel approach that integrates a hyperchaotic system, pixel-level Dynamic Filtering, DNA computing, and operations on 3D Latin Cubes, namely DFDLC, for image encryption. Specifically, the approach consists of five stages: (1) a newly proposed 5D hyperchaotic system with two positive Lyapunov exponents is applied to generate a pseudorandom sequence; (2) for each pixel in an image, a filtering operation with different templates, called dynamic filtering, is conducted to diffuse the image; (3) DNA encoding is applied to the diffused image and then the DNA-level image is transformed into several 3D DNA-level cubes; (4) a Latin cube operation is applied to each DNA-level cube; and (5) all the DNA cubes are integrated and decoded to a 2D cipher image. Extensive experiments are conducted on public testing images, and the results show that the proposed DFDLC can achieve state-of-the-art results in terms of several evaluation criteria. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
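Stage (3), DNA encoding, conventionally maps each 8-bit pixel to four bases, two bits per base; eight complement-respecting coding rules are valid. A sketch with one such rule (the particular rule chosen here is an assumption, not necessarily the one used in the paper):

```python
RULE = {0b00: "A", 0b01: "G", 0b10: "C", 0b11: "T"}  # one of 8 valid rules
INV = {base: bits for bits, base in RULE.items()}

def dna_encode(pixel):
    """Encode an 8-bit pixel as four DNA bases, most-significant pair first."""
    return "".join(RULE[(pixel >> s) & 0b11] for s in (6, 4, 2, 0))

def dna_decode(bases):
    """Inverse of dna_encode: map four bases back to one 8-bit value."""
    value = 0
    for base in bases:
        value = (value << 2) | INV[base]
    return value
```

For instance, `dna_encode(0b11100100)` gives `"TCGA"`, and decoding inverts the encoding exactly, which is what allows stage (5) to recover a well-defined 2D cipher image.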
Article
Probability Distributions with Singularities
Entropy 2019, 21(3), 312; https://doi.org/10.3390/e21030312 - 21 Mar 2019
Cited by 7
Abstract
In this paper we review some general properties of probability distributions which exhibit singular behavior. After introducing the matter with several examples based on various models of statistical mechanics, we discuss, with the help of such paradigms, the underlying mathematical mechanism producing the singularity, as well as other topics such as the condensation of fluctuations, the relationships with ordinary phase transitions, the giant response associated with anomalous fluctuations, and the interplay with fluctuation relations. Full article

Article
Using Permutations for Hierarchical Clustering of Time Series
Entropy 2019, 21(3), 306; https://doi.org/10.3390/e21030306 - 21 Mar 2019
Cited by 2
Abstract
Two distances based on permutations are considered to measure the similarity of two time series according to their strength of dependency. The distance measures are used together with different linkages to get hierarchical clustering methods of time series by dependency. We apply these [...] Read more.
Two distances based on permutations are considered to measure the similarity of two time series according to their strength of dependency. The distance measures are used together with different linkages to obtain hierarchical clustering methods for time series by dependency. We apply these distances to both simulated and real data series. For the simulated time series, the distances show good clustering results in the cases of both linear and non-linear dependencies. The effects of the embedding dimension and the linkage method are also analyzed. Finally, several real data series are properly clustered using the proposed method. Full article
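A permutation-based dependence statistic of this general kind can be sketched as follows; the rescaled pattern-match rate below is an illustrative choice under stated assumptions, not necessarily either of the paper's two distances:

```python
import math
import numpy as np
from itertools import permutations

def ordinal_patterns(x, m=3):
    """Map each length-m window of x to the index of its ordinal (permutation) pattern."""
    pats = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([pats[tuple(np.argsort(x[i:i + m]))]
                     for i in range(len(x) - m + 1)])

def permutation_dependence(x, y, m=3):
    """Fraction of aligned windows sharing the same ordinal pattern, rescaled
    so that independence (expected match rate 1/m!) maps near 0 and identical
    dynamics map to 1. One simple permutation-based dependence measure."""
    px, py = ordinal_patterns(x, m), ordinal_patterns(y, m)
    match = float(np.mean(px == py))
    base = 1.0 / math.factorial(m)
    return max(0.0, (match - base) / (1.0 - base))
```

A distance for hierarchical clustering can then be taken as one minus such a dependence value.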

Article
Limiting Uncertainty Relations in Laser-Based Measurements of Position and Velocity Due to Quantum Shot Noise
Entropy 2019, 21(3), 264; https://doi.org/10.3390/e21030264 - 08 Mar 2019
Cited by 4
Abstract
With the ongoing progress of optoelectronic components, laser-based measurement systems allow measurements of position as well as displacement, strain and velocity with unbeatable speed and low measurement uncertainty. The performance limit is often studied for a single measurement setup, but a fundamental comparison [...] Read more.
With the ongoing progress of optoelectronic components, laser-based measurement systems allow measurements of position as well as displacement, strain and velocity with unbeatable speed and low measurement uncertainty. The performance limit is often studied for a single measurement setup, but a fundamental comparison of different measurement principles with respect to the ultimate limit due to quantum shot noise is rare. For this purpose, the Cramér-Rao bound is described as a universal information-theoretic tool to calculate the minimal achievable measurement uncertainty for different measurement techniques, and a review of the respective lower bounds for laser-based measurements of position, displacement, strain and velocity at particles and surfaces is presented. As a result, the calculated Cramér-Rao bounds of different measurement principles have similar forms for each measurand, including an indirect proportionality with respect to the number of photons and, in the case of the position measurement for instance, the wave number squared. Furthermore, an uncertainty principle between the position uncertainty and the wave vector uncertainty was identified, i.e., the measurement uncertainty is minimized by maximizing the wave vector uncertainty. Additionally, physically complementary measurement approaches such as interferometry and time-of-flight position measurements, as well as time-of-flight and Doppler particle velocity measurements, are shown to attain the same fundamental limit. Since most of the laser-based measurements perform similarly with respect to the quantum shot noise, the realized measurement systems behave differently only due to the optoelectronic components available for the concrete measurement task. Full article
(This article belongs to the Special Issue Entropic Uncertainty Relations and Their Applications)

Article
Informed Weighted Non-Negative Matrix Factorization Using αβ-Divergence Applied to Source Apportionment
Entropy 2019, 21(3), 253; https://doi.org/10.3390/e21030253 - 06 Mar 2019
Cited by 4
Abstract
In this paper, we propose informed weighted non-negative matrix factorization (NMF) methods using an αβ-divergence cost function. The available information comes from the exact knowledge/boundedness of some components of the factorization—which are used to structure the NMF parameterization—together with the row [...] Read more.
In this paper, we propose informed weighted non-negative matrix factorization (NMF) methods using an αβ-divergence cost function. The available information comes from the exact knowledge/boundedness of some components of the factorization—which are used to structure the NMF parameterization—together with the row sum-to-one property of one matrix factor. In this contribution, we extend our previous work which partly involved some of these aspects to αβ-divergence cost functions. We derive new update rules which extend the previous ones and take into account the available information. Experiments conducted for several operating conditions on realistic simulated mixtures of particulate matter sources show the relevance of these approaches. Results from a real dataset campaign are also presented and validated with expert knowledge. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
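For orientation, the αβ-divergence family contains the Kullback-Leibler divergence as a special case, for which the classical (uninformed, unweighted) multiplicative NMF updates take the shape below; the paper's informed, weighted rules add known/bounded components and the sum-to-one constraint on top of updates of this kind:

```python
import numpy as np

def kl_nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Classical multiplicative-update NMF for the Kullback-Leibler divergence,
    a special case of the alpha-beta divergence family. Illustrative sketch only:
    the paper's variant is informed (structured parameterization) and weighted."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1) + eps)        # broadcasts over rows of W
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H
```

These updates keep both factors non-negative and monotonically decrease the KL divergence between V and WH.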

Article
Bayesian Compressive Sensing of Sparse Signals with Unknown Clustering Patterns
Entropy 2019, 21(3), 247; https://doi.org/10.3390/e21030247 - 05 Mar 2019
Cited by 10
Abstract
We consider the sparse recovery problem of signals with an unknown clustering pattern in the context of multiple measurement vectors (MMVs) using the compressive sensing (CS) technique. For many MMVs in practice, the solution matrix exhibits some sort of clustered sparsity pattern, or [...] Read more.
We consider the sparse recovery problem of signals with an unknown clustering pattern in the context of multiple measurement vectors (MMVs) using the compressive sensing (CS) technique. For many MMVs in practice, the solution matrix exhibits some sort of clustered sparsity pattern, or clumpy behavior, along each column, as well as joint sparsity across the columns. In this paper, we propose a new sparse Bayesian learning (SBL) method that incorporates a total variation-like prior as a measure of the overall clustering pattern in the solution. We further incorporate a parameter in this prior to account for the emphasis on the amount of clumpiness in the supports of the solution, to improve the recovery performance of sparse signals with an unknown clustering pattern. This parameter does not exist in other existing algorithms and is learned via our hierarchical SBL algorithm. While the proposed algorithm is constructed for MMVs, it can also be applied to single measurement vector (SMV) problems. Simulation results show the effectiveness of our algorithm compared to other algorithms for both SMV and MMV problems. Full article
(This article belongs to the Section Signal and Data Analysis)

Article
Centroid-Based Clustering with αβ-Divergences
Entropy 2019, 21(2), 196; https://doi.org/10.3390/e21020196 - 19 Feb 2019
Cited by 7
Abstract
Centroid-based clustering is a widely used technique within unsupervised learning algorithms in many research fields. The success of any centroid-based clustering relies on the choice of the similarity measure under use. In recent years, most studies focused on including several divergence measures in [...] Read more.
Centroid-based clustering is a widely used technique within unsupervised learning algorithms in many research fields. The success of any centroid-based clustering relies on the choice of the similarity measure under use. In recent years, most studies focused on including several divergence measures in the traditional hard k-means algorithm. In this article, we consider the problem of centroid-based clustering using the family of αβ-divergences, which is governed by two parameters, α and β. We propose a new iterative algorithm, αβ-k-means, giving closed-form solutions for the computation of the sided centroids. The algorithm can be fine-tuned by means of this pair of values, yielding a wide range of the most frequently used divergences. Moreover, it is guaranteed to converge to local minima for a wide range of values of the pair (α, β). Our theoretical contribution has been validated by several experiments performed with synthetic and real data and exploring the (α, β) plane. The numerical results obtained confirm the quality of the algorithm and its suitability to be used in several practical applications. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
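The overall iteration is a Lloyd-type scheme with a pluggable divergence and a matching centroid rule. The sketch below, under stated assumptions, only illustrates the squared Euclidean member of the family, whose optimal centroid is the arithmetic mean; the paper's contribution is the closed-form sided centroids for general (α, β):

```python
import numpy as np

def divergence_kmeans(X, k, d, centroid_fn, n_iter=50, seed=0):
    """Lloyd-style centroid-based clustering with a pluggable dissimilarity
    d(x, c) and centroid rule centroid_fn. Assign each point to its nearest
    centroid under d, then recompute centroids; repeat."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        labels = np.argmin([[d(x, c) for c in C] for x in X], axis=1)
        C = np.array([centroid_fn(X[labels == j]) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels, C

# Squared Euclidean dissimilarity and its optimal centroid (the mean).
sq_euclid = lambda x, c: float(np.sum((x - c) ** 2))
mean_centroid = lambda pts: pts.mean(axis=0)
```

Swapping in another divergence only requires replacing `d` and `centroid_fn` consistently, which is exactly where the sided-centroid formulas enter.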

Article
Complex Dynamics in a Memcapacitor-Based Circuit
Entropy 2019, 21(2), 188; https://doi.org/10.3390/e21020188 - 16 Feb 2019
Cited by 18
Abstract
In this paper, a new memcapacitor model and its corresponding circuit emulator are proposed, based on which, a chaotic oscillator is designed and the system dynamic characteristics are investigated, both analytically and experimentally. Extreme multistability and coexisting attractors are observed in this complex [...] Read more.
In this paper, a new memcapacitor model and its corresponding circuit emulator are proposed, based on which, a chaotic oscillator is designed and the system dynamic characteristics are investigated, both analytically and experimentally. Extreme multistability and coexisting attractors are observed in this complex system. The basins of attraction, multistability, bifurcations, Lyapunov exponents, and initial-condition-triggered similar bifurcation are analyzed. Finally, the memcapacitor-based chaotic oscillator is realized via circuit implementation with experimental results presented. Full article
(This article belongs to the Section Complexity)

Article
Entropy Generation Rate Minimization for Methanol Synthesis via a CO2 Hydrogenation Reactor
Entropy 2019, 21(2), 174; https://doi.org/10.3390/e21020174 - 13 Feb 2019
Cited by 28
Abstract
The methanol synthesis via CO2 hydrogenation (MSCH) reaction is a useful CO2 utilization strategy, and this synthesis path has also been widely applied commercially for many years. In this work the performance of a MSCH reactor with the minimum entropy generation [...] Read more.
The methanol synthesis via CO2 hydrogenation (MSCH) reaction is a useful CO2 utilization strategy, and this synthesis path has been applied commercially for many years. In this work, the performance of an MSCH reactor is optimized with the minimum entropy generation rate (EGR) as the objective function, using finite-time thermodynamics and optimal control theory. The exterior wall temperature (EWR) is taken as the control variable, and the fixed methanol yield and conservation equations are taken as the constraints in the optimization problem. Compared with the reference reactor with a constant EWR, the total EGR of the optimal reactor decreases by 20.5%, and the EGR caused by heat transfer decreases by 68.8%. In the optimal reactor, the total EGR is mainly distributed over the first 30% of the reactor length, and the EGR caused by the chemical reaction accounts for more than 84% of the total. The selectivity of CH3OH can be enhanced by increasing the inlet molar flow rate of CO, and the CO2 conversion rate can be enhanced by removing H2O from the reaction system. The results obtained herein can inform optimal designs of practical tubular MSCH reactors. Full article
(This article belongs to the Special Issue Entropy Generation Minimization II)

Article
The Optimized Multi-Scale Permutation Entropy and Its Application in Compound Fault Diagnosis of Rotating Machinery
Entropy 2019, 21(2), 170; https://doi.org/10.3390/e21020170 - 12 Feb 2019
Cited by 12
Abstract
Multi-scale permutation entropy (MPE) is a statistic indicator to detect nonlinear dynamic changes in time series, which has merits of high calculation efficiency, good robust ability, and independence from prior knowledge, etc. However, the performance of MPE is dependent on the parameter selection [...] Read more.
Multi-scale permutation entropy (MPE) is a statistical indicator for detecting nonlinear dynamic changes in time series, with the merits of high computational efficiency, good robustness, and independence from prior knowledge. However, the performance of MPE depends on the selection of the embedding dimension and time delay parameters. To automate the parameter selection of MPE, a novel parameter optimization strategy is proposed, namely optimized multi-scale permutation entropy (OMPE). In the OMPE method, an improved Cao method is proposed to adaptively select the embedding dimension, while the time delay is determined based on mutual information. To verify the effectiveness of the OMPE method, a simulated signal and two experimental signals are used for validation. Results demonstrate that the proposed OMPE method has a better feature extraction ability compared with existing MPE methods. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
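The underlying MPE computation, coarse-graining followed by permutation entropy at each scale, can be sketched as follows. Parameters are fixed here for illustration; the OMPE contribution is precisely to select the embedding dimension and delay automatically:

```python
import math
import numpy as np
from itertools import permutations

def permutation_entropy(x, m=3, tau=1):
    """Permutation entropy of a 1-D series, normalised to [0, 1]
    (embedding dimension m, time delay tau)."""
    idx = {p: i for i, p in enumerate(permutations(range(m)))}
    counts = np.zeros(math.factorial(m))
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + (m - 1) * tau + 1:tau]
        counts[idx[tuple(np.argsort(window))]] += 1
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(math.factorial(m)))

def multiscale_permutation_entropy(x, m=3, tau=1, max_scale=5):
    """Coarse-grain by non-overlapping averaging at scales 1..max_scale,
    then compute the permutation entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m, tau))
    return out
```

White noise yields values near 1 at every scale, while a monotone or strongly regular signal yields values near 0.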

Article
Entropy Analysis and Neural Network-Based Adaptive Control of a Non-Equilibrium Four-Dimensional Chaotic System with Hidden Attractors
Entropy 2019, 21(2), 156; https://doi.org/10.3390/e21020156 - 07 Feb 2019
Cited by 32
Abstract
Today, four-dimensional chaotic systems are attracting considerable attention because of their special characteristics. This paper presents a non-equilibrium four-dimensional chaotic system with hidden attractors and investigates its dynamical behavior using a bifurcation diagram, as well as three well-known entropy measures, such as approximate [...] Read more.
Today, four-dimensional chaotic systems are attracting considerable attention because of their special characteristics. This paper presents a non-equilibrium four-dimensional chaotic system with hidden attractors and investigates its dynamical behavior using a bifurcation diagram, as well as three well-known entropy measures: approximate entropy, sample entropy, and fuzzy entropy. In order to stabilize the proposed chaotic system, an adaptive radial-basis function neural network (RBF-NN)-based control method is proposed to represent the model of the uncertain nonlinear dynamics of the system. The Lyapunov direct method-based stability analysis of the proposed approach guarantees that all of the closed-loop signals are semi-globally uniformly ultimately bounded. Adaptive learning laws are also proposed to tune the weight coefficients of the RBF-NN. The proposed adaptive control approach requires neither prior information about the uncertain dynamics nor the parameter values of the considered system. Simulation results validate the performance of the proposed control method. Full article

Article
The Radial Propagation of Heat in Strongly Driven Non-Equilibrium Fusion Plasmas
Entropy 2019, 21(2), 148; https://doi.org/10.3390/e21020148 - 05 Feb 2019
Cited by 3
Abstract
Heat transport is studied in strongly heated fusion plasmas, far from thermodynamic equilibrium. The radial propagation of perturbations is studied using a technique based on the transfer entropy. Three different magnetic confinement devices are studied, and similar results are obtained. “Minor transport barriers” [...] Read more.
Heat transport is studied in strongly heated fusion plasmas, far from thermodynamic equilibrium. The radial propagation of perturbations is studied using a technique based on the transfer entropy. Three different magnetic confinement devices are studied, and similar results are obtained. “Minor transport barriers” are detected that tend to form near rational magnetic surfaces, thought to be associated with zonal flows. Occasionally, heat transport “jumps” over these barriers, and this “jumping” behavior seems to increase in intensity when the heating power is raised, suggesting an explanation for the ubiquitous phenomenon of “power degradation” observed in magnetically confined plasmas. Reinterpreting the analysis results in terms of a continuous time random walk, “fast” and “slow” transport channels can be discerned. The cited results can partially be understood in the framework of a resistive Magneto-HydroDynamic model. The picture that emerges shows that plasma self-organization and competing transport mechanisms are essential ingredients for a fuller understanding of heat transport in fusion plasmas. Full article
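The transfer-entropy technique referenced above can be illustrated with a minimal plug-in (histogram) estimator for two scalar series; the plasma analysis itself involves more careful estimation on experimental signals, so this is only a sketch of the quantity being computed:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in (histogram) estimate of the transfer entropy T_{Y -> X} in nats,
    with one step of history:
        T = sum p(x1, x0, y0) * log[ p(x1 | x0, y0) / p(x1 | x0) ].
    Both series are quantised into `bins` equal-width levels first."""
    qx = np.digitize(x, np.linspace(np.min(x), np.max(x), bins + 1)[1:-1])
    qy = np.digitize(y, np.linspace(np.min(y), np.max(y), bins + 1)[1:-1])
    trip = Counter(zip(qx[1:], qx[:-1], qy[:-1]))     # (x1, x0, y0)
    pair_xy = Counter(zip(qx[:-1], qy[:-1]))          # (x0, y0)
    pair_xx = Counter(zip(qx[1:], qx[:-1]))           # (x1, x0)
    single = Counter(qx[:-1])                         # x0
    n = len(qx) - 1
    te = 0.0
    for (x1, x0, y0), c in trip.items():
        p_cond_full = c / pair_xy[(x0, y0)]
        p_cond_self = pair_xx[(x1, x0)] / single[x0]
        te += (c / n) * np.log(p_cond_full / p_cond_self)
    return float(te)
```

A positive value indicates that the past of Y improves the prediction of X beyond X's own past, which is how a radial direction of propagation is inferred from spatially separated signals.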

Article
Are Virtual Particles Less Real?
Entropy 2019, 21(2), 141; https://doi.org/10.3390/e21020141 - 02 Feb 2019
Cited by 9
Abstract
The question of whether virtual quantum particles exist is considered here in light of previous critical analysis and under the assumption that there are particles in the world as described by quantum field theory. The relationship of the classification of particles to quantum-field-theoretic [...] Read more.
The question of whether virtual quantum particles exist is considered here in light of previous critical analysis and under the assumption that there are particles in the world as described by quantum field theory. The relationship of the classification of particles to quantum-field-theoretic calculations and the diagrammatic aids that are often used in them is clarified. It is pointed out that the distinction between virtual particles and others, and therefore judgments regarding their reality, have been made on the basis of these methods rather than on their physical characteristics; as such, this practice has obscured the question of their existence. It is here argued that the most influential arguments against the existence of virtual particles, but not other particles, fail because they either are arguments against the existence of particles in general rather than virtual particles per se, or are dependent on the imposition of classical intuitions on quantum systems, or are simply beside the point. Several reasons are then provided for considering virtual particles real, such as their descriptive, explanatory, and predictive value, and a clearer characterization of virtuality, one in terms of intermediate states, that also applies beyond perturbation theory is provided. It is also pointed out that, in the role of force mediators, they serve to preclude action-at-a-distance between interacting particles. For these reasons, it is concluded that virtual particles are as real as other quantum particles. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))
Article
Entropy Generation Analysis and Thermodynamic Optimization of Jet Impingement Cooling Using Large Eddy Simulation
Entropy 2019, 21(2), 129; https://doi.org/10.3390/e21020129 - 30 Jan 2019
Cited by 11
Abstract
In this work, entropy generation analysis is applied to characterize and optimize a turbulent impinging jet on a heated solid surface. In particular, the influence of plate inclinations and Reynolds numbers on the turbulent heat and fluid flow properties and its impact on [...] Read more.
In this work, entropy generation analysis is applied to characterize and optimize a turbulent impinging jet on a heated solid surface. In particular, the influence of plate inclinations and Reynolds numbers on the turbulent heat and fluid flow properties, and its impact on the thermodynamic performance of such flow arrangements, is numerically investigated. For this purpose, novel model equations are derived in the framework of Large Eddy Simulation (LES) that allow the calculation of local entropy generation rates in a post-processing phase, including the effect of unresolved subgrid-scale irreversibilities. From this LES-based study, distinctive features of the heat and flow dynamics of the impinging fluid are detected and optimal operating designs for jet impingement cooling are identified. It turns out that (1) the location of the stagnation point and that of the maximal Nusselt number differ in the case of plate inclination; (2) the impinged wall acts as the predominant source of irreversibility; and (3) a flow arrangement with a jet impinging normally on the heated surface allows the most efficient use of energy, which is associated with the lowest exergy loss. Furthermore, it is found that increasing the Reynolds number intensifies the heat transfer and upgrades the second-law efficiency of such thermal systems; thereby, the thermal efficiency enhancement can outweigh the frictional exergy loss. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)

Article
PT Symmetry, Non-Gaussian Path Integrals, and the Quantum Black–Scholes Equation
Entropy 2019, 21(2), 105; https://doi.org/10.3390/e21020105 - 23 Jan 2019
Cited by 4
Abstract
The Accardi–Boukas quantum Black–Scholes framework, provides a means by which one can apply the Hudson–Parthasarathy quantum stochastic calculus to problems in finance. Solutions to these equations can be modelled using nonlocal diffusion processes, via a Kramers–Moyal expansion, and this provides useful tools to [...] Read more.
The Accardi–Boukas quantum Black–Scholes framework provides a means by which one can apply the Hudson–Parthasarathy quantum stochastic calculus to problems in finance. Solutions to these equations can be modelled using nonlocal diffusion processes, via a Kramers–Moyal expansion, and this provides useful tools to understand their behaviour. In this paper we develop further links between quantum stochastic processes and nonlocal diffusions by inverting the question, and showing how certain nonlocal diffusions can be written as quantum stochastic processes. We then go on to show how one can use the path integral formalism and PT symmetric quantum mechanics to build a non-Gaussian kernel function for the Accardi–Boukas quantum Black–Scholes equation. Behaviours observed in the real market are a natural model output, rather than something that must be deliberately included. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)

Article
Quantifying Data Dependencies with Rényi Mutual Information and Minimum Spanning Trees
Entropy 2019, 21(2), 100; https://doi.org/10.3390/e21020100 - 22 Jan 2019
Cited by 2
Abstract
In this study, we present a novel method for quantifying dependencies in multivariate datasets, based on estimating the Rényi mutual information by minimum spanning trees (MSTs). The extent to which random variables are dependent is an important question, e.g., for uncertainty quantification and [...] Read more.
In this study, we present a novel method for quantifying dependencies in multivariate datasets, based on estimating the Rényi mutual information by minimum spanning trees (MSTs). The extent to which random variables are dependent is an important question, e.g., for uncertainty quantification and sensitivity analysis. The latter is closely related to the question of how strongly the output of, e.g., a computer simulation depends on the individual random input variables. To estimate the Rényi mutual information from data, we use a method due to Hero et al. that relies on computing minimum spanning trees of the data and uses the length of the MST in an estimator for the entropy. To reduce the computational cost of constructing the exact MST for large datasets, we explore methods to compute approximations to the exact MST, and find the multilevel approach introduced recently by Zhong et al. (2015) to be the most accurate. Because the MST computation does not require knowledge (or estimation) of the distributions, our methodology is well-suited for situations where only data are available. Furthermore, we show that, in the case where only the ranking of several dependencies is required rather than their exact values, it is not necessary to compute the Rényi divergence, but only an estimator derived from it. The main contributions of this paper are the introduction of this quantifier of dependency, as well as the novel combination of using approximate methods for MSTs with estimating the Rényi mutual information via MSTs. We applied our proposed method to an artificial test case based on the Ishigami function, as well as to a real-world test case involving an El Niño dataset. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
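A minimal version of the Hero-style MST entropy estimator can be sketched with an exact MST (part of the paper's point is to replace this exact computation with cheaper approximations). The estimator's additive constant is dropped here, which suffices when only rankings of dependencies are compared:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_renyi_entropy(X, gamma=1.0):
    """Hero-style MST estimate of the Renyi entropy of order
    alpha = (d - gamma) / d for n samples X in d dimensions:
        H_alpha ~ log(L / n**alpha) / (1 - alpha) + const,
    where L is the total MST length with edge weights |x_i - x_j|**gamma.
    The additive constant is dropped in this sketch."""
    n, d = X.shape
    alpha = (d - gamma) / d
    D = squareform(pdist(X)) ** gamma       # pairwise edge weights
    L = minimum_spanning_tree(D).sum()      # exact MST; the paper studies cheaper approximations
    return float(np.log(L / n**alpha) / (1 - alpha))
```

Because the MST is built directly on the samples, no density estimation is needed, which is the property the abstract emphasises.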

Article
Parallel Lives: A Local-Realistic Interpretation of “Nonlocal” Boxes
Entropy 2019, 21(1), 87; https://doi.org/10.3390/e21010087 - 18 Jan 2019
Cited by 15
Abstract
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk the myth that equates local realism with local hidden variables in [...] Read more.
We carry out a thought experiment in an imaginary world. Our world is both local and realistic, yet it violates a Bell inequality more than does quantum theory. This serves to debunk, in the simplest possible manner, the myth that equates local realism with local hidden variables. Along the way, we reinterpret the celebrated 1935 argument of Einstein, Podolsky and Rosen, and come to the conclusion that they were right in questioning the completeness of the Copenhagen version of quantum theory, provided one believes in a local-realistic universe. Throughout our journey, we strive to explain our views from first principles, without expecting mathematical sophistication or specialized prior knowledge from the reader. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
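The "nonlocal boxes" of the title are Popescu-Rohrlich boxes, which reach the algebraic maximum 4 of the CHSH expression, above both the local-hidden-variable bound 2 and the quantum (Tsirelson) bound 2*sqrt(2) ~ 2.83. A quick numerical check:

```python
import itertools

def chsh(prob):
    """CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1), where E(x, y) is the
    correlator of binary outcomes a, b for measurement settings x, y and
    prob(a, b, x, y) gives the conditional distribution p(a, b | x, y)."""
    def E(x, y):
        return sum((-1) ** (a ^ b) * prob(a, b, x, y)
                   for a, b in itertools.product((0, 1), repeat=2))
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# Popescu-Rohrlich box: outcomes satisfy a XOR b = x AND y, locally uniform.
pr_box = lambda a, b, x, y: 0.5 if (a ^ b) == (x & y) else 0.0
# A deterministic local strategy: both parties always output 0.
always_zero = lambda a, b, x, y: 1.0 if (a, b) == (0, 0) else 0.0

print(chsh(always_zero))  # 2.0 (local-hidden-variable bound)
print(chsh(pr_box))       # 4.0 (algebraic maximum, above Tsirelson's bound)
```

The thought experiment's claim "violates a Bell inequality more than does quantum theory" corresponds to reaching S = 4 rather than 2*sqrt(2).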

Article
Configurational Entropy in Multicomponent Alloys: Matrix Formulation from Ab Initio Based Hamiltonian and Application to the FCC Cr-Fe-Mn-Ni System
Entropy 2019, 21(1), 68; https://doi.org/10.3390/e21010068 - 15 Jan 2019
Cited by 12
Abstract
Configuration entropy is believed to stabilize disordered solid solution phases in multicomponent systems at elevated temperatures over intermetallic compounds by lowering the Gibbs free energy. Traditionally, the increment of configuration entropy with temperature was computed by time-consuming thermodynamic integration methods. In this work, [...] Read more.
Configuration entropy is believed to stabilize disordered solid solution phases in multicomponent systems at elevated temperatures over intermetallic compounds by lowering the Gibbs free energy. Traditionally, the increment of configuration entropy with temperature was computed by time-consuming thermodynamic integration methods. In this work, a new formalism based on a hybrid combination of the Cluster Expansion (CE) Hamiltonian and Monte Carlo simulations is developed to predict the configuration entropy as a function of temperature from multi-body cluster probabilities in a multi-component system with arbitrary average composition. The multi-body probabilities are worked out by explicit inversion and direct product of a matrix formulation within orthonormal sets of point functions in the clusters obtained from symmetry-independent correlation functions. The matrix quantities are determined from semi-canonical Monte Carlo simulations with Effective Cluster Interactions (ECIs) derived from Density Functional Theory (DFT) calculations. The formalism is applied to analyze the 4-body cluster probabilities for the quaternary system Cr-Fe-Mn-Ni as a function of temperature and alloy concentration. It is shown that, for two specific compositions (Cr25Fe25Mn25Ni25 and Cr18Fe27Mn27Ni28), the high values of the probabilities for Cr-Fe-Fe-Fe and Mn-Mn-Ni-Ni are strongly correlated with the presence of the ordered phases L12-CrFe3 and L10-MnNi, respectively. These results are in excellent agreement with predictions of these ground-state structures by ab initio calculations. The general formalism is used to investigate the configuration entropy as a function of temperature for 285 different alloy compositions. It is found that our matrix formulation of cluster probabilities provides an efficient tool to compute the configuration entropy in multi-component alloys in comparison with the result obtained by the thermodynamic integration method. At high temperatures, it is shown that many-body cluster correlations still play an important role in understanding the configuration entropy before reaching the solid-solution limit of high-entropy alloys (HEAs). Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
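For scale, the ideal (point-probability) limit that the cluster-probability formalism corrects with many-body correlations is S/k_B = -sum_i c_i ln c_i, which equals ln 4 for the equiatomic quaternary alloy:

```python
import numpy as np

def ideal_config_entropy(concentrations):
    """Ideal (point-probability) configurational entropy per atom in units of
    k_B: S/k_B = -sum_i c_i ln c_i. This is the fully disordered limit; the
    paper's formalism lowers it via many-body cluster correlations."""
    c = np.asarray(concentrations, dtype=float)
    c = c[c > 0]                      # 0 * ln 0 -> 0 by convention
    return float(-(c * np.log(c)).sum())

print(ideal_config_entropy([0.25, 0.25, 0.25, 0.25]))  # ln 4 ~ 1.3863
print(ideal_config_entropy([0.18, 0.27, 0.27, 0.28]))  # slightly below ln 4
```

Any off-equiatomic composition, such as the second example above, lies strictly below the equiatomic maximum ln N.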

Article
The Effect of Cognitive Resource Competition Due to Dual-Tasking on the Irregularity and Control of Postural Movement Components
Entropy 2019, 21(1), 70; https://doi.org/10.3390/e21010070 - 15 Jan 2019
Cited by 8
Abstract
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to study the control of movement components (PM) and n-shapes have been found [...] Read more.
Postural control research suggests a non-linear, n-shaped relationship between dual-tasking and postural stability. Nevertheless, the extent of this relationship remains unclear. Since kinematic principal component analysis has offered novel approaches to study the control of movement components (PM) and n-shapes have been found in measures of sway irregularity, we hypothesized (H1) that the irregularity of PMs and their respective control, and the control tightness will display the n-shape. Furthermore, according to the minimal intervention principle (H2) different PMs should be affected differently. Finally, (H3) we expected stronger dual-tasking effects in the older population, due to limited cognitive resources. We measured the kinematics of forty-one healthy volunteers (23 aged 26 ± 3; 18 aged 59 ± 4) performing 80 s tandem stances in five conditions (single-task and auditory n-back task; n = 1–4), and computed sample entropies on PM time-series and two novel measures of control tightness. In the PM most critical for stability, the control tightness decreased steadily, and in contrast to H3, decreased further for the younger group. Nevertheless, we found n-shapes in most variables with differing magnitudes, supporting H1 and H2. These results suggest that the control tightness might deteriorate steadily with increased cognitive load in critical movements despite the otherwise eminent n-shaped relationship. Full article
(This article belongs to the Section Complexity)
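Sample entropy, the irregularity measure computed on the PM time-series above, can be sketched as follows; this is the standard SampEn(m, r) with Chebyshev distance, and the parameter choices here are illustrative rather than the study's:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) with r = r_factor * std(x): the negative log of the
    conditional probability that templates matching for m points
    (Chebyshev distance <= r, self-matches excluded) also match for m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def match_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.count_nonzero(d <= r))
        return c
    B, A = match_count(m), match_count(m + 1)
    return float(-np.log(A / B)) if A > 0 and B > 0 else float("inf")
```

Higher values indicate a more irregular series, which is how the n-shaped dual-tasking effects are quantified per movement component.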

Article
Double Entropy Joint Distribution Function and Its Application in Calculation of Design Wave Height
Entropy 2019, 21(1), 64; https://doi.org/10.3390/e21010064 - 14 Jan 2019
Cited by 23
Abstract
Wave height and wave period are important oceanic environmental factors that are used to describe the randomness of a wave. Within the field of ocean engineering, the calculation of design wave height is of great significance. In this paper, a periodic maximum entropy [...] Read more.
Wave height and wave period are important oceanic environmental factors that are used to describe the randomness of a wave. Within the field of ocean engineering, the calculation of the design wave height is of great significance. In this paper, a periodic maximum entropy distribution function with four undetermined parameters is derived by means of coordinate transformation and by solving conditional variational problems. A double entropy joint distribution function of wave height and wave period is also derived, constructed from the maximum entropy wave height function and the maximum entropy period function with the help of a Copula function. The double entropy joint distribution function of wave height and wave period is limited neither by weak nonlinearity nor by the assumptions of a normal stochastic process and a narrow spectrum. Besides, it can fit the observed data more closely and is more widely applicable to nonlinear waves in various cases, owing to the many undetermined parameters it contains. The engineering cases show that the recurrence level derived from the double entropy joint distribution function is higher than that from the extreme value distribution using the single variables of wave height or wave period, and also higher than that from the traditional joint distribution function of wave height and wave period. Full article
Article
Unfolding the Complexity of the Global Value Chain: Strength and Entropy in the Single-Layer, Multiplex, and Multi-Layer International Trade Networks
Entropy 2018, 20(12), 909; https://doi.org/10.3390/e20120909 - 28 Nov 2018
Cited by 13
Abstract
The worldwide trade network has been widely studied through different data sets and network representations with a view to better understanding interactions among countries and products. Here we investigate international trade through the lenses of the single-layer, multiplex, and multi-layer networks. We discuss [...] Read more.
The worldwide trade network has been widely studied through different data sets and network representations with a view to better understanding interactions among countries and products. Here we investigate international trade through the lenses of the single-layer, multiplex, and multi-layer networks. We discuss differences among the three network frameworks in terms of their relative advantages in capturing salient topological features of trade. We draw on the World Input-Output Database to build the three networks. We then uncover sources of heterogeneity in the way strength is allocated among countries and transactions by computing the strength distribution and entropy in each network. Additionally, we trace how entropy evolved, and show how the observed peaks can be associated with the onset of the global economic downturn. Findings suggest how more complex representations of trade, such as the multi-layer network, enable us to disambiguate the distinct roles of intra- and cross-industry transactions in driving the evolution of entropy at a more aggregate level. We discuss our results and the implications of our comparative analysis of networks for research on international trade and other empirical domains across the natural and social sciences. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
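The strength-entropy computation described above reduces, for a single-layer network, to a few lines. A minimal sketch with a hypothetical three-country weight matrix in place of the World Input-Output Database:

```python
import numpy as np

def strength_entropy(W):
    """Shannon entropy (nats) of how total strength is spread across
    the nodes of a weighted, directed network with weight matrix W."""
    s = W.sum(axis=1) + W.sum(axis=0)   # out-strength plus in-strength
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Hypothetical 3-country trade matrix: entry (i, j) is the flow i -> j
W = np.array([[0.0, 5.0, 1.0],
              [4.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
H = strength_entropy(W)
```

Entropy peaks at log 3 when strength is spread evenly across the three countries, and falls as trade concentrates on few nodes.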
Article
Entropic Steering Criteria: Applications to Bipartite and Tripartite Systems
Entropy 2018, 20(10), 763; https://doi.org/10.3390/e20100763 - 05 Oct 2018
Cited by 11
Abstract
The effect of quantum steering describes a possible action at a distance via local measurements. Whereas many attempts on characterizing steerability have been pursued, answering the question as to whether a given state is steerable or not remains a difficult task. Here, we [...] Read more.
The effect of quantum steering describes a possible action at a distance via local measurements. Whereas many attempts at characterizing steerability have been pursued, answering the question as to whether a given state is steerable or not remains a difficult task. Here, we investigate the applicability of a recently proposed method for building steering criteria from generalized entropic uncertainty relations. This method works for any entropy that satisfies the properties of (i) (pseudo-)additivity for independent distributions; (ii) a state-independent entropic uncertainty relation (EUR); and (iii) joint convexity of a corresponding relative entropy. Our study extends the former analysis to Tsallis and Rényi entropies on bipartite and tripartite systems. As examples, we investigate the steerability of the three-qubit GHZ and W states. Full article
(This article belongs to the Special Issue Quantum Nonlocality)
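The two entropy families the criteria are built from are straightforward to compute for a discrete distribution. A small sketch of the entropies only, not the steering criteria themselves; the example distribution is arbitrary:

```python
import numpy as np

def renyi(p, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1), in nats."""
    p = np.asarray(p, dtype=float)
    return float(np.log((p ** alpha).sum()) / (1.0 - alpha))

def tsallis(p, q):
    """Tsallis entropy of order q (q != 1); pseudo-additive over
    independent distributions and recovers Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

p = np.array([0.5, 0.25, 0.25])
```

The pseudo-additivity property (i) reads S_q(A,B) = S_q(A) + S_q(B) + (1 − q) S_q(A) S_q(B) for independent A and B, which the `tsallis` function satisfies exactly.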
Article
Symmetry, Outer Bounds, and Code Constructions: A Computer-Aided Investigation on the Fundamental Limits of Caching
Entropy 2018, 20(8), 603; https://doi.org/10.3390/e20080603 - 13 Aug 2018
Cited by 24
Abstract
We illustrate how computer-aided methods, which differ significantly from the conventional analytical approach usually seen in the information theory literature, can be used to investigate the fundamental limits of caching systems. The linear programming (LP) outer bound of the entropy space [...] Read more.
We illustrate how computer-aided methods, which differ significantly from the conventional analytical approach usually seen in the information theory literature, can be used to investigate the fundamental limits of caching systems. The linear programming (LP) outer bound of the entropy space serves as the starting point of this approach; however, our effort goes significantly beyond using it to prove information inequalities. We first identify and formalize the symmetry structure in the problem, which enables us to show the existence of optimal symmetric solutions. A symmetry-reduced linear program is then used to identify the boundary of the memory-transmission-rate tradeoff for several small cases, for which we obtain a set of tight outer bounds. General hypotheses on the optimal tradeoff region are formed from these computed data, which are then analytically proven. This leads to a complete characterization of the optimal tradeoff for systems with only two users, and a partial characterization for systems with only two files. Next, we show that by carefully analyzing the joint entropy structure of the outer bounds for certain cases, a novel code construction can be reverse-engineered, which eventually leads to a general class of codes. Finally, we show that outer bounds can be computed by strategically relaxing the LP in different ways, which can be used to explore the problem computationally. This allows us firstly to deduce generic characteristics of the converse proof, and secondly to compute outer bounds for larger problem cases, despite the seemingly prohibitive computation scale. Full article
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)
Article
Conditional Gaussian Systems for Multiscale Nonlinear Stochastic Systems: Prediction, State Estimation and Uncertainty Quantification
Entropy 2018, 20(7), 509; https://doi.org/10.3390/e20070509 - 04 Jul 2018
Cited by 16
Abstract
A conditional Gaussian framework for understanding and predicting complex multiscale nonlinear stochastic systems is developed. Despite the conditional Gaussianity, such systems are nevertheless highly nonlinear and are able to capture the non-Gaussian features of nature. The special structure of the system allows closed [...] Read more.
A conditional Gaussian framework for understanding and predicting complex multiscale nonlinear stochastic systems is developed. Despite the conditional Gaussianity, such systems are nevertheless highly nonlinear and are able to capture the non-Gaussian features of nature. The special structure of the system allows closed analytical formulae for solving the conditional statistics and is thus computationally efficient. A rich gallery of examples of conditional Gaussian systems is illustrated here, including data-driven physics-constrained nonlinear stochastic models, stochastically coupled reaction–diffusion models in neuroscience and ecology, and large-scale dynamical models in turbulence, fluids and geophysical flows. Making use of the conditional Gaussian structure, efficient statistically accurate algorithms involving a novel hybrid strategy for different subspaces, a judicious block decomposition and statistical symmetry are developed for solving the Fokker–Planck equation in large dimensions. The conditional Gaussian framework is also applied to develop extremely cheap multiscale data assimilation schemes, such as the stochastic superparameterization, which use particle filters to capture the non-Gaussian statistics on the large-scale part, whose dimension is small, whereas the statistics of the small-scale part are conditionally Gaussian given the large-scale part. Other topics of the conditional Gaussian systems studied here include designing new parameter estimation schemes and understanding model errors. Full article
(This article belongs to the Special Issue Information Theory and Stochastics for Multiscale Nonlinear Systems)
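A hypothetical two-variable member of the conditional Gaussian class can be simulated directly: both equations below are linear in v, so v is conditionally Gaussian given the trajectory of u, even though the coupled dynamics are nonlinear. This is a toy sketch with arbitrary parameters, not one of the models from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 5000
d_u, d_v, gamma, s_u, s_v = 1.0, 1.0, 1.0, 0.5, 0.5
u, v = np.zeros(n), np.zeros(n)
for k in range(n - 1):
    # Euler-Maruyama step; the u-equation is nonlinear overall but linear in v
    u[k + 1] = u[k] + (-d_u * u[k] + gamma * u[k] * v[k]) * dt \
        + s_u * np.sqrt(dt) * rng.standard_normal()
    # The v-equation is linear in v, with a forcing term nonlinear in u
    v[k + 1] = v[k] + (-d_v * v[k] - gamma * u[k] ** 2) * dt \
        + s_v * np.sqrt(dt) * rng.standard_normal()
```

The energy-conserving form of the coupling (+γuv against −γu²) keeps the deterministic part of the dynamics dissipative, so the trajectories stay bounded.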
Article
The Gibbs Paradox: Early History and Solutions
Entropy 2018, 20(6), 443; https://doi.org/10.3390/e20060443 - 06 Jun 2018
Cited by 8
Abstract
This article is a detailed history of the Gibbs paradox, with philosophical morals. It purports to explain the origins of the paradox, to describe and criticize solutions of the paradox from the early times to the present, to use the history of statistical [...] Read more.
This article is a detailed history of the Gibbs paradox, with philosophical morals. It purports to explain the origins of the paradox, to describe and criticize solutions of the paradox from the early times to the present, to use the history of statistical mechanics as a reservoir of ideas for clarifying foundations and removing prejudices, and to relate the paradox to broad misunderstandings of the nature of physical theory. Full article
(This article belongs to the Special Issue Gibbs Paradox 2018)
Article
Criterion of Existence of Power-Law Memory for Economic Processes
Entropy 2018, 20(6), 414; https://doi.org/10.3390/e20060414 - 29 May 2018
Cited by 21
Abstract
In this paper, we propose criteria for the existence of power-law type (PLT) memory in economic processes. We give a criterion for the existence of power-law long-range dependence in time by using the analogy with the concept of the long-range alpha-interaction. We [...] Read more.
In this paper, we propose criteria for the existence of power-law type (PLT) memory in economic processes. We give a criterion for the existence of power-law long-range dependence in time by using the analogy with the concept of the long-range alpha-interaction. We also suggest a criterion for the existence of PLT memory in the frequency domain by using the concept of non-integer dimensions. For an economic process in which it is known that an endogenous variable depends on an exogenous variable, the proposed criteria make it possible to identify the presence of PLT memory. The suggested criteria are illustrated in various examples. The use of the proposed criteria allows applying fractional calculus to construct dynamic models of economic processes. These criteria can also be used to identify the linear integro-differential operators that can be considered as fractional derivatives and integrals of non-integer orders. Full article
(This article belongs to the Special Issue Power Law Behaviour in Complex Systems)
Article
Quantum Trajectories: Real or Surreal?
Entropy 2018, 20(5), 353; https://doi.org/10.3390/e20050353 - 08 May 2018
Cited by 8
Abstract
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context [...] Read more.
The claim of Kocsis et al. to have experimentally determined “photon trajectories” calls for a re-examination of the meaning of “quantum trajectories”. We will review the arguments that have been assumed to have established that a trajectory has no meaning in the context of quantum mechanics. We show that the conclusion that the Bohm trajectories should be called “surreal” because they are at “variance with the actual observed track” of a particle is wrong as it is based on a false argument. We also present the results of a numerical investigation of a double Stern-Gerlach experiment which shows clearly the role of the spin within the Bohm formalism and discuss situations where the appearance of the quantum potential is open to direct experimental exploration. Full article
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Article
Password Security as a Game of Entropies
Entropy 2018, 20(5), 312; https://doi.org/10.3390/e20050312 - 25 Apr 2018
Cited by 10
Abstract
We consider a formal model of password security, in which two actors engage in a competition of optimal password choice against potential attacks. The proposed model is a multi-objective two-person game. Player 1 seeks an optimal password choice policy, optimizing matters of memorability [...] Read more.
We consider a formal model of password security, in which two actors engage in a competition of optimal password choice against potential attacks. The proposed model is a multi-objective two-person game. Player 1 seeks an optimal password choice policy, optimizing matters of memorability of the password (measured by Shannon entropy), opposed to the difficulty for player 2 of guessing it (measured by min-entropy), and the cognitive efforts of player 1 tied to changing the password (measured by relative entropy, i.e., Kullback–Leibler divergence). The model and contribution are thus twofold: (i) it applies multi-objective game theory to the password security problem; and (ii) it introduces different concepts of entropy to measure the quality of a password choice process under different angles (and not a given password itself, since this cannot be quality-assessed in terms of entropy). We illustrate our approach with an example from everyday life, namely we analyze the password choices of employees. Full article
(This article belongs to the Special Issue Information Theory in Game Theory)
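The three entropies play distinct roles in the game, which a toy computation makes concrete. The distributions below over four candidate password-choice policies are hypothetical:

```python
import numpy as np

def shannon(p):      # memorability proxy for player 1
    return float(-(p * np.log2(p)).sum())

def min_entropy(p):  # guessing difficulty faced by player 2
    return float(-np.log2(p.max()))

def kl(p, q):        # cognitive cost of moving from policy q to policy p
    return float((p * np.log2(p / q)).sum())

# Hypothetical distributions over four password-choice policies
p_old = np.array([0.4, 0.3, 0.2, 0.1])      # current choice behaviour
p_new = np.array([0.25, 0.25, 0.25, 0.25])  # behaviour the defender moves to

memorability = shannon(p_old)
guessability = min_entropy(p_old)
switch_cost  = kl(p_new, p_old)
```

Note that min-entropy never exceeds Shannon entropy, which is why the attacker's and defender's objectives cannot be measured by a single number.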
Article
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
Entropy 2018, 20(4), 297; https://doi.org/10.3390/e20040297 - 18 Apr 2018
Cited by 29
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The [...] Read more.
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
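The decomposition of pointwise mutual information into its two unsigned entropic components is compact in code. A sketch on a hypothetical binary joint distribution (the lattice construction of the paper is not reproduced here):

```python
import numpy as np

# Hypothetical joint distribution p(s, t) over a binary source and target
p_st = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_s = p_st.sum(axis=1)
p_t = p_st.sum(axis=0)

def pointwise(s, t):
    """Split pointwise mutual information i(s;t) = h(t) - h(t|s) into its
    two unsigned parts: specificity h(t) and ambiguity h(t|s)."""
    specificity = -np.log2(p_t[t])             # -log p(t)
    ambiguity = -np.log2(p_st[s, t] / p_s[s])  # -log p(t|s)
    return specificity, ambiguity, specificity - ambiguity

spec, amb, pmi = pointwise(0, 0)
```

Both components are non-negative even when their difference, the pointwise mutual information, is negative, which is what makes the separate lattices tractable.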
Article
Polynomial-Time Algorithm for Learning Optimal BFS-Consistent Dynamic Bayesian Networks
Entropy 2018, 20(4), 274; https://doi.org/10.3390/e20040274 - 12 Apr 2018
Cited by 3
Abstract
Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that [...] Read more.
Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm for learning optimal DBNs consistent with a breadth-first search (BFS) order, named bcDBN. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach expands the search space of the state-of-the-art tDBN algorithm exponentially in the number of variables. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well throughout different experiments. Full article
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
Article
Distance Entropy Cartography Characterises Centrality in Complex Networks
Entropy 2018, 20(4), 268; https://doi.org/10.3390/e20040268 - 11 Apr 2018
Cited by 19
Abstract
We introduce distance entropy as a measure of homogeneity in the distribution of path lengths between a given node and its neighbours in a complex network. Distance entropy defines a new centrality measure whose properties are investigated for a variety of synthetic network [...] Read more.
We introduce distance entropy as a measure of homogeneity in the distribution of path lengths between a given node and its neighbours in a complex network. Distance entropy defines a new centrality measure whose properties are investigated for a variety of synthetic network models. By coupling distance entropy information with closeness centrality, we introduce a network cartography which allows one to reduce the degeneracy of ranking based on closeness alone. We apply this methodology to the empirical multiplex lexical network encoding the linguistic relationships known to English speaking toddlers. We show that the distance entropy cartography better predicts how children learn words compared to closeness centrality. Our results highlight the importance of distance entropy for gaining insights from distance patterns in complex networks. Full article
(This article belongs to the Special Issue Graph and Network Entropies)
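The measure itself is a short computation: BFS distances from a node, then the Shannon entropy of their distribution. A stdlib-only sketch on a toy star graph rather than the multiplex lexical network of the paper:

```python
import math
from collections import deque

def distance_entropy(adj, node):
    """Shannon entropy (nats) of the shortest-path-length distribution
    from `node` to every other node of a connected graph."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    lengths = [d for v, d in dist.items() if v != node]
    counts = {}
    for d in lengths:
        counts[d] = counts.get(d, 0) + 1
    n = len(lengths)
    return -sum(c / n * math.log(c / n) for c in counts.values())

# Star graph: the hub reaches every node in one hop, so its entropy is zero,
# while a leaf sees a mix of distances and gets positive entropy.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

Zero entropy marks a maximally homogeneous view of the network, which is how the cartography separates nodes that closeness centrality alone would rank equally.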
Article
Transductive Feature Selection Using Clustering-Based Sample Entropy for Temperature Prediction in Weather Forecasting
Entropy 2018, 20(4), 264; https://doi.org/10.3390/e20040264 - 10 Apr 2018
Cited by 10
Abstract
Entropy measures have been a major interest of researchers to measure the information content of a dynamical system. One of the well-known methodologies is sample entropy, which is a model-free approach and can be deployed to measure the information transfer in time series. [...] Read more.
Entropy measures have been a major interest of researchers to measure the information content of a dynamical system. One of the well-known methodologies is sample entropy, which is a model-free approach and can be deployed to measure the information transfer in time series. Sample entropy is based on the conditional entropy where a major concern is the number of past delays in the conditional term. In this study, we deploy a lag-specific conditional entropy to identify the informative past values. Moreover, considering the seasonality structure of data, we propose a clustering-based sample entropy to exploit the temporal information. Clustering-based sample entropy is based on the sample entropy definition while considering the clustering information of the training data and the membership of the test point to the clusters. In this study, we utilize the proposed method for transductive feature selection in black-box weather forecasting and conduct the experiments on minimum and maximum temperature prediction in Brussels for 1–6 days ahead. The results reveal that considering the local structure of the data can improve the feature selection performance. In addition, despite the large reduction in the number of features, the performance is competitive with the case of using all features. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Article
Leggett-Garg Inequalities for Quantum Fluctuating Work
Entropy 2018, 20(3), 200; https://doi.org/10.3390/e20030200 - 16 Mar 2018
Cited by 7
Abstract
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a [...] Read more.
The Leggett-Garg inequalities serve to test whether or not quantum correlations in time can be explained within a classical macrorealistic framework. We apply this test to thermodynamics and derive a set of Leggett-Garg inequalities for the statistics of fluctuating work done on a quantum system unitarily driven in time. It is shown that these inequalities can be violated in a driven two-level system, thereby demonstrating that there exists no general macrorealistic description of quantum work. These violations are shown to emerge within the standard Two-Projective-Measurement scheme as well as for alternative definitions of fluctuating work that are based on weak measurement. Our results elucidate the influences of temporal correlations on work extraction in the quantum regime and highlight a key difference between quantum and classical thermodynamics. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)
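The Two-Projective-Measurement (TPM) work statistics referred to above are easy to generate for a single qubit. The drive below is an arbitrary, hypothetical unitary, not one from the paper; as a consistency check, the TPM distribution satisfies the Jarzynski equality ⟨e^(−βW)⟩ = Z_f/Z_i exactly.

```python
import numpy as np
from scipy.linalg import expm

beta = 1.0
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H0 = 0.5 * sz                        # Hamiltonian before the drive
H1 = 0.5 * sz + 0.4 * sx             # Hamiltonian after the drive
U = expm(-1j * (H0 + 0.7 * sx))      # arbitrary unitary generated by the drive

e0, v0 = np.linalg.eigh(H0)
e1, v1 = np.linalg.eigh(H1)
p0 = np.exp(-beta * e0)
p0 /= p0.sum()                       # thermal occupations of H0

# TPM: project onto H0 eigenstate n, evolve, project onto H1 eigenstate m
work, prob = [], []
for n in range(2):
    for m in range(2):
        amp = v1[:, m].conj() @ U @ v0[:, n]
        work.append(e1[m] - e0[n])
        prob.append(p0[n] * abs(amp) ** 2)
```

The Leggett-Garg analysis asks whether such work statistics admit a macrorealistic description across measurement times, which this single-drive distribution alone does not probe.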
Article
Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
Entropy 2018, 20(3), 173; https://doi.org/10.3390/e20030173 - 06 Mar 2018
Cited by 15
Abstract
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. [...] Read more.
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that, if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)
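The exhaustive search the paper's algorithms aim to avoid can be written directly for small systems. As a stand-in for Φ, the sketch below uses the mutual information across the cut for zero-mean Gaussian variables, a hypothetical toy measure rather than one of the Φ versions studied in the paper:

```python
import itertools
import numpy as np

def cut_mi(cov, part):
    """Mutual information between `part` and its complement for
    zero-mean Gaussian variables with covariance `cov` (in nats)."""
    idx = list(part)
    rest = [i for i in range(cov.shape[0]) if i not in part]
    det = np.linalg.det
    return 0.5 * np.log(det(cov[np.ix_(idx, idx)]) * det(cov[np.ix_(rest, rest)])
                        / det(cov))

def minimum_information_partition(cov):
    """Brute-force MIP: the bipartition minimising information loss."""
    size = cov.shape[0]
    best_mi, best_part = np.inf, None
    for k in range(1, size // 2 + 1):
        for part in itertools.combinations(range(size), k):
            mi = cut_mi(cov, part)
            if mi < best_mi:
                best_mi, best_part = mi, part
    return best_mi, best_part

# Two tightly coupled pairs (0,1) and (2,3) with weak coupling across pairs
cov = np.array([[1.0, 0.8, 0.1, 0.1],
                [0.8, 1.0, 0.1, 0.1],
                [0.1, 0.1, 1.0, 0.8],
                [0.1, 0.1, 0.8, 1.0]])
mi, part = minimum_information_partition(cov)  # cuts between the pairs
```

The number of bipartitions grows exponentially with system size, which is exactly the cost that the submodularity-based search avoids.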
Article
A Variational Formulation of Nonequilibrium Thermodynamics for Discrete Open Systems with Mass and Heat Transfer
Entropy 2018, 20(3), 163; https://doi.org/10.3390/e20030163 - 04 Mar 2018
Cited by 12
Abstract
We propose a variational formulation for the nonequilibrium thermodynamics of discrete open systems, i.e., discrete systems which can exchange mass and heat with the exterior. Our approach is based on a general variational formulation for systems with time-dependent nonlinear nonholonomic constraints and time-dependent [...] Read more.
We propose a variational formulation for the nonequilibrium thermodynamics of discrete open systems, i.e., discrete systems which can exchange mass and heat with the exterior. Our approach is based on a general variational formulation for systems with time-dependent nonlinear nonholonomic constraints and time-dependent Lagrangian. For discrete open systems, the time-dependent nonlinear constraint is associated with the rate of internal entropy production of the system. We show that this constraint on the solution curve systematically yields a constraint on the variations to be used in the action functional. The proposed variational formulation is intrinsic and provides the same structure for a wide class of discrete open systems. We illustrate our theory by presenting examples of open systems experiencing mechanical interactions, as well as internal diffusion, internal heat transfer, and their cross-effects. Our approach yields a systematic way to derive the complete evolution equations for the open systems, including the expression of the internal entropy production of the system, independently of its complexity. It might be especially useful for the study of the nonequilibrium thermodynamics of biophysical systems. Full article
(This article belongs to the Special Issue Phenomenological Thermodynamics of Irreversible Processes)
Article
Minimising the Kullback–Leibler Divergence for Model Selection in Distributed Nonlinear Systems
Entropy 2018, 20(2), 51; https://doi.org/10.3390/e20020051 - 23 Jan 2018
Cited by 15
Abstract
The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of [...] Read more.
The Kullback–Leibler (KL) divergence is a fundamental measure of information geometry that is used in a variety of contexts in artificial intelligence. We show that, when system dynamics are given by distributed nonlinear systems, this measure can be decomposed as a function of two information-theoretic measures, transfer entropy and stochastic interaction. More specifically, these measures are applicable when selecting a candidate model for a distributed system, where individual subsystems are coupled via latent variables and observed through a filter. We represent this model as a directed acyclic graph (DAG) that characterises the unidirectional coupling between subsystems. Standard approaches to structure learning are not applicable in this framework due to the hidden variables; however, we can exploit the properties of certain dynamical systems to formulate exact methods based on differential topology. We approach the problem by using reconstruction theorems to derive an analytical expression for the KL divergence of a candidate DAG from the observed dataset. Using this result, we present a scoring function based on transfer entropy to be used as a subroutine in a structure learning algorithm. We then demonstrate its use in recovering the structure of coupled Lorenz and Rössler systems. Full article
(This article belongs to the Special Issue New Trends in Statistical Physics of Complex Systems)
Article
Low Computational Cost for Sample Entropy
Entropy 2018, 20(1), 61; https://doi.org/10.3390/e20010061 - 13 Jan 2018
Cited by 19
Abstract
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used [...] Read more.
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used in long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the k-d trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the two last suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail at the similarity check. The number of avoided comparisons is proved to be very large, resulting in an analogous large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy. Full article
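For reference, the straightforward implementation that the fast algorithms are benchmarked against follows directly from the definition (one common variant, using the maximum norm and excluding self-matches). This sketch is quadratic in the series length, which is exactly the cost the paper attacks:

```python
import numpy as np

def sample_entropy(x, m, r):
    """Sample Entropy: -log(A/B), where B and A count template pairs of
    length m and m+1 within Chebyshev (maximum-norm) distance r."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        matches = 0
        for i in range(len(templates) - 1):
            d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            matches += int((d <= r).sum())
        return matches

    return -np.log(count_matches(m + 1) / count_matches(m))

rng = np.random.default_rng(0)
noise = rng.standard_normal(500)
s_noise = sample_entropy(noise, m=2, r=0.2 * noise.std())
regular = np.sin(0.1 * np.arange(500))
s_regular = sample_entropy(regular, m=2, r=0.2 * regular.std())
```

A regular signal produces far fewer surprising transitions than noise, so `s_regular` comes out well below `s_noise`.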
Article
Transfer Entropy as a Tool for Hydrodynamic Model Validation
Entropy 2018, 20(1), 58; https://doi.org/10.3390/e20010058 - 12 Jan 2018
Cited by 10
Abstract
The validation of numerical models is an important component of modeling to ensure reliability of model outputs under prescribed conditions. In river deltas, robust validation of models is paramount given that models are used to forecast land change and to track water, solid, [...] Read more.
The validation of numerical models is an important component of modeling to ensure reliability of model outputs under prescribed conditions. In river deltas, robust validation of models is paramount given that models are used to forecast land change and to track water, solid, and solute transport through the deltaic network. We propose using transfer entropy (TE) to validate model results. TE quantifies the information transferred between variables in terms of strength, timescale, and direction. Using water level data collected in the distributary channels and inter-channel islands of Wax Lake Delta, Louisiana, USA, along with modeled water level data generated for the same locations using Delft3D, we assess how well couplings between external drivers (river discharge, tides, wind) and modeled water levels reproduce the observed data couplings. We perform this operation through time using ten-day windows. Modeled and observed couplings compare well; their differences reflect the spatial parameterization of wind and roughness in the model, which prevents the model from capturing high frequency fluctuations of water level. The model captures couplings better in channels than on islands, suggesting that mechanisms of channel-island connectivity are not fully represented in the model. Overall, TE serves as an additional validation tool to quantify the couplings of the system of interest at multiple spatial and temporal scales. Full article
(This article belongs to the Special Issue Transfer Entropy II)
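As a generic illustration of the quantity used in this validation study (not the authors' Wax Lake Delta/Delft3D workflow), transfer entropy with history length one can be estimated with a plug-in estimator over symbolized series; the quantile-binning scheme and function names below are assumptions:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=3):
    """Plug-in transfer entropy TE(X -> Y) in bits, history length 1.
    Series are symbolized by quantile binning before counting."""
    def symbolize(s):
        edges = np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(s, edges)

    xs, ys = symbolize(np.asarray(x)), symbolize(np.asarray(y))
    triples = list(zip(ys[1:], ys[:-1], xs[:-1]))  # (y_next, y_now, x_now)
    n = len(triples)
    c_yyx = Counter(triples)
    c_yx = Counter((yt, xt) for _, yt, xt in triples)
    c_yy = Counter((ynext, yt) for ynext, yt, _ in triples)
    c_y = Counter(yt for _, yt, _ in triples)
    te = 0.0
    for (ynext, yt, xt), c in c_yyx.items():
        # Ratio p(y_next | y, x) / p(y_next | y), expressed with raw counts.
        te += (c / n) * np.log2(c * c_y[yt] / (c_yx[(yt, xt)] * c_yy[(ynext, yt)]))
    return te
```

Because TE is directional, comparing TE(driver → water level) between observed and modeled series is what lets the authors quantify how well the model reproduces the strength and direction of each coupling.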
Article
Information Entropy Suggests Stronger Nonlinear Associations between Hydro-Meteorological Variables and ENSO
Entropy 2018, 20(1), 38; https://doi.org/10.3390/e20010038 - 09 Jan 2018
Cited by 7
Abstract
Understanding the teleconnections between hydro-meteorological data and the El Niño–Southern Oscillation cycle (ENSO) is an important step towards developing flood early warning systems. In this study, the concept of mutual information (MI) was applied using marginal and joint information entropy to quantify the linear and non-linear relationship between annual streamflow, extreme precipitation indices over the Mekong river basin, and ENSO. We primarily used Pearson correlation as a linear association metric for comparison with mutual information. The analysis was performed at four hydro-meteorological stations located on the mainstream of the Mekong river basin. It was observed that the nonlinear correlation information between the large-scale climate index and local hydro-meteorological data is comparatively higher than the traditional linear correlation information. The spatial analysis was carried out using all the grid points in the river basin, which suggests a spatial dependence structure between precipitation extremes and ENSO. Overall, this study suggests that the mutual information approach can detect more meaningful connections between large-scale climate indices and hydro-meteorological variables at different spatio-temporal scales. Application of the nonlinear mutual information metric can be an efficient tool to better understand the dynamics of hydro-climatic variables, resulting in improved climate-informed adaptation strategies. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
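The comparison at the heart of this study, histogram-based mutual information versus Pearson correlation, can be sketched generically as follows (the bin count and variable names are illustrative assumptions, not the study's configuration):

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Plug-in MI estimate in bits from a 2-D histogram:
    I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = counts / counts.sum()

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Marginals are row/column sums of the joint distribution.
    return entropy(p_xy.sum(axis=1)) + entropy(p_xy.sum(axis=0)) - entropy(p_xy.ravel())

# A purely nonlinear dependence: Pearson r is near zero, MI is clearly positive.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x ** 2 + 0.1 * rng.normal(size=2000)
r = np.corrcoef(x, y)[0, 1]
mi = mutual_information(x, y)
```

This is why MI can reveal ENSO teleconnections that a linear correlation coefficient misses: MI is sensitive to any statistical dependence, not just linear association.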
Article
Searching for Chaos Evidence in Eye Movement Signals
Entropy 2018, 20(1), 32; https://doi.org/10.3390/e20010032 - 07 Jan 2018
Cited by 14
Abstract
Most naturally-occurring physical phenomena are examples of nonlinear dynamic systems, the functioning of which attracts many researchers seeking to unveil their nature. The research presented in this paper is aimed at exploring eye movement dynamics for evidence of a chaotic nature. Nonlinear time series analysis methods were used for this purpose. Two time series features were studied, fractal dimension and entropy, by utilising the embedding theory. The methods were applied to data collected during an experiment with a “jumping point” stimulus. Eye movements were registered by means of the Jazz-novo eye tracker. One thousand three hundred and ninety-two (1392) time series were defined, based on the horizontal velocity of eye movements registered during imposed, prolonged fixations. In order to conduct a detailed analysis of the signal and identify differences contributing to the observed patterns of behaviour over time, fractal dimension and entropy were evaluated in various time series intervals. The influence of the noise contained in the data and the impact of the utilized filter on the obtained results were also studied. A low-pass filter with a 50 Hz cut-off frequency, estimated by means of the Fourier transform, was used for noise reduction, and all the methods concerned were applied to the time series before and after noise reduction. These studies provided some premises for perceiving eye movements as chaotic data: the characteristics of a space-time separation plot, a low and non-integer time series dimension, and time series entropy characteristic of chaotic systems. Full article
(This article belongs to the Special Issue Research Frontier in Chaos Theory and Complex Networks)
Article
Exact Renormalization Groups As a Form of Entropic Dynamics
Entropy 2018, 20(1), 25; https://doi.org/10.3390/e20010025 - 04 Jan 2018
Cited by 8
Abstract
The Renormalization Group (RG) is a set of methods that have been instrumental in tackling problems involving an infinite number of degrees of freedom, such as quantum field theory and critical phenomena. What all these methods have in common—which is what explains their success—is that they allow a systematic search for those degrees of freedom that happen to be relevant to the phenomena in question. In the standard approaches, the RG transformations are implemented either by coarse graining or through a change of variables. When these transformations are infinitesimal, the formalism can be described as a continuous dynamical flow in a fictitious time parameter. It is generally the case that these exact RG equations are functional diffusion equations. In this paper, we show that the exact RG equations can be derived using entropic methods. The RG flow is then described as a form of entropic dynamics of field configurations. Although equivalent to other versions of the RG, in this approach the RG transformations receive a purely inferential interpretation that establishes a clear link to information theory. Full article
Article
Multiscale Information Theory and the Marginal Utility of Information
Entropy 2017, 19(6), 273; https://doi.org/10.3390/e19060273 - 13 Jun 2017
Cited by 17
Abstract
Complex systems display behavior at a range of scales. Large-scale behaviors can emerge from the correlated or dependent behavior of individual small-scale components. To capture this observation in a rigorous and general way, we introduce a formalism for multiscale information theory. Dependent behavior among system components results in overlapping or shared information. A system’s structure is revealed in the sharing of information across the system’s dependencies, each of which has an associated scale. Counting information according to its scale yields the quantity of scale-weighted information, which is conserved when a system is reorganized. In the interest of flexibility we allow information to be quantified using any function that satisfies two basic axioms. Shannon information and vector space dimension are examples. We discuss two quantitative indices that summarize system structure: an existing index, the complexity profile, and a new index, the marginal utility of information. Using simple examples, we show how these indices capture the multiscale structure of complex systems in a quantitative way. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
Review

Jump to: Research, Other

Review
Application of Biological Domain Knowledge Based Feature Selection on Gene Expression Data
Entropy 2021, 23(1), 2; https://doi.org/10.3390/e23010002 - 22 Dec 2020
Cited by 3
Abstract
In the last two decades, there have been massive advancements in high throughput technologies, which resulted in the exponential growth of public repositories of gene expression datasets for various phenotypes. It is possible to unravel biomarkers by comparing the gene expression levels under different conditions, such as disease vs. control, treated vs. not treated, drug A vs. drug B, etc. This is a well-studied problem in the machine learning domain: the feature selection problem. In biological data analysis, most of the computational feature selection methodologies were taken from other fields, without considering the nature of biological data. Thus, integrative approaches that utilize the biological knowledge while performing feature selection are necessary for this kind of data. The main idea behind the integrative gene selection process is to generate a ranked list of genes considering both the statistical metrics that are applied to the gene expression data, and the biological background information which is provided as external datasets. One of the main goals of this review is to explore the existing methods that integrate different types of information in order to improve the identification of the biomolecular signatures of diseases and the discovery of new potential targets for treatment. These integrative approaches are expected to aid the prediction, diagnosis, and treatment of diseases, as well as to enlighten us on disease state dynamics and the mechanisms of disease onset and progression. The integration of various types of biological information will necessitate the development of novel techniques for integration and data analysis. Another aim of this review is to encourage the bioinformatics community to develop new approaches for searching and determining significant groups/clusters of features based on one or more biological grouping functions. Full article
(This article belongs to the Special Issue Statistical Inference from High Dimensional Data)
Review
An Overview of Key Technologies in Physical Layer Security
Entropy 2020, 22(11), 1261; https://doi.org/10.3390/e22111261 - 06 Nov 2020
Cited by 1
Abstract
The open nature of radio propagation enables ubiquitous wireless communication. This allows for seamless data transmission. However, unauthorized users may pose a threat to the security of the data being transmitted to authorized users. This gives rise to network vulnerabilities such as hacking, eavesdropping, and jamming of the transmitted information. Physical layer security (PLS) has been identified as one of the promising security approaches to safeguard the transmission from eavesdroppers in a wireless network. It is an alternative to the computationally demanding and complex cryptographic algorithms and techniques. PLS has received continually growing research interest owing to the possibility of exploiting the characteristics of the wireless channel, chief among them the random nature of the transmission channel. This randomness makes confidential and authenticated signal transmission between the sender and the receiver possible at the physical layer. We start by introducing the basic theories of PLS, including the wiretap channel, information-theoretic security, and a brief discussion of the cryptography security technique. Furthermore, an overview of multiple-input multiple-output (MIMO) communication is provided. The main focus of our review is on the existing key-less PLS optimization techniques, their limitations, and challenges. The paper also looks into promising key research areas for addressing these shortfalls. Lastly, a comprehensive overview of some of the recent PLS research in 5G and 6G wireless communication networks is provided. Full article
Review
Role of Entropy in Colloidal Self-Assembly
Entropy 2020, 22(8), 877; https://doi.org/10.3390/e22080877 - 10 Aug 2020
Cited by 1
Abstract
Entropy plays a key role in the self-assembly of colloidal particles. Specifically, in the case of hard particles, which do not interact or overlap with each other during the process of self-assembly, the free energy is minimized due to an increase in the entropy of the system. Understanding the contribution of entropy and engineering it is increasingly becoming central to modern colloidal self-assembly research, because the entropy serves as a guide to design a wide variety of self-assembled structures for many technological and biomedical applications. In this work, we highlight the importance of entropy in different theoretical and experimental self-assembly studies. We discuss the role of shape entropy and depletion interactions in colloidal self-assembly. We also highlight the effect of entropy in the formation of open and closed crystalline structures, as well as describe recent advances in engineering entropy to achieve targeted self-assembled structures. Full article
(This article belongs to the Special Issue Thermodynamics and Entropy for Self-Assembly and Self-Organization)
Review
Phase-Coherent Dynamics of Quantum Devices with Local Interactions
Entropy 2020, 22(8), 847; https://doi.org/10.3390/e22080847 - 31 Jul 2020
Cited by 4
Abstract
This review illustrates how Local Fermi Liquid (LFL) theories describe the strongly correlated and coherent low-energy dynamics of quantum dot devices. This approach consists of an effective elastic scattering theory that accounts exactly for strong correlations. Here, we focus on the mesoscopic capacitor and recent experiments achieving a Coulomb-induced quantum state transfer. Extending to out-of-equilibrium regimes aimed at triggered single-electron emission, we illustrate how inelastic effects become crucial and require approaches beyond LFLs, shedding new light on past experimental data by revealing clear interaction effects in the dynamics of mesoscopic capacitors. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
Review
Thermodynamics in Ecology—An Introductory Review
Entropy 2020, 22(8), 820; https://doi.org/10.3390/e22080820 - 27 Jul 2020
Cited by 6
Abstract
How to predict the evolution of ecosystems is one of the numerous questions asked of ecologists by managers and politicians. To answer it, we will need to give a scientific definition to concepts like sustainability, integrity, resilience and ecosystem health. This is not an easy task, as modern ecosystem theory exemplifies. Ecosystems show a high degree of complexity, based upon a high number of compartments, interactions and regulations. The last two decades have offered proposals for the interpretation of ecosystems within a framework of thermodynamics. The entrance point of such an understanding of ecosystems was delivered more than 50 years ago through Schrödinger’s and Prigogine’s interpretations of living systems as “negentropy feeders” and “dissipative structures”, respectively. Combining these views from far-from-equilibrium thermodynamics with traditional classical thermodynamics and ecology is obviously not going to happen without problems. There seems little reason to doubt that far-from-equilibrium systems, such as organisms or ecosystems, also have to obey fundamental physical principles such as mass conservation and the first and second laws of thermodynamics. Both have been applied in ecology since the 1950s, and lately the concepts of exergy and entropy have been introduced. Exergy has recently been proposed, from several directions, as a useful indicator of the state, structure and function of the ecosystem. The proposals take two main directions, one concerned with the exergy stored in the ecosystem, the other with the exergy degraded and entropy formation. The implementation of exergy in ecology has often been explained as a translation of the Darwinian principle of “survival of the fittest” into thermodynamics, the fittest ecosystem being the one able to use and store fluxes of energy and materials in the most efficient manner. The major problem in the transfer to ecology is that thermodynamic properties can only be calculated, not measured. Most of the supportive evidence comes from aquatic ecosystems. Results show that natural and culturally induced changes in ecosystems are accompanied by variations in exergy. In brief, ecological succession is followed by an increase of exergy. This paper aims to describe the state of the art in the implementation of thermodynamics into ecology. This includes a brief outline of the history and the derivation of the thermodynamic functions used today. Examples of applications and results achieved up to now are given, and the importance to management is laid out. Some suggestions are given for essential future research agendas on issues that need resolution. Full article
(This article belongs to the Special Issue Evolution and Thermodynamics)
Review
What Is So Special about Quantum Clicks?
Entropy 2020, 22(6), 602; https://doi.org/10.3390/e22060602 - 28 May 2020
Cited by 18
Abstract
This is an elaboration of the “extra” advantage of the performance of quantized physical systems over classical ones, both in terms of single outcomes as well as probabilistic predictions. From a formal point of view, it is based on entities related to (dual) vectors in (dual) Hilbert spaces, as compared to the Boolean algebra of subsets of a set and the additive measures they support. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)
Review
On the Evidence of Thermodynamic Self-Organization during Fatigue: A Review
Entropy 2020, 22(3), 372; https://doi.org/10.3390/e22030372 - 24 Mar 2020
Abstract
In this review paper, the evidence and application of thermodynamic self-organization are reviewed for metals, typically single crystals, subjected to cyclic loading. The theory of self-organization in thermodynamic processes far from equilibrium is a cutting-edge theme for the development of a new generation of materials. It can be interpreted as the formation of globally coherent patterns, configurations and orderliness through local interactivities, by a “cascade evolution of dissipative structures”. Non-equilibrium thermodynamics, entropy, and dissipative structures connected to the self-organization phenomenon (patterning, orderliness) are briefly discussed. Selected evidence is reviewed in detail to show how thermodynamic self-organization can emerge from a non-equilibrium process: fatigue. Evidence including dislocation density evolution, stored energy, temperature, and acoustic signals can be considered signatures of self-organization. Most of the attention is given to drawing an analogy between persistent slip bands (PSBs) and self-organization in metals with single crystals. Some aspects of the stability of dislocations during fatigue of single crystals are discussed using the formulation of excess entropy generation. Full article
(This article belongs to the Special Issue Review Papers for Entropy)
Review
New Invariant Expressions in Chemical Kinetics
Entropy 2020, 22(3), 373; https://doi.org/10.3390/e22030373 - 24 Mar 2020
Cited by 4
Abstract
This paper presents a review of our original results obtained during the last decade. These results have been found theoretically for classical mass-action-law models of chemical kinetics and justified experimentally. In contrast with the traditional invariances, they relate to a special battery of kinetic experiments, not a single experiment. Two types of invariances are distinguished and described in detail: thermodynamic invariants, i.e., special combinations of kinetic dependences that yield the equilibrium constants, or simple functions of the equilibrium constants; and “mixed” kinetico-thermodynamic invariances, functions both of equilibrium constants and non-thermodynamic ratios of kinetic coefficients. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)
Review
Thermodynamic Limits and Optimality of Microbial Growth
Entropy 2020, 22(3), 277; https://doi.org/10.3390/e22030277 - 28 Feb 2020
Cited by 3
Abstract
Understanding microbial growth with the use of mathematical models has a long history that dates back to the pioneering work of Jacques Monod in the 1940s. Monod’s famous growth law expressed microbial growth rate as a simple function of the limiting nutrient concentration. However, to explain growth laws from underlying principles is extremely challenging. In the second half of the 20th century, numerous experimental approaches aimed at precisely measuring heat production during microbial growth to determine the entropy balance in a growing cell and to quantify the exported entropy. This has led to the development of thermodynamic theories of microbial growth, which have generated fundamental understanding and identified the principal limitations of the growth process. Although these approaches ignored metabolic details and instead considered microbial metabolism as a black box, modern theories heavily rely on genomic resources to describe and model metabolism in great detail to explain microbial growth. Interestingly, however, thermodynamic constraints are often included in modern modeling approaches only in a rather superficial fashion, and it appears that recent modeling approaches and classical theories are rather disconnected fields. To stimulate a closer interaction between these fields, we here review various theoretical approaches that aim at describing microbial growth based on thermodynamics and outline the resulting thermodynamic limits and optimality principles. We start with classical black box models of cellular growth, and continue with recent metabolic modeling approaches that include thermodynamics, before we place these models in the context of fundamental considerations based on non-equilibrium statistical mechanics. We conclude by identifying conceptual overlaps between the fields and suggest how the various types of theories and models can be integrated. 
We outline how concepts from one approach may help to inform or constrain another, and we demonstrate how genome-scale models can be used to infer key black box parameters, such as the energy of formation or the degree of reduction of biomass. Such integration will allow us to understand to what extent microbes can be viewed as thermodynamic machines, and how close to theoretical optima they operate. Full article
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
Review
Some Notes on Counterfactuals in Quantum Mechanics
Entropy 2020, 22(3), 266; https://doi.org/10.3390/e22030266 - 26 Feb 2020
Abstract
Counterfactuals, i.e., events that could have occurred but eventually did not, play a unique role in quantum mechanics in that they exert causal effects despite their non-occurrence. They are therefore vital for a better understanding of quantum mechanics (QM) and possibly the universe as a whole. In earlier works, we have studied counterfactuals both conceptually and experimentally. A fruitful framework termed quantum oblivion has emerged, referring to situations where one particle seems to "forget" its interaction with other particles despite the latter being visibly affected. This framework proved to have significant explanatory power, which we now extend to tackle additional riddles. The time-symmetric causality employed by the Two State-Vector Formalism (TSVF) reveals a subtle realm ruled by “weak values,” already demonstrated by numerous experiments. They offer a realistic, simple and intuitively appealing explanation to the unique role of quantum non-events, as well as to the foundations of QM. In this spirit, we performed a weak value analysis of quantum oblivion and suggest some new avenues for further research. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
Review
Quantum Phonon Transport in Nanomaterials: Combining Atomistic with Non-Equilibrium Green’s Function Techniques
Entropy 2019, 21(8), 735; https://doi.org/10.3390/e21080735 - 27 Jul 2019
Cited by 6
Abstract
A crucial goal for increasing thermal energy harvesting will be to progress towards atomistic design strategies for smart nanodevices and nanomaterials. This requires the combination of computationally efficient atomistic methodologies with quantum transport based approaches. Here, we review our recent work on this problem, by presenting selected applications of the PHONON tool to the description of phonon transport in nanostructured materials. The PHONON tool is a module developed as part of the Density-Functional Tight-Binding (DFTB) software platform. We discuss the anisotropic phonon band structure of selected puckered two-dimensional materials, helical and horizontal doping effects in the phonon thermal conductivity of boron nitride-carbon heteronanotubes, phonon filtering in molecular junctions, and a novel computational methodology to investigate time-dependent phonon transport at the atomistic level. These examples illustrate the versatility of our implementation of phonon transport in combination with density functional-based methods to address specific nanoscale functionalities, thus potentially allowing for designing novel thermal devices. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
Review
Approximate Entropy and Sample Entropy: A Comprehensive Tutorial
Entropy 2019, 21(6), 541; https://doi.org/10.3390/e21060541 - 28 May 2019
Cited by 70
Abstract
Approximate Entropy and Sample Entropy are two algorithms for determining the regularity of a series of data based on the existence of patterns. Despite their similarities, the theoretical ideas behind those techniques are different but usually ignored. This paper aims to be a complete guideline to the theory and application of the algorithms, intended to explain their characteristics in detail to researchers from different fields. While initially developed for physiological applications, both algorithms have been used in other fields such as medicine, telecommunications, economics or Earth sciences. In this paper, we explain the theoretical aspects involving Information Theory and Chaos Theory, provide simple source codes for their computation, and illustrate the techniques with a step-by-step example of how to use the algorithms properly. This paper is not intended to be an exhaustive review of all previous applications of the algorithms but rather a comprehensive tutorial where no previous knowledge is required to understand the methodology. Full article
(This article belongs to the Special Issue Approximate, Sample and Multiscale Entropy)
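In the spirit of the tutorial, the structural features that distinguish Approximate Entropy from Sample Entropy (self-matches are included, and the logarithm is averaged over templates rather than taken once over a ratio) can be sketched as follows (illustrative code, not the source code published with the paper):

```python
import numpy as np

def approximate_entropy(x, m=2, r=0.2):
    """ApEn: self-matches included, log taken before averaging (unlike SampEn,
    which excludes self-matches and takes a single log of a count ratio)."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def phi(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        # For each template, the fraction of templates within r (max norm);
        # the template always matches itself, so the count is never zero.
        frac = np.array([
            np.mean(np.max(np.abs(templates - t), axis=1) <= r) for t in templates
        ])
        return np.mean(np.log(frac))

    return phi(m) - phi(m + 1)
```

A regular signal (e.g. a sine wave) yields a markedly lower value than white noise of the same length, which is the basic regularity contrast both algorithms are built to detect.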
Review
Physical Layer Key Generation in 5G and Beyond Wireless Communications: Challenges and Opportunities
Entropy 2019, 21(5), 497; https://doi.org/10.3390/e21050497 - 15 May 2019
Cited by 27
Abstract
The fifth generation (5G) and beyond wireless communications will transform many exciting applications and trigger massive data connections with private, confidential, and sensitive information. The security of wireless communications is conventionally established by cryptographic schemes and protocols in which the secret key distribution is one of the essential primitives. However, traditional cryptography-based key distribution protocols might be challenged in the 5G and beyond communications because of special features such as device-to-device and heterogeneous communications, and ultra-low latency requirements. Channel reciprocity-based key generation (CRKG) is an emerging physical layer-based technique to establish secret keys between devices. This article reviews CRKG when the 5G and beyond networks employ three candidate technologies: duplex modes, massive multiple-input multiple-output (MIMO) and mmWave communications. We identify the opportunities and challenges for CRKG and provide corresponding solutions. To further demonstrate the feasibility of CRKG in practical communication systems, we overview existing prototypes with different IoT protocols and examine their performance in real-world environments. This article shows the feasibility and promising performances of CRKG with the potential to be commercialized. Full article
(This article belongs to the Special Issue Information-Theoretic Security II)
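The core of CRKG is that two devices observing a reciprocal channel obtain highly correlated measurements, which each side can independently quantise into matching key bits. A minimal sketch of that quantisation step (the constants, the median threshold, and the guard band are illustrative assumptions, not the article's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reciprocal channel gains seen by Alice and Bob: a shared fading
# component plus independent receiver noise (toy model).
fading = rng.normal(0.0, 1.0, 256)
alice = fading + rng.normal(0.0, 0.05, 256)
bob = fading + rng.normal(0.0, 0.05, 256)

def quantise(samples):
    """1-bit quantisation against the median; a guard band drops samples
    too close to the threshold, where the two sides are likely to disagree."""
    med = np.median(samples)
    guard = 0.2 * np.std(samples)
    keep = np.abs(samples - med) > guard
    return keep, (samples > med).astype(int)

keep_a, bits_a = quantise(alice)
keep_b, bits_b = quantise(bob)
keep = keep_a & keep_b          # in practice agreed via public discussion
disagreement = np.mean(bits_a[keep] != bits_b[keep])
```

With a guard band like this the residual key disagreement rate is small, and the remaining mismatches would be removed by information reconciliation before privacy amplification.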

Review
Classical (Local and Contextual) Probability Model for Bohm–Bell Type Experiments: No-Signaling as Independence of Random Variables
Entropy 2019, 21(2), 157; https://doi.org/10.3390/e21020157 - 08 Feb 2019
Cited by 25
Abstract
We start with a review of classical probability representations of quantum states and observables. We show that the correlations of the observables involved in Bohm–Bell type experiments can be expressed as correlations of classical random variables. The main part of the paper is devoted to the conditional probability model, with conditioning on the selection of the pairs of experimental settings. From the viewpoint of quantum foundations, this is a local contextual hidden-variables model. Following recent works of Dzhafarov and collaborators, we apply our conditional probability approach to characterize (no-)signaling. Consideration of the Bohm–Bell experimental scheme in the presence of signaling is important for applications outside quantum mechanics, e.g., in psychology and social science. The main message of this paper (rooted in Ballentine's work) is that quantum probabilities, and more generally probabilities related to Bohm–Bell type experiments (not only in physics, but also in psychology, sociology, game theory, economics, and finance), can be classically represented as conditional probabilities. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))
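The reading of no-signaling as independence of random variables can be illustrated with a toy local hidden-variable simulation (a generic illustration, not the authors' construction): each party's outcome depends only on its own setting and the shared hidden variable, so Alice's marginal statistics are independent of Bob's setting choice even though the outcomes are correlated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hidden variable and independently chosen settings (all names are
# illustrative). Alice's outcome is a function of (a, lam) only,
# Bob's of (b, lam) only -- a local model.
lam = rng.uniform(0.0, 2.0 * np.pi, n)
a = rng.integers(0, 2, n)               # Alice's setting
b = rng.integers(0, 2, n)               # Bob's setting

angles_a = np.array([0.0, np.pi / 4])
angles_b = np.array([np.pi / 8, 3 * np.pi / 8])
out_a = np.sign(np.cos(lam - angles_a[a]))
out_b = np.sign(np.cos(lam - angles_b[b]))

# Outcomes are correlated at fixed settings ...
mask = (a == 0) & (b == 0)
corr00 = np.mean(out_a[mask] * out_b[mask])

# ... yet no-signaling holds: Alice's marginal P(out_a = +1)
# does not depend on Bob's setting b.
p_given_b0 = np.mean(out_a[b == 0] == 1)
p_given_b1 = np.mean(out_a[b == 1] == 1)
```

Here the difference between the two conditional marginals vanishes up to sampling noise, which is exactly the independence statement the abstract refers to.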
Review
Levitated Nanoparticles for Microscopic Thermodynamics—A Review
Entropy 2018, 20(5), 326; https://doi.org/10.3390/e20050326 - 28 Apr 2018
Cited by 35
Abstract
Levitated nanoparticles have received much attention for their potential to perform quantum mechanical experiments even at room temperature. However, even in the regime where the particle dynamics are purely classical, there is much interesting physics to explore. Here we review the application of levitated nanoparticles as a new experimental platform for exploring stochastic thermodynamics in small systems. Full article
(This article belongs to the Special Issue Mesoscopic Thermodynamics and Dynamics)

Review
Information Theoretic Approaches for Motor-Imagery BCI Systems: Review and Experimental Comparison
Entropy 2018, 20(1), 7; https://doi.org/10.3390/e20010007 - 02 Jan 2018
Cited by 17
Abstract
Brain–computer interfaces (BCIs) have been attracting great interest in recent years. The common spatial patterns (CSP) technique is a well-established approach to the spatial filtering of electroencephalogram (EEG) data in BCI applications. Even though CSP was originally proposed from a heuristic viewpoint, it can also be built on very strong foundations using information theory. This paper reviews the relationship between CSP and several information-theoretic approaches, including the Kullback–Leibler divergence, the Beta divergence, and the Alpha-Beta log-det (AB-LD) divergence. We also review other approaches based on the idea of selecting the features that are maximally informative about the class labels. The performance of all the methods is compared via experiments. Full article
(This article belongs to the Special Issue Information Theory Applied to Physiological Signals)
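CSP itself can be summarised compactly: it finds spatial filters whose output variance is maximal for one class and minimal for the other, obtained from a (generalised) eigendecomposition of the two class covariance matrices. A minimal NumPy sketch on synthetic two-class data (the dimensions, the trace normalisation, and the whitening route below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def class_cov(trials):
    """Average trace-normalised spatial covariance over trials
    (each trial: channels x samples)."""
    covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
    return np.mean(covs, axis=0)

def make_trials(boost_ch, n_trials=30, n_ch=4, n_samp=200):
    """Toy EEG: one class has extra variance on one channel."""
    trials = []
    for _ in range(n_trials):
        x = rng.normal(size=(n_ch, n_samp))
        x[boost_ch] *= 3.0
        trials.append(x)
    return trials

c1 = class_cov(make_trials(boost_ch=0))
c2 = class_cov(make_trials(boost_ch=1))

# Whiten the composite covariance, then diagonalise class 1 in the
# whitened space; eigenvalues lie in (0, 1) and the extreme ones give
# the most discriminative spatial filters.
d, u = np.linalg.eigh(c1 + c2)
p = np.diag(d ** -0.5) @ u.T            # whitening transform
lam, v = np.linalg.eigh(p @ c1 @ p.T)   # ascending eigenvalues
csp_filters = (v.T @ p)[[0, -1]]        # rows: filters for each extreme
```

An eigenvalue near 1 means the corresponding filter captures variance almost exclusively from class 1, near 0 from class 2; this is the quantity the information-theoretic reformulations of CSP reinterpret as a divergence between the two class distributions.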

Other

Jump to: Research, Review

Concept Paper
Introduction to Extreme Seeking Entropy
Entropy 2020, 22(1), 93; https://doi.org/10.3390/e22010093 - 12 Jan 2020
Cited by 3
Abstract
Recently, the concept of evaluating an unusually large learning effort of an adaptive system to detect novelties in the observed data was introduced. The present paper introduces a new measure of the learning effort of an adaptive system. The proposed method also uses adaptable parameters. Instead of a multi-scale enhanced approach, the generalized Pareto distribution is employed to estimate the probability of unusual updates, as well as to detect novelties. The measure was successfully tested in various scenarios with (i) synthetic data and (ii) real time-series datasets, using multiple adaptive filters and learning algorithms; the results of these experiments are presented. Full article
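The peaks-over-threshold idea behind such a measure can be sketched as follows: fit a generalized Pareto distribution (GPD) to the tail of an adaptive filter's weight-update magnitudes, then score new updates by their tail probability, flagging extremely improbable updates as novelties. The sketch below uses a toy LMS filter and a method-of-moments GPD fit; all names, constants, and the moment estimator are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stream: an AR(1)-like system whose coefficient flips at t = 700
# (the novelty). A one-weight LMS filter tracks it.
n, mu = 1000, 0.05
coef = np.where(np.arange(n) < 700, 0.5, -0.5)
x = rng.normal(size=n)
d = np.empty(n)
d[0] = x[0]
for t in range(1, n):
    d[t] = coef[t] * d[t - 1] + x[t]

w, updates = 0.0, np.empty(n - 1)
for t in range(1, n):
    e = d[t] - w * d[t - 1]
    dw = mu * e * d[t - 1]          # LMS weight update
    updates[t - 1] = abs(dw)
    w += dw

# Peaks-over-threshold: method-of-moments GPD fit to the excesses over
# the 90th percentile of the pre-novelty updates.
train = updates[:600]
u = np.quantile(train, 0.9)
y = train[train > u] - u
m, s2 = y.mean(), y.var()
xi = 0.5 * (1.0 - m * m / s2)           # shape (moment estimator)
sigma = 0.5 * m * (1.0 + m * m / s2)    # scale (moment estimator)

def tail_prob(dw_abs):
    """Approximate P(|update| > dw_abs) under the fitted GPD tail;
    updates below the threshold are not scored (capped at 1)."""
    if dw_abs <= u:
        return 1.0
    z = 1.0 + xi * (dw_abs - u) / sigma
    return 0.1 * z ** (-1.0 / xi) if z > 0 else 0.0

scores = np.array([tail_prob(v) for v in updates])
novelty_candidates = np.flatnonzero(scores < 1e-3)
```

Updates whose fitted tail probability falls below a small threshold are the candidate novelties; in this toy run they should cluster around the coefficient flip, where the filter's learning effort spikes.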

Perspective
Entropy and Information within Intrinsically Disordered Protein Regions
Entropy 2019, 21(7), 662; https://doi.org/10.3390/e21070662 - 06 Jul 2019
Cited by 12
Abstract
Bioinformatics and biophysical studies of intrinsically disordered proteins and regions (IDRs) note the high entropy at individual sequence positions and in conformations sampled in solution. This prevents application of the canonical sequence-structure-function paradigm to IDRs and motivates the development of new methods to extract information from IDR sequences. We argue that the information in IDR sequences cannot be fully revealed through positional conservation, which largely measures stable structural contacts and interaction motifs. Instead, considerations of evolutionary conservation of molecular features can reveal the full extent of information in IDRs. Experimental quantification of the large conformational entropy of IDRs is challenging but can be approximated through the extent of conformational sampling measured by a combination of NMR spectroscopy and lower-resolution structural biology techniques, which can be further interpreted with simulations. Conformational entropy and other biophysical features can be modulated by post-translational modifications that provide functional advantages to IDRs by tuning their energy landscapes and enabling a variety of functional interactions and modes of regulation. The diverse mosaic of functional states of IDRs and their conformational features within complexes demands novel metrics of information, which will reflect the complicated sequence-conformational ensemble-function relationship of IDRs. Full article
