Editor's Choice Articles


Research


Open Access | Editor’s Choice | Article
Unifying Aspects of Generalized Calculus
Entropy 2020, 22(10), 1180; https://doi.org/10.3390/e22101180 - 19 Oct 2020
Cited by 2
Abstract
Non-Newtonian calculus naturally unifies various ideas that have occurred over the years in the field of generalized thermostatistics, or in the borderland between classical and quantum information theory. The formalism, being very general, is as simple as the calculus we know from undergraduate mathematics courses. Its theoretical potential is huge, and yet it remains unknown or unappreciated. Full article
(This article belongs to the Special Issue The Statistical Foundations of Entropy)

Open Access | Feature Paper | Editor’s Choice | Article
Quantum-Gravity Stochastic Effects on the de Sitter Event Horizon
Entropy 2020, 22(6), 696; https://doi.org/10.3390/e22060696 - 22 Jun 2020
Cited by 3
Abstract
The stochastic character of the cosmological constant arising from the non-linear quantum-vacuum Bohm interaction in the framework of the manifestly-covariant theory of quantum gravity (CQG theory) is pointed out. This feature is shown to be consistent with the axiomatic formulation of quantum gravity based on the hydrodynamic representation of the same CQG theory developed recently. The conclusion follows by investigating the indeterminacy properties of the probability density function and its representation associated with the quantum gravity state, which corresponds to a hydrodynamic continuity equation that satisfies the unitarity principle. As a result, the corresponding form of stochastic quantum-modified Einstein field equations is obtained and shown to admit a stochastic cosmological de Sitter solution for the space-time metric tensor. The analytical calculation of the stochastic averages of relevant physical observables is obtained. These include in particular the radius of the de Sitter sphere fixing the location of the event horizon and the expression of the Hawking temperature associated with the related particle tunneling effect. Theoretical implications for cosmology and field theories are pointed out. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)

Open Access | Editor’s Choice | Article
Mathematical Models of Consciousness
Entropy 2020, 22(6), 609; https://doi.org/10.3390/e22060609 - 30 May 2020
Cited by 1
Abstract
In recent years, promising mathematical models have been proposed that aim to describe conscious experience and its relation to the physical domain. Whereas the axioms and metaphysical ideas of these theories have been carefully motivated, their mathematical formalism has not. In this article, we aim to remedy this situation. We give an account of what warrants mathematical representation of phenomenal experience, derive a general mathematical framework that takes into account consciousness’ epistemic context, and study which mathematical structures some of the key characteristics of conscious experience imply, showing precisely where mathematical approaches allow one to go beyond what the standard methodology can do. The result is a general mathematical framework for models of consciousness that can be employed in the theory-building process. Full article
(This article belongs to the Special Issue Models of Consciousness)
Open Access | Editor’s Choice | Article
Differential Parametric Formalism for the Evolution of Gaussian States: Nonunitary Evolution and Invariant States
Entropy 2020, 22(5), 586; https://doi.org/10.3390/e22050586 - 23 May 2020
Cited by 3
Abstract
In the differential approach elaborated, we study the evolution of the parameters of Gaussian, mixed, continuous variable density matrices, whose dynamics are given by Hermitian Hamiltonians expressed as quadratic forms of the position and momentum operators or quadrature components. Specifically, we obtain in generic form the differential equations for the covariance matrix, the mean values, and the density matrix parameters of a multipartite Gaussian state, unitarily evolving according to a Hamiltonian Ĥ. We also present the corresponding differential equations, which describe the nonunitary evolution of the subsystems. The resulting nonlinear equations are used to solve the dynamics of the system instead of the Schrödinger equation. The formalism elaborated allows us to define new specific invariant and quasi-invariant states, as well as states with invariant covariance matrices, i.e., states where only the mean values evolve according to the classical Hamilton equations. By using density matrices in the position and in the tomographic-probability representations, we study examples of these properties. As examples, we present novel invariant states for the two-mode frequency converter and quasi-invariant states for the bipartite parametric amplifier. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)
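The moment dynamics described in the abstract can be illustrated with a minimal numerical sketch. The equations below, d⟨r⟩/dt = ΩB⟨r⟩ and dΣ/dt = (ΩB)Σ + Σ(ΩB)ᵀ for a quadratic Hamiltonian H = ½ rᵀBr with symplectic form Ω, are the textbook first- and second-moment equations for Gaussian evolution, not code from the paper. The harmonic-oscillator case exhibits exactly the kind of state the authors define: an invariant covariance matrix while the mean values follow the classical Hamilton equations.

```python
import math

# Symplectic form for one mode, r = (q, p).
OMEGA = [[0.0, 1.0], [-1.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def scale(A, c):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def evolve(B, mean, cov, dt, steps):
    """Euler-integrate d<r>/dt = (OmegaB)<r> and dSigma/dt = (OmegaB)Sigma + Sigma(OmegaB)^T."""
    A = matmul(OMEGA, B)
    for _ in range(steps):
        mean = [mean[i] + dt * sum(A[i][j] * mean[j] for j in range(2))
                for i in range(2)]
        dcov = matadd(matmul(A, cov), matmul(cov, transpose(A)))
        cov = matadd(cov, scale(dcov, dt))
    return mean, cov

# Harmonic oscillator H = (q^2 + p^2)/2, i.e. B = identity.
B = [[1.0, 0.0], [0.0, 1.0]]
mean, cov = evolve(B, [1.0, 0.0], [[0.5, 0.0], [0.0, 0.5]], 1e-4, 10000)
# The means rotate per the classical Hamilton equations, while a covariance
# proportional to the identity stays invariant: OmegaSigma + Sigma Omega^T = 0.
```

After integrating to t = 1, the mean is close to (cos 1, −sin 1) while the covariance remains 0.5·I to machine precision, a state with an invariant covariance matrix in the sense of the abstract.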

Open Access | Editor’s Choice | Article
A Method to Present and Analyze Ensembles of Information Sources
Entropy 2020, 22(5), 580; https://doi.org/10.3390/e22050580 - 21 May 2020
Abstract
Information theory is a powerful tool for analyzing complex systems. In many areas of neuroscience, it is now possible to gather data from large ensembles of neural variables (e.g., data from many neurons, genes, or voxels). The individual variables can be analyzed with information theory to provide estimates of information shared between variables (forming a network between variables), or between neural variables and other variables (e.g., behavior or sensory stimuli). However, it can be difficult to (1) evaluate if the ensemble is significantly different from what would be expected in a purely noisy system and (2) determine if two ensembles are different. Herein, we introduce relatively simple methods to address these problems by analyzing ensembles of information sources. We demonstrate how an ensemble built of mutual information connections can be compared to null surrogate data to determine if the ensemble is significantly different from noise. Next, we show how two ensembles can be compared using a randomization process to determine if the sources in one contain more information than the other. All code necessary to carry out these analyses and demonstrations is provided. Full article
(This article belongs to the Special Issue Information Theory in Computational Neuroscience)
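The surrogate comparison described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released code: estimate mutual information with a plug-in estimator, then compare it against values from shuffled surrogates, which destroy the pairing between the two variables while preserving their marginals.

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in mutual information estimate (bits) for two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def surrogate_threshold(xs, ys, n_surr=200, alpha=0.05, rng=None):
    """MI significance threshold from surrogates that shuffle away the x-y pairing."""
    rng = rng or random.Random(0)
    vals = []
    for _ in range(n_surr):
        shuffled = ys[:]
        rng.shuffle(shuffled)
        vals.append(mutual_information(xs, shuffled))
    vals.sort()
    return vals[int((1 - alpha) * n_surr) - 1]

rng = random.Random(1)
xs = [rng.randint(0, 1) for _ in range(1000)]
ys_coupled = [x if rng.random() < 0.9 else 1 - x for x in xs]  # noisy copy of xs
ys_noise = [rng.randint(0, 1) for _ in range(1000)]            # independent noise

mi_c = mutual_information(xs, ys_coupled)
mi_n = mutual_information(xs, ys_noise)
thr = surrogate_threshold(xs, ys_coupled)
# mi_c should clearly exceed the surrogate threshold; mi_n should not.
```

An ensemble-level test would simply repeat this per connection and count how many links survive the threshold, which is the spirit of the comparison the abstract describes.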

Open Access | Feature Paper | Editor’s Choice | Article
Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
Entropy 2020, 22(5), 564; https://doi.org/10.3390/e22050564 - 18 May 2020
Cited by 4
Abstract
It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, there are generalization issues with high degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low dimensional latent state space representing probabilistic structures extracted from well habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories. Full article

Open Access | Editor’s Choice | Article
Re-Thinking the World with Neutral Monism: Removing the Boundaries Between Mind, Matter, and Spacetime
Entropy 2020, 22(5), 551; https://doi.org/10.3390/e22050551 - 14 May 2020
Cited by 1
Abstract
Herein we are not interested in merely using dynamical systems theory, graph theory, information theory, etc., to model the relationship between brain dynamics and networks, and various states and degrees of conscious processes. We are interested in the question of how phenomenal conscious experience and fundamental physics are most deeply related. Any attempt to mathematically and formally model conscious experience and its relationship to physics must begin with some metaphysical assumption in mind about the nature of conscious experience, the nature of matter, and the nature of the relationship between them. These days the most prominent metaphysical fixed points are strong emergence or some variant of panpsychism. In this paper we will detail another distinct metaphysical starting point known as neutral monism. In particular, we will focus on a variant of the neutral monism of William James and Bertrand Russell. Rather than starting with physics as fundamental, as both strong emergence and panpsychism do in their own way, our goal is to suggest how one might derive fundamental physics from neutral monism. Thus, starting with two axioms grounded in our characterization of neutral monism, we will sketch out a derivation of and explanation for some key features of relativity and quantum mechanics that suggest a unity between those two theories that is generally unappreciated. Our mode of explanation throughout will be of the principle as opposed to the constructive variety, in something like Einstein’s sense of those terms. We will argue throughout that a bias toward property dualism and a bias toward reductive dynamical and constructive explanation lead to the hard problem and the explanatory gap in consciousness studies, and to serious unresolved problems in fundamental physics, such as the measurement problem and the mystery of entanglement in quantum mechanics, as well as to the lack of progress in producing an empirically well-grounded theory of quantum gravity.
We hope to show that given our take on neutral monism and all that follows from it, the aforementioned problems can be satisfactorily resolved leaving us with a far more intuitive and commonsense model of the relationship between conscious experience and physics. Full article
(This article belongs to the Special Issue Models of Consciousness)

Open Access | Editor’s Choice | Article
Prediction and Variable Selection in High-Dimensional Misspecified Binary Classification
Entropy 2020, 22(5), 543; https://doi.org/10.3390/e22050543 - 13 May 2020
Cited by 2
Abstract
In this paper, we consider prediction and variable selection in misspecified binary classification models under the high-dimensional scenario. We focus on two approaches to classification which are computationally efficient but lead to model misspecification. The first is to apply penalized logistic regression to the classification data, which possibly do not follow the logistic model. The second method is even more radical: we treat class labels of objects as if they were numbers and apply penalized linear regression. We investigate these two approaches thoroughly and provide conditions which guarantee that they are successful in prediction and variable selection. Our results hold even if the number of predictors is much larger than the sample size. The paper concludes with experimental results. Full article
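The second, "radical" approach can be sketched as follows. The data, parameter values, and the ridge penalty here are illustrative assumptions, not the paper's specific setup: treat the ±1 labels as numbers, fit penalized least squares, and classify by the sign of the fitted value.

```python
import random

def ridge_linear_classifier(X, y, lam=0.1, lr=0.01, steps=2000):
    """Treat +/-1 class labels as numbers: fit ridge-penalized least squares
    by gradient descent, then classify by the sign of the linear fit."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [lam * w[j] for j in range(d)]          # penalty gradient
        for xi, yi in zip(X, y):
            r = sum(w[j] * xi[j] for j in range(d)) - yi
            for j in range(d):
                grad[j] += r * xi[j] / n               # squared-loss gradient
        w = [w[j] - lr * grad[j] for j in range(d)]
    return w

rng = random.Random(0)
# Labels depend on the first coordinate only; the second is irrelevant noise.
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(300)]
y = [1 if xi[0] > 0 else -1 for xi in X]
w = ridge_linear_classifier(X, y)
acc = sum((sum(wj * xj for wj, xj in zip(w, xi)) > 0) == (yi > 0)
          for xi, yi in zip(X, y)) / len(X)
# Even though the linear model is misspecified for the labels, sign(x . w)
# recovers the relevant direction and the noise coordinate gets a small weight.
```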
Open Access | Editor’s Choice | Article
Inferring What to Do (And What Not to)
Entropy 2020, 22(5), 536; https://doi.org/10.3390/e22050536 - 11 May 2020
Cited by 3
Abstract
In recent years, the “planning as inference” paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about “how I am going to act”. This paper provides an overview of the factors that contribute to this prior. These are rooted in optimal experimental design, information theory, and statistical decision making. We unpack how these factors imply a functional architecture for motivated behaviour. This raises an important question: how can we put this architecture to work in the service of understanding observed neurobiological structure? To answer this question, we draw from established techniques in experimental studies of behaviour. Typically, these examine the influence of perturbations of the nervous system—which include pathological insults or optogenetic manipulations—to see their influence on behaviour. Here, we argue that the message passing that emerges from inferring what to do can be similarly perturbed. If a given perturbation elicits the same behaviours as a focal brain lesion, this provides a functional interpretation of empirical findings and an anatomical grounding for theoretical results. We highlight examples of this approach that influence different sorts of goal-directed behaviour, active learning, and decision making. Finally, we summarise their implications for the neuroanatomy of inferring what to do (and what not to). Full article

Open Access | Editor’s Choice | Article
Non-Hermitian Hamiltonians and Quantum Transport in Multi-Terminal Conductors
Entropy 2020, 22(4), 459; https://doi.org/10.3390/e22040459 - 17 Apr 2020
Abstract
We study the transport properties of multi-terminal Hermitian structures within the non-equilibrium Green’s function formalism in a tight-binding approximation. We show that non-Hermitian Hamiltonians naturally appear in the description of coherent tunneling and are indispensable for the derivation of a general compact expression for the lead-to-lead transmission coefficients of an arbitrary multi-terminal system. This expression can be easily analyzed, and a robust set of conditions for finding zero and unity transmissions (even in the presence of extra electrodes) can be formulated. Using the proposed formalism, a detailed comparison between three- and two-terminal systems is performed, and it is shown, in particular, that transmission at bound states in the continuum does not change with the third electrode insertion. The main conclusions are illustrated with some three-terminal toy models. For instance, the influence of the tunneling coupling to the gate electrode is discussed for a model of a quantum interference transistor. The results of this paper will be of high interest, in particular, within the field of quantum design of molecular electronic devices. Full article
(This article belongs to the Special Issue Quantum Dynamics with Non-Hermitian Hamiltonians)
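How a non-Hermitian effective Hamiltonian arises can be seen already in the simplest case: a single level attached to two wide-band leads. This is a textbook illustration, not the paper's model. The lead self-energies give the level the complex energy ε₀ − i(Γ_L + Γ_R)/2, and a Caroli-type formula yields the lead-to-lead transmission.

```python
def transmission(E, eps0=0.0, gamma_L=0.2, gamma_R=0.2):
    """Transmission through one level coupled to two wide-band leads.
    The self-energies make the effective Hamiltonian non-Hermitian:
    H_eff = eps0 - 1j*(gamma_L + gamma_R)/2, and T = Gamma_L Gamma_R |G|^2."""
    G = 1.0 / (E - eps0 + 1j * (gamma_L + gamma_R) / 2)  # retarded Green's function
    return gamma_L * gamma_R * abs(G) ** 2

t_res = transmission(0.0)   # on resonance with symmetric coupling: T = 1
t_off = transmission(5.0)   # far off resonance: T ~ 0
t_asym = transmission(0.0, gamma_L=0.1, gamma_R=0.3)  # asymmetry lowers the peak
```

The unity transmission at resonance with symmetric couplings, and its suppression under asymmetry, mirror the kind of zero/unity transmission conditions the abstract refers to.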

Open Access | Editor’s Choice | Article
Towards a Unified Theory of Learning and Information
Entropy 2020, 22(4), 438; https://doi.org/10.3390/e22040438 - 13 Apr 2020
Cited by 1
Abstract
In this paper, we introduce the notion of “learning capacity” for algorithms that learn from data, which is analogous to the Shannon channel capacity for communication systems. We show how “learning capacity” bridges the gap between statistical learning theory and information theory, and we will use it to derive generalization bounds for finite hypothesis spaces, differential privacy, and countable domains, among others. Moreover, we prove that under the Axiom of Choice, the existence of an empirical risk minimization (ERM) rule that has a vanishing learning capacity is equivalent to the assertion that the hypothesis space has a finite Vapnik–Chervonenkis (VC) dimension, thus establishing an equivalence relation between two of the most fundamental concepts in statistical learning theory and information theory. In addition, we show how the learning capacity of an algorithm provides important qualitative results, such as on the relation between generalization and algorithmic stability, information leakage, and data processing. Finally, we conclude by listing some open problems and suggesting future directions of research. Full article

Open Access | Editor’s Choice | Article
Measuring the Tangle of Three-Qubit States
Entropy 2020, 22(4), 436; https://doi.org/10.3390/e22040436 - 11 Apr 2020
Cited by 4
Abstract
We present a quantum circuit that transforms an unknown three-qubit state into its canonical form, up to relative phases, given many copies of the original state. The circuit is made of three single-qubit parametrized quantum gates, and the optimal values for the parameters are learned in a variational fashion. Once this transformation is achieved, direct measurement of outcome probabilities in the computational basis provides an estimate of the tangle, which quantifies genuine tripartite entanglement. We perform simulations on a set of random states under different noise conditions to assess the validity of the method. Full article
(This article belongs to the Section Quantum Information)
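For reference, the quantity being estimated, the three-tangle, is the standard Coffman–Kundu–Wootters expression: four times the modulus of Cayley's 2×2×2 hyperdeterminant of the state amplitudes. A direct pure-Python evaluation (a check on the definition, not the paper's variational procedure):

```python
def tangle(c):
    """Three-tangle tau = 4 |Det(c)| via Cayley's 2x2x2 hyperdeterminant.
    c maps bit-triples (i, j, k) to amplitudes; missing keys are zero."""
    a = lambda i, j, k: c.get((i, j, k), 0.0)
    d1 = (a(0,0,0)**2 * a(1,1,1)**2 + a(0,0,1)**2 * a(1,1,0)**2
          + a(0,1,0)**2 * a(1,0,1)**2 + a(1,0,0)**2 * a(0,1,1)**2)
    d2 = (a(0,0,0)*a(1,1,1)*a(0,1,1)*a(1,0,0)
          + a(0,0,0)*a(1,1,1)*a(1,0,1)*a(0,1,0)
          + a(0,0,0)*a(1,1,1)*a(1,1,0)*a(0,0,1)
          + a(0,1,1)*a(1,0,0)*a(1,0,1)*a(0,1,0)
          + a(0,1,1)*a(1,0,0)*a(1,1,0)*a(0,0,1)
          + a(1,0,1)*a(0,1,0)*a(1,1,0)*a(0,0,1))
    d3 = (a(0,0,0)*a(1,1,0)*a(1,0,1)*a(0,1,1)
          + a(1,1,1)*a(0,0,1)*a(0,1,0)*a(1,0,0))
    return 4.0 * abs(d1 - 2.0 * d2 + 4.0 * d3)

s = 2 ** -0.5
ghz = {(0,0,0): s, (1,1,1): s}                            # genuinely tripartite
w_state = {(0,0,1): 3**-0.5, (0,1,0): 3**-0.5, (1,0,0): 3**-0.5}
```

The GHZ state has unit tangle, while the W state, despite being entangled, has zero tangle, which is exactly why the tangle isolates the GHZ-type class of genuine tripartite entanglement.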

Open Access | Editor’s Choice | Article
Sensitivity Analysis of Selected Parameters in the Order Picking Process Simulation Model, with Randomly Generated Orders
Entropy 2020, 22(4), 423; https://doi.org/10.3390/e22040423 - 09 Apr 2020
Cited by 8
Abstract
Sensitivity analysis of selected parameters in simulation models of logistics facilities is one of the key aspects in the functioning of self-conscious and efficient management. In order to develop simulation models adequate to the processes of real logistics facilities, it is important to input actual data connected to material flows at the entry to the models, whereas most models assume unified load units by default. To provide such data, pseudorandom number generators (PRNGs) are used. The original generator described in the paper was employed to generate picking lists for the order picking process (OPP). This ensures a hypothetical, yet close-to-reality, process in terms of unpredictable customers’ orders. Models with applied PRNGs ensure a more detailed and more understandable representation of OPPs in comparison to analytical models. Therefore, the author’s motivation was to present the original model as a tool for enterprise managers who might control the OPP and the devices and means of transport employed therein. The outcomes and implications of the contribution concern selected possibilities in OPP analyses which might be developed and solved within the model. The presented model has some limitations. One of them is the assumption that only one means of transport per aisle is taken into consideration. Another is the indirect randomization of certain model parameters. Full article
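A picking-list generator of the kind described can be sketched as follows. All layout parameters and field names here are illustrative assumptions, not values taken from the paper: each randomly generated order is a list of (aisle, slot, quantity) picking lines, sorted by aisle to reflect the one-means-of-transport-per-aisle assumption.

```python
import random

def generate_picking_lists(rng, n_orders, aisles, slots_per_aisle,
                           min_lines=1, max_lines=8):
    """Generate randomized picking lists: each order is a list of
    (aisle, slot, quantity) lines drawn from a uniform PRNG."""
    orders = []
    for _ in range(n_orders):
        n_lines = rng.randint(min_lines, max_lines)
        order = [(rng.randint(1, aisles),
                  rng.randint(1, slots_per_aisle),
                  rng.randint(1, 5))
                 for _ in range(n_lines)]
        # One means of transport per aisle: visit aisles in sequence.
        order.sort(key=lambda line: line[0])
        orders.append(order)
    return orders

rng = random.Random(42)
orders = generate_picking_lists(rng, n_orders=100, aisles=6, slots_per_aisle=40)
```

Feeding such lists into a simulation model in place of unified load units is what lets the OPP model exercise unpredictable customer-order structure, as the abstract describes.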

Open Access | Editor’s Choice | Article
On Geometry of Information Flow for Causal Inference
Entropy 2020, 22(4), 396; https://doi.org/10.3390/e22040396 - 30 Mar 2020
Abstract
Causal inference is perhaps one of the most fundamental concepts in science, beginning originally in the works of some of the ancient philosophers, through today, and woven strongly into current work from statisticians, machine learning experts, and scientists from many other fields. This paper takes the perspective of information flow, which includes the Nobel-prize-winning work on Granger causality and the recently highly popular transfer entropy, these being probabilistic in nature. Our main contribution will be to develop analysis tools that allow a geometric interpretation of information flow as a causal inference indicated by positive transfer entropy. We will describe the effective dimensionality of an underlying manifold as projected into the outcome space that summarizes information flow. Therefore, contrasting the probabilistic and geometric perspectives, we will introduce a new measure of causal inference based on the fractal correlation dimension conditionally applied to competing explanations of future forecasts, which we will write GeoC_{y→x}. This avoids some of the boundedness issues that we show exist for the transfer entropy, T_{y→x}. We will highlight our discussions with data developed from synthetic models of successively more complex nature: these include the Hénon map example, and finally a real physiological example relating breathing and heart rate function. Full article
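The probabilistic side of the comparison, the transfer entropy T_{y→x}, can be sketched with a plug-in estimator on discretized series. This is an illustrative reconstruction with history length 1, not the authors' code: T_{y→x} = Σ p(x₁, x₀, y₀) log₂[ p(x₁ | x₀, y₀) / p(x₁ | x₀) ].

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (bits) of T_{y->x} for discrete series, history 1."""
    triples = list(zip(x[1:], x[:-1], y[:-1]))   # (x_next, x_now, y_now)
    n = len(triples)
    p_xxy = Counter(triples)
    p_xy = Counter((x0, y0) for _, x0, y0 in triples)
    p_xx = Counter((x1, x0) for x1, x0, _ in triples)
    p_x = Counter(x0 for _, x0, _ in triples)
    te = 0.0
    for (x1, x0, y0), c in p_xxy.items():
        p_cond_joint = c / p_xy[(x0, y0)]        # p(x1 | x0, y0)
        p_cond_marg = p_xx[(x1, x0)] / p_x[x0]   # p(x1 | x0)
        te += (c / n) * math.log2(p_cond_joint / p_cond_marg)
    return te

rng = random.Random(0)
y = [rng.randint(0, 1) for _ in range(5000)]
# x copies y with a one-step lag plus noise, so information flows y -> x.
x = [0] + [yi if rng.random() < 0.9 else 1 - yi for yi in y[:-1]]

te_yx = transfer_entropy(x, y)   # clearly positive: y drives x
te_xy = transfer_entropy(y, x)   # reverse direction: near zero
```

The asymmetry te_yx ≫ te_xy is the positive-transfer-entropy signature of directed information flow that the paper's geometric measure is designed to complement.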

Open Access | Editor’s Choice | Article
Spectral Structure and Many-Body Dynamics of Ultracold Bosons in a Double-Well
Entropy 2020, 22(4), 382; https://doi.org/10.3390/e22040382 - 26 Mar 2020
Cited by 1
Abstract
We examine the spectral structure and many-body dynamics of two and three repulsively interacting bosons trapped in a one-dimensional double-well, for variable barrier height, inter-particle interaction strength, and initial conditions. By exact diagonalization of the many-particle Hamiltonian, we specifically explore the dynamical behavior of the particles launched either at the single-particle ground state or saddle-point energy, in a time-independent potential. We complement these results by a characterization of the cross-over from diabatic to quasi-adiabatic evolution under finite-time switching of the potential barrier, via the associated time evolution of a single particle’s von Neumann entropy. This is achieved with the help of the multiconfigurational time-dependent Hartree method for indistinguishable particles (MCTDH-X)—which also allows us to extrapolate our results for increasing particle numbers. Full article
(This article belongs to the Special Issue Quantum Entropies and Complexity)

Open Access | Editor’s Choice | Article
Endoreversible Modeling of a Hydraulic Recuperation System
Entropy 2020, 22(4), 383; https://doi.org/10.3390/e22040383 - 26 Mar 2020
Cited by 12
Abstract
Hybrid drive systems able to recover and reuse the braking energy of a vehicle can reduce fuel consumption, air pollution, and operating costs. Among them, hydraulic recuperation systems are particularly suitable for commercial vehicles, especially if they are already equipped with a hydraulic system. Thus far, the investigation of such systems has been limited to individual components or to optimizing their control. In this paper, we focus on thermodynamic effects and their impact on the overall system’s energy-saving potential, using endoreversible thermodynamics as the ideal framework for modeling. The dynamical behavior of the hydraulic recuperation system as well as its energy savings are estimated using real data from a vehicle suitable for the application. Here, energy savings of around 10% when accelerating the vehicle and a reduction of around 58% in the energy transferred to the conventional disc brakes are predicted. We further vary certain design and loss parameters—such as accumulator volume, displacement of the hydraulic unit, heat transfer coefficients, or pipe diameter—and discuss their influence on the energy-saving potential of the system. It turns out that the heat transfer coefficients and pipe diameter are of less importance than the accumulator volume and the displacement of the hydraulic unit. Full article
(This article belongs to the Section Thermodynamics)

Open Access | Editor’s Choice | Article
Theory, Analysis, and Applications of the Entropic Lattice Boltzmann Model for Compressible Flows
Entropy 2020, 22(3), 370; https://doi.org/10.3390/e22030370 - 24 Mar 2020
Cited by 2
Abstract
The entropic lattice Boltzmann method for the simulation of compressible flows is studied in detail and new opportunities for extending its operating range are explored. We address limitations on the maximum Mach number and temperature range allowed for a given lattice. Solutions to both these problems are presented by modifying the original lattices without increasing the number of discrete velocities and without altering the numerical algorithm. In order to increase the Mach number, we employ shifted lattices, while the magnitude of lattice speeds is increased in order to extend the temperature range. Accuracy and efficiency of the shifted lattices are demonstrated with simulations of the supersonic flow field around diamond-shaped and NACA0012 airfoils, the subsonic, transonic, and supersonic flow field around the Busemann biplane, and the interaction of vortices with a planar shock wave. For the lattices with extended temperature range, the model is validated with the simulation of the Richtmyer–Meshkov instability. We also discuss some key ideas of how to reduce the number of discrete speeds in three-dimensional simulations by pruning of the higher-order lattices, and introduce a new construction of the corresponding guided equilibrium by entropy minimization. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)

Open Access | Editor’s Choice | Article
BiEntropy, TriEntropy and Primality
Entropy 2020, 22(3), 311; https://doi.org/10.3390/e22030311 - 10 Mar 2020
Abstract
The order and disorder of binary representations of the natural numbers < 2^8 is measured using the BiEntropy function. Significant differences are detected between the primes and the non-primes. The BiEntropic prime density is shown to be quadratic with a very small Gaussian distributed error. The work is repeated in binary using a Monte Carlo simulation of a sample of natural numbers < 2^32 and in trinary for all natural numbers < 3^9 with similar but cubic results. We found a significant relationship between BiEntropy and TriEntropy such that we can discriminate between the primes and numbers divisible by six. We discuss the theoretical basis of these results and show how they generalise to give a tight bound on the variance of π(x) − Li(x) for all x. This bound is much tighter than the bound given by von Koch in 1901 as an equivalence for a proof of the Riemann Hypothesis. Since the primes are Gaussian due to a simple induction on the binary derivative, this implies that the twin primes conjecture is true. We also provide absolutely convergent asymptotes for the numbers of Fermat and Mersenne primes in the appendices. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Graphical abstract
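For readers unfamiliar with the BiEntropy function, here is a short sketch following Croll's weighted binary-derivative definition (the paper's exact conventions may differ in detail):

```python
from math import log2

def h(p):
    """Binary Shannon entropy with the 0*log(0) = 0 convention."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bientropy(bits):
    """BiEntropy of a bit string: binary entropies of the successive
    binary derivatives, weighted by 2**k and normalised (a sketch of
    Croll's definition)."""
    n = len(bits)
    total, norm = 0.0, 0.0
    s = list(bits)
    for k in range(n - 1):              # k-th binary derivative
        p = sum(s) / len(s)             # proportion of ones
        total += h(p) * 2 ** k
        norm += 2 ** k
        s = [a ^ b for a, b in zip(s, s[1:])]   # next derivative
    return total / norm

print(round(bientropy([0, 1, 0, 1]), 3))
```

A perfectly periodic string such as 0101 scores low (its higher derivatives are constant), while irregular strings score near 1.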

Open AccessFeature PaperEditor’s ChoiceArticle
Two Faced Janus of Quantum Nonlocality
Entropy 2020, 22(3), 303; https://doi.org/10.3390/e22030303 - 06 Mar 2020
Cited by 8
Abstract
This paper is a new step towards understanding why “quantum nonlocality” is a misleading concept. Metaphorically speaking, “quantum nonlocality” is Janus faced. One face is an apparent nonlocality of the Lüders projection and another face is Bell nonlocality (a wrong conclusion that the [...] Read more.
This paper is a new step towards understanding why “quantum nonlocality” is a misleading concept. Metaphorically speaking, “quantum nonlocality” is Janus faced. One face is an apparent nonlocality of the Lüders projection and another face is Bell nonlocality (a wrong conclusion that the violation of Bell type inequalities implies the existence of mysterious instantaneous influences between distant physical systems). According to the Lüders projection postulate, a quantum measurement performed on one of the two distant entangled physical systems modifies their compound quantum state instantaneously. Therefore, if the quantum state is considered to be an attribute of the individual physical system and if one assumes that experimental outcomes are produced in a perfectly random way, one quickly arrives at a contradiction. It is a primary source of speculations about a spooky action at a distance. Bell nonlocality as defined above was explained and rejected by several authors; thus, we concentrate in this paper on the apparent nonlocality of the Lüders projection. As already pointed out by Einstein, the quantum paradoxes disappear if one adopts the purely statistical interpretation of quantum mechanics (QM). In the statistical interpretation of QM, if probabilities are considered to be objective properties of random experiments, we show that the Lüders projection corresponds to the passage from joint probabilities describing the full set of data to some marginal conditional probabilities describing particular subsets of data. If one adopts a subjective interpretation of probabilities, such as QBism, then the Lüders projection corresponds to standard Bayesian updating of the probabilities. The latter represents degrees of belief of local agents about outcomes of individual measurements which are placed or which will be placed at distant locations. In both approaches, the probability transformation does not happen in physical space, but only in information space.
Thus, all speculations about spooky interactions or spooky predictions at a distance are simply misleading. Coming back to Bell nonlocality, we recall that in a recent paper we demonstrated, using exclusively the quantum formalism, that CHSH inequalities may be violated for some quantum states only because of the incompatibility of quantum observables and Bohr’s complementarity. Finally, we explain that our criticism of quantum nonlocality is in the spirit of Hertz-Boltzmann methodology of scientific theories. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
Exploring the Phase Space of Multi-Principal-Element Alloys and Predicting the Formation of Bulk Metallic Glasses
Entropy 2020, 22(3), 292; https://doi.org/10.3390/e22030292 - 02 Mar 2020
Cited by 1
Abstract
Multi-principal-element alloys share a set of thermodynamic and structural parameters that, in their range of adopted values, correlate to the tendency of the alloys to assume a solid solution, whether as a crystalline or an amorphous phase. Based on empirical correlations, this work [...] Read more.
Multi-principal-element alloys share a set of thermodynamic and structural parameters that, in their range of adopted values, correlate to the tendency of the alloys to assume a solid solution, whether as a crystalline or an amorphous phase. Based on empirical correlations, this work presents a computational method for the prediction of possible glass-forming compositions for a chosen alloy system as well as the calculation of their critical cooling rates. The obtained results compare well to experimental data for Pd-Ni-P, micro-alloyed Pd-Ni-P, Cu-Mg-Ca, and Cu-Zr-Ti. Furthermore, a random-number-generator-based algorithm is employed to explore glass-forming candidate alloys with a minimum critical cooling rate, reducing the number of data points necessary to find suitable glass-forming compositions. A comparison with experimental results for the quaternary Ti-Zr-Cu-Ni system shows a promising overlap of calculation and experiment, implying that it is a reasonable method to find candidates for glass-forming alloys with a sufficiently low critical cooling rate to allow the formation of bulk metallic glasses. Full article
(This article belongs to the Special Issue Crystallization Thermodynamics)
Show Figures

Graphical abstract

Open AccessEditor’s ChoiceArticle
Deng Entropy Weighted Risk Priority Number Model for Failure Mode and Effects Analysis
Entropy 2020, 22(3), 280; https://doi.org/10.3390/e22030280 - 28 Feb 2020
Cited by 10
Abstract
Failure mode and effects analysis (FMEA), as a commonly used risk management method, has been extensively applied to the engineering domain. A vital parameter in FMEA is the risk priority number (RPN), which is the product of occurrence (O), severity (S), and detection [...] Read more.
Failure mode and effects analysis (FMEA), as a commonly used risk management method, has been extensively applied to the engineering domain. A vital parameter in FMEA is the risk priority number (RPN), which is the product of occurrence (O), severity (S), and detection (D) of a failure mode. To deal with the uncertainty in the assessments given by domain experts, a novel Deng entropy weighted risk priority number (DEWRPN) for FMEA is proposed in the framework of Dempster–Shafer evidence theory (DST). DEWRPN takes into consideration the relative importance of both risk factors and FMEA experts. The uncertain degree of the objective assessments coming from experts is measured by the Deng entropy. An expert’s weight comprises the three risk factors’ weights obtained independently from the expert’s assessments. In DEWRPN, the strategy of assigning a weight to each expert is flexible and compatible with the real decision-making situation. The entropy-based relative weight symbolizes the relative importance. In detail, the higher the uncertain degree of a risk factor from an expert is, the lower the weight of the corresponding risk factor will be, and vice versa. We utilize Deng entropy to construct the exponential weight of each risk factor as well as an expert’s relative importance on an FMEA item in a state-of-the-art way. A case study is adopted to verify the practicability and effectiveness of the proposed model. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
Show Figures

Figure 1
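The abstract's two ingredients, the classical RPN product and the Deng entropy of a basic probability assignment, can be sketched as follows (an illustrative reading of the standard formulas, not the authors' DEWRPN code):

```python
from math import log2

def rpn(o, s, d):
    """Classical risk priority number: product of occurrence,
    severity and detection ratings, as stated in the abstract."""
    return o * s * d

def deng_entropy(m):
    """Deng entropy of a basic probability assignment, where m maps
    frozensets of hypotheses to masses: a sketch of the standard
    formula E_d = -sum m(A) log2( m(A) / (2**|A| - 1) )."""
    return -sum(mass * log2(mass / (2 ** len(A) - 1))
                for A, mass in m.items() if mass > 0)

# Mass on a composite set carries extra non-specificity, so the Deng
# entropy exceeds the Shannon entropy of the same mass values.
m = {frozenset({'a'}): 0.5, frozenset({'b', 'c'}): 0.5}
print(rpn(3, 7, 5), round(deng_entropy(m), 3))
```

Deng entropy reduces to Shannon entropy when all mass sits on singletons, which is why it serves as an uncertainty measure for expert assessments in DST.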

Open AccessEditor’s ChoiceArticle
Maxwell’s Demon in Quantum Mechanics
Entropy 2020, 22(3), 269; https://doi.org/10.3390/e22030269 - 27 Feb 2020
Cited by 1
Abstract
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counter-example. Since the Second Law is taken by many to provide an arrow [...] Read more.
Maxwell’s Demon is a thought experiment devised by J. C. Maxwell in 1867 in order to show that the Second Law of thermodynamics is not universal, since it has a counter-example. Since the Second Law is taken by many to provide an arrow of time, the threat to its universality threatens the account of temporal directionality as well. Various attempts to “exorcise” the Demon, by proving that it is impossible for one reason or another, have been made throughout the years, but none of them were successful. We have shown (in a number of publications) by a general state-space argument that Maxwell’s Demon is compatible with classical mechanics, and that the most recent solutions, based on Landauer’s thesis, are not general. In this paper we demonstrate that Maxwell’s Demon is also compatible with quantum mechanics. We do so by analyzing a particular (but highly idealized) experimental setup and proving that it violates the Second Law. Our discussion is in the framework of standard quantum mechanics; we give two separate arguments in the framework of quantum mechanics with and without the projection postulate. We address in our analysis the connection between measurement and erasure interactions and we show how these notions are applicable in the microscopic quantum mechanical structure. We discuss what might be the quantum mechanical counterpart of the classical notion of “macrostates”, thus explaining why our Quantum Demon setup works not only at the micro level but also at the macro level, properly understood. One implication of our analysis is that the Second Law cannot provide a universal lawlike basis for an account of the arrow of time; this account has to be sought elsewhere. Full article
(This article belongs to the Special Issue Time and Entropy)
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
Entropy of Conduction Electrons from Transport Experiments
Entropy 2020, 22(2), 244; https://doi.org/10.3390/e22020244 - 21 Feb 2020
Cited by 1
Abstract
The entropy of conduction electrons was evaluated utilizing the thermodynamic definition of the Seebeck coefficient as a tool. This analysis was applied to two different kinds of scientific questions that can—if at all—be only partially addressed by other methods. These are the field-dependence [...] Read more.
The entropy of conduction electrons was evaluated utilizing the thermodynamic definition of the Seebeck coefficient as a tool. This analysis was applied to two different kinds of scientific questions that can—if at all—be only partially addressed by other methods. These are the field-dependence of meta-magnetic phase transitions and the electronic structure in strongly disordered materials, such as alloys. We showed that the electronic entropy change in meta-magnetic transitions is not constant with the applied magnetic field, as is usually assumed. Furthermore, we traced the evolution of the electronic entropy with respect to the chemical composition of an alloy series. Insights about the strength and kind of interactions appearing in the exemplary materials can be identified in the experiments. Full article
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
Thermodynamic and Transport Properties of Equilibrium Debye Plasmas
Entropy 2020, 22(2), 237; https://doi.org/10.3390/e22020237 - 20 Feb 2020
Cited by 1
Abstract
The thermodynamic and transport properties of weakly non-ideal, high-density partially ionized hydrogen plasma are investigated, accounting for quantum effects due to the change in the energy spectrum of atomic hydrogen when the electron–proton interaction is considered embedded in the surrounding particles. The complexity [...] Read more.
The thermodynamic and transport properties of weakly non-ideal, high-density partially ionized hydrogen plasma are investigated, accounting for quantum effects due to the change in the energy spectrum of atomic hydrogen when the electron–proton interaction is considered embedded in the surrounding particles. The complexity of the rigorous approach led to the development of simplified models, able to include the neighbor-effects on the isolated system while remaining consistent with the traditional thermodynamic approach. High-density conditions have been simulated assuming particle interactions described by a screened Coulomb potential. Full article
(This article belongs to the Special Issue Simulation with Entropy Thermodynamics)
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
Global Geometry of Bayesian Statistics
Entropy 2020, 22(2), 240; https://doi.org/10.3390/e22020240 - 20 Feb 2020
Abstract
In the previous work of the author, a non-trivial symmetry of the relative entropy in the information geometry of normal distributions was discovered. The same symmetry also appears in the symplectic/contact geometry of Hilbert modular cusps. Further, it was observed that a contact [...] Read more.
In the previous work of the author, a non-trivial symmetry of the relative entropy in the information geometry of normal distributions was discovered. The same symmetry also appears in the symplectic/contact geometry of Hilbert modular cusps. Further, it was observed that a contact Hamiltonian flow presents a certain Bayesian inference on normal distributions. In this paper, we describe Bayesian statistics and the information geometry in the language of current geometry in order to spread interest in statistics among general geometers and topologists. Then, we foliate the space of multivariate normal distributions by symplectic leaves to generalize the above result of the author. This foliation arises from the Cholesky decomposition of the covariance matrices. Full article
(This article belongs to the Special Issue Information Geometry III)
Show Figures

Graphical abstract

Open AccessEditor’s ChoiceArticle
Entropy, Information, and Symmetry; Ordered Is Symmetrical, II: System of Spins in the Magnetic Field
Entropy 2020, 22(2), 235; https://doi.org/10.3390/e22020235 - 19 Feb 2020
Cited by 2
Abstract
The second part of this paper develops an approach suggested in Entropy 2020, 22(1), 11, which relates ordering in physical systems to symmetrizing. Entropy is frequently interpreted as a quantitative measure of “chaos” or “disorder”. However, the notions of “chaos” and [...] Read more.
The second part of this paper develops an approach suggested in Entropy 2020, 22(1), 11, which relates ordering in physical systems to symmetrizing. Entropy is frequently interpreted as a quantitative measure of “chaos” or “disorder”. However, the notions of “chaos” and “disorder” are vague and subjective, to a great extent. This leads to numerous misinterpretations of entropy. We propose that disorder is viewed as an absence of symmetry and identify “ordering” with symmetrizing of a physical system; in other words, introducing the elements of symmetry into an initially disordered physical system. We explore the initially disordered system of elementary magnets subjected to the external magnetic field H. Imposing symmetry restrictions diminishes the entropy of the system and decreases its temperature. The general case of the system of elementary magnets demonstrating j-fold symmetry is studied. An interrelation between T and T_j takes place, where T and T_j are the temperatures of the non-symmetrized and j-fold-symmetrized systems of the magnets, respectively. Full article
(This article belongs to the Section Statistical Physics)
Show Figures

Figure 1
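The entropy decrease under an applied field can be illustrated with the textbook two-level paramagnet (a generic sketch, not the paper's j-fold symmetrization): with x = μH/(k_B T), the per-spin entropy is S/k_B = ln(2 cosh x) − x tanh x, which equals ln 2 at zero field and falls as the field orders the magnets.

```python
from math import cosh, tanh, log

def spin_entropy(x):
    """Dimensionless entropy S/k_B per elementary magnet in a field,
    with x = mu*H/(k_B*T). Textbook two-level result: at x = 0 the
    entropy is ln 2; a stronger field orders the spins and lowers it."""
    return log(2 * cosh(x)) - x * tanh(x)

print(round(spin_entropy(0.0), 4), round(spin_entropy(2.0), 4))
```

This is the same qualitative statement as the abstract: imposing order (here, field alignment rather than symmetrization) diminishes the entropy of the spin system.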

Open AccessEditor’s ChoiceCommunication
The Brevity Law as a Scaling Law, and a Possible Origin of Zipf’s Law for Word Frequencies
Entropy 2020, 22(2), 224; https://doi.org/10.3390/e22020224 - 17 Feb 2020
Cited by 3
Abstract
An important body of quantitative linguistics is constituted by a series of statistical laws about language usage. Despite the importance of these linguistic laws, some of them are poorly formulated, and, more importantly, there is no unified framework that encompasses all of them. This [...] Read more.
An important body of quantitative linguistics is constituted by a series of statistical laws about language usage. Despite the importance of these linguistic laws, some of them are poorly formulated, and, more importantly, there is no unified framework that encompasses all of them. This paper presents a new perspective to establish a connection between different statistical linguistic laws. Characterizing each word type by two random variables—length (in number of characters) and absolute frequency—we show that the corresponding bivariate joint probability distribution shows a rich and precise phenomenology, with the type-length and the type-frequency distributions as its two marginals, and the conditional distribution of frequency at fixed length providing a clear formulation for the brevity-frequency phenomenon. The type-length distribution turns out to be well fitted by a gamma distribution (much better than with the previously proposed lognormal), and the conditional frequency distributions at fixed length display power-law-decay behavior with a fixed exponent α ≈ 1.4 and a characteristic-frequency crossover that scales as an inverse power δ ≈ 2.8 of length, which implies the fulfillment of a scaling law analogous to those found in the thermodynamics of critical phenomena. As a by-product, we find a possible model-free explanation for the origin of Zipf’s law, which should arise as a mixture of conditional frequency distributions governed by the crossover length-dependent frequency. Full article
(This article belongs to the Special Issue Information Theory and Language)
Show Figures

Figure 1
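As a hedged illustration of fitting the type-length marginal with a gamma distribution, a method-of-moments estimate can be computed in a few lines (the data below are invented for illustration; the paper uses proper corpus fits):

```python
# Method-of-moments gamma fit: match the sample mean and variance,
# giving shape = mean**2 / var and scale = var / mean.

def gamma_moments(lengths):
    """Return (shape, scale) of a gamma distribution matched to the
    sample mean and (population) variance of the data."""
    n = len(lengths)
    mean = sum(lengths) / n
    var = sum((x - mean) ** 2 for x in lengths) / n
    return mean ** 2 / var, var / mean

# Hypothetical word-type lengths (characters)
lengths = [2, 3, 3, 4, 4, 4, 5, 5, 6, 7, 8, 10]
shape, scale = gamma_moments(lengths)
print(round(shape, 2), round(scale, 2))
```

By construction, the fitted distribution reproduces the sample mean exactly (shape × scale = mean), which makes this a quick sanity check before a full maximum-likelihood fit.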

Open AccessEditor’s ChoiceArticle
Generalised Measures of Multivariate Information Content
Entropy 2020, 22(2), 216; https://doi.org/10.3390/e22020216 - 14 Feb 2020
Cited by 3
Abstract
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted [...] Read more.
The entropy of a pair of random variables is commonly depicted using a Venn diagram. This representation is potentially misleading, however, since the multivariate mutual information can be negative. This paper presents new measures of multivariate information content that can be accurately depicted using Venn diagrams for any number of random variables. These measures complement the existing measures of multivariate mutual information and are constructed by considering the algebraic structure of information sharing. It is shown that the distinct ways in which a set of marginal observers can share their information with a non-observing third party corresponds to the elements of a free distributive lattice. The redundancy lattice from partial information decomposition is then derived independently by combining the algebraic structures of joint and shared information content. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Graphical abstract
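The claim that multivariate mutual information can be negative is easy to verify numerically: for two uniform bits and their XOR, the inclusion–exclusion co-information I(X;Y;Z) comes out to −1 bit. A minimal sketch:

```python
from math import log2
from itertools import product

def H(joint):
    """Shannon entropy of a dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in joint.values() if p > 0)

# Joint distribution of (X, Y, Z) with X, Y uniform bits and Z = X ^ Y
pxyz = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def marginal(keep):
    """Marginalise the joint onto the coordinate indices in `keep`."""
    out = {}
    for xyz, p in pxyz.items():
        k = tuple(xyz[i] for i in keep)
        out[k] = out.get(k, 0) + p
    return out

# Inclusion-exclusion form of the co-information I(X;Y;Z)
I3 = (H(marginal((0,))) + H(marginal((1,))) + H(marginal((2,)))
      - H(marginal((0, 1))) - H(marginal((0, 2))) - H(marginal((1, 2)))
      + H(pxyz))
print(I3)   # -1.0 for the XOR distribution
```

A Venn diagram would have to assign this central region an area of −1, which is exactly the representational problem the paper addresses.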

Open AccessEditor’s ChoiceArticle
Finite-Time Thermodynamic Model for Evaluating Heat Engines in Ocean Thermal Energy Conversion
Entropy 2020, 22(2), 211; https://doi.org/10.3390/e22020211 - 13 Feb 2020
Cited by 17
Abstract
Ocean thermal energy conversion (OTEC) converts the thermal energy stored in the ocean temperature difference between warm surface seawater and cold deep seawater into electricity. The necessary temperature difference to drive OTEC heat engines is only 15–25 K, which will theoretically be of [...] Read more.
Ocean thermal energy conversion (OTEC) converts the thermal energy stored in the ocean temperature difference between warm surface seawater and cold deep seawater into electricity. The necessary temperature difference to drive OTEC heat engines is only 15–25 K, which theoretically yields low thermal efficiency. Research has been conducted to propose unique systems that can increase the thermal efficiency. This thermal efficiency is generally used as the system performance metric, and researchers have focused on using the higher available temperature difference of heat engines to improve this efficiency without considering the finite flow rate and sensible heat of seawater. In this study, our model shows a new concept of thermodynamics for OTEC. The first step is to define the transferable thermal energy in the OTEC as the equilibrium state and the dead state instead of the atmospheric condition. Second, the model shows the available maximum work, the new concept of exergy, by minimizing the entropy generation while considering external heat loss. The maximum thermal energy and exergy allow the normalization of the first-law and second-law thermal efficiencies. These evaluation methods can be applied to optimized OTEC systems and their effectiveness is confirmed. Full article
(This article belongs to the Special Issue Entropy in Renewable Energy Systems)
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
On the Irrationality of Being in Two Minds
Entropy 2020, 22(2), 174; https://doi.org/10.3390/e22020174 - 04 Feb 2020
Cited by 2
Abstract
This article presents a general framework that allows irrational decision making to be theoretically investigated and simulated. Rationality in human decision making under uncertainty is normatively prescribed by the axioms of probability theory in order to maximize utility. However, substantial literature from psychology [...] Read more.
This article presents a general framework that allows irrational decision making to be theoretically investigated and simulated. Rationality in human decision making under uncertainty is normatively prescribed by the axioms of probability theory in order to maximize utility. However, substantial literature from psychology and cognitive science shows that human decisions regularly deviate from these axioms. Bistable probabilities are proposed as a principled and straightforward means for modeling (ir)rational decision making, which occurs when a decision maker is in “two minds”. We show that bistable probabilities can be formalized by positive-operator-valued projections in quantum mechanics. We found that (1) irrational decision making necessarily involves a wider spectrum of causal relationships than rational decision making, (2) the accessible information turns out to be greater in irrational decision making when compared to rational decision making, and (3) irrational decision making is quantum-like because it violates the Bell–Wigner polytope. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
Show Figures

Graphical abstract

Open AccessEditor’s ChoiceArticle
Nonlinear Fokker–Planck Equation Approach to Systems of Interacting Particles: Thermostatistical Features Related to the Range of the Interactions
Entropy 2020, 22(2), 163; https://doi.org/10.3390/e22020163 - 31 Jan 2020
Cited by 3
Abstract
Nonlinear Fokker–Planck equations (NLFPEs) constitute useful effective descriptions of some interacting many-body systems. Important instances of these nonlinear evolution equations are closely related to the thermostatistics based on the S_q power-law entropic functionals. Most applications of the connection between the NLFPE and the S_q entropies have focused on systems interacting through short-range forces. In the present contribution, we revisit the NLFPE approach to interacting systems in order to clarify the role played by the range of the interactions, and to explore the possibility of developing similar treatments for systems with long-range interactions, such as those corresponding to Newtonian gravitation. In particular, we consider a system of particles interacting via forces following the inverse square law and performing overdamped motion, that is described by a density obeying an integro-differential evolution equation that admits exact time-dependent solutions of the q-Gaussian form. These q-Gaussian solutions, which constitute a signature of S_q-thermostatistics, evolve in a similar but not identical way to the solutions of an appropriate nonlinear, power-law Fokker–Planck equation. Full article
(This article belongs to the Special Issue Entropy and Gravitation)
Show Figures

Figure 1
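The q-Gaussian form mentioned above can be sketched via the q-deformed exponential (a standard construction; the parameters below are illustrative, not taken from the paper):

```python
from math import exp

def q_exponential(u, q):
    """q-deformed exponential e_q(u) = [1 + (1-q) u]_+ ** (1/(1-q)),
    which reduces to exp(u) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return exp(u)
    base = 1 + (1 - q) * u
    return base ** (1 / (1 - q)) if base > 0 else 0.0

def q_gaussian(x, q, beta=1.0):
    """Unnormalised q-Gaussian profile e_q(-beta * x**2)."""
    return q_exponential(-beta * x * x, q)

# As q -> 1 the profile approaches the ordinary Gaussian exp(-x**2);
# for q < 1 it has compact support (zero beyond a cutoff).
print(round(q_gaussian(1.0, 1.0001), 4), round(exp(-1.0), 4))
```

The compact support for q < 1 and the fat tails for q > 1 are what distinguish these solutions from the Gaussians generated by the linear Fokker–Planck equation.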

Open AccessEditor’s ChoiceArticle
Evolution of Neuroaesthetic Variables in Portrait Paintings throughout the Renaissance
Entropy 2020, 22(2), 146; https://doi.org/10.3390/e22020146 - 26 Jan 2020
Cited by 2
Abstract
To compose art, artists rely on a set of sensory evaluations performed fluently by the brain. The outcome of these evaluations, which we call neuroaesthetic variables, helps to compose art with high aesthetic value. In this study, we probed whether these variables varied [...] Read more.
To compose art, artists rely on a set of sensory evaluations performed fluently by the brain. The outcome of these evaluations, which we call neuroaesthetic variables, helps to compose art with high aesthetic value. In this study, we probed whether these variables varied across art periods despite relatively unvaried neural function. We measured several neuroaesthetic variables in portrait paintings from the Early and High Renaissance, and from Mannerism. The variables included symmetry, balance, and contrast (chiaroscuro), as well as intensity and spatial complexities measured by two forms of normalized entropy. The results showed that the degree of symmetry remained relatively constant during the Renaissance. However, the balance of portraits decayed abruptly at the end of the Early Renaissance, that is, at the closing of the 15th century. Intensity and spatial complexities, and thus entropies, of portraits also fell in a similar manner around the same time. Our data also showed that the decline of complexity and entropy could be attributed to the rise of chiaroscuro. With few exceptions, the values of aesthetic variables from the top artists of the Renaissance resembled those of their peers. We conclude that neuroaesthetic variables have the flexibility to change in the brains of artists (and observers). Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
Adapting Logic to Physics: The Quantum-Like Eigenlogic Program
Entropy 2020, 22(2), 139; https://doi.org/10.3390/e22020139 - 24 Jan 2020
Cited by 4
Abstract
Considering links between logic and physics is important because of the fast development of quantum information technologies in our everyday life. This paper discusses a new method in logic inspired from quantum theory using operators, named Eigenlogic. It expresses logical propositions using linear [...] Read more.
Considering links between logic and physics is important because of the fast development of quantum information technologies in our everyday life. This paper discusses a new method in logic inspired from quantum theory using operators, named Eigenlogic. It expresses logical propositions using linear algebra. Logical functions are represented by operators and logical truth tables correspond to the eigenvalue structure. It extends the possibilities of classical logic by changing the semantics from the Boolean binary alphabet {0, 1} using projection operators to the binary alphabet {+1, −1} employing reversible involution operators. Also, many-valued logical operators are synthesized, for whatever alphabet, using operator methods based on Lagrange interpolation and on the Cayley–Hamilton theorem. Considering a superposition of logical input states, one gets a fuzzy logic representation where the fuzzy membership function is the quantum probability given by the Born rule. Historical parallels from Boole, Post, Poincaré and Combinatory Logic are presented in relation to probability theory, non-commutative quaternion algebra and Turing machines. An extension to first order logic is proposed inspired by Grover’s algorithm. Eigenlogic is essentially a logic of operators and its truth-table logical semantics is provided by the eigenvalue structure which is shown to be related to the universality of logical quantum gates, a fundamental role being played by non-commutativity and entanglement. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
Show Figures

Figure 1
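The eigenvalue/truth-table correspondence at the heart of Eigenlogic can be sketched minimally (illustrative only; the paper develops the full operator algebra): for a two-input connective, the associated operator is diagonal in the computational basis and its eigenvalues are exactly the truth table.

```python
def truth_diagonal(connective):
    """Diagonal (eigenvalues) of the logical operator for a two-input
    connective, over basis states ordered 00, 01, 10, 11. With values
    in {0, 1} the operator is a projector."""
    return [connective(a, b) for a in (0, 1) for b in (0, 1)]

# AND as a projector: eigenvalues in {0, 1}
AND = truth_diagonal(lambda a, b: a & b)

# XOR in the {+1, -1} alphabet via t -> (-1)**t: eigenvalues in {+1, -1},
# so the operator is a reversible involution rather than a projector.
XOR_pm = [(-1) ** (a ^ b) for a in (0, 1) for b in (0, 1)]

print(AND, XOR_pm)
```

In matrix form, AND is diag(0, 0, 0, 1), i.e. the projector onto |11⟩; the {+1, −1} encoding squares to the identity, which is the "reversible involution" property named in the abstract.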

Open AccessEditor’s ChoiceArticle
Electric Double Layers with Surface Charge Regulation Using Density Functional Theory
Entropy 2020, 22(2), 132; https://doi.org/10.3390/e22020132 - 22 Jan 2020
Cited by 3
Abstract
Surprisingly, the local structure of electrolyte solutions in electric double layers is primarily determined by the solvent. This is initially unexpected as the solvent is usually a neutral species and not subject to dominant Coulombic interactions. Part of the solvent dominance in [...] Read more.
Surprisingly, the local structure of electrolyte solutions in electric double layers is primarily determined by the solvent. This is initially unexpected as the solvent is usually a neutral species and not subject to dominant Coulombic interactions. Part of the solvent dominance in determining the local structure is simply due to the much larger number of solvent molecules in a typical electrolyte solution. The dominant local packing of the solvent then creates a space left for the charged species. Our classical density functional theory work demonstrates that the solvent structural effect strongly couples to the surface chemistry, which governs the charge and potential. In this article, we address some outstanding questions relating to double-layer modeling. Firstly, we address the role of ion-ion correlations that go beyond mean-field correlations. Secondly, we consider the effects of a density-dependent dielectric constant, which is crucial in the description of an electrolyte-vapor interface. Full article
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
Eigenvalues of Two-State Quantum Walks Induced by the Hadamard Walk
Entropy 2020, 22(1), 127; https://doi.org/10.3390/e22010127 - 20 Jan 2020
Cited by 2
Abstract
Existence of the eigenvalues of the discrete-time quantum walks is deeply related to localization of the walks. We revealed, for the first time, the distributions of the eigenvalues given by the splitted generating function method (the SGF method) of the space-inhomogeneous quantum walks [...] Read more.
Existence of the eigenvalues of the discrete-time quantum walks is deeply related to localization of the walks. We revealed, for the first time, the distributions of the eigenvalues given by the splitted generating function method (the SGF method) of the space-inhomogeneous quantum walks in one dimension that we treated in our previous studies. In particular, we clarified the characteristic parameter dependence of the distributions of the eigenvalues with the aid of numerical simulation. Full article
(This article belongs to the Special Issue Quantum Walks and Related Issues)
Show Figures

Figure 1

Open AccessEditor’s ChoiceArticle
Energy and Exergy Evaluation of a Two-Stage Axial Vapour Compressor on the LNG Carrier
Entropy 2020, 22(1), 115; https://doi.org/10.3390/e22010115 - 17 Jan 2020
Cited by 4
Abstract
Data from a two-stage axial vapor cryogenic compressor on the dual-fuel diesel–electric (DFDE) liquefied natural gas (LNG) carrier were measured and analyzed to investigate compressor energy and exergy efficiency in real exploitation conditions. The running parameters of the two-stage compressor were collected while [...] Read more.
Data from a two-stage axial vapor cryogenic compressor on the dual-fuel diesel–electric (DFDE) liquefied natural gas (LNG) carrier were measured and analyzed to investigate compressor energy and exergy efficiency in real exploitation conditions. The running parameters of the two-stage compressor were collected while changing the main propeller shafts rpm. As the compressor supply of vaporized gas to the main engines increases, so does the load and rpm in propulsion electric motors, and vice versa. The results show that, when the main engine load varied from 46 to 56 rpm at the main propulsion shafts, the increased mass flow rate of vaporized LNG through the two-stage compressor influenced compressor performance. Compressor average energy efficiency is around 50%, while the exergy efficiency of the compressor is significantly lower in all measured ranges and on average is around 34%. The change in the ambient temperature from 0 to 50 °C also influences the compressor’s exergy efficiency. Higher exergy efficiency is achieved at lower ambient temperatures. As temperature increases, overall compressor exergy efficiency decreases by about 7% on average over the whole analyzed range. The proposed new concept of energy saving and increasing compressor efficiency, based on pre-cooling of the compressor second stage, is also analyzed. The temperature at the second stage was varied in the range from 0 to −50 °C, which results in power savings of up to 26 kW for optimal running regimes. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
Show Figures

Figure 1
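The ambient-temperature dependence reported above follows directly from the definition of exergy efficiency. Below is a minimal sketch contrasting the energy (isentropic) and exergy efficiency of a compressor stage; all enthalpy and entropy values are illustrative placeholders, not measurements from the paper.

```python
# Minimal sketch of compressor energy vs. exergy efficiency.
# All state values are illustrative placeholders, not data from the paper.

def isentropic_efficiency(h_in, h_out, h_out_s):
    """Energy (isentropic) efficiency: ideal work over actual work."""
    return (h_out_s - h_in) / (h_out - h_in)

def exergy_efficiency(h_in, h_out, s_in, s_out, T0):
    """Exergy efficiency: exergy increase of the gas over actual work.
    T0 is the ambient (dead-state) temperature in kelvin."""
    w_actual = h_out - h_in                      # kJ/kg
    dex = (h_out - h_in) - T0 * (s_out - s_in)   # kJ/kg
    return dex / w_actual

# Hypothetical vaporized-LNG states (kJ/kg, kJ/(kg K))
h1, h2, h2s = 720.0, 950.0, 835.0
s1, s2 = 4.60, 4.95

for T0 in (273.15, 323.15):   # 0 °C and 50 °C ambient
    print(f"T0={T0 - 273.15:.0f} °C  exergy eff = {exergy_efficiency(h1, h2, s1, s2, T0):.3f}")
```

With these placeholder states the exergy efficiency drops as the ambient temperature rises, the same qualitative trend the paper measures.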

Open Access Editor’s Choice Article
Determining the Bulk Parameters of Plasma Electrons from Pitch-Angle Distribution Measurements
Entropy 2020, 22(1), 103; https://doi.org/10.3390/e22010103 - 16 Jan 2020
Cited by 5
Abstract
Electrostatic analysers measure the flux of plasma particles in velocity space and determine their velocity distribution function. There are occasions when science objectives require high time-resolution measurements, and the instrument operates in short measurement cycles, sampling only a portion of the velocity distribution [...] Read more.
Electrostatic analysers measure the flux of plasma particles in velocity space and determine their velocity distribution function. There are occasions when science objectives require high time-resolution measurements, and the instrument operates in short measurement cycles, sampling only a portion of the velocity distribution function. One such high-resolution measurement strategy consists of sampling the two-dimensional pitch-angle distributions of the plasma particles, which describe the particle velocities with respect to the local magnetic field direction. Here, we investigate the accuracy of plasma bulk parameters derived from such high-resolution measurements. We simulate electron observations from the Solar Wind Analyser’s (SWA) Electron Analyser System (EAS) on board Solar Orbiter. We show that fitting analysis of the synthetic datasets determines the plasma temperature and kappa index of the distribution to within 10% of their actual values, even at large heliocentric distances where the expected solar wind flux is very low. Interestingly, we show that although measurement points with zero counts are not statistically significant, they provide information about the particle distribution function that becomes important when the particle flux is low. We also examine the convergence of the fitting algorithm for expected plasma conditions and discuss the sources of statistical and systematic uncertainties. Full article
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)
Show Figures

Figure 1
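The fitting analysis described above can be illustrated with a toy version: recovering the kappa index of a synthetic one-dimensional kappa distribution by a least-squares grid search. The functional form is the standard kappa distribution; the SWA/EAS instrument response and counting statistics are not modeled here.

```python
import numpy as np

# Toy version of the fitting analysis: recover the kappa index of a
# synthetic 1D kappa distribution by least squares on a parameter grid.

def kappa_pdf(v, kappa, v_th):
    """Unnormalized 1D kappa distribution; v_th is the thermal speed."""
    return (1.0 + v**2 / (kappa * v_th**2)) ** (-(kappa + 1.0))

v = np.linspace(0, 5.0, 60)               # speeds in units of thermal speed
data = kappa_pdf(v, kappa=4.0, v_th=1.0)  # noiseless synthetic "measurement"

kappas = np.linspace(2.0, 8.0, 121)
errors = [np.sum((kappa_pdf(v, k, 1.0) - data) ** 2) for k in kappas]
best = kappas[int(np.argmin(errors))]
print(f"recovered kappa = {best:.2f}")    # prints 4.00 (exact grid recovery)
```

A realistic version would fit noisy counts and report the ~10% accuracy the paper quotes; the noiseless case just shows the mechanics of the fit.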

Open Access Editor’s Choice Article
Quantifying Athermality and Quantum Induced Deviations from Classical Fluctuation Relations
Entropy 2020, 22(1), 111; https://doi.org/10.3390/e22010111 - 16 Jan 2020
Abstract
In recent years, a quantum information theoretic framework has emerged for incorporating non-classical phenomena into fluctuation relations. Here, we elucidate this framework by exploring deviations from classical fluctuation relations resulting from the athermality of the initial thermal system and quantum coherence of the [...] Read more.
In recent years, a quantum information theoretic framework has emerged for incorporating non-classical phenomena into fluctuation relations. Here, we elucidate this framework by exploring deviations from classical fluctuation relations resulting from the athermality of the initial thermal system and the quantum coherence of the system’s energy supply. In particular, we develop Crooks-like equalities for an oscillator system prepared in either photon-added or photon-subtracted thermal states, and derive a Jarzynski-like equality for average work extraction. We use these equalities to discuss the extent to which adding or subtracting a photon increases the informational content of a state, thereby amplifying the suppression of free-energy-increasing processes. We go on to derive a Crooks-like equality for an energy supply that is prepared in a pure binomial state, leading to a non-trivial contribution of energy and coherence to the resultant irreversibility. We show how the binomial state equality relates to a previously derived coherent state equality and offers a richer feature set. Full article
Show Figures

Graphical abstract
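For context, the classical relation that the quantum corrections above deviate from can be checked numerically. The sketch below verifies the Jarzynski equality ⟨e^(−βW)⟩ = e^(−βΔF) for Gaussian work statistics, where ΔF = μ − βσ²/2 holds exactly; this is a textbook classical baseline, not the paper's quantum derivation.

```python
import numpy as np

# Classical baseline the paper's quantum corrections are measured against:
# the Jarzynski equality <exp(-beta*W)> = exp(-beta*dF). For Gaussian work
# statistics W ~ N(mu, sigma^2) it gives dF = mu - beta*sigma^2/2 exactly.
rng = np.random.default_rng(0)
beta, mu, sigma = 1.0, 2.0, 0.5

W = rng.normal(mu, sigma, size=200_000)              # sampled work values
dF_jarzynski = -np.log(np.mean(np.exp(-beta * W))) / beta
dF_exact = mu - beta * sigma**2 / 2

print(dF_jarzynski, dF_exact)  # both close to 1.875
```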

Open Access Feature Paper Editor’s Choice Article
Learning in Feedforward Neural Networks Accelerated by Transfer Entropy
Entropy 2020, 22(1), 102; https://doi.org/10.3390/e22010102 - 16 Jan 2020
Cited by 2
Abstract
Current neural network architectures are increasingly hard to train because of the growing size and complexity of the datasets used. Our objective is to design more efficient training algorithms utilizing causal relationships inferred from neural networks. The transfer entropy (TE) was initially [...] Read more.
Current neural network architectures are increasingly hard to train because of the growing size and complexity of the datasets used. Our objective is to design more efficient training algorithms utilizing causal relationships inferred from neural networks. The transfer entropy (TE) was initially introduced as an information transfer measure used to quantify the statistical coherence between events (time series). It was later related to causality, even though the two are not the same. Only a few papers report applications of causality or TE in neural networks. Our contribution is an information-theoretical method for analyzing information transfer between the nodes of feedforward neural networks. The information transfer is measured by the TE of feedback neural connections. Intuitively, TE measures the relevance of a connection in the network, and the feedback amplifies this connection. We introduce a backpropagation-type training algorithm that uses TE feedback connections to improve its performance. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Figure 1
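A minimal plug-in estimator illustrates the transfer entropy the training algorithm relies on: TE from Y to X measures how much the past of Y improves prediction of X beyond X's own past. The toy binary series below are invented; the integration of TE into backpropagation is not reproduced here.

```python
import numpy as np
from collections import Counter

# Plug-in transfer entropy TE_{Y->X} (first-order history) in bits.
def transfer_entropy(x, y):
    """How much y's past improves prediction of x beyond x's own past."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 5000)
x = np.roll(y, 1)              # x copies y with one step of delay
print(transfer_entropy(x, y))  # near 1 bit: y's past determines x
print(transfer_entropy(y, x))  # near 0 bits: no transfer the other way
```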

Open Access Editor’s Choice Article
The Convex Information Bottleneck Lagrangian
Entropy 2020, 22(1), 98; https://doi.org/10.3390/e22010098 - 14 Jan 2020
Cited by 3
Abstract
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the [...] Read more.
The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T;β) = I(T;Y) − βI(X;T)) for many values of β ∈ [0,1]. Then, the curve of maximal I(T;Y) for a given I(X;T) is drawn and a representation with the desired predictability and compression is selected. It is known that when Y is a deterministic function of X, the IB curve cannot be explored in this way, and another Lagrangian has been proposed to tackle this problem: the squared IB Lagrangian, L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
Show Figures

Graphical abstract
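The deterministic case discussed above can be sketched numerically. When Y is a deterministic function of X, the IB curve is piecewise linear, I(T;Y) = min(r, H(Y)), so the plain Lagrangian has no unique maximizer on the linear segment, while a convex (squared) penalty selects a unique compression level, r* = 1/(2β_sq). The toy curve model below is an assumption for illustration, not the paper's general family of Lagrangians.

```python
import numpy as np

# Deterministic-case IB curve: I(T;Y) = min(r, H_Y). The plain Lagrangian
# I(T;Y) - beta*r is degenerate on the linear part; the squared penalty
# has the unique maximizer r* = 1/(2*beta_sq).

H_Y = 3.0                              # entropy of Y in bits (assumed)
r = np.linspace(0, 4, 4001)            # compression level I(X;T)
curve = np.minimum(r, H_Y)             # toy IB curve

beta_sq = 0.25
objective = curve - beta_sq * r**2     # squared IB Lagrangian
r_star = r[int(np.argmax(objective))]
print(f"selected compression r* = {r_star:.2f}")  # 1/(2*0.25) = 2.00
```

This is the one-to-one multiplier-to-compression mapping the paper generalizes: each β_sq picks out exactly one point on the curve.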

Open Access Editor’s Choice Article
On Heat Transfer Performance of Cooling Systems Using Nanofluid for Electric Motor Applications
Entropy 2020, 22(1), 99; https://doi.org/10.3390/e22010099 - 14 Jan 2020
Cited by 2
Abstract
This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling systems of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. To solve the governing equations, computational fluid [...] Read more.
This paper studies the fluid flow and heat transfer characteristics of nanofluids as advanced coolants for the cooling systems of electric motors. Investigations are carried out using numerical analysis for a cooling system with spiral channels. To solve the governing equations, computational fluid dynamics and 3D fluid motion analysis are used. The base fluid is water with a laminar flow. The fluid Reynolds number and the turn number of the spiral channels are the evaluation parameters. The effect of the nanoparticle volume fraction in the base fluid on the heat transfer performance of the cooling system is studied. Increasing the volume fraction of nanoparticles improves the heat transfer performance of the cooling system. On the other hand, a high volume fraction of the nanofluid increases the pressure drop of the coolant fluid and the required pumping power. This paper aims at finding a trade-off between these parameters by studying both the fluid flow and the heat transfer characteristics of the nanofluid. Full article
Show Figures

Figure 1
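The reported trend (heat transfer improving with nanoparticle volume fraction) can be illustrated with the classical Maxwell model for the effective conductivity of a dilute suspension. Both the correlation and the property values below (water base fluid, a generic oxide particle) are assumptions for illustration, not taken from the paper's CFD analysis.

```python
# Classical Maxwell model for the effective thermal conductivity of a
# dilute nanoparticle suspension; property values are illustrative only.

def maxwell_k_eff(k_f, k_p, phi):
    """Effective conductivity: k_f base fluid, k_p particle, phi volume fraction."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_water, k_particle = 0.6, 40.0   # W/(m K), assumed values
for phi in (0.0, 0.02, 0.04):
    print(f"phi={phi:.2f}  k_eff={maxwell_k_eff(k_water, k_particle, phi):.3f}")
```

The monotone rise of k_eff with phi is the conduction-side reason higher loadings improve heat transfer, while the pressure-drop penalty the paper weighs against it is a separate hydraulic effect.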

Open Access Editor’s Choice Article
On Unitary t-Designs from Relaxed Seeds
Entropy 2020, 22(1), 92; https://doi.org/10.3390/e22010092 - 12 Jan 2020
Cited by 2
Abstract
The capacity to randomly pick a unitary across the whole unitary group is a powerful tool across physics and quantum information. A unitary t-design is designed to tackle this challenge in an efficient way, yet constructions to date rely on heavy constraints. [...] Read more.
The capacity to randomly pick a unitary across the whole unitary group is a powerful tool across physics and quantum information. A unitary t-design is designed to tackle this challenge in an efficient way, yet constructions to date rely on heavy constraints. In particular, they are composed of ensembles of unitaries which, for technical reasons, must contain inverses and whose entries are algebraic. In this work, we reduce the requirements for generating an ε-approximate unitary t-design. To do so, we first construct a specific n-qubit random quantum circuit composed of a sequence of randomly chosen 2-qubit gates, chosen from a set of unitaries which is approximately universal on U(4), yet need not contain unitaries and their inverses nor be in general composed of unitaries whose entries are algebraic; dubbed a relaxed seed. We then show that this relaxed seed, when used as a basis for our construction, gives rise to an ε-approximate unitary t-design efficiently, where the depth of our random circuit scales as poly(n, t, log(1/ε)), thereby overcoming the two requirements which limited previous constructions. We suspect the result found here is not optimal and can be improved, particularly because the number of gates in the relaxed seeds introduced here grows with n and t. We conjecture that constant-sized seeds such as those which are usually present in the literature are sufficient. Full article
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)
Show Figures

Figure 1
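How close an ensemble is to a t-design is commonly quantified via the frame potential F_t = E|tr(U†V)|^(2t), which equals t! for Haar-random U(d) whenever t ≤ d. The sketch below estimates F_1 and F_2 for Haar-random single-qubit unitaries sampled via QR decomposition; it illustrates the diagnostic only, not the paper's relaxed-seed circuit construction.

```python
import numpy as np

# Frame potential F_t = E|tr(U^dag V)|^(2t); Haar value is t! for t <= d.
rng = np.random.default_rng(2)

def haar_unitary(d):
    """Haar-random unitary via QR of a complex Gaussian matrix (Mezzadri)."""
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phase ambiguity

n = 20000
overlaps = np.array([abs(np.trace(haar_unitary(2).conj().T @ haar_unitary(2)))
                     for _ in range(n)])
F1, F2 = np.mean(overlaps**2), np.mean(overlaps**4)
print(F1, F2)   # Haar values are 1! = 1 and 2! = 2
```

An ε-approximate t-design is an ensemble whose frame potential (or, more strongly, whose t-th moment operator) is within ε of these Haar values.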

Open Access Editor’s Choice Article
Development of Novel Lightweight Dual-Phase Al-Ti-Cr-Mn-V Medium-Entropy Alloys with High Strength and Ductility
Entropy 2020, 22(1), 74; https://doi.org/10.3390/e22010074 - 06 Jan 2020
Abstract
A novel lightweight Al-Ti-Cr-Mn-V medium-entropy alloy (MEA) system was developed using a non-equiatomic approach, and alloys were produced through arc melting and drop casting. These alloys comprised a body-centered cubic (BCC) and face-centered cubic (FCC) dual phase with a density of approximately 4.5 [...] Read more.
A novel lightweight Al-Ti-Cr-Mn-V medium-entropy alloy (MEA) system was developed using a non-equiatomic approach, and alloys were produced through arc melting and drop casting. These alloys comprised a body-centered cubic (BCC) and face-centered cubic (FCC) dual phase with a density of approximately 4.5 g/cm³. However, the fraction of the BCC phase and the morphology of the FCC phase can be controlled by incorporating other elements. The results of compression tests indicated that these Al-Ti-Cr-Mn-V alloys exhibit a prominent compressive strength (~1940 MPa) and ductility (~30%). Moreover, homogenized samples maintained a high compressive strength of 1900 MPa and similar ductility (30%). Owing to their high specific compressive strength (0.433 GPa·cm³/g) and excellent combination of strength and ductility, the cast lightweight Al-Ti-Cr-Mn-V MEAs are a promising alloy system for applications in the transportation and energy industries. Full article
(This article belongs to the Special Issue High-Entropy Materials)
Show Figures

Figure 1
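"Medium-entropy" refers to the ideal configurational entropy of mixing, ΔS_mix = −R Σ c_i ln c_i, with alloys commonly classed as medium-entropy when 1R ≤ ΔS_mix ≤ 1.5R. The composition below is a hypothetical non-equiatomic five-element example, since the paper's exact atomic fractions are not given in the abstract.

```python
import numpy as np

# Ideal configurational entropy of mixing, dS = -R * sum(c_i * ln c_i).
# The atomic fractions below are a hypothetical non-equiatomic example,
# NOT the paper's actual Al-Ti-Cr-Mn-V composition.

R = 8.314  # J/(mol K)
fractions = np.array([0.40, 0.25, 0.15, 0.12, 0.08])  # assumed c_i, sum to 1
dS = -R * np.sum(fractions * np.log(fractions))
print(f"dS/R = {dS / R:.3f}")  # about 1.45, in the medium-entropy range
```

An equiatomic five-element alloy would instead give ΔS_mix = R ln 5 ≈ 1.61R; skewing the composition away from equiatomic lowers the mixing entropy, which is why non-equiatomic five-element systems can fall in the medium-entropy class.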

Open Access Editor’s Choice Article
Energy Disaggregation Using Elastic Matching Algorithms
Entropy 2020, 22(1), 71; https://doi.org/10.3390/e22010071 - 06 Jan 2020
Cited by 7
Abstract
In this article, an energy disaggregation architecture using elastic matching algorithms is presented. The architecture uses a database of reference energy consumption signatures and compares them with incoming energy consumption frames using template matching. In contrast to machine learning-based approaches, which require a significant [...] Read more.
In this article, an energy disaggregation architecture using elastic matching algorithms is presented. The architecture uses a database of reference energy consumption signatures and compares them with incoming energy consumption frames using template matching. In contrast to machine learning-based approaches, which require a significant amount of data to train a model, elastic matching-based approaches have no model training process but perform recognition using template matching. Five different elastic matching algorithms were evaluated across different datasets, and the experimental results showed that the minimum variance matching algorithm outperforms all other evaluated matching algorithms. The best-performing minimum variance matching algorithm improved the energy disaggregation accuracy by 2.7% compared to the baseline dynamic time warping algorithm. Full article
(This article belongs to the Section Signal and Data Analysis)
Show Figures

Figure 1
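The baseline matcher in the comparison above, dynamic time warping (DTW), can be sketched in a few lines: it aligns an incoming consumption frame against stored appliance signatures despite time shifts and stretching. The signatures below are invented toy power profiles; minimum variance matching, the best performer in the paper, is not implemented here.

```python
import numpy as np

# Classic DTW distance via dynamic programming over an alignment grid.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Invented appliance signatures (instantaneous power, W)
signatures = {
    "kettle": np.array([0, 2000, 2000, 2000, 0], float),
    "fridge": np.array([0, 120, 120, 120, 120, 0], float),
}
frame = np.array([0, 0, 2000, 2000, 0], float)  # time-shifted kettle-like frame
best = min(signatures, key=lambda k: dtw_distance(frame, signatures[k]))
print(best)  # -> kettle
```

Because the warping path may repeat or skip samples, the shifted frame still matches the kettle signature with zero cost; a plain Euclidean comparison would not.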

Open Access Editor’s Choice Article
Bounds on Mixed State Entanglement
Entropy 2020, 22(1), 62; https://doi.org/10.3390/e22010062 - 01 Jan 2020
Cited by 2
Abstract
In the general framework of d₁ × d₂ mixed states, we derive an explicit bound for bipartite negative partial transpose (NPT) entanglement based on the mixedness characterization of the physical system. The derived result is very general, being based only on the assumption of finite dimensionality. In addition, it turns out to be of experimental interest since some purity-measuring protocols are known. Exploiting the bound in the particular case of thermal entanglement, a way to connect thermodynamic features to the monogamy of quantum correlations is suggested, and some recent results on the subject are given a physically clear explanation. Full article
(This article belongs to the Section Quantum Information)
Show Figures

Figure 1
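The NPT entanglement quantifier that such bounds apply to can be computed directly: the negativity N = (‖ρ^(T_B)‖₁ − 1)/2, alongside the purity Tr ρ² that characterizes mixedness. The sketch below uses a 2 × 2 Werner state as a stand-in example, not a system from the paper.

```python
import numpy as np

# Negativity from the partial transpose, plus purity as the mixedness measure.
def partial_transpose(rho, d1, d2):
    """Transpose the second subsystem of a (d1*d2)x(d1*d2) density matrix."""
    r = rho.reshape(d1, d2, d1, d2)
    return r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

def negativity(rho, d1, d2):
    eigs = np.linalg.eigvalsh(partial_transpose(rho, d1, d2))
    return (np.sum(np.abs(eigs)) - 1) / 2

psi = np.array([0, 1, -1, 0]) / np.sqrt(2)           # two-qubit singlet
bell = np.outer(psi, psi)
for p in (0.2, 0.5, 1.0):                            # Werner mixing parameter
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(f"p={p}: purity={np.trace(rho @ rho):.3f}  N={negativity(rho, 2, 2):.3f}")
```

The Werner state is NPT (N > 0) only for p > 1/3, so the low-purity p = 0.2 case shows zero negativity while the pure singlet reaches the maximum N = 1/2, the purity-vs-entanglement interplay that mixedness-based bounds capture.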

Open Access Feature Paper Editor’s Choice Article
Nonlinear Information Bottleneck
Entropy 2019, 21(12), 1181; https://doi.org/10.3390/e21121181 - 30 Nov 2019
Cited by 10
Abstract
Information bottleneck (IB) is a technique for extracting information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed “bottleneck” random variable M from which Y can be accurately decoded. [...] Read more.
Information bottleneck (IB) is a technique for extracting information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed “bottleneck” random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently has been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily-distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently-proposed “variational IB” method on several real-world datasets. Full article
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)
Show Figures

Figure 1

Open Access Editor’s Choice Article
OTEC Maximum Net Power Output Using Carnot Cycle and Application to Simplify Heat Exchanger Selection
Entropy 2019, 21(12), 1143; https://doi.org/10.3390/e21121143 - 22 Nov 2019
Cited by 16
Abstract
Ocean thermal energy conversion (OTEC) uses the natural thermal gradient in the sea. It has been investigated to make it competitive with conventional power plants, as it has huge potential and can produce energy steadily throughout the year. This has been done mostly [...] Read more.
Ocean thermal energy conversion (OTEC) uses the natural thermal gradient in the sea. It has been investigated to make it competitive with conventional power plants, as it has huge potential and can produce energy steadily throughout the year. This has been done mostly by focusing on improving cycle performance or central elements of OTEC, such as heat exchangers. It is difficult to choose a suitable heat exchanger for OTEC from the separate evaluations of the heat transfer coefficient and pressure drop that are usually found in the literature. Accordingly, this paper presents a method to evaluate heat exchangers for OTEC. On the basis of finite-time thermodynamics, the maximum net power output for different heat exchangers, using both heat transfer performance and pressure drop, was assessed and compared. This method was successfully applied to three heat exchangers. The most suitable heat exchanger was found to yield a maximum net power output 158% higher than that of the least suitable heat exchanger. For a difference of 3.7% in the net power output, a difference of 22% in the Reynolds numbers was found; those numbers therefore also play a significant role in the choice of heat exchangers, as they affect the pumping power required to circulate the seawater. A sensitivity analysis showed that seawater temperature does not affect the choice of heat exchangers, even though the net power output was found to decrease by up to 10% for every 1 °C drop in the temperature difference. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
Show Figures

Figure 1
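The trade-off evaluated above (heat transfer gain versus pumping loss) can be sketched with a toy finite-time model: Carnot-like gross power from finite seawater flows minus a pumping power that grows rapidly with flow rate. All coefficients below are invented for illustration; the paper's heat exchanger models are far more detailed.

```python
import numpy as np

# Toy finite-time OTEC model: gross power at Curzon-Ahlborn efficiency
# minus pumping power (pressure drop ~ flow^2, so pumping ~ flow^3).
# Every coefficient here is an assumption for illustration.

T_warm, T_cold = 300.0, 278.0    # surface / deep seawater, K
cp, eps = 4000.0, 0.8            # J/(kg K), heat-exchanger effectiveness
k_pump = 2.0                     # pumping-power coefficient, W/(kg/s)^3

def net_power(m):                # m: seawater mass flow rate, kg/s
    q_hot = eps * m * cp * (T_warm - T_cold)   # heat drawn from warm water
    eta = 1 - np.sqrt(T_cold / T_warm)         # Curzon-Ahlborn efficiency
    return q_hot * eta - k_pump * m**3         # gross minus pumping power

m = np.linspace(0.1, 200, 2000)
p = net_power(m)
m_opt = m[np.argmax(p)]
print(f"optimal flow = {m_opt:.1f} kg/s, max net power = {p.max()/1e3:.1f} kW")
```

The maximum of net power over flow rate is the quantity the paper's method compares across heat exchangers; a better exchanger shifts both the optimal flow and the achievable peak.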

Open Access Editor’s Choice Article
Topological Information Data Analysis
Entropy 2019, 21(9), 869; https://doi.org/10.3390/e21090869 - 06 Sep 2019
Cited by 9
Abstract
This paper presents methods that quantify the structure of statistical interactions within a given data set, and were applied in a previous article. It establishes new results on the k-multivariate mutual information (I_k) inspired by the topological formulation of information introduced in a series of studies. In particular, we show that the vanishing of all I_k for 2 ≤ k ≤ n of n random variables is equivalent to their statistical independence. Pursuing the work of Hu Kuo Ting and Te Sun Han, we show that information functions provide coordinates for binary variables, and that they are analytically independent of the probability simplex for any set of finite variables. The maximal positive I_k identifies the variables that co-vary the most in the population, whereas the minimal negative I_k identifies synergistic clusters and the variables that differentiate and segregate the most in the population. Finite data size effects and estimation biases severely constrain the effective computation of the information topology on data, and we provide simple statistical tests for the undersampling bias and the k-dependences. We give an example of the application of these methods to gene expression and unsupervised cell-type classification. The methods unravel biologically relevant subtypes, with a sample size of 41 genes and with few errors. This establishes generic basic methods for quantifying epigenetic information storage and a unified epigenetic unsupervised learning formalism. We propose that higher-order statistical interactions and non-identically distributed variables are constitutive characteristics of biological systems, and should be estimated in order to unravel their significant statistical structure and diversity. The topological information data analysis presented here allows for precisely estimating this higher-order structure characteristic of biological systems. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Figure 1
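The sign convention mentioned above (negative I_k indicating synergy) can be seen in the smallest possible example: three binary variables with Z = XOR(X, Y), for which all pairwise mutual informations vanish while I_3 = −1 bit. The sketch uses plug-in estimates on the exact joint distribution; the paper's finite-sample tests and biases are not reproduced.

```python
import numpy as np
from itertools import product

# I_2 (mutual information) and I_3 (triple interaction information) by
# inclusion-exclusion over marginal entropies, for z = x XOR y.

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# exact joint distribution of (x, y, z): x, y fair coins, z = x XOR y
p = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def marginal_entropy(axes):
    m = {}
    for s, q in p.items():
        key = tuple(s[a] for a in axes)
        m[key] = m.get(key, 0.0) + q
    return entropy(np.array(list(m.values())))

H = {a: marginal_entropy(a) for a in
     [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]}

I2_xy = H[(0,)] + H[(1,)] - H[(0, 1)]                      # pairwise MI
I3 = (H[(0,)] + H[(1,)] + H[(2,)]
      - H[(0, 1)] - H[(0, 2)] - H[(1, 2)] + H[(0, 1, 2)])  # triple term
print(I2_xy, I3)  # -> 0.0 -1.0
```

No pair of variables carries any information about a third, yet the triple is fully dependent: exactly the synergistic cluster that a minimal negative I_k flags in the paper's analysis.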