Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

20 pages, 6051 KiB  
Article
OTEC Maximum Net Power Output Using Carnot Cycle and Application to Simplify Heat Exchanger Selection
by Kevin Fontaine, Takeshi Yasunaga and Yasuyuki Ikegami
Entropy 2019, 21(12), 1143; https://doi.org/10.3390/e21121143 - 22 Nov 2019
Cited by 29 | Viewed by 7893
Abstract
Ocean thermal energy conversion (OTEC) uses the natural thermal gradient in the sea. It has been investigated to make it competitive with conventional power plants, as it has huge potential and can produce energy steadily throughout the year. This has been done mostly by focusing on improving cycle performances or central elements of OTEC, such as heat exchangers. It is difficult to choose a suitable heat exchanger for OTEC from the separate evaluations of the heat transfer coefficient and pressure drop that are usually found in the literature. Accordingly, this paper presents a method to evaluate heat exchangers for OTEC. On the basis of finite-time thermodynamics, the maximum net power output for different heat exchangers was assessed and compared using both heat transfer performance and pressure drop. The method was successfully applied to three heat exchangers. The most suitable heat exchanger was found to yield a maximum net power output 158% higher than that of the least suitable one. For a difference of 3.7% in net power output, a difference of 22% in the Reynolds numbers was found; the Reynolds numbers therefore also play a significant role in the choice of heat exchangers, as they affect the pumping power required for the seawater flow. A sensitivity analysis showed that seawater temperature does not affect the choice of heat exchangers, even though the net power output was found to decrease by up to 10% for every 1 °C drop in the seawater temperature difference.
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
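
As background for the finite-time-thermodynamics approach used here, the classic Curzon–Ahlborn result gives the efficiency of an endoreversible Carnot engine at maximum power output (a standard reference point, not the paper's specific OTEC model):

\eta_{CA} = 1 - \sqrt{T_C / T_H}

For typical OTEC conditions, with warm surface water at T_H ≈ 298 K and deep cold water at T_C ≈ 278 K, this gives \eta_{CA} ≈ 1 - \sqrt{278/298} ≈ 3.4%, roughly half the Carnot limit \eta_C = 1 - T_C/T_H ≈ 6.7%, which illustrates why heat exchanger performance dominates OTEC design.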

38 pages, 4465 KiB  
Article
Topological Information Data Analysis
by Pierre Baudot, Monica Tapia, Daniel Bennequin and Jean-Marc Goaillard
Entropy 2019, 21(9), 869; https://doi.org/10.3390/e21090869 - 6 Sep 2019
Cited by 43 | Viewed by 9884
Abstract
This paper presents methods that quantify the structure of statistical interactions within a given data set, which were applied in a previous article. It establishes new results on the k-multivariate mutual information (I_k) inspired by the topological formulation of information introduced in a series of studies. In particular, we show that the vanishing of all I_k for 2 ≤ k ≤ n of n random variables is equivalent to their statistical independence. Pursuing the work of Hu Kuo Ting and Te Sun Han, we show that information functions provide coordinates for binary variables, and that they are analytically independent from the probability simplex for any set of finite variables. The maximal positive I_k identifies the variables that co-vary the most in the population, whereas the minimal negative I_k identifies synergistic clusters and the variables that differentiate–segregate the most in the population. Finite data size effects and estimation biases severely constrain the effective computation of the information topology on data, and we provide simple statistical tests for the undersampling bias and the k-dependences. We give an example of the application of these methods to genetic expression and unsupervised cell-type classification. The methods unravel biologically relevant subtypes, with a sample size of 41 genes and with few errors. This establishes generic basic methods to quantify epigenetic information storage and a unified epigenetic unsupervised learning formalism. We propose that higher-order statistical interactions and non-identically distributed variables are constitutive characteristics of biological systems that should be estimated in order to unravel their significant statistical structure and diversity. The topological information data analysis presented here allows this higher-order structure characteristic of biological systems to be precisely estimated.
(This article belongs to the Section Information Theory, Probability and Statistics)
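
For orientation, the k-variable mutual information studied in this line of work is commonly written as an alternating sum of subset entropies (sign conventions vary between papers; this is the usual co-information form):

I_k(X_1; \dots; X_k) = \sum_{\emptyset \neq S \subseteq \{1,\dots,k\}} (-1)^{|S|+1} H(X_S)

For k = 2 this reduces to Shannon's mutual information, I_2 = H(X_1) + H(X_2) - H(X_1, X_2), consistent with the independence result quoted in the abstract: all I_k vanish for 2 ≤ k ≤ n exactly when the n variables are statistically independent.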

17 pages, 6736 KiB  
Article
Image Encryption Scheme with Compressed Sensing Based on New Three-Dimensional Chaotic System
by Yaqin Xie, Jiayin Yu, Shiyu Guo, Qun Ding and Erfu Wang
Entropy 2019, 21(9), 819; https://doi.org/10.3390/e21090819 - 22 Aug 2019
Cited by 54 | Viewed by 4658
Abstract
In this paper, a new three-dimensional chaotic system is proposed for image encryption. The core of the encryption algorithm is the combination of a chaotic system and compressed sensing, which can perform image encryption and compression at the same time. The Lyapunov exponents, bifurcation diagram, and complexity of the new three-dimensional chaotic system are analyzed. The performance analysis shows that the chaotic system has two positive Lyapunov exponents and high complexity. In the encryption scheme, the new chaotic system is used as the measurement matrix for compressed sensing, and the Arnold transform is used to further scramble the image. Compared with other methods, the proposed method has better reconstruction ability within the compressible range of the algorithm. The experimental results show that the proposed encryption scheme has a good encryption effect and image compression capability.
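
A minimal sketch of the Arnold scrambling step mentioned in the abstract, using the textbook Arnold cat map on a square image (the paper's exact parameters and iteration counts are not specified here):

import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble a square image with the classic Arnold cat map:
    (x, y) -> (x + y, x + 2y) mod n. The map is periodic, so the
    original image can be recovered by iterating further."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map requires a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(arnold_scramble(img, iterations=3))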

25 pages, 3053 KiB  
Article
Distinguishing between Clausius, Boltzmann and Pauling Entropies of Frozen Non-Equilibrium States
by Rainer Feistel
Entropy 2019, 21(8), 799; https://doi.org/10.3390/e21080799 - 15 Aug 2019
Cited by 11 | Viewed by 7631
Abstract
In conventional textbook thermodynamics, entropy is a quantity that may be calculated by different methods, for example, experimentally from heat capacities (following Clausius) or statistically from numbers of microscopic quantum states (following Boltzmann and Planck). It has turned out that these methods do not necessarily provide mutually consistent results, and for equilibrium systems their difference was explained by introducing a residual zero-point entropy (following Pauling), apparently violating the Nernst theorem. At finite temperatures, associated statistical entropies, which count microstates that do not contribute to a body's heat capacity, differ systematically from the Clausius entropy and are of particular relevance as measures for metastable, frozen-in non-equilibrium structures and for symbolic information processing (following Shannon). In this paper, it is suggested that Clausius, Boltzmann, Pauling, and Shannon entropies be considered distinct, though related, physical quantities with different key properties, in order to avoid the confusion caused by speaking loosely about just "entropy" while actually referring to different kinds of it. For instance, zero-point entropy exclusively belongs to Boltzmann rather than Clausius entropy, while the Nernst theorem holds rigorously for Clausius rather than Boltzmann entropy. The discussion of these terms is underpinned by a brief historical review of the emergence of the corresponding fundamental thermodynamic concepts.
(This article belongs to the Special Issue Crystallization Thermodynamics)
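
A concrete worked example of the distinction: Pauling's residual entropy of ice counts W = (3/2)^N admissible hydrogen configurations for N water molecules, giving a zero-point entropy of

S_0 = k_B \ln W = N k_B \ln(3/2), \qquad s_0 = R \ln(3/2) \approx 3.37 \ \mathrm{J\,K^{-1}\,mol^{-1}}

per mole, close to the calorimetrically observed discrepancy of about 3.4 J K⁻¹ mol⁻¹. This entropy is present in the statistical (Boltzmann) count but absent from the Clausius entropy obtained by integrating heat capacities.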

16 pages, 754 KiB  
Article
Comparing Information Metrics for a Coupled Ornstein–Uhlenbeck Process
by James Heseltine and Eun-jin Kim
Entropy 2019, 21(8), 775; https://doi.org/10.3390/e21080775 - 8 Aug 2019
Cited by 22 | Viewed by 4370
Abstract
It is often the case when studying complex dynamical systems that a statistical formulation provides the greatest insight into the underlying dynamics. When discussing the behavior of such a system as it evolves in time, it is useful to have a notion of a metric between two given states. A popular measure of information change in a system under perturbation has been the relative entropy of the states, as this notion allows us to quantify the difference between states of a system at different times. In this paper, we investigate the relaxation problem given by single and coupled Ornstein–Uhlenbeck (O–U) processes and compare the information length with entropy-based metrics (relative entropy, Jensen divergence) as well as others. By measuring the total information length in the long-time limit, we show that it is only the information length that preserves the linear geometry of the O–U process. In the coupled O–U process, the information length is shown to be capable of detecting changes in both components of the system, even when other metrics would detect almost nothing in one of the components. We show in detail that the information length is sensitive to the evolution of subsystems.
(This article belongs to the Special Issue Statistical Mechanics and Mathematical Physics)
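
For reference, the information length used in this comparison is commonly defined from the time-dependent probability density p(x, t) as (the standard definition from the information-length literature; normalizations may differ):

\mathcal{E}(t) = \int dx \, \frac{1}{p(x,t)} \left( \frac{\partial p(x,t)}{\partial t} \right)^2, \qquad \mathcal{L}(t) = \int_0^t \sqrt{\mathcal{E}(t')} \, dt'

so that \mathcal{L} accumulates the number of statistically distinguishable states the system traverses, which is why it can preserve the linear geometry of the O–U relaxation where pairwise divergences do not.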

18 pages, 1620 KiB  
Article
Power, Efficiency and Fluctuations in a Quantum Point Contact as Steady-State Thermoelectric Heat Engine
by Sara Kheradsoud, Nastaran Dashti, Maciej Misiorny, Patrick P. Potts, Janine Splettstoesser and Peter Samuelsson
Entropy 2019, 21(8), 777; https://doi.org/10.3390/e21080777 - 8 Aug 2019
Cited by 34 | Viewed by 5165
Abstract
The trade-off between large power output, high efficiency, and small fluctuations in the operation of heat engines has recently received interest in the context of thermodynamic uncertainty relations (TURs). Here we provide a concrete illustration of this trade-off by theoretically investigating the operation of a quantum point contact (QPC) with an energy-dependent transmission function as a steady-state thermoelectric heat engine. As a starting point, we review and extend the previous analysis of the power production and efficiency. Thereafter, the power fluctuations and the bound jointly imposed on the power, efficiency, and fluctuations by the TURs are analyzed as additional performance quantifiers. We allow for arbitrary smoothness of the transmission probability of the QPC, which exhibits a close-to-step-like dependence on energy, and consider both the linear and the non-linear regime of operation. It is found that, for a broad range of parameters, the power production reaches nearly its theoretical maximum value, with efficiencies of more than half of the Carnot efficiency and, at the same time, with rather small fluctuations. Moreover, we show that by demanding a non-zero power production, a stronger TUR can be formulated in the linear regime in terms of the thermoelectric figure of merit. Interestingly, this bound also holds in a wide parameter regime beyond linear response for our QPC device.
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
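
For context, the steady-state thermodynamic uncertainty relation invoked here bounds the relative fluctuations of any time-integrated current J by the total entropy production Σ over the same interval (the standard Barato–Seifert form; the paper derives tighter, device-specific statements):

\frac{\mathrm{Var}(J)}{\langle J \rangle^2} \geq \frac{2 k_B}{\Sigma}

so reducing fluctuations at fixed output necessarily costs dissipation, which is the trade-off the QPC engine is shown to negotiate favorably.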

40 pages, 955 KiB  
Article
Two Measures of Dependence
by Amos Lapidoth and Christoph Pfister
Entropy 2019, 21(8), 778; https://doi.org/10.3390/e21080778 - 8 Aug 2019
Cited by 14 | Viewed by 4076
Abstract
Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon's mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding.
(This article belongs to the Special Issue Information Measures with Applications)
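
For reference, the Rényi divergence of order α underlying the first family is (standard definition; the dependence measures themselves are built from it in the paper via optimizations over marginal distributions):

D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log \sum_x P(x)^\alpha Q(x)^{1-\alpha}

As α → 1 this tends to the relative entropy D(P‖Q), which is consistent with both dependence measures reducing to Shannon's mutual information I(X;Y) at α = 1.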

29 pages, 4850 KiB  
Review
Quantum Phonon Transport in Nanomaterials: Combining Atomistic with Non-Equilibrium Green’s Function Techniques
by Leonardo Medrano Sandonas, Rafael Gutierrez, Alessandro Pecchia, Alexander Croy and Gianaurelio Cuniberti
Entropy 2019, 21(8), 735; https://doi.org/10.3390/e21080735 - 27 Jul 2019
Cited by 15 | Viewed by 6494
Abstract
A crucial goal for increasing thermal energy harvesting will be to progress towards atomistic design strategies for smart nanodevices and nanomaterials. This requires the combination of computationally efficient atomistic methodologies with quantum-transport-based approaches. Here, we review our recent work on this problem by presenting selected applications of the PHONON tool to the description of phonon transport in nanostructured materials. The PHONON tool is a module developed as part of the Density-Functional Tight-Binding (DFTB) software platform. We discuss the anisotropic phonon band structure of selected puckered two-dimensional materials, helical and horizontal doping effects in the phonon thermal conductivity of boron nitride–carbon heteronanotubes, phonon filtering in molecular junctions, and a novel computational methodology to investigate time-dependent phonon transport at the atomistic level. These examples illustrate the versatility of our implementation of phonon transport in combination with density-functional-based methods to address specific nanoscale functionalities, thus potentially allowing for the design of novel thermal devices.
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
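
As background, in the NEGF/Landauer picture used by such tools, the phonon thermal conductance follows from the phonon transmission function \mathcal{T}(\omega) via the standard Landauer expression (a generic formula, not specific to the PHONON implementation):

\kappa_{ph}(T) = \frac{1}{2\pi} \int_0^\infty d\omega \, \hbar\omega \, \mathcal{T}(\omega) \, \frac{\partial n_B(\omega, T)}{\partial T}, \qquad n_B(\omega, T) = \frac{1}{e^{\hbar\omega / k_B T} - 1}

so the atomistic (DFTB) part of the calculation supplies \mathcal{T}(\omega), and the transport part converts it into measurable conductances.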

13 pages, 1267 KiB  
Article
Electron Traversal Times in Disordered Graphene Nanoribbons
by Michael Ridley, Michael A. Sentef and Riku Tuovinen
Entropy 2019, 21(8), 737; https://doi.org/10.3390/e21080737 - 27 Jul 2019
Cited by 12 | Viewed by 4307
Abstract
Using the partition-free time-dependent Landauer–Büttiker formalism for transient current correlations, we study the traversal times taken for electrons to cross graphene nanoribbon (GNR) molecular junctions. We demonstrate electron traversal signatures that vary with disorder and orientation of the GNR. These findings can be related to operational frequencies of GNR-based devices and their consequent rational design.
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)

16 pages, 308 KiB  
Article
Empirical Estimation of Information Measures: A Literature Guide
by Sergio Verdú
Entropy 2019, 21(8), 720; https://doi.org/10.3390/e21080720 - 24 Jul 2019
Cited by 48 | Viewed by 11142
Abstract
We give a brief survey of the literature on the empirical estimation of entropy, differential entropy, relative entropy, mutual information and related information measures. While those quantities are of central importance in information theory, universal algorithms for their estimation are increasingly important in data science, machine learning, biology, neuroscience, economics, language, and other experimental sciences.
(This article belongs to the Special Issue Information Measures with Applications)
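
As a minimal illustration of the simplest approach covered by such surveys, here is the plug-in (maximum-likelihood) entropy estimator with the Miller–Madow bias correction, in Python (a sketch of a textbook estimator, not of any particular method from the survey):

import numpy as np
from collections import Counter

def plugin_entropy(samples, correct_bias=True):
    """Plug-in (maximum-likelihood) entropy estimate in nats,
    optionally with the Miller-Madow bias correction."""
    n = len(samples)
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / n
    h = -np.sum(p * np.log(p))
    if correct_bias:
        # Miller-Madow: add (K - 1) / (2n), K = number of observed symbols
        h += (len(counts) - 1) / (2 * n)
    return h

rng = np.random.default_rng(0)
print(plugin_entropy(rng.integers(0, 4, size=1000)))  # ~ln(4) for uniform data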

27 pages, 336 KiB  
Article
Dynamic Maximum Entropy Reduction
by Václav Klika, Michal Pavelka, Petr Vágner and Miroslav Grmela
Entropy 2019, 21(7), 715; https://doi.org/10.3390/e21070715 - 22 Jul 2019
Cited by 24 | Viewed by 5544
Abstract
Any physical system can be regarded at different levels of description, varying in how detailed the description is. We propose a method called Dynamic MaxEnt (DynMaxEnt) that provides a passage from the more detailed evolution equations to equations for the less detailed state variables. The method is based on the explicit recognition of the state and conjugate variables, which can relax towards the respective quasi-equilibria in different ways. Detailed state variables are reduced using the usual principle of maximum entropy (MaxEnt), whereas the relaxation of conjugate variables guarantees that the reduced equations are closed. Moreover, an infinite chain of consecutive DynMaxEnt approximations can be constructed. The method is demonstrated on a particle with friction, complex fluids (equipped with conformation and Reynolds stress tensors), hyperbolic heat conduction, and magnetohydrodynamics.
(This article belongs to the Special Issue Entropy and Non-Equilibrium Statistical Mechanics)

22 pages, 1131 KiB  
Communication
Derivations of the Core Functions of the Maximum Entropy Theory of Ecology
by Alexander B. Brummer and Erica A. Newman
Entropy 2019, 21(7), 712; https://doi.org/10.3390/e21070712 - 21 Jul 2019
Cited by 23 | Viewed by 6401
Abstract
The Maximum Entropy Theory of Ecology (METE) is a theoretical framework of macroecology that makes a variety of realistic ecological predictions about how species richness, abundance of species, metabolic rate distributions, and spatial aggregation of species interrelate in a given region. In the METE framework, "ecological state variables" (representing total area, total species richness, total abundance, and total metabolic energy) describe the macroecological properties of an ecosystem. METE incorporates these state variables into constraints on underlying probability distributions. The method of Lagrange multipliers and the maximization of information entropy (MaxEnt) lead to predicted functional forms of the distributions of interest. We demonstrate how information entropy is maximized for the general case of a distribution for which empirical information provides constraints on the overall predictions. We then show how METE's two core functions are derived. These functions, called the "Spatial Structure Function" and the "Ecosystem Structure Function", are the core pieces of the theory, from which all the predictions of METE follow (including the Species Area Relationship, the Species Abundance Distribution, and various metabolic distributions). Primarily, we consider the discrete distributions predicted by METE. We also explore the parameter space defined by METE's state variables and Lagrange multipliers. We aim to provide a comprehensive resource for ecologists who want to understand the derivations and assumptions of the basic mathematical structure of METE.
(This article belongs to the Special Issue Information Theory Applications in Biology)
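
The derivations rest on the standard Lagrange-multiplier form of MaxEnt: maximizing H = -\sum_n p_n \ln p_n subject to normalization and the state-variable constraints \sum_n p_n f_i(n) = F_i yields

p_n = \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_i \lambda_i f_i(n)\Big), \qquad Z(\lambda) = \sum_n \exp\!\Big(-\sum_i \lambda_i f_i(n)\Big)

with the multipliers λ_i fixed by the constraints; METE's Spatial Structure Function and Ecosystem Structure Function can be read as instances of this exponential-family form.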

18 pages, 404 KiB  
Article
Rateless Codes-Based Secure Communication Employing Transmit Antenna Selection and Harvest-To-Jam under Joint Effect of Interference and Hardware Impairments
by Phu Tran Tin, Tan N. Nguyen, Nguyen Q. Sang, Tran Trung Duy, Phuong T. Tran and Miroslav Voznak
Entropy 2019, 21(7), 700; https://doi.org/10.3390/e21070700 - 16 Jul 2019
Cited by 14 | Viewed by 4423
Abstract
In this paper, we propose a rateless-codes-based communication protocol to provide security for wireless systems. In the proposed protocol, a source uses the transmit antenna selection (TAS) technique to transmit Fountain-encoded packets to a destination in the presence of an eavesdropper. Moreover, a cooperative jammer node harvests energy from the radio frequency (RF) signals of the source and the interference sources to generate jamming noise at the eavesdropper. The data transmission terminates as soon as the destination has received a sufficient number of encoded packets to decode the original data of the source. To obtain secure communication, the destination must receive sufficient encoded packets before the eavesdropper does. The combination of the TAS and harvest-to-jam techniques improves security and energy efficiency by reducing the number of data transmissions, increasing the quality of the data channel, degrading the eavesdropping channel, and supplying energy to the jammer. The main contribution of this paper is to derive exact closed-form expressions for the outage probability (OP), the probability of successful and secure communication (SS), the intercept probability (IP), and the average number of time slots used by the source over a Rayleigh fading channel under the joint impact of co-channel interference and hardware impairments. Monte Carlo simulations are then presented to verify the theoretical results.
(This article belongs to the Section Information Theory, Probability and Statistics)

24 pages, 1866 KiB  
Perspective
Entropy and Information within Intrinsically Disordered Protein Regions
by Iva Pritišanac, Robert M. Vernon, Alan M. Moses and Julie D. Forman-Kay
Entropy 2019, 21(7), 662; https://doi.org/10.3390/e21070662 - 6 Jul 2019
Cited by 34 | Viewed by 10181
Abstract
Bioinformatics and biophysical studies of intrinsically disordered proteins and regions (IDRs) note the high entropy at individual sequence positions and in conformations sampled in solution. This prevents application of the canonical sequence-structure-function paradigm to IDRs and motivates the development of new methods to extract information from IDR sequences. We argue that the information in IDR sequences cannot be fully revealed through positional conservation, which largely measures stable structural contacts and interaction motifs. Instead, considerations of evolutionary conservation of molecular features can reveal the full extent of information in IDRs. Experimental quantification of the large conformational entropy of IDRs is challenging but can be approximated through the extent of conformational sampling measured by a combination of NMR spectroscopy and lower-resolution structural biology techniques, which can be further interpreted with simulations. Conformational entropy and other biophysical features can be modulated by post-translational modifications that provide functional advantages to IDRs by tuning their energy landscapes and enabling a variety of functional interactions and modes of regulation. The diverse mosaic of functional states of IDRs and their conformational features within complexes demands novel metrics of information, which will reflect the complicated sequence-conformational ensemble-function relationship of IDRs.

21 pages, 4720 KiB  
Article
Multiobjective Optimization of a Plate Heat Exchanger in a Waste Heat Recovery Organic Rankine Cycle System for Natural Gas Engines
by Guillermo Valencia, José Núñez and Jorge Duarte
Entropy 2019, 21(7), 655; https://doi.org/10.3390/e21070655 - 3 Jul 2019
Cited by 47 | Viewed by 5462
Abstract
A multiobjective optimization of an organic Rankine cycle (ORC) evaporator, operating with toluene as the working fluid, is presented in this paper for waste heat recovery (WHR) from the exhaust gases of a 2 MW Jenbacher JMS 612 GS-N.L. gas internal combustion engine. Indirect evaporation between the exhaust gas and the organic fluid in the parallel-plate heat exchanger (ITC2) implies irreversible heat transfer and high investment costs, which were considered as the objective functions to be minimized. Energy and exergy balances were applied to the system components, in addition to the phenomenological equations in the ITC2, to calculate global energy indicators of the system integrated with the gas engine, such as the thermal efficiency of the configuration, the heat recovery efficiency, the overall energy conversion efficiency, the absolute increase in engine thermal efficiency, and the reduction in the brake-specific fuel consumption. The results allowed calculation of the plate spacing, plate height, plate width, and chevron angle that minimize the investment cost and entropy generation of the equipment, reaching 22.04 m² of heat transfer area, 693.87 kW of energy transfer by heat recovery from the exhaust gas, and 41.6% overall thermal efficiency of the ORC as a bottoming cycle for the engine. These results support the adoption of this technology in the industrial sector through improved thermal efficiency and economic viability.
(This article belongs to the Special Issue Thermodynamic Optimization)

15 pages, 15663 KiB  
Article
Energy and New Economic Approach for Nearly Zero Energy Hotels
by Francesco Nocera, Salvatore Giuffrida, Maria Rosa Trovato and Antonio Gagliano
Entropy 2019, 21(7), 639; https://doi.org/10.3390/e21070639 - 28 Jun 2019
Cited by 34 | Viewed by 4508
Abstract
The paper addresses an important long-standing question regarding the energy-efficiency renovation of existing buildings, in this case hotels, towards nearly zero-energy building (nZEB) status. The renovation of existing hotels to achieve nZEB performance is one of the forefront goals of the EU's energy policy for 2050. Achieving the nZEB target for hotels is necessary not only to comply with changing regulations and legislation, but also to foster the competitiveness needed to secure new funding. Indeed, nZEB hotel status allows for a reduction of operating costs and an increase in energy security, meeting market and guest expectations. At present, no national nZEB target value for hotels has been set, despite the fact that hotels are among the most energy-intensive buildings. This paper presents a case study of the energy retrofit of an existing historical hotel located in southern Italy (Syracuse) in order to achieve nZEB status. Starting from the energy audit, the paper proposes a step-by-step approach to nZEB performance, with a perspective on the costs, in order to identify the most effective energy solutions. Such an approach allows useful insights regarding energy and economic–financial strategies for achieving nZEB standards to be highlighted. Moreover, the results of this paper provide stakeholders with useful information for quantifying the technical convenience and economic profitability of reaching an nZEB target, in order to avoid the expenses required by future energy retrofit programs.

20 pages, 1251 KiB  
Article
Estimating the Mutual Information between Two Discrete, Asymmetric Variables with Limited Samples
by Damián G. Hernández and Inés Samengo
Entropy 2019, 21(6), 623; https://doi.org/10.3390/e21060623 - 25 Jun 2019
Cited by 13 | Viewed by 8229
Abstract
Determining the strength of nonlinear, statistical dependencies between two variables is a crucial matter in many research fields. The established measure for quantifying such relations is the mutual information. However, estimating mutual information from limited samples is a challenging task. Since the mutual information is the difference of two entropies, the existing Bayesian estimators of entropy may be used to estimate information. This procedure, however, is still biased in the severely under-sampled regime. Here, we propose an alternative estimator that is applicable to those cases in which the marginal distribution of one of the two variables—the one with minimal entropy—is well sampled. The other variable, as well as the joint and conditional distributions, can be severely undersampled. We obtain a consistent estimator that presents very low bias, outperforming previous methods even when the sampled data contain few coincidences. As with other Bayesian estimators, our proposal focuses on the strength of the interaction between the two variables, without seeking to model the specific way in which they are related. A distinctive property of our method is that the main data statistic determining the amount of mutual information is the inhomogeneity of the conditional distribution of the low-entropy variable in those states in which the large-entropy variable registers coincidences.
(This article belongs to the Special Issue Bayesian Inference and Information Theory)
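
The decomposition exploited by entropy-based mutual-information estimators, referred to above, is the standard identity

I(X;Y) = H(X) - H(X \mid Y) = H(X) + H(Y) - H(X,Y)

so any bias in the individual entropy estimates propagates directly into the information estimate; the estimator proposed here is designed to control this bias when only the low-entropy marginal is well sampled.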

14 pages, 1441 KiB  
Article
Changed Temporal Structure of Neuromuscular Control, Rather Than Changed Intersegment Coordination, Explains Altered Stabilographic Regularity after a Moderate Perturbation of the Postural Control System
by Felix Wachholz, Tove Kockum, Thomas Haid and Peter Federolf
Entropy 2019, 21(6), 614; https://doi.org/10.3390/e21060614 - 21 Jun 2019
Cited by 11 | Viewed by 4953
Abstract
Sample entropy (SaEn) applied to center-of-pressure (COP) data provides a measure of the regularity of human postural control. Two mechanisms could contribute to altered COP regularity: first, an altered temporal structure (temporal regularity) of postural movements (H1); or second, altered coordination between segment movements (coordinative complexity; H2). The current study used rapid, voluntary head-shaking to perturb the postural control system, thus producing changes in COP regularity, in order to assess the two hypotheses. Sixteen healthy participants (age 26.5 ± 3.5 years; seven females), whose postural movements were tracked via 39 reflective markers, performed trials in which they first stood quietly on a force plate for 30 s, then shook their head for 10 s, and finally stood quietly for another 90 s. A principal component analysis (PCA) performed on the kinematic data extracted the main postural movement components. Temporal regularity was determined by calculating SaEn on the time series of these movement components. Coordinative complexity was determined by assessing the relative explained variance of the first five components. H1 was supported, but H2 was not. These results suggest that moderate perturbations of the postural control system produce altered temporal structures of the main postural movement components, but do not necessarily change the coordinative structure of intersegment movements.
(This article belongs to the Section Complexity)

40 pages, 1870 KiB  
Article
Structural Characteristics of Two-Sender Index Coding
by Chandra Thapa, Lawrence Ong, Sarah J. Johnson and Min Li
Entropy 2019, 21(6), 615; https://doi.org/10.3390/e21060615 - 21 Jun 2019
Cited by 7 | Viewed by 4130
Abstract
This paper studies index coding with two senders. In this setup, source messages are distributed among the senders, possibly with common messages. In addition, there are multiple receivers, each having some messages a priori, known as side-information, and requesting one unique message such that each message is requested by only one receiver. Index coding in this setup is called two-sender unicast index coding (TSUIC). The main goal is to find the shortest aggregate normalized codelength, which is expressed as the optimal broadcast rate. In this work, for a given TSUIC problem, we first form three independent sub-problems, each consisting of only the subset of messages available exclusively at one of the senders or at both senders. We then express the optimal broadcast rate of the TSUIC problem as a function of the optimal broadcast rates of these independent sub-problems. In this way, we discover the structural characteristics of TSUIC. For the proofs of our results, we utilize confusion graphs and coding techniques used in single-sender index coding. To adapt the confusion-graph technique to TSUIC, we introduce a new graph-coloring approach, different from normal graph coloring, which we call two-sender graph coloring, and propose a way of grouping the vertices to analyze the number of colors used. We further determine a class of TSUIC instances for which a certain type of side-information can be removed without affecting their optimal broadcast rates. Finally, we generalize the results for a class of TSUIC problems to multiple senders.
(This article belongs to the Special Issue Multiuser Information Theory II)

24 pages, 6063 KiB  
Article
Machine Learning Techniques to Identify Antimicrobial Resistance in the Intensive Care Unit
by Sergio Martínez-Agüero, Inmaculada Mora-Jiménez, Jon Lérida-García, Joaquín Álvarez-Rodríguez and Cristina Soguero-Ruiz
Entropy 2019, 21(6), 603; https://doi.org/10.3390/e21060603 - 18 Jun 2019
Cited by 42 | Viewed by 8859
Abstract
The presence of bacteria resistant to specific antibiotics is one of the greatest threats to the global health system. According to the World Health Organization, antimicrobial resistance has already reached alarming levels in many parts of the world, involving a social and economic burden for the patient, for the health system, and for society in general. Because of the critical health status of patients in the intensive care unit (ICU), time is crucial for identifying bacteria and their resistance to antibiotics. Since common antibiotic resistance tests require between 24 and 48 h after the culture is collected, we propose applying machine learning (ML) techniques to determine whether a bacterium will be resistant to different families of antimicrobials. For this purpose, clinical and demographic features of the patient, as well as data from cultures and antibiograms, are considered. From a population point of view, we also show graphically the relationship between different bacteria and families of antimicrobials by performing correspondence analysis. The results of the ML techniques evidence non-linear relationships that help to identify antimicrobial resistance in the ICU, with performance dependent on the family of antimicrobials. A change in the trend of antimicrobial resistance is also evidenced.

16 pages, 7470 KiB  
Article
A New Deep Learning Based Multi-Spectral Image Fusion Method
by Jingchun Piao, Yunfan Chen and Hyunchul Shin
Entropy 2019, 21(6), 570; https://doi.org/10.3390/e21060570 - 5 Jun 2019
Cited by 51 | Viewed by 7974
Abstract
In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method using a deep neural network. In our method, a Siamese convolutional neural network (CNN) is applied to automatically generate a weight map representing the saliency of each pixel for a pair of source images. A CNN automatically encodes an image into a feature domain for classification. By applying the proposed method, the key problems in image fusion, namely activity-level measurement and fusion-rule design, can be addressed in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstruction result is better suited to the human visual system. In addition, the qualitative visual effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that our proposed method achieves competitive results in terms of both quantitative assessment and visual quality.
(This article belongs to the Section Information Theory, Probability and Statistics)

11 pages, 2231 KiB  
Article
Non-Thermal Quantum Engine in Transmon Qubits
by Cleverson Cherubim, Frederico Brito and Sebastian Deffner
Entropy 2019, 21(6), 545; https://doi.org/10.3390/e21060545 - 29 May 2019
Cited by 28 | Viewed by 6510
Abstract
The design and implementation of quantum technologies necessitates an understanding of thermodynamic processes in the quantum domain. In stark contrast to macroscopic thermodynamics, at the quantum scale processes generically operate far from equilibrium and are governed by fluctuations. Thus, experimental insight and empirical findings are indispensable in developing a comprehensive framework. To this end, we theoretically propose an experimentally realistic quantum engine that uses transmon qubits as the working substance. We solve the dynamics analytically and calculate the engine's efficiency.

37 pages, 801 KiB  
Review
Approximate Entropy and Sample Entropy: A Comprehensive Tutorial
by Alfonso Delgado-Bonal and Alexander Marshak
Entropy 2019, 21(6), 541; https://doi.org/10.3390/e21060541 - 28 May 2019
Cited by 465 | Viewed by 36992
Abstract
Approximate Entropy and Sample Entropy are two algorithms for determining the regularity of a series of data based on the existence of patterns. Despite their similarities, the theoretical ideas behind these techniques are different but usually ignored. This paper aims to be a complete guide to the theory and application of the algorithms, intended to explain their characteristics in detail to researchers from different fields. While initially developed for physiological applications, both algorithms have been used in other fields, such as medicine, telecommunications, economics, and Earth sciences. In this paper, we explain the theoretical aspects involving Information Theory and Chaos Theory, provide simple source codes for their computation, and illustrate the techniques with a step-by-step example of how to use the algorithms properly. This paper is not intended to be an exhaustive review of all previous applications of the algorithms, but rather a comprehensive tutorial for which no previous knowledge is required to understand the methodology.
(This article belongs to the Special Issue Approximate, Sample and Multiscale Entropy)
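
A compact Python sketch of the Sample Entropy computation discussed in the tutorial, following the usual Richman–Moorman convention of N − m template vectors and excluded self-matches (the paper supplies its own reference source codes; this is an independent illustration):

import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that
    sequences matching for m points (Chebyshev distance <= r) also
    match for m + 1 points; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)  # a common default tolerance
    n = len(x)

    def count_matches(length):
        # n - m template vectors of the given length (Richman & Moorman)
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(500)))  # white noise: high SampEn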

15 pages, 2696 KiB  
Article
EEG Characterization of the Alzheimer’s Disease Continuum by Means of Multiscale Entropies
by Aarón Maturana-Candelas, Carlos Gómez, Jesús Poza, Nadia Pinto and Roberto Hornero
Entropy 2019, 21(6), 544; https://doi.org/10.3390/e21060544 - 28 May 2019
Cited by 47 | Viewed by 6239
Abstract
Alzheimer’s disease (AD) is a neurodegenerative disorder with high prevalence, known for its highly disabling symptoms. The aim of this study was to characterize the alterations in the irregularity and the complexity of brain activity along the AD continuum. Both irregularity and complexity can be studied by applying entropy-based measures across multiple temporal scales. In this regard, multiscale sample entropy (MSE) and refined multiscale spectral entropy (rMSSE) were calculated from electroencephalographic (EEG) data. Five minutes of resting-state EEG activity were recorded from 51 healthy controls, 51 subjects with mild cognitive impairment (MCI), 51 mild AD patients (ADMIL), 50 moderate AD patients (ADMOD), and 50 severe AD patients (ADSEV). Our results show statistically significant differences (p-values < 0.05, FDR-corrected Kruskal–Wallis test) between the five groups at each temporal scale. Additionally, average slope values and areas under the MSE and rMSSE curves revealed significant changes in complexity, mainly for the controls vs. MCI, MCI vs. ADMIL, and ADMOD vs. ADSEV comparisons (p-values < 0.05, FDR-corrected Mann–Whitney U-test). These findings indicate that MSE and rMSSE reflect the neuronal disturbances associated with the development of dementia, and may contribute to the development of new tools to track AD progression.
(This article belongs to the Special Issue Entropy Applications in EEG/MEG)
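
The multiscale analysis rests on the usual coarse-graining step: at scale factor τ, the EEG series {x_i} of length N is averaged over non-overlapping windows,

y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau + 1}^{j\tau} x_i, \qquad 1 \leq j \leq N/\tau

and the entropy measure (sample entropy for MSE, the spectral variant for rMSSE) is recomputed on each coarse-grained series, producing the entropy-versus-scale curves whose slopes and areas are compared between groups.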

18 pages, 771 KiB  
Article
Is Independence Necessary for a Discontinuous Phase Transition within the q-Voter Model?
by Angelika Abramiuk, Jakub Pawłowski and Katarzyna Sznajd-Weron
Entropy 2019, 21(5), 521; https://doi.org/10.3390/e21050521 - 23 May 2019
Cited by 18 | Viewed by 4768
Abstract
We ask whether a discontinuous phase transition, and the related social hysteresis, are possible within the q-voter model with anticonformity. Previously, it was claimed that within the q-voter model social hysteresis can emerge only because of independent behavior, and that for the model with anticonformity only continuous phase transitions are possible. However, this claim was derived from a model in which the size of the influence group needed for conformity was the same as the size of the group needed for anticonformity. Here, we abandon this assumption of equality between the two types of social response and introduce a generalized model in which the size of the influence group needed for conformity, q_c, and the size of the influence group needed for anticonformity, q_a, are independent variables, with q_c ≠ q_a in general. We investigate the model on the complete graph, as was done for the original q-voter model with anticonformity, and we show that such a generalized model displays both types of phase transition depending on the parameters q_c and q_a.
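
A minimal Monte Carlo sketch of the generalized model on a complete graph, under the usual q-voter conventions (unanimity of the influence group required; the parameter p_anti and function names below are ours, for illustration only):

import numpy as np

def qvoter_step(spins, qc, qa, p_anti, rng):
    """One update of the generalized q-voter model on a complete graph.
    With probability p_anti the target acts as an anticonformist and
    opposes a unanimous group of size qa; otherwise it conforms to a
    unanimous group of size qc."""
    n = len(spins)
    target = rng.integers(n)
    others = np.delete(np.arange(n), target)
    if rng.random() < p_anti:
        group = rng.choice(others, size=qa, replace=False)
        if np.all(spins[group] == spins[group][0]):
            spins[target] = -spins[group][0]
    else:
        group = rng.choice(others, size=qc, replace=False)
        if np.all(spins[group] == spins[group][0]):
            spins[target] = spins[group][0]

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=200)
for _ in range(50_000):
    qvoter_step(spins, qc=4, qa=3, p_anti=0.3, rng=rng)
print("magnetization:", spins.mean())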

16 pages, 868 KiB  
Review
Physical Layer Key Generation in 5G and Beyond Wireless Communications: Challenges and Opportunities
by Guyue Li, Chen Sun, Junqing Zhang, Eduard Jorswieck, Bin Xiao and Aiqun Hu
Entropy 2019, 21(5), 497; https://doi.org/10.3390/e21050497 - 15 May 2019
Cited by 87 | Viewed by 11943
Abstract
The fifth generation (5G) and beyond wireless communications will enable many exciting applications and trigger massive data connections carrying private, confidential, and sensitive information. The security of wireless communications is conventionally established by cryptographic schemes and protocols, in which secret key distribution is one of the essential primitives. However, traditional cryptography-based key distribution protocols may be challenged in 5G and beyond communications because of special features such as device-to-device and heterogeneous communications and ultra-low-latency requirements. Channel reciprocity-based key generation (CRKG) is an emerging physical-layer technique for establishing secret keys between devices. This article reviews CRKG when the 5G and beyond networks employ three candidate technologies: duplex modes, massive multiple-input multiple-output (MIMO), and mmWave communications. We identify the opportunities and challenges for CRKG and provide corresponding solutions. To further demonstrate the feasibility of CRKG in practical communication systems, we overview existing prototypes with different IoT protocols and examine their performance in real-world environments. This article shows the feasibility and promising performance of CRKG, with the potential to be commercialized.
(This article belongs to the Special Issue Information-Theoretic Security II)

11 pages, 1052 KiB  
Article
Quantum Probes for Ohmic Environments at Thermal Equilibrium
by Fahimeh Salari Sehdaran, Matteo Bina, Claudia Benedetti and Matteo G. A. Paris
Entropy 2019, 21(5), 486; https://doi.org/10.3390/e21050486 - 12 May 2019
Cited by 23 | Viewed by 3907
Abstract
It is often the case that the environment of a quantum system may be described as a bath of oscillators with an ohmic density of states. In turn, the precise characterization of these classes of environments is a crucial tool to engineer decoherence or to tailor quantum information protocols. Recently, the use of quantum probes to characterize ohmic environments at zero temperature has been discussed, showing that a single qubit provides precise estimation of the cutoff frequency. On the other hand, thermal noise often spoils quantum probing schemes, and for this reason we here extend the analysis to a complex system at thermal equilibrium. In particular, we discuss the interplay between thermal fluctuations and time evolution in determining the precision attainable by quantum probes. Our results show that the presence of thermal fluctuations degrades the precision for low values of the cutoff frequency, i.e., values of the order of the temperature, ω_c ∼ T (in natural units). For larger values of ω_c, decoherence is mostly due to the structure of the environment rather than to thermal fluctuations, such that quantum probing by a single qubit remains an effective estimation procedure.
(This article belongs to the Special Issue Open Quantum Systems (OQS) for Quantum Technologies)
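
For reference, such environments are usually specified by a spectral density of the form (a common parametrization; the paper's normalization may differ):

J(\omega) = \omega \left( \frac{\omega}{\omega_c} \right)^{s-1} e^{-\omega/\omega_c}

with s = 1 the ohmic case (s < 1 sub-ohmic, s > 1 super-ohmic) and ω_c the cutoff frequency that the qubit probe is used to estimate.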

18 pages, 10608 KiB  
Article
Solidification Microstructures of the Ingots Obtained by Arc Melting and Cold Crucible Levitation Melting in TiNbTaZr Medium-Entropy Alloy and TiNbTaZrX (X = V, Mo, W) High-Entropy Alloys
by Takeshi Nagase, Kiyoshi Mizuuchi and Takayoshi Nakano
Entropy 2019, 21(5), 483; https://doi.org/10.3390/e21050483 - 10 May 2019
Cited by 74 | Viewed by 9223
Abstract
The solidification microstructures of the TiNbTaZr medium-entropy alloy (MEA) and the TiNbTaZrX (X = V, Mo, W) high-entropy alloys (HEAs), including the TiNbTaZrMo bio-HEA, were investigated. Equiaxed dendrite structures were observed in the ingots prepared by arc melting, regardless of the position within the ingots and the alloy system. In addition, no significant difference in the solidification microstructure was observed in the TiNbTaZrMo bio-HEA between the arc-melted (AM) ingots and the cold-crucible-levitation-melted (CCLM) ingots. A cold shut was observed in the AM ingots but not in the CCLM ingots. The interdendrite regions tended to be enriched in Ti and Zr in the TiNbTaZr MEA and the TiNbTaZrX (X = V, Mo, W) HEAs. The distribution coefficients during solidification, estimated by thermodynamic calculations, could explain the distribution of the constituent elements in the dendrite and interdendrite regions. The thermodynamic calculations indicated that an increase in the concentration of the low-melting-temperature V (2183 K) leads to a monotonic decrease in the liquidus temperature (T_L), and that increases in the concentrations of the high-melting-temperature Mo (2896 K) and W (3695 K) lead to a monotonic increase in T_L in the TiNbTaZrX_x (X = V, Mo, W; x = 0–2) HEAs.
(This article belongs to the Special Issue High-Entropy Materials)

17 pages, 1899 KiB  
Article
3D CNN-Based Speech Emotion Recognition Using K-Means Clustering and Spectrograms
by Noushin Hajarolasvadi and Hasan Demirel
Entropy 2019, 21(5), 479; https://doi.org/10.3390/e21050479 - 8 May 2019
Cited by 137 | Viewed by 11896
Abstract
Detecting human intentions and emotions helps improve human–robot interactions. Emotion recognition has been a challenging research direction in the past decade. This paper proposes an emotion recognition system based on the analysis of speech signals. First, we split each speech signal into overlapping frames of the same length. Next, we extract an 88-dimensional vector of audio features, including Mel-frequency cepstral coefficients (MFCC), pitch, and intensity, for each of the respective frames. In parallel, the spectrogram of each frame is generated. In the final preprocessing step, by applying k-means clustering to the extracted features of all frames of each audio signal, we select the k most discriminant frames, namely keyframes, to summarize the speech signal. Then, the sequence of the corresponding spectrograms of the keyframes is encapsulated in a 3D tensor. These tensors are used to train and test a 3D convolutional neural network using a 10-fold cross-validation approach. The proposed 3D CNN has two convolutional layers and one fully connected layer. Experiments are conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE), Ryerson Multimedia Laboratory (RML), and eNTERFACE’05 databases. The results are superior to the state-of-the-art methods reported in the literature.
(This article belongs to the Special Issue Statistical Machine Learning for Human Behaviour Analysis)
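
A sketch of the k-means keyframe-selection step described in the abstract; the nearest-to-centroid selection rule and the scikit-learn usage are our assumptions for illustration, not the authors' exact implementation:

import numpy as np
from sklearn.cluster import KMeans

def select_keyframes(features: np.ndarray, k: int = 9) -> list:
    """Cluster per-frame feature vectors with k-means and keep, for
    each cluster, the frame closest to the centroid as a keyframe."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    keyframes = [
        int(np.argmin(np.linalg.norm(features - c, axis=1)))
        for c in km.cluster_centers_
    ]
    return sorted(keyframes)

frames = np.random.rand(120, 88)  # e.g., 120 frames x 88 audio features
print(select_keyframes(frames, k=9))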

27 pages, 432 KiB  
Article
Distributed Hypothesis Testing with Privacy Constraints
by Atefeh Gilani, Selma Belhadj Amor, Sadaf Salehkalaibar and Vincent Y. F. Tan
Entropy 2019, 21(5), 478; https://doi.org/10.3390/e21050478 - 7 May 2019
Cited by 16 | Viewed by 4074
Abstract
We revisit the distributed hypothesis testing (or hypothesis testing with communication constraints) problem from the viewpoint of privacy. Instead of observing the raw data directly, the transmitter observes a sanitized or randomized version of it. We impose an upper bound on the mutual information between the raw and randomized data. Under this scenario, the receiver, which is also provided with side information, is required to decide whether the null or the alternative hypothesis is in effect. We first provide a general lower bound on the type-II error exponent for an arbitrary pair of hypotheses. Next, we show that if the distribution under the alternative hypothesis is the product of the marginals of the distribution under the null (i.e., testing against independence), then the exponent is known exactly. Moreover, we show that the strong converse property holds. Using ideas from Euclidean information theory, we also provide an approximate expression for the exponent when the communication rate is low and the privacy level is high. Finally, we illustrate our results with a binary and a Gaussian example. Full article
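For orientation, recall the classical testing-against-independence result without a privacy constraint (Ahlswede and Csiszár): when the transmitter may communicate at rate R, the optimal type-II error exponent is

$$\theta(R)=\max_{P_{U|X}\,:\; I(U;X)\le R} I(U;Y),$$

where U is the description sent to the receiver and Y is the side information. The privacy-constrained setting studied here additionally bounds the mutual information between the raw data and its sanitized version, which shrinks the feasible set of descriptions.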
48 pages, 1647 KiB  
Article
What Caused What? A Quantitative Account of Actual Causation Using Dynamical Causal Networks
by Larissa Albantakis, William Marshall, Erik Hoel and Giulio Tononi
Entropy 2019, 21(5), 459; https://doi.org/10.3390/e21050459 - 2 May 2019
Cited by 43 | Viewed by 13932
Abstract
Actual causation is concerned with the question: “What caused what?” Consider a transition between two states within a system of interacting elements, such as an artificial neural network or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system’s causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the “what caused what?” question. Counterfactual accounts of actual causation, based on graphical models paired with system interventions, have demonstrated initial success in addressing specific problem cases, in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements based on system interventions and partitions, which considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation. Full article
(This article belongs to the Special Issue Integrated Information Theory)
13 pages, 3101 KiB  
Article
A Method for Diagnosing Gearboxes of Means of Transport Using Multi-Stage Filtering and Entropy
by Tomasz Figlus
Entropy 2019, 21(5), 441; https://doi.org/10.3390/e21050441 - 27 Apr 2019
Cited by 25 | Viewed by 3439
Abstract
The paper presents a method of processing vibration signals designed to detect damage to the gear wheels of gearboxes in means of transport. The method combines entropy calculation with multi-stage filtering, realized by means of digital filters and the Walsh–Hadamard transform. It enables the extraction of vibration symptoms of gear damage from the complex vibration signal of a gearbox. The combination of multi-stage filtering and entropy eliminates fast-changing vibration impulses, which interfere with the damage diagnosis process, and makes it possible to obtain a synthetic signal that provides information about the state of the gearing. The paper demonstrates the usefulness of the developed method in the diagnosis of a gearbox in which two types of gearing damage were simulated: tooth chipping and damage to the working surface of the teeth. The research shows that applying the proposed method of vibration signal processing enables observation of qualitative and quantitative changes in the entropy signal after denoising, which are unambiguous symptoms of the diagnosed damage. Full article
(This article belongs to the Section Signal and Data Analysis)
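A minimal sketch of the two ingredients follows, assuming a signal length that is a power of two; the thresholding rule and window size are illustrative choices, not the paper's tuned values.

```python
# Illustrative sketch (not the paper's exact pipeline): suppress small,
# fast-changing components in the Walsh-Hadamard domain, then track the
# windowed Shannon entropy of the denoised signal.
import numpy as np
from scipy.linalg import hadamard

def wht_denoise(x, keep=0.1):
    """x: 1D numpy array whose length is a power of two."""
    n = len(x)
    H = hadamard(n)
    X = H @ x / n                    # forward Walsh-Hadamard transform
    thr = np.quantile(np.abs(X), 1 - keep)
    X[np.abs(X) < thr] = 0.0         # keep only the strongest coefficients
    return H @ X                     # inverse transform (H @ H = n * I)

def windowed_entropy(x, win=256, bins=32):
    ent = []
    for i in range(0, len(x) - win, win):
        p, _ = np.histogram(x[i:i + win], bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        ent.append(-np.sum(p * np.log2(p)))  # Shannon entropy of the window
    return np.array(ent)
```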
33 pages, 2684 KiB  
Article
Bounded Rational Decision-Making from Elementary Computations That Reduce Uncertainty
by Sebastian Gottwald and Daniel A. Braun
Entropy 2019, 21(4), 375; https://doi.org/10.3390/e21040375 - 6 Apr 2019
Cited by 26 | Viewed by 8357
Abstract
In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou–Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures. Full article
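A small numerical illustration of the central notion: an elementary computation in the sense above, i.e., the inverse of a Pigou–Dalton transfer, moves probability mass from a less likely alternative to a more likely one and strictly reduces Shannon entropy (one of the generalized entropies that are monotone under such transfers).

```python
# The inverse of a Pigou-Dalton transfer concentrates probability mass
# and therefore reduces uncertainty, as measured here by Shannon entropy.
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def inverse_pigou_dalton(p, i, j, eps):
    """Move eps mass from the smaller p[i] to the larger p[j]."""
    assert p[i] >= eps and p[j] >= p[i]
    q = p.copy()
    q[i] -= eps
    q[j] += eps
    return q

p = np.array([0.4, 0.35, 0.25])
q = inverse_pigou_dalton(p, i=2, j=0, eps=0.1)      # -> [0.5, 0.35, 0.15]
print(shannon_entropy(p), ">", shannon_entropy(q))  # entropy strictly decreases
```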
20 pages, 1486 KiB  
Article
Increase in Mutual Information During Interaction with the Environment Contributes to Perception
by Daya Shankar Gupta and Andreas Bahmer
Entropy 2019, 21(4), 365; https://doi.org/10.3390/e21040365 - 4 Apr 2019
Cited by 24 | Viewed by 5730
Abstract
Perception and motor interaction with physical surroundings can be analyzed through the changes in the probability laws governing the two possible outcomes of neuronal activity, namely the presence or absence of spikes (binary states). Perception and motor interaction with the physical environment are partly accounted for by a reduction in entropy within the probability distributions of the binary states of neurons in distributed neural circuits, given the knowledge about the characteristics of stimuli in the physical surroundings. This reduction in the total entropy of multiple pairs of circuits in networks, by an amount equal to the increase in mutual information, occurs as sensory information is processed successively from lower to higher cortical areas, or between different areas at the same hierarchical level but belonging to different networks. The increase in mutual information is partly accounted for by temporal coupling as well as by synaptic connections, as proposed by Bahmer and Gupta (Front. Neurosci. 2018). We propose that robust increases in mutual information, measuring the association between the characteristics of sensory inputs and the connectivity patterns of neural circuits, are partly responsible for perception and for successful motor interactions with physical surroundings. The increase in mutual information, given the knowledge about environmental sensory stimuli and the type of motor response produced, is responsible for the coupling between action and perception. In addition, the processing of sensory inputs within neural circuits, with no prior knowledge of the occurrence of a sensory stimulus, increases Shannon information. Consequently, the increase in surprise serves to increase the evidence for the sensory model of the physical surroundings. Full article
(This article belongs to the Special Issue Permutation Entropy & Its Interdisciplinary Applications)
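As a concrete illustration of the quantity involved, the following sketch estimates the mutual information between two binary spike trains from their empirical joint distribution; the coupling model is a toy stand-in for the temporal and synaptic coupling discussed above.

```python
# Mutual information (in bits) between two binary spike trains, estimated
# from the empirical joint distribution of their 0/1 states.
import numpy as np

def mutual_information(x, y):
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.where(rng.random(10000) < 0.8, x, 1 - x)  # y follows x 80% of the time
print(mutual_information(x, y))  # ~0.28 bits; ~0 for independent trains
```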
29 pages, 17129 KiB  
Article
First-Stage Prostate Cancer Identification on Histopathological Images: Hand-Driven versus Automatic Learning
by Gabriel García, Adrián Colomer and Valery Naranjo
Entropy 2019, 21(4), 356; https://doi.org/10.3390/e21040356 - 2 Apr 2019
Cited by 19 | Viewed by 6154
Abstract
The analysis of histopathological images is the most reliable procedure for identifying prostate cancer. Most studies aim to develop computer-aided systems that address the Gleason grading problem. In contrast, we delve into the discrimination between healthy and cancerous tissue at its earliest stage, focusing only on the information contained in the automatically segmented gland candidates. We propose a hand-driven learning approach, in which we perform an exhaustive hand-crafted feature extraction stage that combines, in a novel way, descriptors of morphology, texture, fractals, and contextual information of the candidates under study. We then carry out an in-depth statistical analysis to select the most relevant features, which constitute the inputs to the optimised machine-learning classifiers. Additionally, we apply, for the first time on segmented prostate glands, deep-learning algorithms based on a modified version of the popular VGG19 neural network. We fine-tune the last convolutional block of the architecture to provide the model with specific knowledge about the gland images. The hand-driven learning approach, using a nonlinear Support Vector Machine, slightly outperforms the other experiments, with a final multi-class accuracy of 0.876 ± 0.026 in the discrimination between false glands (artefacts), benign glands, and Gleason grade 3 glands. Full article
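A minimal sketch of this fine-tuning setup using torchvision's VGG-19 (the training loop, data pipeline, and hyperparameters are omitted; the layer index below reflects torchvision's layout, in which the fifth convolutional block starts at features[28]).

```python
# Freeze VGG-19 except its last convolutional block and adapt the head
# to the three classes named in the abstract.
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights="IMAGENET1K_V1")

# Freeze everything, then unfreeze only the final convolutional block.
for p in model.parameters():
    p.requires_grad = False
for p in model.features[28:].parameters():
    p.requires_grad = True

# Three output classes: artefacts, benign glands, Gleason grade 3 glands.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)
```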
12 pages, 320 KiB  
Article
Entanglement 25 Years after Quantum Teleportation: Testing Joint Measurements in Quantum Networks
by Nicolas Gisin
Entropy 2019, 21(3), 325; https://doi.org/10.3390/e21030325 - 26 Mar 2019
Cited by 60 | Viewed by 8070
Abstract
Twenty-five years after the invention of quantum teleportation, the concept of entanglement has gained enormous popularity. This is especially gratifying to those who remember that entanglement was not even taught at universities until the 1990s. Today, entanglement is often presented as a resource, the resource of quantum information science and technology. However, entanglement is exploited twice in quantum teleportation. First, entanglement is the “quantum teleportation channel”, i.e., entanglement between distant systems. Second, entanglement appears in the eigenvectors of the joint measurement that Alice, the sender, has to perform jointly on the quantum state to be teleported and her half of the “quantum teleportation channel”, i.e., entanglement enabling entirely new kinds of quantum measurements. I emphasize how poorly this second kind of entanglement is understood. In particular, I use quantum networks, in which each party connected to several nodes performs a joint measurement, to illustrate that the quantumness of such joint measurements remains elusive, escaping today’s available tools to detect and quantify it. Full article
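The canonical example of such a joint measurement is the Bell-state measurement of teleportation, whose four eigenvectors are all maximally entangled:

$$|\Phi^{\pm}\rangle=\tfrac{1}{\sqrt{2}}\big(|00\rangle\pm|11\rangle\big),\qquad |\Psi^{\pm}\rangle=\tfrac{1}{\sqrt{2}}\big(|01\rangle\pm|10\rangle\big).$$

The puzzle highlighted in the article is that, in networks, certifying the quantumness of measurements of this kind turns out to be far harder than certifying entanglement between distant systems.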
21 pages, 13252 KiB  
Article
Image Encryption Based on Pixel-Level Diffusion with Dynamic Filtering and DNA-Level Permutation with 3D Latin Cubes
by Taiyong Li, Jiayi Shi, Xinsheng Li, Jiang Wu and Fan Pan
Entropy 2019, 21(3), 319; https://doi.org/10.3390/e21030319 - 24 Mar 2019
Cited by 90 | Viewed by 6125
Abstract
Image encryption is one of the essential tasks in image security. In this paper, we propose a novel approach that integrates a hyperchaotic system, pixel-level dynamic filtering, DNA computing, and operations on 3D Latin cubes, named DFDLC, for image encryption. Specifically, the approach consists of five stages: (1) a newly proposed 5D hyperchaotic system with two positive Lyapunov exponents is applied to generate a pseudorandom sequence; (2) for each pixel in the image, a filtering operation with different templates, called dynamic filtering, is conducted to diffuse the image; (3) DNA encoding is applied to the diffused image, and the DNA-level image is then transformed into several 3D DNA-level cubes; (4) a Latin cube operation is applied to each DNA-level cube; and (5) all the DNA cubes are integrated and decoded into a 2D cipher image. Extensive experiments are conducted on public testing images, and the results show that the proposed DFDLC achieves state-of-the-art results in terms of several evaluation criteria. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
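A sketch of the DNA-encoding idea in stage (3) follows. The logistic map below stands in for the paper's 5D hyperchaotic system, whose equations are not reproduced here, and the 2-bit-to-base mapping is one of the standard DNA coding rules.

```python
# Toy illustration of DNA encoding for image encryption: a chaotic sequence
# supplies a key byte, and each 8-bit pixel becomes four DNA bases.
import numpy as np

RULE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}  # one standard coding rule

def chaotic_sequence(n, x0=0.4, mu=3.99):
    """Logistic map stand-in for the paper's 5D hyperchaotic generator."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1 - x)
        xs[i] = x
    return xs

def dna_encode(pixel):
    """Encode one 8-bit pixel as four DNA bases (2 bits per base)."""
    return [RULE[(pixel >> s) & 0b11] for s in (6, 4, 2, 0)]

key = (chaotic_sequence(1) * 256).astype(np.uint8)[0]
print(dna_encode(173), dna_encode(173 ^ key))  # plain vs. key-diffused pixel
```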
14 pages, 446 KiB  
Article
Using Permutations for Hierarchical Clustering of Time Series
by Jose S. Cánovas, Antonio Guillamón and María Carmen Ruiz-Abellón
Entropy 2019, 21(3), 306; https://doi.org/10.3390/e21030306 - 21 Mar 2019
Cited by 5 | Viewed by 4141
Abstract
Two distances based on permutations are considered to measure the similarity of two time series according to their strength of dependency. The distance measures are used together with different linkages to obtain hierarchical clustering methods for time series by dependency. We apply these distances to both simulated and real data series. For the simulated time series, the distances show good clustering results for both linear and non-linear dependencies. The effects of the embedding dimension and the linkage method are also analyzed. Finally, several real data series are properly clustered using the proposed method. Full article
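A sketch of the general recipe follows, with one plausible permutation-based dissimilarity; the paper's two distances, which specifically target the dependence between series, are defined differently.

```python
# Sketch: compare empirical ordinal-pattern distributions of time series,
# then feed the resulting distance matrix to hierarchical clustering.
import numpy as np
from itertools import permutations
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def pattern_distribution(x, m=3):
    pats = list(permutations(range(m)))
    counts = dict.fromkeys(pats, 0)
    for i in range(len(x) - m + 1):
        counts[tuple(np.argsort(x[i:i + m]))] += 1   # ordinal pattern of window
    p = np.array([counts[pt] for pt in pats], dtype=float)
    return p / p.sum()

def perm_distance(x, y, m=3):
    return np.abs(pattern_distribution(x, m) - pattern_distribution(y, m)).sum()

series = [np.random.default_rng(s).standard_normal(500) for s in range(6)]
D = np.array([[perm_distance(a, b) for b in series] for a in series])
Z = linkage(squareform(D), method="complete")      # or "single", "average", ...
print(fcluster(Z, t=2, criterion="maxclust"))
```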
19 pages, 1090 KiB  
Article
Probability Distributions with Singularities
by Federico Corberi and Alessandro Sarracino
Entropy 2019, 21(3), 312; https://doi.org/10.3390/e21030312 - 21 Mar 2019
Cited by 11 | Viewed by 4290
Abstract
In this paper we review some general properties of probability distributions which exhibit a singular behavior. After introducing the matter with several examples based on various models of statistical mechanics, we discuss, with the help of such paradigms, the underlying mathematical mechanism producing the singularity and other topics, such as the condensation of fluctuations, the relationships with ordinary phase transitions, the giant response associated with anomalous fluctuations, and the interplay with fluctuation relations. Full article
19 pages, 878 KiB  
Article
Limiting Uncertainty Relations in Laser-Based Measurements of Position and Velocity Due to Quantum Shot Noise
by Andreas Fischer
Entropy 2019, 21(3), 264; https://doi.org/10.3390/e21030264 - 8 Mar 2019
Cited by 10 | Viewed by 4005
Abstract
With the ongoing progress of optoelectronic components, laser-based measurement systems allow measurements of position as well as displacement, strain, and velocity with unbeatable speed and low measurement uncertainty. The performance limit is often studied for a single measurement setup, but a fundamental comparison of different measurement principles with respect to the ultimate limit due to quantum shot noise is rare. For this purpose, the Cramér–Rao bound is described as a universal information-theoretic tool for calculating the minimal achievable measurement uncertainty of different measurement techniques, and a review of the respective lower bounds for laser-based measurements of position, displacement, strain, and velocity at particles and surfaces is presented. As a result, the calculated Cramér–Rao bounds of different measurement principles have similar forms for each measurand, including an inverse proportionality with respect to the number of photons and, in the case of the position measurement for instance, the wave number squared. Furthermore, an uncertainty principle between the position uncertainty and the wave vector uncertainty is identified, i.e., the measurement uncertainty is minimized by maximizing the wave vector uncertainty. Additionally, physically complementary measurement approaches, such as interferometric and time-of-flight position measurements, as well as time-of-flight and Doppler particle velocity measurements, are shown to attain the same fundamental limit. Since most laser-based measurements perform similarly with respect to quantum shot noise, realized measurement systems differ mainly through the optoelectronic components available for the concrete measurement task. Full article
(This article belongs to the Special Issue Entropic Uncertainty Relations and Their Applications)
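Schematically, the shot-noise-limited position bound discussed above takes the form (constants of order one omitted)

$$\sigma_x \;\gtrsim\; \frac{1}{k\sqrt{N}},\qquad\text{i.e.}\qquad \sigma_x^{2}\;\propto\;\frac{1}{N\,k^{2}},$$

with N the number of detected photons and k the wave number; analogous expressions hold for the other measurands.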
35 pages, 1951 KiB  
Article
Informed Weighted Non-Negative Matrix Factorization Using αβ-Divergence Applied to Source Apportionment
by Gilles Delmaire, Mahmoud Omidvar, Matthieu Puigt, Frédéric Ledoux, Abdelhakim Limem, Gilles Roussel and Dominique Courcot
Entropy 2019, 21(3), 253; https://doi.org/10.3390/e21030253 - 6 Mar 2019
Cited by 7 | Viewed by 5305
Abstract
In this paper, we propose informed weighted non-negative matrix factorization (NMF) methods using an αβ-divergence cost function. The available information comes from the exact knowledge or boundedness of some components of the factorization—which are used to structure the NMF parameterization—together with the row sum-to-one property of one matrix factor. In this contribution, we extend our previous work, which partly involved some of these aspects, to αβ-divergence cost functions. We derive new update rules which extend the previous ones and take the available information into account. Experiments conducted for several operating conditions on realistic simulated mixtures of particulate matter sources show the relevance of these approaches. Results from a real dataset campaign are also presented and validated with expert knowledge. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
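A minimal weighted-NMF sketch in the Euclidean special case of the αβ family is given below; the paper's general αβ update rules, informed parameterization, and weighting scheme are more involved than this illustration.

```python
# Weighted NMF with multiplicative updates (Euclidean cost) and a row
# sum-to-one constraint on the profile factor, as in source apportionment.
import numpy as np

def weighted_nmf(X, W, rank, n_iter=500, eps=1e-9):
    rng = np.random.default_rng(0)
    G = rng.random((X.shape[0], rank))       # contributions
    F = rng.random((rank, X.shape[1]))       # source profiles
    for _ in range(n_iter):
        F *= (G.T @ (W * X)) / (G.T @ (W * (G @ F)) + eps)
        F /= F.sum(axis=1, keepdims=True)    # row sum-to-one property
        G *= ((W * X) @ F.T) / ((W * (G @ F)) @ F.T + eps)
    return G, F

X = np.abs(np.random.default_rng(1).standard_normal((50, 8)))
W = np.ones_like(X)                          # confidence weights on entries
G, F = weighted_nmf(X, W, rank=3)
```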
28 pages, 5871 KiB  
Article
Bayesian Compressive Sensing of Sparse Signals with Unknown Clustering Patterns
by Mohammad Shekaramiz, Todd K. Moon and Jacob H. Gunther
Entropy 2019, 21(3), 247; https://doi.org/10.3390/e21030247 - 5 Mar 2019
Cited by 22 | Viewed by 5450
Abstract
We consider the sparse recovery problem of signals with an unknown clustering pattern in the context of multiple measurement vectors (MMVs) using the compressive sensing (CS) technique. For many MMVs in practice, the solution matrix exhibits some sort of clustered sparsity pattern, or clumpy behavior, along each column, as well as joint sparsity across the columns. In this paper, we propose a new sparse Bayesian learning (SBL) method that incorporates a total-variation-like prior as a measure of the overall clustering pattern in the solution. We further incorporate a parameter in this prior to account for the emphasis on the amount of clumpiness in the supports of the solution, so as to improve the recovery performance of sparse signals with an unknown clustering pattern. This parameter does not exist in other existing algorithms and is learned via our hierarchical SBL algorithm. While the proposed algorithm is constructed for MMVs, it can also be applied to single measurement vector (SMV) problems. Simulation results show the effectiveness of our algorithm compared to other algorithms for both SMV and MMV problems. Full article
(This article belongs to the Section Signal and Data Analysis)
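The underlying MMV observation model has the standard form

$$\mathbf{Y}=\boldsymbol{\Phi}\,\mathbf{X}+\mathbf{W},$$

where Y ∈ R^{M×L} collects the L measurement vectors, Φ ∈ R^{M×N} with M ≪ N is the sensing matrix, W is noise, and the columns of the solution matrix X share a joint support whose nonzero entries additionally cluster into contiguous blocks; the proposed prior rewards exactly this clumpiness.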
19 pages, 956 KiB  
Article
Centroid-Based Clustering with αβ-Divergences
by Auxiliadora Sarmiento, Irene Fondón, Iván Durán-Díaz and Sergio Cruces
Entropy 2019, 21(2), 196; https://doi.org/10.3390/e21020196 - 19 Feb 2019
Cited by 13 | Viewed by 4594
Abstract
Centroid-based clustering is a widely used technique within unsupervised learning algorithms in many research fields. The success of any centroid-based clustering relies on the choice of the similarity measure under use. In recent years, most studies focused on including several divergence measures in the traditional hard k-means algorithm. In this article, we consider the problem of centroid-based clustering using the family of αβ-divergences, which is governed by two parameters, α and β. We propose a new iterative algorithm, αβ-k-means, giving closed-form solutions for the computation of the sided centroids. The algorithm can be fine-tuned by means of this pair of values, yielding a wide range of the most frequently used divergences. Moreover, it is guaranteed to converge to local minima for a wide range of values of the pair (α, β). Our theoretical contribution has been validated by several experiments performed with synthetic and real data, exploring the (α, β) plane. The numerical results obtained confirm the quality of the algorithm and its suitability for use in several practical applications. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
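The following sketch shows the overall iteration with a pluggable divergence, here the generalized KL divergence, a limiting member of the αβ family; for this right-sided Bregman case the arithmetic mean is the closed-form centroid, while other (α, β) pairs yield different closed-form centroids.

```python
# Centroid-based clustering with a pluggable divergence in place of the
# Euclidean distance of ordinary k-means.
import numpy as np

def gen_kl(x, c):
    """Generalized KL divergence d(x, c) for positive data."""
    return np.sum(x * np.log(x / c) - x + c, axis=-1)

def divergence_kmeans(X, k, n_iter=50, rng=np.random.default_rng(0)):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to the centroid with the smallest divergence.
        labels = np.argmin([gen_kl(X, c) for c in C], axis=0)
        # Closed-form (right-sided) centroid update: the arithmetic mean.
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                      else C[j] for j in range(k)])
    return labels, C

X = np.abs(np.random.default_rng(1).standard_normal((200, 5))) + 0.1
labels, C = divergence_kmeans(X, k=3)
```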
14 pages, 5804 KiB  
Article
Complex Dynamics in a Memcapacitor-Based Circuit
by Fang Yuan, Yuxia Li, Guangyi Wang, Gang Dou and Guanrong Chen
Entropy 2019, 21(2), 188; https://doi.org/10.3390/e21020188 - 16 Feb 2019
Cited by 40 | Viewed by 4800
Abstract
In this paper, a new memcapacitor model and its corresponding circuit emulator are proposed, based on which a chaotic oscillator is designed and the system's dynamic characteristics are investigated, both analytically and experimentally. Extreme multistability and coexisting attractors are observed in this complex system. The basins of attraction, multistability, bifurcations, Lyapunov exponents, and initial-condition-triggered similar bifurcation are analyzed. Finally, the memcapacitor-based chaotic oscillator is realized via circuit implementation, with experimental results presented. Full article
(This article belongs to the Section Complexity)
20 pages, 3649 KiB  
Article
Entropy Generation Rate Minimization for Methanol Synthesis via a CO2 Hydrogenation Reactor
by Penglei Li, Lingen Chen, Shaojun Xia and Lei Zhang
Entropy 2019, 21(2), 174; https://doi.org/10.3390/e21020174 - 13 Feb 2019
Cited by 40 | Viewed by 4759
Abstract
The methanol synthesis via CO2 hydrogenation (MSCH) reaction is a useful CO2 utilization strategy, and this synthesis path has been widely applied commercially for many years. In this work, the performance of an MSCH reactor is optimized with the minimum entropy generation rate (EGR) as the objective function, using finite-time thermodynamics and optimal control theory. The exterior wall temperature (EWT) is taken as the control variable, and the fixed methanol yield and the conservation equations are taken as the constraints in the optimization problem. Compared with a reference reactor with a constant EWT, the total EGR of the optimal reactor decreases by 20.5%, and the EGR caused by heat transfer decreases by 68.8%. In the optimal reactor, the total EGR is mainly distributed over the first 30% of the reactor length, and the EGR caused by the chemical reaction accounts for more than 84% of the total. The selectivity of CH3OH can be enhanced by increasing the inlet molar flow rate of CO, and the CO2 conversion rate can be enhanced by removing H2O from the reaction system. The results obtained herein are useful for the optimal design of practical tubular MSCH reactors. Full article
(This article belongs to the Special Issue Entropy Generation Minimization II)
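The two contributions compared above are, in the standard local form of irreversible thermodynamics (not the paper's exact discretization), the entropy generation rates due to heat transfer across the wall and due to the chemical reaction:

$$\dot\sigma_{\mathrm{ht}} = q\left(\frac{1}{T_{\mathrm{w}}}-\frac{1}{T}\right),\qquad \dot\sigma_{\mathrm{rxn}} = \frac{r\,A}{T},$$

where q is the local heat flow from the reacting gas at temperature T to the wall at T_w, r is the reaction rate, and A is the reaction affinity; the optimal control problem shapes the wall temperature profile along the reactor so as to minimize the heat-transfer term.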
16 pages, 9764 KiB  
Article
The Optimized Multi-Scale Permutation Entropy and Its Application in Compound Fault Diagnosis of Rotating Machinery
by Xianzhi Wang, Shubin Si, Yu Wei and Yongbo Li
Entropy 2019, 21(2), 170; https://doi.org/10.3390/e21020170 - 12 Feb 2019
Cited by 23 | Viewed by 4270
Abstract
Multi-scale permutation entropy (MPE) is a statistical indicator for detecting nonlinear dynamic changes in time series, which has the merits of high computational efficiency, good robustness, and independence from prior knowledge. However, the performance of MPE depends on the selection of two parameters: the embedding dimension and the time delay. To automate the parameter selection of MPE, a novel parameter optimization strategy is proposed, namely the optimized multi-scale permutation entropy (OMPE). In the OMPE method, an improved Cao method is proposed to adaptively select the embedding dimension, while the time delay is determined based on mutual information. To verify the effectiveness of the OMPE method, a simulated signal and two experimental signals are used for validation. Results demonstrate that the proposed OMPE method has a better feature extraction ability compared with existing MPE methods. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
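For reference, a compact sketch of plain MPE follows: coarse-grain the series at each scale, then compute normalized permutation entropy. The paper's contribution is to choose m via an improved Cao method and tau via mutual information, neither of which is reproduced here.

```python
# Multi-scale permutation entropy: coarse-grain, then normalized
# permutation entropy of the ordinal patterns at each scale.
import math
import numpy as np
from itertools import permutations

def permutation_entropy(x, m=4, tau=1):
    pats = {p: 0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1) * tau):
        pats[tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))] += 1
    p = np.array([v for v in pats.values() if v > 0], dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / np.log(math.factorial(m))  # in [0, 1]

def mpe(x, m=4, tau=1, scales=range(1, 11)):
    out = []
    for s in scales:
        # Coarse-grain: average over non-overlapping windows of length s.
        cg = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        out.append(permutation_entropy(cg, m=m, tau=tau))
    return np.array(out)
```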
20 pages, 333 KiB  
Review
Classical (Local and Contextual) Probability Model for Bohm–Bell Type Experiments: No-Signaling as Independence of Random Variables
by Andrei Khrennikov and Alexander Alodjants
Entropy 2019, 21(2), 157; https://doi.org/10.3390/e21020157 - 8 Feb 2019
Cited by 41 | Viewed by 5285
Abstract
We start with a review of classical probability representations of quantum states and observables. We show that the correlations of the observables involved in Bohm–Bell type experiments can be expressed as correlations of classical random variables. The main part of the paper is devoted to the conditional probability model with conditioning on the selection of the pairs of experimental settings. From the viewpoint of quantum foundations, this is a local contextual hidden-variables model. Following the recent works of Dzhafarov and collaborators, we apply our conditional probability approach to characterize (no-)signaling. Consideration of the Bohm–Bell experimental scheme in the presence of signaling is important for applications outside quantum mechanics, e.g., in psychology and social science. The main message of this paper (rooted in Ballentine's work) is that quantum probabilities, and more generally probabilities related to Bohm–Bell type experiments (not only in physics, but also in psychology, sociology, game theory, economics, and finance), can be classically represented as conditional probabilities. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))
15 pages, 3574 KiB  
Article
Entropy Analysis and Neural Network-Based Adaptive Control of a Non-Equilibrium Four-Dimensional Chaotic System with Hidden Attractors
by Hadi Jahanshahi, Maryam Shahriari-Kahkeshi, Raúl Alcaraz, Xiong Wang, Vijay P. Singh and Viet-Thanh Pham
Entropy 2019, 21(2), 156; https://doi.org/10.3390/e21020156 - 7 Feb 2019
Cited by 84 | Viewed by 5133
Abstract
Today, four-dimensional chaotic systems are attracting considerable attention because of their special characteristics. This paper presents a non-equilibrium four-dimensional chaotic system with hidden attractors and investigates its dynamical behavior using a bifurcation diagram as well as three well-known entropy measures: approximate entropy, sample entropy, and fuzzy entropy. In order to stabilize the proposed chaotic system, an adaptive control method based on a radial-basis-function neural network (RBF-NN) is proposed to represent the model of the uncertain nonlinear dynamics of the system. The Lyapunov direct-method-based stability analysis of the proposed approach guarantees that all of the closed-loop signals are semi-globally uniformly ultimately bounded. Adaptive learning laws are also proposed to tune the weight coefficients of the RBF-NN. The proposed adaptive control approach requires neither prior information about the uncertain dynamics nor the parameter values of the considered system. Simulation results validate the performance of the proposed control method. Full article
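Schematically, such a controller approximates the unknown dynamics with an RBF expansion and adapts the weights online; a generic form of the approximator and learning law (the paper's exact control terms and gains are not reproduced here) is

$$\hat f(\mathbf{x})=\hat{\mathbf{W}}^{\top}\boldsymbol{\Phi}(\mathbf{x}),\qquad \Phi_i(\mathbf{x})=\exp\!\left(-\frac{\lVert\mathbf{x}-\mathbf{c}_i\rVert^{2}}{2\sigma_i^{2}}\right),\qquad \dot{\hat{\mathbf{W}}}=\Gamma\,\boldsymbol{\Phi}(\mathbf{x})\,e^{\top},$$

where the centers c_i and widths σ_i define the radial basis functions, e is the tracking error, and Γ is a positive-definite adaptation gain; boundedness of the closed-loop signals is then argued with a Lyapunov function quadratic in e and in the weight estimation error.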
18 pages, 1663 KiB  
Article
The Radial Propagation of Heat in Strongly Driven Non-Equilibrium Fusion Plasmas
by Boudewijn van Milligen, Benjamin Carreras, Luis García and Javier Nicolau
Entropy 2019, 21(2), 148; https://doi.org/10.3390/e21020148 - 5 Feb 2019
Cited by 13 | Viewed by 3800
Abstract
Heat transport is studied in strongly heated fusion plasmas, far from thermodynamic equilibrium. The radial propagation of perturbations is studied using a technique based on the transfer entropy. Three different magnetic confinement devices are studied, and similar results are obtained. “Minor transport barriers” are detected that tend to form near rational magnetic surfaces, thought to be associated with zonal flows. Occasionally, heat transport “jumps” over these barriers, and this “jumping” behavior seems to increase in intensity when the heating power is raised, suggesting an explanation for the ubiquitous phenomenon of “power degradation” observed in magnetically confined plasmas. Reinterpreting the analysis results in terms of a continuous-time random walk, “fast” and “slow” transport channels can be discerned. These results can partially be understood in the framework of a resistive magnetohydrodynamic (MHD) model. The picture that emerges shows that plasma self-organization and competing transport mechanisms are essential ingredients for a fuller understanding of heat transport in fusion plasmas. Full article
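For reference, the quantity underlying the propagation analysis can be estimated as in the following sketch: a histogram-based transfer entropy from a source signal x (e.g., temperature at one radius) to a target y (another radius), with one step of memory; bin counts and memory depth are illustrative choices.

```python
# Histogram-based transfer entropy TE(X -> Y), in bits, with one-step memory.
import numpy as np

def transfer_entropy(x, y, bins=4):
    # Discretize both series into equiprobable bins.
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p_abc = np.mean((y_next == a) & (y_now == b) & (x_now == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((y_now == b) & (x_now == c))
                p_ab = np.mean((y_next == a) & (y_now == b))
                p_b = np.mean(y_now == b)
                # p(a,b,c) * log [ p(a|b,c) / p(a|b) ]
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te
```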
17 pages, 269 KiB  
Article
Are Virtual Particles Less Real?
by Gregg Jaeger
Entropy 2019, 21(2), 141; https://doi.org/10.3390/e21020141 - 2 Feb 2019
Cited by 24 | Viewed by 12263
Abstract
The question of whether virtual quantum particles exist is considered here in light of previous critical analysis and under the assumption that there are particles in the world as described by quantum field theory. The relationship of the classification of particles to quantum-field-theoretic calculations and the diagrammatic aids that are often used in them is clarified. It is pointed out that the distinction between virtual particles and others, and therefore judgments regarding their reality, has been made on the basis of these methods rather than on the particles' physical characteristics, and that this practice has obscured the question of their existence. It is argued here that the most influential arguments against the existence of virtual particles, but not other particles, fail because they either are arguments against the existence of particles in general rather than virtual particles per se, or depend on the imposition of classical intuitions on quantum systems, or are simply beside the point. Several reasons are then provided for considering virtual particles real, such as their descriptive, explanatory, and predictive value, and a clearer characterization of virtuality—one in terms of intermediate states—that also applies beyond perturbation theory is provided. It is also pointed out that, in the role of force mediators, virtual particles serve to preclude action-at-a-distance between interacting particles. For these reasons, it is concluded that virtual particles are as real as other quantum particles. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))