Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

19 pages, 4151 KiB  
Article
Multifractal Behaviors of Stock Indices and Their Ability to Improve Forecasting in a Volatility Clustering Period
by Shuwen Zhang and Wen Fang
Entropy 2021, 23(8), 1018; https://doi.org/10.3390/e23081018 - 06 Aug 2021
Cited by 13 | Viewed by 2564
Abstract
The financial market is a complex system, which has become more complicated due to the sudden impact of the COVID-19 pandemic in 2020. As a result, there may be a much higher degree of uncertainty and volatility clustering in stock markets. How does this “black swan” event affect the fractal behaviors of the stock market? How can the forecasting accuracy be improved after it? Here we study the multifractal behaviors of 5-min time series of the CSI300 and S&P500, which represent the stock markets of China and the United States. Using the Overlapped Sliding Window-based Multifractal Detrended Fluctuation Analysis (OSW-MF-DFA) method, we found that the two markets always have multifractal characteristics, and that the degree of fractality intensified during the first panic period of the pandemic. Based on the long- and short-term memory described by the fractal test results, we use the Gated Recurrent Unit (GRU) neural network model to forecast these indices. We found that during the large volatility clustering period, the prediction accuracy of the time series can be significantly improved by adding the time-varying Hurst index to the GRU neural network. Full article
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks)
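For readers who want to experiment with the forecasting idea, here is a minimal Python sketch: estimate a rolling (monofractal) Hurst exponent and stack it with the returns as a second input channel for a GRU-style model. This is plain DFA-1 over a sliding window, not the authors' OSW-MF-DFA, and the window and scale choices are illustrative assumptions.

```python
import numpy as np

def dfa_hurst(x, scales=(8, 16, 32, 64)):
    """Monofractal DFA-1 estimate of the Hurst exponent of series x."""
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)            # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # Slope of log F(s) versus log s is the Hurst exponent
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return h

rng = np.random.default_rng(0)
returns = rng.standard_normal(2000)                 # stand-in for 5-min returns
window = 256
hurst = np.array([dfa_hurst(returns[i - window:i])
                  for i in range(window, len(returns))])
# Two-channel input a GRU would consume: (return, time-varying Hurst index)
features = np.stack([returns[window:], hurst], axis=1)
print(features.shape, round(hurst.mean(), 3))       # white noise gives H near 0.5
```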

13 pages, 959 KiB  
Article
Quantum Darwinism in a Composite System: Objectivity versus Classicality
by Barış Çakmak, Özgür E. Müstecaplıoğlu, Mauro Paternostro, Bassano Vacchini and Steve Campbell
Entropy 2021, 23(8), 995; https://doi.org/10.3390/e23080995 - 31 Jul 2021
Cited by 18 | Viewed by 2816
Abstract
We investigate the implications of quantum Darwinism in a composite quantum system with interacting constituents exhibiting a decoherence-free subspace. We consider a two-qubit system coupled to an N-qubit environment via a dephasing interaction. For excitation preserving interactions between the system qubits, an analytical expression for the dynamics is obtained. It demonstrates that part of the system Hilbert space redundantly proliferates its information to the environment, while the remaining subspace is decoupled and preserves clear non-classical signatures. For measurements performed on the system, we establish that a non-zero quantum discord is shared between the composite system and the environment, thus violating the conditions of strong Darwinism. However, due to the asymmetry of quantum discord, the information shared with the environment is completely classical for measurements performed on the environment. Our results imply a dichotomy between objectivity and classicality that emerges when considering composite systems. Full article
(This article belongs to the Special Issue Quantum Darwinism and Friends)

44 pages, 449 KiB  
Article
General Non-Markovian Quantum Dynamics
by Vasily E. Tarasov
Entropy 2021, 23(8), 1006; https://doi.org/10.3390/e23081006 - 31 Jul 2021
Cited by 22 | Viewed by 2273
Abstract
A general approach to the construction of non-Markovian quantum theory is proposed. Non-Markovian equations for quantum observables and states are suggested by using general fractional calculus. In the proposed approach, the non-locality in time is represented by operator kernels of the Sonin type. A wide class of exactly solvable models of non-Markovian quantum dynamics is suggested. These models describe open (non-Hamiltonian) quantum systems with a general form of nonlocality in time. To describe these systems, the Lindblad equations for quantum observables and states are generalized by taking into account a general form of nonlocality. The non-Markovian quantum dynamics is described by using integro-differential equations with general fractional derivatives and integrals with respect to time. The exact solutions of these equations are derived by using the operational calculus proposed by Yu. Luchko for general fractional differential equations. Properties of bi-positivity, complete positivity, dissipativity, and generalized dissipativity in general non-Markovian quantum dynamics are discussed. Examples of a quantum oscillator and a two-level quantum system with a general form of nonlocality in time are suggested. Full article
(This article belongs to the Special Issue Non-Hamiltonian Dynamics, Open Systems and Entropy)
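In schematic form (our notation, far simpler than the paper's general fractional-calculus machinery), the generalization replaces the ordinary time derivative in the Lindblad (GKSL) equation by a general fractional derivative whose Sonin-type kernel encodes the nonlocality in time:

```latex
% Schematic only: D^{(k)}_t is a general fractional derivative with a
% Sonin-type memory kernel k(t); gamma_j and L_j are the usual GKSL rates
% and jump operators.
\begin{align*}
  \big(\mathbb{D}^{(k)}_t \rho\big)(t)
    &= \frac{d}{dt}\int_0^t k(t-\tau)\,\rho(\tau)\,d\tau,\\[4pt]
  \big(\mathbb{D}^{(k)}_t \rho\big)(t)
    &= -\frac{i}{\hbar}\,[H,\rho(t)]
       + \sum_j \gamma_j\Big(L_j\,\rho(t)\,L_j^\dagger
       - \tfrac{1}{2}\big\{L_j^\dagger L_j,\;\rho(t)\big\}\Big).
\end{align*}
```
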
22 pages, 1936 KiB  
Article
Feature Selection for Recommender Systems with Quantum Computing
by Riccardo Nembrini, Maurizio Ferrari Dacrema and Paolo Cremonesi
Entropy 2021, 23(8), 970; https://doi.org/10.3390/e23080970 - 28 Jul 2021
Cited by 19 | Viewed by 4755
Abstract
The promise of quantum computing to open new unexplored possibilities in several scientific fields has been long discussed, but until recently the lack of a functional quantum computer has confined this discussion mostly to theoretical algorithmic papers. It was only in the last few years that small but functional quantum computers have become available to the broader research community. One paradigm in particular, quantum annealing, can be used to sample optimal solutions for a number of NP-hard optimization problems represented with classical operations research tools, providing easy access to the potential of this emerging technology. One of the tasks that most naturally fits this mathematical formulation is feature selection. In this paper, we investigate how to design a hybrid feature selection algorithm for recommender systems that leverages the domain knowledge and behavior hidden in user interaction data. We represent feature selection as an optimization problem and solve it on a real quantum computer, provided by D-Wave. The results indicate that the proposed approach is effective in selecting a limited set of important features and that quantum computers are becoming powerful enough to enter the wider realm of applied science. Full article
(This article belongs to the Special Issue Representation Learning: Theory, Applications and Ethical Issues)
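The optimization template involved is easy to make concrete. The toy QUBO below selects features by rewarding per-feature relevance and penalizing redundant pairs; it is solved by brute force in place of a quantum annealer, and the relevance/redundancy matrices are synthetic stand-ins, not the paper's formulation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8                                         # number of candidate features
relevance = rng.uniform(0.0, 1.0, n)          # per-feature "importance"
redundancy = rng.uniform(0.0, 0.3, (n, n))    # pairwise overlap penalty
redundancy = (redundancy + redundancy.T) / 2

# QUBO: minimize x^T Q x over binary x; the diagonal rewards relevance,
# off-diagonal entries penalize picking redundant feature pairs.
Q = redundancy.copy()
np.fill_diagonal(Q, -relevance)

best_x, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=n):   # exhaustive for small n
    x = np.array(bits)
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = x, e

print("selected features:", np.flatnonzero(best_x), "energy:", round(float(best_e), 3))
```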

17 pages, 348 KiB  
Article
The Violation of Bell-CHSH Inequalities Leads to Different Conclusions Depending on the Description Used
by Aldo F. G. Solis-Labastida, Melina Gastelum and Jorge G. Hirsch
Entropy 2021, 23(7), 872; https://doi.org/10.3390/e23070872 - 08 Jul 2021
Cited by 2 | Viewed by 2381
Abstract
Since the experimental observation of the violation of the Bell-CHSH inequalities, much has been said about the non-local and contextual character of the underlying system. However, the hypotheses from which Bell’s inequalities are derived differ according to the probability space used to write them. The violation of Bell’s inequalities can, alternatively, be explained by assuming that the hidden variables do not exist at all, that they exist but their values cannot be simultaneously assigned, that the values can be assigned but joint probabilities cannot be properly defined, or that averages taken in different contexts cannot be combined. All of the above are valid options, selected by different communities to provide support to their particular research programs. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness III)
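The inequality in question is simple to check numerically. A minimal sketch: the singlet state with the standard optimal measurement angles reaches Tsirelson's bound 2√2, above the classical CHSH bound of 2.

```python
import numpy as np

def spin(theta):
    """Spin observable cos(theta) Z + sin(theta) X."""
    Z = np.array([[1, 0], [0, -1]], dtype=float)
    X = np.array([[0, 1], [1, 0]], dtype=float)
    return np.cos(theta) * Z + np.sin(theta) * X

singlet = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)

def corr(a, b):
    """Correlator <A(a) ⊗ B(b)> in the singlet state (equals -cos(a-b))."""
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4
S = corr(a1, b1) + corr(a1, b2) + corr(a2, b1) - corr(a2, b2)
print(abs(S))   # ≈ 2.828 = 2*sqrt(2) > 2, violating the classical bound
```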

10 pages, 4394 KiB  
Article
Spatial Entanglement of Fermions in One-Dimensional Quantum Dots
by Ivan P. Christov
Entropy 2021, 23(7), 868; https://doi.org/10.3390/e23070868 - 07 Jul 2021
Cited by 5 | Viewed by 1900
Abstract
The time-dependent quantum Monte Carlo method for fermions is introduced and applied to the calculation of the entanglement of electrons in one-dimensional quantum dots with several spin-polarized and spin-compensated electron configurations. The rich statistics of wave functions provided by this method allow one to build reduced density matrices for each electron, and to quantify the spatial entanglement using measures such as quantum entropy, treating the electrons as identical or distinguishable particles. Our results indicate that the spatial entanglement in parallel-spin configurations is rather small, and is determined mostly by the spatial quantum nonlocality introduced by the ground state. By contrast, in the spin-compensated case, the outermost opposite-spin electrons interact like bosons, which dominates their entanglement, while the inner-shell electrons remain largely at their Hartree–Fock geometry. Our findings are in close correspondence with the numerically exact results, wherever such comparison is possible. Full article
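The reduced-density-matrix step can be illustrated generically (this is not the TDQMC method itself; the correlated grid wavefunction below is a toy): build a two-particle 1D wavefunction, trace out one particle, and take the von Neumann entropy of the natural-orbital occupations.

```python
import numpy as np

x = np.linspace(-5, 5, 200)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

# Toy correlated two-particle wavefunction on a grid (normalized below)
psi = np.exp(-0.5 * (X1**2 + X2**2) - 0.3 * (X1 - X2)**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)

# One-body reduced density matrix: rho(x, x') = ∫ psi(x, y) psi*(x', y) dy
rho = psi @ psi.conj().T * dx
occ = np.linalg.eigvalsh(rho * dx)           # natural-orbital occupations, sum to 1
occ = occ[occ > 1e-12]
entropy = -np.sum(occ * np.log(occ))         # von Neumann entropy in nats
print("von Neumann entropy:", round(float(entropy), 4))
```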

16 pages, 2343 KiB  
Article
Influences of Different Architectures on the Thermodynamic Performance and Network Structure of Aircraft Environmental Control System
by Han Yang, Chunxin Yang, Xingjuan Zhang and Xiugan Yuan
Entropy 2021, 23(7), 855; https://doi.org/10.3390/e23070855 - 03 Jul 2021
Cited by 9 | Viewed by 2587
Abstract
The environmental control system (ECS) is one of the most important systems in an aircraft, used to regulate the pressure, temperature and humidity of the air in the cabin. This study investigates the influences of different architectures on the thermal performance and network structure of the ECS. The refrigeration and pressurization performances of ECSs with four different architectures are analyzed and compared by the endoreversible thermodynamic analysis method, and their external and internal responses are also discussed. The results show that the connection modes of the heat exchanger have minor effects on the performance of ECSs, but the influence of the air cycle machine is obvious. This study attempts to abstract the ECS as a network structure based on graph theory, and uses entropy from information theory for quantitative evaluation. The results provide a theoretical basis for the design of ECSs and help engineers make reliable decisions. Full article
(This article belongs to the Collection Feature Papers in Information Theory)

42 pages, 5704 KiB  
Article
Geometric Variational Inference
by Philipp Frank, Reimar Leike and Torsten A. Enßlin
Entropy 2021, 23(7), 853; https://doi.org/10.3390/e23070853 - 02 Jul 2021
Cited by 15 | Viewed by 3451
Abstract
Efficiently accessing the information contained in non-linear and high dimensional probability distributions remains a core challenge in modern statistics. Traditionally, estimators that go beyond point estimates are either categorized as Variational Inference (VI) or Markov-Chain Monte-Carlo (MCMC) techniques. While MCMC methods that utilize the geometric properties of continuous probability distributions to increase their efficiency have been proposed, VI methods rarely use the geometry. This work aims to fill this gap and proposes geometric Variational Inference (geoVI), a method based on Riemannian geometry and the Fisher information metric. It is used to construct a coordinate transformation that relates the Riemannian manifold associated with the metric to Euclidean space. The distribution, expressed in the coordinate system induced by the transformation, takes a particularly simple form that allows for an accurate variational approximation by a normal distribution. Furthermore, the algorithmic structure allows for an efficient implementation of geoVI which is demonstrated on multiple examples, ranging from low-dimensional illustrative ones to non-linear, hierarchical Bayesian inverse problems in thousands of dimensions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
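The geometric ingredient, in schematic form (our notation; the paper's construction is richer): the Fisher information metric on the parameters, and a coordinate transformation that approximately flattens it so that a standard normal in the new coordinates approximates the posterior.

```latex
% Schematic: M is the Fisher metric on parameters \xi; g is the
% metric-flattening coordinate transformation exploited by geoVI.
\begin{align*}
  M_{ij}(\xi) &= \mathbb{E}_{d\sim p(d\,|\,\xi)}\!\left[
      \partial_{\xi_i}\log p(d\,|\,\xi)\;
      \partial_{\xi_j}\log p(d\,|\,\xi)\right],\\[4pt]
  M(\xi) &\approx \Big(\tfrac{\partial g}{\partial \xi}\Big)^{\!\top}
                  \Big(\tfrac{\partial g}{\partial \xi}\Big),
  \qquad y = g(\xi),\qquad
  Q(y) \approx \mathcal{N}\big(y\,\big|\,\bar{y},\,\mathbb{1}\big).
\end{align*}
```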

11 pages, 1339 KiB  
Article
On Using the BMCSL Equation of State to Renormalize the Onsager Theory Approach to Modeling Hard Prolate Spheroidal Liquid Crystal Mixtures
by Donya Ohadi, David S. Corti and Mark J. Uline
Entropy 2021, 23(7), 846; https://doi.org/10.3390/e23070846 - 30 Jun 2021
Cited by 3 | Viewed by 2307
Abstract
Modifications to the traditional Onsager theory for modeling isotropic–nematic phase transitions in hard prolate spheroidal systems are presented. Pure component systems are used to identify the need to update the Lee–Parsons resummation term. The Lee–Parsons resummation term uses the Carnahan–Starling equation of state to approximate higher-order virial coefficients beyond the second virial coefficient employed in Onsager’s original theoretical approach. As more exact ways of calculating the excluded volume of two hard prolate spheroids of a given orientation are used, the division of the excluded volume by eight, which is an empirical correction used in the original Lee–Parsons resummation term, must be replaced by six to yield a better match between the theoretical and simulation results. These modifications are also extended to binary mixtures of hard prolate spheroids using the Boublík–Mansoori–Carnahan–Starling–Leland (BMCSL) equation of state. Full article
(This article belongs to the Special Issue Entropic Control of Soft Materials)

21 pages, 608 KiB  
Article
List Decoding of Arıkan’s PAC Codes
by Hanwen Yao, Arman Fazeli and Alexander Vardy
Entropy 2021, 23(7), 841; https://doi.org/10.3390/e23070841 - 30 Jun 2021
Cited by 17 | Viewed by 3010
Abstract
Polar coding gives rise to the first explicit family of codes that provably achieve capacity with efficient encoding and decoding for a wide range of channels. However, its performance at short blocklengths under standard successive cancellation decoding is far from optimal. A well-known way to improve the performance of polar codes at short blocklengths is CRC precoding followed by successive-cancellation list decoding. This approach, along with various refinements thereof, has largely remained the state of the art in polar coding since it was introduced in 2011. Recently, Arıkan presented a new polar coding scheme, which he called polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes offer a dramatic improvement in performance as compared to CRC-aided list decoding of conventional polar codes. PAC codes are based primarily upon the following main ideas: replacing CRC codes with convolutional precoding (under appropriate rate profiling) and replacing list decoding by sequential decoding. One of our primary goals in this paper is to answer the following question: is sequential decoding essential for the superior performance of PAC codes? We show that similar performance can be achieved using list decoding when the list size L is moderately large (say, L ≥ 128). List decoding has distinct advantages over sequential decoding in certain scenarios, such as low-SNR regimes or situations where the worst-case complexity/latency is the primary constraint. Another objective is to provide some insights into the remarkable performance of PAC codes. We first observe that both sequential decoding and list decoding of PAC codes closely match ML decoding thereof. We then estimate the number of low weight codewords in PAC codes, and use these estimates to approximate the union bound on their performance. These results indicate that PAC codes are superior to both polar codes and Reed–Muller codes. We also consider random time-varying convolutional precoding for PAC codes, and observe that this scheme achieves the same superior performance with constraint length as low as ν=2. Full article
(This article belongs to the Special Issue Short Packet Communications for 5G and Beyond)
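The convolutional precoding step is simple to sketch. The short generator 1 + D below is an illustrative choice echoing the low-constraint-length observation at the end of the abstract; Arıkan's reference design uses a longer generator, and the rate-profiled input bits here are arbitrary.

```python
import numpy as np

def conv_precode(v, c):
    """Binary convolution u_i = sum_j c_j * v_{i-j} (mod 2)."""
    u = np.zeros(len(v), dtype=int)
    for i in range(len(v)):
        for j, cj in enumerate(c):
            if i - j >= 0:
                u[i] ^= cj & int(v[i - j])
    return u

v = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # rate-profiled data bits (illustrative)
c = [1, 1]                               # generator polynomial 1 + D
u = conv_precode(v, c)
print(u)                                 # bits that would feed the polar transform
```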

20 pages, 3620 KiB  
Article
Ground-State Properties and Phase Separation of Binary Mixtures in Mesoscopic Ring Lattices
by Vittorio Penna, Alessandra Contestabile and Andrea Richaud
Entropy 2021, 23(7), 821; https://doi.org/10.3390/e23070821 - 28 Jun 2021
Viewed by 2503
Abstract
We investigated the spatial phase separation of the two components forming a bosonic mixture distributed in a four-well lattice with a ring geometry. We studied the ground state of this system, described by means of a binary Bose–Hubbard Hamiltonian, by implementing a well-known coherent-state picture which allowed us to find the semi-classical equations determining the distribution of boson components in the ring lattice. Their fully analytic solutions, in the limit of large boson numbers, provide the boson populations at each well as a function of the interspecies interaction and of other significant model parameters, while allowing us to reconstruct the non-trivial architecture of the ground-state four-well phase diagram. The comparison with the L-well (L=2,3) phase diagrams highlights how increasing the number of wells considerably modifies the phase diagram structure and the transition mechanism from the full-mixing to the full-demixing phase controlled by the interspecies interaction. Despite the fact that the phase diagrams for L=2,3,4 share various general properties, we show that, unlike attractive binary mixtures, repulsive mixtures do not feature a transition mechanism which can be extended to an arbitrary lattice of size L. Full article
(This article belongs to the Special Issue The Ubiquity of Entropy II)

22 pages, 2611 KiB  
Article
Data-Driven Analysis of Nonlinear Heterogeneous Reactions through Sparse Modeling and Bayesian Statistical Approaches
by Masaki Ito, Tatsu Kuwatani, Ryosuke Oyanagi and Toshiaki Omori
Entropy 2021, 23(7), 824; https://doi.org/10.3390/e23070824 - 28 Jun 2021
Cited by 3 | Viewed by 2993
Abstract
Heterogeneous reactions are chemical reactions that occur at the interfaces of multiple phases, and often show nonlinear dynamical behavior due to the effect of the time-variant surface area with complex reaction mechanisms. It is important to specify the kinetics of heterogeneous reactions in order to elucidate the microscopic elementary processes and predict the macroscopic future evolution of the system. In this study, we propose a data-driven method based on a sparse modeling algorithm and a sequential Monte Carlo algorithm for simultaneously extracting the substantial reaction terms and surface models from a number of candidates by using partial observation data. We introduce a sparse modeling approach with non-uniform sparsity levels in order to accurately estimate rate constants, and the sequential Monte Carlo algorithm is employed to estimate the time courses of multi-dimensional hidden variables. The results show that the proposed method successfully estimates the rate constants of dissolution and precipitation reactions (typical examples of surface heterogeneous reactions), the necessary surface models, and the reaction terms underlying the observable data from only the observable temporal changes in the concentration of the dissolved intermediate products. Full article
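A schematic stand-in for the sparse-modeling step (sequentially thresholded least squares over a synthetic term library, not the authors' exact non-uniform-sparsity algorithm): the few reaction terms that explain d[C]/dt are selected from a larger set of candidates.

```python
import numpy as np

rng = np.random.default_rng(2)
C = rng.uniform(0.1, 1.0, 200)                  # observed concentration samples
dCdt = 0.8 * C - 0.5 * C**2                     # true sparse dynamics (noise-free for clarity)

library = np.column_stack([C, C**2, C**3, np.sqrt(C), np.ones_like(C)])
names = ["C", "C^2", "C^3", "sqrt(C)", "1"]

coef, *_ = np.linalg.lstsq(library, dCdt, rcond=None)
for _ in range(10):                             # iterate: threshold small terms, refit the rest
    small = np.abs(coef) < 0.05
    coef[small] = 0.0
    active = ~small
    if active.any():
        coef[active], *_ = np.linalg.lstsq(library[:, active], dCdt, rcond=None)

# Should recover only the C and C^2 terms with coefficients ~0.8 and ~-0.5
print({n: round(float(c), 3) for n, c in zip(names, coef) if c != 0.0})
```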

13 pages, 328 KiB  
Article
A Semi-Deterministic Random Walk with Resetting
by Javier Villarroel, Miquel Montero and Juan Antonio Vega
Entropy 2021, 23(7), 825; https://doi.org/10.3390/e23070825 - 28 Jun 2021
Cited by 4 | Viewed by 2159
Abstract
We consider a discrete-time random walk $(x_t)$ which, at random times, is reset to the starting position and performs a deterministic motion between them. We show that the quantity $\Pr\left(x_{t+1}=n+1 \mid x_t=n\right)$, as a function of $n$, determines if the system is averse, neutral or inclined towards resetting. It also classifies the stationary distribution. Double barrier probabilities, first passage times and the distribution of the escape time from intervals are determined. Full article
(This article belongs to the Special Issue New Trends in Random Walks)
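The setup is easy to simulate. A minimal sketch with a constant per-step reset probability (our simplifying assumption; the paper treats more general resetting), for which the stationary distribution is geometric:

```python
import numpy as np

rng = np.random.default_rng(3)
r = 0.1                       # per-step resetting probability (assumed constant)
steps = 200_000
x, visits = 0, {}
for _ in range(steps):
    if rng.random() < r:
        x = 0                 # reset to the starting position
    else:
        x += 1                # deterministic motion between resets
    visits[x] = visits.get(x, 0) + 1

# For constant r, the stationary distribution is P(n) = r (1 - r)^n
for n in range(5):
    print(n, round(visits.get(n, 0) / steps, 4), round(r * (1 - r) ** n, 4))
```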

22 pages, 880 KiB  
Review
Three-Factor Kinetic Equation of Catalyst Deactivation
by Zoë Gromotka, Gregory Yablonsky, Nickolay Ostrovskii and Denis Constales
Entropy 2021, 23(7), 818; https://doi.org/10.3390/e23070818 - 27 Jun 2021
Cited by 6 | Viewed by 2671
Abstract
The three-factor kinetic equation of catalyst deactivation was obtained in terms of apparent kinetic parameters. The three factors correspond to the main cycle with a linear, detailed mechanism regarding the catalytic intermediates, a cycle of reversible deactivation, and a stage of irreversible deactivation (aging), respectively. The rate of the main cycle is obtained for the fresh catalyst under a quasi-steady-state assumption. The phenomena of reversible and irreversible deactivation are presented as special separate factors (hierarchical separation). In this case, the reversible deactivation factor is a function of the kinetic apparent parameters of the reversible deactivation and of those of the main cycle. The irreversible deactivation factor is a function of the apparent kinetic parameters of the main cycle, of the reversible deactivation, and of the irreversible deactivation. The conditions of such separability are found. The obtained equation is applied successfully to describe the literature data on the reversible catalyst deactivation processes in the dehydration of acetaldehyde over TiO2 anatase and in crotonaldehyde hydrogenation on supported metal catalysts. Full article
(This article belongs to the Special Issue Review Papers for Entropy)

8 pages, 10751 KiB  
Article
Josephson Currents and Gap Enhancement in Graph Arrays of Superconductive Islands
by Massimiliano Lucci, Davide Cassi, Vittorio Merlo, Roberto Russo, Gaetano Salina and Matteo Cirillo
Entropy 2021, 23(7), 811; https://doi.org/10.3390/e23070811 - 25 Jun 2021
Cited by 5 | Viewed by 1744
Abstract
Evidence is reported that topological effects in graph-shaped arrays of superconducting islands can condition the superconducting energy gap and transition temperature. The carriers giving rise to the new phase are pairs of electrons (Cooper pairs) which, in the superconducting state, behave as predicted for bosons in our structures. The presented results have been obtained on both star- and double-comb-shaped arrays, and the coupling between the islands is provided by Josephson junctions whose potential can be tuned by an external magnetic field or temperature. Our technique for probing the distribution on the islands relies on the fact that the hopping of bosons between the different islands occurs because their thermal energy is of the same order as the Josephson coupling energy between the islands. For both star and double-comb graph topologies, the results are in qualitative and quantitative agreement with theoretical predictions. Full article
(This article belongs to the Special Issue Thermodynamics and Superconducting Devices)

13 pages, 3297 KiB  
Article
Hybrid Nanofluids Flows Determined by a Permeable Power-Law Stretching/Shrinking Sheet Modulated by Orthogonal Surface Shear
by Natalia C. Roşca and Ioan Pop
Entropy 2021, 23(7), 813; https://doi.org/10.3390/e23070813 - 25 Jun 2021
Cited by 10 | Viewed by 1438
Abstract
The present paper studies the flow and heat transfer of hybrid nanofluid flows induced by a permeable power-law stretching/shrinking surface modulated by orthogonal surface shear. The governing partial differential equations were converted into non-linear ordinary differential equations by using proper similarity transformations. These equations were then solved by applying a numerical technique, namely the bvp4c solver in MATLAB. Results for the flow field, temperature distribution, reduced skin friction coefficient and reduced Nusselt number were deduced. It was found that increasing the mass flux parameter slows down the velocity and, hence, decreases the temperature. Furthermore, on enlarging the stretching parameter, the velocity increases and the temperature decreases. In addition, the radiation parameter can effectively control the thermal boundary layer. Finally, the temperature decreases when the value of the temperature parameter increases. Numerical solutions for particular values of the involved parameters are in very good agreement with previous calculations. The most important and interesting result of this paper is that both shrinking and stretching sheet flows exhibit dual solutions in some intervals of the shrinking and stretching parameter. In spite of the numerous published papers on flow and heat transfer over a permeable stretching/shrinking surface in nanofluids and hybrid nanofluids, none has studied the present problem. Therefore, we believe that the results of the present paper are new, and have many industrial applications. Full article
(This article belongs to the Special Issue Entropy Analysis in Nanofluids and Porous Media)
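The solution pipeline (similarity reduction, then a boundary-value-problem solver) can be illustrated on the classical Blasius problem, with SciPy's solve_bvp standing in for MATLAB's bvp4c. This is not the paper's hybrid-nanofluid model, just the same workflow on a textbook case.

```python
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    # Blasius similarity equation f''' + (1/2) f f'' = 0, with y = [f, f', f'']
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(ya, yb):
    # f(0) = 0, f'(0) = 0 (no slip), f'(inf) = 1 (free stream)
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0, 10, 100)
# Smooth, boundary-condition-consistent initial guess: f' ≈ tanh(eta)
y0 = np.vstack([np.log(np.cosh(eta)), np.tanh(eta), 1 / np.cosh(eta) ** 2])
sol = solve_bvp(rhs, bc, eta, y0)
print("f''(0) =", round(float(sol.sol(0.0)[2]), 5))   # classical value ≈ 0.33206
```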

24 pages, 4538 KiB  
Article
The Carnot Cycle, Reversibility and Entropy
by David Sands
Entropy 2021, 23(7), 810; https://doi.org/10.3390/e23070810 - 25 Jun 2021
Cited by 2 | Viewed by 3638
Abstract
The Carnot cycle and the attendant notions of reversibility and entropy are examined. It is shown how the modern view of these concepts still corresponds to the ideas Clausius laid down in the nineteenth century. As such, they reflect the outmoded idea, current at the time, that heat is motion. It is shown how this view of heat led Clausius to develop the entropy of a body based on the work that could be performed in a reversible process rather than the work that is actually performed in an irreversible process. In consequence, Clausius built into entropy a conflict with energy conservation, which is concerned with actual changes in energy. In this paper, reversibility and irreversibility are investigated by means of a macroscopic formulation of internal mechanisms of damping based on rate equations for the distribution of energy within a gas. It is shown that work processes involving a step change in external pressure, however small, are intrinsically irreversible. However, under idealised conditions of zero damping, the gas inside a piston expands and traces out a trajectory through the space of equilibrium states. Therefore, the entropy change due to heat flow from the reservoir matches the entropy change of the equilibrium states. This trajectory can be traced out in reverse as the piston reverses direction, but if the external conditions are adjusted appropriately, the gas can be made to trace out a Carnot cycle in P-V space. The cycle is dynamic, as opposed to quasi-static, as the piston has kinetic energy equal to the difference between the work performed internally and externally. Full article
(This article belongs to the Special Issue The Foundations of Thermodynamics)

18 pages, 1363 KiB  
Article
Accuracy-Risk Trade-Off Due to Social Learning in Crowd-Sourced Financial Predictions
by Dhaval Adjodah, Yan Leng, Shi Kai Chong, P. M. Krafft, Esteban Moro and Alex Pentland
Entropy 2021, 23(7), 801; https://doi.org/10.3390/e23070801 - 24 Jun 2021
Cited by 2 | Viewed by 4965
Abstract
A critical question relevant to the increasing importance of crowd-sourced finance is how to optimize collective information processing and decision-making. Here, we investigate an often under-studied aspect of the performance of online traders: beyond focusing on just accuracy, what gives rise to the trade-off between risk and accuracy at the collective level? Answers to this question will lead to designing and deploying more effective crowd-sourced financial platforms and to minimizing issues stemming from risk such as implied volatility. To investigate this trade-off, we conducted a large online Wisdom of the Crowd study where 2037 participants predicted the prices of real financial assets (S&P 500, WTI Oil and Gold prices). Using the data collected, we modeled the belief update process of participants using models inspired by Bayesian models of cognition. We show that subsets of predictions chosen based on their belief update strategies lie on a Pareto frontier between accuracy and risk, mediated by social learning. We also observe that social learning led to superior accuracy during one of our rounds that occurred during the high market uncertainty of the Brexit vote. Full article
(This article belongs to the Special Issue Swarms and Network Intelligence)

19 pages, 2973 KiB  
Article
Psychomotor Predictive Processing
by Stephen Fox
Entropy 2021, 23(7), 806; https://doi.org/10.3390/e23070806 - 24 Jun 2021
Cited by 5 | Viewed by 3760
Abstract
Psychomotor experience can be based on what people predict they will experience, rather than on sensory inputs. It has been argued that disconnects between human experience and sensory inputs can be addressed better through further development of predictive processing theory. In this paper, the scope of predictive processing theory is extended through three developments. First, by going beyond previous studies that have encompassed embodied cognition but have not addressed some fundamental aspects of psychomotor functioning. Second, by proposing a scientific basis for explaining predictive processing that spans objective neuroscience and subjective experience. Third, by providing an explanation of predictive processing that can be incorporated into the planning and operation of systems involving robots and other new technologies. This is necessary because such systems are becoming increasingly common and move us farther away from the hunter-gatherer lifestyles within which our psychomotor functioning evolved. For example, beliefs that workplace robots are threatening can generate anxiety, while wearing hardware, such as augmented reality headsets and exoskeletons, can impede the natural functioning of psychomotor systems. The primary contribution of the paper is the introduction of a new formulation of hierarchical predictive processing that is focused on psychomotor functioning. Full article
(This article belongs to the Section Entropy and Biology)

30 pages, 3443 KiB  
Article
Bell Diagonal and Werner State Generation: Entanglement, Non-Locality, Steering and Discord on the IBM Quantum Computer
by Elias Riedel Gårding, Nicolas Schwaller, Chun Lam Chan, Su Yeon Chang, Samuel Bosch, Frederic Gessler, Willy Robert Laborde, Javier Naya Hernandez, Xinyu Si, Marc-André Dupertuis and Nicolas Macris
Entropy 2021, 23(7), 797; https://doi.org/10.3390/e23070797 - 23 Jun 2021
Cited by 14 | Viewed by 4534
Abstract
We propose the first correct special-purpose quantum circuits for preparation of Bell diagonal states (BDS), and implement them on the IBM Quantum computer, characterizing and testing complex aspects of their quantum correlations in the full parameter space. Among the circuits proposed, one involves only two quantum bits but requires adapted quantum tomography routines handling classical bits in parallel. The entire class of Bell diagonal states is generated, and several characteristic indicators, namely entanglement of formation and concurrence, CHSH non-locality, steering and discord, are experimentally evaluated over the full parameter space and compared with theory. As a by-product of this work, we also find a remarkable general inequality between “quantum discord” and “asymmetric relative entropy of discord”: the former never exceeds the latter. We also prove that for all BDS the two coincide. Full article
(This article belongs to the Special Issue Entropy in Quantum Systems and Quantum Field Theory (QFT) II)

15 pages, 1134 KiB  
Article
Field Theoretical Approach for Signal Detection in Nearly Continuous Positive Spectra II: Tensorial Data
by Vincent Lahoche, Mohamed Ouerfelli, Dine Ousmane Samary and Mohamed Tamaazousti
Entropy 2021, 23(7), 795; https://doi.org/10.3390/e23070795 - 23 Jun 2021
Cited by 9 | Viewed by 2011
Abstract
Tensorial principal component analysis is a generalization of ordinary principal component analysis, focusing on data which are suitably described by tensors rather than matrices. This paper aims at giving the nonperturbative renormalization group formalism, based on a slight generalization of the covariance matrix, to investigate signal detection for the difficult issue of nearly continuous spectra. The renormalization group allows one to construct an effective description that keeps only the relevant features in the low “energy” (i.e., large eigenvalues) limit, thus providing universal descriptions that allow one to associate the presence of the signal with objective and computable quantities. Among them, in this paper, we focus on the vacuum expectation value. We exhibit experimental evidence in favor of a connection between symmetry breaking and the existence of an intrinsic detection threshold, in agreement with our conclusions for matrices, providing a new step in the direction of a universal statement. Full article
(This article belongs to the Section Signal and Data Analysis)

14 pages, 691 KiB  
Article
The Impact of Linear Filter Preprocessing in the Interpretation of Permutation Entropy
by Antonio Dávalos, Meryem Jabloun, Philippe Ravier and Olivier Buttelli
Entropy 2021, 23(7), 787; https://doi.org/10.3390/e23070787 - 22 Jun 2021
Cited by 6 | Viewed by 2217
Abstract
Permutation Entropy (PE) is a powerful tool for measuring the amount of information contained within a time series. However, this technique is rarely applied directly to raw signals. Instead, a preprocessing step, such as linear filtering, is applied in order to remove noise or to isolate specific frequency bands. In the current work, we aimed at outlining the effect of linear filter preprocessing on the final PE values. By means of the Wiener–Khinchin theorem, we theoretically characterize the linear filter’s intrinsic PE and separate its contribution from the signal’s ordinal information. We tested these results by means of simulated signals, subject to a variety of linear filters such as the moving average, Butterworth, and Chebyshev type I. The PE results from simulations closely resembled our predicted results for all tested filters, which validated our theoretical propositions. More importantly, when we applied linear filters to signals with inner correlations, we were able to theoretically decouple the signal-specific contribution from that induced by the linear filter. Therefore, by providing a proper framework of PE linear filter characterization, we improve the PE interpretation by identifying possible artifact information introduced by the preprocessing steps. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications II)
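The basic effect is easy to reproduce. A compact PE implementation (Bandt–Pompe, with illustrative parameter choices: order m = 3, delay 1) applied to white noise before and after a simple moving-average filter:

```python
import math
import numpy as np

def permutation_entropy(x, m=3):
    """Normalized Bandt-Pompe permutation entropy of order m (delay 1)."""
    counts = {}
    for i in range(len(x) - m + 1):
        key = tuple(np.argsort(x[i:i + m]))   # ordinal pattern of the window
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p)) / math.log(math.factorial(m))

rng = np.random.default_rng(4)
noise = rng.standard_normal(20_000)
filtered = np.convolve(noise, np.ones(5) / 5, mode="valid")   # moving-average filter

print("PE raw     :", round(permutation_entropy(noise), 4))      # close to 1
print("PE filtered:", round(permutation_entropy(filtered), 4))   # noticeably below 1
```

The drop in PE after filtering is exactly the filter-induced ordinal information the paper sets out to characterize and decouple.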

16 pages, 5193 KiB  
Article
Intelligent Online Monitoring of Rolling Bearing: Diagnosis and Prognosis
by Hassane Hotait, Xavier Chiementin and Lanto Rasolofondraibe
Entropy 2021, 23(7), 791; https://doi.org/10.3390/e23070791 - 22 Jun 2021
Cited by 10 | Viewed by 2576
Abstract
This paper suggests a new method to predict the Remaining Useful Life (RUL) of rolling bearings based on Long Short Term Memory (LSTM), in order to obtain the degradation condition of the rolling bearings and realize predictive maintenance. The approach is divided into three parts: the first part is clustering to detect the damage state by density-based spatial clustering of applications with noise (DBSCAN). The second is the construction of a health indicator, which gives a better reflection of the bearing degradation tendency and is selected as the input for the prediction model. In the third part, the RUL prediction, the LSTM approach is employed to improve the accuracy of the prediction. The rationale of this work is to combine the two methods, DBSCAN and LSTM, to identify the abnormal state in rolling bearings and then estimate the RUL. The suggested method is validated on experimental bearing life-cycle data, and the RUL prediction results of the LSTM model are compared with those of the nonlinear autoregressive with exogenous input (NARX) model. In addition, the constructed health indicator is compared with the spectral kurtosis feature. The results demonstrate that the suggested method is more appropriate than the NARX model for the prediction of bearing RUL. Full article

16 pages, 5730 KiB  
Article
Global Sensitivity Analysis Based on Entropy: From Differential Entropy to Alternative Measures
by Zdeněk Kala
Entropy 2021, 23(6), 778; https://doi.org/10.3390/e23060778 - 19 Jun 2021
Cited by 11 | Viewed by 6152
Abstract
Differential entropy can be negative, while discrete entropy is always non-negative. This article shows that negative entropy is a significant flaw when entropy is used as a sensitivity measure in global sensitivity analysis. Global sensitivity analysis based on differential entropy should not have negative entropy, just as Sobol sensitivity analysis does not have negative variance. Entropy is similar to variance but does not have the same properties. An alternative sensitivity measure based on the approximation of the differential entropy using dome-shaped functionals with non-negative values is proposed in the article. Case studies have shown that new sensitivity measures lead to a rational structure of sensitivity indices with a significantly lower proportion of higher-order sensitivity indices compared to other types of distributional sensitivity analysis. In terms of the concept of sensitivity analysis, a decrease in variance to zero means a transition from the differential to discrete entropy. The form of this transition is an open question, which can be studied using other scientific disciplines. The search for new functionals for distributional sensitivity analysis is not closed, and other suitable sensitivity measures may be found. Full article
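A one-line example of the flaw at issue: a uniform density on an interval of width $w$ has differential entropy

```latex
h(X) \;=\; -\int_0^{w} \frac{1}{w}\,\ln\!\frac{1}{w}\,dx \;=\; \ln w ,
```

which is negative whenever $w < 1$ (e.g., $h = -\ln 2$ for $w = 1/2$), whereas discrete Shannon entropy is never negative.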

22 pages, 934 KiB  
Article
Robust Universal Inference
by Amichai Painsky and Meir Feder
Entropy 2021, 23(6), 773; https://doi.org/10.3390/e23060773 - 18 Jun 2021
Cited by 3 | Viewed by 2641
Abstract
Learning and making inference from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples is available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the train-set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of “reasonable” models. Then, the worst-case performance in the class is controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees, also for the case where the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)

22 pages, 6224 KiB  
Article
Computing Accurate Probabilistic Estimates of One-D Entropy from Equiprobable Random Samples
by Hoshin V. Gupta, Mohammad Reza Ehsani, Tirthankar Roy, Maria A. Sans-Fuentes, Uwe Ehret and Ali Behrangi
Entropy 2021, 23(6), 740; https://doi.org/10.3390/e23060740 - 11 Jun 2021
Cited by 3 | Viewed by 3149
Abstract
We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data generating probability density function (pdf) into equal-probability-mass intervals. And, whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel-type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small sample bias can be as large as 10% and for BC as large as 50%. We speculate that estimating quantile locations, rather than bin-probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data generating pdf. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
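A related spacing-based estimate conveys the idea. The sketch below is the classical Vasicek m-spacing estimator, shown as a stand-in for the authors' exact QS algorithm: wider spacings between order statistics imply lower local density and hence higher entropy.

```python
import numpy as np

def spacing_entropy(samples, m=None):
    """Vasicek estimator: H ≈ mean of log( n/(2m) * (X_(i+m) - X_(i-m)) )."""
    x = np.sort(samples)
    n = len(x)
    if m is None:
        m = max(1, int(np.sqrt(n)))               # common heuristic window
    upper = np.minimum(np.arange(n) + m, n - 1)   # clamp indices at the edges
    lower = np.maximum(np.arange(n) - m, 0)
    spacings = x[upper] - x[lower]
    return float(np.mean(np.log(n / (2 * m) * spacings)))

rng = np.random.default_rng(5)
samples = rng.normal(0.0, 1.0, 1000)
true_h = 0.5 * np.log(2 * np.pi * np.e)           # Gaussian entropy ≈ 1.4189 nats
print(round(spacing_entropy(samples), 4), "vs true", round(true_h, 4))
```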

14 pages, 690 KiB  
Article
On Representations of Divergence Measures and Related Quantities in Exponential Families
by Stefan Bedbur and Udo Kamps
Entropy 2021, 23(6), 726; https://doi.org/10.3390/e23060726 - 08 Jun 2021
Cited by 1 | Viewed by 2115
Abstract
Within exponential families, which may consist of multi-parameter and multivariate distributions, a variety of divergence measures, such as the Kullback–Leibler divergence, the Cressie–Read divergence, the Rényi divergence, and the Hellinger metric, can be explicitly expressed in terms of the respective cumulant function and mean value function. Moreover, the same applies to related entropy and affinity measures. We compile representations scattered in the literature and present a unified approach to the derivation in exponential families. As a statistical application, we highlight their use in the construction of confidence regions in a multi-sample setup. Full article
(This article belongs to the Special Issue Measures of Information)
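As an example of the representations compiled here: for a natural exponential family $p_\vartheta(x) = \exp\big(\vartheta^\top T(x) - \kappa(\vartheta)\big)h(x)$, the Kullback–Leibler divergence is the Bregman divergence of the cumulant function $\kappa$,

```latex
D_{\mathrm{KL}}\!\left(p_{\vartheta_1}\,\|\,p_{\vartheta_2}\right)
  \;=\; \kappa(\vartheta_2) - \kappa(\vartheta_1)
        - (\vartheta_2 - \vartheta_1)^{\top}\,\nabla\kappa(\vartheta_1),
```

with $\nabla\kappa$ the mean value function; the other divergences listed in the abstract admit similar closed forms in terms of $\kappa$.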

21 pages, 4195 KiB  
Article
Switching and Swapping of Quantum Information: Entropy and Entanglement Level
by Marek Sawerwain, Joanna Wiśniewska and Roman Gielerak
Entropy 2021, 23(6), 717; https://doi.org/10.3390/e23060717 - 04 Jun 2021
Cited by 2 | Viewed by 2499
Abstract
Information switching and swapping seem to be fundamental elements of quantum communication protocols. Another crucial issue is the presence of entanglement and its level in inspected quantum systems. In this article, a formal definition of the operation of swapping local quantum information, together with a proof of its existence and some elementary properties analysed through the prism of entropy, is presented. As an example of the use of local information swapping, we demonstrate a certain realisation of the quantum switch. Entanglement levels during the operation of the switch are calculated with the Negativity measure and a separability criterion based on the von Neumann entropy, spectral decomposition and Schmidt decomposition. Results of numerical experiments, during which the entanglement levels are estimated for systems under consideration with and without distortions, are presented. The noise is generated by the Dzyaloshinskii-Moriya interaction and the intrinsic decoherence is modelled by the Milburn equation. This work contains a switch realisation in circuit form, built out of elementary quantum gates, and a scheme of the circuit which estimates the levels of entanglement during the switch's operation. Full article
(This article belongs to the Special Issue Methods and Applications of Quantum Data Processing)
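The Negativity measure used here is straightforward to compute for two qubits via the partial transpose. A small helper, with a Bell-state check case (the input state is ours, not the paper's):

```python
import numpy as np

def negativity(rho):
    """N(rho) = (||rho^{T_B}||_1 - 1) / 2 for a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)                       # indices (i, j, k, l): A=i,k  B=j,l
    rho_pt = r.transpose(0, 3, 2, 1).reshape(4, 4)    # transpose subsystem B
    eigs = np.linalg.eigvalsh((rho_pt + rho_pt.conj().T) / 2)
    return (np.sum(np.abs(eigs)) - 1) / 2             # sum of |eigenvalues| is the trace norm

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)            # (|00> + |11>) / sqrt(2)
rho_bell = np.outer(bell, bell.conj())
print(negativity(rho_bell))                           # 0.5 for a maximally entangled state
```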

16 pages, 15341 KiB  
Article
A Geometric Perspective on Information Plane Analysis
by Mina Basirat, Bernhard C. Geiger and Peter M. Roth
Entropy 2021, 23(6), 711; https://doi.org/10.3390/e23060711 - 03 Jun 2021
Cited by 2 | Viewed by 2892
Abstract
Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels. Full article
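The binning estimator under discussion, in its simplest histogram form (the bin count is the hyper-parameter whose geometric role the authors examine; the Gaussian test case below is ours):

```python
import numpy as np

def binned_mi(x, y, bins=30):
    """Plug-in estimate of I(X;Y) in nats from a joint 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of X
    py = pxy.sum(axis=0, keepdims=True)          # marginal of Y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(6)
x = rng.standard_normal(50_000)
y = x + 0.5 * rng.standard_normal(50_000)        # noisy copy of x
# Gaussian ground truth: I = 0.5 * ln(1 + SNR) = 0.5 * ln(1 + 1/0.25)
print(round(binned_mi(x, y), 3), "vs", round(0.5 * np.log(5), 3))
```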

27 pages, 3083 KiB  
Article
Information and Self-Organization II: Steady State and Phase Transition
by Hermann Haken and Juval Portugali
Entropy 2021, 23(6), 707; https://doi.org/10.3390/e23060707 - 02 Jun 2021
Cited by 11 | Viewed by 3403
Abstract
This paper starts from Schrödinger’s famous question “what is life” and elucidates answers that invoke, in particular, Friston’s free energy principle and its relation to the method of Bayesian inference and to Synergetics’ 2nd foundation, which utilizes Jaynes’ maximum entropy principle. Our presentation reflects the shift from an emphasis on physical principles to principles of information theory and Synergetics. In view of the expected general audience of this issue, we have chosen a somewhat tutorial style that does not require special knowledge of physics but familiarizes the reader with concepts rooted in information theory and Synergetics. Full article
(This article belongs to the Special Issue Information and Self-Organization II)

21 pages, 3269 KiB  
Article
Disentangling the Information in Species Interaction Networks
by Michiel Stock, Laura Hoebeke and Bernard De Baets
Entropy 2021, 23(6), 703; https://doi.org/10.3390/e23060703 - 02 Jun 2021
Viewed by 2559
Abstract
Shannon’s entropy measure is a popular means for quantifying ecological diversity. We explore how one can use information-theoretic measures (that are often called indices in ecology) on joint ensembles to study the diversity of species interaction networks. We leverage the little-known balance equation [...] Read more.
Shannon’s entropy measure is a popular means for quantifying ecological diversity. We explore how one can use information-theoretic measures (that are often called indices in ecology) on joint ensembles to study the diversity of species interaction networks. We leverage the little-known balance equation to decompose the network information into three components describing the species abundance, specificity, and redundancy. This balance reveals that there exists a fundamental trade-off between these components. The decomposition can be straightforwardly extended to analyse networks through time as well as space, leading to the corresponding notions for alpha, beta, and gamma diversity. Our work aims to provide an accessible introduction for ecologists. To this end, we illustrate the interpretation of the components on numerous real networks. The corresponding code is made available to the community in the specialised Julia package EcologicalNetworks.jl. Full article
(This article belongs to the Special Issue Information Theory-Based Approach to Assessing Ecosystem)
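The flavour of such a decomposition can be conveyed with generic joint-entropy identities. A hedged sketch: the code below checks the standard balance H(X) + H(Y) = 2 I(X;Y) + H(X|Y) + H(Y|X) on a toy interaction matrix; the mapping of the terms onto the paper's abundance/specificity/redundancy components is loose, not exact.

```python
import numpy as np

counts = np.array([[8, 1, 0],      # rows: e.g. pollinators, cols: plants
                   [1, 6, 2],      # made-up interaction counts
                   [0, 2, 5]], float)
p = counts / counts.sum()
px, py = p.sum(1), p.sum(0)

def H(q):
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

I = H(px) + H(py) - H(p.ravel())          # mutual information ("specificity")
H_x_given_y = H(p.ravel()) - H(py)        # residual conditional entropies
H_y_given_x = H(p.ravel()) - H(px)
lhs = H(px) + H(py)
rhs = 2 * I + H_x_given_y + H_y_given_x
print(f"I = {I:.3f} bits;  balance check: {lhs:.3f} == {rhs:.3f}")
```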
68 pages, 3761 KiB  
Article
Quantum Foundations of Classical Reversible Computing
by Michael P. Frank and Karpur Shukla
Entropy 2021, 23(6), 701; https://doi.org/10.3390/e23060701 - 01 Jun 2021
Cited by 5 | Viewed by 5651
Abstract
The reversible computation paradigm aims to provide a new foundation for general classical digital computing that is capable of circumventing the thermodynamic limits to the energy efficiency of the conventional, non-reversible digital paradigm. However, to date, the essential rationale for, and analysis of, [...] Read more.
The reversible computation paradigm aims to provide a new foundation for general classical digital computing that is capable of circumventing the thermodynamic limits to the energy efficiency of the conventional, non-reversible digital paradigm. However, to date, the essential rationale for, and analysis of, classical reversible computing (RC) has not yet been expressed in terms that leverage the modern formal methods of non-equilibrium quantum thermodynamics (NEQT). In this paper, we begin developing an NEQT-based foundation for the physics of reversible computing. We use the framework of Gorini-Kossakowski-Sudarshan-Lindblad dynamics (a.k.a. Lindbladians) with multiple asymptotic states, incorporating recent results from resource theory, full counting statistics and stochastic thermodynamics. Important conclusions include that, as expected: (1) Landauer’s Principle indeed sets a strict lower bound on entropy generation in traditional non-reversible architectures for deterministic computing machines when we account for the loss of correlations; and (2) implementations of the alternative reversible computation paradigm can potentially avoid such losses, and thereby circumvent the Landauer limit, potentially allowing the efficiency of future digital computing technologies to continue improving indefinitely. We also outline a research plan for identifying the fundamental minimum energy dissipation of reversible computing machines as a function of speed. Full article
(This article belongs to the Special Issue Physical Information and the Physical Foundations of Computation)
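The Landauer limit in conclusion (1) is a simple number to evaluate. A back-of-the-envelope check: erasing one bit at temperature T dissipates at least k_B T ln 2 of energy.

```python
from math import log

k_B = 1.380649e-23             # J/K (exact, 2019 SI definition)
for T in (300.0, 4.2):         # room temperature and liquid helium
    print(f"T = {T:6.1f} K  ->  E_min = {k_B * T * log(2):.3e} J/bit")
```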
23 pages, 5917 KiB  
Article
Information Geometric Theory in the Prediction of Abrupt Changes in System Dynamics
by Adrian-Josue Guel-Cortez and Eun-jin Kim
Entropy 2021, 23(6), 694; https://doi.org/10.3390/e23060694 - 31 May 2021
Cited by 14 | Viewed by 2744
Abstract
Detection and measurement of abrupt changes in a process can provide us with important tools for decision making in systems management. In particular, it can be utilised to predict the onset of a sudden event such as a rare, extreme event which causes [...] Read more.
Detection and measurement of abrupt changes in a process can provide us with important tools for decision making in systems management. In particular, it can be utilised to predict the onset of a sudden event such as a rare, extreme event which causes the abrupt dynamical change in the system. Here, we investigate the prediction capability of information theory by focusing on how sensitive the information-geometric theory (information length diagnostics) and the entropy-based information-theoretical method (information flow) are to abrupt changes. To this end, we utilise a non-autonomous Kramers equation by including a sudden perturbation to the system to mimic the onset of a sudden event and calculate time-dependent probability density functions (PDFs) and various statistical quantities with the help of numerical simulations. We show that information length diagnostics predict the onset of a sudden event better than the information flow. Furthermore, it is explicitly shown that the information flow, like any other entropy-based measure, has limitations in measuring perturbations which do not affect entropy. Full article
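The information length diagnostic can be sketched numerically: one computes Gamma(t)^2 = \int (\partial_t p)^2 / p \, dx and then L = \int Gamma \, dt. Below is a hedged example on an analytically known relaxing Gaussian (the parameters are illustrative and this is not the paper's Kramers setup).

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 2001)
t = np.linspace(0.0, 6.0, 1201)
dx, dt = x[1] - x[0], t[1] - t[0]
mu = 3.0 * np.exp(-t)                          # relaxing mean
sig = np.sqrt(0.5 + 0.5 * np.exp(-2.0 * t))    # relaxing standard deviation
p = np.exp(-(x[None, :] - mu[:, None]) ** 2 / (2.0 * sig[:, None] ** 2)) \
    / (np.sqrt(2.0 * np.pi) * sig[:, None])

dpdt = np.gradient(p, dt, axis=0)              # finite-difference dp/dt
gamma = np.sqrt((dpdt ** 2 / p).sum(axis=1) * dx)
L = gamma.sum() * dt                           # dimensionless "distance"
print(f"information length L = {L:.2f}")
```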
25 pages, 6027 KiB  
Article
Medium Entropy Reduction and Instability in Stochastic Systems with Distributed Delay
by Sarah A. M. Loos, Simon Hermann and Sabine H. L. Klapp
Entropy 2021, 23(6), 696; https://doi.org/10.3390/e23060696 - 31 May 2021
Cited by 3 | Viewed by 3725
Abstract
Many natural and artificial systems are subject to some sort of delay, which can be in the form of a single discrete delay or distributed over a range of times. Here, we discuss the impact of this distribution on (thermo-)dynamical properties of time-delayed [...] Read more.
Many natural and artificial systems are subject to some sort of delay, which can be in the form of a single discrete delay or distributed over a range of times. Here, we discuss the impact of this distribution on (thermo-)dynamical properties of time-delayed stochastic systems. To this end, we study a simple classical model with white and colored noise, and focus on the class of Gamma-distributed delays which includes a variety of distinct delay distributions typical for feedback experiments and biological systems. A physical application is a colloid subject to time-delayed feedback control, which is, in principle, experimentally realizable by co-moving optical traps. We uncover several unexpected phenomena in regard to the system’s linear stability and its thermodynamic properties. First, increasing the mean delay time can destabilize or stabilize the process, depending on the distribution of the delay. Second, for all considered distributions, the heat dissipated by the controlled system (e.g., the colloidal particle) can become negative, which implies that the delay force extracts energy and entropy from the bath. As we show here, this refrigerating effect is particularly pronounced for exponential delay. For a specific non-reciprocal realization of a control device, we find that the entropic costs, measured by the total entropy production of the system plus controller, are the lowest for exponential delay. The exponential delay further yields the largest stable parameter regions. In this sense, exponential delay represents the most effective and robust type of delayed feedback. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)
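To make the distributed-delay setting concrete, here is a minimal Euler-Maruyama sketch (assumptions: overdamped one-dimensional dynamics, white noise only, illustrative parameters) of a Langevin equation with a Gamma-distributed delay kernel, dx/dt = -k \int K(s) x(t-s) ds + sqrt(2D) xi(t).

```python
import numpy as np
from scipy.stats import gamma

dt, n = 1e-2, 20000
shape, mean_delay = 2.0, 0.5                     # Gamma delay kernel K(s)
s = np.arange(1, 1001) * dt                      # kernel support up to s = 10
w = gamma.pdf(s, a=shape, scale=mean_delay / shape) * dt
w /= w.sum()                                     # normalised discrete kernel
k_fb, D = 1.0, 0.2                               # feedback gain, noise strength
rng = np.random.default_rng(1)
x = np.zeros(n)
for i in range(1, n):
    hist = x[max(0, i - w.size):i][::-1]         # x(t - s), most recent first
    drift = -k_fb * np.dot(w[:hist.size], hist)  # distributed delayed feedback
    x[i] = x[i - 1] + drift * dt + np.sqrt(2.0 * D * dt) * rng.normal()
print(f"late-time variance ~ {x[n // 2:].var():.3f}")
```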
12 pages, 2145 KiB  
Article
Stochastic Thermodynamics of a Piezoelectric Energy Harvester Model
by Luigi Costanzo, Alessandro Lo Schiavo, Alessandro Sarracino and Massimo Vitelli
Entropy 2021, 23(6), 677; https://doi.org/10.3390/e23060677 - 27 May 2021
Cited by 10 | Viewed by 2627
Abstract
We experimentally study a piezoelectric energy harvester driven by broadband random vibrations. We show that a linear model, consisting of an underdamped Langevin equation for the dynamics of the tip mass, electromechanically coupled with a capacitor and a load resistor, can accurately describe [...] Read more.
We experimentally study a piezoelectric energy harvester driven by broadband random vibrations. We show that a linear model, consisting of an underdamped Langevin equation for the dynamics of the tip mass, electromechanically coupled with a capacitor and a load resistor, can accurately describe the experimental data. In particular, the theoretical model allows us to define fluctuating currents and to study the stochastic thermodynamics of the system, with focus on the distribution of the extracted work over different time intervals. Our analytical and numerical analysis of the linear model is successfully compared to the experiments. Full article
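A hedged numerical sketch of the model class described in this abstract: an underdamped oscillator (the tip mass) electromechanically coupled to a capacitor C and load resistor R, with the extracted work computed as \int V^2/R \, dt. All parameter values below are placeholders, not the experimental ones.

```python
import numpy as np

m, k, gam, theta = 1.0, 25.0, 0.5, 0.8   # mass, stiffness, damping, coupling
C, R, D = 0.5, 2.0, 1.0                  # capacitance, load, noise strength
dt, n = 1e-3, 200000
rng = np.random.default_rng(2)
x = v = V = 0.0
W = 0.0
for _ in range(n):
    F = np.sqrt(2 * D * dt) * rng.normal()        # broadband random forcing
    a = (-k * x - gam * v - theta * V) / m
    x += v * dt
    v += a * dt + F / m
    V += (theta * v - V / R) * dt / C             # circuit equation
    W += V**2 / R * dt                            # work extracted by the load
print(f"mean extracted power ~ {W / (n * dt):.4f} (arb. units)")
```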
22 pages, 2352 KiB  
Article
Socio-Economic Impact of the Covid-19 Pandemic in the U.S.
by Jonathan Barlow and Irena Vodenska
Entropy 2021, 23(6), 673; https://doi.org/10.3390/e23060673 - 27 May 2021
Cited by 17 | Viewed by 5321
Abstract
This paper proposes a dynamic cascade model to investigate the systemic risk posed by sector-level industries within the U.S. inter-industry network. We then use this model to study the effect of the disruptions presented by Covid-19 on the U.S. economy. We construct a [...] Read more.
This paper proposes a dynamic cascade model to investigate the systemic risk posed by sector-level industries within the U.S. inter-industry network. We then use this model to study the effect of the disruptions presented by Covid-19 on the U.S. economy. We construct a weighted digraph G = (V,E,W) using the industry-by-industry total requirements table for 2018, provided by the Bureau of Economic Analysis (BEA). We impose an initial shock that disrupts the production capacity of one or more industries, and we calculate the propagation of production shortages with a modified Cobb–Douglas production function. For the Covid-19 case, we model the initial shock based on the loss of labor between March and April 2020 as reported by the Bureau of Labor Statistics (BLS). The industries within the network are assigned a resilience that determines the ability of an industry to absorb input losses, such that if the rate of input loss exceeds the resilience, the industry fails, and its outputs go to zero. We observed a critical resilience, such that, below this critical value, the network experienced a catastrophic cascade resulting in total network collapse. Lastly, we model the economic recovery from June 2020 through March 2021 using BLS data. Full article
(This article belongs to the Special Issue Structures and Dynamics of Economic Complex Networks)
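The failure mechanism can be caricatured in a few lines (hedged: a simplified threshold cascade, not the paper's BEA-calibrated model with its modified Cobb-Douglas production function). Industries fail when their fractional input loss exceeds their resilience.

```python
import numpy as np

W = np.array([[0.0, 0.4, 0.1],    # W[i, j]: share of j's inputs bought from i
              [0.3, 0.0, 0.5],
              [0.2, 0.3, 0.0]])
resilience = 0.35                  # identical resilience for all industries
capacity = np.ones(3)
capacity[0] = 0.4                  # initial shock: industry 0 loses 60%

for step in range(10):
    input_loss = W.T @ (1.0 - capacity)          # input loss of each industry
    failed = input_loss > resilience
    new_capacity = np.where(failed, 0.0,
                            np.minimum(capacity, 1.0 - input_loss))
    if np.allclose(new_capacity, capacity):      # cascade has settled
        break
    capacity = new_capacity
print("final capacities:", np.round(capacity, 2))
```

Lowering `resilience` below a critical value makes the toy network collapse entirely, mirroring the catastrophic cascade reported above.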
19 pages, 2773 KiB  
Article
Gradient Profile Estimation Using Exponential Cubic Spline Smoothing in a Bayesian Framework
by Kushani De Silva, Carlo Cafaro and Adom Giffin
Entropy 2021, 23(6), 674; https://doi.org/10.3390/e23060674 - 27 May 2021
Viewed by 2365
Abstract
Attaining reliable gradient profiles is of utmost relevance for many physical systems. In many situations, the estimation of the gradient is inaccurate due to noise. It is common practice to first estimate the underlying system and then compute the gradient profile by taking [...] Read more.
Attaining reliable gradient profiles is of utmost relevance for many physical systems. In many situations, the estimation of the gradient is inaccurate due to noise. It is common practice to first estimate the underlying system and then compute the gradient profile by taking the subsequent analytic derivative of the estimated system. The underlying system is often estimated by fitting or smoothing the data using other techniques. Taking the subsequent analytic derivative of an estimated function can be ill-posed. This becomes worse as the noise in the system increases. As a result, the uncertainty generated in the gradient estimate increases. In this paper, a theoretical framework for a method to estimate the gradient profile of discrete noisy data is presented. The method was developed within a Bayesian framework. Comprehensive numerical experiments were conducted on synthetic data at different levels of noise. The accuracy of the proposed method was quantified. Our findings suggest that the proposed gradient profile estimation method outperforms the state-of-the-art methods. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
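The baseline practice discussed above, smoothing first and then differentiating the fit analytically, looks like this in a generic setting (hedged: a standard cubic smoothing spline, not the authors' Bayesian exponential-spline method).

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)         # noisy observations

spl = UnivariateSpline(x, y, k=3, s=x.size * 0.1**2)  # cubic smoothing spline
grad = spl.derivative()(x)                            # analytic derivative
print(f"max |grad - cos(x)| = {np.abs(grad - np.cos(x)).max():.3f}")
```

Increasing the noise amplitude makes this derivative estimate degrade quickly, which is the ill-posedness the abstract refers to.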
30 pages, 2012 KiB  
Article
A Maximum Entropy Model of Bounded Rational Decision-Making with Prior Beliefs and Market Feedback
by Benjamin Patrick Evans and Mikhail Prokopenko
Entropy 2021, 23(6), 669; https://doi.org/10.3390/e23060669 - 26 May 2021
Cited by 8 | Viewed by 4278
Abstract
Bounded rationality is an important consideration stemming from the fact that agents often have limits on their processing abilities, making the assumption of perfect rationality inapplicable to many real tasks. We propose an information-theoretic approach to the inference of agent decisions under Smithian [...] Read more.
Bounded rationality is an important consideration stemming from the fact that agents often have limits on their processing abilities, making the assumption of perfect rationality inapplicable to many real tasks. We propose an information-theoretic approach to the inference of agent decisions under Smithian competition. The model explicitly captures the boundedness of agents (limited in their information-processing capacity) as the cost of information acquisition for expanding their prior beliefs. The expansion is measured as the Kullback–Leibler divergence between posterior decisions and prior beliefs. When information acquisition is free, the homo economicus agent is recovered, while in cases when information acquisition becomes costly, agents instead revert to their prior beliefs. The maximum entropy principle is used to infer least biased decisions based upon the notion of Smithian competition formalised within the Quantal Response Statistical Equilibrium framework. The incorporation of prior beliefs into such a framework allowed us to systematically explore the effects of prior beliefs on decision-making in the presence of market feedback, as well as, importantly, adding a temporal interpretation to the framework. We verified the proposed model using Australian housing market data, showing how the incorporation of prior knowledge alters the resulting agent decisions. Specifically, it allowed for the separation of past beliefs and utility maximisation behaviour of the agent, as well as an analysis of the evolution of agent beliefs. Full article
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)
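The core mechanics, trading expected utility against the KL cost of moving away from prior beliefs, reduce to a tilted prior p(a) ~ q(a) exp(beta u(a)). A sketch with made-up payoffs (hedged: the generic free-energy form only; the paper's full QRSE market model is much richer):

```python
import numpy as np

u = np.array([1.0, 0.8, 0.1])            # payoffs of three actions
q = np.array([0.1, 0.6, 0.3])            # prior beliefs

def decide(beta):
    p = q * np.exp(beta * u)
    return p / p.sum()

for beta in (0.0, 2.0, 50.0):            # information cost scales like 1/beta
    p = decide(beta)
    kl = float(np.sum(p * np.log(p / q)))
    print(f"beta={beta:5.1f}  p={np.round(p, 3)}  KL(p||q)={kl:.3f} nats")
# beta = 0 reproduces the prior; beta -> inf approaches homo economicus.
```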
37 pages, 1220 KiB  
Article
Ordinal Pattern Dependence in the Context of Long-Range Dependence
by Ines Nüßgen and Alexander Schnurr
Entropy 2021, 23(6), 670; https://doi.org/10.3390/e23060670 - 26 May 2021
Cited by 3 | Viewed by 2468
Abstract
Ordinal pattern dependence is a multivariate dependence measure based on the co-movement of two time series. In strong connection to ordinal time series analysis, the ordinal information is taken into account to derive robust results on the dependence between the two processes. This [...] Read more.
Ordinal pattern dependence is a multivariate dependence measure based on the co-movement of two time series. In strong connection to ordinal time series analysis, the ordinal information is taken into account to derive robust results on the dependence between the two processes. This article deals with ordinal pattern dependence for long-range dependent time series, including mixed cases of short- and long-range dependence. We investigate the limit distributions of estimators of ordinal pattern dependence. In doing so, we point out the differences that arise when the underlying time series have different dependence structures. Depending on these assumptions, central and non-central limit theorems are proven. The limit distributions for the latter can be included in the class of multivariate Rosenblatt processes. Finally, a simulation study is provided to illustrate our theoretical findings. Full article
(This article belongs to the Special Issue Time Series Modelling)
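A hedged toy implementation of the estimator under study: ordinal pattern dependence compares how often the two series show the same ordinal pattern with what independence would predict. The sketch below uses short, short-memory samples only; the paper's interest is precisely the long-range dependent regime this toy does not cover.

```python
import numpy as np
from itertools import permutations

def patterns(x, h=2):
    """Ordinal pattern (rank tuple) of each window of h+1 consecutive values."""
    win = np.lib.stride_tricks.sliding_window_view(x, h + 1)
    return [tuple(np.argsort(w)) for w in win]

def opd(x, y, h=2):
    px, py = patterns(x, h), patterns(y, h)
    q_hat = np.mean([a == b for a, b in zip(px, py)])      # coincidences
    perms = list(permutations(range(h + 1)))
    fx = np.array([px.count(p) for p in perms]) / len(px)
    fy = np.array([py.count(p) for p in perms]) / len(py)
    q_indep = float(fx @ fy)                               # expected if indep.
    return (q_hat - q_indep) / (1 - q_indep)

rng = np.random.default_rng(4)
z = rng.normal(size=3002)
x = z[:-2] + 0.2 * rng.normal(size=3000)   # two noisy copies of one signal
y = z[:-2] + 0.2 * rng.normal(size=3000)
print(f"OPD(x, y) ~ {opd(x, y):.2f};  independent ~ "
      f"{opd(x, rng.normal(size=3000)):.2f}")
```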
18 pages, 2185 KiB  
Article
Shall I Work with Them? A Knowledge Graph-Based Approach for Predicting Future Research Collaborations
by Nikos Kanakaris, Nikolaos Giarelis, Ilias Siachos and Nikos Karacapilidis
Entropy 2021, 23(6), 664; https://doi.org/10.3390/e23060664 - 25 May 2021
Cited by 7 | Viewed by 3801
Abstract
We consider the prediction of future research collaborations as a link prediction problem applied on a scientific knowledge graph. To the best of our knowledge, this is the first work on the prediction of future research collaborations that combines structural and textual information [...] Read more.
We consider the prediction of future research collaborations as a link prediction problem applied on a scientific knowledge graph. To the best of our knowledge, this is the first work on the prediction of future research collaborations that combines structural and textual information of a scientific knowledge graph through a purposeful integration of graph algorithms and natural language processing techniques. Our work: (i) investigates whether the integration of unstructured textual data into a single knowledge graph affects the performance of a link prediction model, (ii) studies the effect of previously proposed graph-kernel-based approaches on the performance of an ML model, as far as the link prediction problem is concerned, and (iii) proposes a three-phase pipeline that enables the exploitation of structural and textual information, as well as of pre-trained word embeddings. We benchmark the proposed approach against classical link prediction algorithms using accuracy, recall, and precision as our performance metrics. Finally, we empirically test our approach through various feature combinations with respect to the link prediction problem. Our experiments with the new COVID-19 Open Research Dataset demonstrate a significant improvement of the abovementioned performance metrics in the prediction of future research collaborations. Full article
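The classical link prediction baselines that such work benchmarks against, such as common neighbours and the Jaccard coefficient, can be tried directly. A minimal sketch (the toy co-authorship graph below is invented, not the CORD-19 pipeline):

```python
import networkx as nx

G = nx.Graph()                                   # toy co-authorship graph
G.add_edges_from([("ana", "bo"), ("bo", "carl"), ("ana", "carl"),
                  ("carl", "dee"), ("dee", "eli"), ("bo", "dee")])

candidates = [("ana", "dee"), ("ana", "eli"), ("bo", "eli")]
for u, v, score in nx.jaccard_coefficient(G, candidates):
    cn = len(list(nx.common_neighbors(G, u, v)))
    print(f"{u}-{v}: common neighbours={cn}, Jaccard={score:.2f}")
```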
12 pages, 630 KiB  
Article
Menzerath’s Law in the Syntax of Languages Compared with Random Sentences
by Kumiko Tanaka-Ishii
Entropy 2021, 23(6), 661; https://doi.org/10.3390/e23060661 - 25 May 2021
Cited by 6 | Viewed by 2351
Abstract
The Menzerath law is considered to show an aspect of the complexity underlying natural language. This law suggests that, for a linguistic unit, the size (y) of a linguistic construct decreases as the number (x) of constructs in the [...] Read more.
The Menzerath law is considered to show an aspect of the complexity underlying natural language. This law suggests that, for a linguistic unit, the size (y) of a linguistic construct decreases as the number (x) of constructs in the unit increases. This article investigates this property syntactically, with x as the number of constituents modifying the main predicate of a sentence and y as the size of those constituents in terms of the number of words. Following previous articles that demonstrated that the Menzerath property held for dependency corpora, such as in Czech and Ukrainian, this article first examines how well the property applies across languages by using the entire Universal Dependency dataset ver. 2.3, including 76 languages over 129 corpora and the Penn Treebank (PTB). The results show that the law holds reasonably well for x>2. Then, for comparison, the property is investigated with syntactically randomized sentences generated from the PTB. These results show that the property is almost reproducible even from simple random data. Further analysis of the property highlights more detailed characteristics of natural language. Full article
(This article belongs to the Section Complexity)
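Fitting the Menzerath-Altmann relation y = a x^b e^{cx} to (x, y) pairs is straightforward with non-linear least squares. A sketch; the data below are synthetic and purely illustrative of the mechanics, not drawn from the corpora above.

```python
import numpy as np
from scipy.optimize import curve_fit

def menzerath(x, a, b, c):
    return a * x**b * np.exp(c * x)

x = np.arange(1, 11, dtype=float)                 # constituents per sentence
y = menzerath(x, 5.0, -0.4, -0.02) \
    * np.exp(0.03 * np.random.default_rng(5).normal(size=x.size))

(a, b, c), _ = curve_fit(menzerath, x, y, p0=(1.0, -0.5, 0.0))
print(f"fit: a={a:.2f}, b={b:.2f}, c={c:.3f}")    # b < 0: constructs shrink
```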
14 pages, 2907 KiB  
Article
A Thermodynamic Approach to Measuring Entropy in a Few-Electron Nanodevice
by Eugenia Pyurbeeva and Jan A. Mol
Entropy 2021, 23(6), 640; https://doi.org/10.3390/e23060640 - 21 May 2021
Cited by 12 | Viewed by 3150
Abstract
The entropy of a system gives a powerful insight into its microscopic degrees of freedom; however, standard experimental ways of measuring entropy through heat capacity are hard to apply to nanoscale systems, as they require the measurement of increasingly small amounts of heat. [...] Read more.
The entropy of a system gives a powerful insight into its microscopic degrees of freedom; however, standard experimental ways of measuring entropy through heat capacity are hard to apply to nanoscale systems, as they require the measurement of increasingly small amounts of heat. Two alternative entropy measurement methods have been recently proposed for nanodevices: through charge balance measurements and transport properties. We describe a self-consistent thermodynamic framework for applying thermodynamic relations to few-electron nanodevices—small systems, where fluctuations in particle number are significant, whilst highlighting several ongoing misconceptions. We derive a relation (a consequence of a Maxwell relation for small systems), which describes both existing entropy measurement methods as special cases, while also allowing the experimentalist to probe the intermediate regime between them. Finally, we independently prove the applicability of our framework in systems with complex microscopic dynamics—those with many excited states of various degeneracies—from microscopic considerations. Full article
(This article belongs to the Special Issue Nature of Entropy and Its Direct Metrology)
14 pages, 3710 KiB  
Article
Characterization of a Two-Photon Quantum Battery: Initial Conditions, Stability and Work Extraction
by Anna Delmonte, Alba Crescente, Matteo Carrega, Dario Ferraro and Maura Sassetti
Entropy 2021, 23(5), 612; https://doi.org/10.3390/e23050612 - 14 May 2021
Cited by 26 | Viewed by 3400
Abstract
We consider a quantum battery that is based on a two-level system coupled with a cavity radiation by means of a two-photon interaction. Various figures of merit, such as stored energy, average charging power, energy fluctuations, and extractable work are investigated, considering, as [...] Read more.
We consider a quantum battery that is based on a two-level system coupled with cavity radiation by means of a two-photon interaction. Various figures of merit, such as stored energy, average charging power, energy fluctuations, and extractable work are investigated, considering, as possible initial conditions for the cavity, a Fock state, a coherent state, and a squeezed state. We show that the first state leads to better performance for the battery. However, a coherent state with the same average number of photons, even if it is affected by stronger fluctuations in the stored energy, results in quite interesting performance, in particular because it allows almost all of the stored energy to be extracted as usable work at short enough times. Full article
(This article belongs to the Special Issue Non-equilibrium Thermodynamics in the Quantum Regime)
33 pages, 2474 KiB  
Article
Information Structures for Causally Explainable Decisions
by Louis Anthony Cox, Jr.
Entropy 2021, 23(5), 601; https://doi.org/10.3390/e23050601 - 13 May 2021
Cited by 4 | Viewed by 3149
Abstract
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link [...] Read more.
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments. Full article
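The value of information (VoI) concept referenced above has a compact worked form (hedged: generic textbook decision theory, not the paper's models): compare the best expected utility achievable without information against that achievable when a signal reveals the state before acting.

```python
p_state = {"good": 0.7, "bad": 0.3}
utility = {("act", "good"): 10, ("act", "bad"): -20,
           ("wait", "good"): 0, ("wait", "bad"): 0}

def eu(action):
    """Expected utility of an action under the prior over states."""
    return sum(p_state[s] * utility[(action, s)] for s in p_state)

best_without = max(eu(a) for a in ("act", "wait"))          # = 1.0
best_with = sum(p_state[s] * max(utility[(a, s)] for a in ("act", "wait"))
                for s in p_state)                           # = 7.0
print(f"EU without info: {best_without:.1f}; with perfect info: "
      f"{best_with:.1f}; VoI = {best_with - best_without:.1f}")
```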
18 pages, 2418 KiB  
Article
Adding Prior Information in FWI through Relative Entropy
by Danilo Santos Cruz, João M. de Araújo, Carlos A. N. da Costa and Carlos C. N. da Silva
Entropy 2021, 23(5), 599; https://doi.org/10.3390/e23050599 - 13 May 2021
Cited by 3 | Viewed by 1788
Abstract
Full waveform inversion is an advantageous technique for obtaining high-resolution subsurface information. In the petroleum industry, mainly in reservoir characterisation, it is common to use information from wells as previous information to decrease the ambiguity of the obtained results. For this, we propose [...] Read more.
Full waveform inversion is an advantageous technique for obtaining high-resolution subsurface information. In the petroleum industry, mainly in reservoir characterisation, it is common to use information from wells as prior information to decrease the ambiguity of the obtained results. For this, we propose adding a relative entropy term to the formalism of the full waveform inversion. In this context, entropy is just a nomenclature for regularisation, and its role is to help convergence to the global minimum. The application of entropy in inverse problems usually involves formulating the problem so that it is possible to use statistical concepts. To avoid this step, we propose a deterministic application to the full waveform inversion. We discuss some aspects of relative entropy and show three different ways of using it to add prior information in the inverse problem. We use a dynamic weighting scheme to add prior information through entropy. The idea is that the prior information can help to find the path to the global minimum at the beginning of the inversion process. In all cases, the prior information can be incorporated very quickly into the full waveform inversion and lead the inversion to the desired solution. When we include the logarithmic weighting that constitutes entropy in the inverse problem, we suppress the low-intensity ripples and sharpen the point events. Thus, the addition of relative entropy to full waveform inversion can provide a result with better resolution. In regions where salt is present in the BP 2004 model, we obtained a significant improvement by adding prior information through the relative entropy for synthetic data. We show that the prior information added through entropy in the full-waveform inversion formalism proves to be a way to avoid local minima. Full article
(This article belongs to the Special Issue Entropy in Image Analysis III)
18 pages, 7262 KiB  
Article
EEG Fractal Analysis Reflects Brain Impairment after Stroke
by Maria Rubega, Emanuela Formaggio, Franco Molteni, Eleonora Guanziroli, Roberto Di Marco, Claudio Baracchini, Mario Ermani, Nick S. Ward, Stefano Masiero and Alessandra Del Felice
Entropy 2021, 23(5), 592; https://doi.org/10.3390/e23050592 - 11 May 2021
Cited by 12 | Viewed by 3317
Abstract
Stroke is the commonest cause of disability. Novel treatments require an improved understanding of the underlying mechanisms of recovery. Fractal approaches have demonstrated that a single metric can describe the complexity of seemingly random fluctuations of physiological signals. We hypothesize that fractal algorithms [...] Read more.
Stroke is the commonest cause of disability. Novel treatments require an improved understanding of the underlying mechanisms of recovery. Fractal approaches have demonstrated that a single metric can describe the complexity of seemingly random fluctuations of physiological signals. We hypothesize that fractal algorithms applied to electroencephalographic (EEG) signals may track brain impairment after stroke. Sixteen stroke survivors were studied in the hyperacute (<48 h) and in the acute phase (∼1 week after stroke), and 35 stroke survivors during the early subacute phase (from 8 days to 32 days and after ∼2 months after stroke). We compared their resting-state EEG fractal measures (i.e., Higuchi index, Tortuosity) with those of 11 healthy controls. Both the Higuchi index and Tortuosity values were significantly lower after a stroke throughout the acute and early subacute stage compared to healthy subjects, reflecting brain activity that is significantly less complex. These indices may be promising metrics to track behavioral changes in the very early stage after stroke. Our findings might contribute to the neurorehabilitation quest of identifying reliable biomarkers for a better tailoring of rehabilitation pathways. Full article
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks)
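The Higuchi index used in this study estimates a fractal dimension from how measured curve length scales with the step size k. A hedged sketch of the standard algorithm on synthetic signals with known values (white noise close to 2, Brownian motion close to 1.5); this is not the authors' EEG pipeline.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D signal."""
    N = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)         # subsampled curve with step k
            n_i = len(idx) - 1
            if n_i < 1:
                continue
            L_m = np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_i * k) / k
            lengths.append(L_m)
        lk.append(np.mean(lengths))
    k_arr = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_arr), np.log(lk), 1)
    return slope                             # slope of log L(k) vs log(1/k)

rng = np.random.default_rng(6)
white = rng.normal(size=2000)
brown = np.cumsum(white)
print(f"Higuchi FD: white ~ {higuchi_fd(white):.2f}, "
      f"brown ~ {higuchi_fd(brown):.2f}")
```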
19 pages, 533 KiB  
Article
Uncertainty Relation between Detection Probability and Energy Fluctuations
by Felix Thiel, Itay Mualem, David Kessler and Eli Barkai
Entropy 2021, 23(5), 595; https://doi.org/10.3390/e23050595 - 11 May 2021
Cited by 5 | Viewed by 1891
Abstract
A classical random walker starting on a node of a finite graph will always reach any other node since the search is ergodic, namely it fully explores space, hence the arrival probability is unity. For quantum walks, destructive interference may induce effectively non-ergodic [...] Read more.
A classical random walker starting on a node of a finite graph will always reach any other node since the search is ergodic, namely it fully explores space, hence the arrival probability is unity. For quantum walks, destructive interference may induce effectively non-ergodic features in such search processes. Under repeated projective local measurements, made on a target state, the final detection of the system is not guaranteed since the Hilbert space is split into a bright subspace and an orthogonal dark one. Using this we find an uncertainty relation for the deviations of the detection probability from its classical counterpart, in terms of the energy fluctuations. Full article
(This article belongs to the Special Issue Axiomatic Approaches to Quantum Mechanics)
23 pages, 1694 KiB  
Article
Unveiling Operator Growth Using Spin Correlation Functions
by Matteo Carrega, Joonho Kim and Dario Rosa
Entropy 2021, 23(5), 587; https://doi.org/10.3390/e23050587 - 10 May 2021
Cited by 15 | Viewed by 2866
Abstract
In this paper, we study non-equilibrium dynamics induced by a sudden quench of strongly correlated Hamiltonians with all-to-all interactions. By relying on a Sachdev-Ye-Kitaev (SYK)-based quench protocol, we show that the time evolution of simple spin-spin correlation functions is highly sensitive to the [...] Read more.
In this paper, we study non-equilibrium dynamics induced by a sudden quench of strongly correlated Hamiltonians with all-to-all interactions. By relying on a Sachdev-Ye-Kitaev (SYK)-based quench protocol, we show that the time evolution of simple spin-spin correlation functions is highly sensitive to the degree of k-locality of the corresponding operators, once an appropriate set of fundamental fields is identified. By tracking the time-evolution of specific spin-spin correlation functions and their decay, we argue that it is possible to distinguish between operator-hopping and operator growth dynamics; the latter being a hallmark of quantum chaos in many-body quantum systems. Such an observation, in turn, could constitute a promising tool to probe the emergence of chaotic behavior, rather accessible in state-of-the-art quench setups. Full article
(This article belongs to the Special Issue Entropy and Complexity in Quantum Dynamics)
26 pages, 343 KiB  
Review
The Phase Space Model of Nonrelativistic Quantum Mechanics
by Jaromir Tosiek and Maciej Przanowski
Entropy 2021, 23(5), 581; https://doi.org/10.3390/e23050581 - 08 May 2021
Cited by 3 | Viewed by 2338
Abstract
We focus on several questions arising during the modelling of quantum systems on a phase space. First, we discuss the choice of phase space and its structure. We include an interesting case of discrete phase space. Then, we introduce the respective algebras of [...] Read more.
We focus on several questions arising during the modelling of quantum systems on a phase space. First, we discuss the choice of phase space and its structure. We include an interesting case of discrete phase space. Then, we introduce the respective algebras of functions containing quantum observables. We also consider the possibility of performing strict calculations and indicate cases where only formal considerations can be performed. We analyse alternative realisations of strict and formal calculi, which are determined by different kernels. Finally, two classes of Wigner functions as representations of states are investigated. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
22 pages, 637 KiB  
Article
Selecting an Effective Entropy Estimator for Short Sequences of Bits and Bytes with Maximum Entropy
by Lianet Contreras Rodríguez, Evaristo José Madarro-Capó, Carlos Miguel Legón-Pérez, Omar Rojas and Guillermo Sosa-Gómez
Entropy 2021, 23(5), 561; https://doi.org/10.3390/e23050561 - 30 Apr 2021
Cited by 8 | Viewed by 2467
Abstract
Entropy makes it possible to measure the uncertainty about an information source from the distribution of its output symbols. It is known that the maximum Shannon’s entropy of a discrete source of information is reached when its symbols follow a Uniform distribution. In [...] Read more.
Entropy makes it possible to measure the uncertainty about an information source from the distribution of its output symbols. It is known that the maximum Shannon entropy of a discrete source of information is reached when its symbols follow a uniform distribution. In cryptography, these sources have great applications since they allow for the highest security standards to be reached. In this work, the most effective estimator is selected to estimate entropy in short samples of bytes and bits with maximum entropy. For this, 18 estimators were compared. Results concerning the comparisons published in the literature between these estimators are discussed. The most suitable estimator is determined experimentally, based on its bias and mean square error on short samples of bytes and bits. Full article
(This article belongs to the Special Issue Types of Entropies and Divergences with Their Applications)
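The bias problem driving this comparison is easy to reproduce. A sketch contrasting the plug-in (maximum-likelihood) estimator with the Miller-Madow correction on short uniform byte samples; the paper compares 18 estimators, of which these two are standard examples.

```python
import numpy as np

rng = np.random.default_rng(7)
K, n, trials = 256, 100, 2000            # alphabet, short sample, repetitions
true_H = np.log2(K)                       # 8 bits for uniform bytes

ml, mm = [], []
for _ in range(trials):
    counts = np.bincount(rng.integers(0, K, n), minlength=K)
    p = counts[counts > 0] / n
    h = -(p * np.log2(p)).sum()           # plug-in (ML) estimate
    ml.append(h)
    # Miller-Madow: add (observed symbols - 1) / (2n), converted to bits
    mm.append(h + (np.count_nonzero(counts) - 1) / (2 * n * np.log(2)))

print(f"true H = {true_H:.3f} bits")
print(f"plug-in      bias = {np.mean(ml) - true_H:+.3f} bits")
print(f"Miller-Madow bias = {np.mean(mm) - true_H:+.3f} bits")
```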