Editor's Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research field. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.


Article
Accuracy-Risk Trade-Off Due to Social Learning in Crowd-Sourced Financial Predictions
Entropy 2021, 23(7), 801; https://doi.org/10.3390/e23070801 - 24 Jun 2021
Abstract
A critical question relevant to the increasing importance of crowd-sourced finance is how to optimize collective information processing and decision-making. Here, we investigate an often under-studied aspect of the performance of online traders: beyond focusing on just accuracy, what gives rise to the trade-off between risk and accuracy at the collective level? Answers to this question will lead to designing and deploying more effective crowd-sourced financial platforms and to minimizing issues stemming from risk such as implied volatility. To investigate this trade-off, we conducted a large online Wisdom of the Crowd study where 2037 participants predicted the prices of real financial assets (S&P 500, WTI Oil and Gold prices). Using the data collected, we modeled the belief update process of participants using models inspired by Bayesian models of cognition. We show that subsets of predictions chosen based on their belief update strategies lie on a Pareto frontier between accuracy and risk, mediated by social learning. We also observe that social learning led to superior accuracy during one of our rounds that occurred during the high market uncertainty of the Brexit vote. Full article
(This article belongs to the Special Issue Swarms and Network Intelligence)

Article
Global Sensitivity Analysis Based on Entropy: From Differential Entropy to Alternative Measures
Entropy 2021, 23(6), 778; https://doi.org/10.3390/e23060778 - 19 Jun 2021
Cited by 5
Abstract
Differential entropy can be negative, while discrete entropy is always non-negative. This article shows that negative entropy is a significant flaw when entropy is used as a sensitivity measure in global sensitivity analysis. Global sensitivity analysis based on differential entropy cannot have negative entropy, just as Sobol sensitivity analysis does not have negative variance. Entropy is similar to variance but does not have the same properties. An alternative sensitivity measure based on the approximation of the differential entropy using dome-shaped functionals with non-negative values is proposed in the article. Case studies have shown that the new sensitivity measures lead to a rational structure of sensitivity indices with a significantly lower proportion of higher-order sensitivity indices compared to other types of distributional sensitivity analysis. In terms of the concept of sensitivity analysis, a decrease in variance to zero means a transition from the differential to discrete entropy. The form of this transition is an open question, which can be studied using other scientific disciplines. The search for new functionals for distributional sensitivity analysis is not closed, and other suitable sensitivity measures may be found. Full article

Article
Robust Universal Inference
Entropy 2021, 23(6), 773; https://doi.org/10.3390/e23060773 - 18 Jun 2021
Abstract
Learning and making inference from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples is available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the training set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of “reasonable” models. Then, the worst-case performance in the class is controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees, also for the case where the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives. Full article
(This article belongs to the Special Issue Applications of Information Theory in Statistics)

Article
Computing Accurate Probabilistic Estimates of One-D Entropy from Equiprobable Random Samples
Entropy 2021, 23(6), 740; https://doi.org/10.3390/e23060740 - 11 Jun 2021
Cited by 2
Abstract
We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data generating probability density function (pdf) into equal-probability-mass intervals. And, whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS only requires specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel-type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, for KD the small sample bias can be as large as 10% and for BC as large as 50%. We speculate that estimating quantile locations, rather than bin-probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data generating pdf. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
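As a concrete illustration of the QS idea, here is a minimal sketch (our own, not the authors' reference implementation): it assumes the pdf is approximately uniform between adjacent equal-mass quantiles, so the entropy is approximated by the average log of the scaled quantile spacings, with the number of quantiles set to the ~0.3 fraction of sample size reported above.

```python
import numpy as np

def qs_entropy(sample, frac=0.3):
    """Quantile Spacing entropy estimate (illustrative sketch).

    Assumes the pdf is roughly uniform between adjacent equal-probability-mass
    quantiles, so H ~ (1/N) * sum_i log(N * spacing_i) over the N intervals.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n_q = max(2, int(frac * len(x)))            # number of equal-mass intervals
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_q + 1))
    spacings = np.diff(edges)
    spacings = spacings[spacings > 0]           # guard against tied quantiles
    return float(np.mean(np.log(len(spacings) * spacings)))

# Standard Gaussian: true entropy = 0.5 * log(2 * pi * e) ~ 1.4189 nats
rng = np.random.default_rng(0)
print(qs_entropy(rng.normal(size=1000)))
```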

Article
On Representations of Divergence Measures and Related Quantities in Exponential Families
Entropy 2021, 23(6), 726; https://doi.org/10.3390/e23060726 - 08 Jun 2021
Cited by 1
Abstract
Within exponential families, which may consist of multi-parameter and multivariate distributions, a variety of divergence measures, such as the Kullback–Leibler divergence, the Cressie–Read divergence, the Rényi divergence, and the Hellinger metric, can be explicitly expressed in terms of the respective cumulant function and mean value function. Moreover, the same applies to related entropy and affinity measures. We compile representations scattered in the literature and present a unified approach to the derivation in exponential families. As a statistical application, we highlight their use in the construction of confidence regions in a multi-sample setup. Full article
(This article belongs to the Special Issue Measures of Information)
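As one concrete instance of such a representation (a standard exponential-family identity, stated here for context rather than quoted from the paper): for densities $p_\theta(x) = h(x)\exp(\theta^\top T(x) - \kappa(\theta))$ with cumulant function $\kappa$, the Kullback–Leibler divergence is the Bregman divergence of $\kappa$,

```latex
D_{\mathrm{KL}}\left(p_{\theta_1} \,\|\, p_{\theta_2}\right)
  = \kappa(\theta_2) - \kappa(\theta_1)
  - (\theta_2 - \theta_1)^{\top}\,\nabla\kappa(\theta_1),
```

where $\nabla\kappa(\theta_1) = \mathbb{E}_{\theta_1}[T(X)]$ is the mean value function; the other divergences listed above admit analogous closed forms.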

Article
Switching and Swapping of Quantum Information: Entropy and Entanglement Level
Entropy 2021, 23(6), 717; https://doi.org/10.3390/e23060717 - 04 Jun 2021
Abstract
Information switching and swapping seem to be fundamental elements of quantum communication protocols. Another crucial issue is the presence of entanglement, and its level, in the quantum systems under inspection. In this article, a formal definition of the operation of swapping local quantum information and a proof of its existence, together with some elementary properties analysed through the prism of entropy, are presented. As an example of the use of local information swapping, we demonstrate a realisation of the quantum switch. Entanglement levels during the operation of the switch are calculated with the Negativity measure and a separability criterion based on the von Neumann entropy, spectral decomposition and Schmidt decomposition. Results of numerical experiments, in which the entanglement levels are estimated for the systems under consideration with and without distortions, are presented. The noise is generated by the Dzyaloshinskii–Moriya interaction, and the intrinsic decoherence is modelled by the Milburn equation. This work contains a realisation of the switch in circuit form, built out of elementary quantum gates, and a scheme of the circuit that estimates the levels of entanglement during the switch’s operation. Full article
(This article belongs to the Special Issue Methods and Applications of Quantum Data Processing)

Article
Information and Self-Organization II: Steady State and Phase Transition
Entropy 2021, 23(6), 707; https://doi.org/10.3390/e23060707 - 02 Jun 2021
Cited by 3
Abstract
This paper starts from Schrödinger’s famous question “what is life” and elucidates answers that invoke, in particular, Friston’s free energy principle and its relation to the method of Bayesian inference and to Synergetics’ second foundation, which utilizes Jaynes’ maximum entropy principle. Our presentation reflects the shift from an emphasis on physical principles to principles of information theory and Synergetics. In view of the expected general audience of this issue, we have chosen a somewhat tutorial style that does not require special knowledge of physics but familiarizes the reader with concepts rooted in information theory and Synergetics. Full article
(This article belongs to the Special Issue Information and Self-Organization II)

Article
Information Geometric Theory in the Prediction of Abrupt Changes in System Dynamics
Entropy 2021, 23(6), 694; https://doi.org/10.3390/e23060694 - 31 May 2021
Cited by 4
Abstract
Detection and measurement of abrupt changes in a process can provide us with important tools for decision making in systems management. In particular, it can be utilised to predict the onset of a sudden event such as a rare, extreme event which causes the abrupt dynamical change in the system. Here, we investigate the prediction capability of information theory by focusing on how sensitive information-geometric theory (information length diagnostics) and the entropy-based information-theoretical method (information flow) are to abrupt changes. To this end, we utilise a non-autonomous Kramers equation by including a sudden perturbation to the system to mimic the onset of a sudden event and calculate time-dependent probability density functions (PDFs) and various statistical quantities with the help of numerical simulations. We show that information length diagnostics predict the onset of a sudden event better than the information flow. Furthermore, it is explicitly shown that the information flow, like any other entropy-based measure, has limitations in measuring perturbations which do not affect entropy. Full article
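For context, the information length referred to above is commonly defined in the information-geometric literature (our addition, not quoted from the abstract) as the cumulative statistical change of the time-dependent PDF $p(x,t)$:

```latex
\mathcal{L}(t) = \int_0^{t}
  \sqrt{\int \frac{1}{p(x,s)} \left(\frac{\partial p(x,s)}{\partial s}\right)^{2} dx}
  \; ds ,
```

so a sudden perturbation shows up as rapid growth of $\mathcal{L}(t)$, which is what makes it a candidate early-warning diagnostic.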

Article
Medium Entropy Reduction and Instability in Stochastic Systems with Distributed Delay
Entropy 2021, 23(6), 696; https://doi.org/10.3390/e23060696 - 31 May 2021
Cited by 1
Abstract
Many natural and artificial systems are subject to some sort of delay, which can be in the form of a single discrete delay or distributed over a range of times. Here, we discuss the impact of this distribution on (thermo-)dynamical properties of time-delayed stochastic systems. To this end, we study a simple classical model with white and colored noise, and focus on the class of Gamma-distributed delays which includes a variety of distinct delay distributions typical for feedback experiments and biological systems. A physical application is a colloid subject to time-delayed feedback control, which is, in principle, experimentally realizable by co-moving optical traps. We uncover several unexpected phenomena in regard to the system’s linear stability and its thermodynamic properties. First, increasing the mean delay time can destabilize or stabilize the process, depending on the distribution of the delay. Second, for all considered distributions, the heat dissipated by the controlled system (e.g., the colloidal particle) can become negative, which implies that the delay force extracts energy and entropy from the bath. As we show here, this refrigerating effect is particularly pronounced for exponential delay. For a specific non-reciprocal realization of a control device, we find that the entropic costs, measured by the total entropy production of the system plus controller, are the lowest for exponential delay. The exponential delay further yields the largest stable parameter regions. In this sense, exponential delay represents the most effective and robust type of delayed feedback. Full article
(This article belongs to the Special Issue Nonequilibrium Thermodynamics and Stochastic Processes)

Article
Stochastic Thermodynamics of a Piezoelectric Energy Harvester Model
Entropy 2021, 23(6), 677; https://doi.org/10.3390/e23060677 - 27 May 2021
Abstract
We experimentally study a piezoelectric energy harvester driven by broadband random vibrations. We show that a linear model, consisting of an underdamped Langevin equation for the dynamics of the tip mass, electromechanically coupled with a capacitor and a load resistor, can accurately describe the experimental data. In particular, the theoretical model allows us to define fluctuating currents and to study the stochastic thermodynamics of the system, with focus on the distribution of the extracted work over different time intervals. Our analytical and numerical analysis of the linear model is successfully compared to the experiments. Full article

Article
Socio-Economic Impact of the Covid-19 Pandemic in the U.S.
Entropy 2021, 23(6), 673; https://doi.org/10.3390/e23060673 - 27 May 2021
Cited by 4
Abstract
This paper proposes a dynamic cascade model to investigate the systemic risk posed by sector-level industries within the U.S. inter-industry network. We then use this model to study the effect of the disruptions presented by Covid-19 on the U.S. economy. We construct a weighted digraph G = (V,E,W) using the industry-by-industry total requirements table for 2018, provided by the Bureau of Economic Analysis (BEA). We impose an initial shock that disrupts the production capacity of one or more industries, and we calculate the propagation of production shortages with a modified Cobb–Douglas production function. For the Covid-19 case, we model the initial shock based on the loss of labor between March and April 2020 as reported by the Bureau of Labor Statistics (BLS). The industries within the network are assigned a resilience that determines the ability of an industry to absorb input losses, such that if the rate of input loss exceeds the resilience, the industry fails, and its outputs go to zero. We observe a critical resilience, such that, below this critical value, the network experiences a catastrophic cascade resulting in total network collapse. Lastly, we model the economic recovery from June 2020 through March 2021 using BLS data. Full article
(This article belongs to the Special Issue Structures and Dynamics of Economic Complex Networks)
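A stylized version of such a cascade is easy to state in code. The sketch below is an illustrative simplification under our own assumptions (a column-share input matrix, a multiplicative output attenuation, and a hard resilience threshold), not the paper's exact specification:

```python
import numpy as np

def cascade(W, shock, resilience, n_steps=50):
    """Propagate production shortages on a weighted digraph (illustrative sketch).

    W[i, j]    : share of industry j's inputs sourced from industry i
    shock      : initial fraction of production capacity lost, per industry
    resilience : input-loss rate an industry can absorb before failing (output -> 0)
    """
    output = 1.0 - np.asarray(shock, dtype=float)        # remaining capacity in [0, 1]
    for _ in range(n_steps):
        input_loss = W.T @ (1.0 - output)                # weighted upstream shortfall
        failed = input_loss > resilience
        new_output = np.where(failed, 0.0,
                              output * np.clip(1.0 - input_loss, 0.0, 1.0))
        if np.allclose(new_output, output):
            break
        output = new_output
    return output

# Toy 3-industry network with a strong shock to industry 0
W = np.array([[0.0, 0.4, 0.1],
              [0.3, 0.0, 0.5],
              [0.2, 0.3, 0.0]])
print(cascade(W, shock=[0.6, 0.0, 0.0], resilience=0.5))
```

Sweeping `resilience` downward in such a toy model reproduces the qualitative behavior described above: below a critical value the whole network collapses.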

Article
Gradient Profile Estimation Using Exponential Cubic Spline Smoothing in a Bayesian Framework
Entropy 2021, 23(6), 674; https://doi.org/10.3390/e23060674 - 27 May 2021
Abstract
Attaining reliable gradient profiles is of utmost relevance for many physical systems. In many situations, the estimation of the gradient is inaccurate due to noise. It is common practice to first estimate the underlying system and then compute the gradient profile by taking the subsequent analytic derivative of the estimated system. The underlying system is often estimated by fitting or smoothing the data using other techniques. Taking the subsequent analytic derivative of an estimated function can be ill-posed. This becomes worse as the noise in the system increases. As a result, the uncertainty generated in the gradient estimate increases. In this paper, a theoretical framework for a method to estimate the gradient profile of discrete noisy data is presented. The method was developed within a Bayesian framework. Comprehensive numerical experiments were conducted on synthetic data at different levels of noise. The accuracy of the proposed method was quantified. Our findings suggest that the proposed gradient profile estimation method outperforms the state-of-the-art methods. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)

Article
Ordinal Pattern Dependence in the Context of Long-Range Dependence
Entropy 2021, 23(6), 670; https://doi.org/10.3390/e23060670 - 26 May 2021
Cited by 2
Abstract
Ordinal pattern dependence is a multivariate dependence measure based on the co-movement of two time series. In strong connection to ordinal time series analysis, the ordinal information is taken into account to derive robust results on the dependence between the two processes. This article deals with ordinal pattern dependence for long-range dependent time series, including mixed cases of short- and long-range dependence. We investigate the limit distributions for estimators of ordinal pattern dependence. In doing so, we point out the differences that arise for the underlying time series having different dependence structures. Depending on these assumptions, central and non-central limit theorems are proven. The limit distributions for the latter can be included in the class of multivariate Rosenblatt processes. Finally, a simulation study is provided to illustrate our theoretical findings. Full article
(This article belongs to the Special Issue Time Series Modelling)
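The object being estimated is simple to compute from data. The following sketch (pattern order, centering and normalization are our choices, not necessarily the article's exact estimator) measures how often two series show the same ordinal pattern, relative to what independence would predict:

```python
import numpy as np
from itertools import permutations

def ordinal_patterns(x, order=3):
    """Map a series to the sequence of ordinal patterns of consecutive windows."""
    x = np.asarray(x, dtype=float)
    return [tuple(np.argsort(x[i:i + order])) for i in range(len(x) - order + 1)]

def ordinal_pattern_dependence(x, y, order=3):
    """Fraction of coinciding patterns in x and y, centered by the coincidence
    expected under independence (a simplified sketch of the OPD estimator)."""
    px, py = ordinal_patterns(x, order), ordinal_patterns(y, order)
    coincide = np.mean([a == b for a, b in zip(px, py)])
    # expected coincidence under independence, from marginal pattern frequencies
    pats = list(permutations(range(order)))
    fx = np.array([px.count(p) for p in pats]) / len(px)
    fy = np.array([py.count(p) for p in pats]) / len(py)
    expected = float(fx @ fy)
    return (coincide - expected) / (1.0 - expected)

rng = np.random.default_rng(1)
z = rng.normal(size=500)
x = z + 0.3 * rng.normal(size=500)   # two noisy copies of a common signal
y = z + 0.3 * rng.normal(size=500)
print(ordinal_pattern_dependence(x, y))   # clearly positive co-movement
```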

Article
Shall I Work with Them? A Knowledge Graph-Based Approach for Predicting Future Research Collaborations
Entropy 2021, 23(6), 664; https://doi.org/10.3390/e23060664 - 25 May 2021
Cited by 2
Abstract
We consider the prediction of future research collaborations as a link prediction problem applied on a scientific knowledge graph. To the best of our knowledge, this is the first work on the prediction of future research collaborations that combines structural and textual information of a scientific knowledge graph through a purposeful integration of graph algorithms and natural language processing techniques. Our work: (i) investigates whether the integration of unstructured textual data into a single knowledge graph affects the performance of a link prediction model, (ii) studies the effect of previously proposed graph-kernel-based approaches on the performance of an ML model, as far as the link prediction problem is concerned, and (iii) proposes a three-phase pipeline that enables the exploitation of structural and textual information, as well as of pre-trained word embeddings. We benchmark the proposed approach against classical link prediction algorithms using accuracy, recall, and precision as our performance metrics. Finally, we empirically test our approach through various feature combinations with respect to the link prediction problem. Our experiments with the new COVID-19 Open Research Dataset demonstrate a significant improvement in the abovementioned performance metrics in the prediction of future research collaborations. Full article

Article
Menzerath’s Law in the Syntax of Languages Compared with Random Sentences
Entropy 2021, 23(6), 661; https://doi.org/10.3390/e23060661 - 25 May 2021
Cited by 1
Abstract
The Menzerath law is considered to show an aspect of the complexity underlying natural language. This law suggests that, for a linguistic unit, the size (y) of a linguistic construct decreases as the number (x) of constructs in the unit increases. This article investigates this property syntactically, with x as the number of constituents modifying the main predicate of a sentence and y as the size of those constituents in terms of the number of words. Following previous articles that demonstrated that the Menzerath property held for dependency corpora, such as in Czech and Ukrainian, this article first examines how well the property applies across languages by using the entire Universal Dependency dataset ver. 2.3, including 76 languages over 129 corpora and the Penn Treebank (PTB). The results show that the law holds reasonably well for x>2. Then, for comparison, the property is investigated with syntactically randomized sentences generated from the PTB. These results show that the property is almost reproducible even from simple random data. Further analysis of the property highlights more detailed characteristics of natural language. Full article
(This article belongs to the Section Complexity)
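For reference, the functional form classically fitted in this literature is the Menzerath–Altmann law (added here for context; the article analyses the empirical x–y relationship directly):

```latex
y(x) = a \, x^{b} \, e^{-c x},
```

with constituent size $y$, number of constituents $x$, and fitted constants $a$, $b$, $c$; its qualitative content is exactly the decrease of $y$ with growing $x$ described above.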

Article
Characterization of a Two-Photon Quantum Battery: Initial Conditions, Stability and Work Extraction
Entropy 2021, 23(5), 612; https://doi.org/10.3390/e23050612 - 14 May 2021
Cited by 4
Abstract
We consider a quantum battery that is based on a two-level system coupled with cavity radiation by means of a two-photon interaction. Various figures of merit, such as stored energy, average charging power, energy fluctuations, and extractable work are investigated, considering, as possible initial conditions for the cavity, a Fock state, a coherent state, and a squeezed state. We show that the first state leads to better performance for the battery. However, a coherent state with the same average number of photons, even if it is affected by stronger fluctuations in the stored energy, results in quite interesting performance, in particular since it allows for almost completely extracting the stored energy as usable work at short enough times. Full article
(This article belongs to the Special Issue Non-equilibrium Thermodynamics in the Quantum Regime)

Article
Information Structures for Causally Explainable Decisions
Entropy 2021, 23(5), 601; https://doi.org/10.3390/e23050601 - 13 May 2021
Cited by 2
Abstract
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments. Full article

Article
EEG Fractal Analysis Reflects Brain Impairment after Stroke
Entropy 2021, 23(5), 592; https://doi.org/10.3390/e23050592 - 11 May 2021
Cited by 1
Abstract
Stroke is the commonest cause of disability. Novel treatments require an improved understanding of the underlying mechanisms of recovery. Fractal approaches have demonstrated that a single metric can describe the complexity of seemingly random fluctuations of physiological signals. We hypothesize that fractal algorithms applied to electroencephalographic (EEG) signals may track brain impairment after stroke. Sixteen stroke survivors were studied in the hyperacute (<48 h) and acute (∼1 week after stroke) phases, and 35 stroke survivors during the early subacute phase (from 8 to 32 days and again ∼2 months after stroke); we compared their resting-state EEG fractal measures (i.e., Higuchi index, tortuosity) with those of 11 healthy controls. Both Higuchi index and tortuosity values were significantly lower after a stroke throughout the acute and early subacute stages compared to healthy subjects, reflecting brain activity that is significantly less complex. These indices may be promising metrics to track behavioral changes in the very early stage after stroke. Our findings might contribute to the neurorehabilitation quest of identifying reliable biomarkers for a better tailoring of rehabilitation pathways. Full article
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks)

Article
Causality and Information Transfer Between the Solar Wind and the Magnetosphere–Ionosphere System
Entropy 2021, 23(4), 390; https://doi.org/10.3390/e23040390 - 25 Mar 2021
Cited by 5
Abstract
An information-theoretic approach for detecting causality and information transfer is used to identify interactions of solar activity and interplanetary medium conditions with the Earth’s magnetosphere–ionosphere systems. A causal information transfer from the solar wind parameters to geomagnetic indices is detected. The vertical component of the interplanetary magnetic field (Bz) influences the auroral electrojet (AE) index with an information transfer delay of 10 min and the geomagnetic disturbances at mid-latitudes measured by the symmetric field in the H component (SYM-H) index with a delay of about 30 min. Using a properly conditioned causality measure, no causal link between AE and SYM-H, or between magnetospheric substorms and magnetic storms can be detected. The observed causal relations can be described as linear time-delayed information transfer. Full article

Article
On the Classical Capacity of General Quantum Gaussian Measurement
Entropy 2021, 23(3), 377; https://doi.org/10.3390/e23030377 - 21 Mar 2021
Cited by 1
Abstract
In this paper, we consider the classical capacity problem for Gaussian measurement channels. We establish Gaussianity of the average state of the optimal ensemble in the general case and discuss the Hypothesis of Gaussian Maximizers concerning the structure of the ensemble. Then, we consider the case of one mode in detail, including the dual problem of accessible information of a Gaussian ensemble. Our findings are relevant to practical situations in quantum communications where the receiver is Gaussian (say, a general-dyne detection) and concatenation of the Gaussian channel and the receiver can be considered as one Gaussian measurement channel. Our efforts in this and preceding papers are then aimed at establishing full Gaussianity of the optimal ensemble (usually taken as an assumption) in such schemes. Full article
(This article belongs to the Special Issue Quantum Communication, Quantum Radar, and Quantum Cipher)

Article
Spectral Ranking of Causal Influence in Complex Systems
Entropy 2021, 23(3), 369; https://doi.org/10.3390/e23030369 - 20 Mar 2021
Cited by 1
Abstract
Similar to natural complex systems, such as the Earth’s climate or a living cell, semiconductor lithography systems are characterized by nonlinear dynamics across more than a dozen orders of magnitude in space and time. Thousands of sensors measure relevant process variables at appropriate sampling rates, to provide time series as primary sources for system diagnostics. However, high-dimensionality, non-linearity and non-stationarity of the data are major challenges to efficiently, yet accurately, diagnose rare or new system issues by merely using model-based approaches. To reliably narrow down the causal search space, we validate a ranking algorithm that applies transfer entropy for bivariate interaction analysis of a system’s multivariate time series to obtain a weighted directed graph, and graph eigenvector centrality to identify the system’s most important sources of original information or causal influence. The results suggest that this approach robustly identifies the true drivers or causes of a complex system’s deviant behavior, even when its reconstructed information transfer network includes redundant edges. Full article
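A bare-bones version of this pipeline can be assembled from standard pieces. The sketch below uses a simple binned transfer entropy estimator with one-step histories and an ad hoc edge threshold; both are illustrative simplifications rather than the validated algorithm of the paper:

```python
import numpy as np
import networkx as nx
from collections import Counter

def transfer_entropy(x, y, bins=8):
    """Binned estimate of TE(x -> y) with one-step histories (illustrative)."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    c_xyz = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_next, y, x)
    c_yx  = Counter(zip(yd[:-1], xd[:-1]))           # (y, x)
    c_yy  = Counter(zip(yd[1:], yd[:-1]))            # (y_next, y)
    c_y   = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (yn, yc, xc), c in c_xyz.items():
        te += (c / n) * np.log2(c * c_y[yc] / (c_yx[(yc, xc)] * c_yy[(yn, yc)]))
    return te

def rank_sources(series, names, threshold=0.05):
    """Build a weighted directed TE graph and rank likely sources of causal
    influence via eigenvector centrality on the reversed graph, so that
    outgoing information transfer is what counts."""
    g = nx.DiGraph()
    g.add_nodes_from(names)
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            if i != j:
                te = transfer_entropy(series[i], series[j])
                if te > threshold:
                    g.add_edge(a, b, weight=te)
    return nx.eigenvector_centrality(g.reverse(), weight="weight", max_iter=1000)

# Toy chain a -> b -> c: 'a' should be ranked as the dominant source
rng = np.random.default_rng(2)
a = rng.normal(size=2000)
b = np.roll(a, 1) + 0.5 * rng.normal(size=2000)
c = np.roll(b, 1) + 0.5 * rng.normal(size=2000)
print(rank_sources([a, b, c], ["a", "b", "c"]))
```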

Article
Stirling Refrigerating Machine Modeling Using Schmidt and Finite Physical Dimensions Thermodynamic Models: A Comparison with Experiments
Entropy 2021, 23(3), 368; https://doi.org/10.3390/e23030368 - 19 Mar 2021
Cited by 2
Abstract
The purpose of the study is to show that two simple models, which take into account only the irreversibility due to the temperature difference in the heat exchangers and to imperfect regeneration, are able to indicate refrigerating machine behavior. In the present paper, the finite physical dimensions thermodynamics (FPDT) method and 0-D modeling using the Schmidt model with imperfect regeneration were applied in the study of a β-type Stirling refrigeration machine. The 0-D modeling is improved by including the irreversibility caused by imperfect regeneration and by the finite temperature difference between the gas and the heat exchanger walls. A flowchart of the Stirling refrigerator exergy balance is presented to show the internal and external irreversibilities. It is found that the irreversibility at the regenerator level is more important than that at the heat exchanger level. The energies exchanged by the working gas are expressed in terms of the practical parameters necessary to the engineer throughout the project. The results of the two thermodynamic models are presented in comparison with the experimental results, which leads to validation of the proposed FPDT model for the functional and constructive parameters of the studied refrigerating machine. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications II)

Article
Mechanism Integrated Information
Entropy 2021, 23(3), 362; https://doi.org/10.3390/e23030362 - 18 Mar 2021
Cited by 8
Abstract
The Integrated Information Theory (IIT) of consciousness starts from essential phenomenological properties, which are then translated into postulates that any physical system must satisfy in order to specify the physical substrate of consciousness. We recently introduced an information measure (Barbosa et al., 2020) that captures three postulates of IIT—existence, intrinsicality and information—and is unique. Here we show that the new measure also satisfies the remaining postulates of IIT—integration and exclusion—and create the framework that identifies maximally irreducible mechanisms. These mechanisms can then form maximally irreducible systems, which in turn will specify the physical substrate of conscious experience. Full article
(This article belongs to the Special Issue Integrated Information Theory and Consciousness)

Article
Crowded Trades, Market Clustering, and Price Instability
Entropy 2021, 23(3), 336; https://doi.org/10.3390/e23030336 - 12 Mar 2021
Abstract
Crowded trades by similarly trading peers influence the dynamics of asset prices, possibly creating systemic risk. We propose a market clustering measure using granular trading data. For each stock, the clustering measure captures the degree of trading overlap among any two investors in that stock, based on a comparison with the expected crowding in a null model where trades are maximally random while still respecting the empirical heterogeneity of both stocks and investors. We investigate the effect of crowded trades on stock price stability and present evidence that market clustering has a causal effect on the properties of the tails of the stock return distribution, particularly the positive tail, even after controlling for commonly considered risk drivers. Reduced investor pool diversity could thus negatively affect stock price stability. Full article
(This article belongs to the Special Issue Entropy-Based Applications in Economics, Finance, and Management)

Article
Why Do Big Data and Machine Learning Entail the Fractional Dynamics?
Entropy 2021, 23(3), 297; https://doi.org/10.3390/e23030297 - 28 Feb 2021
Cited by 2
Abstract
Fractional-order calculus is about the differentiation and integration of non-integer orders. Fractional calculus (FC) is based on fractional-order thinking (FOT) and has been shown to help us to understand complex systems better, improve the processing of complex signals, enhance the control of complex systems, increase the performance of optimization, and even extend the potential for creativity. In this article, the authors discuss fractional dynamics, FOT and rich fractional stochastic models. First, the use of fractional dynamics in big data analytics for quantifying big data variability stemming from the generation of complex systems is justified. Second, we show why fractional dynamics is needed in machine learning and optimal randomness when asking: “is there a more optimal way to optimize?”. Third, an optimal randomness case study for a stochastic configuration network (SCN) machine-learning method with heavy-tailed distributions is discussed. Finally, views on big data and (physics-informed) machine learning with fractional dynamics for future research are presented with concluding remarks. Full article
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)

Article
The Principle of Covariance and the Hamiltonian Formulation of General Relativity
Entropy 2021, 23(2), 215; https://doi.org/10.3390/e23020215 - 10 Feb 2021
Cited by 5
Abstract
The implications of the general covariance principle for the establishment of a Hamiltonian variational formulation of classical General Relativity are addressed. The analysis is performed in the framework of the Einstein-Hilbert variational theory. Preliminarily, customary Lagrangian variational principles are reviewed, pointing out the existence of a novel variational formulation in which the class of variations remains unconstrained. As a second step, the conditions of validity of the non-manifestly covariant ADM variational theory are questioned. The main result concerns the proof of its intrinsic non-Hamiltonian character and the failure of this approach in providing a symplectic structure of space-time. In contrast, it is demonstrated that a solution reconciling the physical requirements of covariance and manifest covariance of variational theory with the existence of a classical Hamiltonian structure for the gravitational field can be reached in the framework of synchronous variational principles. Both path-integral and volume-integral realizations of the Hamilton variational principle are explicitly determined and the corresponding physical interpretations are pointed out. Full article
(This article belongs to the Special Issue Quantum Regularization of Singular Black Hole Solutions)
Article
Spectral Properties of Effective Dynamics from Conditional Expectations
Entropy 2021, 23(2), 134; https://doi.org/10.3390/e23020134 - 21 Jan 2021
Cited by 2
Abstract
The reduction of high-dimensional systems to effective models on a smaller set of variables is an essential task in many areas of science. For stochastic dynamics governed by diffusion processes, a general procedure to find effective equations is the conditioning approach. In this paper, we are interested in the spectrum of the generator of the resulting effective dynamics, and how it compares to the spectrum of the full generator. We prove a new relative error bound in terms of the eigenfunction approximation error for reversible systems. We also present numerical examples indicating that, if Kramers–Moyal (KM) type approximations are used to compute the spectrum of the reduced generator, it seems largely insensitive to the time window used for the KM estimators. We analyze the implications of these observations for systems driven by underdamped Langevin dynamics, and show how meaningful effective dynamics can be defined in this setting. Full article

Article
Beyond Causal Explanation: Einstein’s Principle Not Reichenbach’s
Entropy 2021, 23(1), 114; https://doi.org/10.3390/e23010114 - 16 Jan 2021
Cited by 2
Abstract
Our account provides a local, realist and fully non-causal principle explanation for EPR correlations, contextuality, no-signalling, and the Tsirelson bound. Indeed, the account herein is fully consistent with the causal structure of Minkowski spacetime. We argue that retrocausal accounts of quantum mechanics are problematic precisely because they do not fully transcend the assumption that causal or constructive explanation must always be fundamental. Unlike retrocausal accounts, our principle explanation is a complete rejection of Reichenbach’s Principle. Furthermore, we will argue that the basis for our principle account of quantum mechanics is the physical principle sought by quantum information theorists for their reconstructions of quantum mechanics. Finally, we explain why our account is both fully realist and psi-epistemic. Full article
(This article belongs to the Special Issue Quantum Theory and Causation)

Article
Coupling between Blood Pressure and Subarachnoid Space Width Oscillations during Slow Breathing
Entropy 2021, 23(1), 113; https://doi.org/10.3390/e23010113 - 15 Jan 2021
Cited by 1
Abstract
The precise mechanisms connecting the cardiovascular system and the cerebrospinal fluid (CSF) are not well understood in detail. This paper investigates the couplings between the cardiac and respiratory components, as extracted from blood pressure (BP) signals and oscillations of the subarachnoid space width (SAS), collected during slow ventilation and ventilation against inspiration resistance. The experiment was performed on a group of 20 healthy volunteers (12 females and 8 males; BMI = 22.1 ± 3.2 kg/m²; age 25.3 ± 7.9 years). We analysed the recorded signals with a wavelet transform. For the first time, a method based on dynamical Bayesian inference was used to detect the effective phase connectivity and the underlying coupling functions between the SAS and BP signals. There are several new findings. Slow breathing with or without resistance increases the strength of the coupling between the respiratory and cardiac components of both measured signals. We also observed increases in the strength of the coupling between the respiratory component of the BP and the cardiac component of the SAS and vice versa. Slow breathing synchronises the SAS oscillations between the brain hemispheres. It also diminishes the similarity of the coupling between all analysed pairs of oscillators, while inspiratory resistance partially reverses this phenomenon. BP–SAS and SAS–BP interactions may reflect changes in the overall biomechanical characteristics of the brain. Full article

Article
Deep Task-Based Quantization
Entropy 2021, 23(1), 104; https://doi.org/10.3390/e23010104 - 13 Jan 2021
Cited by 11
Abstract
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals, and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer is capable of approaching the optimal performance limits dictated by indirect rate-distortion theory, achievable using vector quantizers and requiring complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, it is demonstrated that the proposed approach can realize reliable bit-efficient hybrid MIMO receivers capable of setting their quantization rule in light of the task. Full article

Article
Generalised Geometric Brownian Motion: Theory and Applications to Option Pricing
Entropy 2020, 22(12), 1432; https://doi.org/10.3390/e22121432 - 18 Dec 2020
Cited by 14
Abstract
Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggest that a simple GBM trajectory is not an adequate representation for asset dynamics, due to irregularities found when comparing its properties with empirical distributions. As a solution, we investigate a generalisation of GBM where the introduction of a memory kernel critically determines the behaviour of the stochastic process. We find the general expressions for the moments, log-moments, and the expectation of the periodic log returns, and then obtain the corresponding probability density functions using the subordination approach. Particularly, we consider subdiffusive GBM (sGBM), tempered sGBM, a mix of GBM and sGBM, and a mix of sGBMs. We utilise the resulting generalised GBM (gGBM) in order to examine the empirical performance of a selected group of kernels in the pricing of European call options. Our results indicate that the performance of a kernel ultimately depends on the maturity of the option and its moneyness. Full article
(This article belongs to the Special Issue New Trends in Random Walks)
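For orientation, the classical baseline being generalised here is plain-GBM option pricing; the sketch below prices a European call by Monte Carlo under standard GBM and cross-checks it against the Black–Scholes formula (the memory-kernel and subordination machinery of gGBM is not reproduced):

```python
import numpy as np
from scipy.stats import norm

def mc_call_price(s0, k, r, sigma, t, n_paths=200_000, seed=0):
    """European call under plain GBM via Monte Carlo (risk-neutral measure)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.mean(np.maximum(st - k, 0.0))

def bs_call_price(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price, for cross-checking the simulation."""
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s0 * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

print(mc_call_price(100, 105, 0.02, 0.2, 1.0))   # both ~ 6.70
print(bs_call_price(100, 105, 0.02, 0.2, 1.0))
```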

Article
Examining the Causal Structures of Deep Neural Networks Using Information Theory
Entropy 2020, 22(12), 1429; https://doi.org/10.3390/e22121429 - 18 Dec 2020
Cited by 1
Abstract
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN’s causal structure as it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a suite of metrics based on information theory to quantify and track changes in the causal structure of DNNs during training. Specifically, we introduce the effective information (EI) of a feedforward DNN, which is the mutual information between layer input and output following a maximum-entropy perturbation. The EI can be used to assess the degree of causal influence nodes and edges have over their downstream targets in each layer. We show that the EI can be further decomposed in order to examine the sensitivity of a layer (measured by how well edges transmit perturbations) and the degeneracy of a layer (measured by how edge overlap interferes with transmission), along with estimates of the amount of integrated information of a layer. Together, these properties define where each layer lies in the “causal plane”, which can be used to visualize how layer connectivity becomes more sensitive or degenerate over time, and how integration changes during training, revealing how the layer-by-layer causal structure differentiates. These results may help in understanding the generalization capabilities of DNNs and provide foundational tools for making DNNs both more generalizable and more explainable. Full article
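The core quantity can be illustrated on a discrete channel. The sketch below computes EI for a row-stochastic transition matrix, i.e., the mutual information between a maximum-entropy (uniform) input and the output; adapting this to the continuous layers of a feedforward DNN via maximum-entropy perturbations is the contribution of the paper and is not reproduced here:

```python
import numpy as np

def _xlog2x(p):
    """Elementwise p * log2(p) with the convention 0 * log2(0) = 0."""
    p = np.asarray(p, dtype=float)
    return np.where(p > 0, p * np.log2(np.where(p > 0, p, 1.0)), 0.0)

def effective_information(tpm):
    """EI of a discrete channel tpm[i, j] = P(Y = j | X = i):
    I(X; Y) = H(Y) - H(Y|X) under a uniform (max-entropy) input X."""
    tpm = np.asarray(tpm, dtype=float)
    h_y = -_xlog2x(tpm.mean(axis=0)).sum()          # entropy of mean output dist
    h_y_given_x = -_xlog2x(tpm).sum(axis=1).mean()  # mean per-row entropy
    return h_y - h_y_given_x

print(effective_information(np.eye(4)))                        # 2.0 bits: deterministic, non-degenerate
print(effective_information(np.tile([1.0, 0, 0, 0], (4, 1))))  # 0.0 bits: fully degenerate
```

The two extreme cases already display the sensitivity/degeneracy trade-off discussed above: perfect transmission maximizes EI, while overlapping (degenerate) mappings drive it to zero.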

Article
A Comprehensive Framework for Uncovering Non-Linearity and Chaos in Financial Markets: Empirical Evidence for Four Major Stock Market Indices
Entropy 2020, 22(12), 1435; https://doi.org/10.3390/e22121435 - 18 Dec 2020
Cited by 5
Abstract
The presence of chaos in the financial markets has been the subject of a great number of studies, but the results have been contradictory and inconclusive. This research tests for the existence of nonlinear patterns and chaotic nature in four major stock market indices: namely Dow Jones Industrial Average, Ibex 35, Nasdaq-100 and Nikkei 225. To this end, a comprehensive framework has been adopted encompassing a wide range of techniques and the most suitable methods for the analysis of noisy time series. By using daily closing values from January 1992 to July 2013, this study employs twelve techniques and tools of which five are specific to detecting chaos. The findings show no clear evidence of chaos, suggesting that the behavior of financial markets is nonlinear and stochastic. Full article
(This article belongs to the Special Issue Complexity in Economic and Social Systems)
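To give a concrete sense of the chaos-specific tools such a framework typically draws on, here is a minimal Rosenstein-style sketch for estimating the largest Lyapunov exponent of a scalar series via time-delay embedding; the embedding dimension, delay, Theiler window, and fit horizon are illustrative defaults, not the parameter choices of this study.

```python
import numpy as np

def embed(x, dim=3, tau=1):
    """Time-delay embedding of a scalar series into R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def largest_lyapunov(x, dim=3, tau=1, theiler=10, horizon=8):
    """Rosenstein-style estimate: follow each point's nearest neighbor
    and fit a line to the mean log divergence over the first few steps."""
    Y = embed(np.asarray(x, float), dim, tau)
    n = len(Y)
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    for i in range(n):  # exclude temporally close pairs
        d2[i, max(0, i - theiler):i + theiler + 1] = np.inf
    nn = d2.argmin(axis=1)
    mean_log = []
    for k in range(1, horizon + 1):
        pairs = [(i, j) for i, j in enumerate(nn) if i + k < n and j + k < n]
        d = np.array([np.linalg.norm(Y[i + k] - Y[j + k]) for i, j in pairs])
        mean_log.append(np.log(d[d > 0]).mean())
    # Slope of log-divergence vs. step count ~ largest exponent.
    return np.polyfit(np.arange(1, horizon + 1), mean_log, 1)[0]

# Sanity check on the logistic map at r = 4, whose exponent is ln 2.
x = np.empty(1200); x[0] = 0.3
for t in range(len(x) - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
print(f"estimated lambda_1 = {largest_lyapunov(x):.2f} (theory: 0.69)")
```

A positive, stable slope across embedding choices is taken as evidence of chaos; noisy market series additionally demand surrogate-data tests and noise-robust variants, which is exactly why the paper assembles a whole battery of methods rather than relying on a single estimator.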
Article
Foundations of the Quaternion Quantum Mechanics
Entropy 2020, 22(12), 1424; https://doi.org/10.3390/e22121424 - 17 Dec 2020
Cited by 4
Abstract
We show that quaternion quantum mechanics has well-founded mathematical roots and can be derived from the model of the elastic continuum due to the French mathematician Augustin Cauchy; that is, it can be regarded as representing the physical reality of an elastic continuum. Starting from the Cauchy theory (the classical balance equations for an isotropic Cauchy-elastic material) and using the Hamilton quaternion algebra, we present a rigorous derivation of the quaternion form of the non-relativistic and relativistic wave equations. The family of wave equations and the Poisson equation are a straightforward consequence of the quaternion representation of the Cauchy model of the elastic continuum. This is the most general kind of quantum mechanics possessing the same calculus of assertions as conventional quantum mechanics. The problem of where the imaginary ‘i’ in the Schrödinger equation should emerge from is solved. This interpretation is a serious attempt to describe the ontology of quantum mechanics and demonstrates that, besides Bohmian mechanics, a complete ontological interpretation of quantum theory exists. The model can be generalized and falsified; to make the theory testable, we specify problems that would expose its falsity.
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
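For orientation, the classical starting point referred to above, the balance equation of an isotropic, homogeneous Cauchy-elastic continuum with displacement field u(x, t), is the standard Navier-Cauchy equation of linear elastodynamics (textbook material, not a formula quoted from the paper):

```latex
% Momentum balance of an isotropic linear-elastic continuum,
% with density \rho and Lame parameters \lambda, \mu:
\rho\,\frac{\partial^{2}\mathbf{u}}{\partial t^{2}}
  = \mu\,\nabla^{2}\mathbf{u}
  + (\lambda + \mu)\,\nabla(\nabla\!\cdot\mathbf{u}).
```

The paper's quaternion wave equations arise when displacement fields of this type are rewritten in Hamilton's quaternion algebra, which is where the imaginary unit of the Schrödinger equation acquires its proposed mechanical meaning.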
Article
Artificial Intelligence for Modeling Real Estate Price Using Call Detail Records and Hybrid Machine Learning Approach
Entropy 2020, 22(12), 1421; https://doi.org/10.3390/e22121421 - 16 Dec 2020
Cited by 9
Abstract
The advancement of accurate models for predicting real estate prices is of utmost importance for urban development and several critical economic functions. Due to significant uncertainties and dynamic variables, real estate has been modeled as a complex system. In this study, a novel machine learning method is proposed to tackle the complexity of real estate modeling. Call detail records (CDR) provide excellent opportunities for in-depth investigation of mobility characteristics. This study explores the potential of CDR for predicting real estate prices with the aid of artificial intelligence (AI). Several essential mobility entropy factors, including dweller entropy, dweller gyration, workers’ entropy, worker gyration, dwellers’ work distance, and workers’ home distance, are used as input variables. The prediction model is a multi-layered perceptron (MLP) trained with the evolutionary algorithm of particle swarm optimization (PSO). Model performance is evaluated using the mean square error (MSE), sustainability index (SI), and Willmott’s index (WI). The proposed model showed promising results, revealing that workers’ entropy and dwellers’ work distances directly influence the real estate price, whereas dweller gyration, dweller entropy, workers’ gyration, and workers’ home distance had a minimal effect on the price. Furthermore, it is shown that the flow of activities and the entropy of mobility are often associated with regions with lower real estate prices.
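A minimal sketch of the hybrid MLP-PSO idea: particle swarm optimization searches the weight space of a small multi-layer perceptron directly, in place of gradient-based training. The network size, PSO hyperparameters, and the synthetic features (stand-ins for the six mobility entropy factors) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_HID = 6, 8                       # six mobility factors -> one price
N_W = N_IN * N_HID + N_HID + N_HID + 1   # total weights and biases

def mlp(theta, X):
    """Tiny MLP: one tanh hidden layer, linear output, weights in theta."""
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID]; i += N_HID
    return np.tanh(X @ W1 + b1) @ W2 + theta[i]

def pso_train(X, y, n_particles=40, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO minimizing MSE over the MLP weight vector."""
    mse = lambda th: np.mean((mlp(th, X) - y) ** 2)
    pos = rng.normal(0, 1, (n_particles, N_W))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, N_W))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        f = np.array([mse(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Synthetic stand-ins for the six mobility entropy factors.
X = rng.random((200, N_IN))
y = 2.0 * X[:, 0] - 1.0 * X[:, 4] + 0.1 * rng.normal(size=200)
theta, err = pso_train(X, y)
print(f"training MSE after PSO: {err:.4f}")
```

The appeal of PSO here is that it needs only function evaluations, so the fitness can be any of the reported criteria (MSE, SI, WI) without reworking the optimizer.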
Article
Statistical Features in High-Frequency Bands of Interictal iEEG Work Efficiently in Identifying the Seizure Onset Zone in Patients with Focal Epilepsy
Entropy 2020, 22(12), 1415; https://doi.org/10.3390/e22121415 - 15 Dec 2020
Cited by 5
Abstract
A computer-aided system for identifying the seizure onset zone (SOZ) from interictal and ictal electroencephalograms (EEGs) is desired by epileptologists. This study introduces statistical features of high-frequency components (HFCs) in interictal intracranial electroencephalograms (iEEGs) to identify possible SOZ channels. The activity of HFCs in interictal iEEGs, including the ripple and fast-ripple bands, is known to be associated with epileptic seizures. This paper proposes decomposing multi-channel interictal iEEG signals into a number of subbands. For every 20 s segment, twelve features are computed from each subband. A mutual information (MI)-based method with grid search was applied to select the most prominent bands and features. A gradient-boosting decision-tree-based algorithm called LightGBM was used to score each segment of each channel, and the segment scores were averaged to obtain a final score for each channel. The possible SOZ channels were localized as those with the higher scores. Experiments with eleven epilepsy patients demonstrate the efficiency of the proposed design compared to state-of-the-art methods.
(This article belongs to the Section Signal and Data Analysis)
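A compressed sketch of the scoring pipeline just described, using LightGBM's scikit-learn interface and scikit-learn's mutual-information scorer; the synthetic data, feature counts, and selection threshold are placeholders, and a real pipeline would evaluate on held-out patients rather than on its own training segments.

```python
import numpy as np
import lightgbm as lgb
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)

# Synthetic stand-in: one row of statistical features per 20 s segment
# per channel (real input: features of high-frequency iEEG subbands).
n_channels, n_segments, n_features = 12, 50, 12
X = rng.normal(size=(n_channels * n_segments, n_features))
chan = np.repeat(np.arange(n_channels), n_segments)
y = (chan < 3).astype(int)          # pretend channels 0-2 lie in the SOZ
X[y == 1] += 0.8                    # give SOZ segments a detectable shift

# MI-based feature screening (the paper pairs this with a grid search).
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-6:]          # retain the most informative features

clf = lgb.LGBMClassifier(n_estimators=200, random_state=0)
clf.fit(X[:, keep], y)

# Score every segment, then average per channel; the highest-scoring
# channels are flagged as possible SOZ channels.
seg_scores = clf.predict_proba(X[:, keep])[:, 1]
chan_scores = np.array([seg_scores[chan == c].mean()
                        for c in range(n_channels)])
print("possible SOZ channels:", np.argsort(chan_scores)[::-1][:3])
```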
Article
Diffusion Limitations and Translocation Barriers in Atomically Thin Biomimetic Pores
Entropy 2020, 22(11), 1326; https://doi.org/10.3390/e22111326 - 20 Nov 2020
Cited by 3
Abstract
Ionic transport in nano- to sub-nano-scale pores is highly dependent on translocation barriers and potential wells. These features in the free-energy landscape are primarily the result of ion dehydration and electrostatic interactions. For pores in atomically thin membranes, such as graphene, other factors come into play. Ion dynamics both inside and outside the geometric volume of the pore can be critical in determining the transport properties of the channel due to several commensurate length scales, such as the effective membrane thickness, the radii of the first and second hydration layers, the pore radius, and the Debye length. In particular, for biomimetic pores, such as the graphene crown ether we examine here, there are regimes where transport is highly sensitive to the pore size due to the interplay of dehydration and interaction with the pore charge. Picometer changes in the size, e.g., due to a minute strain, can lead to a large change in conductance. Outside of these regimes, the small pore size itself gives a large resistance, even when electrostatic factors and dehydration compensate each other to give a relatively flat—e.g., near barrierless—free-energy landscape. The permeability, though, can still be large, and ions will translocate rapidly after they arrive within the capture radius of the pore. This, in turn, leads to diffusion and drift effects dominating the conductance. The current thus plateaus and becomes effectively independent of the pore’s free-energy characteristics. Measurement of this effect will give an estimate of the magnitude of kinetically limiting features and experimentally constrain the local electromechanical conditions.
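A back-of-the-envelope version of the plateau regime described above: once the pore's internal free-energy profile is nearly flat, the current is set by how fast diffusion delivers ions to the pore mouth. For a perfectly absorbing disk-like opening of radius a in a wall, the textbook diffusion-limited arrival rate (standard transport theory, not a formula quoted from the paper) is:

```latex
% Diffusion-limited capture rate at an absorbing disk of radius a,
% with bulk ion concentration c and diffusion constant D:
J_{\mathrm{cap}} = 4\,D\,c\,a,
% so the plateau current scales with delivery to the capture radius,
% not with the pore-internal free-energy landscape:
I_{\mathrm{plateau}} \sim e\,J_{\mathrm{cap}}.
```

Deviations of the measured plateau from this delivery-limited scale are what allow one to back out the magnitude of the remaining kinetic barriers.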
Article
Entropy Ratio and Entropy Concentration Coefficient, with Application to the COVID-19 Pandemic
Entropy 2020, 22(11), 1315; https://doi.org/10.3390/e22111315 - 18 Nov 2020
Cited by 8
Abstract
In order to study the spread of an epidemic over a region as a function of time, we introduce an entropy ratio U describing the uniformity of infections over various states and their districts, and an entropy concentration coefficient C = 1 − U. The latter is a multiplicative version of the Kullback-Leibler distance, with values between 0 and 1. For product measures and self-similar phenomena, it does not depend on the measurement level. Hence, C is an alternative to Gini’s concentration coefficient for measures with variation on different levels. Simple examples concern population density and gross domestic product. Application to time-series patterns is indicated with a Markov chain. For the COVID-19 pandemic, entropy ratios indicate a homogeneous distribution of infections and the potential of local action when compared to measures for a whole region.
(This article belongs to the Special Issue Information Theory and Symbolic Analysis: Theory and Applications)
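The abstract does not spell out the normalization of U, so the sketch below adopts one natural reading consistent with the description of C as a multiplicative Kullback-Leibler quantity: U = exp(-D_KL(p || uniform)) = exp(H(p))/n, the effective number of equally affected districts divided by the actual number n. Treat this definition as an assumption for illustration, not necessarily the paper's exact one.

```python
import numpy as np

def entropy_ratio(counts):
    """Assumed reading: U = exp(H(p)) / n, the 'effective number' of
    equally affected districts divided by the actual number n.
    U = 1 for perfectly uniform spread; U -> 1/n for a single hotspot."""
    counts = np.asarray(counts, float)
    p = counts / counts.sum()
    p = p[p > 0]
    H = -(p * np.log(p)).sum()
    return float(np.exp(H) / len(counts))

def concentration(counts):
    """C = 1 - U, a multiplicative analogue of the KL distance to the
    uniform distribution, since U = exp(-D_KL(p || uniform))."""
    return 1.0 - entropy_ratio(counts)

print(concentration([100, 100, 100, 100]))  # 0.0: homogeneous spread
print(concentration([370, 10, 10, 10]))     # ~0.65: one hotspot dominates
```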
Article
Coherence and Entanglement Dynamics in Training Variational Quantum Perceptron
Entropy 2020, 22(11), 1277; https://doi.org/10.3390/e22111277 - 11 Nov 2020
Cited by 1
Abstract
What contributes to the supremacy of quantum computation? One candidate is quantum coherence, since it is a resource used in various quantum algorithms. We reveal that quantum coherence contributes to the training of the variational quantum perceptron proposed by Y. Du et al., arXiv:1809.06056 (2018). Specifically, we show that in the first part of the training of the variational quantum perceptron, the quantum coherence of the total system is concentrated in the index register, and that in the second part, the Grover algorithm consumes the quantum coherence in the index register. This implies that quantum coherence distribution and quantum coherence depletion are both required in the training of the variational quantum perceptron. In addition, we investigate the behavior of entanglement during the training. We show that the bipartite concurrence between the feature and index registers decreases because the Grover operation is performed only on the index register, and we reveal that the concurrence between the two qubits of the index register increases as the variational quantum perceptron is trained.
(This article belongs to the Special Issue Physical Information and the Physical Foundations of Computation)
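For readers who want to track the quantities discussed here, below is a small self-contained sketch of the two standard measures involved: the l1-norm of coherence (the sum of absolute off-diagonal elements of a density matrix) and Wootters' concurrence for a two-qubit state. The Bell state at the end is only a sanity check, not data from the paper.

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm of coherence: sum of |off-diagonal| entries of rho."""
    return float(np.abs(rho).sum() - np.trace(np.abs(rho)))

def concurrence(rho):
    """Wootters' concurrence for a two-qubit density matrix rho:
    C = max(0, l1 - l2 - l3 - l4), where the l_i are the decreasing
    square roots of the eigenvalues of rho (sy x sy) rho* (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

# Sanity check: the Bell state (|00> + |11>)/sqrt(2) has unit
# concurrence and l1-coherence 1 in the computational basis.
psi = np.zeros(4, complex); psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(concurrence(rho), l1_coherence(rho))   # -> 1.0, 1.0
```

Tracking these two numbers over training iterations is exactly the kind of diagnostic the paper uses to separate the coherence-concentration phase from the Grover-driven coherence-depletion phase.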
Article
Quantum Finite-Time Thermodynamics: Insight from a Single Qubit Engine
Entropy 2020, 22(11), 1255; https://doi.org/10.3390/e22111255 - 04 Nov 2020
Cited by 21
Abstract
Incorporating time into thermodynamics allows for addressing the tradeoff between efficiency and power. A qubit engine serves as a toy model in order to study this tradeoff from first principles, based on the quantum theory of open systems. We study the quantum origin of irreversibility, which stems from heat transport, quantum friction, and thermalization in the presence of external driving. We construct various finite-time engine cycles that are based on the Otto and Carnot templates. Our analysis highlights the role of coherence and the quantum origin of entropy production.
(This article belongs to the Special Issue Finite-Time Thermodynamics)
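As a concrete anchor for the Otto-type cycles mentioned above: for a qubit whose level splitting is alternated between a hot-stroke value and a cold-stroke value, the ideal (infinitely slow) Otto cycle has the textbook efficiency below; finite-time driving then reduces the extracted work through friction-like coherence generation. This is the standard quantum Otto result, not a new expression from the paper.

```latex
% Ideal quantum Otto efficiency for a qubit engine whose energy gap
% alternates between \omega_h (hot stroke) and \omega_c (cold stroke),
% bounded by the Carnot efficiency for bath temperatures T_h > T_c:
\eta_{\mathrm{Otto}} = 1 - \frac{\omega_c}{\omega_h}
  \;\le\; \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}.
```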