
Entropy, Volume 23, Issue 1 (January 2021) – 128 articles

Cover Story: Permeation through a potassium channel is quickly terminated after the channel opens, via selectivity filter inactivation. This process is essential to the functioning of potassium channels, including bacterial KcsA. We compared the behavior of the wild-type KcsA channel and its mutants using molecular dynamics simulations, identifying residues with distinct properties. By analyzing causal links between rearrangements of these residues and the permeating ions, we unraveled a network that acts as a self-organized system in regulating ion permeation. Changes in one part of the network can lead to adaptation in other regions, and the network can dynamically switch to an inactive state.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
A Refined Composite Multivariate Multiscale Fluctuation Dispersion Entropy and Its Application to Multivariate Signal of Rotating Machinery
Entropy 2021, 23(1), 128; https://doi.org/10.3390/e23010128 - 19 Jan 2021
Cited by 1 | Viewed by 1384
Abstract
In the fault monitoring of rotating machinery, the vibration signals of bearings and gears in complex operating environments have poor stationarity and high noise. Accurately and efficiently identifying the various fault categories is therefore a major challenge in rotary fault diagnosis. Most existing methods analyze only a single-channel vibration signal and do not comprehensively consider multi-channel vibration signals. This paper therefore presents Refined Composite Multivariate Multiscale Fluctuation Dispersion Entropy (RCMMFDE), a method that extracts the recognition information of multi-channel signals at different scale factors, with the refined composite analysis ensuring recognition stability. Simulation results show that the method has low sensitivity to signal length and strong anti-noise ability. Combining it with Joint Mutual Information Maximisation (JMIM) and a support vector machine (SVM), we propose the RCMMFDE-JMIM-SVM fault diagnosis method: RCMMFDE extracts the state characteristics of the multiple vibration signals of the rotary machine, JMIM selects the sensitive features, and an SVM classifies the different states of the machine. The validity of the method is verified on a composite gear fault data set and a bearing fault data set, on which the diagnostic accuracy is 99.25% and 100.00%, respectively. The experimental results show that RCMMFDE-JMIM-SVM can effectively recognize multiple signals. Full article

Article
An Entropy Metric for Regular Grammar Classification and Learning with Recurrent Neural Networks
Entropy 2021, 23(1), 127; https://doi.org/10.3390/e23010127 - 19 Jan 2021
Viewed by 890
Abstract
Recently, there has been a resurgence of formal language theory in deep learning research. However, most research has focused on the more practical problem of representing symbolic knowledge with machine learning; there has been limited research exploring the fundamental connection between the two. To better understand the internal structures of regular grammars and their corresponding complexity, we focus on categorizing regular grammars by using both theoretical analysis and empirical evidence. Specifically, motivated by the concentric ring representation, we relaxed the original order information and introduced an entropy metric for describing the complexity of different regular grammars. Based on the entropy metric, we categorized regular grammars into three disjoint subclasses: the polynomial, exponential, and proportional classes. In addition, several classification theorems are provided for different representations of regular grammars. Our analysis was validated by examining the process of learning grammars with multiple recurrent neural networks. Our results show that, as expected, more complex grammars are generally more difficult to learn. Full article
(This article belongs to the Special Issue Entropy in Data Analysis)
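The polynomial/exponential distinction can be made concrete by counting the accepted strings of each length, which a DFA's transition structure yields directly. The sketch below uses our own conventions and is not the paper's entropy metric, only a way to observe the growth classes it formalizes:

```python
def count_accepted(delta, start, accept, n_max, n_syms):
    """Count accepted strings of each length 0..n_max for a DFA.

    delta: dict mapping (state, symbol) -> state; symbols are 0..n_syms-1.
    Returns a list whose entry n is the number of accepted strings of length n.
    """
    counts = []
    # vec[s] = number of length-n strings that drive `start` to state s
    vec = {start: 1}
    for _ in range(n_max + 1):
        counts.append(sum(vec.get(s, 0) for s in accept))
        nxt = {}
        for s, cnt in vec.items():
            for a in range(n_syms):
                t = delta[(s, a)]
                nxt[t] = nxt.get(t, 0) + cnt
        vec = nxt
    return counts
```

A grammar accepting every binary string has counts 1, 2, 4, 8, … (exponential growth), while the language 0* has counts 1, 1, 1, … (polynomial growth).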

Article
Information-Theoretic Generalization Bounds for Meta-Learning and Applications
Entropy 2021, 23(1), 126; https://doi.org/10.3390/e23010126 - 19 Jan 2021
Cited by 8 | Viewed by 1050
Abstract
Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved, tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered that use either separate within-task training and test sets, like model agnostic meta-learning (MAML), or joint within-task training and test sets, like Reptile. Extending the existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and the corresponding data set to capture within-task uncertainty. Tighter bounds are then developed for the two classes via novel individual task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning. Full article
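For the conventional single-task setting that these results extend, the bound of Xu and Raginsky for a σ-subgaussian loss and n training samples can be written as follows. This is shown schematically: the paper's meta-learning bounds replace the model W and data set S with the meta-learner's output and the meta-training data of N tasks, and the ITMI variants tighten the bound via per-sample mutual information terms.

```latex
\left|\,\mathbb{E}\big[\operatorname{gen}(W,S)\big]\right|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W;S)}
```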

Review
Dynamics of Ion Channels via Non-Hermitian Quantum Mechanics
Entropy 2021, 23(1), 125; https://doi.org/10.3390/e23010125 - 19 Jan 2021
Cited by 1 | Viewed by 1241
Abstract
We study the dynamics and thermodynamics of ion transport in narrow, water-filled channels, considered as effective 1D Coulomb systems. The long-range nature of the inter-ion interactions comes about due to the mismatch of dielectric constants between the water and the surrounding medium, which confines the electric field to stay mostly within the water-filled channel. The statistical mechanics of such Coulomb systems is dominated by entropic effects, which may be accurately accounted for by mapping onto an effective quantum mechanics. In the presence of multivalent ions, the corresponding quantum mechanics appears to be non-Hermitian. In this review we discuss a framework for semiclassical calculations for the effective non-Hermitian Hamiltonians. Non-Hermiticity elevates WKB action integrals from the real line to closed cycles on complex Riemann surfaces, where direct calculations are not attainable. We circumvent this issue by applying tools from algebraic topology, such as the Picard-Fuchs equation. We discuss how its solutions relate to the thermodynamics and correlation functions of multivalent solutions within narrow, water-filled channels. Full article

Article
Quantum Mechanics and Its Evolving Formulations
Entropy 2021, 23(1), 124; https://doi.org/10.3390/e23010124 - 19 Jan 2021
Cited by 1 | Viewed by 763
Abstract
In this paper, we discuss the time evolution of the quantum mechanics formalism. Starting from the heroic beginnings of Heisenberg and Schrödinger, we cover successively the rigorous Hilbert space formulation of von Neumann, the practical bra-ket formalism of Dirac, and the more recent rigged Hilbert space approach. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations)
Article
Variationally Inferred Sampling through a Refined Bound
Entropy 2021, 23(1), 123; https://doi.org/10.3390/e23010123 - 19 Jan 2021
Cited by 2 | Viewed by 871
Abstract
In this work, a framework to boost the efficiency of Bayesian inference in probabilistic models is introduced by embedding a Markov chain sampler within a variational posterior approximation. We call this framework “refined variational approximation”. Its strengths are its ease of implementation and the automatic tuning of sampler parameters, leading to a faster mixing time through automatic differentiation. Several strategies to approximate evidence lower bound (ELBO) computation are also introduced. Its efficient performance is showcased experimentally using state-space models for time-series data, a variational encoder for density estimation and a conditional variational autoencoder as a deep Bayes classifier. Full article
(This article belongs to the Special Issue Approximate Bayesian Inference)
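The ELBO targeted by such variational schemes can be estimated by plain Monte Carlo with the reparameterization trick. The toy conjugate model below is our own choice (not one of the paper's experiments), picked so that the estimate can be checked against the exact log evidence:

```python
import math
import random

def elbo_mc(x, m, s, n_samples=5000, seed=0):
    """Monte Carlo ELBO for a toy conjugate model:
    prior p(theta) = N(0, 1), likelihood p(x|theta) = N(theta, 1),
    variational posterior q(theta) = N(m, s^2)."""
    rng = random.Random(seed)

    def log_normal_pdf(v, mu, sigma):
        return (-0.5 * math.log(2 * math.pi * sigma ** 2)
                - (v - mu) ** 2 / (2 * sigma ** 2))

    total = 0.0
    for _ in range(n_samples):
        theta = m + s * rng.gauss(0, 1)          # reparameterization trick
        total += (log_normal_pdf(x, theta, 1)    # log-likelihood
                  + log_normal_pdf(theta, 0, 1)  # log-prior
                  - log_normal_pdf(theta, m, s)) # minus log q
    return total / n_samples
```

For x = 1 the exact posterior is N(0.5, 0.5); setting q equal to it makes the ELBO equal the log evidence log N(1; 0, 2), while any other q gives a strictly smaller value. That gap is exactly what a refined, sampler-embedded approximation aims to shrink.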

Article
Wave-Particle Duality Relation with a Quantum Which-Path Detector
Entropy 2021, 23(1), 122; https://doi.org/10.3390/e23010122 - 18 Jan 2021
Cited by 2 | Viewed by 794
Abstract
According to the relevant theories on duality relation, the summation of the extractable information of a quanton’s wave and particle properties, which are characterized by interference visibility V and path distinguishability D, respectively, is limited. However, this relation is violated upon quantum superposition between the wave-state and particle-state of the quanton, which is caused by the quantum beamsplitter (QBS). Along another line, recent studies have considered quantum coherence C in the l1-norm measure as a candidate for the wave property. In this study, we propose an interferometer with a quantum which-path detector (QWPD) and examine the generalized duality relation based on C. We find that this relationship still holds under such a circumstance, but the interference between these two properties causes the full-particle property to be observed when the QWPD system is partially present. Using a pair of polarization-entangled photons, we experimentally verify our analysis in the two-path case. This study extends the duality relation between coherence and path information to the quantum case and reveals the effect of quantum superposition on the duality relation. Full article
(This article belongs to the Special Issue Quantum Information and Quantum Optics)

Article
Kolmogorovian versus Non-Kolmogorovian Probabilities in Contextual Theories
Entropy 2021, 23(1), 121; https://doi.org/10.3390/e23010121 - 18 Jan 2021
Viewed by 609
Abstract
Most scholars maintain that quantum mechanics (QM) is a contextual theory and that quantum probability does not allow for an epistemic (ignorance) interpretation. By inquiring into possible connections between contextuality and non-classical probabilities, we show that a class TμMP of theories can be selected in which probabilities are introduced as classical averages of Kolmogorovian probabilities over sets of (microscopic) contexts, which endows them with an epistemic interpretation. The conditions characterizing TμMP are compatible with classical mechanics (CM), statistical mechanics (SM), and QM, hence we assume that these theories belong to TμMP. In the case of CM and SM, this assumption is irrelevant, as all of the notions introduced in them as members of TμMP reduce to standard notions. In the case of QM, it leads to interpreting quantum probability as a derived notion in a Kolmogorovian framework, explains why it is non-Kolmogorovian, and provides it with an epistemic interpretation. These results were anticipated in a previous paper, but they are obtained here in a general framework without referring to individual objects, which shows that they hold even if only a minimal (statistical) interpretation of QM is adopted in order to avoid the problems following from the standard quantum theory of measurement. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)
Review
The Broadcast Approach in Communication Networks
Entropy 2021, 23(1), 120; https://doi.org/10.3390/e23010120 - 18 Jan 2021
Viewed by 1195
Abstract
In this paper we review the theoretical and practical principles of the broadcast approach to communication over state-dependent channels and networks in which the transmitters have access to only the probabilistic description of the time-varying states while remaining oblivious to their instantaneous realizations. When the temporal variations are frequent enough, an effective long-term strategy is adapting the transmission strategies to the system’s ergodic behavior. However, when the variations are infrequent, their temporal average can deviate significantly from the channel’s ergodic mode, leaving no instantaneous performance guarantees. To circumvent this, the broadcast approach provides principles for designing transmission schemes that benefit from both short- and long-term performance guarantees. This paper provides an overview of how to apply the broadcast approach to various channels and network models under various operational constraints. Full article
(This article belongs to the Special Issue Multiuser Information Theory III)
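A two-state, two-layer special case conveys the idea: power is split between a base layer decoded in every channel state and a refinement layer decoded only in the strong state after successive interference cancellation. The Gaussian-channel rate expressions below are a standard textbook sketch, not a scheme taken from the paper:

```python
import math

def expected_rate(p_strong, s_weak, s_strong, p_total, alpha):
    """Expected rate (bits/channel use) of a two-layer broadcast scheme
    over a two-state Gaussian fading channel with gains s_weak < s_strong.

    alpha: fraction of the power on the base layer, which is decoded in
    both states; the refinement layer is decoded only in the strong state,
    after successive interference cancellation (SIC)."""
    p1, p2 = alpha * p_total, (1 - alpha) * p_total
    # The base layer sees the refinement layer as interference.
    r_base = math.log2(1 + s_weak * p1 / (1 + s_weak * p2))
    # After SIC, the refinement layer sees a clean channel.
    r_ref = math.log2(1 + s_strong * p2)
    return r_base + p_strong * r_ref

# Sweep the power split to trade the guaranteed base-layer rate
# against the long-term average rate.
best = max(expected_rate(0.5, 0.5, 2.0, 10.0, a / 100) for a in range(101))
```

The endpoints alpha = 1 and alpha = 0 recover the two single-layer designs (transmit everything at the weak-state rate, or gamble everything on the strong state); intermediate splits interpolate between the two guarantees.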

Article
Automatic ECG Classification Using Continuous Wavelet Transform and Convolutional Neural Network
Entropy 2021, 23(1), 119; https://doi.org/10.3390/e23010119 - 18 Jan 2021
Cited by 4 | Viewed by 1618
Abstract
Early detection of arrhythmia and effective treatment can prevent deaths caused by cardiovascular disease (CVD). In clinical practice, the diagnosis is made by checking the electrocardiogram (ECG) beat by beat, which is usually time-consuming and laborious. In this paper, we propose an automatic ECG classification method based on the Continuous Wavelet Transform (CWT) and a Convolutional Neural Network (CNN). The CWT decomposes the ECG signal into its time-frequency components, and the CNN extracts features from the 2D scalogram composed of these components. Because the surrounding R-peak intervals (RR intervals) are also useful for diagnosing arrhythmia, four RR-interval features are extracted and combined with the CNN features in a fully connected layer for ECG classification. Tested on the MIT-BIH arrhythmia database, our method achieves an overall positive predictive value, sensitivity, F1-score, and accuracy of 70.75%, 67.47%, 68.76%, and 98.74%, respectively. Compared with existing methods, the overall F1-score of our method is increased by 4.75% to 16.85%. Because our method is simple and highly accurate, it can potentially be used as a clinical auxiliary diagnostic tool. Full article
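The abstract does not list the four RR-interval features, so the sketch below assumes four common choices from the ECG literature (previous RR, following RR, a local mean, and the global mean); the helper name and defaults are ours:

```python
def rr_features(r_peaks, i, local_n=10):
    """Four RR-interval features around beat i (in sample units).

    r_peaks: sample indices of detected R peaks. The choice of features
    is an assumption: previous RR, following RR, local mean, global mean.
    """
    rr = [b - a for a, b in zip(r_peaks, r_peaks[1:])]
    pre_rr = rr[i - 1] if i >= 1 else rr[0]
    post_rr = rr[i] if i < len(rr) else rr[-1]
    window = rr[max(0, i - local_n):i] or rr[:1]
    local_mean = sum(window) / len(window)
    global_mean = sum(rr) / len(rr)
    return pre_rr, post_rr, local_mean, global_mean
```

In an architecture like the one described, these scalars would be concatenated with the flattened CNN features before the fully connected classification layer.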

Article
On the Scope of Lagrangian Vortex Methods for Two-Dimensional Flow Simulations and the POD Technique Application for Data Storing and Analyzing
Entropy 2021, 23(1), 118; https://doi.org/10.3390/e23010118 - 18 Jan 2021
Viewed by 1101
Abstract
The possibilities of applying the pure Lagrangian vortex methods of computational fluid dynamics to viscous incompressible flow simulations are considered in relation to various problem formulations. A modification of vortex methods, the Viscous Vortex Domain method, is used, which is implemented in the VM2D code developed by the authors. Problems of flow simulation around airfoils of different shapes at various Reynolds numbers are considered: the Blasius problem, the flow around circular cylinders at different Reynolds numbers, the flow around a wing airfoil at Reynolds numbers 10^4 and 10^5, the flow around two closely spaced circular cylinders, and the flow around rectangular airfoils with different chord-to-thickness ratios. In addition, the problem of modeling the internal flow in a channel with a backward-facing step is considered. To store the results of the calculations, the POD technique is used, which, in addition, allows one to investigate the structure of the flow and obtain some additional information about the properties of flow regimes. Full article
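POD of a snapshot matrix reduces to an SVD of the mean-subtracted data; the minimal sketch below is our own and is independent of the VM2D implementation:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Proper Orthogonal Decomposition of a snapshot matrix via the SVD.

    snapshots: array of shape (n_points, n_times).
    Returns (spatial_modes, singular_values, temporal_coeffs, temporal_mean)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    return u[:, :n_modes], s[:n_modes], vt[:n_modes], mean
```

The singular values rank the energy content of each mode, so a handful of modes is often enough both to compress stored flow fields and to expose the dominant flow structures, which is how the technique serves double duty for data storage and analysis.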

Review
Probabilistic Models with Deep Neural Networks
Entropy 2021, 23(1), 117; https://doi.org/10.3390/e23010117 - 18 Jan 2021
Viewed by 1188
Abstract
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of parameters, and (ii) scalable inference methods based on stochastic gradient descent and distributed computing engines allow probabilistic modeling to be applied to massive data sets. One important practical consequence of these advances is the possibility to include deep neural networks within probabilistic models, thereby capturing complex non-linear stochastic relationships between the random variables. These advances, in conjunction with the release of novel probabilistic modeling toolboxes, have greatly expanded the scope of applications of probabilistic models, and allowed the models to take advantage of the recent strides made by the deep learning community. In this paper, we provide an overview of the main concepts, methods, and tools needed to use deep neural networks within a probabilistic modeling framework. Full article
(This article belongs to the Special Issue Bayesian Inference in Probabilistic Graphical Models)

Article
A Multi-Class Automatic Sleep Staging Method Based on Photoplethysmography Signals
Entropy 2021, 23(1), 116; https://doi.org/10.3390/e23010116 - 18 Jan 2021
Cited by 3 | Viewed by 818
Abstract
Automatic sleep staging with only one channel is a challenging problem in sleep-related research. In this paper, a simple and efficient method named PPG-based multi-class automatic sleep staging (PMSS) is proposed using only a photoplethysmography (PPG) signal. Single-channel PPG data were obtained from four categories of subjects in the CAP sleep database. After the preprocessing of the PPG data, feature extraction was performed in the time domain, frequency domain, and nonlinear domain, and a total of 21 features were extracted. Finally, the Light Gradient Boosting Machine (LightGBM) classifier was used for multi-class sleep staging. The accuracy of the multi-class automatic sleep staging was over 70%, and Cohen’s kappa statistic k was over 0.6. This shows that the PMSS method can also be applied to stage the sleep state of patients with sleep disorders. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders II)

Article
No Statistical-Computational Gap in Spiked Matrix Models with Generative Network Priors
Entropy 2021, 23(1), 115; https://doi.org/10.3390/e23010115 - 16 Jan 2021
Viewed by 1129
Abstract
We provide a non-asymptotic analysis of the spiked Wishart and Wigner matrix models with a generative neural network prior. Spiked random matrices have the form of a rank-one signal plus noise and have been used as models for high dimensional Principal Component Analysis (PCA), community detection and synchronization over groups. Depending on the prior imposed on the spike, these models can display a statistical-computational gap between the information theoretically optimal reconstruction error that can be achieved with unbounded computational resources and the sub-optimal performances of currently known polynomial time algorithms. These gaps are believed to be fundamental, as in the emblematic case of Sparse PCA. In stark contrast to such cases, we show that there is no statistical-computational gap under a generative network prior, in which the spike lies on the range of a generative neural network. Specifically, we analyze a gradient descent method for minimizing a nonlinear least squares objective over the range of an expansive-Gaussian neural network and show that it can recover in polynomial time an estimate of the underlying spike with a rate-optimal sample complexity and dependence on the noise level. Full article
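For intuition, the planted spike in a spiked Wigner matrix can already be estimated by vanilla PCA above the spectral threshold. The sketch below uses our own construction and normalization conventions and involves no generative prior; it is the classical baseline against which prior-based recovery guarantees are framed:

```python
import numpy as np

def spiked_wigner(spike, snr, rng):
    """Y = snr * x x^T / n + W, with W a symmetric Gaussian noise matrix
    whose bulk spectrum lies in [-2, 2] (spike detectable when snr > 1)."""
    n = spike.size
    g = rng.standard_normal((n, n)) / np.sqrt(n)
    w = (g + g.T) / np.sqrt(2)
    return snr * np.outer(spike, spike) / n + w

rng = np.random.default_rng(0)
n = 400
x = rng.standard_normal(n)
x *= np.sqrt(n) / np.linalg.norm(x)     # normalize so that |x|^2 = n
y = spiked_wigner(x, snr=3.0, rng=rng)
vals, vecs = np.linalg.eigh(y)
v = vecs[:, -1] * np.sqrt(n)            # top eigenvector, rescaled
overlap = abs(v @ x) / n                # correlation with the planted spike
```

With these conventions the overlap concentrates, for large n and snr above 1, around sqrt(1 - 1/snr^2), roughly 0.94 in this example; the statistical-computational questions arise for structured priors (e.g. sparse or generative), where plain PCA is no longer the right tool.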

Article
Beyond Causal Explanation: Einstein’s Principle Not Reichenbach’s
Entropy 2021, 23(1), 114; https://doi.org/10.3390/e23010114 - 16 Jan 2021
Cited by 1 | Viewed by 1481
Abstract
Our account provides a local, realist and fully non-causal principle explanation for EPR correlations, contextuality, no-signalling, and the Tsirelson bound. Indeed, the account herein is fully consistent with the causal structure of Minkowski spacetime. We argue that retrocausal accounts of quantum mechanics are problematic precisely because they do not fully transcend the assumption that causal or constructive explanation must always be fundamental. Unlike retrocausal accounts, our principle explanation is a complete rejection of Reichenbach’s Principle. Furthermore, we will argue that the basis for our principle account of quantum mechanics is the physical principle sought by quantum information theorists for their reconstructions of quantum mechanics. Finally, we explain why our account is both fully realist and psi-epistemic. Full article
(This article belongs to the Special Issue Quantum Theory and Causation)

Article
Coupling between Blood Pressure and Subarachnoid Space Width Oscillations during Slow Breathing
Entropy 2021, 23(1), 113; https://doi.org/10.3390/e23010113 - 15 Jan 2021
Viewed by 889
Abstract
The precise mechanisms connecting the cardiovascular system and the cerebrospinal fluid (CSF) are not well understood in detail. This paper investigates the couplings between the cardiac and respiratory components, as extracted from blood pressure (BP) signals and oscillations of the subarachnoid space width (SAS), collected during slow ventilation and ventilation against inspiration resistance. The experiment was performed on a group of 20 healthy volunteers (12 females and 8 males; BMI = 22.1 ± 3.2 kg/m²; age 25.3 ± 7.9 years). We analysed the recorded signals with a wavelet transform. For the first time, a method based on dynamical Bayesian inference was used to detect the effective phase connectivity and the underlying coupling functions between the SAS and BP signals. There are several new findings. Slow breathing with or without resistance increases the strength of the coupling between the respiratory and cardiac components of both measured signals. We also observed increases in the strength of the coupling between the respiratory component of the BP and the cardiac component of the SAS, and vice versa. Slow breathing synchronises the SAS oscillations between the brain hemispheres. It also diminishes the similarity of the coupling between all analysed pairs of oscillators, while inspiratory resistance partially reverses this phenomenon. BP–SAS and SAS–BP interactions may reflect changes in the overall biomechanical characteristics of the brain. Full article
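Coupling between such signal components is often quantified through their instantaneous phases. As a simplified stand-in for the wavelet and dynamical-Bayesian machinery used in the paper, the sketch below computes a phase-locking value from FFT-based analytic signals:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (a stand-in for scipy.signal.hilbert)."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double the positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # Nyquist bin kept once for even n
    return np.fft.ifft(spec * h)

def phase_coherence(x, y):
    """Phase-locking value between two narrow-band signals, in [0, 1]."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return abs(np.exp(1j * dphi).mean())
```

A value near 1 indicates a stable phase relation, like the strengthened respiratory-cardiac coupling reported here, while unrelated components drift in phase and score near 0. Note that, unlike the coupling functions inferred in the paper, this measure is symmetric and carries no directionality.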

Article
Multi-Chaotic Analysis of Inter-Beat (R-R) Intervals in Cardiac Signals for Discrimination between Normal and Pathological Classes
Entropy 2021, 23(1), 112; https://doi.org/10.3390/e23010112 - 15 Jan 2021
Viewed by 886
Abstract
Cardiac signals have complex structures representing a combination of simpler structures. In this paper, we develop a new data analytic tool that can extract the complex structures of cardiac signals using the framework of multi-chaotic analysis, which is based on the p-norm for calculating the largest Lyapunov exponent (LLE). Applying the p-norm is useful for deriving the spectrum of the generalized largest Lyapunov exponents (GLLE), which is characterized by the width of the spectrum (which we denote by W). This quantity measures the degree of multi-chaos of the process and can potentially be used to discriminate between different classes of cardiac signals. We propose the joint use of the GLLE and spectrum width to investigate the multi-chaotic behavior of inter-beat (R-R) intervals of cardiac signals recorded from 54 healthy subjects (hs), 44 subjects diagnosed with congestive heart failure (chf), and 25 subjects diagnosed with atrial fibrillation (af). With the proposed approach, we build a regression model for the diagnosis of pathology. Multi-chaotic analysis showed a good performance, allowing the underlying dynamics of the system that generates the heart beat to be examined and expert systems to be built for the diagnosis of cardiac pathologies. Full article
(This article belongs to the Special Issue From Time Series to Stochastic Dynamic Models)
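The classical LLE that the GLLE spectrum generalizes is easy to compute when the map and its derivative are known in closed form. The logistic-map sketch below is our own example (not the paper's R-R pipeline, which must estimate divergence rates from data); for r = 4 the exact exponent is ln 2:

```python
import math

def lle_logistic(r=4.0, x0=0.2, n=100_000, discard=100):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    averaged from the analytic derivative |f'(x)| = |r (1 - 2x)| along
    the orbit. For r = 4 the exact value is ln 2."""
    x = x0
    for _ in range(discard):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n
```

In the p-norm framework of the paper, a family of such exponents is obtained and the width W of the resulting spectrum, rather than a single exponent, serves as the discriminative feature.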

Article
Effects of Future Information and Trajectory Complexity on Kinematic Signal and Muscle Activation during Visual-Motor Tracking
Entropy 2021, 23(1), 111; https://doi.org/10.3390/e23010111 - 15 Jan 2021
Viewed by 775
Abstract
Visual-motor tracking movement is a common and essential behavior in daily life. However, the contribution of future information to visual-motor tracking performance is not well understood in current research. In this study, visual-motor tracking performance with and without future-trajectories was compared, and three task demands were designed to investigate their impact. Eighteen healthy young participants were recruited and instructed to track a target on a screen by stretching/flexing their elbow joint. The kinematic signals (elbow joint angle) and surface electromyographic (EMG) signals of the biceps and triceps were recorded. The normalized integrated jerk (NIJ) and fuzzy approximate entropy (fApEn) of the joint trajectories, as well as the multiscale fuzzy approximate entropy (MSfApEn) values of the EMG signals, were calculated. The NIJ values with the future-trajectory were significantly lower than those without it (p-value < 0.01). The smoother movement with future-trajectories might be related to increased reliance on feedforward control. When the task demands increased, the fApEn values of the joint trajectories increased significantly, as did the MSfApEn of the EMG signals (p-value < 0.05). These findings enrich our understanding of visual-motor control with future information. Full article
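Fuzzy approximate entropy replaces the hard tolerance threshold of ApEn with a smooth membership function. Below is a minimal single-scale sketch with our own implementation choices (Gaussian membership of the Chebyshev distance, mean-centred templates); MSfApEn applies the same quantity to coarse-grained versions of the signal:

```python
import math

def fuzzy_apen(x, m=2, r=0.2):
    """Fuzzy approximate entropy (fApEn) of a 1-D signal.

    Templates are mean-centred and compared with a Gaussian membership
    exp(-(d / tol)^2) of the Chebyshev distance d, where tol = r * std."""
    n = len(x)
    mu = sum(x) / n
    tol = r * (sum((v - mu) ** 2 for v in x) / n) ** 0.5

    def phi(mm):
        templates = []
        for i in range(n - mm + 1):
            w = x[i:i + mm]
            w_mu = sum(w) / mm
            templates.append([v - w_mu for v in w])
        log_sims = []
        for i, a in enumerate(templates):
            s = 0.0
            for j, b in enumerate(templates):
                if i != j:
                    d = max(abs(p - q) for p, q in zip(a, b))
                    s += math.exp(-((d / tol) ** 2))
            log_sims.append(math.log(s / (len(templates) - 1)))
        return sum(log_sims) / len(log_sims)

    return phi(m) - phi(m + 1)
```

Irregular signals score higher than predictable ones, which matches the reported increase in fApEn of the joint trajectories as task demands grow.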
Review
Applications of Distributed-Order Fractional Operators: A Review
Entropy 2021, 23(1), 110; https://doi.org/10.3390/e23010110 - 15 Jan 2021
Cited by 8 | Viewed by 1131
Abstract
Distributed-order fractional calculus (DOFC) is a rapidly emerging branch of the broader area of fractional calculus with important and far-reaching applications for the modeling of complex systems. DOFC generalizes the intrinsic multiscale nature of constant- and variable-order fractional operators, opening significant opportunities to model systems whose behavior stems from the complex interplay and superposition of nonlocal and memory effects occurring over a multitude of scales. In recent years, a significant number of studies focusing on mathematical aspects and real-world applications of DOFC have been produced. However, a systematic review of the available literature and of the state of the art of DOFC as it pertains, specifically, to real-world applications is still lacking. This review article is intended to provide the reader with a road map to understand the early development of DOFC and its progressive evolution and application to the modeling of complex real-world problems. The review starts by offering a brief introduction to the mathematics of DOFC, including analytical and numerical methods, and continues with an extensive overview of the applications of DOFC to fields such as viscoelasticity, transport processes, and control theory, which have seen most of the research activity to date. Full article
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)
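For orientation, the distributed-order operator that DOFC builds on can be written, in one common Caputo-based form (limits and density conventions vary across the literature), as:

```latex
% Distributed-order fractional derivative: the constant-order Caputo
% operator {}^{C}D^{\alpha}_t is weighted by an order-density \phi(\alpha)
% and integrated over the admissible range of orders.
\mathbb{D}^{\phi}_t u(t)
  = \int_{\alpha_1}^{\alpha_2} \phi(\alpha)\, {}^{C}D^{\alpha}_t u(t)\, d\alpha,
\qquad
\phi(\alpha) \ge 0, \quad \int_{\alpha_1}^{\alpha_2} \phi(\alpha)\, d\alpha = 1.
```

A Dirac density φ(α) = δ(α − α₀) recovers the constant-order case, which is why this operator superposes memory effects over a continuum of scales.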
Article
Optimization of an Industrial Sector Regulated by an International Treaty. The Case for Transportation of Perishable Foodstuff
Entropy 2021, 23(1), 109; https://doi.org/10.3390/e23010109 - 15 Jan 2021
Viewed by 559
Abstract
Transportation of perishable foodstuff is an engineering and commercial activity ruled by an international Agreement (the ATP) whose regulation needs updating. Before addressing such an update, some analysis of the physics of the problem is required in order to identify the optimum use of the available technologies and the advantages offered by new methodologies that could be enabled soon. It is worth pointing out that manufacturers of ATP equipment follow quite closely the prescriptions given by this Agreement, so optimizing those prescriptions will generate a general optimization trend in this sector. In this paper, a coherent analysis of these subjects is presented, a new coefficient is proposed for qualifying ATP units, and new tests are proposed for measuring that coefficient efficiently and inexpensively. These goals are justified as a contribution from basic physics to a particular domain of thermal engineering. The paper is intended to be a bridge from science to technology, which is essential for obtaining optimum results from technical knowledge. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Complex Energy Systems)
Article
A Hybrid Genetic-Hierarchical Algorithm for the Quadratic Assignment Problem
Entropy 2021, 23(1), 108; https://doi.org/10.3390/e23010108 - 14 Jan 2021
Cited by 3 | Viewed by 740
Abstract
In this paper, we present a hybrid genetic-hierarchical algorithm for the solution of the quadratic assignment problem. The main distinguishing aspect of the proposed algorithm is its original, hierarchical architecture: the genetic algorithm is combined with the so-called hierarchical (self-similar) iterated tabu search algorithm, which serves as a powerful local optimizer (local improvement algorithm) of the offspring solutions produced by the crossover operator of the genetic algorithm. The results of the conducted computational experiments demonstrate the promising performance and competitiveness of the proposed algorithm. Full article
(This article belongs to the Section Multidisciplinary Applications)
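The hybrid scheme described above, a genetic algorithm whose offspring are refined by a local optimizer, can be sketched as follows. The 2-swap descent below is a simple stand-in for the paper's hierarchical iterated tabu search, and all parameter values are illustrative.

```python
import numpy as np

def qap_cost(perm, F, D):
    """QAP objective: sum_{i,j} F[i,j] * D[perm[i], perm[j]]."""
    return float((F * D[np.ix_(perm, perm)]).sum())

def local_improve(perm, F, D):
    """Greedy 2-swap descent -- a stand-in for the paper's hierarchical
    iterated tabu search local optimizer."""
    n = len(perm)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                cand = perm.copy()
                cand[i], cand[j] = cand[j], cand[i]
                if qap_cost(cand, F, D) < qap_cost(perm, F, D):
                    perm, improved = cand, True
    return perm

def crossover(p1, p2, rng):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = -np.ones(n, dtype=int)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child[i:j]]
    child[[k for k in range(n) if child[k] < 0]] = fill
    return child

def hybrid_ga(F, D, pop_size=20, generations=30, seed=0):
    """GA whose offspring are locally improved before entering the pool."""
    rng = np.random.default_rng(seed)
    n = F.shape[0]
    pop = [local_improve(rng.permutation(n), F, D) for _ in range(pop_size)]
    for _ in range(generations):
        a, b = rng.choice(pop_size, 2, replace=False)
        child = local_improve(crossover(pop[a], pop[b], rng), F, D)
        worst = max(range(pop_size), key=lambda k: qap_cost(pop[k], F, D))
        if qap_cost(child, F, D) < qap_cost(pop[worst], F, D):
            pop[worst] = child
    return min(pop, key=lambda p: qap_cost(p, F, D))
```

Because every individual passes through `local_improve`, the population consists entirely of local optima, which is the defining trait of this class of hybrids.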
Article
Distance-Based Estimation Methods for Models for Discrete and Mixed-Scale Data
Entropy 2021, 23(1), 107; https://doi.org/10.3390/e23010107 - 14 Jan 2021
Cited by 1 | Viewed by 640
Abstract
Pearson residuals aid the task of identifying model misspecification because they compare the model estimated from the data with the model assumed under the null hypothesis. We present different formulations of the Pearson residual system that account for the measurement scale of the data and study their properties. We further concentrate on the case of mixed-scale data, that is, data measured on both categorical and interval scales. We study the asymptotic properties and the robustness of minimum disparity estimators obtained in the case of mixed-scale data and exemplify the performance of the methods via simulation. Full article
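In the discrete case, the Pearson residual the abstract refers to takes a very simple form. This hypothetical sketch uses the convention δ = d/m − 1, common in the disparity literature, comparing empirical cell frequencies d with model cell probabilities m:

```python
import numpy as np

def pearson_residuals(counts, model_probs):
    """Pearson residuals delta = d/m - 1, where d are the empirical
    relative frequencies and m the model cell probabilities.
    delta = 0 in every cell means a perfect fit."""
    d = np.asarray(counts, dtype=float)
    d = d / d.sum()
    m = np.asarray(model_probs, dtype=float)
    return d / m - 1.0
```

Cells the model under-predicts get positive residuals; a minimum disparity estimator downweights large residuals to gain robustness against misspecified cells.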
Article
Neural Networks for Estimating Speculative Attacks Models
Entropy 2021, 23(1), 106; https://doi.org/10.3390/e23010106 - 13 Jan 2021
Cited by 1 | Viewed by 863
Abstract
Currency crises have been analyzed and modeled over the last few decades. They develop mainly from balance of payments crises, which in many cases lead to speculative attacks against the price of the currency. Despite the popularity of these models, they currently show low estimation precision. In the present study, first- and second-generation speculative attack models are estimated using neural network methods. The results show that the Quantum-Inspired Neural Network and Deep Neural Decision Trees methodologies are the most accurate, reaching around 90% accuracy. These results exceed the estimates made with Ordinary Least Squares, the usual estimation method for speculative attack models. In addition, the time required for estimation is lower for the neural network methods than for Ordinary Least Squares. These results can be of great importance for public and financial institutions when anticipating speculative pressures on currencies whose prices are in crisis in the markets. Full article
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)
Article
Constraint Closure Drove Major Transitions in the Origins of Life
Entropy 2021, 23(1), 105; https://doi.org/10.3390/e23010105 - 13 Jan 2021
Viewed by 730
Abstract
Life is an epiphenomenon whose origins are of tremendous interest to explain. We provide a framework for doing so based on the thermodynamic concept of work cycles. These cycles can create their own closure events, and thereby provide a mechanism for engendering novelty. We note that three significant such events led to life as we know it on Earth: (1) the advent of collective autocatalytic sets (CASs) of small molecules; (2) the advent of CASs of reproducing informational polymers; and (3) the advent of CASs of polymerase replicases. Each step could occur only when the boundary conditions of the system fostered constraints that fundamentally changed the phase space. With the realization that these successive events are required for innovative forms of life, we may now be able to focus more clearly on the question of life’s abundance in the universe. Full article
(This article belongs to the Section Non-equilibrium Phenomena)
Article
Deep Task-Based Quantization
Entropy 2021, 23(1), 104; https://doi.org/10.3390/e23010104 - 13 Jan 2021
Cited by 5 | Viewed by 851
Abstract
Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals, and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer is capable of approaching the optimal performance limits dictated by indirect rate-distortion theory, achievable using vector quantizers and requiring complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, it is demonstrated that the proposed approach can realize reliable bit-efficient hybrid MIMO receivers capable of setting their quantization rule in light of the task. Full article
Article
Universal Regimes in Long-Time Asymptotic of Multilevel Quantum System Under Time-Dependent Perturbation
Entropy 2021, 23(1), 99; https://doi.org/10.3390/e23010099 - 12 Jan 2021
Viewed by 532
Abstract
In the framework of an exactly soluble model, we consider a typical problem of the interaction between radiation and matter: the dynamics of population in a multilevel quantum system subject to a time-dependent perturbation. The algebraic structure of the model is taken to be rich enough that there is a strong argument that the long-time asymptotic behavior of the system has a universal character, which is system-independent and governed exclusively by the functional properties of the time dependence. Functional properties of the excitation time dependence resulting in the regimes of resonant excitation, random walks, and dynamic localization are identified. Moreover, an intermediate regime between random walks and localization is identified for polyharmonic excitation at frequencies given by Liouville numbers. Full article
Article
A Novel Measure Inspired by Lyapunov Exponents for the Characterization of Dynamics in State-Transition Networks
Entropy 2021, 23(1), 103; https://doi.org/10.3390/e23010103 - 12 Jan 2021
Cited by 1 | Viewed by 959
Abstract
The combination of network science, nonlinear dynamics, and time series analysis provides novel insights and analogies between the different approaches to complex systems. By combining the considerations behind the Lyapunov exponent of dynamical systems and the average entropy of transition probabilities for Markov chains, we introduce a network measure for characterizing the dynamics on state-transition networks, with special focus on differentiating between chaotic and cyclic modes. One important property of this Lyapunov measure is its non-monotonous dependence on the cyclicity of the dynamics. Motivated by providing proper use cases for studying the new measure, we also lay out a method for mapping time series to state-transition networks by phase space coarse graining. For both discrete-time and continuous-time dynamical systems, the Lyapunov measure extracted from the corresponding state-transition networks exhibits behavior similar to that of the Lyapunov exponent. In addition, it demonstrates a strong sensitivity to boundary crisis, suggesting applicability in predicting the collapse of chaos. Full article
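Two ingredients mentioned above, coarse-graining a time series into a state-transition network and averaging the entropy of its transition probabilities, can be sketched as follows. This is a simplified 1-D illustration; the paper's actual Lyapunov-inspired measure is more refined.

```python
import numpy as np

def transition_matrix(series, n_bins=10):
    """Map a time series to a state-transition network by coarse-graining
    the phase space into equal-width bins (simplified 1-D version)."""
    x = np.asarray(series, dtype=float)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    states = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    T = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1  # count observed state-to-state transitions
    row = T.sum(axis=1, keepdims=True)
    return np.divide(T, row, out=np.zeros_like(T), where=row > 0)

def mean_transition_entropy(T):
    """Average Shannon entropy of the outgoing transition probabilities:
    zero for deterministic (cyclic) dynamics, larger for chaotic ones."""
    ent = []
    for p in T:
        p = p[p > 0]
        if p.size:
            ent.append(-(p * np.log(p)).sum())
    return float(np.mean(ent)) if ent else 0.0
```

On the logistic map, for example, a periodic parameter regime yields zero transition entropy while the chaotic regime yields a positive value, mirroring the sign structure of the Lyapunov exponent.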
Article
More Tolerant Reconstructed Networks Using Self-Healing against Attacks in Saving Resource
Entropy 2021, 23(1), 102; https://doi.org/10.3390/e23010102 - 12 Jan 2021
Viewed by 616
Abstract
Complex network infrastructure systems for power supply, communication, and transportation support our economic and social activities; however, they are extremely vulnerable to increasingly frequent large-scale disasters and attacks. Thus, reconstructing a damaged network is more advisable than empirically recovering the original vulnerable one. To reconstruct a sustainable network, we focus on enhancing loops so that the topology is not tree-like, which is made possible by node removal. Although this optimization corresponds to an intractable combinatorial problem, we propose self-healing methods based on enhancing loops through an approximate calculation inspired by statistical physics. We show that our proposed methods achieve both higher robustness and higher efficiency than conventional healing methods while saving link and port resources. Moreover, the reconstructed network can become more tolerant than the original when some damaged links are reusable or compensated for as an investment of resources. These results demonstrate the potential of network reconstruction using self-healing with adaptive capacity in terms of resilience. Full article
(This article belongs to the Special Issue Critical Phenomena and Optimization in Complex Networks)
Article
Distribution-Dependent Weighted Union Bound
Entropy 2021, 23(1), 101; https://doi.org/10.3390/e23010101 - 12 Jan 2021
Viewed by 712
Abstract
In this paper, we deal with the classical Statistical Learning Theory problem of bounding, with high probability, the true risk $R(h)$ of a hypothesis $h$ chosen from a set $H$ of $m$ hypotheses. The Union Bound (UB) allows one to state that $\mathbb{P}\{L(\hat{R}(h), \delta q_h) \le R(h) \le U(\hat{R}(h), \delta p_h)\} \ge 1-\delta$, where $\hat{R}(h)$ is the empirical error, provided one can prove that $\mathbb{P}\{R(h) \ge L(\hat{R}(h), \delta)\} \ge 1-\delta$ and $\mathbb{P}\{R(h) \le U(\hat{R}(h), \delta)\} \ge 1-\delta$, when $h$, $q_h$, and $p_h$ are chosen before seeing the data such that $q_h, p_h \in [0,1]$ and $\sum_{h \in H}(q_h + p_h) = 1$. If no a priori information is available, $q_h$ and $p_h$ are set to $\frac{1}{2m}$, namely equally distributed. This approach gives poor results since, in practice, a learning procedure targets only particular hypotheses, namely those with small empirical error, disregarding the others. In this work we set $q_h$ and $p_h$ in a distribution-dependent way, increasing the probability of being chosen for hypotheses with small true risk. We call this proposal the Distribution-Dependent Weighted UB (DDWUB), and we derive sufficient conditions on the choice of $q_h$ and $p_h$ under which DDWUB outperforms or, in the worst case, degenerates into UB. Furthermore, theoretical and numerical results show the applicability, validity, and potential of DDWUB. Full article
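The effect of weighting the union bound can be illustrated with upper bounds only. This toy Python sketch assumes Hoeffding-style bounds and shows that allocating more of the confidence budget δ to a hypothesis tightens its bound; it is a simplification of the paper's setting, which treats two-sided bounds with weights q_h and p_h and derives distribution-dependent weights.

```python
import math

def hoeffding_upper(emp_risk, n, delta):
    """One-sided Hoeffding bound: R(h) <= emp_risk + sqrt(ln(1/delta)/(2n))
    with probability at least 1 - delta."""
    return emp_risk + math.sqrt(math.log(1.0 / delta) / (2 * n))

def union_bound(emp_risks, n, delta, weights=None):
    """Simultaneous upper bounds via the (weighted) union bound.

    With weights p_h summing to 1, each hypothesis h gets confidence
    delta * p_h, so the union of failure events still has probability
    at most delta.  Uniform weights recover the classical 1/m split.
    """
    m = len(emp_risks)
    if weights is None:
        weights = [1.0 / m] * m
    return [hoeffding_upper(r, n, delta * p)
            for r, p in zip(emp_risks, weights)]
```

Putting more weight on the hypotheses a learning procedure is likely to return (those with small risk) tightens exactly the bounds that matter, which is the intuition behind DDWUB.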
Article
Breakpoint Analysis for the COVID-19 Pandemic and Its Effect on the Stock Markets
Entropy 2021, 23(1), 100; https://doi.org/10.3390/e23010100 - 12 Jan 2021
Cited by 11 | Viewed by 1367
Abstract
In this research, statistical models are formulated to study the effect of the health crisis arising from COVID-19 on global markets. Breakpoints in the price series of stock indexes are considered. Such indexes are used as an approximation of the stock markets in different countries, given that their composition makes them indicative of these markets. The main results of this investigation highlight that countries with better institutional and economic conditions are less affected by the pandemic. In addition, the health index enters the models with non-significant parameters, because the health index used in the modeling does not capture the countries' differing capacities to respond efficiently to the pandemic. Therefore, contagion is the predominant factor when analyzing the structural break that occurred in the world economy. Full article