
Entropy, Volume 22, Issue 9 (September 2020) – 156 articles

Cover Story (view full-size image): This paper presents a theory and simulation of viscous dissipation in evolving interfaces and membranes under astigmatic kinematics. The viscous dissipation is captured by the Boussinesq–Scriven fluid model. We characterize and explain the relationship between the physical surface and the thermodynamic surface in the frame of decoupled shape-curvedness representation. The entropy production surface under constant homogeneous normal velocity decays with growth and shows minima for saddles and spheres, and maxima for cylindrical patches. We demonstrate that spheres and cylinders grow under constant shape, while growing cylinders can evolve into saddles or spheres by small shape perturbations. Taken together, the results and analysis provide novel and significant relations between shape evolution and viscous dissipation in deforming viscous membranes and surfaces. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
The Flow of Information in Trading: An Entropy Approach to Market Regimes
Entropy 2020, 22(9), 1064; https://doi.org/10.3390/e22091064 - 22 Sep 2020
Cited by 3 | Viewed by 985
Abstract
In this study, we use entropy-based measures to identify different types of trading behaviors. We detect the return-driven trading using the conditional block entropy that dynamically reflects the “self-causality” of market return flows. Then we use the transfer entropy to identify the news-driven trading activity that is revealed by the information flows from news sentiment to market returns. We argue that when certain trading behavior becomes dominant or jointly dominant, the market will form a specific regime, namely return-, news- or mixed regime. Based on 11 years of news and market data, we find that the evolution of financial market regimes in terms of adaptive trading activities over the 2008 liquidity and euro-zone debt crises can be explicitly explained by the information flows. The proposed method can be expanded to make “causal” inferences on other types of economic phenomena. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
Show Figures

Figure 1
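The transfer entropy used in the abstract above has a compact plug-in estimator for discretized series. The sketch below, with history length 1 and binary symbols, is illustrative only: the sequences `x` and `y` stand in for market returns and news sentiment, and the paper's block lengths and binning are not reproduced.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy T_{Y->X} (bits) with history length 1:
    sum over p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1}|x_t,y_t) / p(x_{t+1}|x_t) ]."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
    singles = Counter(x[:-1])                       # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_xy[(x0, y0)]
        p_cond_self = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

# Toy sequences: y fully determines the next value of x, so there is a
# strong information flow from y to x.
y = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1] * 20
x = [0] + y[:-1]   # x_{t+1} = y_t
```

Since `x_{t+1}` is a deterministic function of `y_t` here, the estimate equals the conditional entropy H(x_{t+1} | x_t) of the constructed series.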

Article
The Operational Choi–Jamiołkowski Isomorphism
Entropy 2020, 22(9), 1063; https://doi.org/10.3390/e22091063 - 22 Sep 2020
Cited by 3 | Viewed by 1194
Abstract
In this article, I use an operational formulation of the Choi–Jamiołkowski isomorphism to explore an approach to quantum mechanics in which the state is not the fundamental object. I first situate this project in the context of generalized probabilistic theories and argue that this framework may be understood as a means of drawing conclusions about the intratheoretic causal structure of quantum mechanics which are independent of any specific ontological picture. I then give an operational formulation of the Choi–Jamiołkowski isomorphism and show that, in an operational theory which exhibits this isomorphism, several features of the theory which are usually regarded as properties of the quantum state can be derived from constraints on non-local correlations. This demonstrates that there is no need to postulate states to be the bearers of these properties, since they can be understood as consequences of a fundamental equivalence between multipartite and temporal correlations. Full article
(This article belongs to the Special Issue Quantum Theory and Causation)
Show Figures

Figure 1
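The (unnormalized) Choi–Jamiołkowski isomorphism itself is concrete enough to compute. The sketch below uses the standard textbook form J(Φ) = Σ_ij |i⟩⟨j| ⊗ Φ(|i⟩⟨j|), not the paper's operational formulation; the identity channel is chosen only because its Choi matrix is easy to check.

```python
import numpy as np

def choi_matrix(channel, d):
    """Unnormalized Choi matrix J(Phi) = sum_ij |i><j| (x) Phi(|i><j|),
    built from the channel's action on the matrix units E_ij."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            J += np.kron(E, channel(E))
    return J

identity_channel = lambda X: X
J = choi_matrix(identity_channel, 2)

# For the identity channel, J is the projector onto the unnormalized
# maximally entangled vector sum_i |i>|i> = (1, 0, 0, 1).
phi = np.array([1, 0, 0, 1], dtype=complex)
```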

Article
DSP-Assisted Nonlinear Impairments Tolerant 100 Gbps Optical Backhaul Network for Long-Haul Transmission
Entropy 2020, 22(9), 1062; https://doi.org/10.3390/e22091062 - 22 Sep 2020
Cited by 1 | Viewed by 716
Abstract
High-capacity long-haul communication and cost-effective low-loss transmission are the major advantages of optical fibers, which make them a promising solution for backhaul network transport. A distortion-tolerant 100 Gbps framework consisting of a long-haul, high-capacity wavelength division multiplexed (WDM) transport system is investigated in this paper, with an analysis of different design parameters to mitigate the amplified spontaneous emission (ASE) noise and the nonlinear effects introduced by fiber transmission. The performance degradation in the presence of nonlinear effects is evaluated, and a digital signal processing (DSP)-assisted receiver is proposed in order to achieve a bit error rate (BER) of 1.56 × 10⁻⁶ and a quality factor (Q-factor) of 5, using 25 and 50 GHz channel spacing with a 90 μm² effective fiber area. Analytical calculations for the proposed WDM system are presented, and the simulation results verify the effectiveness of the proposed approach in mitigating nonlinear effects for up to 300 km of optical fiber transmission. Full article
(This article belongs to the Special Issue Reliability of Modern Electro-Mechanical Systems)
Show Figures

Figure 1

Article
A New Measure to Characterize the Degree of Self-Similarity of a Shape and Its Applicability
Entropy 2020, 22(9), 1061; https://doi.org/10.3390/e22091061 - 22 Sep 2020
Cited by 3 | Viewed by 665
Abstract
We propose a new measure (Γ) to quantify the degree of self-similarity of a shape using branch length similarity (BLS) entropy, which is defined on a simple network consisting of a single node and its branches. To investigate the properties of this measure, we computed the Γ values for 70 object groups (20 shapes in each group) in the MPEG-7 shape database and performed grouping on the values. Groups with relatively high Γ values contained visually similar shapes, whereas groups with low Γ values contained visually different shapes; the topological similarity of the shapes, however, also warrants consideration. The shapes of statistically different groups exhibited significant visual differences from each other. To show that Γ can have a wide variety of applications when properly combined with other variables, we also demonstrate that finger gestures are successfully classified in the (Γ, Z) space, where Z is the correlation coefficient between the entropy profiles of gesture shapes. As these applications show, Γ has a strong advantage over conventional geometric measures in that it captures the geometrical and topological properties of a shape together. If the BLS entropy could be defined for color, Γ could also be used to characterize images expressed in RGB. We briefly discuss the problems to be solved before the applicability of Γ can be expanded to various fields. Full article
Show Figures

Figure 1
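For a single node, BLS entropy reduces to the Shannon entropy of the normalized branch lengths. The normalization by log n and the branch lengths below are illustrative assumptions; the paper's exact construction of Γ from BLS entropy is not reproduced.

```python
from math import log

def bls_entropy(branch_lengths):
    """Branch length similarity entropy of a single node: Shannon entropy
    of the normalized branch lengths, scaled to [0, 1] by log(n)."""
    total = sum(branch_lengths)
    probs = [b / total for b in branch_lengths]
    h = -sum(p * log(p) for p in probs if p > 0)
    return h / log(len(branch_lengths))  # equals 1.0 when all branches are equal

equal = bls_entropy([1.0, 1.0, 1.0, 1.0])    # perfectly balanced branches
skewed = bls_entropy([10.0, 0.1, 0.1, 0.1])  # one dominant branch
```

Balanced branches maximize the entropy; a dominant branch pulls it toward zero, which is what makes the quantity usable as a shape descriptor.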

Article
The Quantum Friction and Optimal Finite-Time Performance of the Quantum Otto Cycle
Entropy 2020, 22(9), 1060; https://doi.org/10.3390/e22091060 - 22 Sep 2020
Cited by 7 | Viewed by 755
Abstract
In this work we considered the quantum Otto cycle within an optimization framework. The goal was maximizing the power for a heat engine or maximizing the cooling power for a refrigerator. In the field of finite-time quantum thermodynamics it is common to consider frictionless trajectories since these have been shown to maximize the work extraction during the adiabatic processes. Furthermore, for frictionless cycles, the energy of the system decouples from the other degrees of freedom, thereby simplifying the mathematical treatment. Instead, we considered general limit cycles and we used analytical techniques to compute the derivative of the work production over the whole cycle with respect to the time allocated for each of the adiabatic processes. By doing so, we were able to directly show that the frictionless cycle maximizes the work production, implying that the optimal power production must necessarily allow for some friction generation so that the duration of the cycle is reduced. Full article
(This article belongs to the Special Issue Finite-Time Thermodynamics)
Show Figures

Figure 1

Article
Surface-Codes-Based Quantum Communication Networks
Entropy 2020, 22(9), 1059; https://doi.org/10.3390/e22091059 - 22 Sep 2020
Viewed by 805
Abstract
In this paper, we propose surface codes (SCs)-based multipartite quantum communication networks (QCNs). We describe an approach that enables us to simultaneously entangle multiple nodes in an arbitrary network topology based on the SCs. We also describe how to extend the transmission distance between two arbitrary nodes by using the SCs. The numerical results indicate that the transmission distance between nodes can be extended beyond 1000 km by employing simple syndrome decoding. Finally, we describe how to operate the proposed QCN by employing the software-defined networking (SDN) concept. Full article
Show Figures

Figure 1

Article
Detection of Algorithmically Generated Domain Names Using the Recurrent Convolutional Neural Network with Spatial Pyramid Pooling
Entropy 2020, 22(9), 1058; https://doi.org/10.3390/e22091058 - 22 Sep 2020
Cited by 1 | Viewed by 752
Abstract
Domain generation algorithms (DGAs) use specific parameters as random seeds to generate a large number of random domain names to evade malicious domain name detection. This greatly increases the difficulty of detecting and defending against botnets and malware. Traditional models for detecting algorithmically generated domain names generally rely on manually extracting statistical characteristics from the domain names or network traffic and then employing classifiers to distinguish the algorithmically generated domain names. These models require labor-intensive manual feature engineering. In contrast, most state-of-the-art models based on deep neural networks are sensitive to imbalance in the sample distribution and cannot fully exploit the discriminative class features in domain names or network traffic, leading to decreased detection accuracy. To address these issues, we employ the borderline synthetic minority over-sampling algorithm (SMOTE) to improve sample balance. We also propose a recurrent convolutional neural network with spatial pyramid pooling (RCNN-SPP) to extract discriminative and distinctive class features. The recurrent convolutional neural network combines a convolutional neural network (CNN) and a bi-directional long short-term memory network (Bi-LSTM) to extract both the semantic and contextual information from domain names. We then employ the spatial pyramid pooling strategy to refine the contextual representation by capturing multi-scale contextual information from domain names. The experimental results on different domain name datasets demonstrate that our model achieves a 92.36% accuracy, an 89.55% recall rate, a 90.46% F1-score, and a 95.39% AUC in distinguishing DGA-generated from legitimate domain names, and a 92.45% accuracy, a 90.12% recall rate, a 90.86% F1-score, and a 96.59% AUC in multi-classification problems. It achieves significant improvement over existing models in terms of accuracy and robustness. Full article
Show Figures

Figure 1
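The interpolation step at the heart of SMOTE is small enough to sketch: a synthetic sample is placed at a random point on the segment between a minority sample and one of its minority-class nearest neighbours. The toy version below omits the borderline weighting used in the paper; the point set, `k`, and the RNG seed are illustrative.

```python
import random
from math import dist  # Euclidean distance, Python 3.8+

def smote(minority, n_synthetic, k=3, rng=random.Random(0)):
    """Generate synthetic minority samples by interpolating between a
    randomly chosen sample and one of its k nearest minority neighbours."""
    synthetic = []
    for _ in range(n_synthetic):
        a = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not a),
                            key=lambda p: dist(a, p))[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # position along the segment a -> b
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new_points = smote(minority, n_synthetic=10)
```

Because each synthetic point is a convex combination of two existing minority points, oversampling never leaves the convex hull of the minority class.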

Article
Software Requirements Classification Using Machine Learning Algorithms
Entropy 2020, 22(9), 1057; https://doi.org/10.3390/e22091057 - 21 Sep 2020
Cited by 6 | Viewed by 1720
Abstract
The correct classification of requirements has become an essential task within software engineering. This study compares text feature extraction techniques and machine learning algorithms on the problem of requirements classification, answering two major questions: "Which works best (Bag of Words (BoW) vs. Term Frequency–Inverse Document Frequency (TF-IDF) vs. Chi Squared (CHI2)) for classifying software requirements into Functional Requirements (FR), Non-Functional Requirements (NF), and the sub-classes of Non-Functional Requirements?" and "Which machine learning algorithm provides the best performance for the requirements classification task?". The data used for the research was PROMISE_exp, a recently created dataset that expands the well-known PROMISE repository of labeled software requirements. All documents in the database were cleaned with a set of normalization steps; BoW and TF-IDF were used for feature extraction, with CHI2 for feature selection. The algorithms used for classification were Logistic Regression (LR), Support Vector Machine (SVM), Multinomial Naive Bayes (MNB) and k-Nearest Neighbors (kNN). The novelty of our work lies in the data used for the experiment, the detailed steps to reproduce the classification, and the comparison of BoW, TF-IDF and CHI2 on this repository, which has not been covered by other studies. This work will serve as a reference for the software engineering community and will help other researchers understand the requirements classification process. We found that TF-IDF followed by LR gave the best classification results for differentiating requirements, with an F-measure of 0.91 in binary classification (tying with SVM in that case), 0.74 in NF classification and 0.78 in general classification.
As future work we intend to compare more algorithms and new ways to improve the precision of our models. Full article
Show Figures

Figure 1
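The TF-IDF weighting that performed best can be computed without any ML library. The sketch below uses the plain idf = log(N/df) convention and two invented requirement-like sentences; real toolkits (scikit-learn, for instance) use slightly different smoothing.

```python
from collections import Counter
from math import log

def tfidf(docs):
    """Map each tokenized document to a {term: tf-idf} dict,
    with tf = count/len(doc) and idf = log(N / df), no smoothing."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * log(n / df[t]) for t, c in tf.items()})
    return out

# Two hypothetical requirement sentences, not from PROMISE_exp.
docs = [
    "the system shall encrypt all stored passwords".split(),
    "the system shall display the user dashboard".split(),
]
weights = tfidf(docs)
```

Terms shared by every document ("the system shall") get zero weight, while discriminative terms like "encrypt" keep positive weight, which is precisely why TF-IDF helps a linear classifier separate requirement classes.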

Article
Financial Performance Analysis in European Football Clubs
Entropy 2020, 22(9), 1056; https://doi.org/10.3390/e22091056 - 21 Sep 2020
Cited by 1 | Viewed by 1470
Abstract
The financial performance of football clubs has become an essential element in ensuring the solvency and viability of the club over time. Both theory and practical and regulatory evidence show the need to study financial factors, as well as sporting and corporate factors, to analyze the possible flows of income and the good management of the club's accounts. Through these factors, the present study analyzes the financial performance of European football clubs using neural networks as the methodology, applying both the popular multilayer perceptron and the novel quantum neural network. The results show that the financial performance of a club is determined by liquidity, leverage, and sporting performance, with the quantum network as the most accurate variant. These conclusions can be useful for football clubs and interest groups, as well as for regulatory bodies that try to establish the best recommendations and conditions for the football industry. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
Show Figures

Figure 1

Article
Improving Multi-Agent Generative Adversarial Nets with Variational Latent Representation
Entropy 2020, 22(9), 1055; https://doi.org/10.3390/e22091055 - 21 Sep 2020
Cited by 1 | Viewed by 682
Abstract
Generative adversarial networks (GANs), which are a promising type of deep generative network, have recently drawn considerable attention and made impressive progress. However, GAN models suffer from the well-known problem of mode collapse. This study focuses on this challenge and introduces a new model design, called the encoded multi-agent generative adversarial network (E-MGAN), which tackles the mode collapse problem by introducing variational latent representations learned from a variational auto-encoder (VAE) into a multi-agent GAN. The variational latent representations are extracted from training data to replace the random noise input of general multi-agent GANs. E-MGAN employs multiple generators that are penalized by a classifier. This integration guarantees that the proposed model not only enhances the quality of generated samples but also improves their diversity to avoid the mode collapse problem. Moreover, extensive experiments are conducted on both a synthetic dataset and two large-scale real-world datasets. The generated samples are visualized for qualitative evaluation. The inception score (IS) and Fréchet inception distance (FID) are adopted to measure the performance of the model for quantitative assessment. The results confirm that the proposed model achieves outstanding performance compared to other state-of-the-art GAN variants. Full article
(This article belongs to the Section Signal and Data Analysis)
Show Figures

Figure 1

Article
Electric Double Layer and Orientational Ordering of Water Dipoles in Narrow Channels within a Modified Langevin Poisson-Boltzmann Model
Entropy 2020, 22(9), 1054; https://doi.org/10.3390/e22091054 - 21 Sep 2020
Cited by 3 | Viewed by 803
Abstract
The electric double layer (EDL) is an important phenomenon that arises in systems where a charged surface comes into contact with an electrolyte solution. In this work we describe the generalization of classic Poisson-Boltzmann (PB) theory for point-like ions by taking into account orientational ordering of water molecules. The modified Langevin Poisson-Boltzmann (LPB) model of EDL is derived by minimizing the corresponding Helmholtz free energy functional, which includes also orientational entropy contribution of water dipoles. The formation of EDL is important in many artificial and biological systems bound by a cylindrical geometry. We therefore numerically solve the modified LPB equation in cylindrical coordinates, determining the spatial dependencies of electric potential, relative permittivity and average orientations of water dipoles within charged tubes of different radii. Results show that for tubes of a large radius, macroscopic (net) volume charge density of coions and counterions is zero at the geometrical axis. This is attributed to effective electrolyte charge screening in the vicinity of the inner charged surface of the tube. For tubes of small radii, the screening region extends into the whole inner space of the tube, leading to non-zero net volume charge density and non-zero orientational ordering of water dipoles near the axis. Full article
Show Figures

Figure 1

Article
Reduction Theorem for Secrecy over Linear Network Code for Active Attacks
Entropy 2020, 22(9), 1053; https://doi.org/10.3390/e22091053 - 21 Sep 2020
Cited by 3 | Viewed by 740
Abstract
We discuss the effect of sequential error injection on information leakage under a network code. We formulate a network code for both the single-transmission and the multiple-transmission settings. Under this formulation, we show that the eavesdropper cannot increase the power of eavesdropping by sequential error injection when the operations in the network are linear. We demonstrate the usefulness of this reduction theorem by applying it to a concrete example of a network. Full article
(This article belongs to the Special Issue Multiuser Information Theory III)
Show Figures

Figure 1

Article
Metaheuristics in the Optimization of Cryptographic Boolean Functions
Entropy 2020, 22(9), 1052; https://doi.org/10.3390/e22091052 - 21 Sep 2020
Cited by 2 | Viewed by 1202
Abstract
Generating Boolean Functions (BFs) with high nonlinearity is a complex task that is usually addressed through algebraic constructions. Metaheuristics have also been applied extensively to this task, but they have not been able to attain results as good as those of the algebraic techniques. This paper proposes a novel diversity-aware metaheuristic that is able to close this gap. The proposal includes a novel cost function that combines several pieces of information from the Walsh–Hadamard Transform (WHT), and a replacement strategy that promotes a gradual change from exploration to exploitation as well as the formation of clusters of solutions, allowing intensification steps at each iteration. The combination of a high entropy in the population and a lower entropy inside clusters allows a proper balance between exploration and exploitation. This is the first memetic algorithm able to generate 10-variable BFs of similar quality to those obtained with algebraic methods. Experimental results and comparisons provide evidence of the high performance of the proposed optimization mechanism for the generation of high-quality BFs. Full article
(This article belongs to the Special Issue Entropy in Metaheuristics and Bioinspired Algorithms)
Show Figures

Figure 1
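The nonlinearity these metaheuristics maximize comes directly from the Walsh–Hadamard spectrum: NL(f) = 2^(n-1) − max_a |W_f(a)|/2. A direct (non-fast) computation for small n is sketched below; the paper's cost function combines further WHT statistics that are not reproduced here.

```python
def walsh_hadamard(truth_table, n):
    """Walsh-Hadamard spectrum W_f(a) = sum_x (-1)^(f(x) XOR a.x),
    with a.x the inner product of bit vectors packed into integers."""
    def dot(a, x):
        return bin(a & x).count("1") & 1
    return [sum((-1) ** (truth_table[x] ^ dot(a, x)) for x in range(2 ** n))
            for a in range(2 ** n)]

def nonlinearity(truth_table, n):
    """NL(f) = 2^(n-1) - max_a |W_f(a)| / 2 (Hamming distance to affine functions)."""
    return 2 ** (n - 1) - max(abs(w) for w in walsh_hadamard(truth_table, n)) // 2

# Bent function f(x) = x3*x2 XOR x1*x0 attains the maximum nonlinearity
# 2^(n-1) - 2^(n/2 - 1) = 6 for n = 4.
bent = [(((x >> 3) & (x >> 2)) ^ ((x >> 1) & x)) & 1 for x in range(16)]
```

An affine function has nonlinearity 0, which gives a quick sanity check at the other extreme of the scale.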

Article
Social Entropy and Normative Network
Entropy 2020, 22(9), 1051; https://doi.org/10.3390/e22091051 - 20 Sep 2020
Cited by 3 | Viewed by 965
Abstract
The paper introduces a new concept of social entropy and a new concept of social order, both based on the normative framework of society. From these two concepts, typologies (logical and historical) of societies are inferred and examined in their basic features. To these ends, some well-known concepts such as entropy, order, system, network, synergy, norm, autopoieticity, fetality, and complexity are revisited and placed into an integrated framework. The core body of this paper addresses the structure and the mechanism of social entropy, understood as an institutionally working counterpart of social order. Finally, this paper concludes that social entropy is an artefact, like society itself, and acts through people’s behavior. Full article
(This article belongs to the Special Issue Information Theory and Economic Network)
Show Figures

Graphical abstract

Article
Tsallis Entropy for Assessing Spatial Uncertainty Associated with Mean Annual Runoff of Quaternary Catchments of the Middle Vaal Basin in South Africa
Entropy 2020, 22(9), 1050; https://doi.org/10.3390/e22091050 - 19 Sep 2020
Viewed by 658
Abstract
This study mainly assesses the uncertainty of the mean annual runoff (MAR) for quaternary catchments (QCs), considered as metastable nonextensive systems (in the sense of Tsallis entropy), in the Middle Vaal catchment. The study is applied to the surface water resources (WR) of South Africa 1990 (WR90), 2005 (WR2005) and 2012 (WR2012) data sets. The q-information index (from the Tsallis entropy) is used here as a deviation indicator for the spatial evolution of uncertainty for the different QCs, using the Shannon entropy as a baseline. It enables the determination of a (virtual) convergence point, zones of positive and negative uncertainty deviation, a zone of null deviation and a chaotic zone for each data set. Such a determination is not possible on the basis of the Shannon entropy alone as a measure of the MAR uncertainty of QCs, i.e., when they are viewed as extensive systems. Finally, the spatial distributions of the zones of q-uncertainty deviation (gain or loss of information) of the MAR are derived and lead to iso q-uncertainty deviation maps. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering II)
Show Figures

Figure 1
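Tsallis entropy S_q = (1 − Σ p_i^q)/(q − 1) recovers the Shannon entropy as q → 1, which is exactly the baseline comparison the abstract describes. The probability vector below is illustrative, not catchment data.

```python
from math import log

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); the q -> 1 limit
    is the Shannon entropy (natural log)."""
    if abs(q - 1.0) < 1e-12:
        return -sum(p * log(p) for p in probs if p > 0)  # Shannon limit
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

probs = [0.5, 0.25, 0.125, 0.125]
shannon = tsallis_entropy(probs, 1.0)     # = 1.75 * ln 2 for this vector
near_one = tsallis_entropy(probs, 1.001)  # should be very close to shannon
```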

Article
Breaking of the Trade-Off Principle between Computational Universality and Efficiency by Asynchronous Updating
Entropy 2020, 22(9), 1049; https://doi.org/10.3390/e22091049 - 19 Sep 2020
Cited by 2 | Viewed by 897
Abstract
Although natural and bioinspired computing has developed significantly, the relationship between the computational universality and efficiency beyond the Turing machine has not been studied in detail. Here, we investigate how asynchronous updating can contribute to the universal and efficient computation in cellular automata (CA). First, we define the computational universality and efficiency in CA and show that there is a trade-off relation between the universality and efficiency in CA implemented in synchronous updating. Second, we introduce asynchronous updating in CA and show that asynchronous updating can break the trade-off found in synchronous updating. Our finding spells out the significance of asynchronous updating or the timing of computation in robust and efficient computation. Full article
Show Figures

Figure 1
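The synchronous/asynchronous distinction in CA updating is only about whether a cell reads the old configuration or the partially updated one. The elementary rule-90 example below illustrates that distinction on a ring; it is not one of the automata constructed in the paper.

```python
def rule(table_number, left, center, right):
    """Elementary CA lookup: bit (4*left + 2*center + right) of the rule number."""
    return (table_number >> (4 * left + 2 * center + right)) & 1

def synchronous_step(cells, table_number=90):
    """All cells read the same old configuration."""
    n = len(cells)
    return [rule(table_number, cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

def asynchronous_step(cells, order, table_number=90):
    """Cells update one at a time in the given order, each seeing earlier updates."""
    cells = list(cells)
    n = len(cells)
    for i in order:
        cells[i] = rule(table_number, cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
    return cells

start = [0, 0, 0, 1, 0, 0, 0]          # single seed on a ring of 7 cells
sync = synchronous_step(start)          # rule 90: new cell = left XOR right
async_lr = asynchronous_step(start, order=range(7))  # left-to-right sweep
```

Even one left-to-right sweep already diverges from the synchronous result, which is the degree of freedom the paper exploits.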

Article
Expected Logarithm and Negative Integer Moments of a Noncentral χ²-Distributed Random Variable
Entropy 2020, 22(9), 1048; https://doi.org/10.3390/e22091048 - 19 Sep 2020
Viewed by 693
Abstract
Closed-form expressions for the expected logarithm and for arbitrary negative integer moments of a noncentral χ²-distributed random variable are presented in the cases of both even and odd degrees of freedom. Moreover, some basic properties of these expectations are derived, and tight upper and lower bounds on them are proposed. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
Show Figures

Figure 1
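For zero noncentrality the expected logarithm has the classical closed form E[ln X] = ln 2 + ψ(k/2) for k degrees of freedom, which the Monte Carlo sketch below uses as a sanity check. The digamma approximation is a standard recurrence-plus-asymptotic-series implementation, not taken from the paper.

```python
import random
from math import log

def digamma(x):
    """Digamma psi(x): shift x upward by the recurrence psi(x) = psi(x+1) - 1/x,
    then apply a truncated asymptotic series."""
    result = 0.0
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    return (result + log(x) - 1 / (2 * x)
            - 1 / (12 * x ** 2) + 1 / (120 * x ** 4) - 1 / (252 * x ** 6))

def expected_log_chi2(k, rng, samples=200_000):
    """Monte Carlo estimate of E[ln X] for a central chi^2 with k dof,
    sampling X as a sum of k squared standard normals."""
    total = 0.0
    for _ in range(samples):
        total += log(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(k)))
    return total / samples

rng = random.Random(42)
k = 2
closed_form = log(2.0) + digamma(k / 2)   # = ln 2 - Euler-Mascheroni for k = 2
estimate = expected_log_chi2(k, rng)
```

Shifting the mean of one of the normals would give the noncentral case treated in the paper; only the central closed form is checked here.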

Article
Partial Derivative Approach to the Integral Transform for the Function Space in the Banach Algebra
Entropy 2020, 22(9), 1047; https://doi.org/10.3390/e22091047 - 18 Sep 2020
Cited by 2 | Viewed by 712 | Correction
Abstract
We investigate some relationships among the integral transform, the function space integral and the first variation of the partial derivative approach in the Banach algebra defined on the function space. We prove that the function space integral and the integral transform of the partial derivative in some Banach algebra can be expanded as the limit of a sequence of function space integrals. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Article
Toppling Pencils—Macroscopic Randomness from Microscopic Fluctuations
Entropy 2020, 22(9), 1046; https://doi.org/10.3390/e22091046 - 18 Sep 2020
Viewed by 811
Abstract
We construct a microscopic model to study discrete randomness in bistable systems coupled to an environment comprising many degrees of freedom. A quartic double well is bilinearly coupled to a finite number N of harmonic oscillators. Solving the time-reversal invariant Hamiltonian equations of motion numerically, we show that for N=1, the system exhibits a transition with increasing coupling strength from integrable to chaotic motion, following the Kolmogorov-Arnol’d-Moser (KAM) scenario. Raising N to values of the order of 10 and higher, the dynamics crosses over to a quasi-relaxation, approaching either one of the stable equilibria at the two minima of the potential. We corroborate the irreversibility of this relaxation on other characteristic timescales of the system by recording the time dependences of autocorrelation, partial entropy, and the frequency of jumps between the wells as functions of N and other parameters. Preparing the central system in the unstable equilibrium at the top of the barrier and the bath in a random initial state drawn from a Gaussian distribution, symmetric under spatial reflection, we demonstrate that the decision whether to relax into the left or the right well is determined reproducibly by residual asymmetries in the initial positions and momenta of the bath oscillators. This result reconciles the randomness and spontaneous symmetry breaking of the asymptotic state with the conservation of entropy under canonical transformations and the manifest symmetry of potential and initial condition of the bistable system. Full article
(This article belongs to the Collection Randomness and Entropy Production)
Article
Energy Efficiency and Spectral Efficiency Tradeoff in Massive MIMO Multicast Transmission with Statistical CSI
Entropy 2020, 22(9), 1045; https://doi.org/10.3390/e22091045 - 18 Sep 2020
Cited by 1 | Viewed by 971
Abstract
As the core technology of 5G mobile communication systems, massive multi-input multi-output (MIMO) can dramatically enhance the energy efficiency (EE) as well as the spectral efficiency (SE), which meets the requirements of new applications. Meanwhile, physical layer multicast technology has gradually become the focus of next-generation communication technology research due to its capacity to efficiently provide wireless transmission from point to multipoint. The availability of channel state information (CSI) determines, to a large extent, the performance of massive MIMO. However, because obtaining perfect instantaneous CSI in massive MIMO is quite challenging, it is reasonable and practical to design a massive MIMO multicast transmission strategy using statistical CSI. In this paper, in order to optimize the system resource efficiency (RE) and achieve an EE-SE balance, the EE-SE trade-offs in massive MIMO multicast transmission are investigated with statistical CSI. First, we formulate the eigenvectors of the RE-optimal multicast covariance matrices of different user terminals in closed form, which shows that in the massive MIMO downlink, optimal RE multicast precoding should be performed in the beam domain. On the basis of this result, the optimal RE precoding design is simplified into a resource-efficient power allocation problem. By invoking the quadratic transform, we propose an iterative power allocation algorithm, which obtains an adjustable and reasonable EE-SE tradeoff. Numerical simulation results reveal the near-optimal performance and the effectiveness of our proposed statistical CSI-assisted RE maximization in massive MIMO. Full article
(This article belongs to the Special Issue Information Theory and 5G/6G Mobile Communications)
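The quadratic transform invoked above turns a ratio maximization (such as EE = rate/power) into a sequence of concave surrogate problems. The following is a minimal, hypothetical single-link sketch of that idea, not the paper's beam-domain power allocation: grid search stands in for the inner maximization, and the channel gain g and circuit power Pc are made-up toy values.

```python
import math

def maximize_ratio_qt(A, B, p_grid, iters=30):
    """Quadratic transform for fractional programming: to maximize A(p)/B(p),
    alternately fix y = sqrt(A(p))/B(p) and re-maximize the surrogate
    2*y*sqrt(A(q)) - y**2 * B(q). The ratio never decreases across iterations."""
    p = p_grid[0]
    for _ in range(iters):
        y = math.sqrt(A(p)) / B(p)
        p = max(p_grid, key=lambda q: 2 * y * math.sqrt(A(q)) - y * y * B(q))
    return p

# Toy single-link energy efficiency: achievable rate over total consumed power.
g, Pc = 10.0, 1.0                      # made-up channel gain and circuit power
rate = lambda p: math.log2(1 + g * p)  # bits/s/Hz
power = lambda p: p + Pc               # transmit power plus circuit power
grid = [0.01 * k for k in range(1, 501)]
p_star = maximize_ratio_qt(rate, power, grid)
```

Each iteration fixes y and re-maximizes the surrogate, so the EE ratio is non-decreasing and converges to a stationary point of the objective.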
Article
On the Application of Entropy Measures with Sliding Window for Intrusion Detection in Automotive In-Vehicle Networks
Entropy 2020, 22(9), 1044; https://doi.org/10.3390/e22091044 - 18 Sep 2020
Cited by 1 | Viewed by 930
Abstract
The evolution of modern automobiles toward higher levels of connectivity and automation has also increased the need to focus on the mitigation of potential cybersecurity risks. Researchers have proven in recent years that attacks on the in-vehicle networks of automotive vehicles are possible, and the research community has investigated various cybersecurity mitigation techniques and intrusion detection systems that can be adopted in the automotive sector. In comparison to conventional intrusion detection systems in large fixed networks and ICT infrastructures in general, in-vehicle systems have limited computing capabilities and other constraints related to data transfer and the management of cryptographic systems. In addition, it is important that attacks are detected in a short time frame, as cybersecurity attacks in vehicles can lead to safety hazards. This paper proposes an approach for intrusion detection of cybersecurity attacks in in-vehicle networks that takes the constraints listed above into consideration. The approach applies information entropy measures over a sliding window, which is time-efficient, does not require the implementation of complex cryptographic systems, and still provides very high detection accuracy. Different entropy measures are used in the evaluation: Shannon Entropy, Renyi Entropy, Sample Entropy, Approximate Entropy, Permutation Entropy, Dispersion Entropy, and Fuzzy Entropy. This paper evaluates the impact of the different hyperparameters present in the definition of entropy measures on a very large public data set of CAN-bus traffic with millions of CAN-bus messages and four different types of attacks: Denial of Service, Fuzzy Attack, and two spoofing attacks related to RPM and Gear information. The sliding window approach in combination with entropy measures can detect attacks in a time-efficient way and with great accuracy for specific choices of the hyperparameters and entropy measures. Full article
(This article belongs to the Section Signal and Data Analysis)
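As a rough illustration of the sliding-window idea (not the paper's implementation; the window size, step, threshold, and synthetic ID stream are all made up), a Shannon-entropy window over CAN arbitration IDs drops sharply when a single ID floods the bus:

```python
import math
from collections import Counter

def shannon_entropy(window):
    """Shannon entropy (bits) of the symbol distribution in one window."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sliding_entropy(ids, window_size=64, step=16):
    """Entropy of each sliding window over a stream of CAN arbitration IDs."""
    return [shannon_entropy(ids[i:i + window_size])
            for i in range(0, len(ids) - window_size + 1, step)]

def flag_anomalies(entropies, threshold):
    """Windows whose entropy falls below a threshold learned on clean traffic."""
    return [i for i, h in enumerate(entropies) if h < threshold]

# Normal traffic cycles through 16 IDs; a DoS-style flood repeats a single ID.
normal = [f"0x{i % 16:03x}" for i in range(256)]
attack = normal[:128] + ["0x000"] * 64 + normal[128:]
h_normal = sliding_entropy(normal)
h_attack = sliding_entropy(attack)
```

Here the flooded window's entropy collapses to zero while clean windows stay at their baseline, so a simple threshold separates them.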
Article
Complexity in Economic and Social Systems: Cryptocurrency Market at around COVID-19
Entropy 2020, 22(9), 1043; https://doi.org/10.3390/e22091043 - 18 Sep 2020
Cited by 11 | Viewed by 2220
Abstract
Social systems are characterized by an enormous network of connections and factors that can influence their structure and dynamics. Among them, the whole economic sphere of human activity seems to be the most interrelated and complex. All financial markets, including the youngest one, the cryptocurrency market, belong to this sphere. The complexity of the cryptocurrency market can be studied from different perspectives. First, the dynamics of the cryptocurrency exchange rates to other cryptocurrencies and fiat currencies can be studied and quantified by means of multifractal formalism. Second, coupling and decoupling of the cryptocurrencies and the conventional assets can be investigated with advanced cross-correlation analyses based on fractal analysis. Third, the internal structure of the cryptocurrency market can also be a subject of analysis that exploits, for example, a network representation of the market. In this work, we approach the subject from all three perspectives based on data from a recent time interval between January 2019 and June 2020. This period includes the peculiar time of the COVID-19 pandemic; therefore, we pay particular attention to this event and investigate how strong its impact on the structure and dynamics of the market was. In addition, the studied data cover a few other significant events, such as the double bull and bear phases in 2019. We show that, throughout the considered interval, the exchange rate returns were multifractal, with intermittent signatures of bifractality that can be associated with the most volatile periods of the market dynamics, such as the bull market onset in April 2019 and the COVID-19 outburst in March 2020. The topology of a minimal spanning tree representation of the market also altered during these events, from a distributed type without any dominant node to a highly centralized type with USDT as the dominating hub. However, the MST topology during the pandemic differs in some details from that of the other volatile periods. Full article
(This article belongs to the Special Issue Complexity in Economic and Social Systems)
Article
Multiscale Entropy Analysis: Application to Cardio-Respiratory Coupling
Entropy 2020, 22(9), 1042; https://doi.org/10.3390/e22091042 - 18 Sep 2020
Cited by 2 | Viewed by 1616
Abstract
It is known that in pathological conditions, physiological systems develop changes in the multiscale properties of physiological signals. However, in real life, little is known about how changes in the function of one of two coupled physiological systems induce changes in the function of the other, especially in their multiscale behavior. Hence, in this work we aimed to examine the complexity of coupled cardio-respiratory system control using multiscale entropy (MSE) analysis of cardiac intervals, MSE (RR), and respiratory time series, MSE (Resp), and the synchrony of these rhythms by cross multiscale entropy (CMSE) analysis, in heart failure (HF) patients and healthy subjects. We analyzed 20 min of synchronously recorded RR intervals and respiratory signal during relaxation in the supine position in 42 heart failure patients and 14 healthy control subjects. The heart failure group was divided into three subgroups according to the RR interval time series characteristics: atrial fibrillation (HFAF), sinus rhythm (HFSin), and sinus rhythm with ventricular extrasystoles (HFVES). Compared with healthy control subjects, alterations in respiratory signal properties were observed in patients from the HFSin and HFVES groups. Further, the mean MSE curves of RR intervals and respiratory signal were not statistically different only in the HFSin group (p = 0.43). The level of synchrony between these time series was significantly higher in HFSin and HFVES patients than in control subjects and HFAF patients (p < 0.01). In conclusion, depending on the specific pathology, primary alterations in the regularity of the cardiac rhythm resulted in changes in the regularity of the respiratory rhythm, as well as in the level of their asynchrony. Full article
(This article belongs to the Special Issue Entropy in Data Analysis)
Article
Machine Learning for Modeling the Singular Multi-Pantograph Equations
Entropy 2020, 22(9), 1041; https://doi.org/10.3390/e22091041 - 18 Sep 2020
Cited by 2 | Viewed by 967
Abstract
In this study, a new approach based on intelligent systems and machine learning algorithms is introduced for solving singular multi-pantograph differential equations (SMDEs). For the first time, a type-2 fuzzy logic based approach is formulated to find an approximate solution. The rules of the suggested type-2 fuzzy logic system (T2-FLS) are optimized by the square root cubature Kalman filter (SCKF) such that the proposed fineness function is minimized. Furthermore, the stability and boundedness of the estimation error are proved by a novel approach based on the Lyapunov theorem. The accuracy and robustness of the suggested algorithm are verified by several statistical examinations. It is shown that the suggested method yields an accurate solution with rapid convergence and a lower computational cost. Full article
Article
A Reliable Auto-Robust Analysis of Blood Smear Images for Classification of Microcytic Hypochromic Anemia Using Gray Level Matrices and Gabor Feature Bank
Entropy 2020, 22(9), 1040; https://doi.org/10.3390/e22091040 - 17 Sep 2020
Viewed by 907
Abstract
Accurate blood smear quantification with various blood cell samples is of great clinical importance. The conventional manual process of blood smear quantification is quite time consuming and prone to errors. Therefore, this paper presents automatic detection of the most frequently occurring condition in human blood—microcytic hypochromic anemia—which is the cause of various life-threatening diseases. This task is accomplished by first segmenting the blood contents, i.e., Red Blood Cells (RBCs), White Blood Cells (WBCs), and platelets. Then, the most influential features, such as geometric shape descriptors, Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), and Gabor features (mean squared energy and mean amplitude), are extracted from each of the RBCs. To discriminate the cells as hypochromic microcytes among other RBC classes, scanning is done at angles of 0°, 45°, 90°, and 135°. To achieve a high level of accuracy, Adaptive Synthetic (AdaSyn) sampling for imbalance learning is used to balance the datasets, and the locality sensitive discriminant analysis (LSDA) technique is used for feature reduction. Finally, using these features, classification of blood cells is done with multilayer perceptron and random forest learning algorithms. Performance in terms of accuracy was 96%, which is better than that of existing techniques. The final outcome of this work may be useful in efforts to produce a cost-effective screening scheme that could make inexpensive blood smear screening available globally, thus providing early detection of these diseases. Full article
(This article belongs to the Special Issue Reliability of Modern Electro-Mechanical Systems)
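The GLCM features mentioned above are built from co-occurrence counts of gray levels at a given pixel offset; the four scanning angles correspond to four offsets. A minimal sketch with one derived feature (contrast); the offset convention and the tiny test images are illustrative, not the paper's pipeline:

```python
def glcm(image, offset, levels):
    """Gray-level co-occurrence counts for one (row, col) pixel offset."""
    drow, dcol = offset
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + drow, c + dcol
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

# Pixel offsets for the four scanning angles (in degrees).
ANGLES = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def contrast(m):
    """GLCM contrast: sum of (i - j)^2 weighted by normalized co-occurrence."""
    total = sum(sum(row) for row in m) or 1
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

# A flat patch has zero contrast; a checkerboard is maximal at 0 degrees.
flat = [[1, 1], [1, 1]]
checker = [[0, 1], [1, 0]]
```

Computing such features at all four angles, as the abstract describes, makes the texture description less sensitive to the orientation of cell structures.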
Article
A Novel Hybrid Approach for Partial Discharge Signal Detection Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Approximate Entropy
Entropy 2020, 22(9), 1039; https://doi.org/10.3390/e22091039 - 17 Sep 2020
Cited by 3 | Viewed by 725
Abstract
To eliminate the influence of white noise in partial discharge (PD) detection, we propose a novel method based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and approximate entropy (ApEn). By introducing adaptive noise into the decomposition process, CEEMDAN can effectively separate the original signal into different intrinsic mode functions (IMFs) with distinctive frequency scales. Afterward, the approximate entropy value of each IMF is calculated to eliminate noisy IMFs. Then, correlation coefficient analysis is employed to select useful IMFs that represent dominant PD features. Finally, the real IMFs are extracted for PD signal reconstruction. Compared with EEMD, CEEMDAN further improves reconstruction accuracy and reduces the number of iterations needed to solve mode mixing problems. The results on both simulated and on-site PD signals show that the proposed method can be effectively employed for noise suppression and successfully extracts PD pulses. The fusion algorithm combines the respective advantages of CEEMDAN and ApEn and has a better de-noising effect than EMD and EEMD. Full article
(This article belongs to the Section Signal and Data Analysis)
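The ApEn-based IMF screening step can be sketched as follows. The CEEMDAN decomposition itself is assumed to come from an existing implementation (e.g., a library such as PyEMD); only approximate entropy and a hypothetical noise threshold of 0.3 are shown here:

```python
import math
import random

def approx_entropy(x, m=2, r=0.2):
    """ApEn(m, r) = phi(m) - phi(m+1); larger values mean a more irregular series."""
    def phi(length):
        n = len(x) - length + 1
        templates = [x[i:i + length] for i in range(n)]
        total = 0.0
        for t in templates:
            hits = sum(1 for u in templates
                       if max(abs(a - b) for a, b in zip(t, u)) <= r)
            total += math.log(hits / n)  # self-match keeps the log finite
        return total / n
    return phi(m) - phi(m + 1)

def drop_noisy_imfs(imfs, threshold):
    """Keep IMFs whose ApEn is below the threshold; white-noise-dominated
    modes have high ApEn and are discarded before reconstruction."""
    return [imf for imf in imfs if approx_entropy(imf) < threshold]

# A regular mode survives the screening; a noise-like mode is dropped.
random.seed(7)
regular = [0.0, 1.0] * 50
noisy = [random.random() for _ in range(100)]
kept = drop_noisy_imfs([regular, noisy], threshold=0.3)
```

In the paper's full pipeline, the retained IMFs are additionally filtered by correlation with the original signal before being summed into the de-noised PD signal.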
Article
Market of Stocks during Crisis Looks Like a Flock of Birds
Entropy 2020, 22(9), 1038; https://doi.org/10.3390/e22091038 - 17 Sep 2020
Cited by 1 | Viewed by 862
Abstract
A crisis in financial markets can be considered a collective behaviour phenomenon. Collective behaviour is a complex behaviour that exists among groups of animals, and the Vicsek model has been adapted to represent this complexity. A unique phase space has been introduced to represent all possible results of the model. The return of the transaction volumes versus the return of the closing price of each share is used within the defined phase space. The findings show that the resultant velocity vectors of all shares in this phase space point in the same direction when a financial crisis happens. By monitoring the market's collective behaviour, it is possible to gain more knowledge about the condition of the market during days of crisis. This research investigates the collective behaviour of stocks using the Vicsek model to study the condition of the market during days of crisis. Full article
(This article belongs to the Special Issue Entropy and Social Physics)
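The degree to which the shares' velocity vectors "act in the same direction" can be quantified by the Vicsek order parameter (polarization). A minimal sketch, with made-up (price-return, volume-return) vectors standing in for real market data:

```python
import math

def log_returns(series):
    """Log-returns of consecutive observations (prices or volumes)."""
    return [math.log(series[i + 1] / series[i]) for i in range(len(series) - 1)]

def polarization(vectors):
    """Vicsek order parameter: magnitude of the mean unit vector. Close to 1
    when all 'velocities' point the same way (flocking); near 0 when disordered."""
    n = len(vectors)
    ux = sum(x / math.hypot(x, y) for x, y in vectors) / n
    uy = sum(y / math.hypot(x, y) for x, y in vectors) / n
    return math.hypot(ux, uy)

# One (price-return, volume-return) vector per stock for a single day.
calm_day = [(0.01, -0.02), (-0.015, 0.01), (0.02, 0.02), (-0.01, -0.03)]
crisis_day = [(-0.05, 0.04), (-0.06, 0.05), (-0.04, 0.03), (-0.07, 0.06)]
```

On the crisis-like day all stocks fall on rising volume, so the unit vectors align and the order parameter approaches 1; on the calm day directions scatter and it stays near 0.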
Article
Analytic Hierarchy Process (AHP)-Based Aggregation Mechanism for Resilience Measurement: NATO Aggregated Resilience Decision Support Model
Entropy 2020, 22(9), 1037; https://doi.org/10.3390/e22091037 - 16 Sep 2020
Cited by 1 | Viewed by 813
Abstract
Resilience is a complex system that represents dynamic behaviours through its complicated structure with various nodes, interrelations, and information flows. Like other international organizations, NATO has been dealing with the measurement of this complex phenomenon in order to have a comprehensive understanding of the civil environment and its impact on military operations. With this ultimate purpose, NATO developed and executed a prototype model using the system dynamics modelling and simulation paradigm. NATO has since created an aggregated resilience model as an upgrade of the prototype, as discussed in this study. The structure of the model, the aggregation mechanism, and the shock parametrization methodologies used in the development of the model comprise the scope of this study. The Analytic Hierarchy Process (AHP), a multi-criteria decision-making technique, is the methodology used for the development of the aggregation mechanism. The main reasons for selecting the AHP methodology are its power and usefulness in mitigating bias in the decision-making process, its capability to increase the number of what-if scenarios that can be created, and its contribution to the quality of causal explanations through the granularity it provides. The parametrized strategic shock input page, the AHP-based weighted resilience and risk parameter input pages, the insertion of one more country into the model, and the decision support system page enhance the capacity of the prototype model. As part of the model, the decision support system page stands out as a strategic-level cockpit where colour codes give a clear first impression of the overall situational picture and country-wise resilience and risk status. At the validation workshop, users not only validated the model but also discussed further development opportunities, such as adding more strategic shocks to the model and introducing new parameters to be determined by big data analysis of relevant open-source databases. The developed model has the potential to inspire high-level decision-makers dealing with resilience management in other international organizations, such as the United Nations. Full article
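At its core, AHP derives priority weights from a pairwise-comparison matrix as its principal eigenvector and checks judgement consistency against Saaty's random indices. A minimal sketch; the 3x3 judgement matrix is a made-up example, not a NATO resilience matrix:

```python
def ahp_weights(pairwise, iterations=200):
    """Priority weights = normalized principal eigenvector of the
    pairwise-comparison matrix, computed by power iteration."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [v / s for v in w]
    return w

# Saaty's random consistency indices for matrix sizes n = 1..9.
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]

def consistency_ratio(pairwise, w):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1); CR < 0.1 is the
    conventional bar for an acceptably consistent judgement matrix."""
    n = len(pairwise)
    aw = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    return ((lam - n) / (n - 1)) / RANDOM_INDEX[n - 1]

# A perfectly consistent matrix: criterion A is judged 2x B and 4x C.
judgements = [[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]]
weights = ahp_weights(judgements)
```

For a perfectly consistent matrix the weights come out in the stated 4:2:1 proportion and the consistency ratio is zero; real expert judgements yield small positive ratios.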
Article
Regularization Methods Based on the Lq-Likelihood for Linear Models with Heavy-Tailed Errors
Entropy 2020, 22(9), 1036; https://doi.org/10.3390/e22091036 - 16 Sep 2020
Viewed by 764
Abstract
We propose regularization methods for linear models based on the Lq-likelihood, which is a generalization of the log-likelihood using a power function. Regularization methods are popular for the estimation in the normal linear model. However, heavy-tailed errors are also important in statistics and machine learning. We assume q-normal distributions as the errors in linear models. A q-normal distribution is heavy-tailed, which is defined using a power function, not the exponential function. We find that the proposed methods for linear models with q-normal errors coincide with the ordinary regularization methods that are applied to the normal linear model. The proposed methods can be computed using existing packages because they are penalized least squares methods. We examine the proposed methods using numerical experiments, showing that the methods perform well, even when the error is heavy-tailed. The numerical experiments also illustrate that our methods work well in model selection and generalization, especially when the error is slightly heavy-tailed. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
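Since the proposed estimators reduce to ordinary penalized least squares, they can be computed with standard machinery. As a generic illustration of that computation (a plain ridge solver, not the paper's Lq-specific code):

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge(X, y, lam):
    """Penalized least squares: w = (X^T X + lam * I)^{-1} X^T y."""
    p = len(X[0])
    gram = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
             for j in range(p)] for i in range(p)]
    xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(p)]
    return solve(gram, xty)

# The OLS fit is recovered when lam = 0; a large lam shrinks weights toward zero.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]
w_ols = ridge(X, y, 0.0)
```

In practice one would use an existing package, as the abstract notes; the point is that the q-normal error model changes the motivation for the penalty, not the numerical routine.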
Article
Time Fractional Fisher–KPP and Fitzhugh–Nagumo Equations
Entropy 2020, 22(9), 1035; https://doi.org/10.3390/e22091035 - 16 Sep 2020
Cited by 1 | Viewed by 958
Abstract
A standard reaction–diffusion equation consists of two additive terms, a diffusion term and a reaction rate term. The latter term is obtained directly from a reaction rate equation, which is itself derived from known reaction kinetics, together with modelling assumptions such as the law of mass action for well-mixed systems. In formulating a reaction–subdiffusion equation, it is not sufficient to know the reaction rate equation. It is also necessary to know details of the reaction kinetics, even in well-mixed systems where reactions are not diffusion limited. This is because, at a fundamental level, birth and death processes need to be dealt with differently in subdiffusive environments. While there has been some discussion of this in the published literature, few examples have been provided, and there are still very many papers being published in which Caputo fractional time derivatives simply replace first-order time derivatives in reaction–diffusion equations. In this paper, we formulate clear examples of reaction–subdiffusion systems based on: equal birth and death rate dynamics; Fisher–Kolmogorov, Petrovsky and Piskunov (Fisher–KPP) equation dynamics; and Fitzhugh–Nagumo equation dynamics. These examples illustrate how to incorporate considerations of reaction kinetics into fractional reaction–diffusion equations. We also show how the dynamics of a system with birth rates and death rates cancelling, in an otherwise subdiffusive environment, are governed by a mass-conserving tempered time fractional diffusion equation that is subdiffusive for short times but exhibits standard diffusion for long times. Full article
(This article belongs to the Special Issue Fractional Calculus and the Future of Science)