
Entropy, Volume 21, Issue 12 (December 2019) – 105 articles

Cover Story (view full-size image): The information bottleneck (IB) method is a technique for extracting information in one random variable that is relevant for predicting another random variable. IB has applications in many fields, including machine learning with neural networks. In order to perform IB, however, one must find an optimally-compressed "bottleneck" random variable, which involves solving a difficult optimization problem with an information-theoretic objective function. We propose a method for solving this optimization problem using neural networks and a recently proposed bound on mutual information. We demonstrate that our approach exhibits better performance than other recent proposals. View this paper
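For reference, the information-theoretic objective mentioned in the cover story is conventionally written as the IB Lagrangian (this is the standard textbook formulation; the paper's notation may differ):

```latex
% Information bottleneck: compress X into T while preserving information about Y
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```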
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Multi-Type Node Detection in Network Communities
Entropy 2019, 21(12), 1237; https://doi.org/10.3390/e21121237 - 17 Dec 2019
Cited by 2 | Viewed by 906
Abstract
Patterns of connectivity among nodes on networks can be revealed by community detection algorithms. The great significance of communities in the study of clustering patterns of nodes in different systems has led to the development of various methods for identifying different node types on diverse complex systems. However, most of the existing methods identify only either disjoint nodes or overlapping nodes. Many of these methods rarely identify disjunct nodes, even though they could play significant roles on networks. In this paper, a new method, which distinctly identifies disjoint nodes (node clusters), disjunct nodes (single node partitions) and overlapping nodes (nodes binding overlapping communities), is proposed. The approach, which differs from existing methods, involves iterative computation of bridging centrality to determine nodes with the highest bridging centrality value. Additionally, node similarity is computed between the bridge-node and its neighbours, and the neighbours with the lowest node similarity values are disconnected. This process continues until a stopping criterion is met. The bridging centrality metric and the Jaccard similarity coefficient are employed to identify bridge-nodes (nodes at cut points) and the level of similarity between the bridge-nodes and their direct neighbours, respectively. Properties that characterise disjunct nodes are equally highlighted. Extensive experiments are conducted with artificial networks and real-world datasets, and the results obtained demonstrate the efficiency of the proposed method in distinctly detecting and classifying multi-type nodes in network communities. This method can be applied to a wide range of areas, such as the examination of cell interactions and drug design, disease control in epidemics, and the dislodging of organised crime gangs and drug-courier networks. Full article
(This article belongs to the Special Issue Computation in Complex Networks)
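As a concrete illustration of the similarity step described in the abstract, the sketch below computes Jaccard similarity between a bridge-node and its neighbours over plain adjacency sets and selects the least-similar neighbours as disconnection candidates. The toy graph, function names, and selection rule are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): Jaccard similarity between a
# bridge-node and its neighbours, and selection of the least-similar
# neighbours as candidates for disconnection.

def jaccard(adj, u, v):
    """Jaccard coefficient of the neighbour sets of u and v."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def least_similar_neighbours(adj, bridge):
    """Neighbours of `bridge` with the minimum Jaccard similarity to it."""
    scores = {v: jaccard(adj, bridge, v) for v in adj[bridge]}
    lowest = min(scores.values())
    return sorted(v for v, s in scores.items() if s == lowest)

# Toy graph: node 'b' bridges two triangles, so its weakest tie is 'd'.
adj = {
    'a': {'b', 'c'}, 'c': {'a', 'b'}, 'b': {'a', 'c', 'd'},
    'd': {'b', 'e', 'f'}, 'e': {'d', 'f'}, 'f': {'d', 'e'},
}
```

Iterating this step, recomputing bridging centrality after each disconnection until a stopping criterion is met, would reproduce the overall loop the abstract sketches.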

Open Access Article
Probabilistic Modeling with Matrix Product States
Entropy 2019, 21(12), 1236; https://doi.org/10.3390/e21121236 - 17 Dec 2019
Cited by 7 | Viewed by 894
Abstract
Inspired by the possibility that generative models based on quantum circuits can provide a useful inductive bias for sequence modeling tasks, we propose an efficient training algorithm for a subset of classically simulable quantum circuit models. The gradient-free algorithm, presented as a sequence of exactly solvable effective models, is a modification of the density matrix renormalization group procedure adapted for learning a probability distribution. The conclusion that circuit-based models offer a useful inductive bias for classical datasets is supported by experimental results on the parity learning problem. Full article
(This article belongs to the Section Quantum Information)

Open Access Article
A Quantum Cellular Automata Type Architecture with Quantum Teleportation for Quantum Computing
Entropy 2019, 21(12), 1235; https://doi.org/10.3390/e21121235 - 17 Dec 2019
Viewed by 1029
Abstract
We propose an architecture based on Quantum Cellular Automata which allows the use of only one type of quantum gate per computational step, using nearest neighbor interactions. The model is built in partial steps, each one of them analyzed using nearest neighbor interactions, starting with single-qubit operations and continuing with two-qubit ones. A demonstration of the model is given by analyzing how these techniques can be used to design a circuit implementing the Quantum Fourier Transform. Because the model uses only one type of quantum gate at each phase of the computation, physical implementation can be easier: at each step only one kind of input pulse needs to be applied to the apparatus. Full article
(This article belongs to the Special Issue Quantum Information Processing)

Open Access Review
Mathematics and the Brain: A Category Theoretical Approach to Go Beyond the Neural Correlates of Consciousness
Entropy 2019, 21(12), 1234; https://doi.org/10.3390/e21121234 - 17 Dec 2019
Cited by 2 | Viewed by 2383
Abstract
Consciousness is a central issue in neuroscience; however, we still lack a formal framework that can address the nature of the relationship between consciousness and its physical substrates. In this review, we provide a novel mathematical framework of category theory (CT), in which we can define and study the sameness between different domains of phenomena such as consciousness and its neural substrates. CT was designed and developed to deal with the relationships between various domains of phenomena. We introduce three concepts of CT: (i) category; (ii) inclusion functor and expansion functor; and, most importantly, (iii) natural transformation between the functors. Each of these mathematical concepts is related to specific features in the neural correlates of consciousness (NCC). In this novel framework, we examine two of the major theories of consciousness, the integrated information theory (IIT) of consciousness and the temporospatial theory of consciousness (TTC). We conclude that CT, especially the application of the notion of natural transformation, highlights the need to go beyond NCC and reveals questions that must be addressed by any future neuroscientific theory of consciousness. Full article
(This article belongs to the Special Issue Integrated Information Theory)

Open Access Article
Detecting Causality in Multivariate Time Series via Non-Uniform Embedding
Entropy 2019, 21(12), 1233; https://doi.org/10.3390/e21121233 - 16 Dec 2019
Cited by 5 | Viewed by 1152
Abstract
Causal analysis based on non-uniform embedding schemes is an important way to detect the underlying interactions between dynamic systems. However, there are still obstacles to estimating high-dimensional conditional mutual information and forming an optimal mixed embedding vector in traditional non-uniform embedding schemes. In this study, we present a new non-uniform embedding method framed in information theory to detect causality for multivariate time series, named LM-PMIME, which integrates a low-dimensional approximation of conditional mutual information with a mixed search strategy for the construction of the mixed embedding vector. We apply the proposed method to simulations of linear stochastic, nonlinear stochastic, and chaotic systems, demonstrating its superiority over the partial conditional mutual information from mixed embedding (PMIME) method. Moreover, the proposed method works well for multivariate time series with weak coupling strengths, especially for chaotic systems. As a practical application, we show its applicability to epilepsy multichannel electrocorticographic recordings. Full article
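The conditional mutual information at the heart of such embedding schemes can be illustrated with a simple plug-in estimator over discrete samples. This is a generic textbook estimator, not the paper's low-dimensional approximation:

```python
import math
from collections import Counter

def conditional_mi(triples):
    """Plug-in estimate of I(X;Y|Z) in bits from a list of (x, y, z) samples."""
    n = len(triples)
    cxyz = Counter(triples)                            # joint counts of (x, y, z)
    cxz = Counter((x, z) for x, _, z in triples)       # marginal counts of (x, z)
    cyz = Counter((y, z) for _, y, z in triples)       # marginal counts of (y, z)
    cz = Counter(z for _, _, z in triples)             # marginal counts of z
    return sum(
        (c / n) * math.log2(c * cz[z] / (cxz[(x, z)] * cyz[(y, z)]))
        for (x, y, z), c in cxyz.items()
    )
```

For a sample where Y simply copies a fair binary X (with Z constant), the estimate equals 1 bit; when X and Y are independent given Z, it is 0.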

Open Access Article
Progress in Carnot and Chambadal Modeling of Thermomechanical Engine by Considering Entropy Production and Heat Transfer Entropy
Entropy 2019, 21(12), 1232; https://doi.org/10.3390/e21121232 - 16 Dec 2019
Cited by 9 | Viewed by 717
Abstract
Nowadays the importance of thermomechanical engines is recognized worldwide. Since the industrial revolution, physicists and engineers have sought to maximize the efficiency of these machines, but also the mechanical energy or the power output of the engine, as we have recently found. The optimization procedure applied in many works in the literature focuses on considering new objective functions, including economic and environmental criteria (e.g., the ecological coefficient of performance, ECOP). The debate here is oriented more towards fundamental aspects. It is known that the maximum of the power output is not obtained under the same conditions as the maximum of efficiency. This is shown, among other things, by the so-called nice radical that accounts for efficiency at maximum power, most often for the endoreversible configuration. We propose here to enrich the model and the debate by emphasizing the fundamental role of the heat transfer entropy together with the production of entropy, accounting for the external or internal irreversibilities of the converter. This modeling, original to our knowledge, leads to new and more general results, which are reported here. The main consequences of the approach are emphasized, and new limits of the efficiency at maximum energy or power output are obtained. Full article

Open Access Article
A New Approach to Fuzzy TOPSIS Method Based on Entropy Measure under Spherical Fuzzy Information
Entropy 2019, 21(12), 1231; https://doi.org/10.3390/e21121231 - 16 Dec 2019
Cited by 10 | Viewed by 1005
Abstract
The spherical fuzzy set (SFS) is one of the most important and extensive concepts for accommodating more uncertainties than existing fuzzy set structures. In this article, we describe a novel enhanced TOPSIS-based procedure for tackling multi-attribute group decision making (MAGDM) problems under a spherical fuzzy setting, in which the weights of both decision-makers (DMs) and criteria are totally unknown. First, we study the notion of SFSs, the score and accuracy functions of SFSs, and their basic operating laws. In addition, we define a generalized distance measure for SFSs based on a spherical fuzzy entropy measure to compute the unknown weight information. Secondly, the spherical fuzzy information-based decision-making technique for MAGDM is presented. Lastly, an illustrative example of robot selection is presented to reveal the efficiency of the proposed spherical fuzzy decision support approach, along with a discussion of comparative results, to prove that its results are feasible and credible. Full article
(This article belongs to the Section Multidisciplinary Applications)
Open Access Feature Paper Article
A Novel Approach to Support Failure Mode, Effects, and Criticality Analysis Based on Complex Networks
Entropy 2019, 21(12), 1230; https://doi.org/10.3390/e21121230 - 16 Dec 2019
Cited by 2 | Viewed by 794
Abstract
Failure Mode, Effects and Criticality Analysis (FMECA) is a method which involves quantitative failure analysis. It systematically examines potential failure modes in a system, as well as the components of the system, to determine the impact of a failure. In addition, it is one of the most powerful techniques used for risk assessment and maintenance management. However, various drawbacks are inherent to the classical FMECA method, especially in ranking failure modes. This paper proposes a novel approach that uses complex network theory to support FMECA. Firstly, the failure modes and their causes and effects are defined as nodes, and a weighted graph is established according to the logical relationships between the failure modes and their causes and effects. Secondly, we use complex network theory to analyze the weighted graph, and the entropy centrality approach is applied to identify influential nodes. Finally, a real-world case is presented to illustrate and verify the proposed method. Full article
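One simple way to score nodes of a weighted graph by entropy — a minimal stand-in for the entropy centrality used in the paper, whose exact definition is not reproduced here — is the Shannon entropy of a node's normalized incident edge weights:

```python
import math

def local_weight_entropy(wadj, node):
    """Shannon entropy (bits) of the normalized incident-edge weights of `node`.
    Higher values mean the node spreads its influence more evenly."""
    weights = list(wadj[node].values())
    total = sum(weights)
    return -sum((w / total) * math.log2(w / total) for w in weights if w > 0)

# Hypothetical mini-graph of failure modes linked to causes and effects.
wadj = {
    'FM1': {'cause1': 0.5, 'effect1': 0.5},  # evenly spread -> 1 bit
    'FM2': {'cause2': 0.9, 'effect2': 0.1},  # concentrated  -> lower entropy
}
```

Ranking nodes by such an entropy score, on the graph of failure modes, causes, and effects, mirrors the "identify influential nodes" step described in the abstract.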

Open Access Article
Digital Volume Pulse Measured at the Fingertip as an Indicator of Diabetic Peripheral Neuropathy in the Aged and Diabetic
Entropy 2019, 21(12), 1229; https://doi.org/10.3390/e21121229 - 16 Dec 2019
Cited by 3 | Viewed by 990
Abstract
This study investigated the application of a modified percussion entropy index (PEIPPI) in assessing the complexity of baroreflex sensitivity (BRS) for diabetic peripheral neuropathy prognosis. The index was acquired by comparing the obedience of the fluctuation tendency in the change between the amplitudes of continuous digital volume pulse (DVP) and variations in the peak-to-peak interval (PPI) from a decomposed intrinsic mode function (i.e., IMF6) through ensemble empirical mode decomposition (EEMD). In total, 100 middle-aged subjects were split into 3 groups: healthy subjects (group 1, 48–89 years, n = 34), subjects with type 2 diabetes without peripheral neuropathy within 5 years (group 2, 42–86 years, n = 42, HbA1c ≥ 6.5%), and type 2 diabetic patients with peripheral neuropathy within 5 years (group 3, 37–75 years, n = 24). The PEIPPI values successfully discriminated among the three groups (p < 0.017) and indicated significant associations with the anthropometric (i.e., body weight and waist circumference) and serum biochemical (i.e., triglycerides, glycated hemoglobin, and fasting blood glucose) parameters in all subjects (p < 0.05). The present study, which utilized the DVP signals of aged, overweight subjects and diabetic patients, successfully determined the PPI intervals from IMF6 through EEMD. The PEIPPI can provide a prognosis of peripheral neuropathy for diabetic patients within 5 years after photoplethysmography (PPG) measurement. Full article
(This article belongs to the Special Issue Entropy and Nonlinear Dynamics in Medicine, Health, and Life Sciences)

Open Access Article
Entropy of the Multi-Channel EEG Recordings Identifies the Distributed Signatures of Negative, Neutral and Positive Affect in Whole-Brain Variability
Entropy 2019, 21(12), 1228; https://doi.org/10.3390/e21121228 - 16 Dec 2019
Cited by 3 | Viewed by 912
Abstract
Individuals’ ability to express their subjective experiences in terms of such attributes as pleasant/unpleasant or positive/negative feelings forms a fundamental property of their affect and emotion. However, neuroscientific findings on the underlying neural substrates of affect appear to be inconclusive, with some reporting the presence of distinct and independent brain systems and others identifying flexible and distributed brain regions. A common theme among these studies is the focus on the change in brain activation. As a result, they do not take into account the findings that indicate that brain activation and its information content do not necessarily modulate together, and that stimuli with equivalent sensory and behavioural processing demands may not necessarily result in differential brain activation. In this article, we take a different stance on the analysis of the differential effect of negative, neutral and positive affect on brain functioning, in which we look into whole-brain variability: that is, the change in brain information processing measured in multiple distributed regions. For this purpose, we compute the entropy of the multi-channel EEG recordings of individuals who watched movie clips with differing affect. Our results suggest that whole-brain variability significantly differentiates between negative, neutral and positive affect. They also indicate that although some brain regions contribute more to such differences, it is the whole-brain variational pattern that results in their significantly above-chance-level prediction. These results imply that although the underlying brain substrates for negative, neutral and positive affect exhibit quantitatively differing degrees of variability, their differences are rather subtly encoded in the whole-brain variational patterns that are distributed across its entire activity. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
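A minimal sketch of a per-channel entropy estimate of the kind described above, using histogram binning of amplitudes; the binning scheme and bin count are illustrative assumptions, not the paper's estimator:

```python
import math
from collections import Counter

def channel_entropy(samples, n_bins=8):
    """Shannon entropy (bits) of one channel's amplitude distribution,
    estimated after uniform binning of the samples."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0          # guard against a flat channel
    bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```

A constant channel carries no variability (zero entropy), while amplitudes spread evenly over all bins give the maximum, log2(n_bins) bits; comparing such values across channels and conditions is the spirit of the whole-brain variability analysis.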

Open Access Article
Convolutional Recurrent Neural Networks with a Self-Attention Mechanism for Personnel Performance Prediction
Entropy 2019, 21(12), 1227; https://doi.org/10.3390/e21121227 - 16 Dec 2019
Cited by 2 | Viewed by 774
Abstract
Personnel performance is important for the high-technology industry to ensure its core competitive advantages. Therefore, predicting personnel performance is an important research area in human resource management (HRM). In this paper, to improve prediction performance, we propose a novel framework for personnel performance prediction to help decision-makers forecast future personnel performance and recruit the most suitable talent. Firstly, a hybrid convolutional recurrent neural network (CRNN) model based on a self-attention mechanism is presented, which can automatically learn discriminative features and capture global contextual information from personnel performance data. Moreover, we treat the prediction problem as a classification task. Then, a k-nearest neighbor (KNN) classifier is used to predict personnel performance. The proposed framework is applied to a real case of personnel performance prediction. The experimental results demonstrate that the presented approach achieves significant performance improvement compared to existing methods. Full article
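The final classification stage described above can be sketched with a plain k-nearest-neighbour vote over feature vectors. This is a generic KNN standing in for classification over the learned CRNN features; the data and labels are made up for illustration:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest labelled points (Euclidean distance)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical 2-D feature vectors with performance labels.
train = [
    ((0.1, 0.2), 'low'), ((0.2, 0.1), 'low'), ((0.0, 0.3), 'low'),
    ((0.9, 0.8), 'high'), ((0.8, 0.9), 'high'), ((1.0, 1.0), 'high'),
]
```

For example, `knn_predict(train, (0.15, 0.15))` votes among the three closest points, all labelled 'low'.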

Open Access Article
Entropy Generation and Heat Transfer in Drilling Nanoliquids with Clay Nanoparticles
Entropy 2019, 21(12), 1226; https://doi.org/10.3390/e21121226 - 16 Dec 2019
Cited by 1 | Viewed by 721
Abstract
Different types of nanomaterials are used these days. Among them, clay nanoparticles are one of the most applicable and affordable options. Specifically, clay nanoparticles have numerous applications in the field of medical science for cleaning blood, water, etc. Based on this motivation, this article aimed to study entropy generation in different drilling nanoliquids with clay nanoparticles. Entropy generation and natural convection usually occur during the drilling process of oil and gas from rocks and land, wherein clay nanoparticles may be included in the drilling fluids. In this work, water, engine oil and kerosene oil were taken as base fluids. A comparative analysis was completed for these three types of base fluid, each containing clay nanoparticles. Numerical values of viscosity and effective thermal conductivity were computed for the nanofluids based on the Maxwell–Garnett (MG) and Brinkman models. The closed-form solution of the formulated problem (in terms of partial differential equations with defined initial and boundary conditions) was determined using the Laplace transform technique. Numerical results for the temperature and velocity fields were used to calculate the Bejan number and local entropy generation. Such solutions are uncommon in the literature, and therefore this work can assist in obtaining exact solutions to a number of technically relevant problems of this type. Herein, the effects of different parameters on entropy generation and on Bejan number minimization and maximization are displayed through graphs. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer II)

Open Access Article
A Comparative Study of Geoelectric Signals Possibly Associated with the Occurrence of Two Ms > 7 EQs in the South Pacific Coast of Mexico
Entropy 2019, 21(12), 1225; https://doi.org/10.3390/e21121225 - 15 Dec 2019
Viewed by 861
Abstract
During past decades, several studies have suggested the existence of possible seismic electric precursors associated with earthquakes of magnitude M > 7. However, additional analyses are needed to have more reliable evidence of pattern behavior prior to the occurrence of a big event. In this article we report analyses of self-potential ΔV records spanning approximately two years from three electro-seismic stations in Mexico, located at Acapulco, Guerrero; Petatlán, Guerrero; and Pinotepa Nacional, Oaxaca. On 18 April 2014 an Ms 7.2 earthquake occurred near our Petatlán station. Our study shows two notable anomalies: one in the behavior of the Fourier power spectrum of ΔV in the ultra-low-frequency (ULF) range, and one in the transition of the αl-exponent of the detrended fluctuation analysis of the ΔV time series from uncorrelated to correlated signals. These anomalies lasted approximately three and a half months before the main shock. We compare this electric pattern with another electric signal we reported in association with an Ms 7.4 earthquake that occurred on 14 September 1995 in Guerrero state, Mexico. Our characterization of the anomalies observed in both signals points out similar features that enrich our knowledge about precursory phenomena linked to the occurrence of earthquakes of magnitude M > 7. Full article

Open Access Article
Alterations of Cardiovascular Complexity during Acute Exposure to High Altitude: A Multiscale Entropy Approach
Entropy 2019, 21(12), 1224; https://doi.org/10.3390/e21121224 - 15 Dec 2019
Cited by 2 | Viewed by 838
Abstract
Stays at high altitude induce alterations in cardiovascular control and are a model of specific pathological cardiovascular derangements at sea level. However, high-altitude alterations of the complex cardiovascular dynamics remain an almost unexplored issue. Therefore, our aim is to describe the altered cardiovascular complexity at high altitude with a multiscale entropy (MSE) approach. We recorded the beat-by-beat series of systolic and diastolic blood pressure and heart rate in 20 participants for 15 min twice, at sea level and after arrival at 4554 m a.s.l. We estimated Sample Entropy and MSE at scales of up to 64 beats, deriving average MSE values over the scales corresponding to the high-frequency (MSEHF) and low-frequency (MSELF) bands of heart-rate variability. We found a significant loss of complexity at high altitude, with the heart-rate and blood-pressure scales complementary to each other: the decrease was concentrated in Sample Entropy and MSEHF for heart rate, and in MSELF for blood pressure. These changes can be ascribed to the acutely increased chemoreflex sensitivity in hypoxia, which causes sympathetic activation and hyperventilation. Considering high altitude as a model of pathological states like heart failure, our results suggest new ways for monitoring treatments and rehabilitation protocols. Full article
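The coarse-graining step at the core of the MSE approach, together with a basic Sample Entropy estimator, can be sketched as follows (a standard textbook formulation with illustrative defaults, not the study's exact parameters):

```python
import math

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): tolerance is r times the series' standard deviation."""
    n = len(x)
    mean = sum(x) / n
    tol = r * (sum((v - mean) ** 2 for v in x) / n) ** 0.5

    def matches(mm):
        # Count template pairs of length mm within tolerance (Chebyshev distance).
        t = [x[i:i + mm] for i in range(n - mm + 1)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= tol
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float('inf')

def multiscale_entropy(x, scales):
    """Sample Entropy of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, s)) for s in scales]
```

Averaging the `multiscale_entropy` values over the scale ranges corresponding to the HF and LF bands would then yield quantities in the spirit of the MSEHF and MSELF indices described in the abstract.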

Open Access Article
Information Theory for Non-Stationary Processes with Stationary Increments
Entropy 2019, 21(12), 1223; https://doi.org/10.3390/e21121223 - 15 Dec 2019
Cited by 2 | Viewed by 956
Abstract
We describe how to analyze the wide class of non-stationary processes with stationary centered increments using Shannon information theory. To do so, we adopt a practical viewpoint and define ersatz quantities from time-averaged probability distributions. These ersatz versions of entropy, mutual information, and entropy rate can be estimated when only a single realization of the process is available. We abundantly illustrate our approach by analyzing Gaussian and non-Gaussian self-similar signals, as well as multi-fractal signals. Using Gaussian signals allows us to check that our approach is robust, in the sense that all quantities behave as expected from analytical derivations. Using the stationarity (independence of the integration time) of the ersatz entropy rate, we show that this quantity is not only able to finely probe the self-similarity of the process, but also offers a new way to quantify multi-fractality. Full article
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)

Open Access Article
Deep Learning and Artificial Intelligence for the Determination of the Cervical Vertebra Maturation Degree from Lateral Radiography
Entropy 2019, 21(12), 1222; https://doi.org/10.3390/e21121222 - 14 Dec 2019
Cited by 2 | Viewed by 1159
Abstract
Deep Learning (DL) and Artificial Intelligence (AI) tools have shown great success in different areas of medical diagnostics. In this paper, we show another success in orthodontics. In orthodontics, the right treatment timing of many actions and operations is crucial, because many environmental and genetic conditions may modify jaw growth. The stage of growth is related to the Cervical Vertebra Maturation (CVM) degree; thus, determining the CVM in order to choose the suitable timing of the treatment is important. In orthodontics, lateral X-ray radiography is used to determine it. Many classical methods require knowledge and time to examine and identify certain features. Nowadays, ML and AI tools are used for many medical and biological diagnostic imaging tasks. This paper reports on the development of a Deep Learning (DL) Convolutional Neural Network (CNN) method to determine, directly from images, the degree of maturation of the CVM, classified into six degrees. The results show the performance of the proposed method in different contexts, with different numbers of images for training, evaluation and testing, and different pre-processing of these images. The proposed model and method are validated by cross-validation. The implemented software is almost ready for use by orthodontists. Full article

Open Access Article
Identification of Functional Bioprocess Model for Recombinant E. coli Cultivation Process
Entropy 2019, 21(12), 1221; https://doi.org/10.3390/e21121221 - 14 Dec 2019
Cited by 1 | Viewed by 789
Abstract
The purpose of this study is to introduce an improved Luedeking–Piret model that represents a structurally simple biomass concentration approach. The developed routine provides acceptable accuracy when fitting experimental data that incorporate the target protein concentration of Escherichia coli culture BL21 (DE3) pET28a in fed-batch processes. This paper presents system identification, biomass, and product parameter fitting routines, from their roots of origin to the entropy-related development, characterized by robustness and simplicity. A single tuning coefficient allows for the selection of an optimization criterion that serves equally well for higher and lower biomass concentrations. The idea of the paper is to demonstrate that the use of fundamental knowledge can make the general model more suitable for technological use than a sophisticated artificial neural network. Experimental validation of the proposed model involved data analysis of six cultivation experiments, compared to 19 experiments used for model fitting and parameter estimation. Full article
(This article belongs to the Special Issue Entropy-Based Algorithms for Signal Processing)
Show Figures

Figure 1

Open AccessArticle
Permutation Entropy and Statistical Complexity Analysis of Brazilian Agricultural Commodities
Entropy 2019, 21(12), 1220; https://doi.org/10.3390/e21121220 - 14 Dec 2019
Cited by 8 | Viewed by 1005
Abstract
Agricultural commodities are considered perhaps the most important commodities, as any abrupt increase in food prices has serious consequences on food security and welfare, especially in developing countries. In this work, we analyze predictability of Brazilian agricultural commodity prices during the period after [...] Read more.
Agricultural commodities are considered perhaps the most important commodities, as any abrupt increase in food prices has serious consequences on food security and welfare, especially in developing countries. In this work, we analyze predictability of Brazilian agricultural commodity prices during the period after 2007/2008 food crisis. We use information theory based method Complexity/Entropy causality plane (CECP) that was shown to be successful in the analysis of market efficiency and predictability. By estimating information quantifiers permutation entropy and statistical complexity, we associate to each commodity the position in CECP and compare their efficiency (lack of predictability) using the deviation from a random process. Coffee market shows highest efficiency (lowest predictability) while pork market shows lowest efficiency (highest predictability). By analyzing temporal evolution of commodities in the complexity–entropy causality plane, we observe that during the analyzed period (after 2007/2008 crisis) the efficiency of cotton, rice, and cattle markets increases, the soybeans market shows the decrease in efficiency until 2012, followed by the lower predictability and the increase of efficiency, while most commodities (8 out of total 12) exhibit relatively stable efficiency, indicating increased market integration in post-crisis period. Full article
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)
Show Figures

Figure 1

Open AccessArticle
Finite Amplitude Stability of Internal Steady Flows of the Giesekus Viscoelastic Rate-Type Fluid
Entropy 2019, 21(12), 1219; https://doi.org/10.3390/e21121219 - 13 Dec 2019
Cited by 3 | Viewed by 788
Abstract
Using a Lyapunov type functional constructed on the basis of thermodynamical arguments, we investigate the finite amplitude stability of internal steady flows of viscoelastic fluids described by the Giesekus model. Using the functional, we derive bounds on the Reynolds and the Weissenberg number [...] Read more.
Using a Lyapunov type functional constructed on the basis of thermodynamical arguments, we investigate the finite amplitude stability of internal steady flows of viscoelastic fluids described by the Giesekus model. Using the functional, we derive bounds on the Reynolds and the Weissenberg number that guarantee the unconditional asymptotic stability of the corresponding steady internal flow, wherein the distance between the steady flow field and the perturbed flow field is measured with the help of the Bures–Wasserstein distance between positive definite matrices. The application of the theoretical results is documented in the finite amplitude stability analysis of Taylor–Couette flow. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)
Show Figures

Figure 1

Open AccessArticle
Ordering of Trotterization: Impact on Errors in Quantum Simulation of Electronic Structure
Entropy 2019, 21(12), 1218; https://doi.org/10.3390/e21121218 - 13 Dec 2019
Cited by 7 | Viewed by 1086
Abstract
Trotter–Suzuki decompositions are frequently used in the quantum simulation of quantum chemistry. They transform the evolution operator into a form implementable on a quantum device, while incurring an error—the Trotter error. The Trotter error can be made arbitrarily small by increasing the Trotter [...] Read more.
Trotter–Suzuki decompositions are frequently used in the quantum simulation of quantum chemistry. They transform the evolution operator into a form implementable on a quantum device, while incurring an error—the Trotter error. The Trotter error can be made arbitrarily small by increasing the Trotter number. However, this increases the length of the quantum circuits required, which may be impractical. It is therefore desirable to find methods of reducing the Trotter error through alternate means. The Trotter error is dependent on the order in which individual term unitaries are applied. Due to the factorial growth in the number of possible orderings with respect to the number of terms, finding an optimal strategy for ordering Trotter sequences is difficult. In this paper, we propose three ordering strategies, and assess their impact on the Trotter error incurred. Initially, we exhaustively examine the possible orderings for molecular hydrogen in a STO-3G basis. We demonstrate how the optimal ordering scheme depends on the compatibility graph of the Hamiltonian, and show how it varies with increasing bond length. We then use 44 molecular Hamiltonians to evaluate two strategies based on coloring their incompatibility graphs, while considering the properties of the obtained colorings. We find that the Trotter error for most systems involving heavy atoms, using a reference magnitude ordering, is less than 1 kcal/mol. Relative to this, the difference between ordering schemes can be substantial, being approximately on the order of millihartrees. The coloring-based ordering schemes are reasonably promising—particularly for systems involving heavy atoms—however further work is required to increase dependence on the magnitude of terms. Finally, we consider ordering strategies based on the norm of the Trotter error operator, including an iterative method for generating the new error operator terms added upon insertion of a term into an ordered Hamiltonian. Full article
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)
Show Figures

Figure 1

Open AccessEditorial
The Fractional View of Complexity
Entropy 2019, 21(12), 1217; https://doi.org/10.3390/e21121217 - 13 Dec 2019
Viewed by 601
Abstract
Fractal analysis and fractional differential equations have been proven as useful tools for describing the dynamics of complex phenomena characterized by long memory and spatial heterogeneity [...] Full article
(This article belongs to the Special Issue The Fractional View of Complexity)
Open AccessArticle
Predicting Student Performance and Deficiency in Mastering Knowledge Points in MOOCs Using Multi-Task Learning
Entropy 2019, 21(12), 1216; https://doi.org/10.3390/e21121216 - 12 Dec 2019
Cited by 2 | Viewed by 837
Abstract
Massive open online courses (MOOCs), which have been deemed a revolutionary teaching mode, are increasingly being used in higher education. However, there remain deficiencies in understanding the relationship between online behavior of students and their performance, and in verifying how well a student [...] Read more.
Massive open online courses (MOOCs), which have been deemed a revolutionary teaching mode, are increasingly being used in higher education. However, there remain deficiencies in understanding the relationship between online behavior of students and their performance, and in verifying how well a student comprehends learning material. Therefore, we propose a method for predicting student performance and mastery of knowledge points in MOOCs based on assignment-related online behavior; this allows for those providing academic support to intervene and improve learning outcomes of students facing difficulties. The proposed method was developed while using data from 1528 participants in a C Programming course, from which we extracted assignment-related features. We first applied a multi-task multi-layer long short-term memory-based student performance predicting method with cross-entropy as the loss function to predict students’ overall performance and mastery of each knowledge point. Our method incorporates the attention mechanism, which might better reflect students’ learning behavior and performance. Our method achieves an accuracy of 92.52% for predicting students’ performance and a recall rate of 94.68%. Students’ actions, such as submission times and plagiarism, were related to their performance in the MOOC, and the results demonstrate that our method predicts the overall performance and knowledge points that students cannot master well. Full article
(This article belongs to the Special Issue Theory and Applications of Information Theoretic Machine Learning)
Show Figures

Figure 1

Open AccessArticle
A Novel Improved Feature Extraction Technique for Ship-Radiated Noise Based on IITD and MDE
Entropy 2019, 21(12), 1215; https://doi.org/10.3390/e21121215 - 12 Dec 2019
Cited by 4 | Viewed by 625
Abstract
Ship-radiated noise signal has a lot of nonlinear, non-Gaussian, and nonstationary information characteristics, which can reflect the important signs of ship performance. This paper proposes a novel feature extraction technique for ship-radiated noise based on improved intrinsic time-scale decomposition (IITD) and multiscale dispersion [...] Read more.
Ship-radiated noise signal has a lot of nonlinear, non-Gaussian, and nonstationary information characteristics, which can reflect the important signs of ship performance. This paper proposes a novel feature extraction technique for ship-radiated noise based on improved intrinsic time-scale decomposition (IITD) and multiscale dispersion entropy (MDE). The proposed feature extraction technique is named IITD-MDE. First, IITD is applied to decompose the ship-radiated noise signal into a series of intrinsic scale components (ISCs). Then, we select the ISC with the main information through the correlation analysis, and calculate the MDE value as feature vectors. Finally, the feature vectors are input into the support vector machine (SVM) for ship classification. The experimental results indicate that the recognition rate of the proposed technique reaches 86% accuracy. Therefore, compared with the other feature extraction methods, the proposed method provides a new solution for classifying different types of ships effectively. Full article
Show Figures

Figure 1

Open AccessArticle
A Two Phase Method for Solving the Distribution Problem in a Fuzzy Setting
Entropy 2019, 21(12), 1214; https://doi.org/10.3390/e21121214 - 11 Dec 2019
Viewed by 847
Abstract
In this paper, a new method for the solution of distribution problem in a fuzzy setting is presented. It consists of two phases. In the first of them, the problem is formulated as the classical, fully fuzzy transportation problem. A new, straightforward numerical [...] Read more.
In this paper, a new method for the solution of distribution problem in a fuzzy setting is presented. It consists of two phases. In the first of them, the problem is formulated as the classical, fully fuzzy transportation problem. A new, straightforward numerical method for solving this problem is proposed. This method is implemented using the α -cut approximation of fuzzy values and the probability approach to interval comparison. The method allows us to provide the straightforward fuzzy extension of a simplex method. It is important that the results are fuzzy values. To validate our approach, these results were compared with those obtained using the competing method and those we got using the Monte–Carlo method. In the second phase, the results obtained in the first one (the fuzzy profit) are used as the natural constraints on the parameters of multiobjective task. In our approach to the solution of distribution problem, the fuzzy local criteria based on the overall profit and contracts breaching risks are used. The particular local criteria are aggregated with the use of most popular aggregation modes. To obtain a compromise solution, the compromise general criterion is introduced, which is the aggregation of aggregating modes with the use of level-2 fuzzy sets. As the result, a new two phase method for solving the fuzzy, nonlinear, multiobjective distribution problem aggregating the fuzzy local criteria based on the overall profit and contracts breaching risks has been developed. Based on the comparison of the results obtained using our method with those obtained by competing one, and on the results of the sensitivity analysis, we can conclude that the method may be successfully used in applications. Numerical examples illustrate the proposed method. Full article
(This article belongs to the Section Multidisciplinary Applications)
Show Figures

Figure 1

Open AccessArticle
Improving Neural Machine Translation by Filtering Synthetic Parallel Data
Entropy 2019, 21(12), 1213; https://doi.org/10.3390/e21121213 - 11 Dec 2019
Viewed by 979
Abstract
Synthetic data has been shown to be effective in training state-of-the-art neural machine translation (NMT) systems. Because the synthetic data is often generated by back-translating monolingual data from the target language into the source language, it potentially contains a lot of noise—weakly paired [...] Read more.
Synthetic data has been shown to be effective in training state-of-the-art neural machine translation (NMT) systems. Because the synthetic data is often generated by back-translating monolingual data from the target language into the source language, it potentially contains a lot of noise—weakly paired sentences or translation errors. In this paper, we propose a novel approach to filter this noise from synthetic data. For each sentence pair of the synthetic data, we compute a semantic similarity score using bilingual word embeddings. By selecting sentence pairs according to these scores, we obtain better synthetic parallel data. Experimental results on the IWSLT 2017 Korean→English translation task show that despite using much less data, our method outperforms the baseline NMT system with back-translation by up to 0.72 and 0.62 Bleu points for tst2016 and tst2017, respectively. Full article
(This article belongs to the Section Multidisciplinary Applications)
Open AccessArticle
Dissipation in Non-Steady State Regulatory Circuits
Entropy 2019, 21(12), 1212; https://doi.org/10.3390/e21121212 - 10 Dec 2019
Cited by 1 | Viewed by 735
Abstract
In order to respond to environmental signals, cells often use small molecular circuits to transmit information about their surroundings. Recently, motivated by specific examples in signaling and gene regulation, a body of work has focused on the properties of circuits that function out [...] Read more.
In order to respond to environmental signals, cells often use small molecular circuits to transmit information about their surroundings. Recently, motivated by specific examples in signaling and gene regulation, a body of work has focused on the properties of circuits that function out of equilibrium and dissipate energy. We briefly review the probabilistic measures of information and dissipation and use simple models to discuss and illustrate trade-offs between information and dissipation in biological circuits. We find that circuits with non-steady state initial conditions can transmit more information at small readout delays than steady state circuits. The dissipative cost of this additional information proves marginal compared to the steady state dissipation. Feedback does not significantly increase the transmitted information for out of steady state circuits but does decrease dissipative costs. Lastly, we discuss the case of bursty gene regulatory circuits that, even in the fast switching limit, function out of equilibrium. Full article
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
Show Figures

Figure 1

Open AccessArticle
On the Statistical Mechanics of Life: Schrödinger Revisited
Entropy 2019, 21(12), 1211; https://doi.org/10.3390/e21121211 - 10 Dec 2019
Cited by 4 | Viewed by 1791
Abstract
We study the statistical underpinnings of life, in particular its increase in order and complexity over evolutionary time. We question some common assumptions about the thermodynamics of life. We recall that contrary to widespread belief, even in a closed system entropy growth can [...] Read more.
We study the statistical underpinnings of life, in particular its increase in order and complexity over evolutionary time. We question some common assumptions about the thermodynamics of life. We recall that contrary to widespread belief, even in a closed system entropy growth can accompany an increase in macroscopic order. We view metabolism in living things as microscopic variables directly driven by the second law of thermodynamics, while viewing the macroscopic variables of structure, complexity and homeostasis as mechanisms that are entropically favored because they open channels for entropy to grow via metabolism. This perspective reverses the conventional relation between structure and metabolism, by emphasizing the role of structure for metabolism rather than the converse. Structure extends in time, preserving information along generations, particularly in the genetic code, but also in human culture. We argue that increasing complexity is an inevitable tendency for systems with these dynamics and explain this with the notion of metastable states, which are enclosed regions of the phase-space that we call “bubbles,” and channels between these, which are discovered by random motion of the system. We consider that more complex systems inhabit larger bubbles (have more available states), and also that larger bubbles are more easily entered and less easily exited than small bubbles. The result is that the system entropically wanders into ever-larger bubbles in the foamy phase space, becoming more complex over time. This formulation makes intuitive why the increase in order/complexity over time is often stepwise and sometimes collapses catastrophically, as in biological extinction. Full article
(This article belongs to the Special Issue Biological Statistical Mechanics)
Show Figures

Figure 1

Open AccessArticle
The Static Standing Postural Stability Measured by Average Entropy
Entropy 2019, 21(12), 1210; https://doi.org/10.3390/e21121210 - 10 Dec 2019
Cited by 1 | Viewed by 719
Abstract
Static standing postural stability has been measured by multiscale entropy (MSE), which is used to measure complexity. In this study, we used the average entropy (AE) to measure the static standing postural stability, as AE is a good measure of disorder. The center [...] Read more.
Static standing postural stability has been measured by multiscale entropy (MSE), which is used to measure complexity. In this study, we used the average entropy (AE) to measure the static standing postural stability, as AE is a good measure of disorder. The center of pressure (COP) trajectories were collected from 11 subjects under four kinds of balance conditions, from stable to unstable: bipedal with open eyes, bipedal with closed eyes, unipedal with open eyes, and unipedal with closed eyes. The AE, entropy of entropy (EoE), and MSE methods were used to analyze these COP data, and EoE was found to be a good measure of complexity. The AE of the 11 subjects sequentially increased by 100% as the balance conditions progressed from stable to unstable, but the results of EoE and MSE did not follow this trend. Therefore, AE, rather than EoE or MSE, is a good measure of static standing postural stability. Furthermore, the comparison of EoE and AE plots exhibited an inverted U curve, which is another example of a complexity versus disorder inverted U curve. Full article
Show Figures

Figure 1

Open AccessArticle
Statistical Inference on the Shannon Entropy of Inverse Weibull Distribution under the Progressive First-Failure Censoring
Entropy 2019, 21(12), 1209; https://doi.org/10.3390/e21121209 - 10 Dec 2019
Cited by 1 | Viewed by 690
Abstract
Entropy is an uncertainty measure of random variables which mathematically represents the prospective quantity of the information. In this paper, we mainly focus on the estimation for the parameters and entropy of an Inverse Weibull distribution under progressive first-failure censoring using classical (Maximum [...] Read more.
Entropy is an uncertainty measure of random variables which mathematically represents the prospective quantity of the information. In this paper, we mainly focus on the estimation for the parameters and entropy of an Inverse Weibull distribution under progressive first-failure censoring using classical (Maximum Likelihood) and Bayesian methods. For Bayesian approaches, the Bayesian estimates are obtained based on both asymmetric (General Entropy, Linex) and symmetric (Squared Error) loss functions. Due to the complex form of Bayes estimates, we cannot get an explicit solution. Therefore, the Lindley method as well as Importance Sampling procedure is applied. Furthermore, using Importance Sampling method, the Highest Posterior Density credible intervals of entropy are constructed. As a comparison, the asymptotic intervals of entropy are also gained. Finally, a simulation study is implemented and a real data set analysis is performed to apply the previous methods. Full article
Show Figures

Figure 1

Open AccessArticle
Learning from Both Experts and Data
Entropy 2019, 21(12), 1208; https://doi.org/10.3390/e21121208 - 10 Dec 2019
Cited by 2 | Viewed by 721
Abstract
In this work, we study the problem of inferring a discrete probability distribution using both expert knowledge and empirical data. This is an important issue for many applications where the scarcity of data prevents a purely empirical approach. In this context, it is [...] Read more.
In this work, we study the problem of inferring a discrete probability distribution using both expert knowledge and empirical data. This is an important issue for many applications where the scarcity of data prevents a purely empirical approach. In this context, it is common to rely first on an a priori from initial domain knowledge before proceeding to an online data acquisition. We are particularly interested in the intermediate regime, where we do not have enough data to do without the initial a priori of the experts, but enough to correct it if necessary. We present here a novel way to tackle this issue, with a method providing an objective way to choose the weight to be given to experts compared to data. We show, both empirically and theoretically, that our proposed estimator is always more efficient than the best of the two models (expert or data) within a constant. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Figure 1

Previous Issue
Next Issue
Back to TopTop