
Table of Contents

Entropy, Volume 21, Issue 12 (December 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Article
Finite Amplitude Stability of Internal Steady Flows of the Giesekus Viscoelastic Rate-Type Fluid
Entropy 2019, 21(12), 1219; https://doi.org/10.3390/e21121219 - 13 Dec 2019
Abstract
Using a Lyapunov type functional constructed on the basis of thermodynamical arguments, we investigate the finite amplitude stability of internal steady flows of viscoelastic fluids described by the Giesekus model. Using the functional, we derive bounds on the Reynolds and the Weissenberg number that guarantee the unconditional asymptotic stability of the corresponding steady internal flow, wherein the distance between the steady flow field and the perturbed flow field is measured with the help of the Bures–Wasserstein distance between positive definite matrices. The application of the theoretical results is documented in the finite amplitude stability analysis of Taylor–Couette flow. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)
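For readers unfamiliar with the metric mentioned in the abstract, the Bures–Wasserstein distance between two symmetric positive definite matrices A and B has the closed form d(A,B)^2 = tr(A) + tr(B) - 2 tr((A^(1/2) B A^(1/2))^(1/2)). The short Python sketch below is not from the paper; the matrices are illustrative stand-ins for conformation tensors.

# Minimal sketch: Bures-Wasserstein distance between two symmetric
# positive definite matrices (illustrative values only).
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(A: np.ndarray, B: np.ndarray) -> float:
    """d(A,B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})."""
    root_A = sqrtm(A)
    cross = sqrtm(root_A @ B @ root_A)
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    return float(np.sqrt(max(d2.real, 0.0)))

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, -0.2], [-0.2, 1.2]])
print(bures_wasserstein(A, B))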
Open Access Article
Ordering of Trotterization: Impact on Errors in Quantum Simulation of Electronic Structure
Entropy 2019, 21(12), 1218; https://doi.org/10.3390/e21121218 - 13 Dec 2019
Abstract
Trotter–Suzuki decompositions are frequently used in the quantum simulation of quantum chemistry. They transform the evolution operator into a form implementable on a quantum device, while incurring an error—the Trotter error. The Trotter error can be made arbitrarily small by increasing the Trotter number. However, this increases the length of the quantum circuits required, which may be impractical. It is therefore desirable to find methods of reducing the Trotter error through alternate means. The Trotter error is dependent on the order in which individual term unitaries are applied. Due to the factorial growth in the number of possible orderings with respect to the number of terms, finding an optimal strategy for ordering Trotter sequences is difficult. In this paper, we propose three ordering strategies, and assess their impact on the Trotter error incurred. Initially, we exhaustively examine the possible orderings for molecular hydrogen in a STO-3G basis. We demonstrate how the optimal ordering scheme depends on the compatibility graph of the Hamiltonian, and show how it varies with increasing bond length. We then use 44 molecular Hamiltonians to evaluate two strategies based on coloring their incompatibility graphs, while considering the properties of the obtained colorings. We find that the Trotter error for most systems involving heavy atoms, using a reference magnitude ordering, is less than 1 kcal/mol. Relative to this, the difference between ordering schemes can be substantial, being approximately on the order of millihartrees. The coloring-based ordering schemes are reasonably promising—particularly for systems involving heavy atoms—however, further work is required to increase dependence on the magnitude of terms. Finally, we consider ordering strategies based on the norm of the Trotter error operator, including an iterative method for generating the new error operator terms added upon insertion of a term into an ordered Hamiltonian. Full article
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)
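To make the ordering dependence concrete, the following toy Python sketch (not the authors' code; the three-term Hamiltonian and its coefficients are invented) compares the first-order Trotter error across all permutations of the term unitaries.

# Toy illustration: first-order Trotter error depends on term ordering.
import itertools
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

terms = [0.5 * np.kron(X, X), 0.3 * np.kron(Z, np.eye(2)), 0.2 * np.kron(Y, Z)]
t = 0.5
exact = expm(-1j * t * sum(terms))

for order in itertools.permutations(range(len(terms))):
    trotter = np.eye(4, dtype=complex)
    for k in order:
        trotter = expm(-1j * t * terms[k]) @ trotter
    err = np.linalg.norm(trotter - exact, 2)   # spectral-norm distance to exact evolution
    print(order, f"{err:.3e}")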
Open Access Editorial
The Fractional View of Complexity
Entropy 2019, 21(12), 1217; https://doi.org/10.3390/e21121217 - 13 Dec 2019
Viewed by 50
Abstract
Fractal analysis and fractional differential equations have been proven to be useful tools for describing the dynamics of complex phenomena characterized by long memory and spatial heterogeneity [...] Full article
(This article belongs to the Special Issue The Fractional View of Complexity)
Open Access Article
Predicting Student Performance and Deficiency in Mastering Knowledge Points in MOOCs Using Multi-Task Learning
Entropy 2019, 21(12), 1216; https://doi.org/10.3390/e21121216 - 12 Dec 2019
Viewed by 95
Abstract
Massive open online courses (MOOCs), which have been deemed a revolutionary teaching mode, are increasingly being used in higher education. However, there remain deficiencies in understanding the relationship between online behavior of students and their performance, and in verifying how well a student comprehends learning material. Therefore, we propose a method for predicting student performance and mastery of knowledge points in MOOCs based on assignment-related online behavior; this allows those providing academic support to intervene and improve learning outcomes of students facing difficulties. The proposed method was developed using data from 1528 participants in a C Programming course, from which we extracted assignment-related features. We first applied a multi-task multi-layer long short-term memory-based student performance prediction method with cross-entropy as the loss function to predict students’ overall performance and mastery of each knowledge point. Our method incorporates the attention mechanism, which might better reflect students’ learning behavior and performance. Our method achieves an accuracy of 92.52% for predicting students’ performance and a recall rate of 94.68%. Students’ actions, such as submission times and plagiarism, were related to their performance in the MOOC, and the results demonstrate that our method predicts the overall performance and knowledge points that students cannot master well. Full article
(This article belongs to the Special Issue Theory and Applications of Information Theoretic Machine Learning)
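A minimal, hypothetical PyTorch sketch of the kind of architecture the abstract describes is given below: a shared LSTM encoder over assignment-related behavior sequences with one head for overall performance and one for per-knowledge-point mastery. Layer sizes, feature counts, and the omission of the attention mechanism are simplifying assumptions, not the authors' settings.

import torch
import torch.nn as nn

class MultiTaskLSTM(nn.Module):
    def __init__(self, n_features=16, hidden=64, n_knowledge_points=10):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.performance_head = nn.Linear(hidden, 2)                  # pass / fail logits
        self.knowledge_head = nn.Linear(hidden, n_knowledge_points)   # mastery logits

    def forward(self, x):                      # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        last = h[-1]                           # final hidden state of the top layer
        return self.performance_head(last), self.knowledge_head(last)

model = MultiTaskLSTM()
behavior = torch.randn(8, 20, 16)              # 8 students, 20 assignments, 16 features
perf_logits, kp_logits = model(behavior)
# Joint training would sum a cross-entropy loss on perf_logits with a
# multi-label binary cross-entropy loss on kp_logits.
print(perf_logits.shape, kp_logits.shape)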
Open Access Article
A Novel Improved Feature Extraction Technique for Ship-Radiated Noise Based on IITD and MDE
Entropy 2019, 21(12), 1215; https://doi.org/10.3390/e21121215 - 12 Dec 2019
Viewed by 97
Abstract
Ship-radiated noise signals contain abundant nonlinear, non-Gaussian, and nonstationary characteristics, which reflect important indicators of ship performance. This paper proposes a novel feature extraction technique for ship-radiated noise based on improved intrinsic time-scale decomposition (IITD) and multiscale dispersion entropy (MDE). The proposed feature extraction technique is named IITD-MDE. First, IITD is applied to decompose the ship-radiated noise signal into a series of intrinsic scale components (ISCs). Then, we select the ISCs containing the main information through correlation analysis and calculate their MDE values as feature vectors. Finally, the feature vectors are input into the support vector machine (SVM) for ship classification. The experimental results indicate that the recognition rate of the proposed technique reaches 86% accuracy. Therefore, compared with the other feature extraction methods, the proposed method provides a new solution for classifying different types of ships effectively. Full article
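As background for the MDE features, here is a hedged single-scale dispersion entropy sketch in Python (the class count, embedding dimension, and test signals are assumptions); the multiscale variant additionally coarse-grains the signal before this step.

import numpy as np
from collections import Counter
from scipy.stats import norm

def dispersion_entropy(x, c=6, m=3, delay=1):
    x = np.asarray(x, dtype=float)
    # 1) map the signal to (0, 1) with the normal CDF, then to c classes
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # 2) count dispersion patterns of length m
    n_patterns = len(z) - (m - 1) * delay
    patterns = Counter(tuple(z[i:i + (m - 1) * delay + 1:delay]) for i in range(n_patterns))
    p = np.array(list(patterns.values())) / n_patterns
    # 3) Shannon entropy of the pattern distribution, normalized by ln(c^m)
    return float(-(p * np.log(p)).sum() / np.log(c ** m))

rng = np.random.default_rng(0)
print(dispersion_entropy(rng.normal(size=2000)))                      # broadband noise, near 1
print(dispersion_entropy(np.sin(np.linspace(0, 20 * np.pi, 2000))))   # pure tone, much lower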
Open Access Article
A Two Phase Method for Solving the Distribution Problem in a Fuzzy Setting
Entropy 2019, 21(12), 1214; https://doi.org/10.3390/e21121214 - 11 Dec 2019
Viewed by 133
Abstract
In this paper, a new method for the solution of the distribution problem in a fuzzy setting is presented. It consists of two phases. In the first phase, the problem is formulated as the classical, fully fuzzy transportation problem. A new, straightforward numerical method for solving this problem is proposed. This method is implemented using the α-cut approximation of fuzzy values and the probability approach to interval comparison. The method allows us to provide a straightforward fuzzy extension of the simplex method. It is important that the results are fuzzy values. To validate our approach, these results were compared with those obtained using a competing method and those obtained using the Monte Carlo method. In the second phase, the results obtained in the first phase (the fuzzy profit) are used as the natural constraints on the parameters of the multiobjective task. In our approach to the solution of the distribution problem, fuzzy local criteria based on the overall profit and contract-breaching risks are used. The particular local criteria are aggregated with the use of the most popular aggregation modes. To obtain a compromise solution, a compromise general criterion is introduced, which is the aggregation of aggregating modes with the use of level-2 fuzzy sets. As a result, a new two-phase method for solving the fuzzy, nonlinear, multiobjective distribution problem aggregating the fuzzy local criteria based on the overall profit and contract-breaching risks has been developed. Based on the comparison of the results obtained using our method with those obtained by the competing one, and on the results of the sensitivity analysis, we can conclude that the method may be successfully used in applications. Numerical examples illustrate the proposed method. Full article
(This article belongs to the Section Multidisciplinary Applications)
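As a rough illustration of the first phase only (not the authors' fuzzy simplex extension or their probabilistic interval-comparison rule): at a fixed α-cut the fuzzy unit costs become intervals, and solving the crisp transportation LP at the two interval endpoints brackets the optimal value. All numbers below are invented.

import numpy as np
from scipy.optimize import linprog

supply = np.array([30.0, 70.0])
demand = np.array([40.0, 60.0])
cost_low = np.array([[4.0, 6.0], [5.0, 3.0]])    # lower ends of the alpha-cut cost intervals
cost_high = np.array([[5.0, 7.5], [6.0, 4.0]])   # upper ends

def transport(cost):
    m, n = cost.shape
    A_eq, b_eq = [], []
    for i in range(m):                           # each source ships exactly its supply
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1; A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):                           # each destination receives exactly its demand
        row = np.zeros(m * n); row[j::n] = 1; A_eq.append(row); b_eq.append(demand[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

print("optimal cost interval:", transport(cost_low), transport(cost_high))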
Open Access Article
Improving Neural Machine Translation by Filtering Synthetic Parallel Data
Entropy 2019, 21(12), 1213; https://doi.org/10.3390/e21121213 - 11 Dec 2019
Viewed by 169
Abstract
Synthetic data has been shown to be effective in training state-of-the-art neural machine translation (NMT) systems. Because the synthetic data is often generated by back-translating monolingual data from the target language into the source language, it potentially contains a lot of noise—weakly paired sentences or translation errors. In this paper, we propose a novel approach to filter this noise from synthetic data. For each sentence pair of the synthetic data, we compute a semantic similarity score using bilingual word embeddings. By selecting sentence pairs according to these scores, we obtain better synthetic parallel data. Experimental results on the IWSLT 2017 Korean→English translation task show that despite using much less data, our method outperforms the baseline NMT system with back-translation by up to 0.72 and 0.62 BLEU points for tst2016 and tst2017, respectively. Full article
(This article belongs to the Section Multidisciplinary Applications)
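A simplified sketch of the filtering idea follows (toy embeddings and an assumed threshold, not the authors' resources): each back-translated sentence pair is scored by the cosine similarity of averaged bilingual word embeddings, and low-scoring pairs are dropped.

import numpy as np

def sentence_vector(tokens, embeddings, dim=4):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def similarity(src_tokens, tgt_tokens, src_emb, tgt_emb):
    u, v = sentence_vector(src_tokens, src_emb), sentence_vector(tgt_tokens, tgt_emb)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Toy bilingual embeddings living in a shared 4-dimensional space.
src_emb = {"src_w1": np.array([1.0, 0, 0, 0]), "src_w2": np.array([0, 1.0, 0, 0])}
tgt_emb = {"i": np.array([1.0, 0, 0, 0]), "am": np.array([0.1, 0.2, 0, 0]),
           "a": np.array([0, 0, 0.1, 0]), "student": np.array([0, 1.0, 0, 0])}

pairs = [(["src_w1", "src_w2"], ["i", "am", "a", "student"]),
         (["src_w1", "src_w2"], ["the", "weather", "is", "nice"])]
scored = [(similarity(s, t, src_emb, tgt_emb), s, t) for s, t in pairs]
keep = [p for p in scored if p[0] >= 0.5]        # threshold is an assumption
print(scored, len(keep))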
Open Access Article
Dissipation in Non-Steady State Regulatory Circuits
Entropy 2019, 21(12), 1212; https://doi.org/10.3390/e21121212 - 10 Dec 2019
Viewed by 139
Abstract
In order to respond to environmental signals, cells often use small molecular circuits to transmit information about their surroundings. Recently, motivated by specific examples in signaling and gene regulation, a body of work has focused on the properties of circuits that function out of equilibrium and dissipate energy. We briefly review the probabilistic measures of information and dissipation and use simple models to discuss and illustrate trade-offs between information and dissipation in biological circuits. We find that circuits with non-steady state initial conditions can transmit more information at small readout delays than steady state circuits. The dissipative cost of this additional information proves marginal compared to the steady state dissipation. Feedback does not significantly increase the transmitted information for out of steady state circuits but does decrease dissipative costs. Lastly, we discuss the case of bursty gene regulatory circuits that, even in the fast switching limit, function out of equilibrium. Full article
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
Open Access Article
On the Statistical Mechanics of Life: Schrödinger Revisited
Entropy 2019, 21(12), 1211; https://doi.org/10.3390/e21121211 - 10 Dec 2019
Viewed by 155
Abstract
We study the statistical underpinnings of life, in particular its increase in order and complexity over evolutionary time. We question some common assumptions about the thermodynamics of life. We recall that contrary to widespread belief, even in a closed system entropy growth can accompany an increase in macroscopic order. We view metabolism in living things as microscopic variables directly driven by the second law of thermodynamics, while viewing the macroscopic variables of structure, complexity and homeostasis as mechanisms that are entropically favored because they open channels for entropy to grow via metabolism. This perspective reverses the conventional relation between structure and metabolism, by emphasizing the role of structure for metabolism rather than the converse. Structure extends in time, preserving information along generations, particularly in the genetic code, but also in human culture. We argue that increasing complexity is an inevitable tendency for systems with these dynamics and explain this with the notion of metastable states, which are enclosed regions of the phase-space that we call “bubbles,” and channels between these, which are discovered by random motion of the system. We consider that more complex systems inhabit larger bubbles (have more available states), and also that larger bubbles are more easily entered and less easily exited than small bubbles. The result is that the system entropically wanders into ever-larger bubbles in the foamy phase space, becoming more complex over time. This formulation makes intuitive why the increase in order/complexity over time is often stepwise and sometimes collapses catastrophically, as in biological extinction. Full article
(This article belongs to the Special Issue Biological Statistical Mechanics)
Open Access Article
The Static Standing Postural Stability Measured by Average Entropy
Entropy 2019, 21(12), 1210; https://doi.org/10.3390/e21121210 - 10 Dec 2019
Viewed by 140
Abstract
Static standing postural stability has been measured by multiscale entropy (MSE), which is used to measure complexity. In this study, we used the average entropy (AE) to measure the static standing postural stability, as AE is a good measure of disorder. The center of pressure (COP) trajectories were collected from 11 subjects under four kinds of balance conditions, from stable to unstable: bipedal with open eyes, bipedal with closed eyes, unipedal with open eyes, and unipedal with closed eyes. The AE, entropy of entropy (EoE), and MSE methods were used to analyze these COP data, and EoE was found to be a good measure of complexity. The AE of the 11 subjects sequentially increased by 100% as the balance conditions progressed from stable to unstable, but the results of EoE and MSE did not follow this trend. Therefore, AE, rather than EoE or MSE, is a good measure of static standing postural stability. Furthermore, the comparison of EoE and AE plots exhibited an inverted U curve, which is another example of a complexity versus disorder inverted U curve. Full article
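For reference, a hedged sketch of the standard multiscale entropy procedure that the study compares against (coarse-graining plus sample entropy) is given below; the parameters m = 2 and r = 0.15·SD follow common practice and are not necessarily the paper's settings.

import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, float)
    r = 0.15 * x.std() if r is None else r   # tolerance; often fixed from the scale-1 series
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev distance
            c += np.sum(d <= r)
        return c
    B, A = count(m), count(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s], float).reshape(n, s).mean(axis=1)  # coarse-graining
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(1)
print(multiscale_entropy(rng.normal(size=3000)))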
Open Access Article
Statistical Inference on the Shannon Entropy of Inverse Weibull Distribution under the Progressive First-Failure Censoring
Entropy 2019, 21(12), 1209; https://doi.org/10.3390/e21121209 - 10 Dec 2019
Viewed by 178
Abstract
Entropy is an uncertainty measure of random variables which mathematically represents the prospective quantity of information. In this paper, we mainly focus on the estimation of the parameters and entropy of an Inverse Weibull distribution under progressive first-failure censoring using classical (Maximum Likelihood) and Bayesian methods. For Bayesian approaches, the Bayesian estimates are obtained based on both asymmetric (General Entropy, Linex) and symmetric (Squared Error) loss functions. Due to the complex form of the Bayes estimates, we cannot obtain an explicit solution. Therefore, the Lindley method as well as the Importance Sampling procedure is applied. Furthermore, using the Importance Sampling method, the Highest Posterior Density credible intervals of entropy are constructed. As a comparison, the asymptotic intervals of entropy are also obtained. Finally, a simulation study is implemented and a real data set analysis is performed to apply the previous methods. Full article
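As a small, simplified illustration (complete, uncensored data and frequentist estimation only, unlike the paper's progressive first-failure censoring and Bayesian treatment): scipy can fit an inverse Weibull (Fréchet) model by maximum likelihood and report the differential entropy of the fitted distribution. The sample is simulated.

import numpy as np
from scipy.stats import invweibull

rng = np.random.default_rng(7)
data = invweibull.rvs(c=2.5, scale=1.8, size=200, random_state=rng)   # simulated lifetimes

c_hat, loc_hat, scale_hat = invweibull.fit(data, floc=0)   # MLE with location fixed at 0
fitted = invweibull(c_hat, loc=loc_hat, scale=scale_hat)
print("MLE shape/scale:", round(c_hat, 3), round(scale_hat, 3))
print("entropy of fitted model:", float(fitted.entropy()))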
Open Access Article
Learning from Both Experts and Data
Entropy 2019, 21(12), 1208; https://doi.org/10.3390/e21121208 - 10 Dec 2019
Viewed by 151
Abstract
In this work, we study the problem of inferring a discrete probability distribution using both expert knowledge and empirical data. This is an important issue for many applications where the scarcity of data prevents a purely empirical approach. In this context, it is common to rely first on an a priori from initial domain knowledge before proceeding to an online data acquisition. We are particularly interested in the intermediate regime, where we do not have enough data to do without the initial a priori of the experts, but enough to correct it if necessary. We present here a novel way to tackle this issue, with a method providing an objective way to choose the weight to be given to experts compared to data. We show, both empirically and theoretically, that our proposed estimator is always more efficient than the best of the two models (expert or data) within a constant. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
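To illustrate the regime the abstract describes, the sketch below mixes an expert prior with empirical frequencies and picks the mixture weight by held-out likelihood; this grid search is only a stand-in for the paper's principled, objective choice of weight, and all numbers are invented.

import numpy as np

def mixture_estimate(counts, prior, weight):
    empirical = counts / counts.sum()
    return weight * prior + (1.0 - weight) * empirical

rng = np.random.default_rng(3)
true_p = np.array([0.5, 0.3, 0.15, 0.05])
prior = np.array([0.4, 0.35, 0.15, 0.10])          # expert's initial guess
data = rng.choice(4, size=30, p=true_p)            # scarce-data regime
counts = np.bincount(data, minlength=4)

holdout = rng.choice(4, size=30, p=true_p)
best = max(np.linspace(0, 1, 21),
           key=lambda w: np.sum(np.log(mixture_estimate(counts, prior, w)[holdout] + 1e-12)))
print("chosen expert weight:", best)
print("estimate:", np.round(mixture_estimate(counts, prior, best), 3))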
Open Access Article
Towards Generation of Cat States in Trapped Ions Set-Ups via FAQUAD Protocols and Dynamical Decoupling
Entropy 2019, 21(12), 1207; https://doi.org/10.3390/e21121207 - 09 Dec 2019
Viewed by 150
Abstract
The high fidelity generation of strongly entangled states of many particles, such as cat states, is a particularly demanding challenge. One approach is to drive the system, within a certain final time, as adiabatically as possible, in order to avoid the generation of unwanted excitations. However, excitations can also be generated by the presence of dissipative effects such as dephasing. Here we compare the effectiveness of Local Adiabatic and the FAst QUasi ADiabatic protocols in achieving a high fidelity for a target superposition state both with and without dephasing. In particular, we consider trapped-ion set-ups in which each spin interacts with all the others with a uniform coupling strength or with a power-law coupling. In order to mitigate the effects of dephasing, we complement the adiabatic protocols with dynamical decoupling and we test its effectiveness. The protocols we study could be readily implemented with state-of-the-art techniques. Full article
(This article belongs to the Special Issue Shortcuts to Adiabaticity)
Open Access Article
Patterns of Heart Rate Dynamics in Healthy Aging Population: Insights from Machine Learning Methods
Entropy 2019, 21(12), 1206; https://doi.org/10.3390/e21121206 - 09 Dec 2019
Viewed by 203
Abstract
Costa et al. (Frontiers in Physiology (2017) 8:255) proved that abnormal features of heart rate variability (HRV) can be discerned by the presence of particular patterns in a signal of time intervals between subsequent heart contractions, called RR intervals. In the following, the statistics of these patterns, quantified using entropic tools, are explored in order to uncover the specifics of the dynamics of heart contraction based on RR intervals. The 33 measures of HRV (standard and new ones) were estimated from four-hour nocturnal recordings obtained from 181 healthy people of different ages and analyzed with machine learning methods. The validation of the methods was based on the results obtained from shuffled data. The exploratory factor analysis provided five factors driving the HRV. We hypothesize that these factors could be related to the commonly assumed physiological sources of HRV: (i) activity of the vagal nervous system; (ii) dynamical balance in the autonomic nervous system; (iii) sympathetic activity; (iv) homeostatic stability; and (v) humoral effects. In particular, the indices describing the patterns (their total volume as well as their distribution) revealed important aspects of the organization of ANS control: the presence or absence of a strong correlation between the patterns’ indices, which distinguished the original rhythms of people from their shuffled representatives. Supposing that the dynamic organization of RR intervals is age dependent, classification with support vector machines was performed. The classification results proved to be strongly dependent on the parameters of the methods used; therefore, determining the age group was not straightforward. Full article
Open Access Article
An Entropy-Based Cross-Efficiency under Variable Returns to Scale
Entropy 2019, 21(12), 1205; https://doi.org/10.3390/e21121205 - 07 Dec 2019
Viewed by 258
Abstract
Cross-efficiency evaluation is an effective methodology for discriminating among a set of decision-making units (DMUs) through both self- and peer-evaluation methods. This evaluation technique is usually used for data envelopment analysis (DEA) models with constant returns to scale due to the fact that negative efficiencies never happen in this case. For cases of variable returns to scale (VRSs), the evaluation may generate negative cross-efficiencies. However, when the production technology is known to be VRS, a VRS model must be used. In this case, negative efficiencies may occur. Negative efficiencies are unreasonable and cause difficulties in calculating the final cross-efficiency. In this paper, we propose a cross-efficiency evaluation method, with the technology of VRS. The cross-efficiency intervals of DMUs were derived from the associated aggressive and benevolent formulations. More importantly, the proposed approach does not produce negative efficiencies. For comparison of DMUs with their cross-efficiency intervals, a numerical index is required. Since the concept of entropy is an effective tool to measure the uncertainty, this concept was employed to build an index for ranking DMUs with cross efficiency intervals. A real-case example was used to illustrate the approach proposed in this paper. Full article
Open Access Article
Modeling Expected Shortfall Using Tail Entropy
Entropy 2019, 21(12), 1204; https://doi.org/10.3390/e21121204 - 07 Dec 2019
Viewed by 216
Abstract
Given the recent replacement of value-at-risk as the regulatory standard measure of risk with expected shortfall (ES) undertaken by the Basel Committee on Banking Supervision, it is imperative that ES gives correct estimates for the value of expected levels of losses in crisis situations. However, the measurement of ES is affected by a lack of observations in the tail of the distribution. While kernel-based smoothing techniques can be used to partially circumvent this problem, in this paper we propose a simple nonparametric tail measure of risk based on information entropy and compare its backtesting performance with that of other standard ES models. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
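For context, the baseline quantities are easy to state: with losses L and level α, VaR is the α-quantile of L and ES is the mean loss beyond VaR. The sketch below computes both from a simulated heavy-tailed sample; the paper's entropy-based tail measure itself is not reproduced here.

import numpy as np

rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=5000) * 0.01      # heavy-tailed daily returns (synthetic)
losses = -returns

alpha = 0.975
var = np.quantile(losses, alpha)                      # value-at-risk at the 97.5% level
es = losses[losses >= var].mean()                     # expected shortfall: mean tail loss
print(f"VaR({alpha:.3f}) = {var:.4f}, ES = {es:.4f}")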
Open Access Feature Paper Article
A Portable Wireless Device for Cyclic Alternating Pattern Estimation from an EEG Monopolar Derivation
Entropy 2019, 21(12), 1203; https://doi.org/10.3390/e21121203 - 07 Dec 2019
Viewed by 180
Abstract
Quality of sleep can be assessed by analyzing the cyclic alternating pattern, a long-lasting periodic activity that is composed of two alternate electroencephalogram patterns, which is considered to be a marker of sleep instability. Experts usually score this pattern through a visual examination of each one-second epoch of an electroencephalogram signal, a repetitive and time-consuming task that is prone to errors. To address these issues, a home monitoring device was developed for automatic scoring of the cyclic alternating pattern by analyzing the signal from one electroencephalogram derivation. Three classifiers, specifically, two recurrent networks (long short-term memory and gated recurrent unit) and one one-dimensional convolutional neural network, were developed and tested to determine which was more suitable for the cyclic alternating pattern phase’s classification. It was verified that the network based on the long short-term memory attained the best results with an average accuracy, sensitivity, specificity and area under the receiver operating characteristic curve of, respectively, 76%, 75%, 77% and 0.752. The classified epochs were then fed to a finite state machine to determine the cyclic alternating pattern cycles and the performance metrics were 76%, 71%, 84% and 0.778, respectively. The performance achieved is in the higher bound of the experts’ expected agreement range and considerably higher than the inter-scorer agreement of multiple experts, implying the usability of the device developed for clinical analysis. Full article
(This article belongs to the Special Issue Intelligent Tools and Applications in Engineering and Mathematics)
Open Access Article
Nonasymptotic Upper Bounds on Binary Single Deletion Codes via Mixed Integer Linear Programming
Entropy 2019, 21(12), 1202; https://doi.org/10.3390/e21121202 - 06 Dec 2019
Viewed by 191
Abstract
The size of the largest binary single deletion code has been unknown for more than 50 years. It is known that Varshamov–Tenengolts (VT) code is an optimum single deletion code for block length n ≤ 10; however, only a few upper bounds of the size of single deletion code are proposed for larger n. We provide improved upper bounds using Mixed Integer Linear Programming (MILP) relaxation technique. Especially, we show the size of single deletion code is smaller than or equal to 173 when the block length n is 11. In the second half of the paper, we propose a conjecture that is equivalent to the long-lasting conjecture that “VT code is optimum for all n”. This equivalent formulation of the conjecture contains small sub-problems that can be numerically verified. We provide numerical results that support the conjecture. Full article
(This article belongs to the Special Issue Information Theory and Graph Signal Processing)
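The Varshamov–Tenengolts construction mentioned in the abstract is simple to check numerically: VT_a(n) = { x in {0,1}^n : Σ_i i·x_i ≡ a (mod n+1) }. Enumerating n = 11, as in the sketch below, shows the largest class has 172 codewords, consistent with the upper bound of 173 reported above.

from itertools import product

def vt_class_sizes(n):
    # size of each VT_a(n) class, a = 0 .. n
    sizes = [0] * (n + 1)
    for word in product((0, 1), repeat=n):
        checksum = sum(i * bit for i, bit in enumerate(word, start=1)) % (n + 1)
        sizes[checksum] += 1
    return sizes

sizes = vt_class_sizes(11)
print(max(sizes))          # size of the best VT code for block length n = 11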
Open Access Article
Entropy Rate Estimation for English via a Large Cognitive Experiment Using Mechanical Turk
Entropy 2019, 21(12), 1201; https://doi.org/10.3390/e21121201 - 06 Dec 2019
Viewed by 212
Abstract
The entropy rate h of a natural language quantifies the complexity underlying the language. While recent studies have used computational approaches to estimate this rate, their results rely fundamentally on the performance of the language model used for prediction. On the other hand, in 1951, Shannon conducted a cognitive experiment to estimate the rate without the use of any such artifact. Shannon’s experiment, however, used only one subject, bringing into question the statistical validity of his value of h = 1.3 bits per character for the English language entropy rate. In this study, we conducted Shannon’s experiment on a much larger scale to reevaluate the entropy rate h via Amazon’s Mechanical Turk, a crowd-sourcing service. The online subjects recruited through Mechanical Turk were each asked to guess the succeeding character after being given the preceding characters until obtaining the correct answer. We collected 172,954 character predictions and analyzed these predictions with a bootstrap technique. The analysis suggests that a large number of character predictions per context length, perhaps as many as 10³, would be necessary to obtain a convergent estimate of the entropy rate, and if fewer predictions are used, the resulting h value may be underestimated. Our final entropy estimate was h ≈ 1.22 bits per character. Full article
(This article belongs to the Special Issue Information Theory and Language)
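As a toy illustration of the estimator family involved (not the paper's bootstrap analysis): if q_i is the observed fraction of predictions that succeeded on the i-th guess in Shannon's guessing game, the entropy of that guess distribution gives Shannon's classical upper bound on the entropy rate. The counts below are invented.

import numpy as np

guess_counts = np.array([12000, 3100, 1400, 700, 350, 250, 150, 50], dtype=float)
q = guess_counts / guess_counts.sum()          # empirical guess-number distribution
upper_bound = -(q * np.log2(q)).sum()          # Shannon's upper bound on h
print(f"upper bound on h: {upper_bound:.2f} bits/character")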
Open Access Article
Nonlinear Heat Transport in Superlattices with Mobile Defects
Entropy 2019, 21(12), 1200; https://doi.org/10.3390/e21121200 - 06 Dec 2019
Viewed by 219
Abstract
We consider heat conduction in a superlattice with mobile defects, which reduce the thermal conductivity of the material. If the defects may be dragged by the heat flux, and if they are stopped at the interfaces of the superlattice, it is seen that the effective thermal resistance of the layers will depend on the heat flux. Thus, the concentration dependence of the transport coefficients plus the mobility of the defects lead to a strongly nonlinear behavior of heat transport, which may be used in some cases as a basis for thermal transistors. Full article
(This article belongs to the Special Issue Selected Papers from 15th Joint European Thermodynamics Conference)
Open Access Article
Application of Continuous Wavelet Transform and Convolutional Neural Network in Decoding Motor Imagery Brain-Computer Interface
Entropy 2019, 21(12), 1199; https://doi.org/10.3390/e21121199 - 05 Dec 2019
Viewed by 274
Abstract
The motor imagery-based brain-computer interface (BCI) using electroencephalography (EEG) has been receiving attention from neural engineering researchers and is being applied to various rehabilitation applications. However, the performance degradation caused by motor imagery EEG with a very low signal-to-noise ratio poses several issues for the practical application of BCI systems. In this paper, we propose a novel motor imagery classification scheme based on the continuous wavelet transform and the convolutional neural network. Continuous wavelet transform with three mother wavelets is used to capture a highly informative EEG image by combining time-frequency and electrode location. A convolutional neural network is then designed to both classify motor imagery tasks and reduce computation complexity. The proposed method was validated using two public BCI datasets, BCI competition IV dataset 2b and BCI competition II dataset III. The proposed methods were found to achieve improved classification performance compared with the existing methods, thus showcasing the feasibility of motor imagery BCI. Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
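A bare-bones sketch of the time-frequency step follows (the sampling rate, scales, and toy signal are assumptions): a continuous wavelet transform of one EEG channel with a Morlet mother wavelet yields the kind of scalogram image that can be stacked per electrode and fed to a CNN. It requires the PyWavelets package.

import numpy as np
import pywt

fs = 250.0                                  # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # toy 10 Hz rhythm plus noise

scales = np.arange(2, 64)
coeffs, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                  # shape: (n_scales, n_samples), one image per channel
print(scalogram.shape, freqs[[0, -1]])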
Open Access Article
Scaling Behaviour and Critical Phase Transitions in Integrated Information Theory
Entropy 2019, 21(12), 1198; https://doi.org/10.3390/e21121198 - 05 Dec 2019
Viewed by 225
Abstract
Integrated Information Theory proposes a measure of conscious activity (Φ), characterised as the irreducibility of a dynamical system to the sum of its components. Due to its computational cost, current versions of the theory (IIT 3.0) are difficult to apply to systems larger than a dozen units, and, in general, it is not well known how integrated information scales as systems grow larger in size. In this article, we propose to study the scaling behaviour of integrated information in a simple model of a critical phase transition: an infinite-range kinetic Ising model. In this model, we assume a homogeneous distribution of couplings to simplify the computation of integrated information. This simplified model allows us to critically review some of the design assumptions behind the measure and connect its properties with well-known phenomena in phase transitions in statistical mechanics. As a result, we point to some aspects of the mathematical definitions of IIT 3.0 that fail to capture critical phase transitions and propose a reformulation of the assumptions made by integrated information measures. Full article
(This article belongs to the Special Issue Integrated Information Theory)
Open Access Article
Comparison of the Trilateral Flash Cycle and Rankine Cycle with Organic Fluid Using the Pinch Point Temperature
Entropy 2019, 21(12), 1197; https://doi.org/10.3390/e21121197 - 05 Dec 2019
Viewed by 254
Abstract
Low-temperature heat utilization can be applied to waste heat from industrial processes or renewable energy sources such as geothermal and ocean energy. The most common low-temperature waste-heat recovery technology is the organic Rankine cycle (ORC). However, the phase change of ORC working fluid for the heat extraction process causes a pinch-point problem, and the heat recovery cannot be efficiently used. To improve heat extraction and power generation, this study explored the cycle characteristics of the trilateral flash cycle (TFC) in a low-temperature heat source. A pinch-point-based methodology was developed for studying the optimal design point and operating conditions and for optimizing working fluid evaporation temperature and mass flow rate. According to the simulation results, the TFC system can recover more waste heat than ORC under the same operating conditions. The net power output of the TFC was approximately 30% higher than ORC but at a cost of higher pump power consumption. Additionally, the TFC was superior to ORC with an extremely low-temperature heat source (<80 °C), and the ideal efficiency was approximately 3% at the highest work output condition. The TFC system is economically beneficial for waste-heat recovery for low-temperature heat sources. Full article
(This article belongs to the Special Issue Exergetic and Thermoeconomic Analysis of Thermal Systems)
Open Access Article
A Survey on Using Kolmogorov Complexity in Cybersecurity
Entropy 2019, 21(12), 1196; https://doi.org/10.3390/e21121196 - 05 Dec 2019
Viewed by 205
Abstract
Security and privacy concerns are challenging the way users interact with devices. The number of devices connected to a home or enterprise network increases every day. Nowadays, the security of information systems is relevant as user information is constantly being shared and moving in the cloud; however, there are still many problems, such as unsecured web interfaces, weak authentication, insecure networks, and lack of encryption, that make services insecure. The software implementations that are currently deployed in companies should have updates and control, as cybersecurity threats increasingly appear over time. There is already some research towards solutions and methods to predict new attacks or classify variants of previously known attacks, such as (algorithmic) information theory. This survey combines all relevant applications of this topic (also known as Kolmogorov Complexity) in the security and privacy domains. The use of Kolmogorov-based approaches is resource-focused without the need for specific knowledge of the topic under analysis. We have defined a taxonomy with already existing work to classify their different application areas and open up new research questions. Full article
(This article belongs to the Special Issue Shannon Information and Kolmogorov Complexity)
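One concrete Kolmogorov-inspired tool that such surveys typically cover is the normalized compression distance (NCD), which replaces the incomputable Kolmogorov complexity with a real compressor; similar byte streams yield a small NCD. The payloads below are synthetic.

import zlib

def c(data: bytes) -> int:
    # compressed length as a stand-in for Kolmogorov complexity
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 20
b_similar = a.replace(b"index", b"login")
b_different = bytes(range(256)) * 8

print("similar payloads:  ", round(ncd(a, b_similar), 3))
print("unrelated payloads:", round(ncd(a, b_different), 3))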
Open Access Article
The Interdependence of Autonomous Human-Machine Teams: The Entropy of Teams, But Not Individuals, Advances Science
Entropy 2019, 21(12), 1195; https://doi.org/10.3390/e21121195 - 05 Dec 2019
Viewed by 222
Abstract
Key concepts: We review interdependence theory measured by entropic forces, findings in support, and several examples from the field to advance a science of autonomous human-machine teams (A-HMTs) with artificial intelligence (AI). While theory is needed for the advent of autonomous HMTs, social theory is predicated on methodological individualism, a statistical and qualitative science that generalizes to neither human teams nor HMTs. Maximum interdependence in human teams is associated with the performance of the best teams when compared to independent individuals; our research confirmed that the top global oil firms maximize interdependence by minimizing redundant workers, replicated for the top militaries in the world, adding that impaired interdependence is associated with proportionately less freedom, increased corruption, and poorer team performance. We advanced theory by confirming that the maximum interdependence in teams requires intelligence to overcome obstacles to maximum entropy production (MEP; e.g., navigating obstacles while abiding by military rules of engagement requires intelligence). Approach: With a case study, we model as harmonic the long-term oscillations driven by two federal agencies in conflict over closing two high-level radioactive waste tanks, ending when citizens recommended closing the tanks. Results: While contradicting rational consensus theory, our quasi-Nash equilibrium model generates the information for neutrals to decide; it suggests that HMTs should adopt how harmonic oscillations in free societies regulate human autonomy to improve decisions and social welfare. Full article
Open Access Article
The Eigenvalue Complexity of Sequences in the Real Domain
Entropy 2019, 21(12), 1194; https://doi.org/10.3390/e21121194 - 05 Dec 2019
Viewed by 209
Abstract
The eigenvalue is one of the important cryptographic complexity measures for sequences. However, the eigenvalue can only evaluate sequences with finite symbols—it is not applicable for real number sequences. Recently, chaos-based cryptography has received widespread attention for its perfect dynamical characteristics. However, dynamical complexity does not completely equate to cryptographic complexity. The security of the chaos-based cryptographic algorithm is not fully guaranteed unless it can be proven or measured by cryptographic standards. Therefore, in this paper, we extended the eigenvalue complexity measure from the finite field to the real number field to make it applicable for the complexity measurement of real number sequences. The probability distribution, expectation, and variance of the eigenvalue of real number sequences are discussed both theoretically and experimentally. With the extension of eigenvalue, we can evaluate the cryptographic complexity of real number sequences, which have a great advantage for cryptographic usage, especially for chaos-based cryptography. Full article
(This article belongs to the Section Complexity)
Open Access Article
Optimization of Condition Monitoring Decision Making by the Criterion of Minimum Entropy
Entropy 2019, 21(12), 1193; https://doi.org/10.3390/e21121193 - 04 Dec 2019
Viewed by 277
Abstract
Condition-based maintenance (CBM) is a promising technique for a wide variety of deteriorating systems. Condition-based maintenance’s effectiveness largely depends on the quality of condition monitoring. The majority of CBM mathematical models consider perfect inspections, in which the system condition is assumed to be determined error-free. This article presents a mathematical model of CBM with imperfect condition monitoring conducted at discrete times. Mathematical expressions were derived for evaluating the probabilities of correct and incorrect decisions when monitoring the system condition at a scheduled time. Further, these probabilities were incorporated into the equation of the Shannon entropy. The problem of determining the optimal preventive maintenance threshold at each inspection time by the criterion of the minimum of Shannon entropy was formulated. For the first time, the article showed that Shannon’s entropy is a convex function of the preventive maintenance threshold for each moment of condition monitoring. It was also shown that the probabilities of correct and incorrect decisions depend on the time and parameters of the degradation model. Numerical calculations show that the proposed approach to determining the optimal preventive maintenance threshold can significantly reduce uncertainty when deciding on the condition of the monitoring object. Full article
(This article belongs to the Section Multidisciplinary Applications)
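A hedged numerical sketch of the idea follows (the Gaussian degradation and measurement-error model and all numbers are assumptions, not the paper's model): estimate the probability of a correct monitoring decision as a function of the preventive-maintenance threshold, then pick the threshold that minimizes the Shannon entropy of the correct/incorrect outcome.

import numpy as np

mu_x, sigma_x = 5.0, 1.0          # assumed degradation level at the inspection time
sigma_e = 0.5                     # assumed measurement error of condition monitoring
limit = 6.0                       # level at which preventive maintenance is actually needed

def correct_prob(threshold, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(mu_x, sigma_x, n)              # true condition
    z = x + rng.normal(0.0, sigma_e, n)           # measured condition
    return np.mean((x >= limit) == (z >= threshold))   # decision matches true need

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

thresholds = np.linspace(4.0, 8.0, 81)
entropies = [binary_entropy(correct_prob(t)) for t in thresholds]
print("entropy-minimizing PM threshold ~", thresholds[int(np.argmin(entropies))])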
Open Access Article
Low-Element Image Restoration Based on an Out-of-Order Elimination Algorithm
Entropy 2019, 21(12), 1192; https://doi.org/10.3390/e21121192 - 04 Dec 2019
Viewed by 191
Abstract
To reduce the consumption of receiving devices, a number of devices at the receiving end undergo low-element treatment (the number of devices at the receiving end is less than that at the transmitting end). The underdetermined blind-source separation system is a classic low-element model at the receiving end. Blind signal extraction in an underdetermined system remains an ill-posed problem, as it is difficult to extract all the source signals. To realize fewer devices at the receiving end without information loss, this paper proposes an image restoration method for underdetermined blind-source separation based on an out-of-order elimination algorithm. Firstly, a chaotic system is used to perform hidden transmission of source signals, where the source signals can hardly be observed and confidentiality is guaranteed. Secondly, empirical mode decomposition is used to decompose and complement the missing observed signals, and the fast independent component analysis (FastICA) algorithm is used to obtain part of the source signals. Finally, all the source signals are successfully separated using the out-of-order elimination algorithm and the FastICA algorithm. The results show that the performance of the underdetermined blind separation algorithm is related to the configuration of the transceiver antenna. With a 3 × 4 antenna configuration, the algorithm in this paper is superior to the comparison algorithm in signal recovery, and its separation performance is better for a lower degree of missing array elements. The end result is that the algorithms discussed in this paper can effectively and completely extract all the source signals. Full article
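A hedged sketch of only the final separation step follows: once the observation matrix has been completed (the paper uses empirical mode decomposition for this), standard FastICA can recover the sources of a determined toy mixture. The mixing matrix and signals are synthetic, and the out-of-order elimination step itself is not reproduced.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 5 * t),            # tone
                np.sign(np.sin(2 * np.pi * 3 * t)),   # square wave
                rng.laplace(size=t.size)]             # noise-like source
mixing = rng.normal(size=(3, 3))                      # determined 3x3 mixture
observed = sources @ mixing.T

estimated = FastICA(n_components=3, random_state=0).fit_transform(observed)
print(estimated.shape)                                # (2000, 3) recovered source estimates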
Open Access Article
Variance Risk Identification and Treatment of Clinical Pathway by Integrated Bayesian Network and Association Rules Mining
Entropy 2019, 21(12), 1191; https://doi.org/10.3390/e21121191 - 04 Dec 2019
Viewed by 216
Abstract
With the continuous development of data mining techniques in the medical field, variance analysis in clinical pathways based on data mining approaches have attracted increasing attention from scholars and decision makers. However, studies on variance analysis and treatment of specific kinds of disease are still relatively scarce. In order to reduce the hazard of postpartum hemorrhage after cesarean section, we conducted a detailed analysis on the relevant risk factors and treatment mechanisms, adopting the integrated Bayesian network and association rule mining approaches. By proposing a Bayesian network model based on regression analysis, we calculated the probability of risk factors determining the key factors that result in postpartum hemorrhage after cesarean section. In addition, we mined a few association rules regarding the treatment of postpartum hemorrhage on the basis of different clinical features. We divided the risk factors into primary and secondary risk factors by realizing the classification of different causes of postpartum hemorrhage after cesarean section and sorted the posterior probability to obtain the key factors in the primary and secondary risk factors: uterine atony and prolonged labor. The rules of clinical features associated with the management of postpartum hemorrhage during cesarean section were obtained. Finally, related strategies were proposed for improving medical service quality and enhancing the rescue efficiency of clinical pathways in China. Full article
Open Access Article
Magnetotelluric Signal-Noise Separation Using IE-LZC and MP
Entropy 2019, 21(12), 1190; https://doi.org/10.3390/e21121190 - 04 Dec 2019
Viewed by 200
Abstract
Eliminating noise signals of the magnetotelluric (MT) method is bound to improve the quality of MT data. However, existing de-noising methods are designed for use in whole MT data sets, causing the loss of low-frequency information and severe mutation of the apparent resistivity-phase curve in low-frequency bands. In this paper, we used information entropy (IE), the Lempel–Ziv complexity (LZC), and matching pursuit (MP) to distinguish and suppress MT noise signals. Firstly, we extracted IE and LZC characteristic parameters from each segment of the MT signal in the time-series. Then, the characteristic parameters were input into the FCM clustering to automatically distinguish between the signal and noise. Next, the MP de-noising algorithm was used independently to eliminate MT signal segments that were identified as interference. Finally, the identified useful signal segments were combined with the denoised data segments to reconstruct the signal. The proposed method was validated through clustering analysis based on the signal samples collected at the Qinghai test site and the measured sites, where the results were compared to those obtained using the remote reference method and independent use of the MP method. The findings show that strong interference is purposefully removed, and the apparent resistivity-phase curve is continuous and stable. Moreover, the processed data can accurately reflect the geoelectrical information and improve the level of geological interpretation. Full article
(This article belongs to the Special Issue Shannon Information and Kolmogorov Complexity)
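A hedged sketch of the feature-extraction idea only (the segment length, the median binarization rule, and the toy segments are assumptions): binarize each time-series segment, count Lempel–Ziv phrases with a simple dictionary parsing, and compute the Shannon information entropy of the amplitude histogram; structured interference tends to score lower on both than the broadband natural signal.

import numpy as np

def binarize(x):
    return "".join("1" if v > np.median(x) else "0" for v in x)

def lz_phrase_count(bits):
    phrases, current, count = set(), "", 0
    for b in bits:
        current += b
        if current not in phrases:        # shortest substring not seen yet
            phrases.add(current)
            count += 1
            current = ""
    return count

def amplitude_entropy(x, bins=32):
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
natural = rng.normal(size=1024)                                      # broadband-like segment
interference = np.sign(np.sin(np.linspace(0, 40 * np.pi, 1024)))     # square-wave interference
for name, seg in [("natural", natural), ("interference", interference)]:
    print(name, lz_phrase_count(binarize(seg)), round(amplitude_entropy(seg), 2))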