
Table of Contents

Entropy, Volume 22, Issue 5 (May 2020) – 99 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: This schematic illustrates the partition of a system’s states into internal states (blue) and [...]
Open Access Article
A New Parametric Life Distribution with Modified Bagdonavičius–Nikulin Goodness-of-Fit Test for Censored Validation, Properties, Applications, and Different Estimation Methods
Entropy 2020, 22(5), 592; https://doi.org/10.3390/e22050592 - 25 May 2020
Abstract
In this paper, we first study a new two-parameter lifetime distribution. This distribution includes “monotone” and “non-monotone” hazard rate functions, which are useful in lifetime data analysis and reliability. Some of its mathematical properties, including explicit expressions for the ordinary and incomplete moments, generating function, Rényi entropy, δ-entropy, order statistics and probability weighted moments, are derived. Non-Bayesian estimation methods such as maximum likelihood, Cramér–von Mises, percentile estimation, and L-moments are used to estimate the model parameters. The importance and flexibility of the new distribution are illustrated by means of two applications to real data sets. Using the approach of the Bagdonavičius–Nikulin goodness-of-fit test for right-censored validation, we then propose and apply a modified chi-square goodness-of-fit test for the Burr X Weibull model. The modified goodness-of-fit statistic is applied to a right-censored real data set. Based on the censored maximum likelihood estimators of the initial data, the modified goodness-of-fit test recovers the loss of information, and the grouped data follow the chi-square distribution. The elements of the modified test criteria are derived. A further real data application illustrates validation under the uncensored scheme. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open Access Article
Storage Space Allocation Strategy for Digital Data with Message Importance
Entropy 2020, 22(5), 591; https://doi.org/10.3390/e22050591 - 25 May 2020
Abstract
This paper focuses on lossy compression storage based on data value, which represents users’ subjective assessments, when the storage size remains insufficient after conventional lossless data compression. To this end, we transform this problem into an optimization that pursues the least importance-weighted reconstruction error within a limited total storage size, where importance is adopted to characterize data value from the users’ viewpoint. On this basis, this paper puts forward an optimal allocation strategy for the storage of digital data under an exponential distortion measurement, which makes rational use of all the storage space. In fact, the theoretical results show that it is a kind of restrictive water-filling. The strategy also characterizes the trade-off between the relative weighted reconstruction error and the available storage size. Consequently, if a relatively small part of the total data value is allowed to be lost, this strategy improves the performance of data compression. Furthermore, this paper also shows that both users’ preferences and special characteristics of the data distribution can trigger small-probability event scenarios in which only a fraction of the data covers the vast majority of users’ interests. In either case, data with highly clustered message importance are beneficial to compression storage. In contrast, from the perspective of optimal storage space allocation based on data value, data with a uniform information distribution are incompressible, which is consistent with information theory. Full article
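The restrictive water-filling allocation described in the abstract can be sketched numerically. The sketch below is an illustrative toy, not the paper's exact formulation: the function name, the squared-error-style distortion w_i·v_i·2^(−2r_i), and the bisection on the water level are all assumptions.

```python
import numpy as np

def allocate_storage(weights, variances, total_bits, tol=1e-9):
    """Importance-weighted reverse water-filling (illustrative sketch).

    Minimizes sum_i w_i * v_i * 2**(-2 * r_i) subject to sum_i r_i <= total_bits
    and r_i >= 0, by bisecting on the water level lam.
    """
    w = np.asarray(weights, float) * np.asarray(variances, float)

    def bits(lam):
        # Sources whose importance-weighted variance falls below the
        # water level receive no storage at all (the "restrictive" part).
        return np.maximum(0.0, 0.5 * np.log2(w / lam))

    lo, hi = tol, float(w.max())
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bits(mid).sum() > total_bits:
            lo = mid   # level too low: budget exceeded, raise it
        else:
            hi = mid
    return bits(hi)
```

With equal importance the budget splits evenly; raising one source's weight shifts bits toward it, mirroring the trade-off between weighted reconstruction error and storage size.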

Open Access Article
Non-Linear Regression Modelling to Estimate the Global Warming Potential of a Newspaper
Entropy 2020, 22(5), 590; https://doi.org/10.3390/e22050590 - 25 May 2020
Abstract
Technological innovations are not enough by themselves to achieve social and environmental sustainability in companies. Sustainable development aims to determine the environmental impact of a product and the hidden price of products and services through the concept of radical transparency. This means that companies should show and disclose the impact on the environment of any good or service. This way, the consumer can choose in a transparent manner, not only on price. The use of an eco-label, such as the European Ecolabel, whose criteria are based on life cycle assessment, could provide an indicator of corporate social responsibility for a given product. However, it does not fully guarantee that the product was obtained in a sustainable manner. The aim of this work is to provide a way of calculating the value of the environmental impacts of an industrial product under different operating conditions, so that each company can provide detailed information on the impacts of its products, information that can form part of its "green product sheet". As a case study, the daily production of a newspaper, printed by coldset, has been chosen. Each process involved in production was configured with raw material and energy consumption information from production plants, manufacturer data and existing databases. Four non-linear regression models were trained to estimate the impact of a newspaper’s circulation from five input variables (pages, grammage, height, paper type, and print run), with 5508 data samples each. These non-linear regression models were trained using the Levenberg–Marquardt nonlinear least squares algorithm. The mean absolute percentage errors (MAPE) obtained by all the non-linear regression models tested were less than 5%. Through the proposed correlations, it is possible to obtain a score that reports on the impact of the product for different operating conditions and several types of raw materials. Ecolabelling can be further developed by incorporating a scoring system for the impact caused by the product or process, using a standardised impact methodology. Full article
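As a rough sketch of this kind of fit, here is a hand-rolled Levenberg–Marquardt loop on a hypothetical two-parameter power-law model, together with the MAPE metric. The model form and all names are illustrative assumptions, not the authors' five-input newspaper models.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def fit_power_law(x, y, a=1.0, b=1.0, mu=1e-2, iters=100):
    """Minimal Levenberg-Marquardt fit of y ~ a * x**b (illustrative only)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    for _ in range(iters):
        pred = a * x**b
        r = y - pred
        # Jacobian of residuals w.r.t. (a, b).
        J = np.column_stack([-(x**b), -pred * np.log(x)])
        step = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
        a2, b2 = a + step[0], b + step[1]
        r2 = y - a2 * x**b2
        if r2 @ r2 < r @ r:          # accept the step, relax damping
            a, b, mu = a2, b2, mu * 0.5
        else:                        # reject, increase damping
            mu *= 2.0
    return a, b
```

The damping parameter mu interpolates between Gauss-Newton (small mu) and gradient-descent-like steps (large mu), which is what makes the method robust on non-linear models.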

Open Access Article
Cryptanalysis and Improvement of a Chaotic Map-Based Image Encryption System Using Both Plaintext Related Permutation and Diffusion
Entropy 2020, 22(5), 589; https://doi.org/10.3390/e22050589 - 24 May 2020
Abstract
In theory, high key and high plaintext sensitivities are a must for a cryptosystem to resist the chosen/known plaintext and the differential attacks. High plaintext sensitivity can be achieved by ensuring that each encrypted result is plaintext-dependent. In this work, we perform a detailed cryptanalysis of a published chaotic map-based image encryption system whose encryption process is plaintext-image dependent. We show that some design flaws make the published cryptosystem vulnerable to a chosen-plaintext attack, and we then propose an enhanced algorithm to overcome those flaws. Full article
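To make the idea of plaintext-dependent permutation and diffusion concrete, here is a deliberately toy sketch, not the cryptosystem analyzed in the paper and not secure: the seed of a logistic-map keystream is derived from a hash of the key and the plaintext, so every encrypted result is plaintext-dependent.

```python
import hashlib
import numpy as np

def logistic_stream(x0, n, r=3.99):
    """Chaotic logistic-map trajectory and a derived byte keystream."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 256.0).astype(np.uint8), xs

def encrypt(plain: bytes, key: bytes) -> bytes:
    """Toy plaintext-dependent permutation + diffusion (illustrative only)."""
    digest = hashlib.sha256(key + plain).digest()
    # Map hash bytes to an initial condition in (0.05, 0.95).
    x0 = 0.05 + 0.9 * (int.from_bytes(digest[:8], "big") % 10**8) / 10**8
    ks, xs = logistic_stream(x0, len(plain))
    perm = np.argsort(xs)                           # permutation stage
    shuffled = np.frombuffer(plain, dtype=np.uint8)[perm]
    cipher = np.bitwise_xor(shuffled, ks)           # diffusion stage (XOR keystream)
    return digest[:8] + cipher.tobytes()            # seed material travels with the ciphertext
```

Because the keystream seed depends on the whole plaintext, a chosen-plaintext attacker cannot reuse a keystream recovered from one image against another, which is the property the attack in the paper exploits when it is absent.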

Open Access Article
Systemic Importance of China’s Financial Institutions: A Jump Volatility Spillover Network Review
Entropy 2020, 22(5), 588; https://doi.org/10.3390/e22050588 - 24 May 2020
Abstract
The investigation of systemically important financial institutions (SIFIs) has become a hot topic in the field of financial risk management. By making full use of 5-min high-frequency data, and with the help of the entropy weight technique for order preference by similarity to ideal solution (TOPSIS), this paper builds a jump volatility spillover network of China’s financial institutions to measure the SIFIs. We find that: (i) state-owned depositories and large insurers are identified as SIFIs according to the entropy weight TOPSIS score; (ii) the total connectedness of the financial institution network reveals that Industrial Bank, Ping An Bank and Pacific Securities play an important role when the financial market is under pressure, especially during the subprime crisis, the European sovereign debt crisis and China’s stock market disaster; (iii) interestingly, some small financial institutions are also SIFIs during a financial crisis and cannot be ignored. Full article
(This article belongs to the Special Issue Complexity in Economic and Social Systems)
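The entropy weight TOPSIS ranking used above follows a standard recipe: entropy-derived criterion weights, a weighted normalized decision matrix, and closeness to the ideal solution. A minimal sketch, assuming all criteria are benefit-type (the function name is illustrative):

```python
import numpy as np

def entropy_weight_topsis(X):
    """Rank alternatives (rows) on benefit criteria (columns).

    Weights come from the entropy method; the returned scores are TOPSIS
    closeness coefficients in [0, 1], higher = closer to the ideal solution.
    """
    X = np.asarray(X, float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                          # column-normalized proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)             # entropy per criterion
    w = (1 - e) / (1 - e).sum()                    # entropy weights
    V = w * X / np.sqrt((X**2).sum(axis=0))        # weighted vector-normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_pos = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)
```

Criteria whose values barely vary across alternatives carry high entropy and therefore low weight, which is the intuition behind using entropy weights to score systemic importance.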

Open Access Article
Entropic Dynamics in Neural Networks, the Renormalization Group and the Hamilton-Jacobi-Bellman Equation
Entropy 2020, 22(5), 587; https://doi.org/10.3390/e22050587 - 23 May 2020
Abstract
We study the dynamics of information processing in the continuum depth limit of deep feed-forward neural networks (NN) and find that it can be described in language similar to the renormalization group (RG). The association of concepts to patterns by a NN is analogous to the identification of the few variables that characterize the thermodynamic state obtained by the RG from microstates. To see this, we encode the information about the weights of a NN in a Maxent family of distributions. The location hyper-parameters represent the weight estimates. Bayesian learning of a new example determines new constraints on the generators of the family, yielding a new probability distribution; this can be seen as an entropic dynamics of learning, in which the hyper-parameters change along the gradient of the evidence. For a feed-forward architecture, the evidence can be written recursively from the evidence up to the previous layer convolved with an aggregation kernel. The continuum limit leads to a diffusion-like PDE analogous to Wilson’s RG, but with an aggregation kernel that depends on the weights of the NN, different from those that integrate out ultraviolet degrees of freedom. This can be recast in the language of dynamic programming with an associated Hamilton–Jacobi–Bellman equation for the evidence, where the control is the set of weights of the neural network. Full article
Open Access Article
Differential Parametric Formalism for the Evolution of Gaussian States: Nonunitary Evolution and Invariant States
Entropy 2020, 22(5), 586; https://doi.org/10.3390/e22050586 - 23 May 2020
Abstract
In the differential approach elaborated, we study the evolution of the parameters of Gaussian, mixed, continuous variable density matrices, whose dynamics are given by Hermitian Hamiltonians expressed as quadratic forms of the position and momentum operators or quadrature components. Specifically, we obtain in generic form the differential equations for the covariance matrix, the mean values, and the density matrix parameters of a multipartite Gaussian state, unitarily evolving according to a Hamiltonian Ĥ. We also present the corresponding differential equations, which describe the nonunitary evolution of the subsystems. The resulting nonlinear equations are used to solve the dynamics of the system instead of the Schrödinger equation. The formalism elaborated allows us to define new specific invariant and quasi-invariant states, as well as states with invariant covariance matrices, i.e., states where only the mean values evolve according to the classical Hamilton equations. By using density matrices in the position and in the tomographic-probability representations, we study examples of these properties. As examples, we present novel invariant states for the two-mode frequency converter and quasi-invariant states for the bipartite parametric amplifier. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness II)

Open Access Article
Spectral-Based SPD Matrix Representation for Signal Detection Using a Deep Neural Network
Entropy 2020, 22(5), 585; https://doi.org/10.3390/e22050585 - 22 May 2020
Abstract
The symmetric positive definite (SPD) matrix has attracted much attention in classification problems because of its remarkable performance, which is due to the underlying structure of the Riemannian manifold with non-negative curvature as well as the use of non-linear geometric metrics, which have a stronger ability to distinguish SPD matrices and reduce information loss compared to the Euclidean metric. In this paper, we propose a spectral-based SPD matrix signal detection method with deep learning that uses time-frequency spectra to construct SPD matrices and then exploits a deep SPD matrix learning network to detect the target signal. Using this approach, the signal detection problem is transformed into a binary classification problem on a manifold to judge whether the input sample contains the target signal or not. Two matrix models are applied, namely, an SPD matrix based on spectral covariance and an SPD matrix based on spectral transformation. A simulated-signal dataset and a semi-physical simulated-signal dataset are used to demonstrate that the spectral-based SPD matrix signal detection method with deep learning has a gain of 1.7–3.3 dB under appropriate conditions. The results show that our proposed method achieves better detection performance than its state-of-the-art spectral counterparts that use convolutional neural networks. Full article

Open Access Article
On the Potential of Time Delay Neural Networks to Detect Indirect Coupling between Time Series
Entropy 2020, 22(5), 584; https://doi.org/10.3390/e22050584 - 21 May 2020
Abstract
Determining the coupling between systems remains a topic of active research in the field of complex science. Identifying the proper causal influences in time series can already be very challenging in the trivariate case, particularly when the interactions are non-linear. In this paper, the coupling between three Lorenz systems is investigated with the help of specifically designed artificial neural networks, called time delay neural networks (TDNNs). TDNNs can learn from their previous inputs and are therefore well suited to extracting the causal relationship between time series. The tested TDNNs performed very well, showing an excellent capability to identify the correct causal relationships in the absence of significant noise. The first tests on the time localization of the mutual influences and the effects of Gaussian noise have also provided very encouraging results. Even if further assessments are necessary, networks of the proposed architecture have the potential to be a good complement to the other available techniques for the investigation of mutual influences between time series. Full article
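The idea of feeding a network lagged inputs can be illustrated with a linear stand-in for the TDNN: build a time delay matrix and measure how much adding x's past improves one-step prediction of y. The names and the linear readout are assumptions; the paper uses actual neural networks on Lorenz systems.

```python
import numpy as np

def delay_matrix(x, lags):
    """Rows of [x_{t-1}, ..., x_{t-lags}]: the delayed inputs a TDNN sees."""
    x = np.asarray(x, float)
    n = len(x) - lags
    return np.column_stack([x[lags - k : lags - k + n] for k in range(1, lags + 1)])

def coupling_score(x, y, lags=3):
    """Drop in y's one-step in-sample prediction error when x's past is added.

    A linear stand-in for the TDNN; a clearly positive score suggests x -> y.
    """
    Y = np.asarray(y, float)[lags:]
    own = delay_matrix(y, lags)
    both = np.column_stack([own, delay_matrix(x, lags)])

    def mse(A):
        A1 = np.column_stack([A, np.ones(len(A))])   # add an intercept column
        coef, *_ = np.linalg.lstsq(A1, Y, rcond=None)
        return float(np.mean((Y - A1 @ coef) ** 2))

    return mse(own) - mse(both)
```

A TDNN replaces the least-squares readout with a trained non-linear network, which is what lets it pick up the non-linear interactions the abstract mentions.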

Open Access Article
Natural Time Analysis: The Area under the Receiver Operating Characteristic Curve of the Order Parameter Fluctuations Minima Preceding Major Earthquakes
Entropy 2020, 22(5), 583; https://doi.org/10.3390/e22050583 - 21 May 2020
Abstract
It has been reported that major earthquakes are preceded by Seismic Electric Signals (SES). Observations show that in the natural time analysis of an earthquake (EQ) catalog, an SES activity starts when the fluctuations of the order parameter of seismicity exhibit a minimum. Fifteen distinct minima—observed simultaneously at two different natural time scales and deeper than a certain threshold—are found on analyzing the seismicity of Japan from 1 January 1984 to 11 March 2011 (the time of the M9 Tohoku EQ occurrence) 1 to 3 months before large EQs. Six (out of 15) of these minima preceded all shallow EQs of magnitude 7.6 or larger, while nine are followed by smaller EQs. The latter false positives can be excluded by a proper procedure (J. Geophys. Res. Space Physics 2014, 119, 9192–9206) that considers aspects of EQ networks based on similar activity patterns. These results are studied here by means of the receiver operating characteristic (ROC) technique by focusing on the area under the ROC curve (AUC). If this area, which is currently considered an effective way to summarize the overall diagnostic accuracy of a test, has the value 1, it corresponds to a perfectly accurate test. Here, we find that the AUC is around 0.95, which is evaluated as outstanding. Full article
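The AUC quoted above has a convenient probabilistic reading: it equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, counting ties as one half. A minimal sketch:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: P(score_pos > score_neg),
    with ties counted as 1/2. An AUC of 1.0 is a perfectly accurate test."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC around 0.95 therefore means a true precursory minimum outranks a non-precursory one about 95% of the time.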

Open Access Article
A Dual Measure of Uncertainty: The Deng Extropy
Entropy 2020, 22(5), 582; https://doi.org/10.3390/e22050582 - 21 May 2020
Abstract
The extropy has recently been introduced as the dual concept of entropy. Moreover, in the context of the Dempster–Shafer evidence theory, Deng studied a new measure of discrimination, named the Deng entropy. In this paper, we define the Deng extropy, study its relation to the Deng entropy, and propose examples to compare them. The behaviour of the Deng extropy is studied under changes of focal elements. A characterization result is given for the maximum Deng extropy and, finally, a numerical example in pattern recognition is discussed in order to highlight the relevance of the new measure. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

Open Access Feature Paper Article
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
Entropy 2020, 22(5), 581; https://doi.org/10.3390/e22050581 - 21 May 2020
Abstract
In this paper, we provide a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. However, as the blocklength decreases, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM) and SpSh are reviewed as energy-efficient shaping techniques. Numerical results show that both have smaller rate losses than CCDM. SpSh—whose sole objective is to maximize the energy efficiency—is shown to have the minimum rate loss amongst all, which is particularly apparent for ultra short blocklengths. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspectives of latency, storage and computations. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)
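The rate loss of constant composition distribution matching discussed above can be computed directly: it is the gap between the entropy of the target composition and the bits per symbol a finite-length constant-composition code actually conveys. A small sketch (the function name is illustrative):

```python
import math

def ccdm_rate_loss(counts):
    """Rate loss (bits/symbol) of a constant-composition code.

    counts[i] is how often symbol i appears in every length-n codeword.
    Loss = H(empirical distribution) - log2(multinomial coefficient) / n.
    """
    n = sum(counts)
    log2_mult = math.log2(math.factorial(n))
    for c in counts:
        log2_mult -= math.log2(math.factorial(c))
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return entropy - log2_mult / n
```

The loss is always non-negative and vanishes as n grows, which is exactly why CCDM is asymptotically optimal yet inefficient at the short blocklengths this paper targets.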

Open Access Article
A Method to Present and Analyze Ensembles of Information Sources
Entropy 2020, 22(5), 580; https://doi.org/10.3390/e22050580 - 21 May 2020
Abstract
Information theory is a powerful tool for analyzing complex systems. In many areas of neuroscience, it is now possible to gather data from large ensembles of neural variables (e.g., data from many neurons, genes, or voxels). The individual variables can be analyzed with information theory to provide estimates of information shared between variables (forming a network between variables), or between neural variables and other variables (e.g., behavior or sensory stimuli). However, it can be difficult to (1) evaluate if the ensemble is significantly different from what would be expected in a purely noisy system and (2) determine if two ensembles are different. Herein, we introduce relatively simple methods to address these problems by analyzing ensembles of information sources. We demonstrate how an ensemble built of mutual information connections can be compared to null surrogate data to determine if the ensemble is significantly different from noise. Next, we show how two ensembles can be compared using a randomization process to determine if the sources in one contain more information than the other. All code necessary to carry out these analyses and demonstrations is provided. Full article
(This article belongs to the Special Issue Information Theory in Computational Neuroscience)
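A minimal version of the surrogate comparison described above, using a plug-in histogram estimate of mutual information and permutation surrogates. The bin count, surrogate count, and names are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def mutual_info_bits(x, y, bins=8):
    """Plug-in mutual information estimate (bits) from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

def surrogate_p_value(x, y, n_surr=200, seed=0):
    """Fraction of permutation surrogates whose MI reaches the observed MI.

    Shuffling y destroys any real x-y dependence, so the surrogate MIs form
    a null distribution against which the observed MI is judged.
    """
    rng = np.random.default_rng(seed)
    observed = mutual_info_bits(x, y)
    exceed = sum(
        mutual_info_bits(x, rng.permutation(y)) >= observed for _ in range(n_surr)
    )
    return (exceed + 1) / (n_surr + 1)
```

For an ensemble, the same test is run on every mutual information connection, and the set of p-values indicates whether the ensemble as a whole differs from noise.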

Open Access Article
Wearable Inertial Sensors for Daily Activity Analysis Based on Adam Optimization and the Maximum Entropy Markov Model
Entropy 2020, 22(5), 579; https://doi.org/10.3390/e22050579 - 20 May 2020
Abstract
Advancements in wearable sensor technologies have prominent effects on humans’ daily activities. These wearable sensors are gaining attention in healthcare for the elderly, to ensure their independent living and to improve their comfort. In this paper, we present a human activity recognition model that acquires signal data from motion node sensors including inertial sensors, i.e., gyroscopes and accelerometers. First, the inertial data are processed via multiple filters, such as Savitzky–Golay, median and Hampel filters, to examine lower/upper cutoff frequency behaviors. Second, the model extracts a multifused set of statistical, wavelet and binary features to maximize the occurrence of optimal feature values. Then, adaptive moment estimation (Adam) and AdaDelta are introduced in a feature optimization phase to adapt the learning rate patterns. These optimized patterns are further processed by the maximum entropy Markov model (MEMM) for empirical expectation and highest entropy, which measure signal variances for improved accuracy. Our model was experimentally evaluated on the University of Southern California Human Activity Dataset (USC-HAD) as a benchmark dataset and on the Intelligent Media Sporting Behavior (IMSB) dataset, a new self-annotated sports dataset. For evaluation, we used the "leave-one-out" cross-validation scheme, and the results outperformed existing well-known statistical state-of-the-art methods, achieving improved recognition accuracies of 91.25%, 93.66% and 90.91% on the USC-HAD, IMSB, and Mhealth datasets, respectively. The proposed system should be applicable to man–machine interface domains, such as health exercises, robot learning, interactive games and pattern-based surveillance. Full article
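The Adam optimizer used in the feature optimization phase follows the standard bias-corrected moment updates; here is a self-contained sketch of a single step (illustrative, not the authors' implementation):

```python
import numpy as np

def adam_step(grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. state = (m, v, t); returns (parameter delta, new state)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2         # second-moment (uncentered variance)
    m_hat = m / (1 - b1**t)                 # bias correction for the warm-up phase
    v_hat = v / (1 - b2**t)
    return -lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)
```

AdaDelta, the other optimizer the paper mentions, differs mainly in replacing the fixed learning rate with a running estimate built from past squared updates.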

Open Access Article
Hybrid CUSUM Change Point Test for Time Series with Time-Varying Volatilities Based on Support Vector Regression
Entropy 2020, 22(5), 578; https://doi.org/10.3390/e22050578 - 20 May 2020
Abstract
This study considers the problem of detecting a change in the conditional variance of time series with time-varying volatilities based on the cumulative sum (CUSUM) of squares test using the residuals from support vector regression (SVR)-generalized autoregressive conditional heteroscedastic (GARCH) models. To compute the residuals, we first fit SVR-GARCH models with different tuning parameters utilizing a training set. We then obtain the best SVR-GARCH model with the optimal tuning parameters via a validation set. Subsequently, based on the selected model, we obtain the residuals, as well as the estimates of the conditional volatility, and employ these to construct the residual CUSUM of squares test. We conduct Monte Carlo simulation experiments to illustrate its validity with various linear and nonlinear GARCH models. A real data analysis with the S&P 500 index, Korea Composite Stock Price Index (KOSPI), and Korean won/U.S. dollar (KRW/USD) exchange rate datasets is provided to exhibit its scope of application. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
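The residual CUSUM of squares statistic at the core of the test can be sketched as follows. The plug-in scale estimate is a simplification; the paper's version is built on SVR-GARCH residuals.

```python
import numpy as np

def cusum_of_squares(resid):
    """Residual CUSUM-of-squares statistic for a variance change.

    Compares the running sum of squared residuals with the straight line it
    would follow under constant variance; the maximum deviation, scaled by
    sqrt(n) times a plug-in scale estimate, is the test statistic.
    """
    e2 = np.asarray(resid, float) ** 2
    n = len(e2)
    cum = np.cumsum(e2)
    k = np.arange(1, n + 1)
    dev = np.abs(cum - k / n * cum[-1])     # deviation from constant-variance line
    tau = np.std(e2)                        # plug-in scale of the squared residuals
    return float(dev.max() / (np.sqrt(n) * tau))
```

Under the null the statistic behaves like the supremum of a Brownian bridge, so values well above the usual critical levels signal a change point near the maximizing index.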

Open Access Article
Working Memory Training: Assessing the Efficiency of Mnemonic Strategies
Entropy 2020, 22(5), 577; https://doi.org/10.3390/e22050577 - 20 May 2020
Abstract
Recently, there has been increasing interest in techniques for enhancing working memory (WM), casting a new light on the classical picture of a rigid system. One reason is that WM performance has been associated with intelligence and reasoning, while its impairment shows correlations with cognitive deficits; hence, the possibility of training it is highly appealing. However, results on WM changes following training are controversial, leaving it unclear whether it can really be potentiated. This study aims at assessing changes in WM performance by comparing it with and without training by a professional mnemonist. Two groups, experimental and control, participated in the study, which was organized in two phases. In the morning, both groups were familiarized with the stimuli through an N-back task and then attended a 2-hour lecture. For the experimental group, the lecture, given by the mnemonist, introduced memory encoding techniques; for the control group, it was a standard academic lecture about memory systems. In the afternoon, both groups were administered five tests, in which they had to remember the positions of 16 items when asked in random order. The results show much better performance in trained subjects, indicating the need to consider such a possibility of enhancement, alongside general information-theoretic constraints, when theorizing about WM span. Full article
(This article belongs to the Special Issue What Limits Working Memory Performance?)

Open Access Article
Two-Dimensional Permutation Vectors’ (PV) Code for Optical Code Division Multiple Access Systems
Entropy 2020, 22(5), 576; https://doi.org/10.3390/e22050576 - 20 May 2020
Abstract
In this paper, we present a new algorithm to generate a two-dimensional (2D) permutation vectors’ (PV) code for incoherent optical code division multiple access (OCDMA) systems, to suppress multiple access interference (MAI) and reduce system complexity. The proposed code design approach is based on the wavelength-hopping time-spreading (WHTS) technique for code generation. All possible combinations of PV code sets were attained by employing all permutations of the vectors, with each vector repeated weight (W) times. Further, the 2D-PV code set was constructed by combining two code sequences of the 1D-PV code. The transmitter-receiver architecture of the 2D-PV code-based WHTS OCDMA system is presented. Results indicated that the 2D-PV code provides increased cardinality by eliminating phase-induced intensity noise (PIIN) effects, and multiple users’ data can be transmitted with a minimum likelihood of interference. Simulation results validated the proposed system for an agreeable bit error rate (BER) of 10⁻⁹. Full article
(This article belongs to the Special Issue Entropy-Based Algorithms for Signal Processing)
Open Access Article
Detecting Malware with Information Complexity
Entropy 2020, 22(5), 575; https://doi.org/10.3390/e22050575 - 20 May 2020
Viewed by 274
Abstract
Malware concealment is the predominant strategy for malware propagation. Black hats create variants of malware based on polymorphism and metamorphism. Malware variants, by definition, share some information. Although the concealment strategy alters this information, there are still patterns in the software. Given a zoo of labelled malware and benign-ware, we ask whether a suspect program is more similar to our malware or to our benign-ware. Normalized Compression Distance (NCD) is a generic metric that measures the shared information content of two strings. This measure opens a new front in the malware arms race, one where the countermeasures promise to be more costly for malware writers, who must now obfuscate patterns as strings qua strings, without reference to execution, in their variants. Our approach classifies disk-resident malware with 97.4% accuracy and a false positive rate of 3%. We demonstrate that its accuracy can be improved by combining NCD with the compressibility rates of executables using decision forests, paving the way for future improvements. We demonstrate that malware reported within a narrow time frame of a few days is more homogeneous than malware reported over two years, but that our method still classifies the latter with 95.2% accuracy and a 5% false positive rate. Due to its use of compression, the time and computation cost of our method is nontrivial. We show that simple approximation techniques can improve its running time by up to 63%. We compare our results to those of the 59 anti-malware programs used on the VirusTotal website, applied to our malware. Our approach outperforms each of them used alone and matches all of them used collectively. Full article
(This article belongs to the Section Multidisciplinary Applications)
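The NCD at the heart of this approach has a compact closed form: NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the length of a string under some real-world compressor. The sketch below illustrates nearest-neighbour classification with it; the choice of zlib as the compressor and the `classify` helper are illustrative assumptions, not the paper's exact setup:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for very similar strings,
    approaching 1 for unrelated ones (up to compressor imperfections)."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(suspect: bytes, malware: list, benign: list) -> str:
    """Label a suspect by its nearest neighbour in the labelled zoo."""
    d_mal = min(ncd(suspect, m) for m in malware)
    d_ben = min(ncd(suspect, b) for b in benign)
    return "malware" if d_mal < d_ben else "benign"
```

Because NCD only looks at byte strings, a variant obfuscated at the semantic level can still sit close to its family if the concealment leaves compressible shared patterns behind, which is the intuition the abstract describes.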
Open Access Article
Dynamical Complexity of the 2015 St. Patrick’s Day Magnetic Storm at Swarm Altitudes Using Entropy Measures
Entropy 2020, 22(5), 574; https://doi.org/10.3390/e22050574 - 19 May 2020
Viewed by 324
Abstract
The continuously expanding toolbox of nonlinear time series analysis techniques has recently highlighted the importance of dynamical complexity for understanding the behavior of the complex solar wind–magnetosphere–ionosphere–thermosphere coupling system and its components. Here, we apply several such new approaches, mainly a series of entropy methods, to the time series of the Earth’s magnetic field measured by the Swarm constellation. We show successful applications of methods originating from information theory to quantitatively study complexity in the dynamical response of the topside ionosphere, at Swarm altitudes, focusing on the most intense magnetic storm of solar cycle 24, that is, the St. Patrick’s Day storm, which occurred in March 2015. These entropy measures are utilized for the first time to analyze data from a low-Earth orbit (LEO) satellite mission flying in the topside ionosphere. These approaches may hold great potential for improved space weather nowcasts and forecasts. Full article
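As a hedged illustration of the kind of entropy measure meant here, the sketch below computes Bandt–Pompe permutation entropy, one widely used complexity estimator for geomagnetic time series; the paper's specific battery of entropy methods and the Swarm data are not reproduced, so the inputs are purely synthetic:

```python
import math
from collections import Counter

def permutation_entropy(x, order=3):
    """Bandt-Pompe permutation entropy of a 1D series, normalized to [0, 1].
    Each length-`order` window is mapped to its ordinal (rank) pattern; the
    Shannon entropy of the pattern distribution measures dynamical complexity."""
    patterns = Counter()
    for i in range(len(x) - order + 1):
        window = x[i:i + order]
        # rank pattern: indices sorted by value (ties broken by position)
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    n = sum(patterns.values())
    h = -sum(c / n * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(order))  # 0 = regular, 1 = fully random
```

A perfectly monotone (quiet-time-like) series yields entropy 0, while an irregular (storm-time-like) series pushes the value toward 1, which is the qualitative contrast such studies exploit.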
Open Access Feature Paper Article
Taylor’s Law in Innovation Processes
Entropy 2020, 22(5), 573; https://doi.org/10.3390/e22050573 - 19 May 2020
Viewed by 456
Abstract
Taylor’s law quantifies the scaling properties of the fluctuations of the number of innovations occurring in open systems. Urn-based modeling schemes have already proven to be effective in modeling this complex behaviour. Here, we present analytical estimations of Taylor’s law exponents in such models by leveraging their representation in terms of triangular urn models. We also highlight the correspondence of these models with Poisson–Dirichlet processes and demonstrate that a non-trivial Taylor’s law exponent is a kind of universal feature in systems related to human activities. We base this result on the analysis of four collections of data generated by human activity: (i) written language (from a Gutenberg corpus); (ii) an online music website (Last.fm); (iii) Twitter hashtags; (iv) an online collaborative tagging system (Del.icio.us). While the Taylor’s law observed in the last two datasets agrees with the plain model predictions, we need to introduce a generalization to fully characterize the behaviour of the first two datasets, where temporal correlations are possibly more relevant. We suggest that Taylor’s law is a fundamental complement to Zipf’s and Heaps’ laws in unveiling the complex dynamical processes underlying the evolution of systems featuring innovation. Full article
(This article belongs to the Special Issue Statistical Mechanics of Complex Systems)
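Taylor's law states that the variance of a count scales with its mean as Var ≈ a·Mean^β, so the exponent is read off as the slope of a log-log fit across groups. A self-contained sketch of that measurement; the binomial counts stand in for the urn-model statistics (an illustrative assumption), so β ≈ 1 is expected here:

```python
import math
import random

def taylor_exponent(groups):
    """Fit log(variance) = log(a) + beta * log(mean) by least squares,
    one (mean, variance) point per group of repeated counts."""
    pts = []
    for g in groups:
        m = sum(g) / len(g)
        v = sum((x - m) ** 2 for x in g) / (len(g) - 1)  # sample variance
        if m > 0 and v > 0:
            pts.append((math.log(m), math.log(v)))
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

random.seed(0)
# Binomial(n, 1/2) counts: variance = mean / 2, so the slope beta is 1.
groups = [[sum(random.random() < 0.5 for _ in range(n)) for _ in range(500)]
          for n in (20, 80, 320, 1280)]
```

Real innovation data give non-trivial β between 1 and 2, which is the deviation from this Poisson-like baseline that the paper analyzes.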
Open Access Article
Estimation of Autoregressive Parameters from Noisy Observations Using Iterated Covariance Updates
Entropy 2020, 22(5), 572; https://doi.org/10.3390/e22050572 - 19 May 2020
Viewed by 294
Abstract
Estimating the parameters of an autoregressive (AR) random process is a well-studied problem. In many applications, only noisy measurements of the AR process are available. The effect of the additive noise is that the system can be modeled as an AR model with colored noise, even when the measurement noise is white, where the correlation matrix depends on the AR parameters. Because of the correlation, it is expedient to compute estimates using multiple stacked observations. Performing a weighted least-squares estimation of the AR parameters using an inverse covariance weighting can provide significantly better parameter estimates, with the improvement increasing with the stack depth. The estimation algorithm is essentially a vector RLS adaptive filter with a time-varying covariance matrix. Different ways of estimating the unknown covariance are presented, as well as a method to estimate the variances of the AR and observation noise. The formulation is extended to vector autoregressive (VAR) processes. Simulation results demonstrate performance improvements in coefficient error and in spectrum estimation. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
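To see the problem this paper addresses, consider an AR(1) process observed in white noise: the naive lag-1 autocorrelation estimate of the coefficient φ is attenuated toward zero by the measurement noise, whereas a ratio of higher-lag autocovariances is immune to it, because for lags ≥ 1 the autocovariance of the observation equals that of the hidden AR process. A minimal sketch with φ = 0.8 and unit variances as illustrative assumptions; this is not the paper's iterated covariance-update estimator, only the bias it is designed to beat:

```python
import random

def simulate(phi=0.8, n=20000, noise=1.0, seed=1):
    """AR(1) state x_t = phi*x_{t-1} + w_t, observed as y_t = x_t + v_t."""
    rng = random.Random(seed)
    x, ys = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        ys.append(x + rng.gauss(0, noise))  # white observation noise
    return ys

def autocov(y, k):
    """Biased sample autocovariance at lag k."""
    m = sum(y) / len(y)
    return sum((y[i] - m) * (y[i + k] - m) for i in range(len(y) - k)) / len(y)

y = simulate()
phi_naive = autocov(y, 1) / autocov(y, 0)   # attenuated: noise inflates lag 0
phi_robust = autocov(y, 2) / autocov(y, 1)  # noise-free lags: ratio is phi
```

With these parameters the naive estimate lands near 0.59 rather than 0.8, which is the kind of degradation the stacked, covariance-weighted least-squares approach is meant to repair.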
Open Access Article
Improving Underwater Continuous-Variable Measurement-Device-Independent Quantum Key Distribution via Zero-Photon Catalysis
Entropy 2020, 22(5), 571; https://doi.org/10.3390/e22050571 - 19 May 2020
Viewed by 317
Abstract
Underwater quantum key distribution (QKD) is tough but important for modern underwater communications in an insecure environment. It can guarantee secure underwater communication between submarines and enhance safety for critical network nodes. To enhance the performance of continuous-variable quantum key distribution (CVQKD) underwater in terms of both maximal transmission distance and secret key rate, we adopt measurement-device-independent (MDI) quantum key distribution with zero-photon catalysis (ZPC) performed at the emitter of one side, which is the ZPC-based MDI-CVQKD. Numerical simulation shows that the ZPC-involved scheme, which is a Gaussian operation in essence, works better than the single-photon subtraction (SPS)-involved scheme in the extreme asymmetric case. We find that the transmission distance of the ZPC-involved scheme is longer than that of the SPS-involved scheme. In addition, we consider the effects of temperature, salinity and solar elevation angle on the system performance in pure seawater. The maximal transmission distance decreases with increasing temperature and decreasing solar elevation angle, while it changes little over a broad range of salinity. Full article
Open Access Article
Systems with Size and Energy Polydispersity: From Glasses to Mosaic Crystals
Entropy 2020, 22(5), 570; https://doi.org/10.3390/e22050570 - 19 May 2020
Viewed by 301
Abstract
We use Langevin dynamics simulations to study dense 2D systems of particles with both size and energy polydispersity. We compare two types of bidisperse systems which differ in the correlation between particle size and interaction parameters: in one system big particles have high interaction parameters and small particles have low interaction parameters, while in the other system the situation is reversed. We study the different phases of the two systems and compare them to those of a system with size but not energy bidispersity. We show that, depending on the strength of interaction between big and small particles, cooling to low temperatures yields either homogeneous glasses or mosaic crystals. We find that systems with low mixing interaction undergo partial freezing of one of the components at intermediate temperatures, and that while this phenomenon is energy-driven in systems bidisperse in both size and energy, it is controlled by entropic effects in systems with size bidispersity only. Full article
(This article belongs to the Special Issue Entropic Control of Soft Materials)
Open Access Article
Uniquely Satisfiable d-Regular (k,s)-SAT Instances
Entropy 2020, 22(5), 569; https://doi.org/10.3390/e22050569 - 19 May 2020
Viewed by 298
Abstract
Unique k-SAT is the promise version of k-SAT where the given formula has 0 or 1 solutions and is proved to be as difficult as the general k-SAT. For any k ≥ 3, s ≥ f(k, d) and (s + d)/2 > k − 1, a parsimonious reduction from k-CNF to d-regular (k,s)-CNF is given. Here, regular (k,s)-CNF is a subclass of CNF in which each clause of the formula has exactly k distinct variables and each variable occurs in exactly s clauses. A d-regular (k,s)-CNF formula is a regular (k,s)-CNF formula in which the absolute value of the difference between the positive and negative occurrences of every variable is at most a nonnegative integer d. We prove that for all k ≥ 3, f(k, d) ≤ u(k, d) + 1 and f(k, d + 1) ≥ u(k, d). The critical function f(k, d) is the maximal value of s such that every d-regular (k,s)-CNF formula is satisfiable. In this study, u(k, d) denotes the minimal value of s such that there exists a uniquely satisfiable d-regular (k,s)-CNF formula. We further show that for s ≥ f(k, d) + 1 and (s + d)/2 > k − 1, there exists a uniquely satisfiable d-regular (k, s + 1)-CNF formula. Moreover, for k ≥ 7, we have that u(k, d) ≤ f(k, d) + 1. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
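The definitions above are easy to state operationally: a d-regular (k,s)-CNF has k distinct variables per clause, exactly s occurrences per variable, and a positive/negative occurrence imbalance of at most d; unique satisfiability means exactly one model. A brute-force sketch of both checks for tiny instances (exponential in the number of variables, so purely illustrative and no substitute for the paper's reductions):

```python
from itertools import product

def count_models(clauses, n_vars):
    """Count satisfying assignments of a CNF; literals are +/-(1..n_vars)."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count  # uniquely satisfiable iff this equals 1

def is_d_regular_ks(clauses, n_vars, k, s, d):
    """Each clause has exactly k distinct variables, each variable occurs in
    exactly s clauses, and |#positive - #negative| <= d for every variable."""
    if any(len(c) != k or len({abs(l) for l in c}) != k for c in clauses):
        return False
    for v in range(1, n_vars + 1):
        pos = sum(c.count(v) for c in clauses)
        neg = sum(c.count(-v) for c in clauses)
        if pos + neg != s or abs(pos - neg) > d:
            return False
    return True
```

For example, the 2-CNF (x1 ∨ x2) ∧ (¬x1 ∨ x2) is regular with k = s = 2; variable x2 occurs only positively, so the formula is 2-regular but not 0-regular.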
Open Access Article
A Formal Model for Adaptive Free Choice in Complex Systems
Entropy 2020, 22(5), 568; https://doi.org/10.3390/e22050568 - 19 May 2020
Viewed by 324
Abstract
In this article, I develop a formal model of free will for complex systems based on emergent properties and adaptive selection. The model is based on a process ontology in which a free choice is a singular process that takes a system from one macrostate to another. I quantify the model by introducing a formal measure of the ‘freedom’ of a singular choice. The ‘free will’ of a system, then, is emergent from the aggregate freedom of the choice processes carried out by the system. The focus in this model is on the actual choices themselves, viewed in the context of processes. That is, the nature of the system making the choices is not considered. Nevertheless, my model does not necessarily conflict with models that are based on internal properties of the system. Rather, it takes a behavioral approach by focusing on the externalities of the choice process. Full article
(This article belongs to the Special Issue Models of Consciousness)
Open Access Article
Machine Learning Based Automated Segmentation and Hybrid Feature Analysis for Diabetic Retinopathy Classification Using Fundus Image
Entropy 2020, 22(5), 567; https://doi.org/10.3390/e22050567 - 19 May 2020
Viewed by 432
Abstract
The objective of this study was to demonstrate the ability of machine learning (ML) methods to segment and classify diabetic retinopathy (DR). Two-dimensional (2D) retinal fundus (RF) images were used. The DR datasets of five classes, namely mild, moderate, non-proliferative, and proliferative DR, as well as the normal human eye, were acquired from 500 patients at Bahawal Victoria Hospital (BVH), Bahawalpur, Pakistan. Five hundred RF datasets (sized 256 × 256) were acquired for each DR stage, for a total of 2500 (500 × 5) datasets across the five DR stages. This research introduces a novel clustering-based automated region-growing framework. For texture analysis, four types of features were extracted, namely histogram (H), wavelet (W), co-occurrence matrix (COM), and run-length matrix (RLM) features, and various ML classifiers were employed, achieving 77.67%, 80%, 89.87%, and 96.33% classification accuracies, respectively. To improve classification accuracy, a fused hybrid-feature dataset was generated by applying a data fusion approach. From each image, 245 hybrid features (H, W, COM, and RLM) were observed, while 13 optimized features were selected after applying four different feature selection techniques, namely Fisher, correlation-based feature selection, mutual information, and probability of error plus average correlation. Five ML classifiers, namely sequential minimal optimization (SMO), logistic (Lg), multi-layer perceptron (MLP), logistic model tree (LMT), and simple logistic (SLg), were deployed on the selected optimized features (using 10-fold cross-validation), and they showed considerably high classification accuracies of 98.53%, 99%, 99.66%, 99.73%, and 99.73%, respectively. Full article
(This article belongs to the Special Issue Information-Theoretic Data Mining)
Open Access Article
Optimization for Software Implementation of Fractional Calculus Numerical Methods in an Embedded System
Entropy 2020, 22(5), 566; https://doi.org/10.3390/e22050566 - 18 May 2020
Viewed by 434
Abstract
In this article, some practical software optimization methods are presented for implementations of the fractional-order backward difference, sum, and differintegral operator based on the Grünwald–Letnikov definition. These numerical algorithms are of great interest for the evaluation of fractional-order differential equations in embedded systems, because their form, based on the discrete convolution operation, is more convenient than the Caputo and Riemann–Liouville definitions or Laplace transforms. A well-known difficulty relates to the non-locality of the operator, which implies a continually increasing number of processed samples; this may reach the limits of available memory or lead to exceeding the desired computation time. In the study presented here, several promising software optimization techniques were analyzed and tested in the evaluation of the variable fractional-order backward difference and derivative on two different Arm® Cortex®-M architectures. Reductions in computation times of up to 75% and 87% were achieved compared to the initial implementation, depending on the type of Arm® core. Full article
(This article belongs to the Special Issue Entropy in Dynamic Systems II)
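A hedged sketch of the Grünwald–Letnikov backward difference the article builds on: the weights w_j = (−1)^j · C(α, j) obey the recurrence w_j = w_{j−1} · (1 − (α + 1)/j), and truncating the sum ("short memory") is the kind of optimization that keeps embedded memory and run time bounded. This is Python rather than the article's Arm C code, with illustrative parameters:

```python
def gl_coeffs(alpha, n):
    """First n weights w_j = (-1)^j * C(alpha, j) via the standard recurrence."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(samples, alpha, h, memory=None):
    """Grunwald-Letnikov fractional derivative of order alpha at the last
    sample, step size h. `memory` truncates the convolution to the most
    recent samples (the 'short memory' principle); None keeps them all."""
    n = len(samples) if memory is None else min(memory, len(samples))
    w = gl_coeffs(alpha, n)
    acc = sum(w[j] * samples[-1 - j] for j in range(n))
    return acc / h ** alpha
```

Two sanity checks: for α = 1 the weights collapse to [1, −1, 0, …], recovering the ordinary backward difference, and for α = 0 the operator is the identity; the memory bound trades accuracy for a fixed per-step cost, exactly the trade-off the article quantifies on Cortex-M cores.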
Open Access Review
Order Indices and Entanglement Production in Quantum Systems
Entropy 2020, 22(5), 565; https://doi.org/10.3390/e22050565 - 18 May 2020
Viewed by 340
Abstract
The review is devoted to two important quantities characterizing many-body systems: order indices and the measure of entanglement production. Order indices describe the type of order distinguishing statistical systems. Contrary to order parameters, which characterize systems in the thermodynamic limit and describe long-range order, order indices are applicable to finite systems and classify all types of order, including long-range, mid-range, and short-range order. The measure of entanglement production quantifies the amount of entanglement produced in a many-partite system by a quantum operation. Although the notions of order indices and entanglement production seem quite different, there is an intimate relation between them, which is emphasized in the review. Full article
(This article belongs to the Special Issue Quantum Many-Body Dynamics in Physics, Chemistry, and Mathematics)
Open Access Feature Paper Article
Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
Entropy 2020, 22(5), 564; https://doi.org/10.3390/e22050564 - 18 May 2020
Viewed by 441
Abstract
It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, such frameworks suffer from generalization issues when the degrees of freedom are high. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low-dimensional latent state space representing probabilistic structures extracted from well-habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories. Full article
Open Access Article
On Relations Between the Relative Entropy and χ²-Divergence, Generalizations and Applications
Entropy 2020, 22(5), 563; https://doi.org/10.3390/e22050563 - 18 May 2020
Viewed by 421
Abstract
This paper is focused on a study of integral relations between the relative entropy and the chi-squared divergence, which are two fundamental divergence measures in information theory and statistics, a study of the implications of these relations, their information-theoretic applications, and some generalizations pertaining to the rich class of f-divergences. Applications that are studied in this paper refer to lossless compression, the method of types and large deviations, strong data-processing inequalities, bounds on contraction coefficients and maximal correlation, and the convergence rate to stationarity of a type of discrete-time Markov chains. Full article
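One classical instance of the relations this paper generalizes is the bound D(P‖Q) ≤ log(1 + χ²(P‖Q)), which follows from Jensen's inequality. A numerical sketch on an arbitrary pair of distributions (the example distributions are made up; the paper's integral relations and f-divergence generalizations go well beyond this single inequality):

```python
import math

def kl(p, q):
    """Relative entropy D(P||Q) in nats; requires q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chi2(p, q):
    """Chi-squared divergence between distributions p and q."""
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
# Both divergences vanish iff p == q, and D(P||Q) <= log(1 + chi2(P||Q)).
```

The gap between the two sides of the inequality is what makes the finer integral relations studied in the paper useful for contraction-coefficient and convergence-rate bounds.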