Table of Contents

Entropy, Volume 21, Issue 2 (February 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The compositionally complex alloy Al10Co25Cr8Fe15Ni36Ti6 stands out for its potential application [...]
Displaying articles 1-119
Open Access Article Assessment of Landslide Susceptibility Using Integrated Ensemble Fractal Dimension with Kernel Logistic Regression Model
Entropy 2019, 21(2), 218; https://doi.org/10.3390/e21020218
Received: 17 January 2019 / Revised: 16 February 2019 / Accepted: 20 February 2019 / Published: 24 February 2019
Viewed by 457 | PDF Full-text (23141 KB) | HTML Full-text | XML Full-text
Abstract
The main aim of this study was to compare and evaluate the performance of fractal dimension as input data in the landslide susceptibility mapping of the Baota District, Yan’an City, China. First, a total of 632 points, including 316 landslide points and 316 non-landslide points, were located in the landslide inventory map. The points were split 70%:30%, with 70% (442 points) used as the training dataset and the remaining 30% used as the validation dataset. Second, 13 predisposing factors, including slope aspect, slope angle, altitude, lithology, mean annual precipitation (MAP), distance to rivers, distance to faults, distance to roads, normalized differential vegetation index (NDVI), topographic wetness index (TWI), plan curvature, profile curvature, and terrain roughness index (TRI), were selected. Then, the original numerical data, box-counting dimension, and correlation dimension corresponding to each predisposing factor were calculated to generate the input data and build three classification models: the kernel logistic regression model (KLR), the kernel logistic regression model based on the box-counting dimension (KLRbox-counting), and the kernel logistic regression model based on the correlation dimension (KLRcorrelation). Next, statistical indexes and the receiver operating characteristic (ROC) curve were employed to evaluate the models' performance. Finally, the KLRcorrelation model achieved the highest area under the curve (AUC) values, 0.8984 and 0.9224 on the training and validation datasets, respectively, indicating that fractal dimensions can serve as effective input data for landslide susceptibility mapping. Full article
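To illustrate the box-counting dimension used above as model input: it can be estimated by counting occupied grid boxes at several scales and fitting the log-log slope. A minimal Python sketch (illustrative only, not the authors' implementation; the filled-square example data are hypothetical):

```python
import math

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set by
    counting occupied grid boxes at each scale and fitting the slope
    of log(count) versus log(1/size) by least squares."""
    logs, logn = [], []
    for s in scales:
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logn.append(math.log(len(boxes)))
    n = len(scales)
    mx, my = sum(logs) / n, sum(logn) / n
    # least-squares slope of log(count) against log(1/size)
    return (sum((a - mx) * (b - my) for a, b in zip(logs, logn))
            / sum((a - mx) ** 2 for a in logs))

# A densely sampled filled unit square should have dimension close to 2.
grid = [(i / 100.0, j / 100.0) for i in range(100) for j in range(100)]
dim = box_counting_dimension(grid, scales=[0.5, 0.25, 0.125, 0.0625])
```

A genuinely fractal point set (e.g. a landslide boundary trace) would yield a non-integer slope between 1 and 2.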
Open Access Article Secrecy Performance Enhancement for Underlay Cognitive Radio Networks Employing Cooperative Multi-Hop Transmission with and without Presence of Hardware Impairments
Entropy 2019, 21(2), 217; https://doi.org/10.3390/e21020217
Received: 2 January 2019 / Revised: 13 February 2019 / Accepted: 20 February 2019 / Published: 24 February 2019
Viewed by 492 | PDF Full-text (1033 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we consider a cooperative multi-hop secured transmission protocol for underlay cognitive radio networks. In the proposed protocol, a secondary source attempts to transmit its data to a secondary destination with the assistance of multiple secondary relays. In addition, there exists a secondary eavesdropper who tries to overhear the source data. Under a maximum interference level required by a primary user, the secondary source and relay nodes must adjust their transmit power. We first formulate the effective signal-to-interference-plus-noise ratio (SINR) as well as the secrecy capacity under the constraints of the maximum transmit power, the interference threshold, and the hardware impairment level. Furthermore, when the hardware impairment level is relaxed, we derive exact and asymptotic expressions of the end-to-end secrecy outage probability over Rayleigh fading channels using a recursive method. The derived expressions were verified by simulations, in which the proposed scheme outperformed the conventional multi-hop direct transmission protocol. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
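The secrecy-capacity quantity the abstract formulates can be illustrated with the standard wiretap expression, i.e. the positive part of the capacity difference between the legitimate and eavesdropper links. The hardware-impairment SINR below uses the commonly cited residual-distortion model; both are illustrative assumptions, not taken verbatim from the paper:

```python
import math

def secrecy_capacity(sinr_data, sinr_eavesdropper):
    """Wiretap secrecy capacity: the positive part of the difference
    between the legitimate and eavesdropper channel capacities."""
    c_d = math.log2(1.0 + sinr_data)
    c_e = math.log2(1.0 + sinr_eavesdropper)
    return max(0.0, c_d - c_e)

def effective_sinr(power, gain, noise, kappa=0.0):
    """Effective SINR under an aggregate hardware-impairment level kappa
    (kappa = 0 recovers ideal hardware); a common residual-distortion model."""
    return (power * gain) / (kappa ** 2 * power * gain + noise)
```

Note that as `kappa` grows, the effective SINR saturates near 1/kappa**2, which is why impairments cap the achievable secrecy rate.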
Open Access Article Unification of Epistemic and Ontic Concepts of Information, Probability, and Entropy, Using Cognizers-System Model
Entropy 2019, 21(2), 216; https://doi.org/10.3390/e21020216
Received: 7 January 2019 / Revised: 16 February 2019 / Accepted: 20 February 2019 / Published: 24 February 2019
Viewed by 496 | PDF Full-text (721 KB) | HTML Full-text | XML Full-text
Abstract
Information and probability are common words used in scientific investigations. However, information and probability both involve epistemic (subjective) and ontic (objective) interpretations under the same terms, which causes controversy within the concept of entropy in physics and biology. There is another issue regarding the circularity between information (or data) and reality: The observation of reality produces phenomena (or events), whereas the reality is confirmed (or constituted) by phenomena. The ordinary concept of information presupposes reality as a source of information, whereas another type of information (known as it-from-bit) constitutes the reality from data (bits). In this paper, a monistic model, called the cognizers-system model (CS model), is employed to resolve these issues. In the CS model, observations (epistemic) and physical changes (ontic) are both unified as “cognition”, meaning a related state change. Information and probability, epistemic and ontic, are formalized and analyzed systematically using a common theoretical framework of the CS model or a related model. Based on the results, a perspective for resolving controversial issues of entropy originating from information and probability is presented. Full article
Open Access Article The Choice of an Appropriate Information Dissimilarity Measure for Hierarchical Clustering of River Streamflow Time Series, Based on Calculated Lyapunov Exponent and Kolmogorov Measures
Entropy 2019, 21(2), 215; https://doi.org/10.3390/e21020215
Received: 26 January 2019 / Revised: 11 February 2019 / Accepted: 22 February 2019 / Published: 23 February 2019
Viewed by 458 | PDF Full-text (3018 KB) | HTML Full-text | XML Full-text
Abstract
The purpose of this paper was to choose an appropriate information dissimilarity measure for hierarchical clustering of daily streamflow discharge data from twelve gauging stations on the Brazos River in Texas (USA) for the period 1989–2016. For that purpose, we selected and compared the average-linkage hierarchical clustering algorithm based on the compression-based dissimilarity measure (normalized compression distance, NCD), the permutation distribution dissimilarity measure (PDDM), and the Kolmogorov distance (KD). The algorithm was also compared with K-means clustering based on Kolmogorov complexity (KC), the highest value of the Kolmogorov complexity spectrum (KCM), and the largest Lyapunov exponent (LLE). Using a dissimilarity matrix based on NCD, PDDM, and KD for daily streamflow, the agglomerative average-linkage hierarchical algorithm was applied. The key findings of this study are that: (i) the KD-based clustering algorithm is the most suitable of those considered; (ii) ANOVA analysis shows highly significant differences between the mean values of the four clusters, confirming that the number of clusters was chosen suitably; and (iii) the clustering indicates that the predictability of the Brazos River streamflow data, given by the Lyapunov time (LT) and corrected for randomness by the Kolmogorov time (KT), lies in the interval from two to five days. Full article
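The compression-based dissimilarity measure (NCD) mentioned above can be computed with any general-purpose compressor. A minimal sketch using zlib (the byte strings are illustrative, not the streamflow data):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(s) is the compressed length of s."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"0123456789" * 50        # a regular, highly compressible sequence
b_ = b"0123456789" * 50       # same pattern as a: expect a small NCD
c = bytes(range(256)) * 2     # a different byte pattern: expect a larger NCD
```

Series that share structure compress well together, so their NCD is small; this is the property the dissimilarity matrix exploits.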
Open Access Article Macroscopic Cluster Organizations Change the Complexity of Neural Activity
Entropy 2019, 21(2), 214; https://doi.org/10.3390/e21020214
Received: 14 December 2018 / Revised: 11 February 2019 / Accepted: 19 February 2019 / Published: 23 February 2019
Viewed by 529 | PDF Full-text (41767 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
In this study, simulations are conducted using a network model to examine how the macroscopic network in the brain is related to the complexity of activity in each region. The network model is composed of multiple neuron groups, each of which consists of spiking neurons, with different topological properties of the macroscopic network based on the Watts–Strogatz model. The complexity of spontaneous activity is analyzed using multiscale entropy, and the structural properties of the network are analyzed using complex network theory. Experimental results show that a macroscopic structure with high clustering and high degree centrality increases the firing rates of neurons in a neuron group and enhances intraconnections from the excitatory neurons to the inhibitory neurons within a group. As a result, the intensity of specific frequency components of the neural activity increases, which decreases the complexity of the neural activity. Finally, we discuss the relevance of these findings to research on the complexity of brain activity. Full article
(This article belongs to the Special Issue Information Dynamics in Brain and Physiological Networks)
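Multiscale entropy, used above to quantify the complexity of spontaneous activity, rests on sample entropy computed over coarse-grained copies of a series. A minimal pure-Python sketch (toy signals, not the paper's simulated spike data):

```python
import math, random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates
    within Chebyshev distance r, and A counts the same for length m+1."""
    def count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b)

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (the multiscale step)."""
    return [sum(x[i:i + scale]) / scale for i in range(0, len(x) - scale + 1, scale)]

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(300)]       # irregular signal
regular = [math.sin(0.2 * i) for i in range(300)]      # highly regular signal
```

A regular oscillation yields low sample entropy; broadband noise yields a higher value, matching the complexity ordering the paper analyzes.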
Open Access Article Asymptotic Rate-Distortion Analysis of Symmetric Remote Gaussian Source Coding: Centralized Encoding vs. Distributed Encoding
Entropy 2019, 21(2), 213; https://doi.org/10.3390/e21020213
Received: 10 January 2019 / Revised: 17 February 2019 / Accepted: 20 February 2019 / Published: 23 February 2019
Viewed by 384 | PDF Full-text (327 KB) | HTML Full-text | XML Full-text
Abstract
Consider a symmetric multivariate Gaussian source whose components are corrupted by independent and identically distributed Gaussian noises; these noisy components are compressed at a certain rate, and the compressed version is leveraged to reconstruct the source subject to a mean squared error distortion constraint. The rate-distortion analysis is performed for two scenarios: centralized encoding (where the noisy source components are jointly compressed) and distributed encoding (where the noisy source components are separately compressed). It is shown, among other things, that the gap between the rate-distortion functions associated with these two scenarios admits a simple characterization in the limit of a large number of source components. Full article
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)
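As background, the scalar Gaussian rate-distortion function under mean squared error is the classical building block for such analyses; the paper's multiterminal results are more involved, and this is only the textbook special case:

```python
import math

def gaussian_rate_distortion(variance, distortion):
    """R(D) for a scalar Gaussian source under mean squared error:
    R(D) = max(0, 0.5 * log2(variance / D)) bits per sample."""
    return max(0.0, 0.5 * math.log2(variance / distortion))
```

Halving the allowed distortion below the source variance costs half a bit per sample; once the distortion reaches the variance, zero rate suffices.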
Open Access Article Attack Algorithm for a Keystore-Based Secret Key Generation Method
Entropy 2019, 21(2), 212; https://doi.org/10.3390/e21020212
Received: 19 January 2019 / Revised: 18 February 2019 / Accepted: 20 February 2019 / Published: 23 February 2019
Viewed by 409 | PDF Full-text (431 KB) | HTML Full-text | XML Full-text
Abstract
A new attack algorithm is proposed for a secure key generation and management method introduced by Yang and Wu. It was previously claimed that the key generation method of Yang and Wu using a keystore seed was information-theoretically secure and could solve the long-term key storage problem in cloud systems, thanks to the huge number of secure keys that the keystore seed can generate. Their key generation method, however, is considered to be broken if an attacker can recover the keystore seed. The proposed attack algorithm in this paper reconstructs the keystore seed of the Yang–Wu key generation method from a small number of collected keys. For example, when t = 5 and l = 2^7, it was previously claimed that more than 2^53 secure keys could be generated, but the proposed attack algorithm can reconstruct the keystore seed from only 84 collected keys. Hence, the Yang–Wu key generation method is not information-theoretically secure when the attacker can gather multiple keys, and a critical amount of information about the keystore seed is leaked. Full article
(This article belongs to the Special Issue Information-Theoretic Security II)
Open Access Article An Intuitionistic Evidential Method for Weight Determination in FMEA Based on Belief Entropy
Entropy 2019, 21(2), 211; https://doi.org/10.3390/e21020211
Received: 23 January 2019 / Revised: 14 February 2019 / Accepted: 20 February 2019 / Published: 22 February 2019
Viewed by 382 | PDF Full-text (377 KB) | HTML Full-text | XML Full-text
Abstract
Failure Mode and Effects Analysis (FMEA) has been regarded as an effective approach to identify and rank potential failure modes in many applications. However, how to appropriately determine the weights of team members, accounting for domain experts' uncertainty in FMEA decision-making, is still an open issue. In this paper, a new method to determine the weights of team members, which combines evidence theory, intuitionistic fuzzy sets (IFSs), and belief entropy, is proposed to analyze the failure modes. One of the advantages of the presented model is that the uncertainty of experts in the decision-making process is taken into consideration. The proposed method is data-driven, with objective and reasonable properties, and considers the risk in the weights more completely. A numerical example is shown to illustrate the feasibility and availability of the proposed method. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis)
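The belief entropy used in such weighting schemes is commonly Deng entropy, which generalizes Shannon entropy to basic probability assignments on sets of hypotheses. A minimal sketch (the two-element frame of discernment is a hypothetical example):

```python
import math

def deng_entropy(bpa):
    """Deng (belief) entropy of a basic probability assignment:
    E_d = -sum over focal elements A of m(A) * log2( m(A) / (2**|A| - 1) ).
    `bpa` maps focal elements (frozensets) to masses summing to 1."""
    return -sum(m * math.log2(m / (2 ** len(a) - 1))
                for a, m in bpa.items() if m > 0)

single = {frozenset({"a"}): 1.0}        # all mass on one singleton
multi = {frozenset({"a", "b"}): 1.0}    # all mass on a two-element set
```

Mass spread over larger focal elements carries more non-specificity, so `multi` has strictly higher belief entropy than `single`.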
Open Access Review An Overview on Denial-of-Service Attacks in Control Systems: Attack Models and Security Analyses
Entropy 2019, 21(2), 210; https://doi.org/10.3390/e21020210
Received: 29 January 2019 / Revised: 16 February 2019 / Accepted: 19 February 2019 / Published: 22 February 2019
Viewed by 450 | PDF Full-text (579 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we provide an overview of recent research efforts on networked control systems under denial-of-service attacks. Our goal is to discuss the utility of different attack modeling and analysis techniques proposed in the literature for addressing feedback control, state estimation, and multi-agent consensus problems in the face of jamming attacks in wireless channels and malicious packet drops in multi-hop networks. We discuss several modeling approaches that are employed for capturing the uncertainty in denial-of-service attack strategies. We give an outlook on deterministic constraint-based modeling ideas, game-theoretic and optimization-based techniques, and probabilistic modeling approaches. Special emphasis is placed on tail-probability-based failure models, which have recently been used for describing jamming attacks that affect the signal-to-interference-plus-noise ratios of wireless channels, as well as transmission failures on multi-hop networks due to packet-dropping attacks and non-malicious issues. We explain the use of attack models in the security analysis of networked systems. In addition to the modeling and analysis problems, we also discuss recent developments concerning the design of attack-resilient control and communication protocols. Full article
(This article belongs to the Special Issue Entropy in Networked Control)
Open Access Article Quantum Pumping with Adiabatically Modulated Barriers in Three-Band Pseudospin-1 Dirac–Weyl Systems
Entropy 2019, 21(2), 209; https://doi.org/10.3390/e21020209
Received: 28 November 2018 / Revised: 4 February 2019 / Accepted: 4 February 2019 / Published: 22 February 2019
Viewed by 402 | PDF Full-text (824 KB) | HTML Full-text | XML Full-text
Abstract
In this work, pumped currents of the adiabatically-driven double-barrier structure based on the pseudospin-1 Dirac–Weyl fermions are studied. As a result of the three-band dispersion and hence the unique properties of pseudospin-1 Dirac–Weyl quasiparticles, sharp current-direction reversal is found at certain parameter settings especially at the Dirac point of the band structure, where apexes of the two cones touch at the flat band. Such a behavior can be interpreted consistently by the Berry phase of the scattering matrix and the classical turnstile mechanism. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
Open Access Article Study on Asphalt Pavement Surface Texture Degradation Using 3-D Image Processing Techniques and Entropy Theory
Entropy 2019, 21(2), 208; https://doi.org/10.3390/e21020208
Received: 4 February 2019 / Revised: 17 February 2019 / Accepted: 18 February 2019 / Published: 21 February 2019
Viewed by 423 | PDF Full-text (10466 KB) | HTML Full-text | XML Full-text
Abstract
Surface texture is a very important factor affecting the anti-skid performance of pavements. In this paper, entropy theory is introduced to study the decay behavior of the three-dimensional macrotexture and microtexture of in-service road surfaces, based on field test data collected over more than two years. Entropy is found to be feasible for evaluating the three-dimensional macrotexture and microtexture of an asphalt pavement surface: the complexity of the texture increases with increasing entropy. Under the polishing action of the vehicle load, the entropy of the surface texture decreases gradually. The three-dimensional macrotexture decay characteristics of asphalt pavement surfaces differ significantly between mixture designs, so the macrotexture decay performance of asphalt pavement can be improved by designing appropriate mixtures. Compared with the traditional macrotexture parameter, the Mean Texture Depth (MTD) index, entropy contains more physical information and correlates better with the pavement anti-skid performance index. It has significant advantages in describing the relationship between macrotexture characteristics and the anti-skid performance of asphalt pavement. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
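The entropy evaluation of surface texture can be sketched as the Shannon entropy of a histogram of height samples: a richer texture spreads mass across more bins. The synthetic "rough" and "polished" profiles below are hypothetical stand-ins for measured texture data:

```python
import math

def texture_entropy(depths, bins=16):
    """Shannon entropy (bits) of the histogram of surface-height samples;
    higher entropy indicates a more complex texture."""
    lo, hi = min(depths), max(depths)
    width = (hi - lo) / bins or 1.0   # degenerate flat surface: single bin
    counts = [0] * bins
    for d in depths:
        counts[min(int((d - lo) / width), bins - 1)] += 1
    n = len(depths)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

rough = [i % 16 for i in range(160)]   # heights spread evenly over all bins
polished = [0.0] * 160                 # flat, fully polished surface
```

Polishing by traffic concentrates heights into fewer bins, so the entropy decays over time, which is the degradation signal the paper tracks.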
Open Access Article Adaptive Synchronization of Fractional-Order Complex Chaotic System with Unknown Complex Parameters
Entropy 2019, 21(2), 207; https://doi.org/10.3390/e21020207
Received: 27 December 2018 / Revised: 11 February 2019 / Accepted: 19 February 2019 / Published: 21 February 2019
Cited by 1 | Viewed by 457 | PDF Full-text (2107 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the problem of synchronization of fractional-order complex-variable chaotic systems (FOCCS) with unknown complex parameters. Based on the complex-variable inequality and stability theory for fractional-order complex-valued system, a new scheme is presented for adaptive synchronization of FOCCS with unknown complex parameters. The proposed scheme not only provides a new method to analyze fractional-order complex-valued system but also significantly reduces the complexity of computation and analysis. Theoretical proof and simulation results substantiate the effectiveness of the presented synchronization scheme. Full article
Open Access Article Coherent Structure of Flow Based on Denoised Signals in T-junction Ducts with Vertical Blades
Entropy 2019, 21(2), 206; https://doi.org/10.3390/e21020206
Received: 24 December 2018 / Revised: 15 February 2019 / Accepted: 15 February 2019 / Published: 21 February 2019
Viewed by 402 | PDF Full-text (5258 KB) | HTML Full-text | XML Full-text
Abstract
The skin friction consumes some of the energy when a train is running, and coherent structures play an important role in the skin friction. In this paper, we focus on the coherent structures generated near the vent of a train. The intention is to investigate the effect of the vent on the generation of coherent structures. The ventilation system of a high-speed train is reasonably simplified as a T-junction duct with vertical blades. The velocity signal of the cross duct was measured in three different sections (upstream, mid-center, and downstream), and the coherent structure of the denoised signals was then analyzed by continuous wavelet transform (CWT). The analysis indicates that the coherent structure frequencies become more abundant and the energy peak decreases as the velocity ratio increases. We therefore conclude that a higher velocity ratio is preferable for reducing the skin friction of the train. Moreover, as the velocity ratio increases, the dimensionless frequency St of the high-energy coherent structures does not change appreciably, with St = 3.09 × 10^−4–4.51 × 10^−4. Full article
Open Access Article Matching Users’ Preference under Target Revenue Constraints in Data Recommendation Systems
Entropy 2019, 21(2), 205; https://doi.org/10.3390/e21020205
Received: 30 January 2019 / Revised: 18 February 2019 / Accepted: 18 February 2019 / Published: 21 February 2019
Viewed by 438 | PDF Full-text (737 KB) | HTML Full-text | XML Full-text
Abstract
This paper focuses on the problem of finding a particular data recommendation strategy based on user preference and the system's expected revenue. To this end, we formulate this problem as an optimization, designing the recommendation mechanism to be as close to the user behavior as possible under a certain revenue constraint. In fact, the optimal recommendation distribution is the one that is closest to the utility distribution in the sense of relative entropy while satisfying the expected revenue. We show that the optimal recommendation distribution follows the same form as the message importance measure (MIM) if the target revenue is reasonable, i.e., neither too small nor too large. Therefore, the optimal recommendation distribution can be regarded as the normalized MIM, where the parameter, called the importance coefficient, represents the concern of the system and switches the system's attention across data sets with different occurrence probabilities. By adjusting the importance coefficient, our MIM-based framework of data recommendation can be applied to systems with various requirements and data distributions. The obtained results therefore illustrate the physical meaning of MIM from the data recommendation perspective and validate the rationality of MIM in one respect. Full article
(This article belongs to the Special Issue Entropy and Information in Networks, from Societies to Cities)
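The normalized-MIM form of the recommendation distribution amounts to an exponential tilting of the utility distribution. A minimal sketch under that reading (the example probabilities and the function name are hypothetical, for illustration only):

```python
import math

def mim_recommendation(p, varpi):
    """Normalized message-importance-measure distribution:
    q_i proportional to p_i * exp(varpi * (1 - p_i)).
    The importance coefficient varpi shifts the system's attention
    toward low-probability items as it grows."""
    w = [pi * math.exp(varpi * (1.0 - pi)) for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.7, 0.2, 0.1]
q0 = mim_recommendation(p, 0.0)   # varpi = 0 recovers p itself
q5 = mim_recommendation(p, 5.0)   # a large varpi boosts rare items
```

This makes the "switch of attention" concrete: at varpi = 0 the recommendation just mirrors user behavior, while increasing varpi redistributes probability toward rare, high-importance items.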
Open Access Article On Entropic Framework Based on Standard and Fractional Phonon Boltzmann Transport Equations
Entropy 2019, 21(2), 204; https://doi.org/10.3390/e21020204
Received: 22 January 2019 / Revised: 16 February 2019 / Accepted: 18 February 2019 / Published: 21 February 2019
Viewed by 335 | PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
Generalized expressions of the entropy and related concepts in non-Fourier heat conduction have attracted increasing attention in recent years. Based on standard and fractional phonon Boltzmann transport equations (BTEs), we study entropic functionals including entropy density, entropy flux and entropy production rate. Using the relaxation time approximation and power series expansion, macroscopic approximations are derived for these entropic concepts. For the standard BTE, our results can recover the entropic frameworks of classical irreversible thermodynamics (CIT) and extended irreversible thermodynamics (EIT) as if there exists a well-defined effective thermal conductivity. For the fractional BTEs corresponding to the generalized Cattaneo equation (GCE) class, the entropy flux and entropy production rate will deviate from the forms in CIT and EIT. In these cases, the entropy flux and entropy production rate will contain fractional-order operators, which reflect memory effects. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)
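For reference, the classical irreversible thermodynamics (CIT) entropy flux and entropy production rate that the standard-BTE results recover take the textbook form (with Fourier's law q = −κ∇T; these are the standard CIT expressions, not the paper's fractional results):

```latex
\mathbf{J}_s = \frac{\mathbf{q}}{T}, \qquad
\sigma_s = \mathbf{q} \cdot \nabla\!\left(\frac{1}{T}\right)
         = \frac{\kappa\,\lvert \nabla T \rvert^{2}}{T^{2}} \;\ge\; 0 .
```

The extended irreversible thermodynamics (EIT) framework adds flux-dependent corrections to the entropy density; the paper shows how fractional BTEs further introduce fractional-order operators into these expressions.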
Open Access Article Entropy Value-Based Pursuit Projection Cluster for the Teaching Quality Evaluation with Interval Number
Entropy 2019, 21(2), 203; https://doi.org/10.3390/e21020203
Received: 3 February 2019 / Revised: 19 February 2019 / Accepted: 19 February 2019 / Published: 21 February 2019
Viewed by 353 | PDF Full-text (1426 KB) | HTML Full-text | XML Full-text
Abstract
The issue motivating the paper is the quantification of students' academic performance and learning achievement with regard to teaching quality, under interval number conditions, in order to establish a novel model for identifying, evaluating, and monitoring the major factors of overall teaching quality. We propose a projection pursuit cluster evaluation model, with the entropy value method determining the model weights. The weights of the model can then be obtained under the traditional real number conditions after a Monte Carlo simulation process for transforming interval numbers to real numbers. This approach not only simplifies the evaluation of the interval number indicators but also gives the weight of each index objectively. The model is applied to data on five teachers collected at a Chinese college, with 4 primary indicators and 15 secondary sub-indicators. Results from the proposed approach are compared with those obtained by two alternative evaluation methods. The analysis carried out has contributed to a better understanding of education processes in order to promote teaching performance. Full article
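The entropy value method for objective weighting can be sketched as follows: indicators whose values vary more across alternatives (lower entropy) receive larger weights. The toy score matrix is hypothetical, far smaller than the paper's 4 primary and 15 secondary indicators:

```python
import math

def entropy_weights(matrix):
    """Entropy value method: for each indicator (column), compute the
    normalized entropy of its value distribution across alternatives
    (rows); weight each indicator by its divergence 1 - e, normalized."""
    n = len(matrix)                 # number of alternatives (rows)
    m = len(matrix[0])              # number of indicators (columns)
    raw = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1.0 - e)
    s = sum(raw)
    return [w / s for w in raw]

# Three teachers scored on two indicators: the first column is constant
# (uninformative), the second discriminates between teachers.
scores = [[5.0, 1.0],
          [5.0, 4.0],
          [5.0, 9.0]]
w = entropy_weights(scores)
```

A constant indicator carries maximal entropy and hence zero weight, which is exactly the "objective weighting" property the abstract highlights.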
Open Access Article Multimode Decomposition and Wavelet Threshold Denoising of Mold Level Based on Mutual Information Entropy
Entropy 2019, 21(2), 202; https://doi.org/10.3390/e21020202
Received: 1 February 2019 / Revised: 18 February 2019 / Accepted: 19 February 2019 / Published: 21 February 2019
Viewed by 396 | PDF Full-text (3587 KB) | HTML Full-text | XML Full-text
Abstract
The continuous casting process is a continuous, complex phase-transition process. Its noise components are complex, a model is difficult to establish, and it is difficult to separate noise from clean signals effectively. To address these drawbacks, a hybrid algorithm combining Variational Mode Decomposition (VMD) and Wavelet Threshold Denoising (WTD) is proposed, featuring multiscale resolution and adaptivity. First of all, the original signal is decomposed into several Intrinsic Mode Functions (IMFs) by Empirical Mode Decomposition (EMD), and the model parameter K of the VMD is obtained by analyzing the EMD results. Then, the original signal is decomposed by VMD based on the number of IMFs K, and the Mutual Information Entropy (MIE) between IMFs is calculated to identify the noise-dominant component and the information-dominant components. Next, the noise-dominant component is denoised by WTD. Finally, the denoised noise-dominant component and all information-dominant components are reconstructed to obtain the denoised signal. In this paper, a comprehensive comparative analysis of EMD, Ensemble Empirical Mode Decomposition (EEMD), Complementary Ensemble Empirical Mode Decomposition (CEEMD), EMD-WTD, Empirical Wavelet Transform (EWT), WTD, VMD, and VMD-WTD is carried out, and the denoising performance of the various methods is evaluated from four perspectives. The experimental results show that the proposed hybrid algorithm has a better denoising effect than traditional methods and can effectively separate noise from clean signals. The proposed denoising algorithm is also shown to be able to effectively recognize different casting speeds. Full article
(This article belongs to the collection Wavelets, Fractals and Information Theory)
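The WTD stage described in this abstract can be illustrated with a minimal NumPy-only sketch. Note the assumptions: a single-level Haar transform, soft thresholding, and the universal threshold are generic textbook choices for illustration, not the paper's exact configuration (the paper applies WTD only to the noise-dominant VMD component).

```python
import numpy as np

def haar_dwt(x):
    # single-level Haar transform: approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # invert the single-level Haar transform
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    # shrink coefficients toward zero by t
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wtd_denoise(x):
    a, d = haar_dwt(x)
    # universal threshold, with the noise level estimated from the detail band
    sigma = np.median(np.abs(d)) / 0.6745
    t = sigma * np.sqrt(2 * np.log(x.size))
    return haar_idwt(a, soft_threshold(d, t))

rng = np.random.default_rng(0)
n = 1024
clean = np.sin(2 * np.pi * 4 * np.arange(n) / n)
noisy = clean + 0.3 * rng.standard_normal(n)
denoised = wtd_denoise(noisy)
```

In practice a multilevel decomposition with a smoother wavelet (e.g., via PyWavelets) would replace the one-level Haar step, but the threshold-and-reconstruct structure is the same.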
Open AccessArticle The Ordering of Shannon Entropies for the Multivariate Distributions and Distributions of Eigenvalues
Entropy 2019, 21(2), 201; https://doi.org/10.3390/e21020201
Received: 14 January 2019 / Revised: 17 February 2019 / Accepted: 18 February 2019 / Published: 20 February 2019
Viewed by 417 | PDF Full-text (300 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we prove Shannon entropy inequalities for multivariate distributions via the notion of convex ordering of two multivariate distributions. We further characterize the multivariate totally positive of order 2 (MTP2) property of the distribution functions of eigenvalues of both central Wishart and central MANOVA models, and of both noncentral Wishart and noncentral MANOVA models, under the general population covariance matrix setup. These results can be directly applied both to comparisons of two Shannon entropy measures and to the power monotonicity problem in MANOVA. Full article
Open AccessArticle Amplitude Constrained MIMO Channels: Properties of Optimal Input Distributions and Bounds on the Capacity
Entropy 2019, 21(2), 200; https://doi.org/10.3390/e21020200
Received: 21 January 2019 / Revised: 6 February 2019 / Accepted: 13 February 2019 / Published: 19 February 2019
Viewed by 483 | PDF Full-text (486 KB) | HTML Full-text | XML Full-text
Abstract
In this work, the capacity of multiple-input multiple-output channels that are subject to constraints on the support of the input is studied. The paper consists of two parts. The first part focuses on the general structure of capacity-achieving input distributions. Known results are surveyed and several new results are provided. With regard to the latter, it is shown that the support of a capacity-achieving input distribution is a small set in both a topological and a measure theoretical sense. Moreover, explicit conditions on the channel input space and the channel matrix are found such that the support of a capacity-achieving input distribution is concentrated on the boundary of the input space only. The second part of this paper surveys known bounds on the capacity and provides several novel upper and lower bounds for channels with arbitrary constraints on the support of the channel input symbols. As an immediate practical application, the special case of multiple-input multiple-output channels with amplitude constraints is considered. The bounds are shown to be within a constant gap to the capacity if the channel matrix is invertible and are tight in the high amplitude regime for arbitrary channel matrices. Moreover, in the regime of high amplitudes, it is shown that the capacity scales linearly with the minimum between the number of transmit and receive antennas, similar to the case of average power-constrained inputs. Full article
(This article belongs to the Special Issue Information Theory for Data Communications and Processing)
Open AccessArticle Multiscale Entropy Quantifies the Differential Effect of the Medium Embodiment on Older Adults Prefrontal Cortex during the Story Comprehension: A Comparative Analysis
Entropy 2019, 21(2), 199; https://doi.org/10.3390/e21020199
Received: 23 January 2019 / Revised: 15 February 2019 / Accepted: 16 February 2019 / Published: 19 February 2019
Viewed by 491 | PDF Full-text (6206 KB) | HTML Full-text | XML Full-text
Abstract
Today's communication media impact and transform virtually every aspect of our daily communication, and yet the extent of their embodiment in our brain is unexplored. The study of this topic becomes more crucial considering the rapid advances in such fields as socially assistive robotics, which envision the use of intelligent and interactive media for providing assistance through social means. In this article, we utilize multiscale entropy (MSE) to investigate the effect of physical embodiment on older people's prefrontal cortex (PFC) activity while listening to stories. We provide evidence that physical embodiment induces a significant increase in the MSE of older people's PFC activity, and that such a shift in the dynamics of their PFC activation significantly reflects their perceived feeling of fatigue. Our results benefit researchers in age-related cognitive function and rehabilitation who seek to adopt these media in robot-assisted cognitive training of older people. In addition, they offer complementary information to the field of human-robot interaction by providing evidence that the use of MSE can enable interactive learning algorithms to utilize the brain's activation patterns as feedback for improving their level of interactivity, thereby forming a stepping stone toward a rich and usable human mental model. Full article
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications)
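Multiscale entropy, as used in this article, coarse-grains a signal at increasing scales and computes sample entropy (SampEn) at each scale. A minimal sketch follows; the parameter choices m = 2 and r = 0.2 times the standard deviation are conventional defaults, not values taken from the article, and the pairwise match count is a common simplified variant of SampEn.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    # SampEn: negative log of the conditional probability that sequences
    # matching for m points (Chebyshev distance <= r) also match for m + 1
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def match_count(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(t) - 1):        # pairs i < j, self-matches excluded
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int(np.sum(d <= r))
        return c
    b = match_count(m)
    a = match_count(m + 1)
    return -np.log(a / b)

def coarse_grain(x, scale):
    # average non-overlapping windows of length `scale`
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def multiscale_entropy(x, scales=(1, 2, 3)):
    r = 0.2 * np.std(x)   # fix the tolerance at scale 1, as is common
    return [sample_entropy(coarse_grain(x, s), m=2, r=r) for s in scales]
```

A regular signal (e.g., a sine wave) yields much lower SampEn than white noise, which is the kind of complexity contrast the article exploits.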
Open AccessArticle Attribute Selection Based on Constraint Gain and Depth Optimal for a Decision Tree
Entropy 2019, 21(2), 198; https://doi.org/10.3390/e21020198
Received: 2 January 2019 / Revised: 8 February 2019 / Accepted: 13 February 2019 / Published: 19 February 2019
Viewed by 401 | PDF Full-text (3224 KB) | HTML Full-text | XML Full-text
Abstract
Uncertainty evaluation based on statistical probabilistic information entropy is a commonly used mechanism for constructing heuristic methods in decision tree learning. The entropy kernel potentially links its deviation to decision tree classification performance. This paper presents a decision tree learning algorithm based on constrained gain and depth-induction optimization. First, calculation and analysis of the single- and multi-value event uncertainty distributions of information entropy reveal an enhanced property of the single-value event entropy kernel and of multi-value event entropy peaks, as well as a reciprocal relationship between peak location and the number of possible events. Second, this study proposes an estimation method for information entropy in which the entropy kernel is replaced with a peak-shift sine function, and establishes a constraint-gain-based decision tree (CGDT) learning algorithm. Finally, by combining branch convergence and fan-out indices under the inductive depth of a decision tree, we build a constraint-gain and depth-induction improved decision tree (CGDIDT) learning algorithm. Results show the benefits of the CGDT and CGDIDT algorithms. Full article
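The baseline that this paper modifies is standard entropy-based attribute selection. As a reference point, here is a minimal sketch of Shannon entropy and information gain over a categorical attribute; the paper's contribution (the peak-shift sine entropy kernel and constraint gain) is not reproduced here.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (in bits) of a list of class labels
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    # gain = H(labels) minus the weighted entropy of the subsets
    # induced by splitting on the attribute at attr_index
    subsets = {}
    for row, y in zip(rows, labels):
        subsets.setdefault(row[attr_index], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in subsets.values())
    return entropy(labels) - remainder
```

A perfectly predictive attribute has a gain equal to the full label entropy, which is why greedy tree builders select it first.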
Open AccessArticle Magnetotelluric Signal-Noise Identification and Separation Based on ApEn-MSE and StOMP
Entropy 2019, 21(2), 197; https://doi.org/10.3390/e21020197
Received: 15 December 2018 / Revised: 4 February 2019 / Accepted: 13 February 2019 / Published: 19 February 2019
Viewed by 453 | PDF Full-text (4137 KB) | HTML Full-text | XML Full-text
Abstract
Natural magnetotelluric signals are extremely weak and susceptible to various types of noise pollution. To obtain more useful magnetotelluric data for further analysis and research, effective signal-noise identification and separation is critical. To this end, we propose a novel method of magnetotelluric signal-noise identification and separation based on ApEn-MSE and Stagewise Orthogonal Matching Pursuit (StOMP). Two measures with good irregularity metrics are introduced: approximate entropy (ApEn) and multiscale entropy (MSE). In combination with k-means clustering, they can accurately identify the data segments that are disturbed by noise. StOMP is then used for noise suppression only in data segments identified as containing strong interference. Finally, the signal is reconstructed. The results show that, compared with using StOMP alone, the proposed method better preserves the low-frequency, slowly varying information of the magnetotelluric signal, avoiding the loss of useful information due to over-processing, while producing a smoother and more continuous apparent resistivity curve. Moreover, the results more accurately reflect the inherent electrical structure of the measured site. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
Open AccessFeature PaperArticle Centroid-Based Clustering with αβ-Divergences
Entropy 2019, 21(2), 196; https://doi.org/10.3390/e21020196
Received: 18 January 2019 / Revised: 6 February 2019 / Accepted: 14 February 2019 / Published: 19 February 2019
Viewed by 467 | PDF Full-text (956 KB) | HTML Full-text | XML Full-text
Abstract
Centroid-based clustering is a widely used technique within unsupervised learning algorithms in many research fields. The success of any centroid-based clustering relies on the choice of the similarity measure under use. In recent years, most studies focused on including several divergence measures in the traditional hard k-means algorithm. In this article, we consider the problem of centroid-based clustering using the family of αβ-divergences, which is governed by two parameters, α and β. We propose a new iterative algorithm, αβ-k-means, giving closed-form solutions for the computation of the sided centroids. The algorithm can be fine-tuned by means of this pair of values, yielding a wide range of the most frequently used divergences. Moreover, it is guaranteed to converge to local minima for a wide range of values of the pair (α, β). Our theoretical contribution has been validated by several experiments performed with synthetic and real data and exploring the (α, β) plane. The numerical results obtained confirm the quality of the algorithm and its suitability to be used in several practical applications. Full article
(This article belongs to the Special Issue Information Theory Applications in Signal Processing)
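As background, the two-parameter αβ-divergence family this article builds on can be written in the parameterization of Cichocki et al.; the sketch below assumes that form (valid when α, β, and α + β are all nonzero; other cases arise as limits) and is not the paper's clustering algorithm itself.

```python
import numpy as np

def ab_divergence(p, q, alpha, beta):
    # AB-divergence between nonnegative measures p and q
    # (parameterization of Cichocki et al.; alpha, beta, alpha+beta != 0)
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    s = alpha + beta
    term = (p ** alpha) * (q ** beta) \
           - (alpha / s) * p ** s - (beta / s) * q ** s
    return -np.sum(term) / (alpha * beta)
```

Setting α = β = 1/2 gives a symmetric member proportional to the squared Hellinger distance; the divergence is zero exactly when p = q, which is the property a centroid-based clustering objective needs.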
Open AccessArticle Spatial Organization of the Gene Regulatory Program: An Information Theoretical Approach to Breast Cancer Transcriptomics
Entropy 2019, 21(2), 195; https://doi.org/10.3390/e21020195
Received: 21 December 2018 / Revised: 29 January 2019 / Accepted: 1 February 2019 / Published: 19 February 2019
Viewed by 474 | PDF Full-text (2238 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Gene regulation may be studied from an information-theoretic perspective. Gene regulatory programs are representations of the complete regulatory phenomenon associated with each biological state. In diseases such as cancer, these programs exhibit major alterations, which have been associated with the spatial organization of the genome into chromosomes. In this work, we analyze intrachromosomal (cis-) and interchromosomal (trans-) gene regulatory programs in order to assess the differences that arise in the context of breast cancer. We find that, using information-theoretic approaches, it is possible to differentiate cis- and trans-regulatory programs in terms of the changes they exhibit in the breast cancer context, indicating that in breast cancer there is a loss of trans-regulation. Finally, we use these programs to reconstruct a possible spatial relationship between chromosomes. Full article
(This article belongs to the Special Issue Information Theory in Complex Systems)
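Information-theoretic analyses of co-expression, as described above, typically rest on estimating mutual information between pairs of expression profiles. A minimal plug-in estimator based on a 2D histogram is sketched below; the bin count and the histogram approach are illustrative choices, not the article's estimator.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # plug-in mutual information estimate (in nats) from a 2D histogram
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Strongly coupled variables score far higher than independent ones, although the plug-in estimate carries a small positive bias for independent data that shrinks with sample size.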
Open AccessArticle Entropy Mapping Approach for Functional Reentry Detection in Atrial Fibrillation: An In-Silico Study
Entropy 2019, 21(2), 194; https://doi.org/10.3390/e21020194
Received: 10 January 2019 / Revised: 6 February 2019 / Accepted: 15 February 2019 / Published: 18 February 2019
Viewed by 444 | PDF Full-text (3489 KB) | HTML Full-text | XML Full-text
Abstract
Catheter ablation of critical electrical propagation sites is a promising tool for reducing the recurrence of atrial fibrillation (AF). The spatial identification of the arrhythmogenic mechanisms sustaining AF requires the evaluation of electrograms (EGMs) recorded over the atrial surface. This work aims to characterize functional reentries using measures of entropy to track and detect a reentry core. To this end, different AF episodes are simulated using a 2D model of atrial tissue. Modified Courtemanche human action potential and Fenton–Karma models are implemented. Action potential propagation is modeled by a fractional diffusion equation, and virtual unipolar EGMs are calculated. Episodes with stable and meandering rotors, figure-of-eight reentry, and disorganized propagation with multiple reentries are generated. Shannon entropy (ShEn), approximate entropy (ApEn), and sample entropy (SampEn) are computed from the virtual EGMs, and entropy maps are built. Phase singularity maps are implemented as references. The results show that ApEn and SampEn maps are able to detect and track the reentry core of rotors and figure-of-eight reentry, while the ShEn results are not satisfactory. Moreover, ApEn and SampEn consistently highlight a reentry core by high entropy values for all of the studied cases, while the ability of ShEn to characterize the reentry core depends on the propagation dynamics. Such features make the ApEn and SampEn maps attractive tools for the study of AF reentries that persist for a period of time similar to the length of the observation window, and such reentries could be interpreted as AF-sustaining mechanisms. Further research is needed to determine and fully understand the relation of these entropy measures with fibrillation mechanisms other than reentries. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
Open AccessFeature PaperReview From Spin Glasses to Negative-Weight Percolation
Entropy 2019, 21(2), 193; https://doi.org/10.3390/e21020193
Received: 22 January 2019 / Revised: 12 February 2019 / Accepted: 13 February 2019 / Published: 18 February 2019
Viewed by 414 | PDF Full-text (748 KB) | HTML Full-text | XML Full-text
Abstract
Spin glasses are prototypical random systems modelling magnetic alloys. One important way to investigate spin glass models is to study domain walls. In two dimensions, this can be understood algorithmically as the calculation of a shortest path which allows for negative distances or weights. This led to the creation of the negative-weight percolation (NWP) model, which is presented here along with all necessary basics from spin glasses, graph theory, and the corresponding algorithms. The algorithmic approach involves a mapping to the classical matching problem for graphs. In addition, a summary of results obtained during the past decade is given. This includes the study of percolation transitions in dimensions from d = 2 up to and beyond the upper critical dimension d_u = 6, also for random graphs. It is shown that NWP is in a different universality class than standard percolation. Furthermore, the question of whether NWP exhibits properties of Stochastic Loewner Evolution is addressed, and recent results for directed NWP are presented. Full article
Open AccessArticle A Simple Secret Key Generation by Using a Combination of Pre-Processing Method with a Multilevel Quantization
Entropy 2019, 21(2), 192; https://doi.org/10.3390/e21020192
Received: 14 January 2019 / Revised: 6 February 2019 / Accepted: 15 February 2019 / Published: 18 February 2019
Viewed by 423 | PDF Full-text (4056 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Limitations of the computational and energy capabilities of IoT devices pose new challenges in securing communication between devices. Physical layer security (PHYSEC) is one of the solutions that can be used to address these communication security challenges. In this paper, we investigate PHYSEC schemes that exploit channel reciprocity to generate a secret key, commonly known as secret key generation (SKG) schemes. Our research focuses on obtaining a simple SKG scheme by eliminating the information reconciliation stage, thereby reducing the high computational and communication cost. We exploit the pre-processing method by proposing a modified Kalman (MK) filter and combining it with multilevel quantization, i.e., combined multilevel quantization (CMQ). Our approach produces a simple SKG scheme that significantly increases reciprocity, so that an identical secret key between two legitimate users can be obtained without going through the information reconciliation stage. Full article
(This article belongs to the Special Issue Information-Theoretic Security II)
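The multilevel quantization step in an SKG scheme maps reciprocal channel measurements to key bits. A generic quantile-based sketch is shown below; the function name, the 2-bit default, and the quantile thresholds are illustrative assumptions, and the paper's MK pre-processing and CMQ design are not reproduced.

```python
import numpy as np

def multilevel_quantize(samples, bits_per_sample=2):
    # quantile-based multilevel quantization: each sample is mapped to one
    # of 2**bits_per_sample equally populated levels, encoded as bits
    levels = 2 ** bits_per_sample
    edges = np.quantile(samples, np.linspace(0, 1, levels + 1)[1:-1])
    symbols = np.searchsorted(edges, samples)
    return ''.join(format(int(s), f'0{bits_per_sample}b') for s in symbols)
```

With perfect reciprocity, both legitimate users observe the same samples and therefore derive the same bit string; in practice, residual channel-estimate mismatch is what pre-processing (such as the paper's MK filter) aims to suppress so that reconciliation becomes unnecessary.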
Open AccessArticle Entropy Generation and Heat Transfer Performance in Microchannel Cooling
Entropy 2019, 21(2), 191; https://doi.org/10.3390/e21020191
Received: 23 January 2019 / Revised: 11 February 2019 / Accepted: 15 February 2019 / Published: 18 February 2019
Viewed by 492 | PDF Full-text (12593 KB) | HTML Full-text | XML Full-text
Abstract
Owing to its relatively high heat transfer performance and simple configuration, liquid cooling remains the preferred choice for electronic cooling and other applications. In this cooling approach, channel design plays an important role in dictating the cooling performance of the heat sink. Most cooling-channel studies evaluate performance from the perspective of the first law of thermodynamics only. This study investigates the flow behaviour and heat transfer performance of an incompressible fluid in a cooling channel with oblique fins with regard to both the first and second laws of thermodynamics. The effects of oblique fin angle and inlet Reynolds number are investigated. In addition, the performance of the cooling channels for different heat fluxes is evaluated. The results indicate that the oblique-fin channel with a 20° angle yields the highest figure of merit, especially at higher Re (250–1000). The entropy generation is found to be lowest for an oblique-fin channel with a 90° angle, and is about twice that of a conventional parallel channel. Increasing Re decreases the entropy generation, while increasing the heat flux increases it. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics II)
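The second-law evaluation mentioned above rests on the textbook decomposition of entropy generation into heat-transfer and fluid-friction contributions. The sketch below uses those generic lumped forms with hypothetical parameter names; it is not the paper's CFD-based computation.

```python
def entropy_generation(q, t_wall, t_fluid, m_dot, dp, rho, t_mean):
    # lumped textbook decomposition of entropy generation [W/K]:
    #   heat transfer across a finite temperature difference, plus
    #   viscous pressure loss of the coolant stream
    # q: heat rate [W], temperatures [K], m_dot: mass flow [kg/s],
    # dp: pressure drop [Pa], rho: density [kg/m^3]
    s_gen_thermal = q * (1.0 / t_fluid - 1.0 / t_wall)
    s_gen_friction = m_dot * dp / (rho * t_mean)
    return s_gen_thermal + s_gen_friction
```

Both contributions are nonnegative for physically sensible inputs, and the thermal term grows with the wall-fluid temperature gap, which is why improved heat transfer (a smaller gap) reduces entropy generation.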
Open AccessArticle Mixture of Experts with Entropic Regularization for Data Classification
Entropy 2019, 21(2), 190; https://doi.org/10.3390/e21020190
Received: 4 January 2019 / Revised: 4 February 2019 / Accepted: 15 February 2019 / Published: 18 February 2019
Viewed by 410 | PDF Full-text (3604 KB) | HTML Full-text | XML Full-text
Abstract
Today, there is growing interest in automatic classification for a variety of tasks, such as weather forecasting, product recommendation, intrusion detection, and people recognition. The mixture-of-experts model is a well-known classification technique: a probabilistic model consisting of local expert classifiers weighted by a gate network, typically based on softmax functions, that learns complex patterns in data. In this scheme, one data point is influenced by only one expert; as a result, training can be misguided on real datasets in which complex data need to be explained by multiple experts. In this work, we propose a variant of the regular mixture-of-experts model in which the classification cost is penalized by the Shannon entropy of the gating network, in order to avoid a winner-takes-all output for the gating network. Experiments on several real datasets show the advantage of our approach, with improvements in mean accuracy of 3–6% on some datasets. In future work, we plan to embed feature selection into this model. Full article
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
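The entropic regularization described above can be sketched in a few lines: subtracting a multiple of the gate's Shannon entropy from the loss rewards gates that spread responsibility across experts. The function names and the penalty weight lam are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gate_entropy(g):
    # Shannon entropy (in nats) of the gating distribution
    return -np.sum(g * np.log(g + 1e-12), axis=-1)

def regularized_loss(ce_loss, gate_probs, lam=0.1):
    # subtracting lam * H(gate) penalizes low-entropy (winner-takes-all)
    # gates: a more uniform gate yields a lower regularized loss
    return ce_loss - lam * np.mean(gate_entropy(gate_probs))
```

For a four-expert gate, the entropy ranges from 0 (one expert takes all) to ln 4 (uniform responsibility), so the regularizer directly controls how concentrated the gating becomes during training.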
Open AccessArticle Nonrigid Medical Image Registration Using an Information Theoretic Measure Based on Arimoto Entropy with Gradient Distributions
Entropy 2019, 21(2), 189; https://doi.org/10.3390/e21020189
Received: 12 December 2018 / Revised: 2 February 2019 / Accepted: 14 February 2019 / Published: 18 February 2019
Viewed by 404 | PDF Full-text (3221 KB) | HTML Full-text | XML Full-text
Abstract
This paper introduces a new nonrigid registration approach for medical images applying an information-theoretic measure based on Arimoto entropy with gradient distributions. A normalized dissimilarity measure based on Arimoto entropy is presented, which is employed to measure the independence between two images. In addition, a regularization term is integrated into the cost function to obtain a smooth elastic deformation. To take the spatial information between voxels into account, a distance between gradient distributions is constructed. The goal of nonrigid alignment is to find, using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) optimization scheme, the optimal solution of a cost function comprising a dissimilarity measure, a regularization term, and a distance term between the gradient distributions of the two images to be registered, which achieves its minimum when the two misaligned images are perfectly registered. To evaluate our algorithm for nonrigid medical image registration, experiments on simulated three-dimensional (3D) brain magnetic resonance (MR) images, real 3D thoracic computed tomography (CT) volumes, and 3D cardiac CT volumes were carried out using the elastix package. Comparison studies including mutual information (MI) and the approach without spatial information were conducted. These results demonstrate a slight improvement in the accuracy of nonrigid registration. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
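Arimoto's entropy of order α, on which the article's dissimilarity measure is based, is commonly written as H_α(P) = α/(1−α)·[(Σ p_i^α)^{1/α} − 1] and converges to Shannon entropy as α → 1. The sketch below assumes that form; the article's normalized dissimilarity measure built on it is not reproduced.

```python
import numpy as np

def arimoto_entropy(p, alpha):
    # Arimoto entropy of order alpha (alpha > 0, alpha != 1), in nats;
    # tends to the Shannon entropy as alpha -> 1
    p = np.asarray(p, dtype=float)
    return alpha / (1.0 - alpha) * (np.sum(p ** alpha) ** (1.0 / alpha) - 1.0)

def shannon_entropy(p):
    # Shannon entropy (in nats), the alpha -> 1 limit
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))
```

A degenerate distribution has zero entropy for any order, and for α near 1 the Arimoto value closely matches Shannon entropy, which is the behaviour a registration measure inherits from the underlying entropy family.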
Entropy EISSN 1099-4300 Published by MDPI AG, Basel, Switzerland