Special Issue "Entropy in Signal Analysis"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (15 November 2017).

Special Issue Editors

Prof. Jose C. Principe
Guest Editor
Computational NeuroEngineering Lab, University of Florida, Gainesville, FL 32611, USA
Interests: information theoretic learning; kernel methods; adaptive signal processing; brain machine interfaces
Prof. Badong Chen
Guest Editor
Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, 28 Xianning West Road, Xi'an 710049, P. R. China
Tel.: +86-18392892686; Fax: +86-29-8266-8672
Interests: entropy; probability theory; estimation theory; machine learning; information theoretic learning; signal processing; system identification; adaptive filtering; kernel methods

Special Issue Information

Dear Colleagues,

Signal analysis is a well-established enabling methodology with a huge impact on many areas of science and engineering, such as system identification, data mining, target detection, feature extraction, and speech and video analysis. Early methods consisted of simple quantification of signal structure with time- and/or frequency-domain descriptors (e.g., the Fast Fourier Transform), but the trend has been to progressively adopt more sophisticated techniques, such as nonlinear dynamics. Quantifying the dynamics of motion (dimension and Lyapunov exponents) is very useful for describing global properties and for selecting hyper-parameters in model-based approaches.

Still, one of the important unanswered questions is how best to quantify the evolution of signal dynamics. It is well established that probability density functions per se do not provide sufficient specificity, because time information is destroyed in the process. Linear and nonlinear parametric models (and the Fourier transform) capture only the correlation over time, which is just the second moment of the pairwise probability density function across time, and higher-order moments become very computationally intensive. On the other hand, the formal statistical approach, as exemplified by the Bayesian filter, uses conditional distributions and remains computationally very complex.

In this context, this Special Issue seeks contributions on signal analysis based on information theory and its descriptors, such as entropy, correntropy, mutual information, divergences, and so on, which can capture the higher-order statistics and information content of signals rather than simply their energy. In particular, it has recently been shown that the correntropy function is a generalized correlation that quantifies sums of higher-order moments of the signal, which opens the door to new spectral definitions that quantify the signal structure more precisely. Extensions of traditional model-based techniques will also be of interest for the Special Issue. Topics of interest generally include (but are not limited to) the applications of entropy (and other related measures) to the following areas:

  • Spectral Analysis;
  • Complexity Analysis;
  • Causality Analysis;
  • Period Estimation;
  • Outlier Detection;
  • Dynamic System Modeling;
  • Non-Gaussian Signal Analysis and Processing;
  • Real-World Applications.
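As an illustration for prospective authors, the correntropy function highlighted above can be estimated directly from samples. The following is a minimal sketch with a Gaussian kernel; the kernel width `sigma` is an arbitrary illustrative choice, not a prescribed setting:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample estimator of correntropy V(x, y) with a Gaussian kernel.

    Correntropy generalizes correlation: expanding the Gaussian kernel
    in a Taylor series shows that it sums weighted even-order moments
    of the error x - y, not just the second moment.
    """
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
print(correntropy(x, x))   # identical signals: exactly 1.0
print(correntropy(x, -x))  # dissimilar signals: strictly less than 1.0
```

Because the kernel saturates for large errors, the same estimator also underlies the outlier-robust criteria (e.g., the maximum correntropy criterion) appearing in several papers of this issue.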

Prof. Dr. Jose C. Principe
Prof. Dr. Badong Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (25 papers)

Research

Open Access Article
Multivariate Matching Pursuit Decomposition and Normalized Gabor Entropy for Quantification of Preictal Trends in Epilepsy
Entropy 2018, 20(6), 419; https://doi.org/10.3390/e20060419 - 31 May 2018
Abstract
Quantification of the complexity of signals recorded concurrently from multivariate systems, such as the brain, plays an important role in the study and characterization of their state and state transitions. Multivariate analysis of electroencephalographic (EEG) signals over time is conceptually most promising in unveiling the global dynamics of dynamical brain disorders such as epilepsy. We employed a novel methodology to study the global complexity of the epileptic brain en route to seizures. The developed measures of complexity were based on Multivariate Matching Pursuit (MMP) decomposition of signals in terms of time–frequency Gabor functions (atoms) and Shannon entropy. The measures were first validated on simulation data (Lorenz system) and then applied to EEGs from preictal (before seizure onset) periods, recorded by intracranial electrodes from eight patients with temporal lobe epilepsy and a total of 42 seizures, in search of global trends in complexity before seizure onset. Out of the five Gabor measures of complexity we tested, we found that our newly defined measure, the normalized Gabor entropy (NGE), was able to detect statistically significant (p < 0.05) nonlinear trends in the mean global complexity across all patients over 1 h periods prior to seizure onset. These trends pointed to a slow decrease of the epileptic brain’s global complexity over time, accompanied by an increase in the variance of complexity closer to seizure onset. These results show that the global complexity of the epileptic brain decreases at least 1 h prior to seizures and imply that the employed methodology and measures could be useful in identifying different brain states, monitoring seizure susceptibility over time, and potentially in seizure prediction. Full article
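The entropy computation underlying such measures can be sketched as follows. This sketch assumes the atom energies from a matching-pursuit decomposition are simply normalized into a distribution whose Shannon entropy is divided by its maximum; it is an illustrative reading of the idea, not the paper's exact NGE definition:

```python
import numpy as np

def normalized_energy_entropy(energies):
    """Shannon entropy of a set of atom energies, normalized to [0, 1].

    The energies (e.g., of Gabor atoms returned by matching pursuit)
    are converted to a probability distribution, and its entropy is
    divided by log(K), the maximum possible entropy for K atoms.
    Illustrative assumption, not the paper's exact formula.
    """
    p = np.asarray(energies, dtype=float)
    p = p / p.sum()
    h = -np.sum(p * np.log(p + 1e-300))  # tiny offset avoids log(0)
    return h / np.log(len(p))

print(normalized_energy_entropy([1, 1, 1, 1]))  # energy spread evenly: 1.0
print(normalized_energy_entropy([100, 1e-9, 1e-9, 1e-9]))  # concentrated: near 0
```

Low values thus indicate that a few atoms capture most of the signal energy (a more ordered signal), while values near one indicate energy spread across many atoms.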
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Amplitude- and Fluctuation-Based Dispersion Entropy
Entropy 2018, 20(3), 210; https://doi.org/10.3390/e20030210 - 20 Mar 2018
Cited by 14
Abstract
Dispersion entropy (DispEn) is a recently introduced entropy metric to quantify the uncertainty of time series. It is fast and, so far, it has demonstrated very good performance in the characterisation of time series. It includes a mapping step, but the effect of different mappings has not been studied yet. Here, we investigate the effect of linear and nonlinear mapping approaches in DispEn. We also inspect the sensitivity of different parameters of DispEn to noise. Moreover, we develop fluctuation-based DispEn (FDispEn) as a measure to deal with only the fluctuations of time series. Furthermore, the original and fluctuation-based forbidden dispersion patterns are introduced to discriminate deterministic from stochastic time series. Finally, we compare the performance of DispEn, FDispEn, permutation entropy, sample entropy, and Lempel–Ziv complexity on two physiological datasets. The results show that DispEn is the most consistent technique to distinguish various dynamics of the biomedical signals. Due to their advantages over existing entropy methods, DispEn and FDispEn are expected to be broadly used for the characterization of a wide variety of real-world time series. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/2326. Full article
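A minimal sketch of the dispersion-entropy idea follows: samples are mapped into c classes via the normal CDF, dispersion patterns of length m are counted, and the Shannon entropy of the pattern distribution is normalized. The parameter values are illustrative, and the authors' full method (including FDispEn and the alternative mappings they study) differs in detail:

```python
import math
from collections import Counter
import numpy as np

def dispersion_entropy(x, m=2, c=3):
    """Minimal dispersion entropy (DispEn) sketch with the common
    normal-CDF mapping; m and c here are illustrative choices."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    # 1) map each sample to one of c classes via the normal CDF
    y = np.array([0.5 * (1 + math.erf((v - mu) / (sd * math.sqrt(2)))) for v in x])
    z = np.minimum(np.maximum(np.round(c * y + 0.5), 1), c).astype(int)
    # 2) collect dispersion patterns (embedding vectors of length m)
    patterns = Counter(tuple(z[i:i + m]) for i in range(len(z) - m + 1))
    # 3) Shannon entropy of the pattern distribution, normalized by log(c**m)
    n = sum(patterns.values())
    h = -sum((k / n) * math.log(k / n) for k in patterns.values())
    return h / math.log(c ** m)

rng = np.random.default_rng(1)
print(dispersion_entropy(rng.normal(size=2000)))             # white noise: near 1
print(dispersion_entropy(np.sin(np.linspace(0, 20, 2000))))  # regular signal: lower
```

Forbidden dispersion patterns, mentioned in the abstract, correspond to patterns with zero count in step 2.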
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Correntropy Based Matrix Completion
Entropy 2018, 20(3), 171; https://doi.org/10.3390/e20030171 - 06 Mar 2018
Cited by 2
Abstract
This paper studies matrix completion problems in which the entries are contaminated by non-Gaussian noise or outliers. The proposed approach employs a nonconvex loss function induced by the maximum correntropy criterion. With the help of this loss function, we develop a rank-constrained model, as well as a nuclear norm regularized model, both of which are resistant to non-Gaussian noise and outliers. However, the non-convexity also leads to certain difficulties. To tackle this problem, we use simple iterative soft and hard thresholding strategies. We show that, when extended to general affine rank minimization problems, under proper conditions, certain recoverability results can be obtained for the proposed algorithms. Numerical experiments indicate the improved performance of our proposed approach. Full article
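The iterative soft-thresholding strategy mentioned in the abstract acts on singular values. A minimal sketch of that core step, the proximal operator of the nuclear norm, is shown below; the correntropy-induced weighting of observed entries that makes the paper's method robust is deliberately omitted here:

```python
import numpy as np

def svd_soft_threshold(X, tau):
    """Soft-threshold the singular values of X by tau: the proximal
    operator of the nuclear norm, which is the core step of iterative
    soft-thresholding for nuclear-norm-regularized matrix recovery."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # shrink each singular value, floor at zero
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 4))
Y = svd_soft_threshold(X, 0.5)
print(np.linalg.svd(Y, compute_uv=False))  # singular values of X, each reduced by 0.5 (floored at 0)
```

Hard thresholding replaces the shrink step with zeroing all but the largest r singular values, which enforces the rank constraint instead of the nuclear-norm penalty.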
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
A Joint Fault Diagnosis Scheme Based on Tensor Nuclear Norm Canonical Polyadic Decomposition and Multi-Scale Permutation Entropy for Gears
Entropy 2018, 20(3), 161; https://doi.org/10.3390/e20030161 - 03 Mar 2018
Cited by 5
Abstract
Gears are key components in rotating machinery, and their fault vibration signals usually show strong nonlinear and non-stationary characteristics. It is not easy for classical time–frequency domain analysis methods to recognize different gear working conditions. Therefore, this paper presents a joint fault diagnosis scheme for gear fault classification via tensor nuclear norm canonical polyadic decomposition (TNNCPD) and multi-scale permutation entropy (MSPE). Firstly, the one-dimensional vibration data of different gear fault conditions are converted into three-dimensional tensor data, and a new tensor canonical polyadic decomposition method based on the nuclear norm and convex optimization, called TNNCPD, is proposed to extract the low-rank component of the data, which represents the feature information of the measured signal. Then, the MSPE of the extracted feature information for different gear faults is calculated as the feature vector in order to recognize fault conditions. Finally, the proposed scheme is validated on practical gear vibration data from different fault conditions. The result demonstrates that the proposed scheme can effectively recognize different gear fault conditions. Full article
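The permutation entropy at the heart of MSPE counts ordinal patterns in the signal. A single-scale sketch is shown below; multi-scale versions apply the same computation to coarse-grained copies of the signal, and the parameters m and tau here are conventional illustrative choices:

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy: Shannon entropy of ordinal
    patterns of length m (time delay tau), divided by log(m!)."""
    x = np.asarray(x, dtype=float)
    patterns = Counter(
        tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))  # ordinal pattern
        for i in range(len(x) - (m - 1) * tau)
    )
    n = sum(patterns.values())
    h = -sum((k / n) * math.log(k / n) for k in patterns.values())
    return h / math.log(math.factorial(m))

rng = np.random.default_rng(3)
print(permutation_entropy(rng.normal(size=5000)))  # white noise: near 1
print(permutation_entropy(np.arange(100.0)))       # monotonic ramp: 0.0
```

A monotonic ramp produces a single ordinal pattern (entropy zero), while white noise uses all m! patterns nearly equally.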
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
An Analysis of the Value of Information When Exploring Stochastic, Discrete Multi-Armed Bandits
Entropy 2018, 20(3), 155; https://doi.org/10.3390/e20030155 - 28 Feb 2018
Cited by 4
Abstract
In this paper, we propose an information-theoretic exploration strategy for stochastic, discrete multi-armed bandits that achieves optimal regret. Our strategy is based on the value of information criterion. This criterion measures the trade-off between policy information and obtainable rewards. High amounts of policy information are associated with exploration-dominant searches of the space and yield high rewards. Low amounts of policy information favor the exploitation of existing knowledge. Information, in this criterion, is quantified by a parameter that can be varied during search. We demonstrate that a simulated-annealing-like update of this parameter, with a sufficiently fast cooling schedule, leads to a regret that is logarithmic with respect to the number of arm pulls. Full article
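A hedged sketch of the general idea, softmax exploration whose exploration parameter is annealed over time, is shown below. The log(t) cooling schedule and incremental-mean update here are illustrative stand-ins; they mirror the simulated-annealing-like flavor of the paper but are not the authors' value-of-information algorithm:

```python
import numpy as np

def annealed_softmax_bandit(means, pulls=2000, seed=0):
    """Softmax (Boltzmann) exploration with an annealed inverse
    temperature for a stochastic, discrete multi-armed bandit with
    Gaussian rewards. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)
    estimates = np.zeros(k)
    for arm in range(k):                     # pull each arm once to initialize
        counts[arm] = 1
        estimates[arm] = rng.normal(means[arm], 1.0)
    for t in range(k, pulls):
        beta = np.log(t + 1)                 # cooling: exploitation grows with t
        p = np.exp(beta * (estimates - estimates.max()))
        p /= p.sum()                         # softmax over empirical means
        arm = rng.choice(k, p=p)
        reward = rng.normal(means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return counts

print(annealed_softmax_bandit([0.0, 0.5, 1.0]))  # pull counts per arm
```

The paper's contribution is showing that, with a sufficiently fast cooling schedule on its information parameter, regret grows only logarithmically in the number of pulls; a naive schedule like the one above carries no such guarantee.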
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Multivariate Entropy Characterizes the Gene Expression and Protein-Protein Networks in Four Types of Cancer
Entropy 2018, 20(3), 154; https://doi.org/10.3390/e20030154 - 28 Feb 2018
Abstract
There is an urgent need to detect cancer at early stages in order to treat it, to improve patients’ lifespans, and even to cure it. In this work, we determined the entropic contributions of genes in cancer networks. We detected sudden changes in entropy values in melanoma, hepatocellular carcinoma, pancreatic cancer, and squamous lung cell carcinoma associated with transitions from healthy controls to cancer. We also identified the most relevant genes involved in the carcinogenic process of the four types of cancer with the help of entropic changes in local networks. Their corresponding proteins could be used as potential targets for treatments and as biomarkers of cancer. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Big Data Blind Separation
Entropy 2018, 20(3), 150; https://doi.org/10.3390/e20030150 - 27 Feb 2018
Abstract
Data or signal separation is one of the critical areas of data analysis. In this work, the problem of non-negative data separation is considered. The problem can be briefly described as follows: given X ∈ ℝ^(m×N), find A ∈ ℝ^(m×n) and S ∈ ℝ₊^(n×N) such that X = AS. Specifically, the problem with sparse locally dominant sources is addressed in this work. Although the problem is well studied in the literature, a test to validate the locally dominant assumption is not yet available. In addition, the typical approaches available in the literature sequentially extract the elements of the mixing matrix. In this work, a mathematical modeling-based approach is presented that can simultaneously validate the assumption and separate the given mixture data. Furthermore, a correntropy-based measure is proposed to reduce the model size. The approach presented in this paper is suitable for big data separation. Numerical experiments are conducted to illustrate the performance and validity of the proposed approach. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Information Thermodynamics Derives the Entropy Current of Cell Signal Transduction as a Model of a Binary Coding System
Entropy 2018, 20(2), 145; https://doi.org/10.3390/e20020145 - 24 Feb 2018
Cited by 5
Abstract
The analysis of cellular signaling cascades based on information thermodynamics has recently developed considerably. A signaling cascade may be considered a binary code system consisting of two types of signaling molecules that carry biological information: phosphorylated (active) and non-phosphorylated (inactive) forms. This study aims to evaluate the signal transduction step in cascades from the viewpoint of changes in mixing entropy. An increase in active forms may induce biological signal transduction through a mixing entropy change, which induces a chemical potential current in the signaling cascade. We applied the fluctuation theorem to calculate the chemical potential current and found that the average entropy production current is independent of the step in the whole cascade. As a result, the entropy current carrying signal transduction is defined by the entropy current mobility. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Electricity Consumption Forecasting Scheme via Improved LSSVM with Maximum Correntropy Criterion
Entropy 2018, 20(2), 112; https://doi.org/10.3390/e20020112 - 08 Feb 2018
Cited by 2
Abstract
In recent years, with the deepening of China’s electricity sales-side reform and the gradual opening up of the electricity market, the forecasting of electricity consumption (FoEC) has become an extremely important technique for the electricity market. At present, how to forecast electricity consumption accurately and evaluate the results scientifically are still key research topics. In this paper, we propose a novel prediction scheme based on the least-squares support vector machine (LSSVM) model with a maximum correntropy criterion (MCC) to forecast electricity consumption (EC). Firstly, the electricity characteristics of various industries are analyzed to determine the factors that mainly affect changes in electricity use, such as gross domestic product (GDP), temperature, and so on. Secondly, given the small sample size of the available data, the LSSVM model is employed as the prediction model. In order to optimize the parameters of the LSSVM model, we further use the local similarity function MCC as the evaluation criterion. Thirdly, we employ K-fold cross-validation and grid-search methods to improve the learning ability. In the experiments, we use the EC data of Shaanxi Province in China to evaluate the proposed prediction scheme, and the results show that the proposed scheme outperforms the method based on the traditional LSSVM model. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Granger Causality and Jensen–Shannon Divergence to Determine Dominant Atrial Area in Atrial Fibrillation
Entropy 2018, 20(1), 57; https://doi.org/10.3390/e20010057 - 12 Jan 2018
Cited by 4
Abstract
Atrial fibrillation (AF) is the most commonly occurring arrhythmia. Catheter pulmonary vein ablation has emerged as a treatment able to make the arrhythmia disappear; nevertheless, recurrence of the arrhythmia is very frequent. In this study, we propose an analysis of the electrical signals recorded from bipolar catheters at three locations, the pulmonary veins and the right and left atria, before and during the ablation procedure. Principal Component Analysis (PCA) was applied to reduce the data dimension, and Granger causality and divergence techniques were applied to analyse connectivity along the atria in three main regions: the pulmonary veins, the left atrium (LA), and the right atrium (RA). The results showed that, before the procedure, patients with recurrence of the arrhythmia had greater connectivity between atrial areas. Moreover, during the ablation procedure, both atria were more connected in patients with recurrence of the arrhythmia than in patients who maintained sinus rhythm. These results can be helpful for designing procedures to end AF. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Entropy Measures for Stochastic Processes with Applications in Functional Anomaly Detection
Entropy 2018, 20(1), 33; https://doi.org/10.3390/e20010033 - 11 Jan 2018
Cited by 2
Abstract
We propose a definition of entropy for stochastic processes. We provide a reproducing kernel Hilbert space model to estimate entropy from a random sample of realizations of a stochastic process, namely functional data, and introduce two approaches to estimate minimum entropy sets. These sets are relevant to detect anomalous or outlier functional data. A numerical experiment illustrates the performance of the proposed method; in addition, we conduct an analysis of mortality rate curves as an interesting application in a real-data context to explore functional anomaly detection. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise
Entropy 2017, 19(12), 648; https://doi.org/10.3390/e19120648 - 29 Nov 2017
Cited by 6
Abstract
The α-jerk model is an effective model for maneuvering-target tracking, one of the most critical issues in target tracking. Non-Gaussian noise is always present in the tracking process and usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least-squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with a weighted least-squares formulation based on the maximum correntropy criterion is then derived. The robustness of the maximum correntropy criterion is also analyzed with the influence function and compared with the Huber-based filter; moreover, since the kernel size of the Gaussian kernel plays an important role in the filter algorithm, a new adaptive kernel method is proposed in this paper to adjust this parameter in real time. Finally, simulation results indicate the validity and the efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Real-Time Robust Voice Activity Detection Using the Upper Envelope Weighted Entropy Measure and the Dual-Rate Adaptive Nonlinear Filter
Entropy 2017, 19(11), 487; https://doi.org/10.3390/e19110487 - 28 Oct 2017
Cited by 2
Abstract
Voice activity detection (VAD) is a vital process in voice communication systems to avoid unnecessary coding and transmission of noise. Most of the existing VAD algorithms continue to suffer high false alarm rates and low sensitivity when the signal-to-noise ratio (SNR) is low, at 0 dB and below. Others are developed to operate in offline mode or are impractical for implementation in actual devices due to high computational complexity. This paper proposes the upper envelope weighted entropy (UEWE) measure as a means to enable high separation of speech and non-speech segments in voice communication. The asymmetric nonlinear filter (ANF) is employed in UEWE to extract the adaptive weight factor that is subsequently used to compensate the noise effect. In addition, this paper also introduces a dual-rate adaptive nonlinear filter (DANF) with high adaptivity to rapid time-varying noise for computation of the decision threshold. Performance comparison with standard and recent VADs shows that the proposed algorithm is superior especially in real-time practical applications. Full article
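Entropy-based VAD exploits the contrast between structured speech spectra and flatter noise spectra. A minimal single-frame sketch follows, using plain spectral entropy; the paper's upper-envelope weighting and DANF-based adaptive threshold are not reproduced here, and the frame length and sampling rate are illustrative assumptions:

```python
import numpy as np

def frame_spectral_entropy(frame):
    """Spectral entropy of one frame: Shannon entropy of the normalized
    power spectrum. Tonal/voiced frames concentrate energy in few bins
    (low entropy); noise spreads it across the spectrum (high entropy)."""
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-300)).sum())

rng = np.random.default_rng(5)
t = np.arange(256) / 8000.0                  # 256-sample frame at 8 kHz (assumed)
tone = np.sin(2 * np.pi * 440 * t)           # a voiced-like, structured frame
noise = rng.normal(size=256)                 # a noise-like frame
print(frame_spectral_entropy(tone) < frame_spectral_entropy(noise))  # True
```

A simple VAD thresholds this quantity per frame; the paper's contribution lies in making the measure and its threshold robust at low SNR and cheap enough for real-time use.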
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Complexity Analysis of Neonatal EEG Using Multiscale Entropy: Applications in Brain Maturation and Sleep Stage Classification
Entropy 2017, 19(10), 516; https://doi.org/10.3390/e19100516 - 26 Sep 2017
Cited by 8
Abstract
Automated analysis of electroencephalographic (EEG) data for the brain monitoring of preterm infants has gained attention in the last decades. In this study, we analyze the complexity of neonatal EEG, quantified using multiscale entropy. The aim of the current work is to investigate how EEG complexity evolves during electrocortical maturation and whether complexity features can be used to classify sleep stages. First, we developed a regression model that estimates the postmenstrual age (PMA) using a combination of complexity features. Then, these features were used to build a sleep stage classifier. The analysis is performed on a database consisting of 97 EEG recordings from 26 prematurely born infants, recorded between 27 and 42 weeks PMA. The results of the regression analysis revealed a significant positive correlation between EEG complexity and the infant’s age. Moreover, the PMA of the neonate could be estimated with a root mean squared error of 1.88 weeks. The sleep stage classifier was able to discriminate quiet sleep from non-quiet sleep with an area under the curve (AUC) of 90%. These results suggest that the complexity of the brain dynamics is a highly useful index for brain maturation quantification and neonatal sleep stage classification. Full article
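Multiscale entropy, as used here, coarse-grains the signal and computes sample entropy at each scale. A compact sketch with the standard definitions is shown below; the parameters m = 2 and r = 0.2 are the conventional choices, not necessarily the ones used in the paper:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: negative log of the conditional probability that
    template vectors matching for m points (within r times the standard
    deviation, Chebyshev distance) still match at m + 1 points."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return ((d <= tol).sum() - len(t)) / 2   # matching pairs, no self-matches
    return -np.log(matches(m + 1) / matches(m))

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """Multiscale entropy: sample entropy of the coarse-grained series
    (non-overlapping window averages) at each scale."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1), m, r)
            for s in scales]

rng = np.random.default_rng(4)
print(multiscale_entropy(rng.normal(size=600)))  # one entropy value per scale
```

The profile of these values across scales, rather than any single value, is what distinguishes complex dynamics from both regular signals and uncorrelated noise.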
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Traction Inverter Open Switch Fault Diagnosis Based on Choi–Williams Distribution Spectral Kurtosis and Wavelet-Packet Energy Shannon Entropy
Entropy 2017, 19(9), 504; https://doi.org/10.3390/e19090504 - 16 Sep 2017
Cited by 3
Abstract
In this paper, a new approach for fault detection and location of open switch faults in the closed-loop inverter fed vector controlled drives of Electric Multiple Units is proposed. Spectral kurtosis (SK) based on Choi–Williams distribution (CWD) as a statistical tool can effectively indicate the presence of transients and locations in the frequency domain. Wavelet-packet energy Shannon entropy (WPESE) is appropriate for the transient changes detection of complex non-linear and non-stationary signals. Based on the analyses of currents in normal and fault conditions, SK based on CWD and WPESE are combined with the DC component method. SK based on CWD and WPESE are used for the fault detection, and the DC component method is used for the fault localization. This approach can diagnose the specific locations of faulty Insulated Gate Bipolar Transistors (IGBTs) with high accuracy, and it requires no additional devices. Experiments on the RT-LAB platform are carried out and the experimental results verify the feasibility and effectiveness of the diagnosis method. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
Log Likelihood Spectral Distance, Entropy Rate Power, and Mutual Information with Applications to Speech Coding
Entropy 2017, 19(9), 496; https://doi.org/10.3390/e19090496 - 14 Sep 2017
Cited by 1
Abstract
We provide a new derivation of the log likelihood spectral distance measure for signal processing using the logarithm of the ratio of entropy rate powers. Using this interpretation, we show that the log likelihood ratio is equivalent to the difference of two differential entropies, and further that it can be written as the difference of two mutual informations. These latter two expressions allow the analysis of signals via the log likelihood ratio to be extended beyond spectral matching to the study of their statistical quantities of differential entropy and mutual information. Examples from speech coding are presented to illustrate the utility of these new results. These new expressions allow the log likelihood ratio to be of interest in applications beyond those of just spectral matching for speech. Full article
(This article belongs to the Special Issue Entropy in Signal Analysis)

Open Access Article
A Characterization of the Domain of Beta-Divergence and Its Connection to Bregman Variational Model
Entropy 2017, 19(9), 482; https://doi.org/10.3390/e19090482 - 09 Sep 2017
Abstract
In image and signal processing, the beta-divergence is well known as a similarity measure between two positive objects. However, it is unclear whether its distance-like structure is preserved if the domain of the beta-divergence is extended to the negative region. In this article, we study the domain of the beta-divergence and its connection to the Bregman divergence associated with a convex function of Legendre type. In fact, we show that the domain of the beta-divergence (and the corresponding Bregman divergence) includes the negative region under a mild condition on the beta value. Additionally, through the relation between the beta-divergence and the Bregman divergence, we reformulate various variational models appearing in image processing into a unified framework, namely the Bregman variational model. This model has a strong advantage over beta-divergence-based models due to the dual structure of the Bregman divergence. As an example, we demonstrate how to build a convex reformulated variational model with a negative domain for a classic nonconvex problem that commonly arises in synthetic aperture radar image processing.
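The pointwise beta-divergence is simple to write down, and the negative-domain point is visible already for beta = 2, where it reduces to half the squared Euclidean distance and is defined for any real arguments. The sketch below uses the standard formula for beta outside {0, 1}; the example values are hypothetical.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Pointwise beta-divergence for beta not in {0, 1}.

    For positive inputs this is the standard definition; whether negative
    inputs are admissible depends on beta. For beta = 2 it equals
    0.5 * (x - y)**2 and so accepts any real x, y.
    """
    return (x**beta + (beta - 1) * y**beta - beta * x * y**(beta - 1)) / (beta * (beta - 1))

x, y = -1.5, 2.0           # negative first argument, fine when beta = 2
d = beta_divergence(x, y, 2.0)
```

For other beta values, whether `x**beta` is even real-valued for negative `x` is exactly the kind of domain question the paper characterizes.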
Open Access Article
A Sparse Multiwavelet-Based Generalized Laguerre–Volterra Model for Identifying Time-Varying Neural Dynamics from Spiking Activities
Entropy 2017, 19(8), 425; https://doi.org/10.3390/e19080425 - 20 Aug 2017
Cited by 2
Abstract
Modeling of a time-varying dynamical system provides insights into the functions of biological neural networks and contributes to the development of next-generation neural prostheses. In this paper, we formulate a novel sparse multiwavelet-based generalized Laguerre–Volterra (sMGLV) modeling framework to identify time-varying neural dynamics from multiple spike train data. First, the significant inputs are selected with a group least absolute shrinkage and selection operator (LASSO) method, which captures the sparsity within the neural system. Second, a multiwavelet-based basis function expansion scheme with an efficient forward orthogonal regression (FOR) algorithm, aided by mutual information, is utilized to rapidly capture the time-varying characteristics of the sparse model. Quantitative simulation results demonstrate that the proposed sMGLV model outperforms the initial full model and state-of-the-art modeling methods in tracking various time-varying kernels. Analyses of experimental data show that the proposed model captures the timing of transient changes accurately. The proposed framework will be useful for studying how, when, and where information transmission across brain regions evolves during behavior.
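The input-selection step rests on the group-LASSO mechanism: the proximal operator of the group penalty shrinks an entire coefficient group and zeroes it out when its norm is small, which is how a candidate input gets deselected wholesale. Below is a generic sketch of that operator with hypothetical coefficient groups; it is not the paper's full sMGLV estimation pipeline.

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Proximal operator of the group-LASSO penalty lam * ||v||_2.

    Shrinks the whole coefficient group toward zero, and sets it exactly
    to zero when its norm falls below lam.
    """
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

weak = np.array([0.1, -0.05, 0.08])    # weak input group: eliminated entirely
strong = np.array([2.0, -1.5, 1.0])    # strong input group: shrunk, kept
```

Applying the operator with threshold 0.5 removes the `weak` group while retaining (a shrunken) `strong` group, mirroring the sparsity selection described above.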
Open Access Article
A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique
Entropy 2017, 19(7), 365; https://doi.org/10.3390/e19070365 - 20 Jul 2017
Cited by 27
Abstract
Recently, inspired by correntropy, the kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space that offers improved performance. Applying the KRSL to adaptive filtering yields the minimum kernel risk-sensitive loss (MKRSL) algorithm. However, MKRSL, as a traditional kernel adaptive filter (KAF) method, generates a growing radial basis function (RBF) network. To address this limitation, this article uses an online vector quantization (VQ) technique to propose a novel KAF algorithm, named quantized MKRSL (QMKRSL), that curbs the growth of the RBF network structure. Compared with other quantized methods, e.g., the quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC) algorithms, QMKRSL's more efficient performance surface lets it converge faster and filter more accurately while remaining robust to outliers. Moreover, since QMKRSL with the traditional gradient descent method may fail to exploit the information shared between the input and output spaces, we also propose an intensified variant using a bilateral gradient technique, named QMKRSL_BG, to further improve filtering accuracy. Short-term chaotic time-series prediction experiments demonstrate the satisfactory performance of both algorithms.
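The robustness claim comes from the shape of the loss itself: with a Gaussian kernel, the per-sample KRSL is a bounded function of the error, unlike the squared error. The sketch below illustrates that boundedness with hypothetical parameter values; it is not the MKRSL/QMKRSL filter, just the loss.

```python
import numpy as np

def krsl(e, sigma=1.0, lam=2.0):
    """Per-sample kernel risk-sensitive loss with a Gaussian kernel.

    kappa = exp(-e^2 / (2 sigma^2)) lies in (0, 1], so the loss is
    bounded above by exp(lam) / lam no matter how large |e| gets --
    the property that tempers the influence of outliers.
    """
    kappa = np.exp(-e**2 / (2 * sigma**2))
    return np.exp(lam * (1.0 - kappa)) / lam

errors = np.array([0.1, 1.0, 100.0])
losses = krsl(errors)
```

A 100-sigma outlier contributes at most exp(lam)/lam to this loss, versus 10,000 to a squared-error cost.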
Open Access Article
Non-Linear Stability Analysis of Real Signals from Nuclear Power Plants (Boiling Water Reactors) Based on Noise Assisted Empirical Mode Decomposition Variants and the Shannon Entropy
Entropy 2017, 19(7), 359; https://doi.org/10.3390/e19070359 - 14 Jul 2017
Cited by 4
Abstract
There are currently around 78 nuclear power plants (NPPs) in the world based on Boiling Water Reactors (BWRs). The standard parameter for assessing BWR instability is the linear Decay Ratio (DR). However, it is well known that BWRs are complex non-linear dynamical systems that may even exhibit chaotic dynamics, which normally preclude the use of the DR when the BWR is working at a specific operating point during instability. In this work, a novel methodology based on an adaptive Shannon entropy estimator and on Noise-Assisted Empirical Mode Decomposition variants is presented. It was developed for real-time implementation of a stability monitor, and it was applied to a set of signals from several NPP reactors (Ringhals, Sweden; Forsmark, Sweden; and Laguna Verde, Mexico) under commercial operating conditions, each of which experienced an instability event of a different nature.
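The monitoring idea can be caricatured very simply: track a windowed Shannon entropy of the signal and watch it change as the dynamics change. The sketch below is a deliberately reduced stand-in for the paper's methodology; it omits the EMD pre-decomposition and the adaptive estimator entirely, and the example signals are hypothetical.

```python
import numpy as np

def sliding_shannon_entropy(x, win=256, bins=16):
    """Histogram-based Shannon entropy (nats) over non-overlapping windows.

    A simplified stand-in for the paper's adaptive estimator: the raw
    signal is used directly, with no noise-assisted EMD decomposition.
    """
    out = []
    for start in range(0, len(x) - win + 1, win):
        hist, _ = np.histogram(x[start:start + win], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        out.append(-np.sum(p * np.log(p)))
    return np.array(out)

rng = np.random.default_rng(1)
stable = np.sign(np.sin(np.linspace(0, 20 * np.pi, 1024)))  # regular oscillation
unstable = rng.normal(size=1024)                            # disordered signal
```

The regular oscillation occupies few amplitude bins and yields a low entropy track; the disordered signal spreads over many bins and yields a high one.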
Open Access Article
Time-Shift Multiscale Entropy Analysis of Physiological Signals
Entropy 2017, 19(6), 257; https://doi.org/10.3390/e19060257 - 05 Jun 2017
Cited by 9
Abstract
Measures of predictability in physiological signals using entropy have been widely applied in many areas of research. Multiscale entropy expresses different levels of either approximate entropy or sample entropy through multiple scale factors that generate multiple time series, capturing more useful information than the scalar value produced by either entropy method alone. This paper applies different time shifts to various intervals of a time series to discover different entropy patterns within it. Examples and experimental results using white noise, 1/f noise, photoplethysmography, and electromyography signals suggest that the proposed time-shift multiscale entropy analysis of physiological signals is valid and performs better than conventional multiscale entropy.
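The time-shift idea replaces coarse-grain averaging with interleaved subsampling: at scale s, the series is split into the s subsequences x[k::s] and their sample entropies are averaged. The sketch below is an illustrative reading of that scheme with a basic sample entropy implementation; parameter choices (m = 2, r = 0.2) are conventional defaults, not necessarily the paper's.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with tolerance r * std(x), Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= tol)
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def time_shift_mse(x, scale):
    """Average sample entropy over the `scale` interleaved subsequences
    x[k::scale] -- the time-shift alternative to coarse-graining."""
    return np.mean([sample_entropy(x[k::scale]) for k in range(scale)])

white = np.random.default_rng(0).normal(size=600)
vals = [time_shift_mse(white, s) for s in (1, 2, 3)]
```

For white noise the subsampled series remain white, so the time-shift entropy profile stays roughly flat across scales, one of the sanity checks reported in the paper.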
Open Access Article
Designing Labeled Graph Classifiers by Exploiting the Rényi Entropy of the Dissimilarity Representation
Entropy 2017, 19(5), 216; https://doi.org/10.3390/e19050216 - 09 May 2017
Abstract
Representing patterns as labeled graphs is becoming increasingly common in the broad field of computational intelligence. Accordingly, a wide repertoire of pattern recognition tools, such as classifiers and knowledge discovery procedures, is now available and tested on various datasets of labeled graphs. However, the design of effective learning procedures operating in the space of labeled graphs is still a challenging problem, especially from the computational complexity viewpoint. In this paper, we present a major improvement of a general-purpose graph classifier built on an interplay of dissimilarity representation, clustering, information-theoretic techniques, and evolutionary optimization. The improvement focuses on a key subroutine devised to compress the input data. We prove several theorems that are fundamental to setting the parameters controlling this compression operation. We demonstrate the effectiveness of the resulting classifier by benchmarking the developed variants on well-known datasets of labeled graphs, using classification accuracy, computing time, and parsimony (the structural complexity of the synthesized classification models) as distinct performance indicators. The results show state-of-the-art test set accuracy and a considerable speed-up in computing time.
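A standard information-theoretic-learning ingredient in this line of work is the Parzen-window estimator of the Rényi quadratic entropy (alpha = 2), which needs only pairwise Gaussian kernel evaluations. The sketch below shows that estimator on a generic 1-D sample; applying it to dissimilarity values, and the kernel width choice, are assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np

def renyi_quadratic_entropy(x, sigma=0.5):
    """Parzen-window estimate of the Renyi quadratic entropy:
    H2 = -log( mean Gaussian kernel over all sample pairs ).
    """
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]
    # Pairwise kernel; the convolution of two Gaussians widens sigma by sqrt(2).
    k = np.exp(-diff**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return -np.log(k.mean())

rng = np.random.default_rng(0)
concentrated = rng.normal(0.0, 0.1, 400)   # tightly clustered values
spread = rng.normal(0.0, 2.0, 400)         # widely dispersed values
```

More dispersed samples yield a smaller mean kernel value and hence a larger entropy estimate, which is the quantity such compression subroutines monitor.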
Open Access Article
Recognition of Traveling Surges in HVDC with Wavelet Entropy
Entropy 2017, 19(5), 184; https://doi.org/10.3390/e19050184 - 26 Apr 2017
Cited by 3
Abstract
Traveling surges are commonly used by the protection devices of high-voltage direct current (HVDC) transmission systems. Lightning strikes can also produce large-amplitude traveling surges, which lead to the malfunction of relays, so the reliable operation of protection devices requires that traveling surges be recognized. Wavelet entropy, which reveals time-frequency distribution features, is a potential tool for traveling surge recognition. In this paper, the effectiveness of wavelet entropy in characterizing traveling surges is demonstrated by comparing its representations of different kinds of surges and by discussing its stability under varying propagation distance and fault resistance. A wavelet entropy-based recognition method is proposed and tested on simulated traveling surges. The results show that wavelet entropy can discriminate fault traveling surges with a good recognition rate.
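Wavelet entropy in this sense is the Shannon entropy of the relative energy distribution across wavelet decomposition levels. The sketch below uses a hand-rolled Haar cascade purely for self-containment; the paper's wavelet choice, decomposition depth, and surge waveforms may differ, and the test signals here are hypothetical.

```python
import numpy as np

def haar_dwt_energies(x, levels=4):
    """Detail-band energies from a plain orthonormal Haar DWT cascade."""
    x = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail**2))
        x = approx
    energies.append(np.sum(x**2))          # final approximation band
    return np.array(energies)

def wavelet_entropy(x, levels=4):
    """Shannon entropy of the relative wavelet energy distribution."""
    e = haar_dwt_energies(x, levels)
    p = e / e.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 512, endpoint=False)
smooth = np.sin(2 * np.pi * 4 * t)                        # slow wave: energy in one band
surge = smooth.copy()
surge[250:260] += 5.0 * (-1.0) ** np.arange(10)           # fast superimposed transient
```

The transient injects energy into the high-frequency detail bands, spreading the energy distribution and raising the wavelet entropy, which is the discriminative feature described above.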
Open Access Article
Tensor Singular Spectrum Decomposition Algorithm Based on Permutation Entropy for Rolling Bearing Fault Diagnosis
Entropy 2017, 19(4), 139; https://doi.org/10.3390/e19040139 - 23 Mar 2017
Cited by 32
Abstract
A mechanical vibration signal mapped into a high-dimensional space tends to exhibit distinctive distribution and movement characteristics, which can further reveal the dynamic behavior of the original time series. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent, so tensor decomposition algorithms have broad application prospects in signal processing. A high-dimensional tensor can be obtained from a one-dimensional vibration signal by phase space reconstruction, a process called tensorization. As a new signal decomposition method, the tensor-based singular spectrum algorithm (TSSA) combines the advantages of phase space reconstruction and tensor decomposition. However, TSSA has difficulty estimating the rank of the tensor and selecting the optimal reconstruction tensor. In this paper, an improved TSSA algorithm based on convex optimization and permutation entropy (PE) is proposed. First, to accurately estimate the rank of the tensor decomposition, we present a convex optimization algorithm using non-convex penalty functions based on the singular value decomposition (SVD). Then, PE is employed to evaluate the desired tensor and improve denoising performance. Both numerical simulations and experimental bearing failure data are analyzed to verify the effectiveness of the proposed algorithm.
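Permutation entropy, the selection criterion used here, is the Shannon entropy of the frequencies of ordinal patterns in embedded windows of the series. Below is a standard minimal implementation (order and delay values are conventional defaults, not necessarily those used in the paper):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy: Shannon entropy of ordinal-pattern frequencies."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    h = -np.sum(p * np.log(p))
    return h / np.log(factorial(order)) if normalize else h

noisy = np.random.default_rng(0).normal(size=2000)  # disordered: PE near 1
monotone = np.arange(2000.0)                        # fully ordered: PE = 0
```

A well-denoised reconstruction of a structured signal has lower permutation entropy than a noise-dominated one, which is why PE can score candidate reconstruction tensors.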
Open Access Article
A Soft Parameter Function Penalized Normalized Maximum Correntropy Criterion Algorithm for Sparse System Identification
Entropy 2017, 19(1), 45; https://doi.org/10.3390/e19010045 - 23 Jan 2017
Cited by 25
Abstract
A soft parameter function penalized normalized maximum correntropy criterion (SPF-NMCC) algorithm is proposed for sparse system identification. The algorithm is derived from normalized adaptive filter theory, the maximum correntropy criterion (MCC) algorithm, and zero-attracting techniques: a soft parameter function is incorporated into the cost function of the traditional normalized MCC (NMCC) algorithm to exploit the sparsity of the signals, and the resulting SPF-NMCC algorithm is mathematically derived in detail. The algorithm provides an efficient zero-attractor term that effectively attracts zero taps and near-zero coefficients to zero, thereby speeding up convergence. Estimation behavior is evaluated on a sparse system and a sparse acoustic echo channel. Computer simulation results indicate that the proposed SPF-NMCC algorithm outperforms the MCC, NMCC, and LMS (least mean square) algorithms and their zero-attracting forms in both convergence speed and steady-state performance.
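The structure of such an update is: a normalized, correntropy-weighted gradient step plus a zero-attractor term. The toy loop below illustrates that structure only; it substitutes a plain sign(w) attractor for the paper's soft parameter function, and all step sizes, the system, and the noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_iters = 16, 4000
w_true = np.zeros(n_taps)
w_true[[2, 9]] = [1.0, -0.5]                  # sparse unknown system

w = np.zeros(n_taps)                          # adaptive filter weights
mu, sigma, rho, eps = 0.5, 2.0, 1e-4, 1e-6
x_buf = np.zeros(n_taps)                      # tapped delay line
for _ in range(n_iters):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.normal()
    d = w_true @ x_buf + 0.01 * rng.normal()  # noisy desired signal
    e = d - w @ x_buf
    g = np.exp(-e**2 / (2 * sigma**2))        # correntropy weight: small for outliers
    w += mu * g * e * x_buf / (x_buf @ x_buf + eps)   # normalized MCC-style step
    w -= rho * np.sign(w)                     # zero attractor; sign(w) stands in
                                              # for the paper's soft parameter function
mse = np.mean((w - w_true) ** 2)
```

The attractor continually nudges inactive taps toward exactly zero while the data-driven term keeps the two active taps in place, which is the mechanism behind the faster convergence claimed above.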