Table of Contents

Entropy, Volume 19, Issue 6 (June 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Estimation of flood magnitude for a given recurrence interval T (T-year flood) at a specific [...]
Displaying articles 1-53
Open Access Article: The Entropic Linkage between Equity and Bond Market Dynamics
Entropy 2017, 19(6), 292; https://doi.org/10.3390/e19060292
Received: 29 April 2017 / Revised: 17 June 2017 / Accepted: 17 June 2017 / Published: 21 June 2017
PDF Full-text (2440 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
An alternative derivation of the yield curve based on entropy, or the loss of information as it is communicated through time, is introduced. Given this focus on entropy growth in communication, the Shannon entropy is utilized. Additionally, Shannon entropy’s close relationship to the Kullback–Leibler divergence is used to provide a more precise understanding of this new yield curve. The derivation of the entropic yield curve is completed with the use of the Burnashev reliability function, which serves as a weighting between the true and error distributions. The deep connections between the entropic yield curve and the popular Nelson–Siegel specification are also examined. Finally, this entropically derived yield curve is used to provide an estimate of the economy’s implied information processing ratio. This information theoretic ratio offers a new causal link between bond and equity markets, and is a valuable new tool for the modeling and prediction of stock market behavior. Full article
(This article belongs to the Special Issue Entropic Applications in Economics and Finance)
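For reference, the two information measures this derivation builds on have the standard forms (generic notation, not taken from the paper):

    H(P) = -\sum_i p_i \log p_i ,        D_{KL}(P \| Q) = \sum_i p_i \log ( p_i / q_i ) ,

where P is the true distribution and Q an approximating (error) distribution; the Kullback–Leibler divergence measures the information lost when Q is used in place of P.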

Open Access Article: Enthalpy of Mixing in Al–Tb Liquid
Entropy 2017, 19(6), 290; https://doi.org/10.3390/e19060290
Received: 10 April 2017 / Revised: 14 June 2017 / Accepted: 15 June 2017 / Published: 21 June 2017
PDF Full-text (5166 KB) | HTML Full-text | XML Full-text
Abstract
The liquid-phase enthalpy of mixing for Al–Tb alloys is measured for 3, 5, 8, 10, and 20 at% Tb at selected temperatures in the range from 1364 to 1439 K. Methods include isothermal solution calorimetry and isoperibolic electromagnetic levitation drop calorimetry. Mixing enthalpy is determined relative to the unmixed pure (Al and Tb) components. The required formation enthalpy for the Al3Tb phase is computed from first-principles calculations. Based on our measurements, three different semi-empirical solution models are offered for the excess free energy of the liquid, including regular, subregular, and associate model formulations. These models are also compared with the Miedema model prediction of mixing enthalpy. Full article
(This article belongs to the Special Issue Thermodynamics in Material Science)
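The regular and subregular formulations are standard solution models; in conventional Redlich–Kister notation (symbols assumed here, not quoted from the paper), the liquid mixing enthalpy reads

    \Delta H_{mix} = x_{Al} x_{Tb} L_0                                   (regular)
    \Delta H_{mix} = x_{Al} x_{Tb} [ L_0 + L_1 ( x_{Al} - x_{Tb} ) ]     (subregular)

while the associate model additionally treats the liquid as containing an Al3Tb-like associate species with its own formation enthalpy.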

Open Access Article: The Mehler-Fock Transform in Signal Processing
Entropy 2017, 19(6), 289; https://doi.org/10.3390/e19060289
Received: 25 April 2017 / Revised: 13 June 2017 / Accepted: 15 June 2017 / Published: 20 June 2017
PDF Full-text (943 KB) | HTML Full-text | XML Full-text
Abstract
Many signals can be described as functions on the unit disk (ball). In the framework of group representations it is well known how to construct Hilbert spaces containing these functions that have the groups SU(1,N) as their symmetry groups. One illustration of this construction is three-dimensional color spaces in which chroma properties are described by points on the unit disk. A combination of principal component analysis and the Perron-Frobenius theorem can be used to show that perspective projections map positive signals (i.e., functions with positive values) to a product of the positive half-axis and the unit ball. The representation theory (harmonic analysis) of the group SU(1,1) leads to an integral transform, the Mehler-Fock transform (MFT), that decomposes functions depending on the radial coordinate only into combinations of associated Legendre functions. This transformation is applied to kernel density estimators of probability distributions on the unit disk. It is shown that the transform separates the influence of the kernel function and the measured data. The application of the transform is illustrated by studying the statistical distribution of RGB vectors obtained from a common set of object points under different illuminants. Full article
(This article belongs to the Special Issue Information Geometry II)
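In one common convention (normalizations vary across the literature; this one is assumed, not quoted from the paper), the zero-order Mehler-Fock transform of a function f on [1, ∞) and its inverse are

    F(\tau) = \int_1^\infty f(x) P_{-1/2 + i\tau}(x) dx ,
    f(x) = \int_0^\infty \tau \tanh(\pi \tau) P_{-1/2 + i\tau}(x) F(\tau) d\tau ,

where the P_{-1/2 + i\tau} are conical (Mehler) functions, i.e., Legendre functions of complex degree.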

Open Access Article: Inconsistency of Template Estimation by Minimizing of the Variance/Pre-Variance in the Quotient Space
Entropy 2017, 19(6), 288; https://doi.org/10.3390/e19060288
Received: 27 April 2017 / Revised: 7 June 2017 / Accepted: 17 June 2017 / Published: 20 June 2017
PDF Full-text (340 KB) | HTML Full-text | XML Full-text
Abstract
We tackle the problem of template estimation when data have been randomly deformed under a group action in the presence of noise. In order to estimate the template, one often minimizes the variance when the influence of the transformations has been removed (computation of the Fréchet mean in the quotient space). The consistency bias is defined as the distance (possibly zero) between the orbit of the template and the orbit of an element which minimizes the variance. In the first part, we restrict ourselves to isometric group actions, in which case the Hilbertian distance is invariant under the group action. We establish an asymptotic behavior of the consistency bias which is linear with respect to the noise level. As a result, the inconsistency is unavoidable as soon as the noise level is large enough. In practice, template estimation with a finite sample is often done with an algorithm called “max-max”. In the second part, still for isometric actions but with a finite group, we show the convergence of this algorithm to an empirical Karcher mean. Our numerical experiments show that the bias observed in practice cannot be attributed to the small sample size or to a convergence problem, but is indeed due to the previously studied inconsistency. In the third part, we present some insights into the case of a distance that is not invariant under the group action. We will see that the inconsistency still holds as soon as the noise level is large enough. Moreover, we prove the inconsistency even when a regularization term is added. Full article
(This article belongs to the Special Issue Information Geometry II)
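In the notation suggested by the abstract (symbols assumed), the estimator and the bias under study can be written as

    \hat{t} \in \arg\min_t  E [ \min_{g \in G} \| t - g \cdot Y \|^2 ] ,
    bias = \inf_{g \in G} \| t_0 - g \cdot \hat{t} \| ,

i.e., the template estimate minimizes the variance after quotienting out the group action, and the consistency bias is the quotient distance between the orbit of the true template t_0 and the orbit of the minimizer.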

Open Access Article: Information Technology Project Portfolio Implementation Process Optimization Based on Complex Network Theory and Entropy
Entropy 2017, 19(6), 287; https://doi.org/10.3390/e19060287
Received: 4 April 2017 / Revised: 12 June 2017 / Accepted: 13 June 2017 / Published: 19 June 2017
PDF Full-text (6965 KB) | HTML Full-text | XML Full-text
Abstract
In traditional information technology project portfolio management (ITPPM), managers often pay more attention to the optimization of portfolio selection in the initial stage. In fact, during the portfolio implementation process, there are still issues to be optimized. Organizing cooperation enhances efficiency, although it brings more immediate risk due to the complex variety of links between projects. In order to balance efficiency and risk, an optimization method is presented based on complex network theory and entropy, which will assist portfolio managers in recognizing the structure of the portfolio and determining the cooperation range. Firstly, a complex network model for an IT project portfolio is constructed, in which each project is simulated as an artificial life agent. At the same time, the portfolio is viewed as a small-scale society. Following this, social network analysis is used to detect and divide communities in order to estimate the roles of projects across different communities. Based on these, efficiency and risk are measured using entropy and are balanced through searching for an adequate hierarchical community division. Thus, the activities of cooperation in organizations, risk management, and so on—which are usually viewed as an important art—can be discussed and conducted on the basis of quantitative calculations. Full article
(This article belongs to the Special Issue Complex Systems and Fractional Dynamics)
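As a purely illustrative sketch of two ingredients used here (community detection on a project network, plus a Shannon-entropy summary of the resulting division), with hypothetical edge data and off-the-shelf routines rather than the authors' agent-based model:

    import math
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Hypothetical project-dependency edges; weights model the strength of the links.
    edges = [("P1", "P2", 0.8), ("P2", "P3", 0.5), ("P3", "P4", 0.9),
             ("P4", "P5", 0.4), ("P1", "P5", 0.7), ("P2", "P5", 0.3)]
    G = nx.Graph()
    G.add_weighted_edges_from(edges)

    # Divide the portfolio into communities (candidate cooperation ranges).
    communities = greedy_modularity_communities(G, weight="weight")

    # Shannon entropy of the community-size distribution: one crude way to
    # quantify how evenly a division spreads projects (and hence risk).
    sizes = [len(c) for c in communities]
    total = sum(sizes)
    entropy = -sum(s / total * math.log2(s / total) for s in sizes)
    print(sizes, round(entropy, 3))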

Open Access Article: Assessing Probabilistic Inference by Comparing the Generalized Mean of the Model and Source Probabilities
Entropy 2017, 19(6), 286; https://doi.org/10.3390/e19060286
Received: 31 May 2017 / Revised: 10 June 2017 / Accepted: 12 June 2017 / Published: 19 June 2017
PDF Full-text (2284 KB) | HTML Full-text | XML Full-text
Abstract
An approach to the assessment of probabilistic inference is described which quantifies the performance on the probability scale. From both information theory and Bayesian theory, the central tendency of an inference is proven to be the geometric mean of the probabilities reported for the actual outcome, and is referred to as the “Accuracy”. Upper and lower error bars on the accuracy are provided by the arithmetic mean and the −2/3 mean. The arithmetic mean is called the “Decisiveness” due to its similarity with the cost of a decision, and the −2/3 mean is called the “Robustness” due to its sensitivity to outlier errors. Visualization of inference performance is facilitated by plotting the reported model probabilities versus the histogram-calculated source probabilities. The visualization of the calibration between model and source is summarized on both axes by the arithmetic, geometric, and −2/3 means. From information theory, the performance of the inference is related to the cross-entropy between the model and source distributions. Just as cross-entropy is the sum of the entropy and the divergence, the accuracy of a model can be decomposed into a component due to the source uncertainty and the divergence between the source and model. Translated to the probability domain, these quantities are plotted as the average model probability versus the average source probability. The divergence probability is the average model probability divided by the average source probability. When an inference is over/under-confident, the arithmetic mean of the model increases/decreases, while the −2/3 mean decreases/increases, respectively. Full article
(This article belongs to the Section Information Theory)
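Since the three summary statistics are ordinary power means, they are easy to reproduce; a minimal sketch (variable names illustrative, not from the paper):

    import numpy as np

    def power_mean(p, r):
        # Generalized (power) mean of probabilities p with exponent r;
        # the r -> 0 limit is the geometric mean.
        p = np.asarray(p, dtype=float)
        if r == 0:
            return float(np.exp(np.mean(np.log(p))))
        return float(np.mean(p ** r) ** (1.0 / r))

    # Probabilities a model reported for the outcomes that actually occurred.
    reported = [0.9, 0.7, 0.4, 0.85, 0.6]

    accuracy = power_mean(reported, 0)         # geometric mean: central tendency
    decisiveness = power_mean(reported, 1)     # arithmetic mean: upper error bar
    robustness = power_mean(reported, -2 / 3)  # -2/3 mean: lower error bar
    print(accuracy, decisiveness, robustness)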

Open Access Article: Modeling Multi-Event Non-Point Source Pollution in a Data-Scarce Catchment Using ANN and Entropy Analysis
Entropy 2017, 19(6), 265; https://doi.org/10.3390/e19060265
Received: 5 May 2017 / Revised: 7 June 2017 / Accepted: 7 June 2017 / Published: 19 June 2017
Cited by 2 | PDF Full-text (1708 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Event-based runoff–pollutant relationships have been the key to water quality management, but the scarcity of measured data results in poor model performance, especially for multiple rainfall events. In this study, a new framework was proposed for event-based non-point source (NPS) prediction and evaluation. An artificial neural network (ANN) was used to extend the runoff–pollutant relationship from complete data events to other data-scarce events. The interpolation method was then used to solve the problem of tail deviation in the simulated pollutographs. In addition, the entropy method was utilized to train the ANN for comprehensive evaluations. A case study was performed in the Three Gorges Reservoir Region, China. Results showed that the ANN performed well in the NPS simulation, especially for light rainfall events, and the phosphorus predictions were always more accurate than the nitrogen predictions under scarce data conditions. In addition, peak pollutant data scarcity had a significant impact on the model performance. Furthermore, traditional evaluation indicators would lead to certain information loss during model evaluation, whereas the entropy weighting method could provide a more accurate model evaluation. These results would be valuable for monitoring schemes and the quantitation of event-based NPS pollution, especially in data-poor catchments. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
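The entropy weighting mentioned at the end is, in its generic textbook form (not necessarily the exact procedure of this paper), computed from the Shannon entropy of each normalized evaluation indicator:

    import numpy as np

    def entropy_weights(X):
        # Rows = rainfall events, columns = evaluation indicators (hypothetical data).
        X = np.asarray(X, dtype=float)
        P = X / X.sum(axis=0)                    # normalize each indicator column
        n = X.shape[0]
        with np.errstate(divide="ignore"):
            logs = np.where(P > 0, np.log(P), 0.0)   # treat 0 * log(0) as 0
        E = -(P * logs).sum(axis=0) / np.log(n)  # entropy per indicator, in [0, 1]
        d = 1.0 - E                              # degree of diversification
        return d / d.sum()                       # entropy weights

    scores = [[0.8, 0.6, 0.7], [0.5, 0.9, 0.6],
              [0.7, 0.7, 0.8], [0.9, 0.5, 0.9]]
    print(entropy_weights(scores))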

Open Access Article: On the Simplification of Statistical Mechanics for Space Plasmas
Entropy 2017, 19(6), 285; https://doi.org/10.3390/e19060285
Received: 17 May 2017 / Revised: 15 June 2017 / Accepted: 16 June 2017 / Published: 18 June 2017
Cited by 1 | PDF Full-text (1195 KB) | HTML Full-text | XML Full-text
Abstract
Space plasmas are frequently described by kappa distributions. Non-extensive statistical mechanics involves the maximization of the Tsallis entropic form under the constraints of the canonical ensemble, considering also a dyadic formalism between the ordinary and escort probability distributions. This paper addresses the statistical origin of kappa distributions, and shows that they can be connected with non-extensive statistical mechanics without considering the dyadic formalism of ordinary/escort distributions. While this concept would significantly simplify the usage of the theory, it comes at the cost of a dyadic entropic formulation, needed to preserve the consistency between statistical mechanics and thermodynamics. Therefore, the simplification of the theory by means of avoiding the dyadic formalism is impossible within the framework of non-extensive statistical mechanics. Full article
(This article belongs to the collection Advances in Applied Statistical Mechanics)
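For orientation, one common parameterization of the kappa distribution over particle energy \varepsilon is (conventions differ across the literature; this is not necessarily the paper's)

    P(\varepsilon) \propto ( 1 + \varepsilon / (\kappa k_B T) )^{-(\kappa + 1)} ,

which recovers the Maxwell–Boltzmann factor e^{-\varepsilon / k_B T} in the limit \kappa \to \infty.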

Open Access Article: Multiscale Entropy Analysis of Unattended Oximetric Recordings to Assist in the Screening of Paediatric Sleep Apnoea at Home
Entropy 2017, 19(6), 284; https://doi.org/10.3390/e19060284
Received: 6 May 2017 / Revised: 12 June 2017 / Accepted: 14 June 2017 / Published: 17 June 2017
Cited by 1 | PDF Full-text (2045 KB) | HTML Full-text | XML Full-text
Abstract
Untreated paediatric obstructive sleep apnoea syndrome (OSAS) can severely affect the development and quality of life of children. In-hospital polysomnography (PSG) is the gold standard for a definitive diagnosis, though it is relatively unavailable and particularly intrusive. Nocturnal portable oximetry has emerged as a reliable technique for OSAS screening. Nevertheless, additional evidence is still needed. Our study is aimed at assessing the usefulness of multiscale entropy (MSE) to characterise oximetric recordings. We hypothesise that MSE could provide relevant information on blood oxygen saturation (SpO2) dynamics for the detection of childhood OSAS. In order to achieve this goal, a dataset composed of unattended SpO2 recordings from 50 children showing clinical suspicion of OSAS was analysed. SpO2 was parameterised by means of MSE and conventional oximetric indices. An optimum feature subset composed of five MSE-derived features and four conventional clinical indices was obtained using automated bidirectional stepwise feature selection. Logistic regression (LR) was used for classification. Our optimum LR model reached 83.5% accuracy (84.5% sensitivity and 83.0% specificity). Our results suggest that MSE provides relevant information from oximetry that is complementary to conventional approaches. Therefore, MSE may be useful to improve the diagnostic ability of unattended oximetry as a simplified screening test for childhood OSAS. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
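For orientation, the standard MSE recipe (coarse-graining followed by sample entropy, following Costa et al.; a generic sketch, not the authors' code) looks like:

    import numpy as np

    def sample_entropy(x, m, r):
        # SampEn(m, r): -log of the conditional probability that sequences that
        # match for m samples (Chebyshev distance < r) also match for m + 1.
        x = np.asarray(x, dtype=float)
        def matches(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            c = 0
            for i in range(len(t) - 1):
                c += np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) < r)
            return c
        B, A = matches(m), matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    def multiscale_entropy(x, max_scale=10, m=2):
        # Coarse-grain the series at each scale, then compute SampEn per scale,
        # keeping the tolerance fixed from the original series.
        x = np.asarray(x, dtype=float)
        r = 0.2 * np.std(x)
        curve = []
        for tau in range(1, max_scale + 1):
            n = len(x) // tau
            coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
            curve.append(sample_entropy(coarse, m, r))
        return curve

    rng = np.random.default_rng(0)
    print(multiscale_entropy(rng.normal(size=2000), max_scale=5))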

Open Access Article: LSTM-CRF for Drug-Named Entity Recognition
Entropy 2017, 19(6), 283; https://doi.org/10.3390/e19060283
Received: 1 April 2017 / Revised: 9 June 2017 / Accepted: 9 June 2017 / Published: 17 June 2017
Cited by 1 | PDF Full-text (279 KB) | HTML Full-text | XML Full-text
Abstract
Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of Information Extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenges each introduced a task aimed at the recognition of drug names. State-of-the-art DNER approaches rely heavily on hand-engineered features and domain-specific knowledge, which are difficult to collect and define. Therefore, we offer an approach that automatically explores word- and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embeddings, which are trained from a large amount of text, and character-based representations, which can capture orthographic features of words. Experimental results on the DDI2011 and DDI2013 datasets show the effectiveness of the proposed LSTM-CRF method. Our method outperforms the best system in the DDI2013 challenge. Full article
(This article belongs to the Section Information Theory)
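A minimal sketch of this architecture family in PyTorch (assumed framework; sizes and names are illustrative, and the CRF training loss via the forward algorithm is omitted): a bidirectional LSTM produces per-token emission scores, and a learned transition matrix is used for Viterbi decoding.

    import torch
    import torch.nn as nn

    class BiLSTMCRFTagger(nn.Module):
        def __init__(self, vocab_size, n_tags, emb=50, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
            self.emit = nn.Linear(2 * hidden, n_tags)               # per-token tag scores
            self.trans = nn.Parameter(torch.randn(n_tags, n_tags))  # trans[i, j]: score of tag i -> tag j

        def viterbi(self, tokens):
            # tokens: (1, T) tensor of word ids; returns the best tag sequence.
            emissions = self.emit(self.lstm(self.embed(tokens))[0])[0]  # (T, n_tags)
            score = emissions[0]
            back = []
            for e in emissions[1:]:
                total = score.unsqueeze(1) + self.trans + e.unsqueeze(0)
                score, idx = total.max(dim=0)    # best predecessor per tag
                back.append(idx)
            best = [int(score.argmax())]
            for idx in reversed(back):           # backtrack
                best.append(int(idx[best[-1]]))
            return best[::-1]

    model = BiLSTMCRFTagger(vocab_size=100, n_tags=3)
    print(model.viterbi(torch.randint(0, 100, (1, 7))))  # tag sequence for 7 tokens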

Open Access Feature Paper Article: Correntropy-Based Pulse Rate Variability Analysis in Children with Sleep Disordered Breathing
Entropy 2017, 19(6), 282; https://doi.org/10.3390/e19060282
Received: 6 May 2017 / Revised: 7 June 2017 / Accepted: 9 June 2017 / Published: 16 June 2017
PDF Full-text (460 KB) | HTML Full-text | XML Full-text
Abstract
Pulse rate variability (PRV), an alternative measure of heart rate variability (HRV), is altered during obstructive sleep apnea. Correntropy spectral density (CSD) is a novel spectral analysis that includes nonlinear information. We recruited 160 children and recorded SpO2 and photoplethysmography (PPG), alongside standard polysomnography. PPG signals were divided into 1-min epochs, and apnea/hypopnea (A/H) epochs were labeled. CSD was applied to the pulse-to-pulse interval time series (PPIs), and five features were extracted: the total spectral power (TP: 0.01–0.6 Hz), the power in the very low frequency band (VLF: 0.01–0.04 Hz), the normalized power in the low and high frequency bands (LFn: 0.04–0.15 Hz, HFn: 0.15–0.6 Hz), and the LF/HF ratio. Nonlinearity was assessed with the surrogate data technique. Multivariate logistic regression models were developed for CSD and power spectral density (PSD) analysis to detect epochs with A/H events. The CSD-based features and model identified epochs with and without A/H events more accurately than PSD-based analysis (area under the curve (AUC) 0.72 vs. 0.67), owing to the nonlinearity of the data. In conclusion, CSD-based PRV analysis provided enhanced performance in detecting A/H epochs; however, a combination with overnight SpO2 analysis is suggested for optimal results. Full article
(This article belongs to the Special Issue Entropy and Sleep Disorders)
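For orientation, correntropy-based spectra are typically estimated as the Fourier transform of the centered correntropy function; a generic sketch with a Gaussian kernel (not the study's exact pipeline):

    import numpy as np

    def centered_correntropy(x, max_lag, sigma=None):
        # V[t] = E[k_sigma(x_n - x_{n+t})] minus the mean kernel value over all
        # pairs (the centering term), with a Gaussian kernel k_sigma.
        x = np.asarray(x, dtype=float)
        if sigma is None:
            sigma = 1.06 * np.std(x) * len(x) ** (-1 / 5)  # Silverman's rule
        k = lambda d: np.exp(-d**2 / (2 * sigma**2))
        mean_k = np.mean(k(x[:, None] - x[None, :]))
        v = np.array([np.mean(k(x[:len(x) - t] - x[t:])) for t in range(max_lag + 1)])
        return v - mean_k

    def csd(x, max_lag=64):
        # Correntropy spectral density: FFT of the centered correntropy,
        # symmetrized over lags as for an autocorrelation sequence.
        v = centered_correntropy(x, max_lag)
        v_full = np.concatenate([v[:0:-1], v])
        return np.abs(np.fft.rfft(v_full))

    rng = np.random.default_rng(1)
    print(csd(np.sin(0.3 * np.arange(256)) + 0.1 * rng.normal(size=256))[:8])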

Open Access Article: An Enhanced Set-Membership PNLMS Algorithm with a Correntropy Induced Metric Constraint for Acoustic Channel Estimation
Entropy 2017, 19(6), 281; https://doi.org/10.3390/e19060281
Received: 30 April 2017 / Revised: 11 June 2017 / Accepted: 13 June 2017 / Published: 15 June 2017
Cited by 1 | PDF Full-text (2740 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used for constructing a new cost function within the kernel framework. The proposed CIM-penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A desired zero-attraction term is put forward in the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients to zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application II)
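In its usual definition (notation assumed here), the correntropy induced metric between a weight vector w and the origin, built from a Gaussian kernel of width \sigma, is

    CIM(w, 0) = [ \kappa_\sigma(0) - (1/N) \sum_{i=1}^N \kappa_\sigma(w_i) ]^{1/2} ,
    \kappa_\sigma(w) = \exp( -w^2 / (2 \sigma^2) ) / ( \sqrt{2\pi} \sigma ) ,

which for small \sigma behaves like an l0-style count of the nonzero taps, making it a natural sparsity-promoting penalty in the cost function.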

Open Access Article: Approximation of Stochastic Quasi-Periodic Responses of Limit Cycles in Non-Equilibrium Systems under Periodic Excitations and Weak Fluctuations
Entropy 2017, 19(6), 280; https://doi.org/10.3390/e19060280
Received: 17 May 2017 / Revised: 14 June 2017 / Accepted: 14 June 2017 / Published: 15 June 2017
PDF Full-text (5968 KB) | HTML Full-text | XML Full-text
Abstract
A semi-analytical method is proposed to approximately calculate the stochastic quasi-periodic responses of limit cycles in non-equilibrium dynamical systems excited by periodic forces and weak random fluctuations. First, a kind of 1/N-stroboscopic map is introduced to discretize the quasi-periodic torus into closed curves, which are then approximated by periodic points. Using a stochastic sensitivity function for discrete-time systems, the transverse dispersion of these closed curves can be quantified. Furthermore, combined with the longitudinal distribution of the curves, the probability density function of these closed curves in the stroboscopic sections can be determined. The validity of this approach is shown through a van der Pol oscillator and a Brusselator. Full article
(This article belongs to the Special Issue Complex Systems, Non-Equilibrium Dynamics and Self-Organisation)
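As a purely illustrative sketch of 1/N-stroboscopic sampling of a periodically forced, weakly noisy van der Pol oscillator (Euler-Maruyama integration; all parameter values are arbitrary, not the paper's):

    import numpy as np

    def strobe_van_der_pol(mu=1.0, A=0.5, omega=1.1, eps=0.01,
                           N=8, periods=400, steps_per_period=800, seed=0):
        # Integrate the forced van der Pol oscillator with weak additive noise
        # and sample the state N times per forcing period (1/N-stroboscopic map).
        rng = np.random.default_rng(seed)
        T = 2 * np.pi / omega
        dt = T / steps_per_period
        x, y, t = 1.0, 0.0, 0.0
        sections = [[] for _ in range(N)]       # one point cloud per section
        for step in range(periods * steps_per_period):
            dx = y
            dy = mu * (1 - x**2) * y - x + A * np.cos(omega * t)
            x += dx * dt
            y += dy * dt + eps * np.sqrt(dt) * rng.normal()  # weak noise
            t += dt
            if (step + 1) % (steps_per_period // N) == 0:
                sections[((step + 1) // (steps_per_period // N) - 1) % N].append((x, y))
        return [np.array(s) for s in sections]

    secs = strobe_van_der_pol()
    print([s.shape for s in secs])  # N point clouds tracing the stochastic torus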

Open Access Article: Entropy Generation Rates through the Dissipation of Ordered Regions in Helium Boundary-Layer Flows
Entropy 2017, 19(6), 278; https://doi.org/10.3390/e19060278
Received: 23 March 2017 / Revised: 9 June 2017 / Accepted: 12 June 2017 / Published: 15 June 2017
Cited by 1 | PDF Full-text (5791 KB) | HTML Full-text | XML Full-text
Abstract
The results of the computation of entropy generation rates through the dissipation of ordered regions within selected helium boundary layer flows are presented. Entropy generation rates in helium boundary layer flows for five cases of increasing temperature and pressure are considered. The basic format of a turbulent spot is used as the flow model. Statistical processing of the time-dependent series solutions of the nonlinear, coupled Lorenz-type differential equations for the spectral velocity wave components in the three-dimensional boundary layer configuration yields the local volumetric entropy generation rates. Extension of the computational method to the transition from laminar to fully turbulent flow is discussed. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics)

Open Access Article: Weak Fault Diagnosis of Wind Turbine Gearboxes Based on MED-LMD
Entropy 2017, 19(6), 277; https://doi.org/10.3390/e19060277
Received: 12 April 2017 / Revised: 2 June 2017 / Accepted: 7 June 2017 / Published: 15 June 2017
Cited by 3 | PDF Full-text (2861 KB) | HTML Full-text | XML Full-text
Abstract
In view of the problem that the fault signal of a rolling bearing is weak and its fault features are difficult to extract in a strong noise environment, a method based on minimum entropy deconvolution (MED) and local mean decomposition (LMD) is proposed to extract the weak fault features of rolling bearings. Through the analysis of a simulated signal, we find that LMD alone has many limitations for the feature extraction of weak signals under strong background noise. In order to eliminate the noise interference and extract the characteristics of the weak fault, MED is employed as a pre-filter to remove noise. The method is applied to the weak fault feature extraction of rolling bearings as follows: MED is used to denoise the signals from the wind turbine gearbox test bench under strong background noise; the LMD method then decomposes the denoised signals into several product functions (PFs); and finally the strongly correlated PF components are analyzed by a cyclic autocorrelation function. The finding is that the failure of the wind power gearbox originates from the micro-bending of the high-speed shaft and the pitting of the #10 bearing outer race at the output end of the high-speed shaft. The method is compared with LMD alone, which shows its effectiveness. This paper provides a new method for the extraction of multiple faults and weak features under strong background noise. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory III)
