Article

A Low-Delay Lightweight Recurrent Neural Network (LLRNN) for Rotating Machinery Fault Diagnosis

Wenkai Liu, Ping Guo and Lian Ye
1 Chongqing Key Laboratory of Software Theory and Technology, Chongqing University, Chongqing 400044, China
2 College of Computer Science, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(14), 3109; https://doi.org/10.3390/s19143109
Submission received: 15 May 2019 / Revised: 18 June 2019 / Accepted: 5 July 2019 / Published: 14 July 2019
(This article belongs to the Special Issue Sensors for Fault Diagnosis)

Abstract

Fault diagnosis is critical to ensuring the safety and reliable operation of rotating machinery systems. Long short-term memory networks (LSTMs) have received a great deal of attention in this field. However, most LSTM-based fault diagnosis methods have too many parameters and too much computation, resulting in large memory occupancy and high calculation delay. Thus, this paper proposes a low-delay lightweight recurrent neural network (LLRNN) model for mechanical fault diagnosis, based on a special LSTM cell structure with only a forget gate. The input vibration signal is segmented into several shorter sub-signals in order to shorten the length of the time sequence. These sub-signals are then fed into the network directly and converted into the final diagnostic results without any manual participation. Compared with some existing methods, our experiments illustrate that the proposed method occupies less memory and has a lower computational delay while maintaining the same level of accuracy.

1. Introduction

Mechanical fault diagnosis, which analyses data collected by sensors to predict the health of mechanical systems, has become a research hotspot in industry [1,2]. The existing methods can be roughly divided into three categories: physics-based "white-box" models, data-driven artificial intelligence (AI) methods ("black-box" models), and combinations of the two, referred to as "grey-box" models. The performance of physics-based models depends heavily on the quality of domain knowledge about the practical mechanical system. In reality, mechanical equipment works in complex production environments, and the collected data are heavily disturbed by various sources of noise; models built for an idealised environment may not work under such conditions. A data-driven model, in contrast, can update its parameters using real-time data [3]. This flexibility has made data-driven models the focus of fault diagnosis research. Although white-box models depend on the quality of domain knowledge, this additional information can reduce the solution space and enhance the performance of black-box models, and some grey-box models have achieved relatively good results in fault diagnosis. Zhou et al. [4] used neighbourhood components analysis to reduce the dimensionality of the original features, then applied a coupled hidden Markov model (CHMM) to bearing fault diagnosis. Jung et al. [5] exploited the multi-scale energy analysis of the discrete wavelet transform to obtain a low-dimensional feature subset as the input of a k-nearest neighbours (k-NN) algorithm for bearing fault classification. Gangsar et al. [6] proposed a method for the fault diagnosis of induction motors combining the wavelet packet transform (WPT) and a support vector machine (SVM).
Traditional AI methods are shallow models that work on the original feature representation without creating new features during the learning process [7], so it is difficult for them to effectively characterise the inherent non-linear relationships of complex mechanical systems. In order to further improve diagnostic performance, deep learning has recently been applied to mechanical fault diagnosis [8]. Deep learning can automatically extract high-level features from the original data through hierarchical learning and achieve end-to-end learning without any manual participation. Compared with traditional AI methods, deep learning methods depend less on feature-design knowledge. At present, three main methods are widely used in the field of mechanical fault diagnosis: the automatic encoder (AutoEncoder), the convolutional neural network (CNN), and the recurrent neural network (RNN). The AutoEncoder can learn rich representation features and reduce data dimensionality, and has received a great deal of attention in this research field. Ahmed et al. [9] proposed an unsupervised feature learning algorithm using stacked AutoEncoders to learn feature representations from compressed measurements. Lu et al. [10] presented a detailed empirical study of stacked denoising AutoEncoders with three hidden layers for fault diagnosis. Junbo et al. [11] used a digital wavelet frame and a nonlinear soft-threshold method to process vibration signals, and applied an AutoEncoder to the preprocessed signals for roller bearing fault diagnosis. However, adjusting the model parameters requires a large amount of data and time, and it is difficult to judge whether the learned features are related to the target task.
CNNs can automatically extract features from data without any hand-crafted design, and have been widely applied in fault diagnosis. Wen et al. [12] transformed a raw signal into a square matrix through non-overlapping cutting, normalised the values to 0–255, and treated the result directly as an image for a 2D LeNet-5 predicting faults of gears and bearings. Liu et al. [13] cut the raw signal into a square matrix by varying the interval distance and used it directly to train a 2D-CNN for fault detection. Sun et al. [14] used multi-scale information extracted by the dual-tree complex wavelet transform (DT-CWT) to form a matrix and combined it with a CNN for gear fault diagnosis. Guo et al. [15] proposed a novel diagnosis method in which a CNN directly classifies a continuous wavelet transform scalogram (CWTS). CNNs can effectively extract local features and process high-dimensional data, but most CNN architectures are heavily over-parametrised and computationally complex, taking a long time to train [16].
The initial parameter values also affect the final performance of a CNN. Moreover, CNNs are unable to remember historical information: when predicting, a CNN must process historical input data repeatedly, bringing unnecessary computational cost. The RNN is a framework for processing sequence data. It combines the features of historical data with the current input, extracts new feature information, and makes decisions. This processing style is well suited to mechanical fault diagnosis: RNNs can memorise historical information and need to process each input only once, so they can monitor the health status of mechanical systems in real time. However, the vanishing gradient problem during the back-propagation of model training hinders the performance of RNNs, meaning that traditional RNNs may not capture long-term dependencies. Long short-term memory networks (LSTMs), which can extract long-term dependence features, cleverly avoid the vanishing gradient problem through a gate mechanism and have been successfully applied in various fields, including image captioning [17], speech recognition [18], and natural language processing [19]. Nevertheless, LSTMs are still difficult to train when the data sequence is too long, and LSTM-based methods suffer from excessive parameters and computational complexity. The LSTM structure can be further simplified to reduce the calculation time.

To solve the above problems, this paper proposes a low-delay lightweight recurrent neural network (LLRNN) model for mechanical fault diagnosis, making two main contributions: (1) the design of a lightweight network structure based on a special LSTM cell with only a forget gate, reducing the parameters and calculation of the network; (2) a study of the influence of the step length (i.e., the length $D_x$ of each sub-segment of a sequence signal, as shown in Figure 1) and the step number (the number $L/D_x$ of sub-segments of a sequence signal, as shown in Figure 1) on the performance of the model, including accuracy, noise immunity, and calculation delay, based on the characteristics of the vibration signal. Two bearing data sets, provided by Case Western Reserve University (CWRU)'s Bearing Data Center and the Center for Intelligent Maintenance Systems (IMS), University of Cincinnati, respectively, were used to verify the performance of the proposed algorithm. Compared with LSTM-based methods and some CNN-based models, in our experiments the proposed algorithm took up less storage space and had a shorter calculation delay at the same accuracy and noise immunity, making it more suitable for real-time fault diagnosis.
The rest of this paper is arranged as follows. Section 2 reviews RNN variants and their applications. Section 3 presents the LLRNN and its analysis. Section 4 illustrates the experimental results on two data sets. Finally, Section 5 provides the conclusion.

2. Related Work

2.1. Application of RNN in Fault Detection

RNNs are mainly used to process sequence data because they can store the feature information of historical data in an internal state (i.e., memory). They then combine the current input data with this memory and extract new features, as shown in Figure 2a. RNNs can be trained via backpropagation through time, but the vanishing gradient problem during backpropagation makes it difficult to capture long-term dependencies. The LSTM is an efficient RNN variant designed to solve this problem: it avoids the long-term dependence problem through a gate mechanism and can extract long-term dependent feature information effectively [20]. The LSTM cell structure is shown in Figure 2b, and the calculation process is shown in Equation (1):
$$
\begin{aligned}
f_t &= \sigma(W_{fh} \cdot h_{t-1} + W_{fx} \cdot x_t + b_f) \\
i_t &= \sigma(W_{ih} \cdot h_{t-1} + W_{ix} \cdot x_t + b_i) \\
g_t &= \tanh(W_{gh} \cdot h_{t-1} + W_{gx} \cdot x_t + b_g) \\
o_t &= \sigma(W_{oh} \cdot h_{t-1} + W_{ox} \cdot x_t + b_o) \\
C_t &= f_t \odot C_{t-1} + i_t \odot g_t \\
h_t &= o_t \odot \tanh(C_t)
\end{aligned}
\tag{1}
$$
where $f_t$ denotes the forget gate, $i_t$ the input gate, $o_t$ the output gate, and $g_t$ the new candidate memory state; $h_{t-1}$ and $h_t$ denote the hidden states of the previous and current moments; $x_t$ denotes the input of the current moment; $C_{t-1}$ and $C_t$ denote the memory states of the previous and current moments; $W_{*h}$ and $W_{*x}$ denote the parameters related to $h_{t-1}$ and $x_t$ in the corresponding gate; and $b_*$ denotes the bias of the corresponding gate. "$\cdot$" and "$\odot$" denote matrix multiplication and element-wise multiplication, respectively.
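To make the gate arithmetic concrete, the following is a minimal NumPy sketch of a single LSTM step as defined in Equation (1). It is an illustration only: the weight layout (each W[k] stores the $W_{kh}$ and $W_{kx}$ blocks side by side) and the toy dimensions are our own assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step per Equation (1). W[k]: (D_h, D_h + D_x), b[k]: (D_h,)."""
    z = np.concatenate([h_prev, x_t])   # [h_{t-1}; x_t]
    f = sigmoid(W["f"] @ z + b["f"])    # forget gate f_t
    i = sigmoid(W["i"] @ z + b["i"])    # input gate i_t
    g = np.tanh(W["g"] @ z + b["g"])    # candidate memory g_t
    o = sigmoid(W["o"] @ z + b["o"])    # output gate o_t
    c_t = f * c_prev + i * g            # memory state C_t
    h_t = o * np.tanh(c_t)              # hidden state h_t
    return h_t, c_t

# Toy dimensions, illustrative only.
D_x, D_h = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(D_h, D_h + D_x)) for k in "figo"}
b = {k: np.zeros(D_h) for k in "figo"}
h_t, c_t = lstm_step(rng.normal(size=D_x), np.zeros(D_h), np.zeros(D_h), W, b)
```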
The gate mechanism of the LSTM is partly redundant, causing excessive parameters and calculation. To address this, researchers have proposed several variants [21,22,23,24]. The gated recurrent unit (GRU) [24], the most successful variant, combines the forget gate and the input gate into an update gate and merges the memory state with the hidden state. The GRU cell structure is shown in Figure 2c, and the calculation process is shown in Equation (2):
$$
\begin{aligned}
z_t &= \sigma(W_{zh} \cdot h_{t-1} + W_{zx} \cdot x_t + b_z) \\
r_t &= \sigma(W_{rh} \cdot h_{t-1} + W_{rx} \cdot x_t + b_r) \\
g_t &= \tanh(W_{gh} \cdot (r_t \odot h_{t-1}) + W_{gx} \cdot x_t + b_g) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot g_t
\end{aligned}
\tag{2}
$$
where $r_t$ denotes the reset gate, controlling how strongly $h_{t-1}$ influences $g_t$, and $z_t$ denotes the update gate, controlling the update of the memory state. The GRU can achieve performance comparable to the LSTM with one fewer gate.
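A matching NumPy sketch of one GRU step per Equation (2), reusing the sigmoid helper from the LSTM sketch above (dimensions again illustrative). Note the serial dependency inside the cell: $g_t$ cannot be computed until $r_t$ is available, a detail that becomes relevant to the latency comparison in Section 4.2.2.

```python
def gru_step(x_t, h_prev, W, b):
    """One GRU step per Equation (2). W[k]: (D_h, D_h + D_x), b[k]: (D_h,)."""
    zx = np.concatenate([h_prev, x_t])
    z = sigmoid(W["z"] @ zx + b["z"])   # update gate z_t
    r = sigmoid(W["r"] @ zx + b["r"])   # reset gate r_t
    # g_t must wait for r_t: the reset gate masks h_{t-1} first.
    g = np.tanh(W["g"] @ np.concatenate([r * h_prev, x_t]) + b["g"])
    return (1.0 - z) * h_prev + z * g   # new hidden state h_t
```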
LSTMs have been successfully applied in many fields and have also received much attention in mechanical fault detection. Zhao et al. [25] combined a CNN with a bi-directional LSTM to propose a novel machine health monitoring system for tool wear prediction. Park et al. [26] employed an LSTM model on an edge device for industrial robot manipulator fault detection. Yuan et al. [27] applied an LSTM to aero engine fault diagnosis and remaining useful life (RUL) prediction. Zhang et al. [28] combined LSTM with Monte Carlo simulation for the RUL prediction of lithium-ion batteries. Cui et al. [29] combined the fast Fourier transform (FFT) and an RNN for bearing fault diagnosis. Wang et al. [30] applied LSTM to gear fault diagnosis. Liu et al. [31] proposed a GRU-based method for rolling bearing fault diagnosis by comparing the reconstruction errors generated from multi-dimensional time-sequence data. This paper proposes a lightweight, low-delay model based on a special LSTM cell structure for rotating machinery fault diagnosis.

2.2. Introduction of JANET

Der Westhuizen et al. [32] proposed a new LSTM cell structure with only a forget gate, namely "just another network" (JANET). Not only does JANET provide computational savings, but it also outperforms the standard LSTM on multiple benchmark data sets and competes with some of the best contemporary models. The JANET cell structure is shown in Figure 3, and the calculation process is shown in Equation (3):
$$
\begin{aligned}
f_t &= \sigma(W_{fh} \cdot h_{t-1} + W_{fx} \cdot x_t + b_f) \\
g_t &= \tanh(W_{gh} \cdot h_{t-1} + W_{gx} \cdot x_t + b_g) \\
C_t &= f_t \odot C_{t-1} + (1 - f_t) \odot g_t \\
h_t &= C_t
\end{aligned}
\tag{3}
$$
JANET retains the most important gate of the LSTM, the forget gate $f_t$. Since $f_t$ determines which information should be discarded, the information that $f_t$ does not suggest dropping should be retained; therefore, $(1 - f_t)$ is approximately regarded as the input gate $i_t$, eliminating the parameters and calculation of the input gate. In the LSTM, the output gate selects the useful information in the memory state $C_t$ and passes it to the hidden state $h_t$; in fact, this task can be handed over to the forget gate of the next moment. Based on this idea, JANET removes the output gate and merges the hidden state $h_t$ with the memory state $C_t$, just like the GRU. The result is a simpler structure with less calculation.
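Under the same conventions, a JANET step per Equation (3) needs only two gate computations; here is a minimal sketch, again reusing the helpers above. Because $h_t = C_t$, a single merged state serves as both the hidden state and the memory.

```python
def janet_step(x_t, h_prev, W, b):
    """One JANET step per Equation (3): (1 - f_t) stands in for the input gate."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])    # forget gate f_t
    g = np.tanh(W["g"] @ z + b["g"])    # candidate memory g_t
    return f * h_prev + (1.0 - f) * g   # C_t, which is also h_t
```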

3. The Proposed Method

3.1. Model Structure

In this paper, a low-delay lightweight recurrent neural network (LLRNN) model for rotating machinery fault diagnosis is designed based on the JANET cell; the overall flowchart is shown in Figure 4a. The input vibration signal is segmented into several shorter sub-signals, which are then sent to the network directly and converted into the final classification results, as shown in Figure 4b. As an end-to-end model, the proposed model turns the data collected by sensors into the desired prediction results without any manual participation, so it can be used for real-time monitoring. It offers two improvements: (1) being based on a simpler cell structure with fewer parameters, it consumes less storage space while maintaining performance; (2) segmenting the signal before sending it into the network not only reduces the training difficulty of the network, but also improves its noise immunity.
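The pipeline can be sketched end to end in TensorFlow/Keras, the framework listed in Section 4. The code below is a minimal illustration, not the authors' released implementation: the custom JANETCell follows Equation (3); the shapes assume L = 1024, $D_x$ = 64 (hence steps = 16), $D_h$ = 128, two layers, and the 10 CWRU classes; and the forget-gate bias is simply initialised to ones rather than with the chrono initialisation recommended in [32].

```python
import tensorflow as tf

class JANETCell(tf.keras.layers.Layer):
    """Minimal JANET cell (Equation (3)) for use with tf.keras.layers.RNN."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units          # h_t and C_t are merged into one state

    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W_f = self.add_weight(name="W_f", shape=(d + self.units, self.units))
        self.b_f = self.add_weight(name="b_f", shape=(self.units,),
                                   initializer="ones")    # keep the gate open early
        self.W_g = self.add_weight(name="W_g", shape=(d + self.units, self.units))
        self.b_g = self.add_weight(name="b_g", shape=(self.units,),
                                   initializer="zeros")

    def call(self, inputs, states):
        h_prev = states[0]
        z = tf.concat([h_prev, inputs], axis=-1)
        f = tf.sigmoid(tf.matmul(z, self.W_f) + self.b_f)
        g = tf.tanh(tf.matmul(z, self.W_g) + self.b_g)
        h = f * h_prev + (1.0 - f) * g
        return h, [h]

# Hypothetical end-to-end model: raw signal -> segments -> stacked JANET -> classes.
model = tf.keras.Sequential([
    tf.keras.layers.Reshape((16, 64), input_shape=(1024,)),  # steps=16 x D_x=64
    tf.keras.layers.RNN(JANETCell(128), return_sequences=True),
    tf.keras.layers.RNN(JANETCell(128)),                     # layer = 2, D_h = 128
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```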

3.2. Network Structure

By using the output of the previous layer as the input of the next layer, RNN layers can be stacked to form a deeper structure similar to a CNN, as shown in Figure 5a. This improves the feature extraction ability of the model, and the learned features carry more semantic information; however, it also brings a risk of over-fitting and more calculation. After logical unrolling, the actual calculation process of an $l$-layer RNN is described in Figure 5b. The length of each $x_t$ is $D_x$, and $T = steps = L/D_x$. In the subsequent experiments, the effects of these parameters on the performance of the model are verified, including accuracy, noise immunity, and computational delay.

3.3. Model Analysis

3.3.1. The Parameters and Calculation of Different Cell Structures

As shown in Equation (3), the calculation process of each gate is essentially a fully connected layer. Let $D_h$ denote the number of hidden units in each gate and $D_x$ denote the input data dimension. The parameters and calculation cost of a single gate are defined in Equations (4) and (5):

$$Params_{gate} = (D_h + D_x) \cdot D_h + D_h \tag{4}$$

$$FLOPs_{gate} = 2 \cdot (D_h + D_x) \cdot D_h \tag{5}$$

where $Params_{gate}$ and $FLOPs_{gate}$ (FLOPs: floating-point operations) denote the parameter count and calculation amount of each gate, respectively.
The parameter count of a cell is the sum of those of all the gates inside it. The LSTM has four gates, the GRU has three, and JANET has only two; their parameter counts are shown in Equations (6)–(8). JANET has only half the parameters of the LSTM, and two-thirds those of the GRU:

$$Params_{LSTM} = 4 \cdot Params_{gate} \tag{6}$$

$$Params_{GRU} = 3 \cdot Params_{gate} \tag{7}$$

$$Params_{JANET} = 2 \cdot Params_{gate} \tag{8}$$
The calculation of the cell at each moment equals the sum of the calculations of all its gates, plus the cost of the information interactions between the gates, which are essentially pointwise addition and multiplication operations. The LSTM has four such pointwise interaction operations, the GRU has five, and JANET has four. The total calculations are shown in Equations (9)–(11):

$$FLOPs_{LSTM} = (4 \cdot FLOPs_{gate} + 4 \cdot D_h) \cdot steps \tag{9}$$

$$FLOPs_{GRU} = (3 \cdot FLOPs_{gate} + 5 \cdot D_h) \cdot steps \tag{10}$$

$$FLOPs_{JANET} = (2 \cdot FLOPs_{gate} + 4 \cdot D_h) \cdot steps \tag{11}$$

where $steps$ denotes the number of sub-segments of a sequence signal, and $steps = L/D_x$, as shown in Figure 1.
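The counts in Equations (4)–(11) are easy to reproduce in a few lines of Python. As a sanity check, with $D_h$ = 128, $D_x$ = 64, and L = 1024 (so steps = 16), and assuming 4-byte floating-point parameters (our assumption), the script below gives roughly 193 KB and 1.58 × 10⁶ FLOPs for JANET, 289 KB and 2.37 × 10⁶ for the GRU, and 386 KB and 3.15 × 10⁶ for the LSTM, matching the corresponding rows of Table 6.

```python
def gate_params(dh, dx):
    """Equation (4): parameters of a single gate."""
    return (dh + dx) * dh + dh

def gate_flops(dh, dx):
    """Equation (5): FLOPs of a single gate."""
    return 2 * (dh + dx) * dh

def cell_costs(kind, dh, dx, L):
    """Parameters (Equations (6)-(8)) and FLOPs (Equations (9)-(11)) per cell."""
    steps = L // dx
    gates = {"LSTM": 4, "GRU": 3, "JANET": 2}[kind]
    interactions = {"LSTM": 4, "GRU": 5, "JANET": 4}[kind]   # pointwise ops
    params = gates * gate_params(dh, dx)
    flops = (gates * gate_flops(dh, dx) + interactions * dh) * steps
    return params, flops

for kind in ("LSTM", "GRU", "JANET"):
    p, f = cell_costs(kind, dh=128, dx=64, L=1024)
    print(f"{kind:5s}: {p * 4 / 1024:6.1f} KB, {f / 1e6:.2f}e6 FLOPs")
# LSTM :  386.0 KB, 3.15e6 FLOPs
# GRU  :  289.5 KB, 2.37e6 FLOPs
# JANET:  193.0 KB, 1.58e6 FLOPs
```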

3.3.2. Analysis of Step Length

As shown in Equations (8) and (11), $D_h$ and $D_x$ affect both the calculation and the parameters, while $steps$ affects only the calculation. When the length $L$ of the input signal is fixed, $steps = L/D_x$, and substituting this into Equations (8) and (11) yields Equations (12) and (13):

$$Params_{JANET} = 2 \cdot D_h \cdot D_x + 2 \cdot D_h \cdot (D_h + 1) \tag{12}$$

$$FLOPs_{JANET} = \frac{4 \cdot D_h \cdot (D_h + 1)}{D_x} \cdot L + 4 \cdot D_h \cdot L \tag{13}$$
Equation (12) shows that the total number of parameters is positively correlated with $D_x$: the parameter count and storage occupancy increase with $D_x$. In contrast, Equation (13) shows that the total calculation is negatively correlated with $D_x$: increasing $D_x$ reduces the calculation. Besides the amount of computation, the waiting time caused by data dependencies also affects the total computing time of the model. RNNs suffer a serious calculation delay when processing long sequences, because the calculation of $h_t$ must wait for $h_{t-1}$, a dependency that the parallel computing power of the GPU cannot remove. When training the network on a vibration signal $x = \{x_1, x_2, \dots, x_t, \dots, x_L\}$, if every point is used as the input of one moment, the model takes a long time to process such a long sequence and becomes difficult to train. In addition, a single-point input carries too little effective information, and the extracted features are seriously disturbed by noise: noise interference changes not only the amplitude information of the signal but may even change the direction information of the vibration, reducing the accuracy of the algorithm.
To solve these problems, we segment the vibration signal of length $L$ into $steps$ sub-signals of length $D_x$, as shown in Figure 1. When $D_x$-point vibration information is used as the input of each moment, the step number of the time sequence is reduced from $L$ to $L/D_x$, and the total calculation decreases according to Equation (13).
Both the fault shock signal and the noise signal are superimposed on the normal signal. The operation of each gate in the cell is a vector multiplied by a matrix, which is essentially a weighted summation over the points of the input; like a mean operation, this has a certain smoothing effect, as shown in Equation (14):

$$
\begin{aligned}
S_{data} &= S_{vibration} + S_{noise} \\
S_{vibration} &= S_{normal} + S_{fault} \\
Operation_{gate} &= \sum_{i}^{D_x} w(i) \cdot S_{normal}(i) + \sum_{i}^{D_x} w(i) \cdot S_{fault}(i) + \sum_{i}^{D_x} w(i) \cdot S_{noise}(i)
\end{aligned}
\tag{14}
$$

where $S_{data}$ denotes the input data signal; $S_{vibration}$ and $S_{noise}$ denote the vibration and noise components of $S_{data}$; and $S_{normal}$ and $S_{fault}$ denote the normal and fault components of $S_{vibration}$. Most noise follows a zero-mean distribution, with $E[S_{noise}] = 0$; this means that the larger $D_x$ is, the closer $\sum_{i}^{D_x} w(i) \cdot S_{noise}(i)$ is to 0. Although $E[S_{fault}] \neq 0$, $S_{fault}$ is 0 for the most part, except at the moments when a fault shock happens. If $D_x$ is too large, $\sum_{i}^{D_x} w(i) \cdot S_{fault}(i)$ will also be very small, making it difficult for the model to extract fault features. Consequently, $D_x$ should be appropriate, neither too large nor too small. These analyses are verified in the subsequent experiments.
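The smoothing argument of Equation (14) can be checked numerically. In the toy NumPy experiment below (uniform weights stand in for the learned gate weights, all values illustrative), the weighted sum of a zero-mean noise segment shrinks roughly as $1/\sqrt{D_x}$, while a single fault impulse in the segment is diluted as $1/D_x$, which is why an overly large $D_x$ also washes out the fault feature.

```python
import numpy as np

rng = np.random.default_rng(1)
for dx in (1, 8, 64, 512):
    w = np.full(dx, 1.0 / dx)                 # uniform stand-in for gate weights
    noise = rng.normal(0.0, 1.0, size=(10000, dx))
    noise_term = np.abs(noise @ w).mean()     # shrinks roughly as 1/sqrt(dx)
    fault = np.zeros(dx)
    fault[0] = 1.0                            # a single fault impulse
    fault_term = abs(float(fault @ w))        # diluted exactly as 1/dx
    print(f"D_x = {dx:4d}: noise term ~ {noise_term:.3f}, fault term = {fault_term:.4f}")
```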

4. Experiment and Analysis

Two bearing data sets were used to verify the performance of the proposed algorithm and the previous analyses. They were provided by Case Western Reserve University (CWRU)'s Bearing Data Center [33] and the Center for Intelligent Maintenance Systems (IMS), University of Cincinnati [34]. The hardware of the computing device used for the experiments was as follows: CPU: Intel Core i7-8700K, 3.7 GHz, six cores/twelve threads; GPU: NVIDIA 1080 Ti, 11 GB × 2; memory: 32 GB; storage: 2 TB.
The software environment was as follows: Ubuntu 16.04; TensorFlow 1.10; Python.

4.1. The Impact of Model Structure on Performance

4.1.1. Introduction of the CWRU Data Set

To validate the previous analyses, the 12-kHz drive-end data collected by Case Western Reserve University (CWRU)'s Bearing Data Center were used. Figure 6 shows the test rig used for data collection. The data set contains four categories: normal bearings, bearings with a faulty ball (ball), bearings with a faulty inner race (inner), and bearings with a faulty outer race (outer). Each fault type comes in three fault diameters: 0.007, 0.014, and 0.021 inches. Thus, including the normal condition, there are 10 classes in this data set.
Because the experimental data were limited, the overlapping sampling method was used to augment the data, following references [35,36], as shown in Figure 7.
The data set was divided into four subsets, corresponding to four different loads, namely load 0, load 1, load 2, and load 3. As shown in Table 1, every category of data subset under each load contained 800 training samples, 100 test samples, and 100 validation samples, for a total of 8000 training samples, 1000 test samples, and 1000 validation samples.
Under normal circumstances, bearing vibration signals are affected by the surrounding ambient noise. The CWRU data set used in this study was collected in an environment with a relatively low level of ambient noise and therefore cannot reflect the performance of a fault diagnosis algorithm in a real environment. In addition, a real environment contains many noise sources, and it is impossible to obtain training samples under all conditions in various noise environments. Therefore, noise was added to the samples of the raw test set to simulate actual conditions; testing on the resultant data produces results closer to those obtained under real industrial production conditions. Accordingly, white Gaussian noise at signal-to-noise ratios from 10 dB down to −4 dB was added to the data. The signal-to-noise ratio (SNR) used when adding noise is defined in Equation (15):
$$SNR = 10 \log_{10} \frac{P_{signal}}{P_{noise}} \tag{15}$$

where $P_{signal}$ and $P_{noise}$ are the power of the raw signal and of the noise, respectively.
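For reference, here is a sketch of how test noise at a prescribed SNR can be generated according to Equation (15); the helper name and interface are our own illustration, scaling white Gaussian noise so that $10 \log_{10}(P_{signal}/P_{noise})$ equals the target value.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise to a signal at the given SNR in dB (Equation (15))."""
    rng = rng or np.random.default_rng()
    p_signal = np.mean(signal ** 2)                    # raw signal power
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))     # noise power for target SNR
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
```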
To examine the noise immunity of the proposed algorithm, the learned model with the highest validation accuracy over 1000 iterations was selected. Noise-contaminated data were randomly generated 10 times for each sample in the test set and used for testing; the experimental results are reported as means over these runs.

4.1.2. Experimental Results and Analysis

To verify the analysis of the impact of step length on performance in Section 3.3.2, the data subset under load 0 was selected for experimentation, with the network structure having 128 hidden units and 1 layer. We repeated the experiment 10 times with each parameter configuration, and calculated the mean value of the results, as shown in Table 2.
As $D_x$ increased, the validation accuracy and test accuracy increased slightly. When $D_x$ exceeded 256, the accuracy began to decrease because of the smoothing effect caused by lengthening $D_x$, which made the model less sensitive to fine changes in the vibration signal. When $D_x$ increased from 512 to 1024, the gain in noise immunity outweighed the loss in feature extraction capability, so the noise immunity rebounded. $D_x$ was set to 64 to preserve the noise immunity of the model.
The number of hidden units determines the ability of the RNN model to extract feature information: too few leads to under-fitting, and too many causes over-fitting. The extracted features are finally transformed, through a fully connected layer, into a vector representing the category, as shown in Figure 4b. This process also has a smoothing effect and an impact on noise immunity. As shown in Table 3, as the number of hidden units increased, the test accuracy first rose and then fell: the generalisation ability increased with model complexity and then decreased due to over-fitting once the number of units exceeded 128. Unlike with $D_x$, the drop in noise immunity was due not to reduced feature extraction capability but to over-fitting to the noise-free training data.
When the number of hidden units increased from 16 to 256, the theoretical calculation increased by about 50 times, but the real training time increased by less than three percent, and the test time roughly doubled. These gaps have two causes: (1) increasing the hidden units adds calculation at each moment without increasing the waiting time; (2) the GPU computes in parallel, so 16 units or 256 units are calculated synchronously in roughly the time needed for a single unit, and only summing the results across all units slightly increases the calculating time.
RNN layers can also be stacked like CNN layers to build deeper models, as shown in Figure 5b. The number of hidden units was set to 128 for this experiment because it gave the highest noise immunity, and the effects of different numbers of layers on model performance are shown in Table 4.
The model achieved the highest test accuracy and noise immunity with two layers, corresponding to an appropriate model complexity. Models with more layers performed worse owing to over-fitting: both storage occupancy and calculation delay increased without any gain in test accuracy or noise immunity.
The above experiments verify the analyses of the impact of network structure and step length on performance. As $D_x$ increased, the calculation delay decreased, the accuracy first rose and then fell, and the noise immunity increased. As $D_h$ increased, the calculation delay on the CPU grew only slightly, and both the accuracy and noise immunity first increased and then decreased. As the number of layers increased, the calculation delay increased, and both the accuracy and noise immunity first increased and then decreased. The performance trends were almost the same as predicted in Section 3. The proposed method achieved its most satisfactory fault diagnosis results on the CWRU data set with $D_x$ = 64, $D_h$ = 128, and $layer$ = 2.

4.2. The Universality of the Proposed Method

The previous experiments verified the analyses of the impact of the network parameters on performance, and the proposed method achieved satisfactory results on the CWRU data set. However, the mechanical system in the CWRU bearing experiment is relatively simple, and its faults were induced by artificial damage, whose vibration characteristics may differ from those of natural wear in an actual production environment. To verify the universality of the model, extra experiments were performed on the IMS data set, which was collected from a more complex mechanical system with natural wear faults. LSTM and GRU networks were used for comparison to verify whether the simplification of the cell structure reduces network performance.

4.2.1. Introduction of the IMS Data Set

This data set was provided by the Center for Intelligent Maintenance Systems (IMS), University of Cincinnati, and is shared on the website of the Prognostic Data Repository of NASA [37]. The structure of the mechanical system is shown in Figure 8, and the data record the entire wear process of the bearings.
The bearings experienced an "increase–decrease–increase" degradation trend, owing to the "self-healing" nature of the damage [38]. First, the vibration amplitude increased because of the impacts caused by the initial surface defect (e.g., spalling or cracks). Then, the initial defect was smoothed by continuous rolling contact, and the impact amplitude decreased. Finally, the damage spread over a broader area, and the vibration amplitude increased again. During the self-healing period, the amplitude of the faulty bearing was similar to that of a normal bearing, making the fault difficult to detect, as shown in Figure 9.
At the end of the experiment, an inner race defect, an outer race defect, and a roller element defect were identified manually [34]. As shown in Figure 9, the red curve indicates the wear process of the outer race defect in bearing 1. Self-healing appeared after the failure on the fifth day, and the amplitude was basically the same as that of the normal bearing (green curve). To increase the difficulty of diagnosis, we chose the bearings in the self-healing period as fault data and a normal bearing with similar amplitude as normal data. In addition, segments of length 1024 were sampled directly, without overlapping sampling, owing to the relatively sufficient size of the IMS data set. The fault data categories are shown in Table 5.

4.2.2. Experimental Results and Analysis

To verify whether the simplification of the cell structure has too much of a negative impact on performance, comparison experiments were performed using GRU and LSTM networks with either the same number of hidden units or approximately the same amount of calculation.
The comparison results on the CWRU data set are shown in Table 6 and Table 7. With 128 hidden units and one layer, neither the test accuracy nor the noise immunity of GRU₁ and LSTM₁ improved much over LLRNN. However, the parameter count of GRU₁ was one-half larger and that of LSTM₁ was double, while the test delay of GRU₁ increased by 50% and that of LSTM₁ by 25%. GRU₁ involves less calculation than LSTM₁, yet its calculation delay was higher. This is because the four gates of LSTM₁ are independent and can be calculated in parallel, whereas the calculation of $g_t$ in GRU₁ must wait for the output of $r_t$, as shown in Figure 2c, resulting in more calculation time than LSTM₁.
With a one-layer structure and a parameter count and calculation amount similar to LLRNN, GRU₂ and LSTM₂ performed slightly worse than LLRNN, while the delay of LSTM₂ was 13% higher and that of GRU₂ was 30% higher. In the cell's work flow, the extra tanh activation of the LSTM output path brings additional calculation and delay, so the calculation time of LSTM₂ was longer than that of LLRNN. With a two-layer structure and a parameter count and calculation amount similar to LLRNN, GRU₃ and LSTM₃ improved on GRU₂ and LSTM₂; although their performance was similar to LLRNN's, the delay of LSTM₃ was 24% higher and that of GRU₃ was 37% higher. As shown in Table 6, the CNN-based models [12,13,14,15] achieved competitive performance on noise-free test data, but they took up more storage and consumed more computing time, mainly because of the over-complexity of their network structures. Although reference [14] has a parameter count similar to LLRNN, its theoretical calculation was six times greater and its real computing time approximately ten times greater than those of the proposed method. Moreover, the accuracies of the CNN models fell rapidly in the noisy environment, while the accuracy of LLRNN still exceeded 90%, as shown in Table 7. The model structure and training method of the SVM both differ from those of neural networks, so it is meaningless to discuss the training time of reference [6]. Not only were the test accuracy and noise immunity of the SVM much lower than those of LLRNN, but its test time was also far greater. The primary reason is that the multi-category task of an SVM is broken down into several binary tasks, which results in extra computation.
The experimental results on the IMS data set are shown in Table 8. The self-healing of the mechanical system made the data characteristics of the faulty bearings similar to those of the normal bearing. Although the test accuracy was satisfactory, the complexity of the system made the data harder to distinguish in high-noise environments: the test accuracy in the simulated 0 dB environment was at least 6% lower than on the CWRU data set. As with the CWRU results, the performance of LLRNN was not worse than that of any structure using LSTM or GRU.
In contrast to the CWRU results, two of the CNN models achieved competitive noise immunity; however, their computing time was still much higher than that of the proposed method. The other two still could not work in a noisy environment, especially reference [14], whose prediction accuracy in a severely noisy environment, at around 25%, was similar to random guessing. The SVM model again performed poorly, similarly to its results on the CWRU data set, although its test time was much lower than on the CWRU data set: with only four categories in the IMS data, the number of decomposed binary tasks is only four-tenths that of CWRU, requiring much less calculation. According to the above experimental results, the ingenious simplification of the JANET cell did not cause serious performance degradation. The calculation delay of LLRNN was at least 10% lower than that of any network structure using LSTM or GRU while maintaining satisfactory performance. Compared with the CNN and SVM models, the proposed method not only achieved higher prediction accuracy and noise immunity, but also required the least computing time.

5. Conclusions

In this paper, a low-delay lightweight recurrent neural network (LLRNN) model is proposed for mechanical fault diagnosis. It is an end-to-end model: from input data to output diagnostics, the process executes automatically without any manual involvement, so the diagnostic quality does not depend on expert experience. Based on the work flow of the JANET cell structure, this paper analysed the influence of several factors (e.g., $D_x$, $steps$, $D_h$, and $layers$) on the performance of the model, including accuracy, noise immunity, and calculation delay, and the relationship between these factors and performance was verified experimentally. The proposed method obtained the highest accuracy and noise immunity with $layer$ = 2, $D_x$ = 64, and $D_h$ = 128, which may offer some guidance for model design in related fields. The experimental results on the CWRU and IMS data sets show that the simplified structure of LLRNN achieves performance comparable to network models using LSTM or GRU while decreasing the computational delay by at least 10%, making it more suitable for real-time fault detection systems.
The proposed method still leaves room for improvement because of the data dependencies in the RNN work flow. As shown in Figure 5b and Equation (3), the calculation of $h_t$ must wait for $h_{t-1}$. Let $t_{gate}$ denote the time consumed in calculating each gate. Although the GPU has parallel computing power, under this data dependency it can only calculate the gates of one moment synchronously. Ignoring the cost of communication between gates, the time consumed per moment is approximately $t_{gate}$. When dealing with time-series data of length $steps$, the GPU must wait $steps$ times because of the data dependencies, so the time for calculating each data sample is $t_{gate} \times steps$. If the dependence between $h_t$ and $h_{t-1}$ could be eliminated, the calculation of each moment could be performed independently, the GPU could fully utilise its parallel computing power without waiting, and the calculations of all $steps$ moments could be carried out at the same time; the time for each data sample could then be reduced to approximately $t_{gate}$. Since the CPUs of edge devices are now generally multi-core with some parallel computing capability, the test time could decrease as well.

Author Contributions

Conceptualization, W.L.; Validation, W.L.; Writing—original draft, W.L.; Writing—review and editing, L.Y., and P.G.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

LLRNN: Low-Delay Lightweight Recurrent Neural Network
LSTM: Long Short-Term Memory Network
AutoEncoder: Automatic Encoder
DT-CWT: Dual-Tree Complex Wavelet Transform
CWTS: Continuous Wavelet Transform Scalogram
AI: Artificial Intelligence
CHMM: Coupled Hidden Markov Model
k-NN: k-Nearest Neighbours
WPT: Wavelet Packet Transform
SVM: Support Vector Machine
CNN: Convolutional Neural Network
RNN: Recurrent Neural Network
GRU: Gated Recurrent Unit
RUL: Remaining Useful Life
FFT: Fast Fourier Transform
JANET: Just Another NETwork
FLOPs: Floating-Point Operations
CPU: Central Processing Unit
GPU: Graphics Processing Unit

References

  1. Yin, S.; Li, X.; Gao, H.; Kaynak, O. Data-Based Techniques Focused on Modern Industry: An Overview. IEEE Trans. Ind. Electron. 2015, 62, 657–667. [Google Scholar] [CrossRef]
  2. Qiao, W.; Lu, D. A Survey on Wind Turbine Condition Monitoring and Fault Diagnosis—Part I: Components and Subsystems. IEEE Trans. Ind. Electron. 2015, 62, 6536–6545. [Google Scholar] [CrossRef]
  3. Khan, S.; Yairi, T. A review on the application of deep learning in system health management. Mech. Syst. Signal Process. 2018, 107, 241–265. [Google Scholar] [CrossRef]
  4. Zhou, H.; Chen, J.; Dong, G.; Wang, H.; Yuan, H. Bearing fault recognition method based on neighbourhood component analysis and coupled hidden Markov model. Mech. Syst. Signal Process. 2016, 66–67, 568–581. [Google Scholar] [CrossRef]
  5. Jung, U.; Koh, B.H. Wavelet energy-based visualization and classification of high-dimensional signal for bearing fault detection. Knowl. Inf. Syst. 2015, 44, 197–215. [Google Scholar] [CrossRef]
  6. Gangsar, P.; Tiwari, R. Multi-fault Diagnosis of Induction Motor at Intermediate Operating Conditions using Wavelet Packet Transform and Support Vector Machine. J. Dyn. Syst. Meas. Control 2018, 140, 081014. [Google Scholar] [CrossRef]
  7. Zhou, Z.; Feng, J. Deep Forest. arXiv 2018, arXiv:1702.08835v1. [Google Scholar] [CrossRef]
  8. Liu, R.; Yang, B.; Zio, E.; Chen, X. Artificial intelligence for fault diagnosis of rotating machinery: A review. Mech. Syst. Signal Process. 2018, 108, 33–47. [Google Scholar] [CrossRef]
  9. Ahmed, H.; Wong, M.; Nandi, A. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features. Mech. Syst. Signal Process. 2018, 99, 459–477. [Google Scholar] [CrossRef]
  10. Lu, C.; Wang, Z.Y.; Qin, W.L.; Ma, J. Fault diagnosis of rotary machinery components using a stacked denoising autoencoder-based health state identification. Signal Process. 2017, 130, 377–388. [Google Scholar] [CrossRef]
  11. Junbo, T.; Weining, L.; Juneng, A.; Xueqian, W. Fault diagnosis method study in roller bearing based on wavelet transform and stacked auto-encoder. In Proceedings of the 27th Chinese Control and Decision Conference (2015 CCDC), Qingdao, China, 23–25 May 2015; pp. 4608–4613. [Google Scholar] [CrossRef]
  12. Wen, L.; Li, X.; Gao, L.; Zhang, Y. A New Convolutional Neural Network-Based Data-Driven Fault Diagnosis Method. IEEE Trans. Ind. Electron. 2018, 65, 5990–5998. [Google Scholar] [CrossRef]
  13. Liu, R.; Meng, G.; Yang, B.; Sun, C.; Chen, X. Dislocated Time Series Convolutional Neural Architecture: An Intelligent Fault Diagnosis Approach for Electric Machine. IEEE Trans. Ind. Electron. 2017, 13, 1310–1320. [Google Scholar] [CrossRef]
  14. Sun, W.; Yao, B.; Zeng, N.; Chen, B.; He, Y.; Cao, X.; He, W. An Intelligent Gear Fault Diagnosis Methodology Using a Complex Wavelet Enhanced Convolutional Neural Network. Materials 2017, 10, 790. [Google Scholar] [CrossRef] [PubMed]
  15. Guo, S.; Yang, T.; Gao, W.; Zhang, C. A Novel Fault Diagnosis Method for Rotating Machinery Based on a Convolutional Neural Network. Sensors 2018, 18, 1429. [Google Scholar] [CrossRef] [PubMed]
  16. Denil, M.; Shakibi, B.; Dinh, L.; Ranzato, M.; De Freitas, N. Predicting Parameters in Deep Learning. In Proceedings of the Advances in Neural Information Processing Systems 26 (NIPS 2013), Lake Tahoe, NV, USA, 5–8 December 2013; pp. 2148–2156. [Google Scholar]
  17. Gao, L.; Guo, Z.; Zhang, H.; Xu, X.; Shen, H.T. Video Captioning With Attention-Based LSTM and Semantic Consistency. IEEE Trans. Multimed. 2017, 19, 2045–2055. [Google Scholar] [CrossRef]
  18. Zhang, Y.; Lu, X. A Speech Recognition Acoustic Model Based on LSTM -CTC. In Proceedings of the 2018 IEEE 18th International Conference on Communication Technology (ICCT), Chongqing, China, 8–11 October 2018; pp. 1052–1055. [Google Scholar] [CrossRef]
  19. Palangi, H.; Deng, L.; Shen, Y.; Gao, J.; He, X.; Chen, J.; Song, X.; Ward, R. Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval. IEEE/ACM Trans. Audio Speech Lang. Process. 2016, 24, 694–707. [Google Scholar] [CrossRef] [Green Version]
  20. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  21. Zhang, S.; Wu, Y.; Che, T.; Lin, Z.; Memisevic, R.; Salakhutdinov, R.; Bengio, Y. Architectural complexity measures of recurrent neural networks. In Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain, 5–10 December 2016; pp. 1830–1838. [Google Scholar]
  22. Arjovsky, M.; Shah, A.; Bengio, Y. Unitary evolution recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), New York, NY, USA, 19–24 June 2016; pp. 1120–1128. [Google Scholar]
  23. Ororbia, A.G.; Mikolov, T.; Reitter, D. Learning Simpler Language Models with the Differential State Framework. Neural Comput. 2017, 29, 3327–3352. [Google Scholar] [CrossRef] [Green Version]
  24. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Gated Feedback Recurrent Neural Networks. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), Lille, France, 6–11 July 2015; pp. 2067–2075. [Google Scholar]
  25. Zhao, R.; Yan, R.; Wang, J.; Mao, K. Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks. Sensors 2017, 17, 273. [Google Scholar] [CrossRef]
  26. Park, D.; Kim, S.; An, Y.; Jung, J. LiReD: A Light-Weight Real-Time Fault Detection System for Edge Computing Using LSTM Recurrent Neural Networks. Sensors 2018, 18, 2110. [Google Scholar] [CrossRef]
  27. Yuan, M.; Wu, Y.; Lin, L. Fault diagnosis and remaining useful life estimation of aero engine using LSTM neural network. In Proceedings of the 2016 IEEE International Conference on Aircraft Utility Systems (AUS), Beijing, China, 10–12 October 2016; pp. 135–140. [Google Scholar]
  28. Zhang, Y.; Xiong, R.; He, H.; Pecht, M.G. Long Short-Term Memory Recurrent Neural Network for Remaining Useful Life Prediction of Lithium-Ion Batteries. IEEE Trans. Veh. Technol. 2018, 67, 5695–5705. [Google Scholar] [CrossRef]
  29. Cui, Q.; Li, Z.; Yang, J.; Liang, B. Rolling bearing fault prognosis using recurrent neural network. In Proceedings of the 29th Chinese Control And Decision Conference (CCDC), Chongqing, China, 28–30 May 2017; pp. 1196–1201. [Google Scholar] [CrossRef]
  30. Wang, W.; Qiu, X.; Chen, C.; Lin, B.; Zhang, H. Application Research on Long Short-Term Memory Network in Fault Diagnosis. In Proceedings of the 2018 International Conference on Machine Learning and Cybernetics (ICMLC), Chengdu, China, 15–18 July 2018; Volume 2, pp. 360–365. [Google Scholar] [CrossRef]
  31. Liu, H.; Zhou, J.; Zheng, Y.; Jiang, W.; Zhang, Y. Fault diagnosis of rolling bearings with recurrent neural network-based autoencoders. ISA Trans. 2018, 77, 167–178. [Google Scholar] [CrossRef]
  32. Der Westhuizen, J.V.; Lasenby, J. The unreasonable effectiveness of the forget gate. arXiv 2018, arXiv:1804.04849. [Google Scholar]
  33. Case Western Reserve University. Bearing Data Center. Available online: http://csegroups.case.edu/bearingdatacenter/pages/welcome-case-western-reserve-university-bearing-data-center-website (accessed on 10 July 2019).
  34. Qiu, H.; Lee, J.; Lin, J.; Yu, G. Wavelet filter-based weak signature detection method and its application on rolling element bearing prognostics. J. Sound Vib. 2006, 289, 1066–1090. [Google Scholar] [CrossRef]
  35. Jia, F.; Lei, Y.; Lin, J.; Zhou, X.; Lu, N. Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data. Mech. Syst. Signal Process. 2016, 72, 303–315. [Google Scholar] [CrossRef]
  36. Chen, K.; Zhou, X.; Fang, J.; Zheng, P.; Wang, J. Fault Feature Extraction and Diagnosis of Gearbox Based on EEMD and Deep Briefs Network. Int. J. Rotating Mach. 2017, 2017, 9602650. [Google Scholar] [CrossRef]
  37. Acoustics and Vibration Database. NASA Bearing Dataset. Available online: https://ti.arc.nasa.gov/tech/dash/groups/pcoe/prognostic-data-repository/ (accessed on 10 July 2019).
  38. Williams, T.; Ribadeneira, X.; Billington, S.A.; Kurfess, T.R. Rolling element bearing diagnostics in run-to-failure lifetime testing. Mech. Syst. Signal Process. 2001, 15, 979–993. [Google Scholar] [CrossRef]
Figure 1. The step length ($D_x$) and step number ($L/D_x$) of a data sequence.
Figure 2. The inner structures of different recurrent neural network (RNN) cells: (a) RNN; (b) long short-term memory network (LSTM); (c) gated recurrent unit (GRU).
Figure 3. The inner structure of JANET.
Figure 4. (a) Flowchart of the proposed method. (b) Diagnostic process of the proposed method. LLRNN: low-delay lightweight recurrent neural network.
Figure 5. LLRNN network structure: (a) the real structure; (b) the logically expanded structure.
Figure 6. Fault simulation test rig of the Case Western Reserve University (CWRU) data set.
Figure 7. Diagram of overlapping sampling.
Figure 8. Test rig and sensor placement of the Center for Intelligent Maintenance Systems (IMS) data set.
Figure 9. The root mean square (RMS) values in dataset 2 of IMS.
Table 1. Classification of the CWRU bearing fault data subsets under each load.

| Training Samples | Validation Samples | Test Samples | Fault Type (Inches) | Label |
|---|---|---|---|---|
| 800 | 100 | 100 | Normal | 1 |
| 800 | 100 | 100 | Ball 0.007 | 2 |
| 800 | 100 | 100 | Ball 0.014 | 3 |
| 800 | 100 | 100 | Ball 0.021 | 4 |
| 800 | 100 | 100 | Inner 0.007 | 5 |
| 800 | 100 | 100 | Inner 0.014 | 6 |
| 800 | 100 | 100 | Inner 0.021 | 7 |
| 800 | 100 | 100 | Outer 0.007 | 8 |
| 800 | 100 | 100 | Outer 0.014 | 9 |
| 800 | 100 | 100 | Outer 0.021 | 10 |
Table 2. Experimental results for the effects of different step lengths.

| D_x | Params (KB) | FLOPs (10⁵) | Training Time | Test Time (s) | Validation Acc (%) | Test Acc (%) | 10 dB Acc (%) | 5 dB Acc (%) | 0 dB Acc (%) | −4 dB Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 130 | 682 | 6 h 33 m 29 s | 2.562 | 97.1 | 95.9 | 84.4 | 60.5 | 44.0 | 18.4 |
| 2 | 131 | 343 | 3 h 01 m 48 s | 1.194 | 97.8 | 96.6 | 94.1 | 80.7 | 47.3 | 21.6 |
| 4 | 133 | 174 | 1 h 32 m 12 s | 0.552 | 98.6 | 97.1 | 94.4 | 81.6 | 52.4 | 28.1 |
| 8 | 137 | 89.8 | 46 m 14 s | 0.312 | 99.2 | 98.2 | 94.8 | 82.9 | 56.2 | 29.2 |
| 16 | 145 | 47.5 | 23 m 59 s | 0.184 | 99.3 | 99.1 | 98.6 | 97.3 | 85.5 | 49.6 |
| 32 | 161 | 26.4 | 13 m 04 s | 0.097 | 99.3 | 99.3 | 98.3 | 97.5 | 91.0 | 68.6 |
| 64 | 193 | 15.8 | 7 m 58 s | 0.054 | 99.4 | 99.3 | 99.0 | 97.8 | 91.8 | 75.5 |
| 128 | 257 | 10.5 | 4 m 39 s | 0.050 | 99.4 | 99.4 | 98.9 | 97.2 | 89.9 | 74.1 |
| 256 | 385 | 7.88 | 3 m 48 s | 0.048 | 99.4 | 98.9 | 98.3 | 96.3 | 88.8 | 73.5 |
| 512 | 641 | 6.56 | 2 m 30 s | 0.048 | 96.9 | 96.5 | 95.6 | 93.4 | 85.9 | 71.2 |
| 1024 | 1153 | 5.90 | 2 m 13 s | 0.048 | 96.2 | 96.0 | 95.7 | 94.9 | 90.7 | 79.3 |

Training time denotes the time for training the network on the GPU with a batch size of 100; test time denotes the time for testing the network on the CPU using all the test samples of load 0.
Table 3. Experimental results for the effects of different numbers of hidden units.

| D_h | Params (KB) | FLOPs (10⁴) | Training Time (s) | Test Time (s) | Validation Acc (%) | Test Acc (%) | 10 dB Acc (%) | 5 dB Acc (%) | 0 dB Acc (%) | −4 dB Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| 16 | 5.06 | 4.10 | 471 | 0.047 | 99.2 | 98.1 | 95.4 | 87.9 | 70.6 | 47.4 |
| 32 | 12.13 | 9.83 | 471 | 0.048 | 99.4 | 98.9 | 97.9 | 94.0 | 82.4 | 58.9 |
| 48 | 21.19 | 17.2 | 472 | 0.051 | 99.4 | 99.1 | 98.7 | 96.7 | 87.7 | 67.0 |
| 64 | 32.25 | 26.2 | 473 | 0.051 | 99.3 | 99.2 | 98.6 | 96.6 | 88.7 | 69.5 |
| 80 | 45.31 | 36.9 | 475 | 0.053 | 99.4 | 99.2 | 98.8 | 97.1 | 89.5 | 70.9 |
| 96 | 60.38 | 49.2 | 476 | 0.053 | 99.4 | 99.2 | 98.8 | 97.2 | 90.1 | 72.7 |
| 128 | 96.50 | 78.6 | 478 | 0.054 | 99.3 | 99.3 | 99.0 | 97.8 | 91.8 | 75.5 |
| 160 | 140.63 | 115 | 480 | 0.063 | 99.4 | 99.2 | 98.8 | 97.4 | 90.4 | 74.0 |
| 192 | 192.75 | 157 | 482 | 0.071 | 99.3 | 99.1 | 98.8 | 97.5 | 90.3 | 73.9 |
| 224 | 252.88 | 206 | 485 | 0.075 | 99.3 | 99.0 | 98.8 | 97.5 | 90.6 | 73.8 |
| 256 | 321.00 | 262 | 486 | 0.090 | 99.1 | 99.0 | 98.7 | 97.6 | 90.6 | 73.5 |
Table 4. Experimental results for the effects of different numbers of layers.

| Layer | Params (KB) | FLOPs (10⁶) | Training Time (s) | Test Time (s) | Validation Acc (%) | Test Acc (%) | 10 dB Acc (%) | 5 dB Acc (%) | 0 dB Acc (%) | −4 dB Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 193 | 1.58 | 478 | 0.054 | 99.3 | 99.3 | 99.0 | 97.8 | 91.8 | 75.5 |
| 2 | 386 | 3.16 | 684 | 0.076 | 99.5 | 99.3 | 99.1 | 98.4 | 93.6 | 79.0 |
| 3 | 579 | 4.74 | 1126 | 0.103 | 99.3 | 99.0 | 98.6 | 97.3 | 90.6 | 72.8 |
| 4 | 772 | 6.32 | 1332 | 0.123 | 99.0 | 98.7 | 98.1 | 96.6 | 89.8 | 73.2 |
| 5 | 965 | 7.91 | 1792 | 0.146 | 98.3 | 98.0 | 97.4 | 96.1 | 89.6 | 73.4 |
Table 5. Classification of the IMS bearing fault data set.

| Training Samples | Validation Samples | Test Samples | Fault Type | Label |
|---|---|---|---|---|
| 1600 | 200 | 200 | Normal | 1 |
| 1600 | 200 | 200 | Roller | 2 |
| 1600 | 200 | 200 | Outer | 3 |
| 1600 | 200 | 200 | Inner | 4 |
Table 6. Experimental results of various methods on the CWRU dataset.

| Method | Layer | D_h | Params (KB) | FLOPs (10⁶) | Training Time (s) | Test Time (s) | Load 0 Acc (%) | Load 1 Acc (%) | Load 2 Acc (%) | Load 3 Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| LLRNN | 1 | 128 | 193 | 1.58 | 478 | 0.054 | 99.3 | 99.7 | 99.8 | 99.7 |
| GRU₁ | 1 | 128 | 289 | 2.37 | 518 | 0.081 | 99.2 | 99.2 | 99.5 | 99.6 |
| LSTM₁ | 1 | 128 | 386 | 3.15 | 486 | 0.068 | 99.7 | 99.7 | 99.8 | 99.8 |
| GRU₂ | 1 | 100 | 193 | 1.58 | 502 | 0.070 | 99.2 | 99.1 | 99.2 | 99.4 |
| LSTM₂ | 1 | 84 | 195 | 1.60 | 467 | 0.061 | 99.2 | 99.4 | 99.7 | 99.6 |
| GRU₃ | 2 | 64 | 193 | 1.58 | 987 | 0.074 | 99.3 | 99.3 | 99.6 | 99.6 |
| LSTM₃ | 2 | 53 | 195 | 1.59 | 788 | 0.067 | 99.4 | 99.5 | 99.7 | 99.7 |
| Reference [12] | - | - | 50,371 | 145 | 876 | 0.865 | 99.6 | 99.6 | 99.7 | 99.8 |
| Reference [13] | - | - | 565 | 80.8 | 586 | 0.584 | 99.7 | 99.6 | 99.8 | 99.7 |
| Reference [14] | - | - | 206 | 10.1 | 873 | 0.515 | 99.8 | 99.7 | 99.7 | 99.8 |
| Reference [15] | - | - | 367 | 200 | 2992 | 5.752 | 99.7 | 99.6 | 99.8 | 99.7 |
| Reference [6] | - | - | - | - | - | 377.288 | 82.3 | 83.5 | 85.8 | 88.3 |

LLRNN denotes the network structure using the JANET cell with $D_x$ = 64, $D_h$ = 128, and $layer$ = 1. GRU₁ and LSTM₁ denote network structures using GRU and LSTM cells with the same $D_x$, $D_h$, and $layer$ as LLRNN. GRU₂ and LSTM₂ denote network structures using GRU and LSTM cells with $layer$ = 1 and $D_x$ = 64 and approximately the same amount of calculation as LLRNN. GRU₃ and LSTM₃ denote network structures using GRU and LSTM cells with $layer$ = 2 and $D_x$ = 64 and approximately the same amount of calculation as LLRNN.
Table 7. Noise immunity of various methods on the CWRU dataset. L0–L3 denote loads 0–3; all entries are accuracies (%) under the stated SNR.

| Method | L0 10 dB | L0 5 dB | L0 0 dB | L1 10 dB | L1 5 dB | L1 0 dB | L2 10 dB | L2 5 dB | L2 0 dB | L3 10 dB | L3 5 dB | L3 0 dB |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LLRNN | 99.0 | 97.8 | 91.5 | 99.5 | 98.7 | 94.0 | 99.7 | 99.4 | 95.1 | 99.5 | 98.3 | 91.2 |
| GRU₁ | 99.0 | 97.9 | 91.8 | 99.2 | 98.9 | 94.1 | 99.5 | 99.0 | 91.8 | 99.6 | 98.9 | 90.3 |
| LSTM₁ | 99.1 | 98.1 | 92.1 | 99.5 | 99.3 | 93.5 | 99.9 | 99.7 | 93.1 | 99.7 | 99.7 | 91.5 |
| GRU₂ | 98.1 | 97.1 | 89.9 | 99.0 | 98.5 | 91.4 | 99.1 | 98.1 | 91.5 | 99.1 | 97.7 | 89.1 |
| LSTM₂ | 98.3 | 96.7 | 89.5 | 99.3 | 98.5 | 91.6 | 99.2 | 98.7 | 92.1 | 99.0 | 98.2 | 89.3 |
| GRU₃ | 99.0 | 97.8 | 91.1 | 99.2 | 98.5 | 92.5 | 99.3 | 98.8 | 93.0 | 99.3 | 98.4 | 91.5 |
| LSTM₃ | 99.1 | 97.8 | 90.6 | 99.3 | 98.7 | 92.2 | 99.4 | 99.1 | 94.4 | 99.3 | 98.5 | 92.7 |
| Reference [12] | 92.9 | 78.3 | 33.3 | 93.5 | 82.4 | 35.6 | 95.9 | 86.3 | 37.5 | 96.9 | 88.3 | 38.3 |
| Reference [13] | 97.5 | 92.4 | 56.0 | 98.2 | 93.4 | 57.4 | 98.5 | 94.4 | 57.8 | 98.9 | 95.2 | 58.1 |
| Reference [14] | 96.2 | 86.4 | 61.2 | 96.7 | 87.6 | 61.7 | 97.1 | 88.4 | 62.1 | 97.2 | 88.7 | 63.8 |
| Reference [15] | 77.1 | 54.8 | 36.9 | 78.6 | 58.1 | 35.3 | 75.3 | 55.8 | 37.9 | 79.6 | 54.6 | 39.2 |
| Reference [6] | 80.5 | 77.4 | 43.8 | 81.5 | 73.4 | 40.5 | 80.5 | 77.4 | 43.8 | 81.5 | 74.8 | 42.3 |
Table 8. Experimental results of various methods on the IMS dataset.

| Method | Layer | D_h | Params (KB) | FLOPs (10⁶) | Training Time (s) | Test Time (s) | Test Acc (%) | 10 dB Acc (%) | 5 dB Acc (%) | 0 dB Acc (%) | −4 dB Acc (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LLRNN | 1 | 128 | 193 | 1.58 | 360 | 0.052 | 99.5 | 99.1 | 97.6 | 85.2 | 63.6 |
| GRU₁ | 1 | 128 | 289 | 2.37 | 462 | 0.075 | 99.6 | 99.7 | 98.6 | 84.2 | 60.8 |
| LSTM₁ | 1 | 128 | 386 | 3.15 | 373 | 0.063 | 99.6 | 99.3 | 97.9 | 85.1 | 63.7 |
| GRU₂ | 1 | 100 | 193 | 1.58 | 462 | 0.067 | 99.3 | 98.6 | 98.4 | 82.4 | 59.0 |
| LSTM₂ | 1 | 84 | 195 | 1.60 | 357 | 0.056 | 99.3 | 98.3 | 97.0 | 84.5 | 62.4 |
| GRU₃ | 2 | 64 | 193 | 1.58 | 790 | 0.062 | 99.5 | 98.8 | 98.0 | 83.8 | 59.2 |
| LSTM₃ | 2 | 53 | 195 | 1.59 | 638 | 0.058 | 99.6 | 99.5 | 98.3 | 85.5 | 63.6 |
| Reference [12] | - | - | 50,371 | 145 | 1006 | 1.344 | 99.7 | 98.5 | 97.0 | 87.1 | 60.3 |
| Reference [13] | - | - | 565 | 80.8 | 615 | 0.473 | 99.8 | 97.9 | 95.4 | 87.2 | 61.2 |
| Reference [14] | - | - | 206 | 10.1 | 723 | 0.436 | 99.8 | 89.6 | 60.6 | 26.1 | 25.3 |
| Reference [15] | - | - | 367 | 200 | 2441 | 4.657 | 99.7 | 96.6 | 82.4 | 61.1 | 35.6 |
| Reference [6] | - | - | - | - | - | 323.497 | 76.0 | 72.8 | 57.7 | 50.2 | 28.9 |
