Rolling Bearing Fault Diagnosis Based on WOA-VMD-MPE and MPSO-LSSVM

In order to further improve the accuracy of rolling bearing fault identification, a fault diagnosis method is proposed that combines a least squares support vector machine (LSSVM) optimized by a modified particle swarm optimization (MPSO) algorithm with parameter-optimized variational mode decomposition (VMD) and multi-scale permutation entropy (MPE). Firstly, to solve the problems of insufficient decomposition and mode mixing caused by improper selection of the number of mode components K and the penalty factor α in the VMD algorithm, the whale optimization algorithm (WOA) was used to optimize these two parameters, yielding the optimal parameter combination (K, α). Secondly, the optimal combination (K, α) was used in the VMD of the rolling bearing vibration signal to obtain several intrinsic mode functions (IMFs). According to the Pearson correlation coefficient (PCC) criterion, the optimal IMF components were selected, and their multi-scale permutation entropy at the optimal scale was calculated to form the feature set. Finally, K-fold cross-validation was used to train the MPSO-LSSVM model, and the test set was input into the trained model for identification. The experimental results show that, compared with PSO-SVM, LSSVM, and PSO-LSSVM, the MPSO-LSSVM fault diagnosis model has higher recognition accuracy. At the same time, compared with VMD-SE, VMD-MPE, and PSO-VMD-MPE, WOA-VMD-MPE can extract more accurate features.


Introduction
The rolling bearing is an important part of rotating machinery, whose main role is to transfer kinetic energy from the drive shaft to the bearing housing while reducing the energy loss caused by friction. A large proportion of rotating machinery failures are caused by rolling bearing faults. Rolling bearing failure not only delays projects but also causes large economic losses and, more seriously, can lead to casualties. Therefore, the study of rolling bearing fault diagnosis is necessary [1][2][3][4]. In the early days, staff relied mainly on manual experience to diagnose rolling bearings; this method was inefficient and could not detect bearing faults at the earliest possible time. It was later found that analyzing rolling bearing vibration signals could reveal the status of bearings in real time, so many scholars have studied methods for processing these signals. Dragomiretskiy [5] proposed variational mode decomposition (VMD), an adaptive signal decomposition method. Instead of the recursive scheme of empirical mode decomposition (EMD), this method adopts a non-recursive variational formulation, which avoids the end effect and makes the decomposed mode components more accurate. However, the drawback of this method is that the number of mode components K and the penalty factor α have a large impact on the decomposition results [6]. To obtain an accurate number of mode components K, Zhou et al. [7] combined EMD and the center frequency to determine the value of K according to the trend of the center frequency of each intrinsic mode function (IMF). Zhang et al.
[8] used the Gini index and the autocorrelation function to construct the weighted autocorrelative function maximum (AFM) indicator as the objective function and optimized VMD with the improved particle swarm optimization (IPSO) algorithm to obtain the parameters K and α required for the decomposition and extract the sensitive IMFs. Wang et al. [9] used the Archimedes optimization algorithm (AOA) to optimize the mode number K and penalty factor α of the VMD algorithm, taking the minimum average correlation waveform index (Cwi) of all IMFs as the objective function. Jiao et al. [10] determined the mode number K required for VMD decomposition by the method of abnormal decline of center frequency (ADCF). Duan et al. [11] combined improved VMD with sample entropy (SE) and determined the value of K by the maximum correntropy criterion (MCC), which effectively improved the statistical properties of highly nonlinear process errors. Li et al. [12] used a genetic algorithm (GA) to optimize the VMD decomposition parameters K and α, which yields the optimal IMFs and improves the accuracy of VMD decomposition. Extracting appropriate feature information is the key that determines the accuracy and reliability of fault diagnosis results. He et al. [13] used an improved sparrow search algorithm, with dispersion entropy as the fitness value, to optimize the VMD parameters; the optimized VMD was then used to decompose the original signal into a series of mode components, and the energy entropy of each mode component was calculated to complete the flywheel bearing fault diagnosis. Xue et al. [14] calculated the dispersion entropy of the IMF components in different frequency bands, used the joint approximate diagonalization of eigenmatrices (JADE) to extract fused features, and finally obtained the hierarchical discrete entropy (HDE) for bearing fault diagnosis. Wang et al.
[15] proposed a feature extraction method based on the combination of variational mode extraction (VME) and multi-objective information fusion band-pass filter (MIFBF). Yang et al. [16] used the fractional Fourier transform (FRFT) algorithm to extract fault features from the original signals and then used stochastic resonance (SR) to enhance the weak fault feature information to complete bearing fault diagnosis according to the fault feature frequency. Yan et al. [17] performed VMD decomposition of bearing signals, and the calculated multiscale envelope dispersion entropy (MEDE) of the IMF component was used as the feature to complete bearing fault pattern recognition. Zheng et al. [18] calculated the permutation entropy (PE) value of each IMF obtained by VMD decomposition to reflect the characteristic information of the bearing vibration signal. Zhang et al. [19] combined VMD and sample entropy and used the multi-domain indexes to construct the feature vector to characterize the fault information.
An intelligent fault diagnosis method is needed for pattern recognition of rolling bearings in order to enable rapid diagnosis from fault characteristic information and avoid mechanical equipment failures. Vapnik [20] proposed the support vector machine (SVM), a machine learning algorithm aimed mainly at nonlinear problems and insufficient samples. Zhang et al. [21] used multi-scale information entropy to construct the sample set and IPSO-optimized SVM to realize bearing fault diagnosis. Wang et al. [22] used quantum-behaved particle swarm optimization (QPSO) and multiscale permutation entropy (MPE) to extract features from denoised bearing signals and then used SVM to identify faults; the experimental results show that this fault diagnosis method identifies bearing fault types well. Ye et al. [23] used VMD-MPE to construct feature vectors and then used PSO to optimize SVM to improve the recognition accuracy of the model. However, solving SVM's inequality-constrained optimization problem is complicated; to reduce the difficulty, Suykens [24] improved SVM and proposed the least squares support vector machine (LSSVM), which replaces the inequality constraints of SVM with equality constraints, greatly reducing the solution difficulty. The LSSVM algorithm has been widely applied in the field of industrial intelligence in recent years [25][26][27][28]. He et al. [29] used the wavelet packet transform to extract fault features and combined them with LSSVM to complete fault identification from circuit output voltage signals. Gao et al. [30] fused singular entropy, energy entropy, and permutation entropy to obtain complementary features, optimized LSSVM with the PSO algorithm, and successfully completed the diagnosis of bearing faults. Zhao et al.
[31] extracted narrowband kurtosis vectors from the cyclic correntropy spectrum (CCES) as feature vectors of LSSVM for the early detection and classification of locomotive axle bearing faults. Zhu et al. [32] used VMD to decompose the bearing vibration signal, used the fuzzy entropy of each IMF as the feature vector, optimized the LSSVM model by the gray wolf optimizer (GWO) algorithm, and finally completed the identification of the rolling bearing faults.
The methods in the above literature optimize only the feature extraction or only the model parameters, which limits the accuracy of rolling bearing fault diagnosis. A natural next step is to optimize feature extraction and model parameters simultaneously with different algorithms, avoiding the loss of accuracy caused by optimizing either one alone. In this paper, the whale optimization algorithm (WOA) is used to optimize the VMD algorithm, and the optimal combination of parameters (K, α) required for VMD decomposition is obtained. According to the Pearson correlation coefficient (PCC) criterion, the optimal IMF components are selected, and their multi-scale permutation entropy at the optimal scale is calculated to form the feature set. Finally, K-fold cross-validation is used to train the MPSO-LSSVM model, and the test set is input into the trained model for identification. The experimental results show that, compared with PSO-SVM, LSSVM, and PSO-LSSVM, the MPSO-LSSVM fault diagnosis model has higher recognition accuracy. Meanwhile, compared with VMD-SE, VMD-MPE, and PSO-VMD-MPE, WOA-VMD-MPE can extract more accurate features.

Feature Extraction
The first step in establishing a rolling bearing diagnosis model is feature extraction. Whether the extracted features are accurate directly determines the accuracy of diagnosis, so they must truly and faithfully reflect the status information of the bearing. Different fault locations produce vibration signals with different frequency content, which leads to different IMFs after VMD decomposition; the multi-scale permutation entropy values calculated from these IMFs therefore differ between states, and it is from these entropy values that the feature information is constructed. In feature extraction, a series of IMFs is obtained by WOA-VMD decomposition of the vibration signal, and the multi-scale permutation entropy value of each IMF is calculated as the feature vector.

VMD
VMD is an adaptive signal decomposition method that uses a non-recursive decomposition scheme to decompose the signal into a specified number of IMFs with different center frequencies, according to a preset number of modes K and a penalty factor α. It avoids the uncertainty in the number of IMFs, the end effect, and the mode mixing encountered with traditional EMD decomposition, and better highlights the characteristic information of the signal [33]. The k-th mode obtained by VMD decomposition is expressed as

u_k(t) = A_k(t) cos(φ_k(t))  (1)

where A_k(t) is the instantaneous amplitude and φ_k(t) is a non-decreasing phase function. The analytic signal of u_k(t) is obtained by the Hilbert transform so as to obtain a unilateral frequency spectrum:

[δ(t) + j/(πt)] * u_k(t)  (2)

By mixing the analytic signal of each mode with an exponential tuned to its estimated center frequency ω_k, the spectrum of each mode is shifted to baseband:

{[δ(t) + j/(πt)] * u_k(t)} e^(−jω_k t)  (3)

The bandwidth of each demodulated signal is estimated as the squared L2-norm of its gradient,

‖∂_t{[δ(t) + j/(πt)] * u_k(t)] e^(−jω_k t)}‖₂²  (4)

which leads to the following constrained variational model:

min over {u_k}, {ω_k} of  Σ_{k=1}^{K} ‖∂_t{[δ(t) + j/(πt)] * u_k(t)] e^(−jω_k t)}‖₂²   s.t.   Σ_{k=1}^{K} u_k(t) = f(t)  (5)

where f(t) is the input signal and δ(t) is the unit pulse function.
In order to turn Equation (5) into an unconstrained variational problem while ensuring the accuracy of the signal decomposition, an augmented Lagrangian function is introduced:

L({u_k}, {ω_k}, λ) = α Σ_{k=1}^{K} ‖∂_t{[δ(t) + j/(πt)] * u_k(t)] e^(−jω_k t)}‖₂² + ‖f(t) − Σ_{k=1}^{K} u_k(t)‖₂² + ⟨λ(t), f(t) − Σ_{k=1}^{K} u_k(t)⟩  (6)

where α is the quadratic penalty factor, λ is the Lagrange multiplier, and ⟨·,·⟩ denotes the inner product. The alternate direction method of multipliers (ADMM) is used to alternately update û_k^(n+1)(ω), ω_k^(n+1), and λ̂^(n+1)(ω) in the frequency domain so as to find the saddle point of Equation (6):

û_k^(n+1)(ω) = [f̂(ω) − Σ_{i≠k} û_i(ω) + λ̂(ω)/2] / [1 + 2α(ω − ω_k)²]  (7)

ω_k^(n+1) = ∫₀^∞ ω |û_k^(n+1)(ω)|² dω / ∫₀^∞ |û_k^(n+1)(ω)|² dω  (8)

λ̂^(n+1)(ω) = λ̂^n(ω) + τ [f̂(ω) − Σ_{k=1}^{K} û_k^(n+1)(ω)]  (9)

where ^ denotes the Fourier transform and τ is the update step of the multiplier.
The iteration ends when the accuracy satisfies Equation (10), and, finally, K IMFs are obtained:

Σ_{k=1}^{K} ‖û_k^(n+1) − û_k^n‖₂² / ‖û_k^n‖₂² < ε  (10)

where ε (ε > 0) is the convergence precision. According to the above theoretical analysis, the specific procedure of the VMD algorithm is as follows:
Step 1. Initialize û_k^1(ω), ω_k^1, λ̂^1(ω), and set n = 0.
Step 2. Let n = n + 1 and, for k = 1, …, K, update û_k(ω), ω_k, and λ̂(ω) in turn.
Step 3. For the given precision ε, if the stopping condition of Equation (10) is met, stop the loop; otherwise, return to Step 2.
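As a concrete illustration of the frequency-domain ADMM updates described above, the following is a minimal VMD sketch in Python. It is a toy under stated assumptions: it omits the boundary mirroring used in reference implementations, initializes the center frequencies with a simple linear spread, and the function name and defaults are our own.

```python
import numpy as np

def vmd(f, K, alpha, tau=0.0, tol=1e-7, max_iter=500):
    """Illustrative VMD via ADMM in the frequency domain (sketch, not production)."""
    N = len(f)
    freqs = np.fft.fftfreq(N)                    # normalized frequencies
    f_hat = np.fft.fft(f)
    u_hat = np.zeros((K, N), dtype=complex)      # mode spectra
    omega = np.linspace(0.0, 0.5, K + 2)[1:-1]   # spread initial center freqs
    lam_hat = np.zeros(N, dtype=complex)         # Lagrange multiplier spectrum
    half = N // 2                                # positive-frequency half
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            # Wiener-filter-like mode update
            residual = f_hat - u_hat.sum(axis=0) + u_hat[k] + lam_hat / 2
            u_hat[k] = residual / (1.0 + 2.0 * alpha * (freqs - omega[k]) ** 2)
            # center frequency = centroid of the mode's positive half-spectrum
            power = np.abs(u_hat[k, :half]) ** 2
            omega[k] = np.sum(freqs[:half] * power) / (np.sum(power) + 1e-12)
        # dual ascent on the reconstruction constraint (tau = 0 disables it)
        lam_hat = lam_hat + tau * (f_hat - u_hat.sum(axis=0))
        change = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-12)
        if change < tol:
            break
    u = np.real(np.fft.ifft(u_hat, axis=1))
    return u, np.sort(omega)
```

With τ = 0 the multiplier update is disabled, a common choice for noisy signals. For a single tone at normalized frequency 0.1 and K = 1, the recovered center frequency settles at 0.1 within a few iterations.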

PCC
The Pearson correlation coefficient (PCC) measures the linear correlation between two sets of data; it is used to carry out correlation analysis between variables and to select the variables with strong correlation [34]. The closer its absolute value is to 1, the stronger the correlation between the variables. It is calculated as

ρ(X, Y) = cov(X, Y) / (σ_X σ_Y) = E[(X − μ_X)(Y − μ_Y)] / (σ_X σ_Y)

where E is the mathematical expectation, cov is the covariance, σ is the standard deviation, and μ is the mean.
According to the literature [35], the signal components with a correlation coefficient greater than 0.3 should be selected. This eliminates irrelevant components while avoiding the loss of sensitive fault information.
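The PCC screening rule above can be sketched as follows (a minimal helper; the 0.3 threshold follows the text, while the synthetic components in the usage example are our own illustration, not bearing data):

```python
import numpy as np

def select_imfs(imfs, signal, threshold=0.3):
    """Return indices of IMFs whose Pearson correlation with the
    original signal exceeds the threshold in absolute value."""
    selected = []
    for idx, imf in enumerate(imfs):
        # Pearson correlation coefficient: cov(X, Y) / (sigma_X * sigma_Y)
        rho = np.corrcoef(imf, signal)[0, 1]
        if abs(rho) > threshold:
            selected.append(idx)
    return selected
```

For a signal composed of two tones plus light noise, the two tone "IMFs" pass the 0.3 test while an unrelated noise component is rejected.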

MPE
Permutation entropy (PE) can detect the complexity and randomness of a time series and is sensitive to local variations, so it is often used in mechanical fault diagnosis [36]. However, PE reflects the complexity of a time series at a single scale only, so multi-scale permutation entropy (MPE) is introduced: it characterizes the complexity and randomness of a time series by calculating the PE at multiple scales [37]. The calculation procedure is as follows. For a time series {x_i, i = 1, 2, …, N}, the coarse-grained series y_j^(s) with scale factor s is obtained by coarse graining:

y_j^(s) = (1/s) Σ_{i=(j−1)s+1}^{js} x_i,  j = 1, 2, …, ⌊N/s⌋  (11)

The coarse-grained series is then reconstructed in phase space with embedding dimension m and delay time t:

Y_l^(s) = {y_l^(s), y_{l+t}^(s), …, y_{l+(m−1)t}^(s)}  (12)

The reconstructed components of Equation (12) are arranged in increasing order to obtain the sign (ordinal pattern) vector

S(l) = (j_1, j_2, …, j_m),  such that  y_{l+(j_1−1)t}^(s) ≤ y_{l+(j_2−1)t}^(s) ≤ … ≤ y_{l+(j_m−1)t}^(s)  (13)

With P_j denoting the probability of occurrence of the j-th pattern (j = 1, 2, …, m!), the MPE at scale s can be defined as

H_p(s) = − Σ_{j=1}^{m!} P_j ln P_j  (14)

usually normalized by ln(m!). The smaller the value of H_p, the more orderly the time series is and the more likely it is to be in a fault state; the larger the value of H_p, the more irregular the time series is and the greater the probability that it is in a normal state.
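The coarse-graining, ordinal-pattern counting, and entropy calculation above can be sketched as follows (a minimal implementation; the parameter defaults are illustrative, not the paper's settings):

```python
import math
import numpy as np

def permutation_entropy(x, m=3, t=1):
    """Normalized permutation entropy of a 1-D series
    (embedding dimension m, delay time t)."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * t
    patterns = {}
    for i in range(n_vectors):
        # ordinal pattern = argsort of the embedding vector
        pattern = tuple(np.argsort(x[i:i + (m - 1) * t + 1:t]))
        patterns[pattern] = patterns.get(pattern, 0) + 1
    probs = np.array(list(patterns.values())) / n_vectors
    h = -np.sum(probs * np.log(probs))
    return h / math.log(math.factorial(m))   # normalize to [0, 1]

def mpe(x, m=3, t=1, max_scale=5):
    """Multi-scale permutation entropy: PE of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    values = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)   # coarse-graining
        values.append(permutation_entropy(coarse, m, t))
    return values
```

A strictly increasing series has a single ordinal pattern, hence zero entropy, while white noise approaches the maximum of 1.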

Feature Extraction Based on WOA-VMD and MPE
Mirjalili [38] proposed a novel population intelligence optimization algorithm, the whale optimization algorithm (WOA), based on the hunting behavior of humpback whales. This algorithm can effectively avoid falling into the trap of local minima, and the global optimization search is more effective. Since the mode number K and penalty factor α in the VMD algorithm have a large impact on the decomposition results, this paper uses WOA to optimize the parameters K and α.
When optimizing the VMD parameters, the WOA needs a suitable fitness function to evaluate each candidate solution, and the parameters are updated by comparing fitness values. In this paper, the envelope entropy is chosen as the fitness function. The envelope entropy reflects the uncertainty of the envelope's probability distribution: the larger the entropy value, the more uncertain the signal. The envelope entropy E_p of a signal x(t) (t = 1, 2, …, N) is calculated as

p_t = a(t) / Σ_{t=1}^{N} a(t),   E_p = − Σ_{t=1}^{N} p_t lg p_t  (15)

where N is the number of signal sampling points, a(t) is the envelope signal obtained by Hilbert demodulation of x(t), and p_t is the normalized form of a(t).
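A sketch of the envelope-entropy fitness follows. The analytic signal is computed with an FFT-based Hilbert transform rather than a library routine, and the natural logarithm is used for simplicity (the base only rescales the value); these choices are ours.

```python
import numpy as np

def envelope_entropy(x):
    """Envelope entropy of a real signal: Shannon entropy of the
    normalized Hilbert envelope (smaller = sparser / more impulsive)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # analytic signal via FFT: zero negative frequencies, double positive ones
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    a = np.abs(analytic)                     # envelope a(t)
    p = a / np.sum(a)                        # normalized envelope p_t
    return -np.sum(p * np.log(p + 1e-12))   # E_p
```

An impulsive signal (periodic sharp spikes, as produced by a localized bearing defect) yields a lower envelope entropy than a constant-envelope tone, which is why the WOA minimizes this quantity.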
The flow chart of feature extraction based on WOA-VMD and MPE is shown in Figure 1. The specific steps are as follows: Step 1. Initialize the WOA algorithm, take the envelope entropy as the fitness function of the WOA, and obtain the global optimal parameters (K, α) for the VMD decomposition of the signal.
Step 2. Perform VMD decomposition of the vibration signal with the (K, α) obtained in Step 1 to obtain K IMF components, and select the best ones according to the PCC criterion.
Step 3. Select the optimal MPE parameters and calculate the MPE value of each IMF to form the feature data set.
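The WOA search that drives Step 1 can be sketched as a generic minimizer. In the pipeline above, fitness(x) would decompose the signal with the candidate (K, α) and return the minimum envelope entropy over the IMFs; here a simple test function stands in, and the function name and defaults are our own.

```python
import numpy as np

def woa_minimize(fitness, lb, ub, n_whales=20, max_iter=200, seed=0):
    """Whale optimization algorithm: encircling prey, bubble-net spiral,
    and random search, with coefficient a decreasing linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, (n_whales, dim))
    fit = np.array([fitness(x) for x in X])
    best = X[np.argmin(fit)].copy()
    best_fit = fit.min()
    for it in range(max_iter):
        a = 2.0 * (1 - it / max_iter)               # linearly decreasing coefficient
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):            # exploit: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                # explore: move toward a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                    # bubble-net spiral update
                l = rng.uniform(-1, 1, dim)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            f = fitness(X[i])
            if f < best_fit:                         # greedy best-so-far update
                best_fit, best = f, X[i].copy()
    return best, best_fit
```

In the real pipeline the first coordinate would be rounded to an integer K and the second taken as α before each VMD call.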

MPSO
The particle swarm optimization (PSO) algorithm is a global optimization algorithm with an efficient search mechanism. However, it easily falls into local optima, its accuracy decreases in late iterations, and its convergence is slow [39], so this paper adopts a modified particle swarm optimization (MPSO) algorithm. MPSO uses a linearly decreasing inertia weight and time-varying learning factors to improve the search ability and convergence speed of PSO. In the MPSO formulation, in a D-dimensional search space, the position of the p-th particle is X_p = (x_p1, x_p2, …, x_pD), its velocity is V_p = (v_p1, v_p2, …, v_pD), its personal best position is P_p = (p_p1, p_p2, …, p_pD), and the global best position over all particles is W_g = (w_g1, w_g2, …, w_gD).
The velocity and position update equations are

v_pd^(k+1) = ω v_pd^(k) + c_1 r_1 (p_pd − x_pd^(k)) + c_2 r_2 (w_gd − x_pd^(k))  (16)

x_pd^(k+1) = x_pd^(k) + v_pd^(k+1)  (17)

where ω is the inertia weight; c_1 and c_2 are the learning factors; and r_1 and r_2 are uniform random numbers in the range [0, 1]. The inertia weight ω represents the ability of a particle to maintain its velocity from the previous moment: a small ω strengthens the local search ability, while a large ω strengthens the global search ability. In the early stage of the search, the global search ability should dominate to avoid local optima; in the later stage, the local search ability should dominate to refine the solution. A linearly decreasing inertia weight balances the global and local search abilities of the algorithm:

ω = ω_max − (ω_max − ω_min) · g / g_max  (18)

where ω_max and ω_min are the maximum and minimum values of the inertia weight, g is the current iteration number, and g_max is the maximum number of iterations. The learning factor c_1 represents particle self-awareness and c_2 represents particle social awareness. To facilitate the search, self-awareness should be stronger in the early stage and social awareness stronger in the later stage. The time-varying learning factors are

c_1 = c_1s + (c_1f − c_1s) · g / g_max,   c_2 = c_2s + (c_2f − c_2s) · g / g_max  (19)

where c_1s and c_1f are the initial and final values of c_1, and c_2s and c_2f are the initial and final values of c_2; all are constants.
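The update rules above can be sketched as a compact optimizer (a simple sphere function stands in for the real fitness in the usage example; the function name and defaults are our own):

```python
import numpy as np

def mpso_minimize(fitness, lb, ub, n_particles=30, max_iter=150,
                  w_max=0.9, w_min=0.1, c1s=2.0, c1f=1.0, c2s=1.0, c2f=2.0, seed=0):
    """PSO with linearly decreasing inertia weight and time-varying
    learning factors (c1 decays, c2 grows over the iterations)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, (n_particles, dim))
    V = np.zeros((n_particles, dim))
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    g = np.argmin(pbest_fit)
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    for it in range(max_iter):
        frac = it / max_iter
        w = w_max - (w_max - w_min) * frac          # linearly decreasing inertia
        c1 = c1s + (c1f - c1s) * frac               # self-awareness decays
        c2 = c2s + (c2f - c2s) * frac               # social awareness grows
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lb, ub)
        fit = np.array([fitness(x) for x in X])
        better = fit < pbest_fit                    # update personal bests
        pbest[better], pbest_fit[better] = X[better], fit[better]
        g = np.argmin(pbest_fit)
        if pbest_fit[g] < gbest_fit:                # update global best
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest, gbest_fit
```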

LSSVM Fault Diagnosis Model Based on MPSO Optimization
When classifying rolling bearing faults with an LSSVM whose kernel is the radial basis function (RBF), the selection of the regularization parameter γ and the kernel parameter σ is critical; improper selection leads to a poor classification model. The initial values are chosen at random before classification, and in the past suitable parameters were selected by experience, which can cause underfitting or overfitting. The MPSO algorithm is therefore used to optimize the parameter combination (γ, σ), which avoids these disadvantages and greatly improves the classification accuracy of the LSSVM model. The specific process is shown in Figure 2. Step 1: Extract the fault features from the processed rolling bearing vibration signals and construct the training set and test set.
Step 2: Initialize particle swarm parameters. The dimension is two because the parameter combination (γ, σ) is optimized. The parameters of the algorithm are set and the initial swarm of particles is generated randomly.
Step 3: Calculate the accuracy error δ_e of each particle as its fitness value through Equation (20); the smaller the fitness value, the better the diagnosis result of the LSSVM model:

δ_e = r_y / (r_x + r_y)  (20)

where r_x is the number of correct classifications and r_y is the number of incorrect classifications.
Step 4: According to the particle fitness, the velocity and position of the particle are updated by Equations (16) and (17).
Step 5: If the maximum number of iterations or the termination condition is reached, end the loop and output the optimal parameter combination to construct the MPSO-LSSVM model. Otherwise, return to Step 3.
Step 6: Input the test set into the constructed MPSO-LSSVM model to obtain the fault diagnosis result.
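To make the classifier in Steps 3–6 concrete, here is a minimal binary LSSVM with an RBF kernel: training reduces to a single linear system, which is the practical payoff of replacing SVM's inequality constraints with equality constraints. This toy fixes (γ, σ) by hand, whereas the paper selects them with MPSO; the class layout and the data in the test are our own illustration.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel matrix between row-vector sets A (n x d) and B (m x d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class LSSVM:
    """Least squares SVM classifier. Training solves the linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]  with labels y in {-1, +1}."""
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):
        n = len(y)
        K = rbf_kernel(X, X, self.sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma   # equality-constraint ridge term
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self

    def predict(self, X):
        return np.sign(rbf_kernel(X, self.X, self.sigma) @ self.alpha + self.b)
```

In the MPSO loop, the fitness of a candidate (γ, σ) would be the error rate δ_e = wrong / (right + wrong) of such a model on held-out folds.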

Experiment
This paper uses the Case Western Reserve University bearing test bench data to verify the method [40]. Taking the signal of a bearing inner race fault as an example, the WOA is used to find the optimal parameter combination (K, α) for VMD decomposition. To verify the effectiveness of WOA in VMD parameter optimization, PSO-VMD and GA-VMD are used for comparison. The initial parameters are as follows: the maximum number of iterations is 40, the population size is 20, the average of 20 tests is taken, the range of K is [2, 10], and the range of α is [500, 6000]. The convergence comparison of the three optimization algorithms is shown in Figure 4. All three algorithms converge to the same fitness value of 3.4045, with WOA-VMD converging at the 12th generation, GA-VMD at the 18th, and PSO-VMD at the 26th, so the WOA-VMD fitness curve converges fastest. Table 1 shows the average running time of VMD under the three optimization algorithms: GA-VMD takes the longest and WOA-VMD the shortest. This shows that the WOA-VMD algorithm has advantages over the GA-VMD and PSO-VMD algorithms.
The WOA-VMD optimization is applied to the four bearing signals, and the fitness curves are shown in Figure 5. Figure 5a shows that the best fitness of the normal signal is obtained after 14 iterations, with a convergence value of 3.2228 and an optimal parameter combination (K, α) of (9, 2103). Figure 5b shows that the best fitness of the inner race fault signal is obtained after 12 iterations, with a convergence value of 3.4045 and an optimal (K, α) of (6, 3648). Figure 5c shows that the best fitness of the outer race fault signal is obtained after 15 iterations, with a convergence value of 3.2066 and an optimal (K, α) of (7, 2585). Figure 5d shows that the best fitness of the ball fault signal is obtained after nine iterations, with a convergence value of 3.0738 and an optimal (K, α) of (9, 3029). The results of the four optimizations are summarized in Table 2. The bearing signals are then decomposed by VMD, with the results shown in Figures 6-9: Figures 6a, 7a, 8a and 9a show the time-domain waveforms, and Figures 6b, 7b, 8b and 9b show the frequency spectra of the decomposed IMFs. It can be seen from Figures 6b, 7b, 8b and 9b that the IMFs have distinct center frequencies with no defects such as mode aliasing or signal distortion, so the original signals are decomposed effectively. According to the PCC, the Pearson correlation coefficients between each IMF and the original signal are calculated; the results are shown in Tables 3-6. It can be seen from Table 3 that the IMF1, IMF2, IMF3, and IMF5 components obtained by VMD decomposition of the normal (Normal) signal meet the PCC condition, with correlation values greater than 0.3. This indicates that these components are highly correlated with the original signal and carry abundant state information.
Therefore, the IMF1, IMF2, IMF3, and IMF5 components are selected as the key components. It can be seen from Table 4 that the IMF3, IMF4, IMF5, and IMF6 components obtained by VMD decomposition of the outer race fault (ORF) signal meet the PCC condition with correlation values greater than 0.3, so they are selected as the key components. It can be seen from Table 5 that the IMF4, IMF5, IMF6, and IMF7 components obtained by VMD decomposition of the ball fault (BF) signal meet the PCC condition, so they are selected as the key components. It can be seen from Table 6 that the IMF2, IMF3, IMF4, IMF5, and IMF6 components obtained by VMD decomposition of the inner race fault (IRF) signal meet the PCC condition. Since the normal, ORF, and BF signals each have only four IMF components satisfying the PCC condition, the IMF3, IMF4, IMF5, and IMF6 components of the IRF signal are selected as its optimal components in order to keep the dimension of the feature vectors the same. A new array U is formed from the optimal IMF components obtained above; the result is shown in Table 7.

Table 7. Optimal IMF components of each bearing state.
Normal: IMF1, IMF2, IMF3, IMF5
IRF: IMF3, IMF4, IMF5, IMF6
ORF: IMF3, IMF4, IMF5, IMF6
BF: IMF4, IMF5, IMF6, IMF7

The selection of the MPE parameters is extremely important and determines the accuracy of fault diagnosis. To determine the optimal MPE parameters, the embedding dimension is initially set to m = 6, the delay time to t = 1, and the maximum scale factor to τ = 20. Figure 10 shows the relationship between the MPE values of the array U and the scale factor τ. It can be seen from Figure 10a that when τ = 2, the differences in MPE values are largest and the four states can be clearly distinguished.
Therefore, the optimal scale factor for U1 is determined to be 2, and the same method is used to determine the optimal scale factors τ = 4, τ = 9, and τ = 5 for U2, U3, and U4; the results are shown in Table 8. According to the optimal scale factor τ, the optimal MPE values are selected to form the feature vectors. The feature vectors of the four states are normalized to the range (0, 1) to form the feature vector data set, which is shown in Table 9. Figure 11 is the boxplot of the feature vector U for the four types of bearing signals; it can be seen from Figure 11 that the feature vectors are relatively concentrated.

Analysis of Fault Diagnosis Results
The feature vector data set is input into the LSSVM model for classification, and the MPSO algorithm is used to optimize the model. The parameters of the MPSO-LSSVM algorithm are set as follows: c_1s, c_1f, c_2s, and c_2f are 2, 1, 1, and 2, respectively; ω_max is 0.9 and ω_min is 0.1; the number of particles is 30; the number of iterations is 200; and the ranges of the regularization parameter and the radial basis kernel parameter are both [0.1, 100]. For this multi-class problem, the sample data is grouped and trained by K-fold cross-validation with K = 10: each subset in turn is used as the validation set while the remaining nine subsets are combined as the training set and fed into the MPSO-LSSVM model for training. The accuracies of the 10 models obtained through training are shown in Figure 12; the average accuracy is 99.75%, indicating that the model discriminates the fault types of rolling bearings very well and effectively avoids over-fitting. In order to verify whether the model trained by K-fold cross-validation generalizes well, 20 test samples each of the normal, inner race fault, outer race fault, and ball fault states, 80 in total, are classified and identified. The fitness curve over the iterations is shown in Figure 13. The MPSO algorithm finds the optimal LSSVM parameter combination (γ, σ) = (30.65, 7.13), and the accuracy of the model is 99.88%. The classification result is shown in Figure 14; as can be seen, the classification rate on the test set is 100%.
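The K = 10 cross-validation split described above can be sketched as follows (plain NumPy index bookkeeping; the shuffling seed is our own choice):

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and yield (train_idx, val_idx) pairs:
    each fold serves once as the validation set, the rest as training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Each of the K models is trained on the train indices and scored on the val indices; the reported cross-validation accuracy is the average over the K folds.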
In order to rule out chance in the experimental results, the test set is evaluated 20 times and the average of the 20 results is taken. Table 10 shows the diagnosis results: the average accuracy after optimizing the LSSVM model with the MPSO algorithm is 100%, which shows that MPSO-LSSVM pattern recognition has a strong adaptive capability. The comparison schemes are shown in Figure 15; each method is likewise tested 20 times and the average value taken, with the specific diagnosis results listed in Table 11. Figure 16 shows the identification results of the different methods: the WOA-VMD-MPE-MPSO-LSSVM method presented in this paper has the highest accuracy, while the VMD-SE-MPSO-LSSVM method has the lowest. From Table 11, it can be seen that the accuracies of the PSO-SVM, LSSVM, and PSO-LSSVM models in identifying the feature vectors constructed by WOA-VMD-MPE are 97.80%, 98.88%, and 99.38%, respectively, all lower than that of the method proposed in this paper. The accuracy of the MPSO-LSSVM model in identifying the feature vectors constructed by VMD-SE, VMD-MPE, and PSO-VMD-MPE is 96.44%, 97.50%, and 98.94%, respectively, all lower than with WOA-VMD-MPE. This analysis verifies the effectiveness of the MPSO-LSSVM fault diagnosis method based on the combination with WOA-VMD-MPE.

Conclusions
A fault diagnosis method based on a least squares support vector machine (LSSVM) optimized by the modified particle swarm optimization (MPSO) algorithm, combined with parameter-optimized variational mode decomposition (VMD) and multi-scale permutation entropy (MPE), is proposed in this paper. The main conclusions are as follows: (1) The whale optimization algorithm (WOA) is used to optimize the penalty factor α and the number of mode components K of the VMD algorithm, solving the problems of insufficient decomposition and mode mixing caused by improper selection of these parameters. (2) In order to extract fault features more accurately, the Pearson correlation coefficient (PCC) criterion is introduced to screen out the optimal IMFs, and the multi-scale permutation entropy of the optimal IMFs is calculated to form the feature vectors. Experimental results show that WOA-VMD-MPE extracts more accurate features than the VMD-SE, VMD-MPE, and PSO-VMD-MPE methods.
(3) In order to improve the generalization ability of the MPSO-LSSVM model, K-fold cross-validation is performed, and the average accuracy of the model reaches 99.75%. The test samples are then input into the model for classification to verify its generalization ability; the results show that the accuracy of rolling bearing fault identification is 100%. Meanwhile, compared with the PSO-SVM, LSSVM, and PSO-LSSVM methods, the MPSO-LSSVM fault diagnosis model has higher identification accuracy.
A remaining limitation of this scheme is that uncertainty in data acquisition is not considered: some of the vibration information may not be captured at all, which could be remedied by supplementing the measurements with acoustic emission techniques.