Article

Feature Extraction of a Planetary Gearbox Based on the KPCA Dual-Kernel Function Optimized by the Swarm Intelligent Fusion Algorithm

Yan He, Linzheng Ye and Yao Liu
1 School of Mechanical Engineering, North University of China, Taiyuan 030051, China
2 Shanxi Key Laboratory of Advanced Manufacturing Technology, North University of China, Taiyuan 030051, China
* Author to whom correspondence should be addressed.
Machines 2024, 12(1), 82; https://doi.org/10.3390/machines12010082
Submission received: 13 December 2023 / Revised: 15 January 2024 / Accepted: 18 January 2024 / Published: 21 January 2024
(This article belongs to the Special Issue Advancements in Mechanical Power Transmission and Its Elements)

Abstract

The feature extraction problem for the coupled vibration signals of planetary gears with multiple fault modes has not been solved effectively. At present, kernel principal component analysis (KPCA) is usually used to solve nonlinear feature extraction problems, but the selection of the kernel function and the blind setting of its parameters greatly affect the performance of the algorithm. Theoretical modeling of kernel parameter optimization is therefore urgently needed to improve the performance of KPCA. Aiming at the deficiency of the single-kernel function for the nonlinear mapping of KPCA feature extraction, a dual-kernel function based on the flexible linear combination of a radial basis kernel function and a polynomial kernel function is proposed. To set the kernel parameters and the flexible weight coefficient scientifically, a mathematical model for dual-kernel parameter optimization was constructed based on a Fisher criterion discriminant analysis. In addition, this paper puts forward a swarm intelligent fusion algorithm, combining the shuffled frog leaping algorithm with particle swarm optimization (SFLA-PSO), to exploit the strengths of both methods for optimization problems. The new fusion algorithm was applied to optimize the kernel parameters and thus improve the performance of KPCA nonlinear mapping. The optimized dual-kernel-function KPCA (DKKPCA) was applied to the feature extraction of planetary gear wear damage, and it identified the fuzzy damage boundaries of the planetary gearbox well. The DKKPCA optimized by the SFLA-PSO swarm intelligent fusion algorithm not only effectively improves feature extraction performance, but also enables the adaptive selection of the dual-kernel parameters and the adjustment of the weights of the basic kernel functions; the method therefore has great potential for practical use.

1. Introduction

Feature extraction is an important prerequisite for pattern recognition. Its methods can be divided into those based on signal processing and those based on learning [1]. The former were developed earlier, with typical algorithms including the Fourier transform, the Gabor filter, and the wavelet transform [2]. The latter have emerged over the last ten years; they use dimension reduction to map the data from the original space onto a low-dimensional space whose data largely preserve the essential characteristics of the original space. Typical algorithms include principal component analysis (PCA), kernel principal component analysis (KPCA) [3], and linear discriminant analysis (LDA) [4]. Feature extraction based on signal processing extracts features in a transform domain and emphasizes the individual information of the samples, while learning-based methods mainly consider the statistical characteristics of all the samples and extract the typical features that express the original information.
In recent years, kernel learning methods have achieved some success in the research fields of feature extraction, pattern recognition, data mining, and image and signal processing. To a certain extent, they solve the nonlinear problems of actual systems and improve the accuracy of pattern recognition and prediction. For example, kernel principal component analysis (KPCA) and an observer method were applied to detect faults in a hydraulic system by analyzing the variation in water volume, and achieved good results for fault detection in complex and nonlinear systems [5]. A rolling bearing fault diagnosis method based on the Volterra series and KPCA has also been proposed [6]. However, kernel learning methods face a bottleneck that affects algorithm performance: the selection of the kernel function and its parameters. The kernel method is based on the kernel function, which can currently be divided into local kernels and global kernels. For a local kernel, the kernel value changes appreciably only when the distance between the data points is very small or the two features are similar; for a global kernel, on the contrary, even a large distance between the data points or a significant feature difference strongly influences the kernel value. In general, the radial basis function (RBF) kernel is the typical local kernel and the polynomial kernel the typical global kernel, and their mapping performances differ as their parameters change. The different kernel functions and their parameters thus reflect differences in mapping performance for global or local data.
In the practical application of kernel analysis methods, the type of kernel function is generally selected according to the data characteristics of the practical problem [7]. Each kernel function has its own limitations: in reality, a single kernel mapping can only reflect a certain feature and often cannot effectively describe the whole feature set. Moreover, as engineering problems become more complex and diverse, problems such as large sample data sets or uneven high-dimensional data are increasingly common, and the defects of the single-kernel function become more obvious. Therefore, researchers began to combine multiple kernel functions. Lei et al. proposed a relevance vector machine prediction method based on an adaptive multi-kernel combination, which was applied to the remaining useful life prediction of machinery [8]. Fu et al. proposed a combined-kernel relevance vector machine with quantum particle swarm optimization for transformer fault diagnosis [9]. Deng et al. modified kernel principal component analysis using a dual-weighted local outlier factor and applied this method to nonlinear process monitoring [10]. Pan et al. put forward a class-mean kernel principal component fault diagnosis method with a combined kernel function for monitoring the distillation process [11].
In recent years, it has been proven that the performance of nonlinear mapping using multiple kernels is better than that using a single-kernel model or a combination of single-kernel machines. The construction methods for the combinatorial kernel can be divided as follows: the first is multi-kernel linear combination synthesis, of the form $K = \sum_{i=1}^{q} \beta_i K_i$, with $\beta_i \geq 0$ and $\sum_{i=1}^{q} \beta_i = 1$, where q is the number of basic kernel functions, $K_i$ is the ith basic kernel function, and $\beta_i$ is its weight coefficient; the second is multi-kernel extended synthesis, which attempts to fuse the different kernel matrices through the concept of summation "averaging" [12]. For example, Lv et al. derived a hybrid kernel function combining the RBF kernel with the polynomial kernel, and introduced it into an extreme learning machine to improve the accuracy of an intrusion detection system [13]. Nithya et al. proposed a method for kidney disease detection and segmentation in ultrasound images using an artificial neural network and multi-kernel K-means clustering [14]. Ouyang et al. proposed a hybrid improved kernel LDA and PNN algorithm for efficient face recognition [15]. Afzal et al. achieved deep multilayer kernel learning in core vector machines for classification [16].
For the combinatorial kernel function, the scientific setting of the kernel parameters is the key to improving the performance of the kernel learning algorithm. In past studies, the kernel parameters were set according to experimental or empirical values with considerable blindness. Current approaches rely mainly on experimental correction, which is inefficient, together with data-dependent methods and intelligent optimization-based methods [17]. Applying swarm intelligence optimization to the kernel function and its parameters is a new research hotspot for improving this situation and enhancing the performance of kernel learning methods [18,19].
Particle swarm optimization (PSO) and the shuffled frog leaping algorithm (SFLA) are both swarm intelligence optimization algorithms [20,21]. PSO, derived from the behavioral characteristics of biological populations, searches for the optimum by simulating the foraging behavior of bird flocks, with competition among the individuals driving the search. The SFLA simulates the group foraging of frogs by introducing a meme evolution mechanism into the group search; organized social behavior replaces the natural selection mechanism of evolutionary algorithms. The method has been widely used in optimization problems, path selection, and fault diagnosis [22].
As complex compound gear transmission systems, planetary gearboxes, with their compact structure, large transmission ratio, high transmission efficiency, and smooth operation, have been widely used in automobile gearboxes, wind turbines [23], and aviation engines. They play an important role in industries such as coal, energy, advanced manufacturing, and wind power. Owing to the harsh working environment, gear and bearing damage and failures often occur; due to the varying degrees of damage, diverse fault modes, and complex vibration transmission paths, the problems of feature extraction, state recognition, and fault diagnosis for the multi-fault-mode coupled vibration signals of planetary gears have not been effectively solved. At present, KPCA is commonly used for nonlinear feature extraction. This paper focuses on the transmission boxes of wind turbines and establishes a planetary gearbox simulated-fault experimental platform for feature extraction and fault diagnosis research.
In this paper, aiming at the deficiency of the nonlinear mapping of the single-kernel function for KPCA feature extraction, a dual-kernel function with a flexible linear combination of the RBF and polynomial function is proposed. In order to increase the scientificity of the setting of the kernel parameters and the flexible weight coefficient, a mathematical model for the optimization of the kernel parameters was constructed, based on Fisher’s discriminant thought. Making full use of the advantages of the swarm intelligence fusion algorithm for solving optimization problems, the SFLA-PSO fusion algorithm is presented, and is used to find the optimal solution for kernel parameters and the flexible weight coefficient. Lastly, a KPCA of the dual-kernel function optimized by the SFLA-PSO fusion algorithm was applied to the feature extraction of planetary gear wear.
The paper is organized as follows: First, the kernel learning method is introduced, and the important role of the combined kernel function and its parameters in KPCA feature extraction is summarized, motivating the need for kernel parameter optimization with a swarm intelligence algorithm. In Section 2, the SFLA and PSO fusion mechanism is described, and the flow chart of the swarm intelligence fusion algorithm is given. The constitution of the dual-kernel function of KPCA and its optimization model, as well as the simulation and comparative analysis on the Iris data, are described in Section 3. The experimental scheme for planetary gearbox failure is described in Section 4. The feature extraction for the planetary gearbox based on the KPCA dual-kernel parameters optimized by the SFLA-PSO is presented, and the results are discussed in detail, in Section 5. Finally, we present the conclusions in Section 6.

2. Swarm Intelligence Fusion Algorithm

2.1. The SFLA and PSO Fusion Mechanism

As a new branch of evolutionary algorithms, the swarm intelligence optimization algorithm has many advantages, such as its simplicity, easy implementation, low requirements on the mathematical behavior of the target problem, and high efficiency, so it has attracted the attention of scholars [24]. PSO, one form of swarm intelligence optimization, has the advantage of fast convergence but easily falls into local convergence, while the SFLA has a strong global search and optimization ability. Like other evolutionary methods, the SFLA goes through a series of processes, such as frog population generation, evolution, crossover, and information exchange; its evolution mechanism is more flexible, but its computational complexity is high and its convergence slow. The fusion of different algorithms therefore aims to utilize the unique advantages of the original algorithms, achieving mutual coordination and complementary strengths and forming a robust and more efficient algorithm [25]. Fusing swarm intelligence algorithms involves constructing a new algorithm to replace the originals through strategies concerning the population size, the population structure, individual learning or interaction, and the population evolution mode (selecting, eliminating, and updating individuals). Following the concept of an elite strategy, the SFLA is combined with PSO in this study under a strategy of "two-level optimization with internal and external circulation", forming the SFLA-PSO swarm intelligent fusion algorithm. Compared to other improved PSO and SFLA methods, the concept of the fusion algorithm is simple, no additional parameters are added, and it is easy to implement.
The SFLA-PSO fusion strategy is as follows: Firstly, the randomly generated particles are divided into npso subgroups of equal size. The particles in each subgroup evolve independently according to the PSO mechanism to achieve the optimization of the first level. Then, the optimal particles in each subgroup are taken out to form a new population, i.e., the initial frog group. According to the SFLA mechanism, the optimal frog position is that which realizes the optimization of the second level. Finally, the best position in the SFLA subgroup is reflected in the speed of the particle swarm optimization update, which can effectively guide PSO evolution and avoid falling into the local optimum. Using this strategy not only reduces the blindness of the initial SFLA group, but also enables a fine search within the SFLA.
The particle velocity and position update for the PSO optimization of the first level are shown in Equations (1) and (2) [20]:
$v_{jid}(t+1) = \omega v_{jid}(t) + c_1 r_1 [p_{jid} - x_{jid}(t)] + c_2 r_2 [p_{jgd} - x_{jid}(t)]$ (1)
$x_{jid}(t+1) = x_{jid}(t) + v_{jid}(t+1)$ (2)
where j = 1, 2, …, npso is the subgroup number of the particle swarm; i = 1, 2, …, np denotes the ith particle; pjid and pjgd represent the current optimal position of the ith particle and the global optimal position of the particles in the jth group, respectively; xjid is the current position of the ith particle in the jth group; and vjid is its current velocity, with vjid ∈ [−vlimit, vlimit], where vlimit is the maximum speed limit. c1 and c2 are the learning factors, r1 and r2 are uniform random numbers within the range [0, 1], ω is the inertia weight, and t is the evolution generation.
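As a concrete reading of Equations (1) and (2), the following is a minimal NumPy sketch of one first-level PSO step; the function name, array shapes, and default values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pso_update(x, v, p_best, g_best, w=0.8, c1=2.0, c2=2.0, v_limit=1.0):
    """One PSO step per Equations (1) and (2) for one subgroup.

    x, v, p_best: (np, d) arrays; g_best: (d,) best position found by the subgroup.
    """
    r1 = np.random.rand(*x.shape)          # r1, r2 ~ U[0, 1]
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v_new = np.clip(v_new, -v_limit, v_limit)   # enforce v in [-vlimit, vlimit]
    return x + v_new, v_new
```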
In the SFLA optimization of the second stage, the frog's step length and position updates are shown in Equations (3)–(5) [21].
The step length update formula is
$S_i = r \times (P_b - P_w)$ (3)
$S_i = r \times (P_g - P_w)$ (4)
where r is a uniform random number within [0, 1], Pb is the optimal frog position in the meme group, Pw is the worst frog position in the meme group, Pg is the best frog position in the whole group, and Si is the frog step length, Smin ≤ Si ≤ Smax.
The position update formula for the worst-positioned frog is
$P_w' = P_w + S_i$ (5)
where $P_w'$ is the updated position of the frog, which is kept within the range [Pmin, Pmax]. When the position of the worst-positioned individual is not improved by this evolution, Pg, the best frog of the entire group, is used in place of Pb, as shown in Equation (4), and Formula (5) is then applied again.
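The worst-frog update of Equations (3)–(5) can be sketched as below; the random reposition used when neither Pb nor Pg improves the frog is the standard SFLA fallback, which is an assumption beyond the text above, and the symmetric step clipping stands in for Smin ≤ Si ≤ Smax.

```python
import numpy as np

def sfla_update_worst(p_w, p_b, p_g, fitness, s_max=1.0, p_min=-5.0, p_max=5.0):
    """Update the worst frog per Equations (3)-(5); lower fitness is better."""
    for leader in (p_b, p_g):                        # try Pb (Eq. 3), then Pg (Eq. 4)
        step = np.random.rand(*p_w.shape) * (leader - p_w)
        step = np.clip(step, -s_max, s_max)          # step length bound
        candidate = np.clip(p_w + step, p_min, p_max)  # Eq. (5), kept in [Pmin, Pmax]
        if fitness(candidate) < fitness(p_w):
            return candidate
    # standard SFLA fallback (assumption): reposition the frog at random
    return np.random.uniform(p_min, p_max, size=p_w.shape)
```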

2.2. Fusion Algorithm Flow

The flow chart for the SFLA-PSO fusion algorithm is shown in Figure 1. The specific optimization process is as follows:
(1) The relevant parameters of the SFLA-PSO fusion algorithm are set. The parameters for PSO include the total number of particles in the initial particle swarm N, the number of particles per subgroup np, the number of particle subgroups npso, the particle dimension d, the total number of iterations Tmax, the inertia weight ω, the learning factors c1 and c2, and the velocity limit vlimit. The parameters of the SFLA include the number of meme groups m, the number of frogs in each meme group n, the total number of SFLA iterations Gmax, the number of local cycles Lmax, and the maximum step Smax, where N = npso × np and npso = n × m.
(2) The particle swarm is initialized. After the fitness values are sorted, the particles are divided into npso groups with np particles in each group.
(3) PSO iterative optimization is carried out for each group: the fitness of each particle is evaluated, and the optimal particle Pjid of each group is recorded.
(4) According to Formulas (1) and (2) of PSO, the velocity and position of the particles in each group are updated to complete the first-level optimization.
(5) The optimal particles of each group (called elites) optimized by PSO are regarded as the frogs of the SFLA, and their fitness values are evaluated and ranked.
(6) According to the SFLA grouping mechanism, the elite particles are divided into m groups with n particles in each group.
(7) According to the updating Formulas (3)–(5) of the SFLA, the local optimal and worst values of each population are recorded. After all the groups are updated, the global optimal value is updated to find the best Xgd, completing the second-level optimization.
(8) If the end condition is satisfied, that is, the predetermined accuracy requirement is met or the set maximum iteration number is reached, Xgd is output, and the algorithm stops; otherwise, the procedure returns to (4).
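Steps (1)–(8) could be organized as in the following schematic, which reuses the pso_update and sfla_update_worst sketches above; the default sizes and the way Xgd is fed back into the subgroups are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def sfla_pso(fitness, dim, n_pso=4, n_p=10, m=2, t_max=50,
             lo=-5.0, hi=5.0, v_limit=1.0):
    """Schematic SFLA-PSO: PSO inside each subgroup, SFLA over the subgroup elites."""
    x = np.random.uniform(lo, hi, (n_pso, n_p, dim))   # npso subgroups of np particles
    v = np.zeros_like(x)
    p_best = x.copy()
    f_best = np.array([[fitness(p) for p in grp] for grp in x])
    for _ in range(t_max):
        for j in range(n_pso):                  # first level: PSO per subgroup
            g = p_best[j][np.argmin(f_best[j])]
            x[j], v[j] = pso_update(x[j], v[j], p_best[j], g, v_limit=v_limit)
            x[j] = np.clip(x[j], lo, hi)
            f_now = np.array([fitness(p) for p in x[j]])
            better = f_now < f_best[j]
            p_best[j][better], f_best[j][better] = x[j][better], f_now[better]
        # second level: the subgroup elites form the SFLA frog population
        frogs = np.array([p_best[j][np.argmin(f_best[j])] for j in range(n_pso)])
        order = np.argsort([fitness(p) for p in frogs])
        for k in range(m):                      # m memeplexes of npso/m frogs each
            idx = order[k::m]                   # shuffled grouping
            meme = frogs[idx]
            fit = [fitness(p) for p in meme]
            worst, best = int(np.argmax(fit)), int(np.argmin(fit))
            meme[worst] = sfla_update_worst(meme[worst], meme[best],
                                            frogs[order[0]], fitness,
                                            p_min=lo, p_max=hi)
            frogs[idx] = meme
        x_gd = min(frogs, key=fitness)          # global best of this generation
        for j in range(n_pso):                  # feed Xgd back to guide each subgroup
            w_idx = int(np.argmax(f_best[j]))
            p_best[j][w_idx], f_best[j][w_idx] = x_gd, fitness(x_gd)
    return x_gd

# e.g., sfla_pso(lambda p: float(np.sum(p ** 2)), dim=3) drives the swarm toward 0
```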

3. The Composition and Optimization of the Dual-Kernel Function of KPCA

3.1. The Principle of KPCA Algorithm Feature Extraction

The detection data can be abstracted as the vector $y_i = (y_{i1}, y_{i2}, \ldots, y_{iL})$, where $y_{iL}$ is the Lth-dimension component of $y_i$. The KPCA feature extraction steps are as follows [26]:
(1) The data are mapped to the high-dimensional feature space via the nonlinear transformation Φ(yi), and the kernel matrix can be calculated. The covariance matrix $C^F$ in the feature space of the mapped training samples is
$C^F = \frac{1}{M} \sum_{i=1}^{M} \Phi(y_i) \Phi(y_i)^T$ (6)
where M is the number of training samples. The kernel matrix K is defined as
$K_{ij} = \langle \Phi(y_i), \Phi(y_j) \rangle = K(y_i, y_j), \quad i, j = 1, 2, \ldots, M$ (7)
where K is the kernel function: a function of two vectors in the original space whose value equals the inner product $\langle \Phi(y_i), \Phi(y_j) \rangle$ in the feature space, so that the nonlinear mapping is realized implicitly.
(2) Data centralization: if
$\sum_{i=1}^{M} \Phi(y_i) \neq 0$
then centering is carried out, and K is replaced by $K^* = K - AK - KA + AKA$, where $A_{ij} = 1/M$.
(3) The eigenvalues λ and eigenvectors v of the covariance matrix $C^F$ are calculated, satisfying
$\lambda v = C^F v$ (8)
By substituting Equation (7) into Equation (8), the following equation can be obtained:
$M \lambda \alpha = K \alpha$ (9)
The eigenvalues greater than zero, $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p$, and the corresponding eigenvectors, $\alpha^1, \alpha^2, \ldots, \alpha^p$, are obtained by solving Equation (9), giving
$V^k = \sum_{i=1}^{M} \alpha_i^k \Phi(y_i)$ (10)
(4) Extraction of the principal components: the projection of Φ(y) onto the eigenvector $V^k$ is calculated:
$V^k \cdot \Phi(y) = \sum_{i=1}^{M} \alpha_i^k (\Phi(y_i) \cdot \Phi(y))$ (11)
where $\Phi(y_i) \cdot \Phi(y)$ can be calculated using the kernel technique, i.e., a function for which $K_{ij} = \Phi(y_i) \cdot \Phi(y_j)$ holds. Let
$g_k(y) = V^k \cdot \Phi(y) = \sum_{i=1}^{M} \alpha_i^k K(y_i, y)$ (12)
where K is the kernel function and $g_k(y)$ is the kth nonlinear kernel principal component corresponding to Φ(y). All the projected values together form the feature vector of the sample:
$g(y_i) = (g_1(y_i), g_2(y_i), \ldots, g_p(y_i))$
As long as the first r largest eigenvalues of K satisfy $\sum_{j=1}^{r} \lambda_j \big/ \sum_{j=1}^{p} \lambda_j \geq 85\%$, r feature components can be retained from the above p components; thus, the feature vector of the sample is extracted as
$y_i' = (g_1(y_i), g_2(y_i), \ldots, g_r(y_i))$ (13)
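To make steps (1)–(4) concrete, a compact sketch is given below, assuming a generic kernel callable (such as those listed next); the function and variable names are ours, not the paper's.

```python
import numpy as np

def kpca_features(Y, kernel, energy=0.85):
    """KPCA feature extraction per steps (1)-(4); Y is an (M, L) sample matrix."""
    M = len(Y)
    K = np.array([[kernel(yi, yj) for yj in Y] for yi in Y])  # kernel matrix, Eq. (7)
    A = np.full((M, M), 1.0 / M)
    Kc = K - A @ K - K @ A + A @ K @ A        # centering: K* = K - AK - KA + AKA
    lam, alpha = np.linalg.eigh(Kc)           # solves M*lambda*alpha = K*alpha, Eq. (9)
    lam, alpha = lam[::-1], alpha[:, ::-1]    # eigenvalues in descending order
    pos = lam > 1e-12                         # keep eigenvalues greater than zero
    lam, alpha = lam[pos], alpha[:, pos]
    alpha = alpha / np.sqrt(lam)              # normalize so that ||V^k|| = 1
    ratio = np.cumsum(lam) / np.sum(lam)
    r = int(np.searchsorted(ratio, energy)) + 1   # smallest r reaching >= 85%
    return Kc @ alpha[:, :r]                  # g_k(y_i) for k = 1..r, Eqs. (12)-(13)
```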
The common kernel functions are as follows:
Polynomial kernel function (Poly):
$K(y_i, y_j) = [(y_i \cdot y_j) + 1]^{d_1}$ (14)
Radial basis kernel function (RBF):
$K(y_i, y_j) = \exp(-\|y_i - y_j\|^2 / 2\sigma^2)$ (15)
Sigmoid kernel function (Sigmoid):
$K(y_i, y_j) = \tanh[\nu (y_i \cdot y_j) + c]$ (16)
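Equations (14)–(16) translate directly into code; in a rough sketch (the parameter defaults are placeholders):

```python
import numpy as np

def poly_kernel(yi, yj, d1=2.0):
    """Polynomial kernel, Eq. (14)."""
    return (np.dot(yi, yj) + 1.0) ** d1

def rbf_kernel(yi, yj, sigma=1.0):
    """Radial basis (Gaussian) kernel, Eq. (15)."""
    diff = np.asarray(yi) - np.asarray(yj)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def sigmoid_kernel(yi, yj, nu=1.0, c=0.0):
    """Sigmoid kernel, Eq. (16)."""
    return np.tanh(nu * np.dot(yi, yj) + c)
```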

3.2. The Composition of the Dual-Kernel Function of Flexible Weight Linear Combination

At present, the combination kernel method typically employs a new kernel function generated by a linear combination of the simple kernel functions, which satisfies the Mercer condition and has had some successful applications [26,27]. However, there is no theoretical basis for the parameter selection or the combination of kernel functions, and the uneven distribution of samples cannot be solved satisfactorily, which greatly limits the expression ability of the decision function. Therefore, there is an urgent need to establish a mathematical model for combinatorial kernel optimization using theory and to theoretically optimize the weight coefficient and parameters of the combined kernel function using an intelligent algorithm [28].
Different kernel functions have different characteristics, and their performance on specific problems varies significantly; combining kernels is therefore an effective way to sidestep the kernel selection problem to some extent. The dual kernel can adopt the direct sum kernel, the weighted sum kernel, or the weighted polynomial extension kernel. In many practical applications, the Gaussian (RBF) kernel shows excellent properties, so it was used as the main function in this study, with another kernel selected for the linear combination. Since the Gaussian kernel is a typical local kernel, the typical global kernel, the polynomial kernel, was chosen to balance local and global behavior. Thus, the RBF kernel and the polynomial kernel were combined in a flexible linear combination, and the dual-kernel function expression is shown in Equation (17):
$K(y_i, y_j) = \gamma[(y_i \cdot y_j) + 1]^{d_1} + (1 - \gamma)\exp(-\|y_i - y_j\|^2 / 2\sigma^2)$ (17)
where γ is the flexible weight of the two kernel functions, with 0 ≤ γ ≤ 1, and σ and d1 are the parameters of the combined kernel function.
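Using the single kernels sketched in Section 3.1, the dual kernel of Equation (17) is one line; the defaults are placeholders.

```python
def dual_kernel(yi, yj, gamma=0.5, d1=2.0, sigma=1.0):
    """Eq. (17): gamma weights the global (Poly) part against the local (RBF) part,
    with 0 <= gamma <= 1."""
    return gamma * poly_kernel(yi, yj, d1) + (1.0 - gamma) * rbf_kernel(yi, yj, sigma)
```

For example, with the optimized Iris parameters of Section 3.4.2, it would be invoked as kpca_features(Y, lambda a, b: dual_kernel(a, b, 0.057, 1, 3.1)).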
With the help of Fisher's criterion (FC) of minimum within-class scatter and maximum between-class distance [29], a model with which to measure the class discrimination of the feature space data was constructed. For two-class problems, the classification can be determined by constructing the discriminant function and criterion [30].
In the feature space, the two classes of data samples are $y_1 = \{y_{11}, y_{12}, \ldots, y_{1n_1}\}$ and $y_2 = \{y_{21}, y_{22}, \ldots, y_{2n_2}\}$, and the mean vectors of the two groups of mapped samples are
$\mu_1 = \frac{1}{n_1}\sum_{i=1}^{n_1}\Phi(y_{1i})$ (18)
$\mu_2 = \frac{1}{n_2}\sum_{j=1}^{n_2}\Phi(y_{2j})$ (19)
The squared distance between the two class means is
$D_b = \|\mu_1 - \mu_2\|^2 = \frac{1}{n_1^2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_1}k(y_{1i}, y_{1j}) - \frac{2}{n_1 n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}k(y_{1i}, y_{2j}) + \frac{1}{n_2^2}\sum_{i=1}^{n_2}\sum_{j=1}^{n_2}k(y_{2i}, y_{2j})$ (20)
The within-class scatter of each class is
$D_{w1} = \sum_{i=1}^{n_1}\|\Phi(y_{1i}) - \mu_1\|^2 = \sum_{i=1}^{n_1}\Phi(y_{1i})^T\Phi(y_{1i}) - n_1\mu_1^T\mu_1 = \sum_{i=1}^{n_1}k(y_{1i}, y_{1i}) - \frac{1}{n_1}\sum_{i=1}^{n_1}\sum_{j=1}^{n_1}k(y_{1i}, y_{1j})$ (21)
$D_{w2} = \sum_{j=1}^{n_2}\|\Phi(y_{2j}) - \mu_2\|^2 = \sum_{j=1}^{n_2}\Phi(y_{2j})^T\Phi(y_{2j}) - n_2\mu_2^T\mu_2 = \sum_{j=1}^{n_2}k(y_{2j}, y_{2j}) - \frac{1}{n_2}\sum_{i=1}^{n_2}\sum_{j=1}^{n_2}k(y_{2i}, y_{2j})$ (22)
According to Fisher's criterion, the objective function, which serves as the fitness function of the swarm intelligence optimization, is established as
$J_{Fisher}(\gamma, d_1, \sigma) = \dfrac{D_{w1} + D_{w2}}{D_b}$ (23)
Because the mapping process is realized by means of the kernel function, the problem finally transforms into finding the γ, d1, and σ that minimize JFisher. The optimal kernel parameters γ*, d1*, and σ* can be found via the SFLA-PSO so that JFisher reaches its minimum value.
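Expressed through kernel matrices alone (no explicit Φ), Equations (20)–(23) become the following fitness sketch; the function and variable names are ours.

```python
import numpy as np

def j_fisher(params, Y1, Y2):
    """Fitness of Eq. (23): within-class scatter over between-class distance."""
    gamma, d1, sigma = params
    k = lambda a, b: dual_kernel(a, b, gamma, d1, sigma)
    K11 = np.array([[k(a, b) for b in Y1] for a in Y1])
    K22 = np.array([[k(a, b) for b in Y2] for a in Y2])
    K12 = np.array([[k(a, b) for b in Y2] for a in Y1])
    d_b = K11.mean() - 2.0 * K12.mean() + K22.mean()   # Eq. (20)
    d_w1 = np.trace(K11) - K11.sum() / len(Y1)         # Eq. (21)
    d_w2 = np.trace(K22) - K22.sum() / len(Y2)         # Eq. (22)
    return (d_w1 + d_w2) / d_b                         # Eq. (23)
```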

3.3. The Specific Steps of Dual-Kernel Parameter Optimization

The FC discriminant function JFisher is used as the fitness of the SFLA-PSO algorithm to optimize the parameters γ, d1, and σ of the dual-kernel function. The specific process is as follows:
(1) According to Fisher's criterion, the samples are input, and the squared between-class distance Db and the within-class scatters Dw1 and Dw2 are calculated using Formulas (20)–(22).
(2) The objective function JFisher is constructed using Equation (23) and used as the fitness of the swarm intelligence optimization.
(3) The parameters of the SFLA-PSO swarm intelligent fusion algorithm are set, and the particle swarm is initialized.
(4) The initial population is generated randomly, the fitness of each individual is calculated, and the velocities and positions are updated according to Formulas (1) and (2) of the PSO strategy.
(5) The optimal particles from the first-level PSO optimization are taken as the initial frogs of the SFLA and are grouped once more.
(6) According to the fitness ranking, the frogs' steps and positions are updated based on the frog leaping update Formulas (3)–(5) to locate the best position.
(7) If the objective function JFisher satisfies the termination condition, the optimal value JFisher(γ*, d1*, σ*) is output, and the algorithm stops. Otherwise, it returns to (4).
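Chaining the sketches above, steps (1)–(7) reduce to a short driver. The synthetic two-class data and the clamping of the particle to valid (γ, d1, σ) ranges below are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Y1 = rng.uniform(0.5, 1.5, (15, 4))    # stand-ins for two classes of samples
Y2 = rng.uniform(1.5, 2.5, (15, 4))

def clamp(p):
    """Map a raw particle to valid kernel parameters (gamma, d1, sigma)."""
    return (float(np.clip(p[0], 0.0, 1.0)),
            max(float(p[1]), 0.5),
            max(abs(float(p[2])), 1e-3))

best = sfla_pso(lambda p: j_fisher(clamp(p), Y1, Y2), dim=3,
                t_max=10, lo=0.0, hi=5.0)
g_star, d1_star, sigma_star = clamp(best)
Z = kpca_features(np.vstack([Y1, Y2]),
                  lambda a, b: dual_kernel(a, b, g_star, d1_star, sigma_star))
print(g_star, d1_star, sigma_star, Z.shape)
```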

3.4. Simulation and Comparative Analysis

In order to evaluate the application effect of the KPCA feature extraction method based on the dual-kernel function proposed in this paper (DKKPCA for short), a comprehensive comparison test was carried out.

3.4.1. Iris Plant Database

In the simulation experiment, the Iris data set was selected as the data source. It is divided into three classes of patterns, with 50 samples in each class, and each sample includes four attributes: the length and width of the sepal and the length and width of the petal. The set is commonly used for test and training sets in data mining and data classification. The first class is linearly separable from the second and third classes, while the second and third classes are nonlinearly separable from each other. The distribution diagram is shown in Figure 2.

3.4.2. Parameter Optimization of the Iris Data Set with the Dual-Kernel Function Based on the SFLA-PSO

In the simulation analysis, the first 25 samples of the Iris data set were taken as the training samples, and the SFLA-PSO was used to optimize the kernel parameters. The parameters of the SFLA-PSO were as follows: N = 200, nPSO = 20, np = 10, c1 = c2 = 2, the PSO iteration number was 10, F = 20, m = 5, n = 4, Lmax = 10, Tmax = 50, d = 10, and Smax = 20.
The optimization iteration process is shown in Figure 3. It can be seen that the minimum value is reached within the 50 evolution generations, and the three optimized parameters are γ* = 0.057, d1* = 1, and σ* = 3.1.

3.4.3. DKKPCA Feature Extraction of the Iris Data

The optimized parameters γ*, d1*, and σ* were used with the Iris data for the DKKPCA analysis, and the results are shown in Figure 4a,b. Figure 4a shows the DKKPCA scatter diagram for the principal components of the Iris data, and Figure 4b shows the histogram of the kernel principal component contribution rates. Because the combination of the RBF kernel and the polynomial kernel exploits both the fine-grained effect of the local kernel and the amplification effect of the global kernel, and because the weight γ and the kernel parameters d1 and σ are optimized, the clustering effect on the Iris samples is very pronounced. The three classes of Iris patterns are clustered around their respective centers, the data projection points of classes 1 and 2 are very compact, the between-class distances are large, and the regional boundaries between the classes are obvious.
For the comparative analysis, the results of the single-kernel KPCA are shown in Figure 5a,b for the polynomial kernel function and the RBF kernel function, respectively. From Figure 5a, it can be seen that the three classes of data are mixed together in the feature space and cannot be distinguished. From Figure 5b, class 1 is clearly separated from classes 2 and 3; however, some of the projection points of classes 2 and 3 are still mixed due to their nonlinear inseparability, and the boundary between them is not obvious. The KPCA results are listed in Table 1 for the dual-kernel function (DKKPCA), the polynomial kernel function (KPCA_Poly), and the radial basis kernel (KPCA_RBF). In each case, the contribution rate of the first two principal components is more than 85%; that is, two kernel principal components can completely replace the original four Iris attributes, halving the feature dimension.
In order to compare the effect of KPCA before and after the parameter optimization of the dual-kernel function, the KPCA results before the optimization of the dual kernel are given, as shown in Figure 6. The results for parameters γ = 0.5, d1 = 1, and σ = 3.389 are shown in Figure 6a, and those for γ = 0.06, σ = 3.333, and d1 = 1 are shown in Figure 6b. It can be seen from these that due to the influence of the three parameters, the Iris data cannot be well separated linearly after nonlinear mapping onto the feature space, especially for classes 2 and 3. The optimized results for the dual-kernel parameters shown in Figure 4a are obviously better than those of the KPCA before optimization in Figure 6.
Therefore, the DKKPCA optimized by the intelligent fusion algorithm is not only suitable for solving the nonlinear feature extraction problem, but can also provide better feature quality than the linear dimension reduction method and can greatly enhance the ability of nonlinear data processing.

4. The Simulation Failure Experiment for the Planetary Gearbox

4.1. Experimental Scheme of the Planetary Gearbox

As shown in Figure 7, the planetary gearbox fault diagnosis testbed consists of a control cabinet, a motor, a two-stage transmission box, a magnetic powder brake, etc. The structural diagram of the transmission system of the experimental platform is shown in Figure 8. The first stage is a helical gear transmission, and the second stage is a 2K-H planetary gear transmission, which includes an inner gear ring, a sun gear, and three planet gears. In this study, the fault modes of the planetary gear transmission system of a wind power system were simulated, and the speed range was 75–3000 r/min. Table 2 lists the equipment's technical parameters.
The motor provides power, the helical gearbox acts as a reducer, and the planetary gearbox is the object of study. The magnetic powder brake acts as the load, and the components are connected by elastic couplings. We controlled the motor speed to adjust the shaft speed and simulated different loads by setting the parameters of the magnetic powder brake. According to the testing plan, six measuring points, shown in Figure 9, were arranged: four (1 to 4) were distributed on the helical gearbox, and the other two (5 and 6) on the planetary gearbox. The vibration signals were measured using six unidirectional piezoelectric acceleration sensors (CA-YD-186G), whose signal lines were connected to the DASP signal acquisition instrument and a computer, completing the entire acquisition system.

4.2. Analysis of Vibration Signal

The vibration signals at each measuring point on the box body were measured. According to the experimental scheme, the vibration signals of the normal state and three planetary gear tooth-surface wear states were measured and analyzed, and the characteristic parameters were extracted. They include 21 time-domain features, such as the mean value, mean square value, maximum value, minimum value, variance, root-mean-square value, root amplitude, absolute average amplitude, skewness, kurtosis, peak, and sixth-order moment, and six frequency-domain features, namely, the frequency-domain variance, correlation factor, power spectrum barycenter index, mean square spectrum, harmonic factor, and origin moment of the spectrum. Sixty groups of training samples and sixty groups of test samples were extracted. After standardization, the KPCA dual-kernel parameter optimization and feature extraction were carried out.
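As an illustration of how such characteristic parameters might be computed from one vibration record, a sketch of a few of the 21 time-domain and six frequency-domain features follows; the subset, names, and formulas shown are ours, not the paper's full list.

```python
import numpy as np

def time_domain_features(x):
    """A handful of the time-domain features named above (illustrative subset)."""
    mu, sd = np.mean(x), np.std(x)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": mu,
        "mean_square": np.mean(x ** 2),
        "variance": np.var(x),
        "rms": rms,
        "skewness": np.mean((x - mu) ** 3) / sd ** 3,
        "kurtosis": np.mean((x - mu) ** 4) / sd ** 4,
        "peak": np.max(np.abs(x)),
    }

def frequency_domain_features(x, fs):
    """Two of the frequency-domain features (illustrative subset)."""
    p = np.abs(np.fft.rfft(x)) ** 2                  # power spectrum
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = np.sum(f * p) / np.sum(p)             # power spectrum barycenter
    fd_var = np.sum((f - centroid) ** 2 * p) / np.sum(p)
    return {"spectrum_barycenter": centroid, "frequency_domain_variance": fd_var}
```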

5. Feature Extraction of the Planetary Gearbox Based on the KPCA Dual-Kernel Parameter Optimized by the SFLA-PSO

5.1. Optimization of the Dual-Kernel Parameters of KPCA Based on the SFLA-PSO

In order to improve the mapping performance of KPCA, the SFLA-PSO fusion algorithm was used to optimize the parameters of the dual-kernel function for the data sets for the normal state, one-tooth wear, two-tooth wear, and three-tooth wear of the planetary gear (referred to as models A, B, C, and D, respectively). The parameters of the SFLA-PSO fusion algorithm are shown in Table 3, and the optimization was carried out after substituting the sample data. The optimization results for the kernel parameters of the different wear modes are shown in Table 4.
The evolutionary optimization process is shown in Figure 10a,b, from which it can be seen that the fitness function JFisher reaches its minimum value and yields the optimal parameters within 100 iteration steps. Evidently, different flexible weight coefficients γ* and kernel parameters d1* and σ* should be adopted when DKKPCA mapping is applied to identify the different wear models.

5.2. KPCA Feature Extraction of the Planetary Gear Wear Based on the Dual-Kernel Optimization

Based on the test results of the normal state and the planetary gear wear in the experiment, the kernel parameters optimized by the SFLA-PSO were substituted into the DKKPCA to analyze the characteristics of the different wear models of the planetary gear. The analysis flow is shown in Figure 11.
The KPCA analysis results are shown in Figures 12–14. Figure 12a1–c1 show the scatter diagrams of the KPCA principal components with the single RBF kernel before parameter optimization; the kernel parameter σ was empirically set as follows: ABC: σ = 10.25; BCD: σ = 5.4438; ABCD: σ = 23.1. Figure 13a2–c2 show the DKKPCA principal components before the dual-kernel parameter optimization, for which the three parameters were set according to experience: ABC: γ = 0.07, σ = 23.37, and d1 = 1.2; BCD: γ = 0.009, d1 = 1, and σ = 5.4438; ABCD: γ = 0.2, d1 = 2, and σ = 10. Figure 14a3–c3 show the DKKPCA principal components after the dual-kernel parameter optimization, with the three parameters set to the optimization results γ*, d1*, and σ*.

5.3. The Analysis of the Obtained Results

As can be seen from Figure 12a1–c1, the data points for the normal model are relatively scattered and interspersed with those of the three tooth-wear models. Although B, C, and D have their own clustering centers, some of the samples of the three wear models are interwoven, and there is no obvious boundary with which to accurately distinguish the wear models. In contrast, Figure 13a2–c2 show that the dual-kernel function of the DKKPCA greatly improves the recognition of the planetary gear wear states, although some features of ABCD still intersect and are not completely distinguished, especially for the normal state (model A) and the slight wear (model B) of one tooth. The comparison of the KPCA scatter diagrams of the planetary faults using the single-kernel KPCA and the DKKPCA therefore indicates that the DKKPCA has a better nonlinear mapping performance than the single-kernel KPCA and is suitable for intricate processes and nonlinear dimensionality reduction.
It can be seen from Figure 14a3–c3 that, regardless of whether three or four models are employed, each class is highly clustered around a point after DKKPCA mapping. The categories B, C, and D of the three different wear models show no internal scatter, the class spacing is large, and the categories are clear and divisible. This is because the interference from features with small contribution rates is eliminated after the high-dimensional nonlinear mapping. When the dual-kernel function with the flexible weight linear combination is adopted for the DKKPCA, the optimized kernel parameters avoid the blindness of parameter selection, and the accuracy of fault feature extraction and identification is significantly improved.
The KPCA method proposed in this paper was also applied to a gearbox (JZQ-250 type) [30], for which the simulated failures included normal working conditions, a bearing outer-ring crack on the intermediate shaft, a broken gear tooth with a bearing cage fracture, and a broken gear tooth combined with a fracture of the bearing outer ring. The analysis results of that gearbox fault diagnosis demonstrate the method's effectiveness, so it has also been verified under other conditions on other gearboxes.

6. Conclusions

KPCA is a nonlinear dimensionality reduction technique that maps data onto a high-dimensional feature space using kernel functions, enabling the detection of faults in complex and nonlinear systems. Its advantages lie in its ability to handle nonlinearity, making it suitable for intricate processes. However, KPCA’s performance heavily relies on the appropriate choice of kernel function and its associated parameters, which can be challenging to determine in practice.
Aiming at the disadvantages of the single-kernel KPCA and the advantages of the multi-kernel method, this paper proposes a KPCA with a flexible weight linear combination of two kernel functions (DKKPCA). In order to improve KPCA's performance and solve the optimization problem of the dual-kernel parameters, an optimization model was constructed by referring to Fisher's criterion and defining the optimization variables and objective. Then, taking full advantage of the SFLA-PSO fusion algorithm, which is simple and has a strong global search ability, the kernel parameters were optimized. A comprehensive comparison test was carried out on the Iris data to verify the DKKPCA feature extraction method; the data were well separated after nonlinear mapping onto the feature space, especially classes 2 and 3. The simulation results indicate that the DKKPCA optimized by the intelligent fusion algorithm is not only suitable for solving the nonlinear feature extraction problem, but also provides better feature quality than linear dimension reduction methods, greatly enhancing the ability to process nonlinear data.
A planetary gear simulated-fault diagnosis experiment was conducted, and the vibration signals of the normal state and three planetary gear tooth-surface wear states were measured and analyzed. The proposed DKKPCA method was then applied to the feature extraction of the multi-fault-mode coupled vibration signals of the planetary gear. A comparison of the KPCA scatter diagrams of the planetary faults shows that the DKKPCA has a better nonlinear mapping performance than the single-kernel KPCA. Similarly, a comparative analysis of the DKKPCA before and after the dual-kernel parameter optimization shows that, after optimization, the projections are highly clustered and the wear damage state of a planetary gear can be accurately identified; the optimized DKKPCA thus has a better nonlinear mapping performance than before optimization. The DKKPCA method is therefore well suited to feature extraction and state recognition for the nonlinear behavior of other mechanical equipment.

Author Contributions

Writing—original draft preparation and writing—review and editing, Y.H.; visualization and software, L.Y.; writing—review and editing, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Young Science Foundation of Shanxi province, China, grant number 201901D211201; the National Natural Science Foundation of China, grant numbers 52375470 and 52005455; and the Central Guidance on Local Science and Technology Development Fund of Shanxi Province, grant number YDZJSX2022C005.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pan, H.; Zheng, J.; Yang, Y.; Cheng, J. Nonlinear sparse mode decomposition and its application in planetary gearbox fault diagnosis. Mech. Mach. Theory 2021, 155, 104082. [Google Scholar] [CrossRef]
  2. Wang, C.; Li, H.; Ou, J.; Hu, R.; Hu, S.; Liu, A. Identification of planetary gearbox weak compound fault based on parallel dual-parameter optimized resonance sparse decomposition and improved momeda. Measurement 2020, 165, 108079. [Google Scholar] [CrossRef]
  3. Schölkopf, B.; Smola, A.; Müller, K.-R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar] [CrossRef]
  4. Xiao, Y.; Feng, L. A novel neural-network approach of analog fault diagnosis based on kernel discriminant analysis and particle swarm optimization. Appl. Soft Comput. 2012, 12, 904–920. [Google Scholar] [CrossRef]
  5. Fatma, L.; Lotfi, M.; Wiem, A.; Hedi, D.; Khaled, Y. Investigating Machine Learning and Control Theory Approaches for Process Fault Detection: A Comparative Study of KPCA and the Observer-Based Method. Sensors 2023, 23, 6899. [Google Scholar]
  6. Wang, Y.; Dong, R.; Wang, X.; Zhang, X. Research on Rolling Bearing Fault Diagnosis Based on Volterra Kernel Identification and KPCA. Shock. Vib. 2023, 2023, 1–9. [Google Scholar] [CrossRef]
  7. Shao, R.; Hu, W.; Wang, Y.; Qi, X. The fault feature extraction and classification of gear using principal component analysis and kernel principal component analysis based on the wavelet packet transform. Measurement 2014, 54, 118–132. [Google Scholar] [CrossRef]
  8. Lei, Y.; Chen, W.; Li, N.; Lin, J. A relevance vector machine prediction method based on adaptive multi-kernel dual and its application to remaining useful life prediction of machinery. J. Mech. Eng. 2016, 52, 87–93. [Google Scholar] [CrossRef]
  9. Fu, H.; Ren, R.; Yan, Z.; Ma, Y. Fault diagnosis method of power transformers using multi-kernel RVM and QPSO. High Volt. Appar. 2017, 53, 131–135. [Google Scholar]
  10. Deng, X.; Lei, W. Modified kernel principal component analysis using dual-weighted local outlier factor and its application to nonlinear process monitoring. ISA Trans. 2018, 72, 218–228. [Google Scholar] [CrossRef]
  11. Pan, C.; Li, H.; Chen, B.; Zhou, M. Fault Diagnosis method with class mean kernel principal component analysis based on combined kernel function. Comput. Simul. 2019, 36, 414–419. [Google Scholar]
  12. Wang, H.; Cai, Y.; Sun, F.; Zhao, Z. Adaptive sequence learning and application of multi-scale kernel method. Pattern Recognit. Artif. Intell. 2011, 24, 72–81. [Google Scholar]
  13. Lv, L.; Wang, W.; Zhang, Z.; Liu, X. A novel intrusion detection system based on an optimal hybrid kernel extreme learning machine. Knowl.-Based Syst. 2020, 195, 105648. [Google Scholar] [CrossRef]
  14. Nithya, A.; Appathurai, A.; Venkatadri, N.; Ramji, D.R.; Anna, P.C. Kidney disease detection and segmentation using artificial neural network and multi-kernel k-means clustering for ultrasound images. Measurement 2020, 149, 106952. [Google Scholar] [CrossRef]
  15. Ouyang, A.; Liu, Y.; Pei, S.; Peng, X.; He, M.; Wang, Q. A hybrid improved kernel LDA and PNN algorithm for efficient face recognition. Neurocomputing 2020, 393, 214–222. [Google Scholar] [CrossRef]
  16. Afzal, A.L.; Asharaf, S. Deep multiple multilayer kernel learning in core vector machines. Expert Syst. Appl. 2018, 96, 149–156. [Google Scholar]
  17. Li, X.; Gao, X.; Li, K.; Hou, Y. Prediction for dynamic fluid level of oil well based on GPR with AFSA optimized combined kernel function. J. Northeast. Univ. (Nat. Sci.) 2017, 38, 11–15. [Google Scholar]
  18. Xie, F.; Chen, H.; Xie, S.; Jiang, W.; Liu, B.; Li, X. Bearing state recognition based on kernel principal component analysis of particle swarm optimization. Meas. Control Technol. 2018, 37, 28–35. [Google Scholar]
  19. Bernal, D.; Lázaro, J.M.; Prieto, M.A.; Llanes, S.O.; da Silva, N.A.J. Optimizing kernel methods to reduce dimensionality in fault diagnosis of industrial systems. Comput. Ind. Eng. 2015, 87, 140–149. [Google Scholar] [CrossRef]
  20. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks; IEEE Computer Society: Washington, DC, USA, 1995; pp. 1942–1948. [Google Scholar]
  21. Eusuff, M.M.; Lansey, K.E. Optimization of water distribution network design using shuffled frog leaping algorithm. J. Water Resour. Plan. Manag. 2003, 129, 210–225. [Google Scholar] [CrossRef]
  22. Jaafari, A.; Zenner, E.K.; Panahi, M.; Shahabi, H. Hybrid artificial intelligence models based on a neuro-fuzzy system and metaheuristic optimization algorithms for spatial prediction of wildfire probability. Agric. For. Meteorol. 2019, 266–267, 198–207. [Google Scholar] [CrossRef]
  23. Wang, Y.; Sun, W.; Liu, L.; Wang, B.; Bao, S.; Jiang, R. Fault Diagnosis of Wind Turbine Planetary Gear Based on a Digital Twin. Appl. Sci. 2023, 13, 4776. [Google Scholar] [CrossRef]
  24. Kennedy, J.; Eberhart, R.C. Swarm Intelligence; Morgan Kaufmann Division of Academic Press: San Francisco, CA, USA, 2001; pp. 11–15. [Google Scholar]
  25. Rajamohana, S.P.; Umamaheswari, K. Hybrid approach of improved binary particle swarm optimization and shuffled frog leaping for feature selection. Comput. Electr. Eng. 2018, 67, 497–508. [Google Scholar] [CrossRef]
  26. Wang, H.; Sun, F.; Cai, Y.; Chen, N.; Ding, L. Multi-kernel learning method. Acta Autom. Sin. 2010, 36, 1037–1050. [Google Scholar] [CrossRef]
  27. Li, J.; Qiao, J.; Yin, H.; Liu, D. Kernel Adaptive Learning and Application in Pattern Recognition; Publishing House of Electronics Industry: Beijing, China, 2013; pp. 23–25. [Google Scholar]
  28. Chen, W.; Panahi, M.; Tsangaratos, P.; Shahabi, H.; Ilia, I.; Panahi, S.; Li, S.; Jaafari, A.; Bin Ahmad, B. Applying population-based evolutionary algorithms and a neuro-fuzzy system for modeling landslide susceptibility. Catena 2019, 172, 212–231. [Google Scholar] [CrossRef]
  29. Zhao, Y. Pattern Recognition; Shanghai Jiao Tong University: Shanghai, China, 2013; pp. 12–16. [Google Scholar]
  30. He, Y.; Wang, Z. Regularized kernel function parameter of kpca using WPSO-FDA for feature extraction and fault recognition of gearbox. J. Vibroengineering 2018, 20, 225–239. [Google Scholar] [CrossRef]
Figure 1. SFLA-PSO fusion algorithm flow chart.
Figure 2. Iris sample distribution: (a) the training sample distribution; (b) the test sample distribution.
Figure 3. Evolution process of kernel parameters: (a) SFLA-PSO evolution course; (b) evolution course of kernel parameter σ.
Figure 4. KPCA results for Iris data using the optimized dual-kernel parameters: (a) γ = 0.057, d1 = 1, and σ = 3.1; (b) histogram of the DKKPCA contribution rate.
Figure 5. Iris data single-kernel principal component analysis results: (a) polynomial kernel function (d1 = 1); (b) RBF kernel function (σ = 3.1).
Figure 6. KPCA results for Iris data before dual-kernel function optimization: (a) γ = 0.5, d1 = 1, and σ = 3.389; (b) γ = 0.06, d1 = 1, and σ = 3.333.
Figure 7. The planetary gear test bed.
Figure 8. Structural diagrams of the transmission system of the experimental platform: (a) helical gear transmission; (b) planetary gear transmission.
Figure 9. Arrangement of measuring points.
Figure 10. Evolution curves of the kernel parameter optimization process: (a) ABC model; (b) ABCD model.
Figure 11. A feature extraction flow diagram for dual-kernel optimization.
Figure 12. KPCA scatter diagram of planetary faults before single-kernel parameter optimization. A: normal state, B: one-tooth wear, C: two-tooth wear, D: three-tooth wear. (a1) ABC: σ = 10.25; (b1) BCD: σ = 5.4438; (c1) ABCD: σ = 23.1.
Figure 13. DKKPCA scatter diagram of planetary faults before dual-kernel parameter optimization: (a2) γ = 0.07, d1 = 1.2, and σ = 23.37; (b2) γ = 0.009, d1 = 1, and σ = 5.443; (c2) γ = 0.2, d1 = 2, and σ = 10.
Figure 14. DKKPCA scatter diagram of planetary gear faults after dual-kernel parameter optimization: (a3) γ = 0.005, d1 = 0.8, and σ = 23.37; (b3) γ = 0.038, d1 = 0.896, and σ = 5.4438; (c3) γ = 0.055, d1 = 1.03, and σ = 13.1.
Table 1. KPCA results for Iris data.

| Algorithm | | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| DKKPCA | Kernel principal component eigenvalue | 2.518 | 0.2996 | 0.1014 | 0.0284 |
| | Contribution % | 85.431 | 10.165 | 3.441 | 0.963 |
| | Cumulative contribution rate % | 85.431 | 95.595 | 99.036 | 100.000 |
| KPCA_Poly | Kernel principal component eigenvalue | 49.566 | 4.0559 | 0.3441 | 0.0217 |
| | Contribution % | 91.8099 | 7.5126 | 0.6373 | 0.0402 |
| | Cumulative contribution rate % | 91.8099 | 99.3225 | 99.9558 | 100.00 |
| KPCA_RBF | Kernel principal component eigenvalue | 3.0601 | 0.5177 | 0.0946 | 0.0127 |
| | Contribution % | 83.0404 | 14.0486 | 2.5666 | 0.3444 |
| | Cumulative contribution rate % | 83.0404 | 97.089 | 99.6556 | 100.000 |
Table 2. Technical parameters of main components.

| Component | Name | Parameter |
|---|---|---|
| Helical gearbox | Gears | Large gear: module = 2, number of teeth = 77; pinion: module = 2, number of teeth = 55 |
| | Bearings | Deep groove ball bearing 6206 |
| Planetary gearbox | Gears | Inner gear ring: module = 2, number of teeth = 72; planetary gear: module = 2, number of teeth = 27, quantity = 3; sun gear: module = 2, number of teeth = 18 |
| | Bearings | Deep groove ball bearings: planet gear 6202, planet carrier 6206, sun gear 6205 |
| Brake | | Magnetic loading; loading torque 0–100 N·m |
| Motor | Converter motor | 2.2 kW; rotational speed 1500 rpm; rated speed 1410 rpm |
Table 3. The parameters of the SFLA-PSO fusion algorithm.

| Algorithm | Parameters |
|---|---|
| SFLA-PSO | N = 200, npso = 20, np = 10, c1 = c2 = 2, F = 20, m = 5, n = 4, Lmax = 10, Tmax = 100, d = 20, Smax = 20 |
Table 4. The optimization results for the dual-kernel parameters of the planetary gear wear states.

| Type | Model | γ* | d1* | σ* | JFisher |
|---|---|---|---|---|---|
| 3-type | ABC | 0.005 | 0.8 | 23.37 | 0.3497 |
| 3-type | BCD | 0.038 | 0.896 | 5.4438 | 0.8970 |
| 4-type | ABCD | 0.055 | 1.03 | 13.1 | 0.9949 |
