Vibration Signal Noise-Reduction Method of Slewing Bearings Based on the Hybrid Reinforcement Chameleon Swarm Algorithm, Variational Mode Decomposition, and Wavelet Threshold (HRCSA-VMD-WT) Integrated Model

To enhance fault detection in slewing bearing vibration signals, an advanced noise-reduction model, HRCSA-VMD-WT, is designed for effective signal noise elimination. This model innovates by refining the Chameleon Swarm Algorithm (CSA) into a more potent Hybrid Reinforcement CSA (HRCSA), incorporating strategies from Chaotic Reverse Learning (CRL), the Whale Optimization Algorithm's (WOA) bubble-net hunting, and the greedy strategy with the Cauchy mutation to diversify the initial population, accelerate convergence, and prevent local optimum entrapment. Furthermore, by optimizing Variational Mode Decomposition (VMD) input parameters with HRCSA, Intrinsic Mode Function (IMF) components are extracted and categorized into noisy and pure signals using cosine similarity. Subsequently, Wavelet Threshold (WT) denoising targets the noisy IMFs before the vibration signal is reconstructed from the purified IMFs, achieving significant noise reduction. Comparative experiments demonstrate HRCSA's superiority over Particle Swarm Optimization (PSO), WOA, and Gray Wolf Optimization (GWO) in convergence speed and precision. Notably, HRCSA-VMD-WT increases the Signal-to-Noise Ratio (SNR) by at least 74.9% and reduces the Root Mean Square Error (RMSE) by at least 41.2% compared with both CSA-VMD-WT and Empirical Mode Decomposition with Wavelet Transform (EMD-WT). This study improves fault detection accuracy and efficiency for vibration signals and offers a dependable and effective diagnostic solution for slewing bearing maintenance.


Introduction
Slewing bearings, which are crucial components in large slewing machinery, significantly influence equipment performance [1]. Their operation in often harsh conditions increases failure risks, with periodic impacts and loads contributing to wear, cracks, and spalling [2]. Consequently, the fault detection and diagnosis of slewing bearings have become a focal point, leveraging statistical measurements to identify concealed fault features in complex signals at the earliest opportunity [3].
Fault diagnosis via vibration signals, which are rich in fault state information, has been a widespread practice over the past decade. The primary methods for diagnosing faults in vibration signals encompass time domain analysis, frequency domain analysis, and time-frequency domain analysis [4]. The complexity of noise in vibration signals obtained from test benches makes noise-reduction techniques crucial for extracting valuable information, positioning these methods as a focal point of current research [5]. Empirical Mode Decomposition (EMD) and Variational Mode Decomposition (VMD) are prominent noise-reduction techniques that decompose signals into multiple modal components [6].
Sensors 2024, 24, 3344

However, EMD often faces challenges such as end effects and mode aliasing during decomposition [7]. In contrast, VMD excels at decomposing non-stationary mixed signals into distinct single-component modes, effectively preventing aliasing through bandwidth control [8]. As an adaptive and entirely non-recursive approach, VMD is extensively applied in denoising non-stationary signals.
The effectiveness of VMD is influenced by two critical input parameters: the modal decomposition number, K, and the penalty factor, α. Inappropriate settings can result in either over-decomposition or incomplete decomposition of the signal [9], highlighting the importance of choosing these parameters wisely. To identify the optimal combination of VMD input parameters, various optimization algorithms have been employed, including Particle Swarm Optimization (PSO), Gray Wolf Optimization (GWO), the Whale Optimization Algorithm (WOA), and the Sparrow Search Algorithm (SSA). Xiong et al. [10] combined PSO and VMD to decompose wind speed signals in complex mountainous areas into heterogeneous modal components, considering their complexity and non-stationary nature. Addressing the drawbacks of random weights and thresholds in traditional Elman neural networks, as well as the volatility and non-stationarity of photovoltaic output power signals, Zhang et al. [11] integrated VMD and GWO to enhance a short-term Elman-based prediction model for photovoltaic output power. Wang et al. [12] predicted early-stage tool health factors such as wear and damage, introducing a WOA-VMD-SVM (Support Vector Machine) model with recognition accuracy higher than that of the SVM and GA (Genetic Algorithm)-SVM models. Guo et al. [13] developed a hybrid electricity price prediction model based on VMD and SSA to address price volatility, demonstrating superior prediction accuracy in the Pennsylvania-New Jersey-Maryland (PJM) electricity market. However, these optimization algorithms share common drawbacks, including susceptibility to local optima, subpar performance in complex, multi-dimensional scenarios, high computational demands, and slow convergence rates. Introduced by Braik in 2021, the Chameleon Swarm Algorithm (CSA) stands out for its excellent convergence speed and accuracy, drawing inspiration from the dynamic hunting behavior of chameleons [14]. Ji and Cao [15] incorporated CSA into mobile robot obstacle avoidance to assist in finding a dependable path for efficient operations. Said et al. [16] applied CSA to optimize solutions to the economic load scheduling problem and demonstrated through simulations that CSA yielded superior results. However, there is limited research on optimizing the two parameters of VMD using CSA to leverage its exceptional convergence speed and accuracy. Therefore, this study introduces a novel approach by integrating CSA and VMD to analyze the vibration signal of slewing bearings. Nonetheless, the standard CSA is susceptible to local optimization traps, and the quality of its initial population significantly impacts its performance.
To address this, an advanced noise-reduction model, HRCSA-VMD-WT, is designed for effective signal noise elimination. This model innovates by refining the CSA into a more potent Hybrid Reinforcement CSA (HRCSA). Strategies from Chaotic Reverse Learning (CRL), WOA's bubble-net hunting, and the greedy strategy with the Cauchy mutation are utilized to diversify the initial population, accelerate convergence, and prevent local optimum entrapment, respectively. Furthermore, by optimizing VMD input parameters with HRCSA, Intrinsic Mode Function (IMF) components are extracted and categorized into noisy and pure signals using cosine similarity. Finally, Wavelet Threshold (WT) denoising targets the noisy IMFs before the vibration signal is reconstructed from the purified IMFs, achieving significant noise reduction. Comparative analysis with other algorithms demonstrates the proposed model's effectiveness and superiority, showcasing its robust noise-reduction capabilities for vibration signals.
The contributions of this study are summarized as follows: (a) The study introduces the innovative noise-reduction model, HRCSA-VMD-WT, which addresses the challenge of signal noise in vibration analysis. It significantly improves the Signal-to-Noise Ratio (SNR) and reduces the Root Mean Square Error (RMSE) compared with EMD-WT and CSA-VMD-WT.
(b) The study incorporates the Chaotic Reverse Learning (CRL) strategy, the bubble-net hunting strategy, and the greedy strategy with the Cauchy mutation into the standard CSA, enhancing the performance of HRCSA over standard CSA. HRCSA provides a more effective approach for optimizing VMD input parameters. (c) The study establishes a fatigue test platform integrated with a measurement system utilizing the HRCSA-VMD-WT method for acquiring and processing vibration signals from tested slewing bearings. This offers a practical technical solution for vibration analysis and fault diagnosis in slewing bearings.
The remainder of the paper is organized as follows. Section 2 provides the details of the HRCSA optimization method, including the CSA principle, the CRL strategy, the bubble-net hunting strategy, and the greedy strategy with the Cauchy mutation. The framework of the noise-reduction model, HRCSA-VMD-WT, with its effect evaluation metrics, is presented in Section 3. In Section 4, HRCSA and HRCSA-VMD-WT are both validated through simulation experiments. The vibration data from a slewing-bearing test platform are used to demonstrate the practical feasibility of the noise-reduction model in Section 5, leading to the conclusions in Section 6.

Hybrid Reinforcement CSA (HRCSA)
CSA is a recent optimization algorithm that finds the optimal solution by mimicking the dynamic behavior of a chameleon hunting prey, and it offers excellent convergence speed and solution accuracy. However, as with other heuristic algorithms, the quality of the initial population affects its performance. When a chameleon in the population is close to the prey, the other individuals approach that chameleon, so the algorithm easily falls into a local optimum. In this study, a Hybrid Reinforcement CSA is proposed that not only preserves the convergence speed but also improves the ability to escape local optima.

CSA Principle
CSA draws inspiration from the hunting behavior of chameleons. Distinct from the Chameleon Algorithm (CA), which primarily serves as a bottom-up clustering algorithm, CSA is adept at addressing global numerical optimization and constraint problems. Based on verification using a Markov chain, CSA can reliably converge to the global optimal solution with a probability of 1. The operational mechanism of CSA mimics the chameleon's prey interaction strategy, which encompasses three main phases: locating prey, tracking prey, and capturing prey [17]. The mathematical model and steps of the algorithm are as follows.
(1) Initial population

Assuming the population size of the chameleons is n, the hunt for prey occurs in m-dimensional space. The n × m matrix X corresponding to the chameleons' positions is defined by Equation (1) [15]:

$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,m} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,m} \end{bmatrix} \tag{1}$$

where $x_{i,j}$ represents the position of the i-th chameleon in the j-th dimensional space, with 1 ≤ i ≤ n and 1 ≤ j ≤ m. The positions of the chameleon group in m-dimensional space are initialized using Equation (2):

$$x_{i,j} = lb + r \times (ub - lb) \tag{2}$$

where ub and lb represent the upper and lower boundaries of the search space, respectively, and r is a random number with r ∈ (0, 1).
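The initialization of Equations (1) and (2) can be sketched numerically; `lb` and `ub` are taken as scalars here for brevity, although in general they may differ per dimension.

```python
import numpy as np

# Sketch of the CSA population initialisation of Equations (1)-(2):
# n chameleons searching an m-dimensional space bounded by [lb, ub].
def init_population(n, m, lb, ub, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random((n, m))        # r ~ U(0, 1), one draw per coordinate
    return lb + r * (ub - lb)     # the n x m position matrix X
```

Each row of the returned matrix is one chameleon's position vector.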
(2) Locating prey

During the prey search, the chameleon constantly updates its position, guided by its previous location and social experience. Its position-renewal strategy is mathematically described by Equation (3) [14]:

$$x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t} + p_1 \left( P_{i,j}^{t} - G_j^{t} \right) r_2 + p_2 \left( G_j^{t} - x_{i,j}^{t} \right) r_1, & r_i \geq P_p \\ x_{i,j}^{t} + \mu \left[ \left( ub - lb \right) r_3 + lb \right] \operatorname{sgn}(\mathrm{rand} - 0.5), & r_i < P_p \end{cases} \tag{3}$$

where $x_{i,j}^{t+1}$ is the new position of the i-th chameleon in the j-th dimensional space after t + 1 iterations; $p_1$ and $p_2$ are positive coefficients that control the algorithm's exploitation ability; $P_{i,j}^{t}$ is the best position of chameleon i in the j-th dimensional space at the current iteration t; $G_j^{t}$ is the global best position in the j-th dimensional space at the t-th iteration; $r_1$, $r_2$, $r_3$, and $r_i$ are random numbers in the range (0, 1); sgn(rand − 0.5) controls the direction of the chameleon's rotation and takes the value +1 or −1; $P_p$ is the probability of the chameleon perceiving prey; and µ is the search-capability parameter, which changes over t and is defined by Equation (4):

$$\mu = \gamma e^{-(\alpha t / T)^{\beta}} \tag{4}$$

where α, β, and γ are constants and T is the maximum number of iterations. According to reference [15], setting α, β, and γ to 1, 3.5, and 3, respectively, yields strong search capability.
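A minimal numerical sketch of this locating-prey stage follows, assuming the two-branch exploit/explore form described in the text; the values of p1, p2, and the perception probability Pp are illustrative placeholders, not the paper's tuned settings.

```python
import numpy as np

# Search-capability parameter of Equation (4); decays as t -> T.
def mu(t, T, alpha=1.0, beta=3.5, gamma=3.0):
    return gamma * np.exp(-((alpha * t / T) ** beta))

# Sketch of the locating-prey update of Equation (3).
def locate_prey(X, P, G, t, T, lb, ub, p1=0.25, p2=1.5, Pp=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, m = X.shape
    r1, r2, r3 = (rng.random((n, m)) for _ in range(3))
    # Exploitation branch: move relative to personal best P and global best G.
    exploit = X + p1 * (P - G) * r2 + p2 * (G - X) * r1
    # Exploration branch: random jump scaled by mu(t), with a random sign flip.
    sign = np.sign(rng.random((n, m)) - 0.5)
    explore = X + mu(t, T) * ((ub - lb) * r3 + lb) * sign
    ri = rng.random((n, 1))             # one perception draw per chameleon
    return np.where(ri >= Pp, exploit, explore)
```

With the reference settings α = 1, β = 3.5, γ = 3, the factor µ starts at 3 and decays toward γ/e as t approaches T, so late iterations take smaller exploratory jumps.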
(3) Tracking prey

The chameleon's ability to rotate its eyes allows it to search for prey across a full 360° range. It updates its position based on the prey's location, a process defined in Equation (5) [14]:

$$x_i^{t+1} = m \left( x_i^{t} - \bar{x}_i^{t} \right) + \bar{x}_i^{t} \tag{5}$$

where $x_i^{t}$ is the current position; $\bar{x}_i^{t}$ is the central position at iteration t; $x_i^{t+1}$ is the new position generated after applying the rotation matrix; and m is the rotation matrix, denoted by Equations (6) and (7):

$$m = R\left(\theta, V_{z_1, z_2}\right) \tag{6}$$

$$\theta = r \operatorname{sgn}(\mathrm{rand} - 0.5) \times 180° \tag{7}$$

where θ represents the random rotation angle of the chameleon, with θ ∈ (−180°, 180°) determined by a random value r within the interval (0, 1), and $V_{z_1,z_2}$ is the vector synthesized from two orthogonal vectors $z_1$ and $z_2$ in m-dimensional space. Chameleons track their prey through a four-stage sequence of eye movements.

(4) Capturing prey

In this phase, the chameleon captures its prey via a tongue strike. The chameleon nearest to the prey represents the optimal solution within the current group. The efficiency of prey capture is determined by the velocity of the chameleon's tongue extension, as detailed in Equation (8):

$$v_{i,j}^{t+1} = w v_{i,j}^{t} + c_1 \left( G_j^{t} - x_{i,j}^{t} \right) r_1 + c_2 \left( P_{i,j}^{t} - x_{i,j}^{t} \right) r_2 \tag{8}$$

where $c_1$ and $c_2$ are cognitive factor constants, w is an inertia weight, and $v_{i,j}^{t}$ is the velocity at the t-th iteration. Based on the velocity, acceleration, and displacement relationship of rigid-body kinematics, Equation (9) governs the chameleon's updated position at this stage:

$$x_{i,j}^{t+1} = x_{i,j}^{t} + \frac{\left( v_{i,j}^{t+1} \right)^2 - \left( v_{i,j}^{t} \right)^2}{2a} \tag{9}$$

where a is the acceleration, which changes non-linearly with the number of iterations, as described in Equation (10):

$$a = 2590 \times \left( 1 - e^{-\log t} \right) \tag{10}$$
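The tongue-strike (capture) stage can be sketched as a velocity update followed by a kinematic position update; the inertia weight `w`, the cognitive constants `c1` and `c2`, and the acceleration law are illustrative reconstructions, not the paper's exact settings (note that the acceleration is zero at t = 1, so this stage applies from t ≥ 2).

```python
import numpy as np

# Sketch of the capturing-prey stage: velocity update toward the
# global best G and personal bests P, then a kinematic position step.
def capture_prey(X, V, P, G, t, w=0.8, c1=1.75, c2=1.75, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V_new = w * V + c1 * (G - X) * r1 + c2 * (P - X) * r2   # velocity update
    a = 2590.0 * (1.0 - np.exp(-np.log(t)))                 # acceleration, t >= 2
    X_new = X + (V_new ** 2 - V ** 2) / (2.0 * a)           # kinematic step
    return X_new, V_new
```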

Chaotic Reverse Learning (CRL) Strategy
Similar to other heuristic optimization algorithms, the quality of the initial population significantly influences the algorithm's performance, impacting its global search capability and convergence speed. In the standard CSA, population initialization uses a random number between 0 and 1. However, relying solely on the random function may not ensure a uniform distribution of chameleon individuals in the initial population, potentially leaving individuals too far from or too close to each other [18]. This increases the risk of the algorithm converging to local optima.
The circle mapping from CRL exhibits strong randomness and ergodicity, enhancing the global search capability of the algorithm through its mapping of population members. The function is defined by Equation (11):

$$y_{i+1} = \operatorname{mod}\left( y_i + 0.2 - \frac{0.5}{2\pi} \sin\left( 2\pi y_i \right),\ 1 \right) \tag{11}$$

where i is the sequence number of the chaos variable. Figure 1 illustrates the distribution of the circle mapping and a random distribution in a chaotic state. According to Figure 1, circle mapping provides more uniform randomness, facilitating the generation of varied initial populations. The expression for calculating the initial population's position based on circle mapping is shown in Equation (12).
$$x_{i,j} = lb + y_i \left( ub - lb \right) \tag{12}$$

The reverse learning strategy enhances population diversity and algorithm search capability by generating reverse individuals at each position within the current population. The expression for the new reverse population under the integrated circle mapping is given in Equation (13):

$$X_{i,j} = lb + ub - x_{i,j} \tag{13}$$

where $x_{i,j}$ is the population generated by circle mapping and $X_{i,j}$ is the new population generated by reverse learning. $x_{i,j}$ and $X_{i,j}$ are merged and ranked by fitness, with the top n individuals chosen to form the final population.
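A compact sketch of the CRL initialization follows, assuming the common circle-map constants (0.2 and 0.5) and the opposition rule X = lb + ub − x; the per-individual chaos seeds are arbitrary choices for illustration.

```python
import numpy as np

# Circle chaotic map: generates a well-spread sequence in [0, 1).
def circle_map_sequence(length, y0):
    y = np.empty(length)
    y[0] = y0
    for i in range(length - 1):
        y[i + 1] = (y[i] + 0.2 - (0.5 / (2 * np.pi)) * np.sin(2 * np.pi * y[i])) % 1.0
    return y

# CRL initialisation: chaotic positions plus their opposites,
# keeping the n fittest of the merged pool of 2n candidates.
def crl_init(n, m, lb, ub, fitness):
    Y = np.array([circle_map_sequence(m, y0=(0.11 + 0.07 * i) % 1.0)
                  for i in range(n)])
    X = lb + Y * (ub - lb)          # chaotic positions
    X_rev = lb + ub - X             # reverse (opposition) positions
    pool = np.vstack([X, X_rev])
    order = np.argsort([fitness(x) for x in pool])
    return pool[order[:n]]          # best n individuals survive
```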

Bubble-Net Hunting Strategy
Equation (5) shows that updating the chameleons' positions during prey tracking uses their central and current positions and adjusts the hunting direction with a rotation matrix. While this offers guidance, it introduces randomness and uncertainty. In early iterations, guidance may fail if an individual's position deviates significantly from the center, reducing the algorithm's convergence speed. This study therefore incorporates WOA's bubble-net hunting strategy to enhance global search capability, with the position update presented in Equation (14):

$$x_{i,j}^{t+1} = D e^{bl} \cos\left( 2\pi l \right) + \bar{x}_{i,j}^{t} \tag{14}$$

where $\bar{x}_{i,j}^{t}$ is the central position at the t-th iteration; b determines the spiral shape, with b = 1 here; and l is a random number in [−1, 1]. D is the random distance between the central position $\bar{x}_{i,j}^{t}$ and the actual position $x_{i,j}^{t}$, which decreases with the number of iterations, as denoted in Equation (15):

$$D = r \left| \bar{x}_{i,j}^{t} - x_{i,j}^{t} \right| \tag{15}$$

where r is a random number between 0 and 1. The bubble-net hunting strategy is thus introduced into the chameleon's prey-tracking process. To balance the new algorithm's global search capability and local search efficiency [19], an adaptive dynamic inertia weight w(t) is introduced, defined in Equation (16), where T represents the maximum number of iterations. The chameleon's position update at this stage follows Equation (17).
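This spiral update can be sketched as follows; since the exact form of the adaptive weight in Equation (16) is not reproduced here, a simple linear decay from w_max to w_min stands in for w(t), and the way w(t) blends the spiral with the current position is likewise an assumption.

```python
import numpy as np

# Sketch of the WOA-style bubble-net (spiral) update grafted onto the
# tracking stage. D is a random fraction of the distance to the swarm
# centre; the inertia weight is a stand-in linear decay (large early
# for global search, small late for local exploitation).
def bubble_net_step(X, t, T, b=1.0, w_max=0.9, w_min=0.4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    centre = X.mean(axis=0)                        # central position at iteration t
    l = rng.uniform(-1.0, 1.0, size=X.shape)       # spiral parameter l in [-1, 1]
    D = rng.random(X.shape) * np.abs(centre - X)   # random distance to the centre
    w = w_max - (w_max - w_min) * t / T            # adaptive inertia weight
    spiral = D * np.exp(b * l) * np.cos(2 * np.pi * l) + centre
    return w * spiral + (1 - w) * X
```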
Figure 2 shows the adaptive inertia weight w(t) as a function of the iteration count: a large inertia weight is used early in the run to enhance global search ability, while a small inertia weight is used later to maintain strong local exploitation.

Greedy Strategy with Cauchy Mutation
During the algorithm's later stages, chameleon group members tend to converge near the optimal chameleon's position, increasing the risk of local optima entrapment. To mitigate this, this study introduces a Cauchy mutation strategy that randomly alters the algorithm's evolutionary direction by perturbing the optimal chameleon's position at each iteration, thereby enhancing the algorithm's capability to escape local optima. The chameleon's position after the Cauchy mutation is updated according to Equation (18):

$$x_{new}^{t+1} = x_{best}^{t+1} + x_{best}^{t+1} \cdot \operatorname{cauchy}(0, 1) \tag{18}$$

where $x_{best}^{t+1}$ is the optimal chameleon position in the (t + 1)-th iteration; cauchy(0, 1) is the standard Cauchy distribution; and $x_{new}^{t+1}$ is the chameleon position disturbed by the Cauchy mutation. Given the uncertainty of the position improvement after the Cauchy mutation, a greedy strategy is employed to decide whether the optimal chameleon's position should be updated, based on fitness comparisons before and after the disturbance. This strategy is formulated in Equation (19):

$$x_{best}^{t+1} = \begin{cases} x_{new}^{t+1}, & f\left( x_{new}^{t+1} \right) < f\left( x_{best}^{t+1} \right) \\ x_{best}^{t+1}, & \text{otherwise} \end{cases} \tag{19}$$

where f is the fitness (objective) function. In this study, the objective function is the minimum envelope entropy, represented by Equations (20) and (21):

$$E_p = -\sum_{j=1}^{N} p_j \ln p_j \tag{20}$$

$$p_j = \frac{a(j)}{\sum_{j=1}^{N} a(j)} \tag{21}$$

where a(j) is the envelope amplitude of the j-th signal sample.
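The greedy Cauchy-mutation step and the envelope-entropy objective can be sketched together; the envelope is obtained with an FFT-based Hilbert transform, and the multiplicative perturbation form is one common choice rather than the paper's confirmed formula.

```python
import numpy as np

# Envelope entropy: Shannon entropy of the normalised Hilbert envelope.
def envelope_entropy(x):
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Analytic signal via the FFT (frequency-domain Hilbert transform).
    H = np.zeros(N)
    H[0] = 1.0
    if N % 2 == 0:
        H[N // 2] = 1.0
        H[1:N // 2] = 2.0
    else:
        H[1:(N + 1) // 2] = 2.0
    a = np.abs(np.fft.ifft(np.fft.fft(x) * H))     # envelope a(j)
    p = a / a.sum()                                # normalised envelope
    return float(-np.sum(p * np.log(p + 1e-15)))   # entropy

# Cauchy perturbation of the best position, kept only if it improves f.
def cauchy_greedy(x_best, f, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x_new = x_best + x_best * rng.standard_cauchy(x_best.shape)
    return x_new if f(x_new) < f(x_best) else x_best
```

A pure sine has a nearly constant envelope, so its envelope entropy approaches the maximum ln N; impulsive fault signatures concentrate the envelope and lower the entropy, which is why minimizing it favors fault-revealing decompositions.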

Framework of HRCSA-VMD-WT Noise-Reduction Method
The proposed HRCSA-VMD-WT noise-reduction method framework encompasses the noise-reduction model, which denoises the vibration signal of slewing bearings, and the index system, which assesses the noise-reduction effect. The noise-reduction model optimizes VMD based on the Hybrid Reinforcement CSA and integrates VMD with WT denoising. Figure 3 shows the flow diagram of the HRCSA-VMD-WT noise-reduction method framework. The main parts of the framework are outlined as follows.
(1) HRCSA optimization: HRCSA optimization is used to find the optimal input parameters for VMD. In this study, the standard CSA is adjusted by introducing the CRL strategy, the bubble-net hunting strategy, and the greedy strategy with the Cauchy mutation. The main steps of HRCSA are as follows.
(1) Select the minimum envelope entropy as the objective function and use CRL to initialize the population. (2) Update the chameleon positions according to the bubble-net hunting strategy. (3) Obtain the optimal chameleon position in each iteration. (4) Perturb the optimal chameleon's position in each iteration with the Cauchy mutation and update it via the greedy strategy, which determines whether the optimal position should be replaced. (5) For chameleon individuals beyond the boundary constraint, randomly reset their positions to stop their drift toward the boundary. (6) Obtain the optimal combination of VMD input parameters when the iteration termination condition is met.
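The steps above can be reduced to a self-contained skeleton loop. This is a deliberately simplified sketch: the chaotic initialization and bubble-net update are collapsed into a plain random start and a contraction toward the current best, and the sphere function stands in for the envelope-entropy objective.

```python
import numpy as np

# Skeleton of the HRCSA loop of steps (1)-(6), heavily simplified.
def hrcsa_sketch(f, m, lb, ub, n=20, T=100, seed=0):
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n, m)) * (ub - lb)          # step (1), chaos omitted
    fit = np.array([f(x) for x in X])
    best = X[fit.argmin()].copy()
    for t in range(1, T + 1):
        # step (2): contract the swarm toward the current best
        X = X + rng.random((n, m)) * (best - X)
        # step (5): out-of-bounds coordinates are randomly re-placed
        bad = (X < lb) | (X > ub)
        X[bad] = lb + rng.random(bad.sum()) * (ub - lb)
        fit = np.array([f(x) for x in X])
        if fit.min() < f(best):                      # step (3)
            best = X[fit.argmin()].copy()
        # step (4): Cauchy perturbation kept only if it improves f
        trial = np.clip(best + best * rng.standard_cauchy(m), lb, ub)
        if f(trial) < f(best):
            best = trial
    return best, f(best)                             # step (6)
```

In the actual model, `f` would evaluate the envelope entropy of a VMD decomposition for a candidate (K, α) pair rather than a benchmark function.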
(2) VMD: VMD is utilized to decompose the original vibration signal of the slewing bearing. The optimal input parameters are supplied during VMD initialization, and the original vibration signal is decomposed into K IMF components. (3) Similarity degree analysis: In this study, the cosine similarity degree is employed to categorize each IMF as either a noisy or a pure component. The cosine similarity degree of each IMF component with the original signal is calculated; IMFs whose cosine similarity degree exceeds the average over all IMFs are identified as noisy IMFs. (4) WT denoising: WT denoising is used to eliminate the signal noise of the noisy IMFs.
The main steps of WT denoising are as follows. (1) Decompose each noisy IMF with the wavelet transform. (2) Select the threshold value and the threshold function. (3) Process the wavelet coefficients with the threshold function. (4) Reconstruct the IMF signal after denoising.

VMD Principle
VMD is a widely used method for processing time-frequency signals, distinguished by its self-adaptive and non-recursive features. It posits that signals can be decomposed into linear combinations of eigenmodes with constrained bandwidth [19], effectively turning signal-processing challenges into variational-model resolution tasks. Within this framework, an input signal x(t) is segmented into K IMF components based on a predefined decomposition level K and a quadratic penalty factor α. In Figure 3, the VMD process is split into two steps. The first step is "Initialization", where the range boundary values for the two VMD parameters are set and their optimal combination is determined by HRCSA based on the input x(t). The second step is "Obtain IMF components", throughout which the center frequency and bandwidth of each IMF are dynamically adjusted. Equation (22) is the expression for the IMF components [20]:

$$u_k(t) = A_k(t) \cos\left( \phi_k(t) \right) \tag{22}$$

where $A_k(t)$ is the instantaneous amplitude of the IMF component, $\phi_k(t)$ is its phase function, and its instantaneous frequency is $\omega_k(t) = \phi_k'(t)$.
To obtain the restricted bandwidth of each IMF component, a Hilbert transform is applied to each component, and the spectrum of $u_k(t)$ is then shifted to its center frequency $\omega_k$, yielding the constrained variational problem of Equation (23) [21]:

$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j \omega_k t} \right\|_2^2 \right\}, \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = x(t) \tag{23}$$

where δ(t) is the Dirac delta function; the constraint requires the sum of the IMF components to equal the input signal x(t); $\partial_t$ denotes the partial derivative with respect to t; $\|\cdot\|_2^2$ is the square of the Euclidean norm; j is the imaginary unit; and * is the convolution operation.
To solve for $\{u_k\}$ and $\{\omega_k\}$, a Lagrange multiplier λ and the penalty factor α are introduced, transforming the constrained variational problem into an unconstrained one. The augmented Lagrangian function is given by Equation (24) [22]:

$$L\left( \{u_k\}, \{\omega_k\}, \lambda \right) = \alpha \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j \omega_k t} \right\|_2^2 + \left\| x(t) - \sum_{k=1}^{K} u_k(t) \right\|_2^2 + \left\langle \lambda(t),\ x(t) - \sum_{k=1}^{K} u_k(t) \right\rangle \tag{24}$$

The modes and center frequencies are then updated iteratively until the convergence criterion of Equation (25) is satisfied:

$$\sum_{k=1}^{K} \frac{\left\| \hat{u}_k^{n+1} - \hat{u}_k^{n} \right\|_2^2}{\left\| \hat{u}_k^{n} \right\|_2^2} < e \tag{25}$$

where $\hat{u}_k^{n}$ represents the Fourier transform of $u_k^{n}$, and e is the discriminant accuracy, with e > 0.

Similarity Degree Analysis
Decomposing the original signal with VMD yields K Intrinsic Mode Function (IMF) components. This study employs the cosine similarity degree to categorize each IMF as either a noisy or a pure component. The cosine similarity is calculated as in Equation (26):

$$\cos\theta = \frac{Y_1 \cdot Y_2}{\left\| Y_1 \right\| \left\| Y_2 \right\|} \tag{26}$$

where $Y_1$ and $Y_2$ represent the IMF component signal and the original signal, respectively. By mapping these signals into vector space, their similarity is assessed through the cosine of the angle between them in the inner-product space. A cosine similarity value close to 1 indicates a high degree of similarity between $Y_1$ and $Y_2$, suggesting that $Y_1$ contains more noisy components and requires stronger noise reduction.
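Equation (26) and the average-similarity rule described earlier can be written directly as:

```python
import numpy as np

# Cosine similarity of Equation (26).
def cosine_similarity(y1, y2):
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    return float(y1 @ y2 / (np.linalg.norm(y1) * np.linalg.norm(y2)))

# Average-similarity rule: IMFs whose similarity to the original signal
# exceeds the mean similarity are flagged as noisy (routed to WT denoising).
def split_imfs(imfs, x):
    sims = np.array([cosine_similarity(u, x) for u in imfs])
    noisy = sims > sims.mean()
    return noisy, sims
```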

WT Principle
The Wavelet Threshold (WT) denoising technique decomposes the signal into wavelets, eliminates or attenuates noise while preserving or enhancing useful signal content, and then reconstructs the signal. Key parameters of the WT method include the wavelet basis function, the number of decomposition layers, the threshold value, and the threshold function. This study adopts the widely used Daubechies wavelet basis with five-layer decomposition. Following reference [23], a fixed threshold value is employed, as detailed in Equation (27):

$$\lambda = \sigma \sqrt{2 \ln N} \tag{27}$$

where N is the data length of the IMF component and σ is the estimated noise standard deviation. To eliminate the discontinuity of the conventional threshold function and to ensure that, when the absolute value of a wavelet coefficient exceeds λ, the thresholded coefficient $\hat{w}_{j,k}$ quickly approaches the original coefficient $w_{j,k}$, this paper adopts the improved threshold function given in Equation (28).
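The decompose-threshold-reconstruct pipeline can be illustrated without a wavelet library. The sketch below uses a single Haar level with plain soft thresholding and the fixed threshold λ = σ√(2 ln N); the paper itself uses a Daubechies basis, five levels, and the improved threshold function of Equation (28), so this is a simplified stand-in.

```python
import numpy as np

# One-level Haar sketch of WT denoising: transform, soft-threshold the
# detail coefficients with the fixed threshold, inverse-transform.
def wt_denoise_haar(x):
    x = np.asarray(x, dtype=float)
    N = len(x) - (len(x) % 2)              # even length for Haar pairs
    x = x[:N]
    s = 1.0 / np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) * s       # approximation coefficients
    detail = (x[0::2] - x[1::2]) * s       # detail coefficients
    sigma = np.median(np.abs(detail)) / 0.6745   # robust noise estimate
    lam = sigma * np.sqrt(2.0 * np.log(N))       # fixed threshold
    detail = np.sign(detail) * np.maximum(np.abs(detail) - lam, 0.0)
    out = np.empty(N)                      # inverse Haar transform
    out[0::2] = (approx + detail) * s
    out[1::2] = (approx - detail) * s
    return out
```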

Noise-Reduction Effect Evaluation
This study employs an objective method to evaluate noise-reduction effects, utilizing an index system with specific indicators [24]. The Signal-to-Noise Ratio (SNR) and the Root Mean Square Error (RMSE) serve as the chosen metrics for assessing noise-reduction efficacy.
SNR is the ratio of useful-signal power to noise power, as defined by Equation (29):

$$SNR = 10 \lg \left( \frac{\sum_{i=1}^{N} x_a^2(i)}{\sum_{i=1}^{N} \left[ x_a(i) - x_b(i) \right]^2} \right) \tag{29}$$

where $x_a(i)$ is the original signal and $x_b(i)$ is the signal after noise reduction. RMSE measures the difference between the denoised signal and the original signal, as defined by Equation (30):

$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left[ x_a(i) - x_b(i) \right]^2} \tag{30}$$

where N is the number of observations.
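The two metrics translate directly into code, treating the residual (original minus denoised signal) as the noise term:

```python
import numpy as np

# SNR in dB: power of the original signal over power of the residual.
def snr_db(x_orig, x_denoised):
    noise = np.asarray(x_orig, float) - np.asarray(x_denoised, float)
    return float(10.0 * np.log10(np.sum(np.square(x_orig))
                                 / np.sum(np.square(noise))))

# RMSE: root mean square of the residual.
def rmse(x_orig, x_denoised):
    noise = np.asarray(x_orig, float) - np.asarray(x_denoised, float)
    return float(np.sqrt(np.mean(np.square(noise))))
```

A higher SNR and a lower RMSE both indicate a better noise-reduction result.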

Simulation Experiment
To validate the proposed noise-reduction model, HRCSA-VMD-WT, this study presents simulations and comparative analyses with recent algorithms, including PSO, WOA, and GWO. The validation is structured into two parts: first, assessing HRCSA's effectiveness; second, evaluating the HRCSA-VMD-WT noise-reduction model. In the simulation experiments, the population size is 100 and the maximum number of iterations is 100. The main parameters of each algorithm are shown in Table 1.
Table 1. Main parameters of the algorithms.

Algorithm Name | Main Parameters
PSO | w_max = 0.9, w_min = 0.2, decreasing linearly; r_1, …

Simulation Experiment of HRCSA
This study evaluates the algorithms' performance and reliability using five test functions as benchmarks. Functions F1 and F2, which are single-peak, assess convergence speed and precision. Conversely, functions F3 to F5, which are multi-peak, examine the ability to escape local optima. Details and characteristics of these functions are delineated in Table 2.
The four optimization algorithms were each applied to the five test functions in thirty independent runs. For each test function, the optimal solution, average value, and standard deviation were recorded; these statistical outcomes are presented in Table 3. Figures 4-8 illustrate the analyses: part (a) displays the three-dimensional representation of each test function, while part (b) depicts the iterative performance of each algorithm.
From the three-dimensional plots, it is evident that F1 and F2 possess only one extreme point, whereas F3 to F5 have multiple extreme points, which raises the difficulty of optimization. For the F1 test function, WOA, GWO, and HRCSA successfully identified the optimal solution, whereas PSO exhibited a noticeable deviation even after 100 iterations. For F2, all four algorithms reached the optimal solution, though PSO's precision was comparatively lower. Only HRCSA could pinpoint the optimal solution for the multimodal F3 function. The five iterative convergence diagrams reveal that HRCSA starts with the least deviation from the optimal solution and therefore converges fastest in subsequent iterations; this efficiency is attributed to the integration of the greedy strategy with Cauchy mutation. Consequently, HRCSA exhibits superior stability and precision compared with its counterparts. PSO handles low-dimensional problems acceptably but has the slowest convergence speed of the four algorithms. The convergence diagram in Figure 8 shows that WOA performs similarly to PSO and GWO in convergence speed and accuracy on low-dimensional problems but falls behind GWO on high-dimensional problems. Across Figures 4-7, GWO outperforms PSO and WOA in convergence speed and accuracy; however, GWO is prone to local optima in low-dimensional problems, as indicated in Figure 8.
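The greedy strategy with Cauchy mutation credited above can be sketched as a stand-alone step. This is an illustrative minimization loop on an F1-style sphere function, not the paper's full HRCSA update:

```python
import numpy as np

def cauchy_greedy_step(x, fx, objective, rng, scale=0.5):
    """Perturb a candidate with heavy-tailed Cauchy noise and keep the mutant
    only if it improves the objective (greedy selection, minimization)."""
    mutant = x + scale * rng.standard_cauchy(x.shape)
    f_mutant = objective(mutant)
    return (mutant, f_mutant) if f_mutant < fx else (x, fx)

# Usage on the sphere function, a typical single-peak (F1-style) benchmark
rng = np.random.default_rng(0)
sphere = lambda v: float(np.sum(v ** 2))
x = rng.uniform(-5.12, 5.12, size=5)
f0 = f = sphere(x)
for _ in range(200):
    x, f = cauchy_greedy_step(x, f, sphere, rng)
```

The heavy tails of the Cauchy distribution occasionally produce large jumps that help escape local optima, while greedy selection guarantees the best-so-far fitness never worsens.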


Simulation Experiment of HRCSA-VMD-WT
To evaluate the proposed noise-reduction model's performance, a simulation signal of 1000 data points collected over a duration of 1 s was analyzed. The original signal y and the noisy signal Y are defined in Equation (31):

Y = y + noise (31)

where f is the sampling frequency, with f1 at 50 Hz, f2 at 30 Hz, and f3 at 20 Hz, and noise = 0.6randn(1, 1000) is white noise generated randomly following a normal distribution. Figures 9 and 10 illustrate the original and corresponding noisy signals, respectively.
The minimum envelope entropy is taken as the objective function of HRCSA. HRCSA was used to derive the optimal VMD parameters, K = 6 and α = 2495. Figure 11 presents the time-domain representation of the IMF components post-VMD. Cosine similarity was calculated for each IMF; IMFs with cosine similarity above the average were denoised using WT. The denoised signal is depicted in Figure 12.
This study introduces the EMD-WT and CSA-VMD-WT models alongside HRCSA-VMD-WT to denoise the signal, with SNR and RMSE used to evaluate the noise-reduction effect of the three models; the corresponding results are shown in Table 4. HRCSA-VMD-WT outperforms EMD-WT, showing a 91.6% improvement in SNR and a 44.7% reduction in RMSE. The analysis also highlights the significant influence of the K and α parameters on VMD-WT's noise-reduction capability, underscoring that HRCSA-optimized parameters substantially enhance performance. In the CSA-VMD-WT model, the optimal VMD parameters are K = 7 and α = 1986, while in the HRCSA-VMD-WT model they are K = 6 and α = 2495. Compared with CSA-VMD-WT, HRCSA-VMD-WT improves SNR by 74.9% and reduces RMSE by 41.2%.
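The minimum-envelope-entropy objective used for HRCSA can be sketched with a NumPy-only Hilbert transform; the probability normalization and the small epsilon are this sketch's choices:

```python
import numpy as np

def envelope_entropy(signal):
    """Envelope entropy of a signal: build the analytic signal via an FFT-based
    Hilbert transform, take its magnitude envelope, normalize the envelope to a
    probability distribution, and return its Shannon entropy."""
    x = np.asarray(signal, dtype=float)
    n = x.size
    spectrum = np.fft.fft(x)
    h = np.zeros(n)                     # analytic-signal filter weights
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    envelope = np.abs(np.fft.ifft(spectrum * h))
    p = envelope / np.sum(envelope)     # envelope as a probability distribution
    return float(-np.sum(p * np.log(p + 1e-12)))
```

A sparse, impulsive envelope (periodic fault impacts) gives low entropy, while a diffuse envelope gives high entropy, which is why minimizing envelope entropy steers VMD's K and α toward fault-revealing decompositions.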

Experimental Verification and Analysis
A fatigue test platform for slewing bearings was developed to study their dynamic performance under different operating conditions. The platform replicates the axial force and overturning moment encountered by the slewing bearing during operation, inducing relative rotation between the inner and outer rings of the bearing at a specified speed. During testing, the inner ring of the bearing is secured to the rotating section of the table with bolts through the connecting flange. This rotating component is powered by a three-phase variable-frequency motor via a two-stage synchronous-belt reduction system. The upper section of the bearing's outer ring connects to the upper clamp, where a hydraulic rod applies pressure through the loading head to generate the axial force and overturning moment on the bearing. The software component of the test bench is developed in LabVIEW (Version 20.0), enabling real-time monitoring of the test bearing's status and recording vibration signals at predefined time intervals. Figure 13 displays the constructed test platform and its Human-Machine Interface (HMI). The initial vibration signals of the tested slewing bearing were obtained using a vibration sensor in direct magnetic contact with the upper clamp. These initial signals contain noise components from the mechanical drive system, the hydraulic loading system, and other sources. Consequently, before the captured vibration signals are used for diagnosing slewing bearing faults, noise reduction must be performed on them.

This study investigates slewing bearing model 010.25.380 under a 7.6-ton load, with an overturning moment of 1.1402 × 10^7 N·m and an inner-ring rotation speed of 110 r/min. The selected piezoelectric vibration sensor has a measuring range of ±25 g and a sampling frequency of 1000 Hz.
Figure 14 displays the noisy vibration signal of the slewing bearing with the worn inner raceway, while Figure 15 depicts the frequency-domain diagram of this signal. In Figure 15, the highest amplitude of the vibration signal appears at about 42 Hz, with frequencies below 100 Hz being the most prominent. However, accurately determining whether the peak frequency (42 Hz) results from noise or from bearing failure is challenging due to noise interference in the high-frequency range; this interference hinders precise observation of the amplitudes at the harmonics of the highest-amplitude frequency. Figure 16 depicts the vibration signal after denoising with the HRCSA-VMD-WT method. Notably, the curve exhibits a reduced amplitude compared with the original signal and a clearer periodicity. In Figure 17, the frequency-domain diagram of the noise-reduced signal reveals fault frequencies at about 42 Hz, 82 Hz, and 167 Hz, which align with the theoretical prediction for the slewing bearing's inner-raceway fault. These findings confirm the practical utility of the HRCSA-VMD-WT method in diagnosing slewing bearing faults.
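Reading off the highest-amplitude frequency, as done for Figures 15 and 17, amounts to locating the peak of the one-sided FFT spectrum (a sketch; the function name is illustrative):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the highest-amplitude bin in the
    one-sided FFT spectrum, skipping the DC bin."""
    x = np.asarray(signal, dtype=float)
    n = x.size
    amplitude = np.abs(np.fft.rfft(x)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs[int(np.argmax(amplitude[1:])) + 1]

# Usage: a 42 Hz tone sampled at 1000 Hz (the paper's sampling frequency);
# 1000 samples over 1 s give exactly 42 cycles, so the peak bin is 42 Hz.
fs = 1000
t = np.arange(1000) / fs
peak = dominant_frequency(np.sin(2 * np.pi * 42 * t), fs)
```

After denoising, the same peak search applied around the harmonics (here roughly 2× and 4× of 42 Hz) is what makes the 82 Hz and 167 Hz fault components distinguishable from noise.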

Conclusions
This study addresses the issue of excessive random noise in slewing bearing fault vibration signals by introducing the HRCSA-VMD-WT noise-reduction model. The model comprises three parts: HRCSA, VMD, and WT. The novel HRCSA method improves the standard CSA by employing a Chaotic Reverse Learning strategy to enhance the quality of the initial population, and integrates bubble-net hunting and greedy strategies with Cauchy mutation to improve the convergence speed and accuracy of CSA. HRCSA is employed to determine the optimal input parameters K and α for VMD. VMD then decomposes the noisy vibration signal into a series of IMF components; noisy IMF components identified by cosine similarity are denoised through WT, leading to significant noise reduction in the reconstructed signal. Simulation experiments were carried out for both HRCSA and HRCSA-VMD-WT to validate the proposed noise-reduction model. The results show that HRCSA outperforms PSO, WOA, and GWO in convergence speed and accuracy, and HRCSA-VMD-WT demonstrates superiority over the EMD-WT and CSA-VMD-WT models. Notably, HRCSA-VMD-WT increases the Signal-to-Noise Ratio (SNR) by a minimum of 74.9% and reduces the Root Mean Square Error (RMSE) by at least 41.2% compared with CSA-VMD-WT and EMD-WT, offering a dependable and effective denoising solution for slewing bearing fault diagnosis.

(4) Capturing prey: (1) Calculate the central position of the current flock. (2) Use a rotation matrix to pinpoint the prey's position. (3) Form the transformation position through the rotation matrix. (4) Derive the final position by integrating the central position of the current chameleon group.
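A 2-D sketch of the four capturing-prey steps above; the rotation angle and the uniform rotation of every chameleon about the centre are this sketch's assumptions, not the paper's exact update:

```python
import numpy as np

def capture_prey_step(positions, theta=np.pi / 6):
    """(1) compute the flock's central position; (2)-(3) rotate each chameleon
    about that centre with a 2-D rotation matrix to form the transformation
    position; (4) combine with the centre to obtain the final position."""
    center = positions.mean(axis=0)                 # (1) central position
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])               # (2) rotation matrix
    transformed = (positions - center) @ rot.T      # (3) transformation position
    return center + transformed                     # (4) final position
```

Because the rotation is applied about the flock centre, the swarm's mean position is preserved while individual chameleons sweep new directions around the prey.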

(1) The noisy IMFs are transformed by the wavelet. (2) The wavelet coefficients are calculated. (3) The threshold function handles the wavelet coefficients. (4) The denoised signal is reconstructed from the processed coefficients.

Figure 8. (a) Three-dimensional plot of F5; (b) convergence curves of the different algorithms for F5.

Figures 9 and 10 illustrate the original and corresponding noisy signals, respectively.

Figure 13. Test platform and its HMI.


Figure 15. Frequency domain diagram of the vibration signal.


Figure 17. Frequency domain diagram after noise reduction.

Table 2. Test function feature information.

Table 3. Experimental results of HRCSA and the other comparison algorithms on the test functions.

Table 4. Noise-reduction performance of the three models.
