Electronics
  • Article
  • Open Access

22 November 2025

Optimizing LSSVM for Bearing Fault Diagnosis Using Adaptive t-Distribution Slime Mold Algorithm

1 School of Mechanical Engineering, Jiangsu University of Technology, Changzhou 213001, China
2 School of Automobile and Traffic Engineering, Jiangsu University of Technology, Changzhou 213001, China
3 School of Airport Management, Jiangsu Aviation Technical College, Zhenjiang 212134, China
4 China Classification Society Institute for Maritime Transport Equipment Safety, Nanjing 210011, China
Electronics 2025, 14(23), 4568; https://doi.org/10.3390/electronics14234568
This article belongs to the Section Artificial Intelligence

Abstract

Accurate and robust bearing fault diagnosis is crucial for the reliability of rotating machinery. To improve the precision of bearing fault classification, this study introduces a novel methodology that integrates the Adaptive t-distribution Slime Mold Algorithm (AtSMA) with the Least Squares Support Vector Machine (LSSVM). During the signal processing phase, Local Mean Decomposition (LMD) is employed to extract intrinsic mode functions from bearing vibration signals, which are subsequently reconstructed using the Pearson correlation coefficient method. Key features, such as sample entropy, permutation entropy, and energy entropy, are calculated to create a comprehensive feature vector for fault diagnosis. To enhance the convergence stability and global exploration capabilities of the Slime Mold Algorithm (SMA), an adaptive t-distribution mutation mechanism is incorporated to increase population diversity. Additionally, an improved step size strategy is implemented to prevent premature convergence and to expedite optimization speed. AtSMA is utilized to optimize the kernel parameters and penalty factor of LSSVM, thereby enhancing fault classification accuracy. Experimental evaluations conducted on two benchmark bearing datasets reveal that the proposed method achieves an average diagnostic accuracy of 96% on the Case Western Reserve University (CWRU) dataset and 93.25% on the Xi’an Jiaotong University dataset, surpassing conventional optimization algorithms and diagnostic techniques. These findings substantiate the superior diagnostic precision and robustness of the proposed approach under various fault scenarios and dynamic operating conditions.

1. Introduction

Bearings, as essential components in rotating machinery, serve critical functions across aerospace, energy, and transportation sectors by facilitating rotational motion, reducing friction, and transmitting mechanical loads [,,]. However, their operational reliability is severely challenged under extreme conditions characterized by high dynamic loads, intense vibrations, and acoustic disturbances []. These adverse conditions hasten bearing degradation, and undetected faults can evolve into catastrophic failures, causing unplanned downtime, production losses [], and potentially safety-critical incidents with significant socioeconomic impacts. Consequently, there is an urgent need for research aimed at improving the accuracy and robustness of diagnostic techniques, particularly in complex operational contexts. Moreover, bearing faults are increasingly recognized as resulting not only from mechanical stresses but also from electrical stresses, such as circulating bearing currents. These electrical stresses, often neglected in conventional analyses, can inflict considerable damage, particularly in contexts involving electric motors and electric vehicles (EVs), where electrical faults are more common [,]. This oversight underscores the necessity for diagnostic approaches that consider both mechanically and electrically induced bearing damage.

1.1. Literature Review

Traditional vibration analysis methods face limitations when processing nonlinear and non-stationary signals emitted from bearings operating under variable speeds or low-velocity conditions [,,]. Local Mean Decomposition (LMD) overcomes these challenges by adaptively decomposing signals into Product Functions (PFs) [], thereby efficiently isolating intrinsic frequency components [,]. Unlike the Wavelet Transform (WT), which depends on predefined basis functions, the self-adaptive nature of LMD excels in managing transient features in non-stationary settings []. To enhance feature extraction, we apply Pearson correlation coefficient analysis to pinpoint dominant PFs that contain essential fault indicators while minimizing noise-corrupted components, thus improving the signal-to-noise ratio.
The feature engineering phase incorporates a multi-entropy framework that merges sample entropy [], permutation entropy, and energy entropy. This integrated approach captures diverse facets of fault characteristics: (1) Sample entropy measures signal complexity, reflecting dynamic alterations induced by faults []; (2) Permutation entropy identifies temporal patterns, facilitating early anomaly detection []; (3) Energy entropy assesses the heterogeneity of energy distribution across frequency bands []. The synthesis of these measures forms a multidimensional feature vector that provides a discriminative basis for fault pattern recognition.
The evolution of intelligent diagnostics has significantly advanced the application of machine learning in mainstream fault detection, particularly in tasks involving nonlinear classification []. Although conventional Support Vector Machines (SVM) [] demonstrate proficiency in high-dimensional pattern recognition, their practical application is often hampered by computational bottlenecks when dealing with small datasets. In contrast, Least Squares SVM (LSSVM) [,] addresses these limitations by converting the quadratic optimization problem into one of solving linear equations, thereby substantially accelerating the training process. However, the performance of LSSVM remains highly sensitive to the selection of hyperparameters, such as kernel parameters and regularization factors []. Traditional optimization strategies, such as grid search, tend to be inefficient and risk entrapment in local optima, particularly in high-dimensional spaces. Moreover, recent studies have increasingly investigated hybrid frameworks based on deep learning, including CNN-LSTM and Transformer-driven models, which autonomously learn hierarchical fault features and exhibit remarkable diagnostic accuracies []. Nonetheless, these models generally demand large-scale labeled data sets and substantial computational resources, which curtails their feasibility in scenarios involving small samples or real-time monitoring.
Recent developments have utilized metaheuristic algorithms, such as Particle Swarm Optimization (PSO) [,], Sparrow Search Algorithm (SSA) [], and Cuckoo Search (CSA) [], to tune LSSVM parameters. Despite their enhanced global search capabilities, these methods continue to face challenges in balancing the trade-offs between exploration and exploitation, often resulting in premature convergence or protracted search refinement. Slime Mold Algorithm (SMA), inspired by the foraging behaviors of Physarum, exhibits considerable potential for global optimization. However, it still suffers from slow convergence rates and inadequate precision in local searches within high-dimensional diagnostic contexts.
Although alternative metaheuristics such as Harris Hawks Optimization (HHO), the Equilibrium Optimizer (EO), and the Butterfly Optimization Algorithm (BOA) demonstrate strong global search capabilities, they frequently suffer from instability or delayed convergence when optimizing high-dimensional fault features. Unlike these methods, SMA provides a more balanced adaptive mechanism but still requires improvements in convergence precision and local search accuracy.

1.2. Contribution

This study introduces an Adaptive t-distribution Slime Mold Algorithm (AtSMA) to enhance the optimization of LSSVM for bearing fault diagnosis. Our methodology incorporates three principal innovations:
(1)
Signal Decomposition: LMD is utilized to decompose vibration signals into significant components, which aids in noise suppression and the extraction of more representative features for further analysis.
(2)
Feature Fusion: We adopt a hybrid entropy-based feature fusion strategy to encapsulate comprehensive signal characteristics. By integrating multiple entropy measures, this method enhances the discriminative capability of the feature set.
(3)
Algorithm Enhancement: The proposed AtSMA employs adaptive t-distribution mutation for global exploration and dynamic step-size adjustment to accelerate local convergence. These strategies mitigate the performance constraints of the standard SMA in high-dimensional optimization contexts.
Compared to existing SMA variants and t-distribution-based metaheuristics, the proposed AtSMA introduces an innovative adaptive mutation mechanism that dynamically balances exploration and exploitation, effectively addressing the premature convergence issues commonly observed in conventional SMA, HHO, and BOA algorithms. Experimental results confirm that the AtSMA-LSSVM model surpasses conventional methods in diagnostic accuracy, providing both theoretical advancements and practical benefits for intelligent fault diagnosis.
Despite integrating multiple techniques for signal decomposition, feature extraction, and optimization, the proposed method retains a manageable computational complexity. This efficiency stems largely from the optimization process enabled by the AtSMA algorithm, which is designed to balance exploration and exploitation effectively. LSSVM offers a streamlined optimization framework that reduces the computational burden typically associated with traditional SVM methods. Furthermore, the versatility of the AtSMA-LSSVM method renders it suitable for a variety of challenging scenarios, including environments with sparse samples, unknown faults, and significant noise interference. The adaptive nature of the AtSMA optimization, coupled with robust feature extraction via the hybrid entropy approach, augments the model’s generalization capability across diverse fault types, even under uncertain conditions such as those encountered in complex systems like transport ship propeller diagnostics [].

1.3. Organization

The remainder of this study is structured as follows. Initially, a comprehensive review of the relevant literature and background concepts is presented. This is followed by a detailed description of the proposed AtSMA-LSSVM framework, including the signal processing techniques and feature extraction processes employed. Subsequent sections delineate the experimental setup, performance evaluation, and a discussion that compares the new method with existing techniques. The study concludes with a summary of the findings and offers suggestions for future research directions.

2. Signal Decomposition, Reconstruction, and Feature Extraction

2.1. Weak Signal Extraction Based on LMD

LMD is an adaptive decomposition method specifically designed for processing nonlinear and non-stationary signals. By decomposing a signal into a series of Product Functions, LMD effectively captures local characteristics at different frequency levels, providing robust support for fault feature analysis under complex operating conditions. One of its key advantages is that it does not require predefined basis functions, making it more adaptable to intricate signal variations.
Given an original signal x(t), LMD begins by identifying its local extrema and constructing the upper and lower envelopes. These envelopes are then used to compute the local mean m(t) and amplitude envelope a(t), which describe the dynamic changes in the signal over time []. These parameters form the foundation for mean removal processing and intrinsic mode component extraction. After eliminating the local mean, the signal is decomposed into a standardized form, facilitating further analysis and feature extraction.
h(t) = x(t) − m(t)
The ratio of the mean-removed signal to its amplitude envelope is processed to generate the instantaneous frequency component s(t). Using a modulation formula, the Product Function is then constructed:
PF(t) = a(t)·cos(θ(t))
where θ(t) represents the phase function.
After extracting one PF, it is subtracted from the original signal, leaving the residual signal:
r(t) = x(t) − PF(t)
This process is repeated iteratively until the residual signal no longer contains significant oscillatory components. Ultimately, the original signal x(t) is decomposed into a sum of multiple PFs and a residual term:
x(t) = Σ_{i=1}^{n} PF_i(t) + r(t)
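To make the recursion above concrete, the following pure-Python sketch peels components from a synthetic two-tone signal. It is illustrative only: the local mean is approximated by a centred moving average rather than the extrema-envelope mean of real LMD, and the amplitude demodulation that yields s(t) is omitted; the names `local_mean` and `lmd_sketch` are ours, not from the paper.

```python
import math

def local_mean(x, w=11):
    """Crude stand-in for LMD's extrema-envelope local mean m(t):
    a centred moving average of width w (an illustrative simplification)."""
    n, h = len(x), w // 2
    out = []
    for t in range(n):
        seg = x[max(0, t - h):min(n, t + h + 1)]
        out.append(sum(seg) / len(seg))
    return out

def lmd_sketch(x, n_components=3):
    """Iteratively peel h(t) = x(t) - m(t) as the PF and carry the slow
    part forward as the residual (real LMD also demodulates h(t) by the
    amplitude envelope a(t); that step is omitted here)."""
    pfs, r = [], list(x)
    for _ in range(n_components):
        m = local_mean(r)
        pfs.append([ri - mi for ri, mi in zip(r, m)])  # h(t) = x(t) - m(t)
        r = m
    return pfs, r

# synthetic two-tone signal: fast oscillation plus a slow trend
x = [math.sin(2 * math.pi * 0.2 * t) + 0.3 * math.sin(2 * math.pi * 0.02 * t)
     for t in range(200)]
pfs, r = lmd_sketch(x)
# by construction, x(t) = sum_i PF_i(t) + r(t)
recon = [sum(pf[t] for pf in pfs) + r[t] for t in range(len(x))]
err = max(abs(u - v) for u, v in zip(x, recon))
```

Whatever stand-in is used for the local mean, the peeling scheme guarantees that the extracted components plus the residual reconstruct the original signal exactly.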

2.2. Signal Reconstruction Based on Pearson Correlation Coefficient

Following decomposition, each PF component characterizes a different frequency band of the signal. In real-world scenarios, signals are often contaminated with noise and other interference, so some components may contain irrelevant or predominantly noise-dominated information. Using all components directly for feature extraction and diagnosis not only increases computational complexity but can also produce feature extraction results that diverge from the true signal characteristics, degrading the accuracy of fault diagnosis. To refine the decomposition outcomes, each mode component must therefore be screened before reconstruction. In this research, the Pearson correlation coefficient is used for component selection: it evaluates the linear correlation between each component and the original signal to determine whether significant feature information is present []. Components deemed essential are retained, while those dominated by noise are excluded. This method effectively mitigates noise interference while preserving the critical features of the signal.
The Pearson correlation coefficient R i between each component P F i ( t ) and the original signal x(t) is calculated as follows:
R_i = Σ_{t=1}^{N} (PF_i(t) − PF̄_i)(x(t) − x̄) / √[ Σ_{t=1}^{N} (PF_i(t) − PF̄_i)² · Σ_{t=1}^{N} (x(t) − x̄)² ]
Here, PF̄_i and x̄ denote the means of the respective signals, and the Pearson correlation coefficient R_i lies within the range [−1, 1]. A value of R_i close to 1 indicates a strong linear correlation between the component and the original signal, whereas an R_i approaching 0 suggests a weak correlation.
A correlation threshold τ is established, such that if R i τ , the component is considered significant and retained for signal reconstruction []. Components failing to meet this criterion are deemed as noise or irrelevant and are subsequently discarded. The reconstructed signal x r ( t ) can thus be expressed:
x r ( t ) = i S P F i ( t )
where S denotes the set of components satisfying R i τ .
In this study, the correlation threshold τ is empirically set at 0.3, following comprehensive testing and analysis across various datasets and fault conditions. This threshold was selected after considering the balance between retaining critical fault-related components and excluding noise. A threshold of 0.3 effectively balances these considerations, ensuring the selection of components with substantial correlations to the original signal while discarding those dominated by noise.
While a lower threshold might retain more components, potentially including noise, and a higher threshold could exclude components containing subtle yet significant fault information, the established threshold of 0.3 has proven to consistently provide robust feature sets. This balance is crucial for maintaining diagnostic accuracy across different noise levels and operational conditions, thereby preventing the inclusion of irrelevant data that could detract from the performance of the fault diagnosis model.
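The selection rule R_i ≥ τ with τ = 0.3 can be sketched in a few lines of pure Python; `pearson` and `reconstruct` are illustrative helper names, and the toy components below stand in for LMD outputs.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = math.sqrt(sum((ai - ma) ** 2 for ai in a) *
                    sum((bi - mb) ** 2 for bi in b))
    return num / den if den else 0.0

def reconstruct(pfs, x, tau=0.3):
    """Keep components with R_i >= tau (0.3, as in the text) and sum them."""
    kept = [pf for pf in pfs if pearson(pf, x) >= tau]
    return [sum(vals) for vals in zip(*kept)] if kept else [0.0] * len(x)

x = list(range(1, 11))                               # toy "original signal"
pf1 = [2 * v for v in x]                             # strongly correlated (R = 1)
pf2 = [1 if i % 2 == 0 else -1 for i in range(10)]   # weakly correlated (|R| ~ 0.17)
xr = reconstruct([pf1, pf2], x)                      # keeps only pf1
```

With these toy inputs, the alternating component falls below the 0.3 threshold and is discarded, so the reconstruction consists of the correlated component alone.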

2.3. Feature Extraction from Reconstructed Signal

Following the reconstruction of the signal, feature extraction is conducted to facilitate the accurate classification of fault modes. Given the nonlinear and non-stationary characteristics of signals in bearing fault diagnosis, sample entropy, permutation entropy, and energy entropy have been selected as the primary features for this study. These features are utilized to characterize the signal from multiple perspectives, including complexity, sequencing, and energy distribution. This approach enhances both the comprehensiveness and robustness of the feature vector.
Sample entropy is a nonlinear dynamic feature that measures the complexity of a signal []. It quantifies the degree of disorder and the dynamic variation characteristics inherent in the signal. A higher sample entropy value indicates increased complexity and more pronounced dynamic behavior. The calculation of sample entropy involves initially segmenting the signal into subsequences of length m, followed by calculating the probability of matching between each subsequence and all others. Subsequently, the subsequence length is extended to m + 1, and the rate of change in matching probability is computed, which culminates in the expression for sample entropy:
SampEn(m, r) = −ln[ Σ_{i=1}^{N−m} C^{m+1}(i) / Σ_{i=1}^{N−m+1} C^{m}(i) ]
where C m ( i ) represents the number of matches between a subsequence and other subsequences, and r is the matching threshold.
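A minimal pairwise-counting implementation of the ratio above might look as follows; it assumes a fixed tolerance r (rather than the common r = 0.2·std convention), and `sample_entropy` is an illustrative name.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B): B aggregates template-pair matches at
    length m, A at length m + 1 (Chebyshev distance < r, no self-matches)."""
    def matches(k):
        templates = [x[i:i + k] for i in range(len(x) - k + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    c += 1
        return c
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

periodic = [0.0, 1.0] * 20     # highly regular signal -> low complexity
chaotic, v = [], 0.3           # logistic-map sequence -> higher complexity
for _ in range(40):
    v = 3.9 * v * (1.0 - v)
    chaotic.append(v)
```

As the text suggests, the more complex (chaotic) sequence yields a higher sample entropy than the strictly periodic one.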
Permutation entropy, which focuses on the temporal sequence arrangement properties of the signal, is derived by constructing embedding vectors from the time series and sorting the elements within these vectors to determine the permutation pattern of the signal. This entropy is particularly adept at characterizing short-term dynamic changes within the signal. The magnitude of permutation entropy reflects the randomness and complexity of the signal sequence, thereby effectively distinguishing between different fault conditions. The expression for permutation entropy is as follows:
H_p = −Σ_{j=1}^{m!} P_j · ln(P_j)
where P j represents the probability of each permutation pattern.
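The ordinal-pattern counting behind this formula fits in a few lines; `permutation_entropy` is an illustrative name, and the embedding dimension m = 3 is a common default rather than a value from the paper.

```python
import math

def permutation_entropy(x, m=3):
    """H_p = -sum_j P_j ln(P_j) over the ordinal patterns of the
    embedding vectors (x_t, ..., x_{t+m-1})."""
    counts = {}
    for t in range(len(x) - m + 1):
        # rank pattern of the window: argsort of the m values
        pattern = tuple(sorted(range(m), key=lambda k: x[t + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

A strictly monotone sequence produces a single ordinal pattern and hence zero permutation entropy, while any sequence with mixed orderings yields a positive value.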
Energy entropy is utilized to describe the energy distribution of the signal across different frequency bands, marking it as a significant feature for characterizing the non-stationarity of the signal. It underscores the non-uniform distribution of energy [], which is particularly valuable for distinguishing between various fault types. By segmenting the signal into frequency bands, the energy proportion of each segment is calculated, and the entropy value is derived based on its probability distribution:
H_e = −Σ_{k=1}^{K} P_k · ln(P_k)
where P k represents the proportion of the energy in the k-th segment relative to the total energy.
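Given per-band signals, the energy shares P_k and the entropy follow directly; `energy_entropy` and the toy bands below are illustrative, not from the paper.

```python
import math

def energy_entropy(bands):
    """H_e = -sum_k P_k ln(P_k), with P_k the share of band k's energy
    (sum of squared samples) in the total energy."""
    energies = [sum(v * v for v in band) for band in bands]
    total = sum(energies)
    return -sum((e / total) * math.log(e / total)
                for e in energies if e > 0)

# four bands carrying equal energy -> maximal entropy ln(4)
h = energy_entropy([[1.0, 0.0], [0.0, 1.0], [1.0], [-1.0]])
```

A perfectly uniform energy distribution across K bands gives the maximum value ln(K), and the entropy drops as energy concentrates in fewer bands.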
The sample entropy, permutation entropy, and energy entropy are subsequently combined to construct the feature vector, which is formulated as follows:
F = [SampEn, H_p, H_e]
These features collectively describe the complexity, sequencing, and energy distribution of the reconstructed signal from various dimensions, thereby constructing a comprehensive and robust set of fault features.

3. Bearing Fault Diagnosis Model Based on Optimizing LSSVM Using Adaptive t-Distribution Slime Mold Algorithm

This section first outlines the standard SMA and its enhancement with adaptive t-distribution mutation to improve global and local search capabilities. It then describes LSSVM and how AtSMA optimizes its key parameters. The optimized LSSVM is subsequently employed for efficient fault classification under intricate conditions.

3.1. Slime Mold Algorithm

SMA is a population-based intelligence optimization algorithm inspired by the process through which slime molds form optimal paths while foraging for food. The inspiration comes from the way slime molds, in their natural growth process, continuously expand and modify their paths to identify the shortest route to a food source. Slime molds secrete chemical substances that attract other individuals, and based on the variation in the concentration of these substances, they adjust their expansion behavior. This process can be modeled as an optimization problem-solving process.
The essence of SMA lies in simulating the expansion and contraction behaviors of slime molds to explore the solution space. This involves iteratively updating the solution states to converge upon the global optimum []. In this algorithm, each individual organism updates its position by emulating the slime mold’s expansion behavior []. The formula for updating the position of each member of the population is as follows:
x_i(t+1) = x_i(t) + α·F(x_i(t))
Here, x i ( t ) denotes the position of the i-th individual at time t, α is a step-size factor, and F ( x i ( t ) ) represents the attraction or driving force acting on the individual. This force is pivotal to the individual’s movement and is influenced by both the current and target positions. The driving force, F ( x i ( t ) ) , comprises two components: the intrinsic attraction of the slime mold and the interaction force between the individual and the food source:
F(x_i(t)) = A·e^{−β·d(x_i, f)}
In this model, A is the attraction coefficient, β signifies the decay factor, and d ( x i , f ) is the Euclidean distance between the current position x i of the individual and the food source f. This attraction model encapsulates the slime mold’s expansion process, where the attraction increases as the distance between the individual and the target decreases.
Furthermore, to simulate the branching behavior of slime molds during their search for food, a local search operation is applied to update the individual’s position. At each search step, SMA selects the current best food source and adjusts its growth path accordingly. Thus, the update of an individual’s position depends not only on the current position and target location but may also be influenced by other individuals:
x_i(t+1) = x_i(t) + γ·(x_g − x_i(t)) + ε·(x_i(t) − x_{r_i}(t))
Here, x g represents the global best position, x r i ( t ) indicates the current random position of the individual, and γ and ε are weighting factors used to balance global information and local search.
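The two update rules above can be combined into a toy optimizer. The sketch below minimizes a sphere function with greedy acceptance; the coefficient ranges, population size, and acceptance rule are our illustrative choices, not the paper's settings.

```python
import random

def sma_sketch(fitness, dim=2, pop=20, iters=100, seed=0):
    """Toy SMA-style search: each individual moves toward the global best
    (g term) with a random-peer perturbation (e term), mirroring
    x_i(t+1) = x_i(t) + gamma*(x_g - x_i(t)) + eps*(x_i(t) - x_r(t))."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=fitness)
    for _ in range(iters):
        for i in range(pop):
            xr = X[rng.randrange(pop)]                 # random peer
            g, e = rng.random(), 0.1 * rng.random()    # illustrative weights
            cand = [X[i][d] + g * (best[d] - X[i][d]) + e * (X[i][d] - xr[d])
                    for d in range(dim)]
            if fitness(cand) < fitness(X[i]):          # greedy acceptance
                X[i] = cand
        best = min(min(X, key=fitness), best, key=fitness)
    return best

sphere = lambda v: sum(c * c for c in v)   # toy objective, minimum at the origin
b = sma_sketch(sphere)
```

Even this stripped-down variant converges close to the optimum on a smooth unimodal objective, which is the behavior the mutation mechanisms of the next subsection are designed to improve on harder landscapes.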

3.2. Slime Mold Algorithm Improved with Adaptive t-Distribution Mutation

Although SMA exhibits strong global search capabilities when solving optimization problems, its local search ability is relatively weak, making it prone to getting stuck in local optima, especially when dealing with high-dimensional and complex optimization problems. To overcome this limitation, Adaptive t-Distribution Mutation is introduced to improve SMA. By adaptively adjusting the search step size and incorporating mutation operations, the AtSMA algorithm can effectively enhance local search ability and improve the global search accuracy, thus optimizing the solution.
The t-distribution mutation is a probability distribution-based strategy that introduces new search directions by perturbing individual solutions. The t-distribution is widely used in statistics and is known for its heavier tails, which handle outliers better than the normal distribution and help avoid premature convergence to local optima. Applying such random perturbations to each individual solution lets the algorithm escape local optima more easily during the search and explore a broader solution space. The probability density function of the t-distribution is given by:
f(x) = [Γ((ν+1)/2) / (√(νπ)·Γ(ν/2))] · (1 + x²/ν)^{−(ν+1)/2}
where x denotes the random perturbation, and ν represents the degrees of freedom of the t-distribution, typically set below 2 to accentuate the tail effect. The lower the value of ν, the more pronounced the tail effect, thus increasing the mutation magnitude and aiding in the evasion of local optima.
By applying the t-distribution mutation to SMA, a perturbation term can be randomly generated in each iteration based on the individual’s current fitness and position, thereby introducing new exploration of the solution space. The updated mutation formula is presented as:
x_i(t+1) = x_i(t) + α·F(x_i(t)) + β·T(x_i(t), ν)
where T ( x i ( t ) , v ) indicates the random perturbation generated based on the t-distribution, and β is a coefficient that modulates the mutation intensity, controlling the amplitude of the perturbation. By incorporating this term, individuals receive additional mutation information during the search process, enhancing global search capabilities and the ability to evade local optima.
To enhance the intelligence of the search process, an adaptive mechanism adjusts the degrees of freedom ν of the t-distribution and the mutation step size β . In the initial stages of the search, when the solution space is expansive and the distance between individuals is significant, a larger degree of freedom is selected. This choice serves to reduce the mutation amplitude and maintain robust global exploration capabilities. As the search progresses and individuals converge towards the optimal solution, the need for precise local search increases. Consequently, the degree of freedom is reduced to increase the mutation amplitude, enabling individuals to refine the solution space and prevent premature convergence. The adaptive adjustment formula is detailed in Equations (16) and (17),
ν(t+1) = ν(t) − γ₁·δf
β(t+1) = β(t) + γ₂·δf
where δ f represents the change in current fitness, and γ 1 and γ 2 are adjustment parameters that control the update rate of the degrees of freedom and mutation step size, respectively. Through these adjustments, individuals can dynamically modify the mutation operation based on the search progress, enhancing the precision and efficiency of the search process.
The values of ν and β are not arbitrary; they are based on empirical observations and sensitivity analyses conducted during the experiments. Initially, both parameters are set to values that facilitate broad exploration of the solution space. Typical initial values of ν are around 1.5, as smaller values introduce stronger tail effects that aid in escaping local optima. The initial β is usually set at a moderate level, such as 0.5, to ensure controlled mutation intensity and prevent excessive mutation during the early stages of the search.
The parameters Δν and Δβ are derived from experimental results and sensitivity analyses to dynamically adjust the mutation dynamics during the search process. Specifically, the performance of the algorithm at different stages is observed, and these parameters are adjusted to maintain an equilibrium between exploration (global search) and exploitation (local search). Conducting a more detailed sensitivity analysis can further refine these parameters for specific problem types, ensuring optimal performance across various optimization tasks. For instance, higher values of ν are typically selected when the algorithm needs to explore large regions of the search space, whereas lower values are preferred for more precise local searches. This adaptive mechanism ensures that the mutation process can dynamically adjust as the search progresses, resulting in an efficient and effective optimization process. By adjusting ν and β based on the search state, the AtSMA algorithm effectively explores the solution space, avoids local optima, and converges to high-quality solutions.
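Under the initial settings suggested above (ν ≈ 1.5, β = 0.5), the perturbation term β·T(·, ν) and the adaptive updates can be sketched as follows. Sampling uses the standard construction T = Z / √(χ²_ν / ν); the lower clamp on ν and the rates γ₁ = γ₂ = 0.05 are our illustrative assumptions.

```python
import math
import random

def t_variate(rng, v):
    """Student-t sample via T = Z / sqrt(chi2_v / v), where
    chi2_v is drawn as Gamma(v/2, scale=2)."""
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(v / 2.0, 2.0)
    return z / math.sqrt(chi2 / v)

def adapt(v, beta, delta_f, g1=0.05, g2=0.05):
    """nu(t+1) = nu(t) - g1*delta_f and beta(t+1) = beta(t) + g2*delta_f;
    the lower clamp on nu is an added safeguard, not from the paper."""
    return max(0.1, v - g1 * delta_f), beta + g2 * delta_f

rng = random.Random(1)
v, beta = 1.5, 0.5   # initial values suggested in the text
perturbations = [beta * t_variate(rng, v) for _ in range(1000)]
```

With ν well below 2, the sampled perturbations exhibit the heavy tails the mutation relies on: most draws are small, but occasional large jumps provide the escape mechanism from local optima.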

3.3. LSSVM

SVM is a robust classification tool widely used in pattern recognition, regression analysis, and fault diagnosis. However, the conventional SVM involves solving a quadratic programming problem during its optimization phase, which is computationally intensive, particularly for large-scale datasets. In contrast, LSSVM, an extension of SVM, employs an equality-constrained optimization method that greatly enhances efficiency in solving problems. By transforming the optimization challenge into a simplified equality-constrained problem, LSSVM alleviates the computational bottleneck associated with traditional SVM, offering substantial benefits in both small sample learning and large-scale data processing.
LSSVM aims to optimize classification performance by minimizing a loss function, thereby deriving an optimal decision function for precise sample classification []. Consider a training set { ( x i , y i ) } , where x i represents the input feature vector and y i denotes the class label. The decision function of LSSVM is typically expressed as follows:
f(x) = wᵀφ(x) + b
In this context, ϕ ( x ) denotes the feature mapping function, w represents the weight vector, and b is the bias term. LSSVM seeks an optimal solution by minimizing the loss function:
J(w, b, γ) = (1/2)·wᵀw + (γ/2)·Σ_{i=1}^{m} e_i²
Here, e i indicates the error for the i-th sample, and γ is the regularization parameter that balances model complexity and error minimization.
To enhance classification performance further, LSSVM integrates equality constraints into its optimization framework, thus ensuring more accurate and dependable decision boundaries.
y_i·(wᵀφ(x_i) + b) = 1 − e_i,  i = 1, 2, …, m
This constraint guarantees that the samples are classified as accurately as possible while maintaining the errors e i within a minimal range. By reformulating the original problem as an equality-constrained optimization issue, LSSVM sidesteps the quadratic programming challenge found in standard SVM, thus streamlining and expediting the resolution process. To address the LSSVM optimization problem, the Lagrange function is formulated, and its partial derivatives are computed to ascertain the optimal solution. The solution can be expressed as:
w = Σ_{i=1}^{m} α_i·y_i·φ(x_i)
Here, α i are the Lagrange multipliers. Solving this system of equations yields the optimal values of w and b , which in turn provide the final classification decision function. The LSSVM classification decision function can be simplified as follows:
f(x) = Σ_{i=1}^{m} α_i·y_i·k(x_i, x) + b
where k ( x i , x ) is the kernel function in the input space, which replaces the feature mapping process and avoids the computational complexity in the high-dimensional space.
LSSVM, as an extension of SVM, significantly improves solution efficiency by converting the optimization problem into an equality-constrained form, making it particularly suitable for small-sample learning and large-scale data processing. In bearing fault diagnosis, LSSVM can effectively improve classification speed and accuracy but still faces the challenge of parameter optimization. AtSMA is therefore combined with LSSVM to optimize its parameters, thereby enhancing the accuracy and robustness of bearing fault diagnosis.
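For small problems, the linear system that determines α and b (Suykens' classification formulation, with Ω_ij = y_i·y_j·k(x_i, x_j) and the system [[0, yᵀ], [y, Ω + I/γ]]·[b; α] = [0; 1]) can be solved directly. The pure-Python sketch below uses an RBF kernel and Gauss-Jordan elimination; the toy data, γ = 10, and σ = 1 are illustrative choices.

```python
import math

def rbf(a, b, sigma=1.0):
    """Gaussian RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return math.exp(-sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * sigma ** 2))

def solve(A, rhs):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1_v],
    with Omega_ij = y_i * y_j * k(x_i, x_j)."""
    n = len(X)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] + [1.0] * n
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = float(y[i])
        for j in range(n):
            A[i + 1][j + 1] = y[i] * y[j] * rbf(X[i], X[j], sigma)
        A[i + 1][i + 1] += 1.0 / gamma
    sol = solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    # decision function f(x) = sum_i alpha_i y_i k(x_i, x) + b
    return lambda x: sum(a * yi * rbf(xi, x, sigma)
                         for a, yi, xi in zip(alpha, y, X)) + b

X = [[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.2, 1.9]]
y = [-1, -1, 1, 1]
f = lssvm_train(X, y)
preds = [1 if f(x) > 0 else -1 for x in X]
```

Note that training reduces entirely to one linear solve, which is the efficiency advantage over the quadratic program of standard SVM; in the proposed method, γ and σ are the hyperparameters that AtSMA would tune.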

3.4. Bearing Fault Diagnosis Model Based on AtSMA-LSSVM

The AtSMA-LSSVM bearing fault diagnosis model initially collects vibration signals from bearings and decomposes these signals into multiple PFs using LMD. This method proves particularly adept at managing non-stationary signals as it adjusts to the characteristics of the signal, thereby facilitating an improved separation of fault-related components from background noise. The Pearson correlation coefficient is subsequently employed to reconstruct the signal, effectively eliminating noise while preserving essential fault information. This enhances the quality of the features extracted. Following this, features such as sample entropy, permutation entropy, and energy entropy are extracted to form a comprehensive feature vector. This vector reflects the signal’s complexity, sequential variations, and energy distribution characteristics. The adoption of this hybrid entropy-based approach is advantageous as it captures diverse aspects of fault characteristics, thereby enhancing the discriminative power of the feature set.
During the parameter optimization phase, AtSMA is utilized to optimize the kernel parameters and other critical parameters of LSSVM. This enhancement improves the performance of the classification model. The synergy between AtSMA and LSSVM facilitates enhanced global exploration and expedited local convergence, effectively addressing the limitations commonly encountered in traditional optimization methods. Fault classification is ultimately performed using the optimized LSSVM, enabling rapid and efficient fault diagnosis of bearings under complex operational conditions. By integrating these techniques, the model not only enhances accuracy but also ensures robustness across various noise levels and dynamic operational scenarios. The specific process is illustrated in Figure 1.
Figure 1. AtSMA-LSSVM Bearing Fault Diagnosis Process Flow.

4. Experimental Analysis

This study utilizes public bearing data provided by Case Western Reserve University to validate the effectiveness of the proposed AtSMA-LSSVM bearing fault diagnosis method. As depicted in Figure 2, the selected specimen is a 6205-2RS deep groove ball bearing, operating at a motor speed of 1797 r/min and equipped with a sensor sampling at a frequency of 12 kHz.
Figure 2. Experimental Signal Acquisition Device.
Bearing fault signals often exhibit nonlinear and non-stationary characteristics, which pose significant challenges for traditional analysis methods, particularly in effectively separating noise from fault features under complex operating conditions. To address these challenges, this study employs the LMD method to decompose vibration signals. LMD separates complex signals into a series of physically meaningful product functions (PFs), which not only preserve the local characteristics of the signals but also enable the effective separation of noise from fault information, thus ensuring the accurate extraction of key features. The LMD method iteratively extracts local extrema, gradually isolating the PFs, so that the frequency and amplitude of each component align closely with the actual fault information. This decomposition not only improves the accuracy of fault feature extraction but also provides a reliable foundation for subsequent steps, such as component screening based on correlation coefficients and feature extraction, significantly enhancing the effectiveness and robustness of the overall fault diagnosis process. Figure 3 illustrates the result of decomposing a fault signal into six modal components using LMD.
Figure 3. Component Plots of Fault Signal After LMD.
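The sifting procedure described above can be sketched roughly as follows. This is a deliberately simplified LMD variant, assuming piecewise local means and envelopes smoothed with a short moving average and a fixed number of sifting passes; the function names, the sift count, and the stopping tolerance are all assumptions, and production implementations handle boundary effects and convergence criteria far more carefully.

```python
import numpy as np

def extrema(x):
    """Indices of local extrema, with endpoints included."""
    idx = [0]
    for i in range(1, len(x) - 1):
        if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0:
            idx.append(i)
    idx.append(len(x) - 1)
    return np.array(idx)

def lmd_pf(x, n_sift=5, smooth=7):
    """Extract one product function: repeatedly remove the smoothed local
    mean and demodulate by the smoothed local envelope."""
    s = np.asarray(x, float).copy()
    a_total = np.ones(len(s))
    k = np.ones(smooth) / smooth                     # moving-average kernel
    t = np.arange(len(s))
    for _ in range(n_sift):
        ex = extrema(s)
        mids = (ex[:-1] + ex[1:]) / 2.0
        m = np.interp(t, mids, (s[ex[:-1]] + s[ex[1:]]) / 2.0)        # local mean
        a = np.interp(t, mids, np.abs(s[ex[:-1]] - s[ex[1:]]) / 2.0)  # local envelope
        m, a = np.convolve(m, k, "same"), np.convolve(a, k, "same")
        a = np.maximum(a, 1e-12)
        s = (s - m) / a                              # sift and demodulate
        a_total *= a
        if np.allclose(a, 1.0, atol=1e-2):           # s is (nearly) pure FM
            break
    return a_total * s                               # PF = envelope * FM part

def lmd(x, n_pf=3):
    """Peel off n_pf product functions; the remainder is the residual."""
    pfs, r = [], np.asarray(x, float).copy()
    for _ in range(n_pf):
        pf = lmd_pf(r)
        pfs.append(pf)
        r = r - pf
    return pfs, r
```

By construction, the extracted PFs plus the residual reconstruct the input signal exactly, which is the property the later Pearson-based screening step relies on.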
As observed from Figure 3, following the LMD, each PF component displays distinct frequency characteristics. Some components contain predominantly high-frequency content, effectively reflecting the fault features of the bearing, particularly subtle changes caused by local faults and abnormal wear. However, not all components are pertinent for representing fault information; some may predominantly consist of noise or irrelevant data. Directly using all components could obscure essential fault features and diminish diagnostic accuracy. Thus, following decomposition, it is crucial to screen these modal components to ensure the high quality and effectiveness of the reconstructed signal. This study uses Pearson’s correlation coefficient to screen the decomposed modal components because it quantifies the linear correlation between each component and the original signal, thereby determining whether a component captures the main features of the signal. By establishing a threshold for the correlation coefficient, only the modal components with a high correlation to the original signal are retained for signal reconstruction, further eliminating noise interference and providing reliable input data for feature extraction. The correlation coefficients for each state’s decomposed PFs are calculated and presented in Table 1.
Table 1. Pearson’s Correlation Coefficient Calculation for Components and Original Signal.
Based on the values of the correlation coefficient between the components and the original signal presented in Table 1, the top three PF components, each with a correlation coefficient greater than 0.3, were selected for signal reconstruction. This method facilitates the retention of essential fault information while eliminating noise and irrelevant data. The reconstructed signals for each condition are depicted in Figure 4.
Figure 4. Reconstructed signals for different states. (a) Normal; (b) Inner Ring Fault; (c) Outer Ring Fault; (d) Roller Fault.
By selecting and reconstructing the signal components, researchers can focus on those components most closely associated with bearing faults, thus avoiding interference from components with low correlation. The features of the reconstructed signal are clearer, with significant noise reduction, which underscores the key fault characteristics. Furthermore, the differences in energy distribution across various fault states become more pronounced, providing a robust foundation for feature extraction and classification.
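The screening-and-reconstruction step above can be sketched as follows. The 0.3 threshold and top-three selection follow the text, while the function name and the zero-signal fallback for the case where no component passes are assumptions.

```python
import numpy as np

def screen_components(pfs, x, threshold=0.3, top_k=3):
    """Keep the PFs most correlated with the raw signal and sum them.

    Returns the reconstructed signal, all Pearson coefficients, and the
    indices of the retained components."""
    r = np.array([np.corrcoef(pf, x)[0, 1] for pf in pfs])
    order = np.argsort(-np.abs(r))                   # strongest correlation first
    keep = sorted(i for i in order[:top_k] if abs(r[i]) > threshold)
    if not keep:                                     # assumed fallback: nothing passes
        return np.zeros_like(x), r, keep
    recon = np.sum([pfs[i] for i in keep], axis=0)
    return recon, r, keep
```

In the study, this is applied per operating condition to the six PFs of Figure 3, yielding the reconstructed signals of Figure 4.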
During the feature extraction phase, MATLAB 2019 was used to extract sample entropy, permutation entropy, and energy entropy as the primary features of each signal. To enhance the comprehensiveness and robustness of these features, the reconstructed signals were segmented into several parts, each containing 1000 sampling points, and the entropy features were computed for each segment. The resulting entropy values were then assembled into feature vectors for model training.
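Sketches of the three entropy features might look as follows (in Python rather than MATLAB). The embedding dimensions, tolerance fraction, and band count are common defaults, not necessarily the values used in the study.

```python
import numpy as np
from math import factorial

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: -ln(A/B), where B and A count template pairs of length m
    and m+1 that stay within tolerance r = r_frac * std(x)."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def pairs(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return ((d <= r).sum() - len(emb)) / 2       # unordered pairs, i != j
    B, A = pairs(m), pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def permutation_entropy(x, m=3, tau=1):
    """Normalized Shannon entropy of ordinal patterns of length m."""
    counts = {}
    for i in range(len(x) - (m - 1) * tau):
        p = tuple(np.argsort(x[i:i + m * tau:tau]))
        counts[p] = counts.get(p, 0) + 1
    prob = np.array(list(counts.values()), float)
    prob /= prob.sum()
    return float(-(prob * np.log(prob)).sum() / np.log(factorial(m)))

def energy_entropy(x, n_bands=8):
    """Shannon entropy of the energy distribution over signal segments."""
    e = np.array([s.sum() for s in np.array_split(np.asarray(x, float) ** 2, n_bands)])
    p = e / e.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

Per the segmentation described above, each 1000-point segment would contribute one three-element vector, e.g. `[sample_entropy(seg), permutation_entropy(seg), energy_entropy(seg)]` (a hypothetical helper, not named in the paper).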
Additionally, to facilitate accurate bearing fault classification, the normal state was labeled as “Label 1,” the inner race fault as “Label 2,” the outer race fault as “Label 3,” and the roller fault as “Label 4.” These labels provide precise target data for model training and evaluation, ensuring that the classifier can effectively distinguish between different fault states, thereby enhancing diagnostic accuracy and reliability. To ensure a fair evaluation, the dataset was randomly divided into 70% for training and 30% for testing, and a 5-fold cross-validation strategy was employed to mitigate overfitting and verify model stability. Moreover, the number of samples across different fault categories was balanced to avoid classification bias.
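A minimal version of the 70/30 split and 5-fold cross-validation protocol is sketched below, using plain shuffled folds as a simplifying assumption (the study additionally balances samples per class, which is omitted here); the function names and the `train_eval` callback convention are assumptions.

```python
import numpy as np

def split_70_30(X, y, seed=0):
    """Shuffled 70/30 train/test split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(0.7 * len(y))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

def kfold_accuracy(X, y, train_eval, k=5, seed=0):
    """Shuffled k-fold cross-validation; train_eval(Xtr, ytr, Xte, yte)
    returns an accuracy, and the mean over folds is reported."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    accs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(train_eval(X[tr], y[tr], X[te], y[te]))
    return float(np.mean(accs))
```

In practice, stratified folds would be preferable for the four-class label scheme (Labels 1–4) to keep every fault type represented in each fold.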
In addition, key parameters such as ν and β in AtSMA were tuned through sensitivity analysis, with ν varied between 0.1 and 1.0 and β between 0.5 and 2.0. Experimental results indicated that ν = 0.6 and β = 1.2 achieved the optimal balance between convergence speed and diagnostic accuracy, while γ1 and γ2 were set to 0.8 and 0.6, respectively, following empirical optimization.
To assess the performance of different optimization algorithms, both AtSMA and the traditional SMA were utilized for training. This comparison facilitated an analysis of the differences in optimization effectiveness, convergence speed, and stability between the two algorithms. The outcomes are illustrated in Figure 5.
Figure 5. Iterative process of different models.
The analysis of iteration outcomes indicates that AtSMA achieves faster convergence and superior optimization performance. In the initial phase, both AtSMA and traditional SMA exhibit a rapid decline in fitness values. However, AtSMA’s reduction in fitness values is more pronounced, suggesting its enhanced efficacy in probing the optimal solution space during the early iterations. As the iteration count increases, the convergence velocity of SMA gradually diminishes, and it subsequently stagnates at a local optimum, where the fitness values plateau at a relatively elevated level. In contrast, AtSMA, equipped with its adaptive t-distribution mutation, augments the global search capability and persists in optimization, ultimately achieving a lower fitness value. This underscores AtSMA’s improved stability and precision in converging towards the global optimum.
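One way to realize an adaptive t-distribution mutation of this kind is to let the degrees of freedom grow with the iteration index, so early mutations are heavy-tailed (Cauchy-like, global search) and late ones approach a Gaussian (local refinement). The sketch below applies the mutation greedily to a toy objective; it omits the full SMA position update and the ν, β, γ1, γ2 parameters described earlier, and the `df = it + 1` schedule is an assumption.

```python
import numpy as np

def t_mutation(x, it, rng):
    """Adaptive t-distribution mutation: the degrees of freedom grow with
    the iteration index, shifting the perturbation from heavy-tailed
    (global) to near-Gaussian (local refinement)."""
    df = it + 1                          # assumed schedule; df -> inf gives N(0, 1)
    return x + x * rng.standard_t(df, size=x.shape)

def optimize(f, lb, ub, n=20, max_it=100, seed=0):
    """Greedy population search driven only by the mutation operator
    (the full slime-mold position update is omitted in this sketch)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(n, len(lb)))
    fit = np.array([f(p) for p in pop])
    for it in range(max_it):
        for i in range(n):
            cand = np.clip(t_mutation(pop[i], it, rng), lb, ub)
            fc = f(cand)
            if fc < fit[i]:              # keep the mutant only if it improves
                pop[i], fit[i] = cand, fc
    return pop[fit.argmin()], float(fit.min())
```

In the diagnosis pipeline, `f` would be the cross-validated LSSVM error as a function of the kernel width and penalty factor, so the optimizer directly minimizes the quantity plotted in Figure 5.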
These findings imply that AtSMA more effectively optimizes the parameters of LSSVM, thereby reducing the objective function value, augmenting the model’s generalization capacity, and enhancing the accuracy of fault diagnosis.
Subsequent to integrating the optimal parameters derived from both algorithms into the LSSVM model, two fault diagnosis models, SMA-LSSVM and AtSMA-LSSVM, were developed. Classification experiments on test samples were conducted, comparing the diagnostic accuracy of the two models under various conditions. The results of these fault diagnosis assessments are depicted in Figure 6.
Figure 6. Model diagnostic results before and after algorithm improvement. (a) SMA-LSSVM diagnostic results; (b) AtSMA-LSSVM diagnostic results.
The comparative analysis reveals that the AtSMA-LSSVM model surpasses SMA-LSSVM in prediction accuracy across all fault categories. SMA-LSSVM displays considerable deviations in predictions for certain sample points, characterized by substantial fluctuations in predicted labels, suggesting that the optimized parameters may induce instability in the model. Conversely, AtSMA-LSSVM yields more consistent classification outcomes, with predicted labels closely aligning with the actual labels. This demonstrates that the AtSMA-LSSVM approach more effectively discerns the characteristic differences among various fault categories, thereby boosting the reliability and accuracy of fault diagnosis.
To ascertain the adaptability and superiority of the proposed AtSMA-LSSVM method in fault diagnosis, this study evaluated various combinations of optimization algorithms and classification models, including AtSMA-LSSVM, AtSMA-SVM, AtSMA-KNN, AtSMA-BP, Deep Convolutional Neural Networks (DCNN), XGBoost, as well as SSA-LSSVM, SMA-LSSVM, and PSO-LSSVM. The diagnostic accuracies of each model across different fault categories are visually represented in a radar chart, with an emphasis on the comparative improvement in diagnostic performance of AtSMA-LSSVM against other optimization methods. The fault diagnosis outcomes are illustrated in Figure 7 and detailed in Table 2.
Figure 7. Fault diagnosis results of different algorithm models.
Table 2. Fault diagnosis results of different algorithm models (%).
The results of the experiments clearly demonstrate that AtSMA-LSSVM significantly surpasses the other models in diagnostic accuracy across all fault categories, achieving an overall accuracy of 96%, which is markedly superior to that of the other optimization methods. This suggests that the adaptive t-distribution mutation strategy effectively enhances both the global search capability and local search precision of the optimization algorithm, thereby optimizing the LSSVM parameters and improving fault classification accuracy. Furthermore, compared to other AtSMA-optimized classification models, LSSVM exhibits a more stable classification capability and a distinct advantage. This stability is primarily attributed to the model’s high computational efficiency, as it obviates the need for the quadratic programming optimization problem typical of traditional SVMs by solving linear systems, thus accelerating the computational speed. It also displays strong robustness against noisy data, which stabilizes the diagnostic results.
In terms of optimization algorithm comparison, the AtSMA-optimized LSSVM classifier significantly outperforms those optimized by SSA, SMA, and PSO. While the SSA-LSSVM and SMA-LSSVM models demonstrate relatively high classification accuracy, they nevertheless perform suboptimally in some fault categories compared to AtSMA-LSSVM. The overall classification performance of PSO-LSSVM is notably inferior, particularly in certain fault categories where diagnostic accuracy dips below 80%, indicative of weak global exploration and a propensity to fall into local optima. Although the DCNN offers high accuracy, it is computationally demanding and requires extensive training time, rendering it less suitable for real-time applications. XGBoost likewise performs commendably but still falls short of AtSMA-LSSVM in fault classification accuracy, especially for Fault 2. Overall, this study confirms the superiority of the AtSMA optimization method in bearing fault diagnosis and offers an efficient and accurate optimization strategy for intelligent fault diagnosis.
To further validate the effectiveness and robustness of the proposed method, supplementary experiments were conducted using the bearing dataset provided by Xi’an Jiaotong University (Dataset 2). To ensure the reliability of the comparative analysis, the same signal processing, feature extraction, and model training procedures as described in the previous section were employed. The experimental setup used for data acquisition is depicted in Figure 8, and the corresponding diagnostic results are tabulated in Table 3. In addition to classification accuracy, several performance metrics, including recall, were calculated to provide a more comprehensive evaluation of the diagnostic performance and to mitigate potential accuracy bias.
Figure 8. Experimental Signal Acquisition Device of Xi’an Jiaotong University bearing dataset.
Table 3. Fault diagnosis results of different algorithm models on the Xi’an Jiaotong University bearing dataset (%).
To rigorously assess the stability and robustness of the proposed method, each experiment was repeated ten times, and the mean and standard deviation of diagnostic accuracy were calculated. The findings indicate that the AtSMA-LSSVM model exhibits minimal fluctuation (±0.6%), demonstrating exceptional stability across repeated trials. As depicted in Table 3, the AtSMA-LSSVM model consistently achieves the highest classification accuracy on the Xi’an Jiaotong University bearing dataset, with values ranging from 92% to 94% across all fault categories. In comparison to other AtSMA-based models, it exhibits markedly superior diagnostic performance. In contrast, traditional optimization algorithms such as SSA, SMA, and PSO achieve relatively lower accuracies, with the PSO-LSSVM model performing the least effectively. Furthermore, the AtSMA-LSSVM also leads in both precision and recall, ensuring consistent classification reliability and a balanced performance between these two metrics. While the DCNN and XGBoost models also perform commendably, they do not match the overall diagnostic accuracy of the AtSMA-LSSVM, which affirms a superior balance between efficiency, accuracy, and robustness. These outcomes corroborate the findings from the CWRU dataset, confirming the generalizability and robustness of the proposed AtSMA-LSSVM method for bearing fault diagnosis under varied experimental conditions.

5. Conclusions

This study presents a bearing fault diagnosis method that uses AtSMA to optimize LSSVM, with the method’s effectiveness corroborated by empirical testing. The principal conclusions drawn are as follows. To counteract the non-stationarity and noise interference prevalent in fault signals, a signal decomposition and reconstruction methodology based on LMD was employed. This approach involved selecting components that exhibited high Pearson correlation coefficients, which effectively minimized redundant noise while preserving critical fault information. For the extraction of features, a novel hybrid entropy method was developed, incorporating sample entropy, permutation entropy, and energy entropy to enhance the distinguishability of various fault types. This method produced stable, information-rich feature vectors that improved classifier performance and generalization capabilities. Additionally, to address the shortcomings of traditional optimization algorithms, the AtSMA was introduced, integrating an adaptive t-distribution mutation and an innovative step-size adjustment strategy. Experimental evaluations on the CWRU dataset demonstrated that the AtSMA-LSSVM model achieved an overall diagnostic accuracy of 96%, significantly surpassing several established benchmarks. Moreover, validation on the Xi’an Jiaotong University bearing dataset further confirmed the generalization capacity of the proposed method, achieving an average diagnostic accuracy of 93.25% across various fault categories. These results underscore the robust optimization and classification efficacy of the AtSMA-LSSVM model under diverse operational conditions.
Future research will aim to expand the proposed method along several avenues. Firstly, the integration of time-frequency domain analysis techniques could enrich feature representation further. Secondly, investigating domain adaptation strategies could enhance the model’s applicability to unfamiliar working conditions. Lastly, the application of this method to more intricate mechanical systems and compound fault scenarios will be crucial in assessing its generalization and practical utility.
Furthermore, subsequent studies will explore the implementation of the proposed method in real-world industrial settings to gauge its effectiveness under practical operating conditions. Specific efforts will focus on innovating hardware platforms and data collection methods to augment the efficiency and reliability of fault diagnosis. By integrating the AtSMA-LSSVM model with cutting-edge data acquisition systems, real-time monitoring, and intelligent fault prediction capabilities, this initiative aims to contribute significantly to the development of robust, scalable, and efficacious diagnostic tools for predictive maintenance. These advancements will enhance the method’s adaptability and render it suitable for a broad spectrum of industrial applications.

Author Contributions

Conceptualization, J.Q. and K.Z.; methodology, J.Q. and L.H.; software, J.Q., K.Z. and L.H.; validation, K.Z. and L.H.; formal analysis, J.Q.; investigation, L.H. and Y.F.; resources, K.Z.; data curation, J.Q. and P.L.; writing—original draft preparation, J.Q.; writing—review and editing, J.Q. and K.Z.; visualization, Y.F. and P.L.; supervision, K.Z.; project administration, K.Z.; funding acquisition, K.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Changzhou Applied Basic Research Program (Pre-funded) (CJ20240036) and Zhenjiang Key Research and Development Program (GY2023020).

Data Availability Statement

All data included in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
