Article

Episodic Training and Feature Orthogonality-Driven Domain Generalization for Rotating Machinery Fault Diagnosis Under Unseen Working Conditions

1 Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou 510070, China
2 Guangdong Key Laboratory of Modern Control Technology, Guangzhou 510070, China
* Author to whom correspondence should be addressed.
Machines 2025, 13(7), 563; https://doi.org/10.3390/machines13070563
Submission received: 22 May 2025 / Revised: 17 June 2025 / Accepted: 27 June 2025 / Published: 28 June 2025

Abstract

In recent years, domain generalization-based fault diagnosis (DGFD) methods have shown significant potential in rotating machinery fault diagnosis in unseen target domains. However, these methods focus on learning domain-invariant representations via feature distribution adaptation. The generalization of classifiers and the orthogonality between fault-related and domain-related features have not been thoroughly explored, which hinders further improvements in DGFD performance. To address these limitations, an episodic training and feature orthogonality-driven domain generalization (EODG) method is proposed. In this method, episodic training is introduced to jointly improve the generalization capabilities of both the feature extractor and fault classifier, while a novel feature transfer loss is proposed for learning domain-invariant representations. Furthermore, the orthogonality between fault-related and domain-related features is enhanced by minimizing their cosine similarity, thereby improving the generalization capability of the DGFD model. The experimental results validated the effectiveness and superiority of the proposed method on domain generalization-based fault diagnosis tasks.

1. Introduction

Rotating machinery is a critical component of mechanical systems and is widely used in industrial applications [1,2]. As mechanical systems grow more complex and intelligent, unexpected failures of rotating machinery can lead to severe economic losses and even safety accidents. Consequently, rotating machinery fault diagnosis has become increasingly important for ensuring the operational reliability of mechanical systems, as it can significantly reduce safety accidents and downtime [3,4].
With the advancement of artificial intelligence, intelligent fault diagnosis (IFD) has gained widespread attention and has become the mainstream technology for rotating machinery fault diagnosis [5,6]. Many researchers have already integrated an entire IFD system into a bearing [7]. Among IFD methods, deep learning (DL)-based methods are particularly prominent due to their extraordinary nonlinear mapping capability, which allows them to map the raw operating data of rotating machinery to its health condition without manual feature extraction [8,9]. However, DL models typically require a large amount of labeled training data and assume that the test data are independent and identically distributed with respect to the training data at the inference stage. Because the working condition of rotating machinery has a significant influence on data distributions, a DL model must be retrained from scratch whenever the working condition changes to avoid performance degradation, which limits the application of DL-based methods [10,11].
To solve the above problems, researchers introduced transfer learning for rotating machinery fault diagnosis, which includes domain adaptation-based fault diagnosis (DAFD) and domain generalization-based fault diagnosis (DGFD) [12,13,14]. DAFD leverages labeled source-domain datasets to learn fault diagnosis knowledge and transfers it to target-domain fault diagnosis tasks via collaborative training with unlabeled target-domain datasets. Tian et al. [15] proposed a multi-source information transfer learning method for DAFD, which used local maximum mean discrepancy (MMD) for fine-grained local alignment and used the distribution distance to weight source domains. Qian et al. [16] developed a novel distribution discrepancy metric for cross-machine fault diagnosis by combining MMD and CORAL. Huo et al. [17] proposed a novel linear superposition network with pseudo-label learning for DAFD. Li et al. [18] proposed an auto-regulated universal domain adaptation network for universal domain adaptation fault diagnosis, which does not require prior knowledge about the label space of the target domain. DAFD can effectively retrain the model for fault diagnosis tasks under new working conditions without a labeled dataset, which significantly reduces the cost of training. However, DAFD must collect target-domain datasets, and the trained model is only applicable to that target domain; its diagnosis performance still degrades significantly under unseen working conditions.
In practical industrial applications, the working conditions of rotating machinery often need to be adjusted to satisfy manufacturing requirements, and collecting faulty data is expensive and time-consuming [19]. DGFD is proposed to address this concern and to further broaden the applications of DL models. The goal of DGFD is to learn domain-invariant fault diagnosis knowledge from multiple source domains and train a DGFD model that can be effectively applied to fault diagnosis tasks in unseen target domains. Therefore, the trained DGFD model can maintain its performance under unseen working conditions. Recently, DGFD research has made considerable progress, and many works have been published. Li et al. [20] proposed a time-stretching method for domain augmentation and combined it with domain-adversarial training and distance metric learning to learn domain-invariant fault diagnosis knowledge. Zhang et al. [21] proposed conditional generative adversarial networks (CGANs) for bearing DGFD, which used a discriminator that can simultaneously classify the fault type and domain label for domain-adversarial training. Chen et al. [22] proposed adversarial-domain-invariant generalization (ADIG) for bearing fault diagnosis under unseen conditions, in which adversarial learning and feature normalization strategies were leveraged to learn domain-invariant knowledge. Ragab et al. [23] used mutual information to capture shareable fault information and learn domain-independent representations. Li et al. [24] proposed a causal consistency loss and a collaborative training loss to learn consistent causality knowledge. Jia et al. [25] proposed causal disentanglement domain generalization for machine fault diagnosis, which used a structural causal model to disentangle fault-related and domain-related representations. Aiming at imbalanced DGFD, Zhao et al. [26] used a semantic regularization-based mix-up strategy to synthesize samples for minority classes and acquired discriminative knowledge by minimizing the triplet loss. Zhu et al. [27] proposed a decoupled interpretable robust domain generalization network (DIRNet), which used dynamic Shapley values to prune fault-unrelated neural basis functions. Pang [28] maximized the independence between features and domain labels, measured by the Hilbert–Schmidt Independence Criterion (HSIC), to obtain domain-invariant features. Xu et al. [29] proposed a Domain-Private-Suppress Meta-Recognition Network (DPSMR), which can recognize unknown fault types in domain generalization fault diagnosis tasks. Recent studies have also applied transformers and self-supervised learning to DGFD: Lu et al. [30] proposed a prior knowledge-embedded convolutional autoencoder (PKECA), which constructed a centroid-based self-supervised learning strategy to improve the generalization of the model, and Xiao et al. [31] proposed a Bayesian variational transformer that treated all attention weights as latent random variables to train an ensemble of networks for enhancing the generalization of the fault diagnosis model. In the existing literature, most DGFD methods focus on learning domain-invariant representations across source domains, while the generalization of fault classifiers is overlooked. Because the domain-invariant representations are learned by aligning feature distributions, the learned representations are neither strictly domain-invariant nor independent of domain-related features. Therefore, these methods become less effective when the discrepancy between the target and source domains is substantial.
To address these challenges, this paper proposes an episodic training and feature orthogonality-driven domain generalization (EODG) method. This method introduces episodic training between the general modules and the domain-specific modules to improve the generalization capabilities of both the fault classifier and the feature extractor. Specifically, the general feature extractor is paired with domain-specific classifiers, the general classifier is combined with domain-specific feature extractors, and the resulting hybrid models are trained by supervised learning. Via episodic training, the classifier learns to classify features with or without domain information, thereby broadening its decision boundaries. In addition, a novel feature transfer loss is proposed for learning domain-invariant representations. This loss minimizes the distribution discrepancy between same-class samples across different source domains while maximizing the distribution discrepancy between different-class samples. As a result, the intra-class feature distribution becomes more compact, and the inter-class separability is improved. Furthermore, a feature orthogonalization constraint is applied to fault-related and domain-related features to further eliminate domain information.
In EODG, the basic domain generalization capability of the DGFD model is achieved by minimizing the feature transfer loss, whereas the combination of episodic training and feature orthogonality further improves the generalization of both the general feature extractor and the general fault classifier. The main contributions of this study are as follows:
(1)
A novel EODG method is proposed for the DGFD of rotating machinery. The proposed EODG method can effectively diagnose the health state of rotating machinery under unseen working conditions by jointly improving the generalization capabilities of the feature extractor and fault classifier.
(2)
Episodic training is introduced to broaden the decision boundaries of the general fault classifier. The general module is integrated with domain-specific modules, and the hybrid models are trained by supervised learning.
(3)
A feature orthogonalization constraint is combined with the proposed feature transfer loss to train a general feature extractor that extracts domain-invariant features.

2. Materials and Methods

2.1. Problem Formulation

In this paper, the heterogeneous DGFD problem is studied. We consider the source domains $\mathcal{D}^s = \{\mathcal{D}_i^s\}_{i=1}^{n_s} = \{(\mathcal{X}_i^s, P(X_i^s))\}_{i=1}^{n_s}$ and their learning tasks $\mathcal{T}^s = \{\mathcal{T}_i^s\}_{i=1}^{n_s} = \{(\mathcal{Y}_i^s, f_i^s)\}_{i=1}^{n_s}$, as well as the target domains $\mathcal{D}^t = \{\mathcal{D}_i^t\}_{i=1}^{n_t} = \{(\mathcal{X}_i^t, P(X_i^t))\}_{i=1}^{n_t}$ and learning tasks $\mathcal{T}^t = \{\mathcal{T}_i^t\}_{i=1}^{n_t} = \{(\mathcal{Y}_i^t, f_i^t)\}_{i=1}^{n_t}$, where $\mathcal{X}_i^s$ denotes the feature space, $P(X_i^s)$ the marginal distribution, $\mathcal{Y}_i^s$ the label space, and $f_i^s$ the predictive function.
In a heterogeneous DGFD setting, the marginal distributions vary across domains; that is, $P(X_1^s) \neq \cdots \neq P(X_{n_s}^s) \neq P(X_1^t) \neq \cdots \neq P(X_{n_t}^t)$. The label spaces of different source domains can differ; however, each source-domain label space $\mathcal{Y}_i^s$ $(1 \le i \le n_s)$ is required to overlap with that of at least one other source domain $\mathcal{Y}_j^s$ $(1 \le j \neq i \le n_s)$; that is, $\mathcal{Y}_j^s \cap \mathcal{Y}_i^s \neq \varnothing$. The label space of each target domain is a subset of the union of the source-domain label spaces; that is, $\mathcal{Y}_l^t \subseteq \bigcup_{i=1}^{n_s} \mathcal{Y}_i^s$, $l = 1, 2, \dots, n_t$. The aim of DGFD is to train an intelligent fault diagnosis model on source-domain samples such that the trained model can accurately diagnose faults in unseen target domains.
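The label-space requirements above can be checked mechanically with plain set operations; the following sketch (function name and data layout are our own, not from the paper) validates a candidate heterogeneous DGFD setting.

```python
def valid_heterogeneous_setting(source_labels, target_labels):
    # source_labels / target_labels: one set of fault categories per domain.
    union = set().union(*source_labels)
    # Every source label space must overlap at least one other source domain's.
    for i, ys in enumerate(source_labels):
        if not any(ys & yj for j, yj in enumerate(source_labels) if j != i):
            return False
    # Every target label space must lie within the union of the source label spaces.
    return all(yt <= union for yt in target_labels)
```

For example, three source domains sharing only the normal class (label 0) with disjoint fault classes still form a valid setting, as long as every target fault appears in the source union.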

2.2. The Proposed Method

As illustrated in Figure 1, in the proposed episodic training and feature orthogonality-driven domain generalization (EODG) method, the general feature extractor $G$ and the general fault classifier $C$ are combined into the final DGFD model $C(G(\cdot))$, whereas the domain classification model $D(G_d(\cdot))$, which consists of a domain feature extractor $G_d$ and a domain classifier $D$, and the domain-specific fault diagnosis models $C_i^s(G_i^s(\cdot))$, which consist of domain-specific feature extractors $G_i^s$ and domain-specific fault classifiers $C_i^s$ $(i = 1, 2, \dots, n_s)$, are used to facilitate the learning of domain-invariant fault diagnosis knowledge. The learning procedure of EODG consists of three parts: (1) supervised learning; (2) domain-invariant representation learning; and (3) episodic training. Supervised learning enables the DGFD model, the domain-specific fault diagnosis models, and the domain classification model to perform their respective fundamental tasks effectively. Domain-invariant representation learning enables $G$ to extract domain-invariant features. Finally, episodic training enhances the generalization capabilities of both $C$ and $G$.

2.2.1. Supervised Learning

Supervised learning is applied to train these models to acquire basic abilities on their respective classification tasks. Via supervised learning, G learns to extract fault-related features across multiple source domains, C learns to diagnose the health conditions of machinery in these domains, G d learns to extract domain-related features from multiple source-domain data, D learns to classify their domain label, G i s learns to extract fault-related features from specific source domains, and C i s learns to diagnose the health conditions of machinery in specific source domains.
In this study, cross-entropy loss is used for all supervised learning tasks; the supervised learning losses for the DGFD model ($\mathcal{L}_S$), the domain classification model ($\mathcal{L}_{SD}$), and the domain-specific fault diagnosis models ($\mathcal{L}_{S_i}$) are defined as follows:

$\mathcal{L}_S = -\mathbb{E}\left[ y^s \log \hat{y}^s \right]$ (1)

$\mathcal{L}_{SD} = -\mathbb{E}\left[ d^s \log \hat{d}^s \right]$ (2)

$\mathcal{L}_{S_i} = -\mathbb{E}\left[ y_i^s \log \hat{y}_i^s \right], \quad i = 1, 2, \dots, n_s$ (3)

where $n_s$ is the number of source domains; $y^s$ is the fault label of the source-domain sample $x^s$; $\hat{y}^s = C(G(x^s))$ is the output of the DGFD model; $d^s$ is the domain label of $x^s$; $\hat{d}^s = D(G_d(x^s))$ is the output of the domain classification model; $y_i^s$ is the fault label of the $i$-th source-domain sample $x_i^s$; and $\hat{y}_i^s = C_i^s(G_i^s(x_i^s))$ is the output of the $i$-th source-domain-specific model.
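The three cross-entropy objectives above can be sketched in PyTorch as follows; the module names mirror the paper's notation, but the call signature and placeholder modules are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()  # standard cross-entropy over class logits

def supervised_losses(G, C, G_d, D, specific, x_s, y_s, d_s, domain_batches):
    """Return L_S, L_SD, and the list of per-domain losses L_Si."""
    L_S = ce(C(G(x_s)), y_s)        # fault classification on pooled source data
    L_SD = ce(D(G_d(x_s)), d_s)     # domain-label classification
    # specific: list of (G_i, C_i) pairs; domain_batches: list of (x_i, y_i)
    L_Si = [ce(C_i(G_i(x_i)), y_i)
            for (G_i, C_i), (x_i, y_i) in zip(specific, domain_batches)]
    return L_S, L_SD, L_Si
```

Any modules with matching input/output sizes can be plugged in for $G$, $C$, $G_d$, $D$, and the domain-specific pairs.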

2.2.2. Domain-Invariant Representation Learning

In EODG, a feature orthogonalization constraint and a feature transfer loss are combined to learn domain-invariant representations. Feature orthogonalization encourages orthogonality between fault-related and domain-related features by minimizing their cosine similarity. To align the feature distributions of same-class samples across different domains, and to separate the feature distributions of different-class samples, a novel feature transfer loss is proposed based on MMD [32]. The feature orthogonalization loss $\mathcal{L}_{FO}$ and the feature transfer loss $\mathcal{L}_{FT}$ are defined as follows:

$\mathcal{L}_{FO} = \mathbb{E}\left[ cs\left( G(x^s), G_d(x^s) \right) \right]$ (4)

$\mathcal{L}_{FT} = \mathbb{E}_{i,j,c}\left[ \mathrm{MMD}^2\left( Z_{i,c}^s, Z_{j,c}^s \right) \right] + \mathbb{E}_{i,c}\left[ \mathrm{MMD}^2\left( Z_{i,c}^s, \bar{Z}_c^s \right) \right] - \mathbb{E}_{c,\hat{c}}\left[ \mathrm{MMD}^2\left( Z_c^s, Z_{\hat{c}}^s \right) \right] - \mathbb{E}_c\left[ \mathrm{MMD}^2\left( Z_c^s, \bar{Z}^s \right) \right]$ (5)

where $cs(\cdot,\cdot)$ denotes the cosine similarity function; $\mathrm{MMD}(\cdot,\cdot)$ denotes the maximum mean discrepancy; $Z_{i,c}^s = G(X_{i,c}^s)$ and $Z_{j,c}^s = G(X_{j,c}^s)$ are the features of datasets $X_{i,c}^s$ and $X_{j,c}^s$, which contain all $c$-th-category samples of the $i$-th and $j$-th source domains, respectively; $Z_c^s = G(X_c^s)$ is the feature set of dataset $X_c^s$, which contains all samples of the $c$-th category; $\bar{Z}_c^s = \mathbb{E}_{z_c^s \sim Z_c^s}\left[ z_c^s \right]$ is the feature center of the $c$-th category; $\bar{Z}^s = \mathbb{E}\left[ G(x^s) \right]$ is the feature center of all samples; $i = 1, 2, \dots, n_s - 1$; $j = i+1, i+2, \dots, n_s$; $c = 1, 2, \dots, n_c - 1$; $\hat{c} = c+1, c+2, \dots, n_c$; and $n_c$ is the number of categories.
It can be seen from Equation (5) that by minimizing L F T , the feature distributions of samples that have the same fault label but different domain labels are aligned, and all feature distributions of same-class samples are aligned with their feature center; meanwhile, the feature distributions of different categories are separated, and the feature distributions of each category are separated from the feature center of all samples.
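The two losses can be sketched as follows. For brevity, the sketch uses a linear-kernel MMD (the squared distance between feature means); the paper's kernel choice is not reproduced here, so that simplification, like the nested-list data layout, is our own assumption.

```python
import torch
import torch.nn.functional as F

def mmd2(Za, Zb):
    # Squared MMD with a linear kernel: ||mean(Za) - mean(Zb)||^2.
    return (Za.mean(0) - Zb.mean(0)).pow(2).sum()

def feature_orthogonalization_loss(z_fault, z_domain):
    # L_FO: mean cosine similarity between fault- and domain-related features.
    return F.cosine_similarity(z_fault, z_domain, dim=1).mean()

def feature_transfer_loss(feats):
    # feats[i][c]: features of class c extracted from source domain i.
    n_s, n_c = len(feats), len(feats[0])
    cls = [torch.cat([feats[i][c] for i in range(n_s)]) for c in range(n_c)]
    centers = [z.mean(0, keepdim=True) for z in cls]        # per-class centers
    global_center = torch.cat(cls).mean(0, keepdim=True)    # center of all samples
    # Align same-class features across domains and pull them to their center ...
    align = sum(mmd2(feats[i][c], feats[j][c]) for c in range(n_c)
                for i in range(n_s - 1) for j in range(i + 1, n_s))
    to_center = sum(mmd2(feats[i][c], centers[c])
                    for c in range(n_c) for i in range(n_s))
    # ... while pushing different classes apart and away from the global center.
    sep = sum(mmd2(cls[c], cls[k]) for c in range(n_c - 1)
              for k in range(c + 1, n_c))
    from_global = sum(mmd2(cls[c], global_center) for c in range(n_c))
    return align + to_center - sep - from_global
```

With perfectly aligned, well-separated classes the alignment terms vanish and the loss becomes negative, which is the direction minimization drives it.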

2.2.3. Episodic Training

Most existing DGFD methods focus on training feature extractors to extract domain-invariant features, while the generalization capability of the fault classifier is overlooked. In EODG, episodic training [33] is applied to improve the generalization capabilities of both the general feature extractor $G$ and the general fault classifier $C$. In episodic training, general modules and domain-specific modules are combined to form hybrid models, which are trained in a supervised manner. As shown in Figure 2, in the case of two source domains, the $i$-th and $j$-th domain-specific feature extractors are combined with the general fault classifier to form the hybrid models $C(G_i^s(\cdot))$ and $C(G_j^s(\cdot))$, respectively; meanwhile, the $i$-th and $j$-th domain-specific fault classifiers are combined with the general feature extractor to form the hybrid models $C_i^s(G(\cdot))$ and $C_j^s(G(\cdot))$. Then, labeled samples from the other domains are used for the supervised training of each hybrid model. As a result, the trained $G$ can extract general features from samples of other domains that can be classified by a domain-specific fault classifier, and the trained $C$ can classify features extracted by a domain-specific feature extractor. Therefore, the generalization capabilities of $G$ and $C$ are effectively improved.
To meet the requirements of a heterogeneous DGFD setting, only shared-category samples from other domains are used for the supervised training of each domain-specific hybrid model in EODG. The loss of episodic training is defined as follows:
$\mathcal{L}_{ET} = \mathcal{L}_{ET,G} + \mathcal{L}_{ET,C} = -\mathbb{E}_{i,j,x_{ij}^s}\left[ y_{ij}^s \log C_i^s\left( G\left( x_{ij}^s \right) \right) \right] - \mathbb{E}_{i,j,x_{ij}^s}\left[ y_{ij}^s \log C\left( G_i^s\left( x_{ij}^s \right) \right) \right]$ (6)

where $x_{ij}^s$ is a sample of the $i$-th source domain drawn from the categories shared between the $i$-th and $j$-th source domains, and $y_{ij}^s$ is the fault label of $x_{ij}^s$, with $i = 1, 2, \dots, n_s - 1$ and $j = i+1, i+2, \dots, n_s$.
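A minimal sketch of the episodic training loss above, assuming a hypothetical `shared_batches[i][j]` layout that holds samples of domain i restricted to the categories shared with domain j:

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()

def episodic_loss(G, C, specific, shared_batches):
    # specific: list of frozen (G_i, C_i) pairs, one per source domain.
    # shared_batches[i][j]: (x, y) from domain i, restricted to the categories
    # shared between domains i and j (None when i == j or no overlap exists).
    loss_G, loss_C, n = 0.0, 0.0, 0
    for i, (G_i, C_i) in enumerate(specific):
        for j in range(len(specific)):
            if j == i or shared_batches[i][j] is None:
                continue
            x, y = shared_batches[i][j]
            loss_G = loss_G + ce(C_i(G(x)), y)   # general extractor, specific classifier
            loss_C = loss_C + ce(C(G_i(x)), y)   # specific extractor, general classifier
            n += 1
    return (loss_G + loss_C) / max(n, 1)
```

Averaging over the domain pairs keeps the loss scale independent of the number of source domains; whether the paper averages or sums is not specified, so the division is a design choice of this sketch.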

2.3. Diagnosis Procedures

In EODG, supervised learning, domain-invariant representation learning, and episodic training are used for model training. During the training procedure, the losses for the general feature extractor $G$, the general fault classifier $C$, the domain feature extractor $G_d$, the domain classifier $D$, the domain-specific feature extractors $G_i^s$, and the domain-specific fault classifiers $C_i^s$ are defined as follows:
$\mathcal{L}_G = \mathcal{L}_S + \alpha \mathcal{L}_{FO} + \beta \mathcal{L}_{FT} + \lambda \mathcal{L}_{ET,G}$ (7)

$\mathcal{L}_C = \mathcal{L}_S + \lambda \mathcal{L}_{ET,C}$ (8)

$\mathcal{L}_{G_d} = \mathcal{L}_{SD} + \alpha \mathcal{L}_{FO}$ (9)

$\mathcal{L}_D = \mathcal{L}_{SD}$ (10)

$\mathcal{L}_{G_i^s} = \mathcal{L}_{S_i}, \quad i = 1, 2, \dots, n_s$ (11)

$\mathcal{L}_{C_i^s} = \mathcal{L}_{S_i}, \quad i = 1, 2, \dots, n_s$ (12)

where $\alpha$, $\beta$, and $\lambda$ are tradeoff parameters.
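Assembling the per-module objectives above is pure bookkeeping once the individual loss terms are computed; the following sketch (dict layout and names are our own) combines them with the tradeoff parameters passed as keyword arguments.

```python
def module_objectives(parts, alpha=1.0, beta=4.0, lam=4.0):
    # parts: precomputed scalar loss terms, keyed by the paper's symbols.
    return {
        'G':  parts['L_S'] + alpha * parts['L_FO'] + beta * parts['L_FT'] + lam * parts['L_ET_G'],
        'C':  parts['L_S'] + lam * parts['L_ET_C'],
        'Gd': parts['L_SD'] + alpha * parts['L_FO'],
        'D':  parts['L_SD'],
        'Gi': list(parts['L_Si']),   # one supervised loss per domain-specific extractor
        'Ci': list(parts['L_Si']),   # and per domain-specific classifier
    }
```

The same code works whether the values are Python floats or autograd tensors, so each returned entry can be backpropagated through its own module.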
The procedures of the proposed EODG method are presented in Figure 3, and summarized as follows:
Step 1: Collect vibration signals from rotating machinery and partition them into labeled source-domain signals for model training and unseen target-domain signals for model evaluation. Then, segment and standardize these signals to form labeled source-domain datasets $\{\mathcal{D}_i^s\}_{i=1}^{n_s}$ and testing datasets $\{\mathcal{D}_i^t\}_{i=1}^{n_t}$.
Step 2: Construct a general feature extractor $G$, a general fault classifier $C$, a domain feature extractor $G_d$, a domain classifier $D$, domain-specific feature extractors $\{G_i^s\}_{i=1}^{n_s}$, and domain-specific fault classifiers $\{C_i^s\}_{i=1}^{n_s}$. Pre-train these modules via supervised learning.
Step 3: Sample a batch of training data $\{B_i^s\}_{i=1}^{n_s}$ from $\{\mathcal{D}_i^s\}_{i=1}^{n_s}$. Train $\{G_i^s\}_{i=1}^{n_s}$ and $\{C_i^s\}_{i=1}^{n_s}$. Then, freeze the parameters of $\{G_i^s\}_{i=1}^{n_s}$ and $\{C_i^s\}_{i=1}^{n_s}$, and train $G$, $C$, $G_d$, and $D$.
Step 4: Repeat Step 3 until the labeled source-domain datasets $\{\mathcal{D}_i^s\}_{i=1}^{n_s}$ are traversed.
Step 5: Repeat Steps 3 and 4 until the preset maximum number of epochs is reached.
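Steps 3 and 4 can be sketched as a training loop that alternates between updating the domain-specific models and, with their parameters frozen, updating the general modules; all module names and the `loss_fn` hook are schematic, not the authors' implementation.

```python
import torch
import torch.nn as nn

def train_epoch(loaders, G, C, G_d, D, specific, opts, loss_fn):
    # One epoch of Steps 3-4: per batch, first update the domain-specific
    # models, then freeze them and update the general modules G, C, G_d, D.
    for batches in zip(*loaders):                      # one batch per source domain
        # (1) update each domain-specific model on its own domain
        for (G_i, C_i), (x_i, y_i), opt_i in zip(specific, batches, opts['specific']):
            opt_i.zero_grad()
            nn.functional.cross_entropy(C_i(G_i(x_i)), y_i).backward()
            opt_i.step()
        # (2) freeze the specific models, update the general modules
        for G_i, C_i in specific:
            for p in list(G_i.parameters()) + list(C_i.parameters()):
                p.requires_grad_(False)
        opts['general'].zero_grad()
        loss_fn(batches, G, C, G_d, D, specific).backward()
        opts['general'].step()
        for G_i, C_i in specific:                      # unfreeze for the next batch
            for p in list(G_i.parameters()) + list(C_i.parameters()):
                p.requires_grad_(True)
```

`loss_fn` stands in for the combined objectives of the general modules; any callable returning a scalar tensor fits the hook.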

3. Experimental Study

3.1. Datasets Description

In this study, a well-known public HUST bearing dataset and a CNC bearing dataset are used for the verification of the proposed method. Detailed descriptions of these datasets are provided below.

3.1.1. Huazhong University of Science and Technology (HUST) Bearing Dataset

The HUST bearing dataset was provided by Zhao et al. [34]. The test rig of this dataset is shown in Figure 4, and it consists of the following: 1. speed control; 2. a motor; 3. a shaft; 4. an accelerometer; 5. a bearing; and 6. a data acquisition board.
This dataset contains a normal state and four types of failure state, and each failure state has two severity levels. Vibration signals are collected at a sampling rate of 25.6 kHz. The data on the five types of health state (normal, inner-race fault, outer-race fault, ball fault, and inner- and outer-race combination fault) under six rotating speeds (20 Hz, 25 Hz, 30 Hz, 35 Hz, 40 Hz, and varying speeds (0–40–0 Hz)) are selected to evaluate the proposed method. The details of the HUST bearing dataset are listed in Table 1.
The test bearing used in the HUST dataset is a deep groove ball bearing, Rexnord ER16K, and its detailed specifications are listed in Table 2.

3.1.2. CNC Bearing Dataset

The test rig of the CNC bearing dataset is shown in Figure 5. The spindle of the CNC machine is supported by four rolling bearings, and three types of faults (inner-race fault, outer-race fault, and cage fault) are introduced to the third bearing (marked in red). The experimental setup involved cutting aluminum materials under seven spindle speeds, with a feed rate of 2500 mm/min, a cutting depth of 0.1 mm, and a cutting width of 3 mm. Vibration data were acquired using an accelerometer mounted on the bearing housing at a sampling rate of 25.6 kHz. The details of the CNC bearing dataset are listed in Table 3.
The test bearing used in the CNC dataset is an angular contact ball bearing, NSK 40BNR10, and its detailed specifications are listed in Table 4.

3.2. Implementation Details

3.2.1. Network Structure and Hyperparameters

The network structures of the general feature extractor G , the general fault classifier C , the domain feature extractor G d , the domain classifier D , domain-specific feature extractors G i s and domain-specific fault classifiers C i s are listed in Table 5. The structure of ResBlock is shown in Figure 6, where Ch represents the output channel, and W represents the convolutional kernel size.
As can be seen from Table 5, $G$, $G_d$, and $G_i^s$ share the same network structure. The distinction between $C$, $D$, and $C_i^s$ lies in their output layers, where the output sizes of $C$ and $C_i^s$ are determined by the number of classes $n_c$, and the output size of $D$ is determined by the number of source domains $n_s$.
In this study, the negative slope of Leaky_ReLU is set to 0.1, and the Adam optimizer is used for training with a learning rate of 0.004. The model is trained with a batch size of 64 for 128 epochs. The tradeoff parameters are set as follows: $\alpha = 1$, $\beta = 4$, and $\lambda = 4$.
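Since the exact layer ordering of the ResBlock in Figure 6 is not reproduced here, the following is a generic 1-D residual block sketch consistent with the stated conventions (Ch = output channels, W = kernel size, Leaky_ReLU with negative slope 0.1); the concrete layer order is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Generic 1-D residual block: conv-BN-activation twice, plus a skip path."""
    def __init__(self, in_ch, ch, w):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, ch, w, padding=w // 2),   # W = kernel size
            nn.BatchNorm1d(ch),
            nn.LeakyReLU(0.1),                         # negative slope 0.1
            nn.Conv1d(ch, ch, w, padding=w // 2),
            nn.BatchNorm1d(ch),
        )
        # 1x1 conv on the skip path when the channel count changes.
        self.skip = nn.Conv1d(in_ch, ch, 1) if in_ch != ch else nn.Identity()
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))
```

With an odd kernel size, the `w // 2` padding keeps the sequence length unchanged, so blocks can be stacked freely.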

3.2.2. Experimental Setting

To evaluate the effectiveness of the proposed method, the HUST bearing dataset and the CNC bearing dataset are used for multiple-source-domain generalization fault diagnosis experiments; the corresponding experimental settings are listed in Table 6 and Table 7, respectively. In this paper, the domain shifts between source and target domains arise from variations in rotating speed, to which the fault characteristic frequencies are directly proportional. In the HUST bearing dataset, each category has 100 samples, with a sample length of 2048 data points. In the CNC bearing dataset, each category has 200 samples, with the same sample length of 2048 data points.
For each task, three speeds are randomly selected, and the corresponding domains are designated as source domains, whereas the remaining domains serve as target domains. To fulfill the requirements of a heterogeneous setting, the fault types differ across source domains. Specifically, all domains contain the normal state, as normal-state data are easy to collect. In addition, each target fault type is required to appear in at least two different source domains so that consistent representations can be learned across domains. To verify the model's DGFD performance for each fault type, all target domains share the same fault types, which cover all fault types occurring in the source domains.
Datasets of each domain are identified by their working conditions and fault categories. In Table 6, the working condition is represented by the rotating speed in Hz, where 20 denotes 20 Hz, 25 denotes 25 Hz, and so on; VS denotes varying speed (0–40–0 Hz). The definitions of the fault codes can be found in Table 1. In Table 7, the working condition is represented by the spindle rotation speed in rpm, where 6k denotes 6000 rpm, 7k denotes 7000 rpm, etc. The definitions of the fault codes are listed in Table 3. The dataset of the first source domain (S1) of the first HUST task (H1) is represented by 20 (N IRF BF ComF), where 20 indicates that the test rig is operating at a speed of 20 Hz, and (N IRF BF ComF) refers to the inclusion of four types of data (normal, inner-race fault, ball fault, and inner- and outer-race combination fault). For target domains, all datasets share the same fault categories and are represented uniformly; for example, the target domains of H1 are represented by 35 40 VS (N IRF ORF BF ComF), where 35 40 VS indicates the rotating speeds of the three target domains, each containing five types of data (normal, inner-race fault, outer-race fault, ball fault, and inner- and outer-race combination fault).

3.3. Benchmarked Approaches

To evaluate the effectiveness of the proposed method, five methods are used for comparison.
(1) Convolutional Neural Networks (CNNs): All source-domain datasets are combined and used to train the model in a supervised learning manner.
(2) Conditional generative adversarial networks (CGANs) [21]: CGANs use a discriminator that can simultaneously classify the fault type and domain label for domain-adversarial training.
(3) Adversarial-Domain-Invariant Generalization (ADIG) [22]: In ADIG, the reshaped two-dimensional frequency spectrum is used as the input of the model, and adversarial learning and feature normalization strategies are leveraged to learn domain-invariant knowledge.
(4) Conditional Contrastive Domain Generalization (CCDG) [23]: CCDG uses mutual information to capture shareable fault information and learn domain-independent representations for rotary machine fault diagnosis in unseen domains.
(5) Causal Consistency Networks (CCNs) [24]: CCNs use the proposed causal consistency loss and collaborative training loss to learn consistent causality knowledge for bearing domain generalization fault diagnosis.

3.4. Results and Discussion

In this study, all experiments were implemented on an NVIDIA TITAN V GPU (NVIDIA Corporation, Santa Clara, CA, USA) with the PyTorch 2.6.0 framework. The diagnostic accuracy, defined as the ratio of the number of correctly predicted test samples to the total number of test samples, is adopted as the evaluation metric. Each task is repeated ten times to reduce randomness.
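The evaluation metric reduces to a ratio, with the mean and standard deviation taken over the repeated trials; a minimal sketch (helper names are our own):

```python
from statistics import mean, stdev

def accuracy(preds, labels):
    # Diagnostic accuracy: correctly predicted test samples / total test samples.
    assert len(preds) == len(labels)
    return sum(int(p == t) for p, t in zip(preds, labels)) / len(labels)

def summarize(trial_accuracies):
    # Mean and sample standard deviation over repeated trials (ten per task here).
    return mean(trial_accuracies), stdev(trial_accuracies)
```

This is what the "mean ± std over ten trials" entries in the result tables correspond to.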

3.4.1. Experimental Results of HUST Bearing Dataset

The diagnosis results of the HUST bearing dataset are presented in Table 8, which includes the mean and standard deviation of the accuracies of ten trials for each task. Specifically, the final row (Avg.) shows the average performance across all tasks. To further illustrate these results, Figure 7 shows the accuracy curve of each target domain, along with the average accuracy curve of all target domains, while Figure 8 presents a histogram of the diagnostic accuracy and corresponding standard deviations.
CNNs achieve the lowest overall accuracy of 73.06%, which reveals the limitations of traditional supervised learning when applied to DGFD tasks. Among the DGFD methods, CGANs and ADIG are adversarial methods, whereas CCDG, CCNs, and the proposed EODG method are non-adversarial methods. ADIG achieves the second highest overall accuracy of 83.74%, significantly outperforming CGANs, CCDG, and CCNs. The superior performance of ADIG over CGANs indicates that incorporating frequency-spectrum inputs and feature normalization strategies can enhance generalization.
Among these methods, the proposed EODG method achieves the highest overall accuracy of 87.47%, outperforming all other benchmark methods. EODG consistently achieves superior performance across nearly all tasks and achieves the highest overall accuracy. The only exception is Task H2, where its accuracy (80.65%) is marginally lower (by 0.55%) than the best-performing method (ADIG, 81.20%). As illustrated in Figure 7, EODG demonstrates consistently strong performance across individual target domains. Figure 8 shows that EODG achieved the best overall performance with relatively small standard deviations.
In summary, the results on the HUST bearing dataset demonstrate the effectiveness and superiority of the proposed EODG method in DGFD tasks. The combination of episodic training (EPI), the feature transfer (FT) constraint, and the feature orthogonalization (FO) constraint can significantly improve the generalization of the intelligent fault diagnosis model.

3.4.2. Experimental Results of CNC Bearing Dataset

The results on the CNC bearing dataset are shown in Table 9, and they are similar to those on the HUST bearing dataset: CNNs achieve the lowest overall accuracy of 70.84%. The overall accuracies of CGANs, CCDG, and CCNs are close to one another, each slightly outperforming CNNs. ADIG achieves the second highest overall accuracy of 80.88% and, in some tasks, shows higher accuracy than the proposed EODG method.
EODG achieves the highest overall accuracy of 85.56%, significantly outperforming other methods. EODG achieves the highest accuracy in most individual tasks, except C3, C5, and C6, in which its accuracy falls short of the best-performing method by no more than 1.5%. Figure 9 and Figure 10 show the accuracy curves and histogram of the experimental results, respectively. In these figures, the EODG method shows the best diagnosis performance in most individual target domains and shows the best overall performance with relatively small standard deviations, demonstrating the effectiveness, superiority and robustness of the proposed method.

3.5. Feature Visualization

To further evaluate the effectiveness of the proposed method, t-distributed stochastic neighbor embedding (t-SNE) is used for feature visualization [35]. Because the output features of the feature extractor are used for domain-invariant representation learning for all methods except CNNs, feature visualization is conducted on these features. Task C7 is selected for the visualization.
Figure 11 presents the feature visualization results, where each legend entry consists of the fault label and the domain label; for example, "IRF(T)" denotes inner-race fault samples from the target domain. It can be seen from Figure 11 that CNNs, CGANs, and CCNs exhibit category confusion in the extracted features. ADIG exhibits good inter-class discrimination, with almost no category confusion; however, its inter-domain integration is poor, as only a few target-domain samples are integrated with the corresponding source-domain samples of the same category. CCDG shows good inter-class discrimination and better inter-domain integration than ADIG. Among these methods, EODG exhibits the best domain-invariant feature extraction ability, with clear inter-class boundaries and excellent inter-domain integration: the target-domain samples are clustered near the corresponding source-domain samples of the same category.
The feature visualization results demonstrate that EODG can effectively extract domain-invariant features, even in the presence of unseen working conditions.
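As a rough illustration of the visualization step above, the following sketch embeds a feature matrix with scikit-learn's t-SNE. The 64-dimensional random features are hypothetical stand-ins for the feature extractor's outputs; the feature dimension and perplexity are assumptions, not values from the paper.

```python
# Minimal t-SNE visualization sketch. Replace the random matrix with real
# extractor features and color each point by its fault/domain label
# (e.g. "IRF(T)") to obtain a Figure-11-style scatter plot.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))  # stand-in for extractor outputs

# Project the high-dimensional features to 2-D for scatter plotting.
embedding = TSNE(n_components=2, perplexity=30.0,
                 random_state=0).fit_transform(features)
print(embedding.shape)  # -> (200, 2): one 2-D coordinate per sample
```

In practice the embedding would be passed to a scatter plot, with marker color encoding the fault class and marker shape encoding source versus target domain.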

3.6. Ablation Study

In this section, the influences of FO, EPI, and FT on model performance are analyzed. Six variants of EODG are used for the ablation study: (1) FO: trained with supervised learning and the feature orthogonalization constraint; (2) EPI: trained with supervised learning and episodic training; (3) FT: trained with supervised learning and the feature transfer constraint; (4) EPI + FO: trained with supervised learning, episodic training, and the feature orthogonalization constraint; (5) FT + FO: trained with supervised learning, the feature transfer constraint, and the feature orthogonalization constraint; (6) FT + EPI: trained with supervised learning, the feature transfer constraint, and episodic training.
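For context on the FT variant: feature transfer constraints for domain-invariant learning build on distribution-distance penalties such as the kernel maximum mean discrepancy (MMD) of Gretton et al. [32]. The following NumPy sketch shows a biased Gaussian-kernel MMD² estimate; it is not the paper's exact loss, and the kernel bandwidth heuristic is an assumption.

```python
# Hedged sketch: biased Gaussian-kernel MMD^2 estimate between two feature
# batches, the classical distance underlying many feature transfer penalties.
import numpy as np

def mmd2_rbf(x, y, gamma=None):
    """MMD^2 with RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    if gamma is None:
        gamma = 1.0 / x.shape[1]  # simple bandwidth heuristic (assumption)

    def kernel(a, b):
        sq = (np.sum(a**2, axis=1)[:, None]
              + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T)
        return np.exp(-gamma * sq)

    return float(kernel(x, x).mean() + kernel(y, y).mean()
                 - 2.0 * kernel(x, y).mean())

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 8))
same = rng.normal(size=(100, 8))          # same distribution -> small MMD^2
shifted = rng.normal(size=(100, 8)) + 3.0  # shifted distribution -> large MMD^2
print(mmd2_rbf(src, same) < mmd2_rbf(src, shifted))  # -> True
```

Minimizing such a distance between source-domain feature batches encourages the extractor to produce distributions that overlap, which is the intuition behind the FT variant's domain-invariant representations.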
The results of the ablation study are listed in Table 10 and shown in Figure 12. They indicate a contribution ranking of FT (78.47%) > EPI (76.76%) > FO (74.70%). The combination of FT and EPI achieves the second-highest overall accuracy of 79.45%, higher than either variant alone. EPI + FO likewise achieves higher overall accuracy than EPI or FO individually. FT + FO performs slightly better than FO but slightly worse than FT. EODG, which combines FO, EPI, and FT, outperforms all six variants in every individual task, demonstrating the effectiveness of the combination.
In Figure 12, EODG shows the highest accuracy in every task and the smallest standard deviation in almost all tasks. The ablation results demonstrate that FO, EPI, and FT each effectively improve the generalization of the DGFD model. Furthermore, FT gives the model the basic ability to perform DGFD by learning domain-invariant representations, while the addition of EPI and FO further boosts the DGFD performance of the intelligent fault diagnosis model. In industrial practice, rotating machinery is often required to operate under varying working conditions. The proposed EODG method trains a DGFD model that mitigates the performance degradation caused by changes in working conditions, thereby broadening the practical applications of intelligent fault diagnosis.
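The FO constraint penalizes the cosine similarity between fault-related and domain-related features. The sketch below illustrates such a penalty in NumPy; the function name and the squared form are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a feature-orthogonality penalty: mean squared cosine
# similarity between paired fault-related and domain-related feature vectors.
import numpy as np

def orthogonality_loss(fault_feats, domain_feats, eps=1e-12):
    """Approaches 0 when every vector pair is orthogonal, 1 when parallel."""
    dots = np.sum(fault_feats * domain_feats, axis=1)
    norms = (np.linalg.norm(fault_feats, axis=1)
             * np.linalg.norm(domain_feats, axis=1) + eps)
    return float(np.mean((dots / norms) ** 2))

fault = np.array([[1.0, 0.0], [0.0, 2.0]])
domain = np.array([[0.0, 3.0], [5.0, 0.0]])  # orthogonal to fault features
print(orthogonality_loss(fault, domain))      # -> 0.0
print(orthogonality_loss(fault, fault))       # ~ 1.0 (fully aligned)
```

Driving this quantity toward zero during training encourages the fault-related representation to carry no domain-related direction, which is the intuition behind FO's contribution in the ablation study.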

4. Conclusions

This paper proposed EODG, a domain generalization fault diagnosis method that aims to improve the performance of intelligent fault diagnosis models under unseen working conditions. In EODG, the proposed feature transfer loss gives the model the basic ability to perform domain generalization fault diagnosis, while the combination of episodic training and feature orthogonality further improves the generalization of both the general feature extractor and the general fault classifier. The model is trained using only labeled data from the source domains, and samples from unseen target domains are used to evaluate domain generalization performance. Two bearing fault diagnosis datasets were used to evaluate EODG; it achieved the highest overall accuracy and the highest accuracy in almost all individual tasks. The experimental results of the comparative study, the feature visualization, and the ablation study demonstrate that EODG can significantly improve the performance of intelligent fault diagnosis models under unseen working conditions by taking advantage of the feature orthogonalization constraint, episodic training, and the feature transfer constraint.

Author Contributions

Conceptualization, Y.L. (Yixiao Liao) and S.Z.; methodology, Y.L. (Yixiao Liao); software, J.L.; validation, L.Z. and C.L.; data curation, J.L.; writing—original draft preparation, Y.L. (Yixiao Liao) and K.P.; writing—review and editing, Y.L. (Yisen Liu); supervision, S.Z.; project administration, S.Z. and Y.L. (Yixiao Liao); funding acquisition, Y.L. (Yixiao Liao) and S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the GDAS’ Project of Science and Technology Development (grant number 2022 GDASZH 2022010108).

Data Availability Statement

The data used in this study are available from the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Misbah, I.; Lee, C.K.M.; Keung, K.L. Fault diagnosis in rotating machines based on transfer learning: Literature review. Knowl.-Based Syst. 2024, 283, 111158. [Google Scholar] [CrossRef]
  2. Lin, H.; Huang, X.; Chen, Z.; He, G.; Xi, C.; Li, W. Matching pursuit network: An interpretable sparse time–frequency representation method toward mechanical fault diagnosis. IEEE Trans. Neural Netw. Learn. Syst. 2024, 1–12. [Google Scholar] [CrossRef] [PubMed]
  3. Yang, X.; He, G.; Ding, K.; Li, Y.; Ding, X.; Li, W. A novel optimization demodulation method for gear fault vibration overmodulation signal and its application to fault diagnosis. IEEE Trans. Instrum. Meas. 2023, 72, 3517812. [Google Scholar] [CrossRef]
  4. Tang, S.; Ma, J.; Yan, Z.; Zhu, Y.; Khoo, B.C. Deep transfer learning strategy in intelligent fault diagnosis of rotating machinery. Eng. Appl. Artif. Intell. 2024, 134, 108678. [Google Scholar] [CrossRef]
  5. Li, J.; Yue, K.; Chen, Z.; Xia, J.; Li, W.; Zhang, X. An Uncertainty-Aware Continual Learning Framework for Fault Diagnosis of Rotating Machinery With Homogeneous-Heterogeneous Faults. IEEE Trans. Autom. Sci. Eng. 2024, 1–15. [Google Scholar] [CrossRef]
  6. Yuan, B.; Lei, L.; Chen, S. Optimized Variational Mode Decomposition and Convolutional Block Attention Module-Enhanced Hybrid Network for Bearing Fault Diagnosis. Machines 2025, 13, 320. [Google Scholar] [CrossRef]
  7. Wang, S.; Zhang, X.; Ma, T.; Kong, Y.; Gao, S.; Han, Q. Symmetrical Triboelectric In Situ Self-Powered Sensing and Fault Diagnosis for Double-Row Tapered Roller Bearings in Wind Turbines: An Integrated and Real-Time Approach. Adv. Sci. 2025, 12, 2500981. [Google Scholar] [CrossRef]
  8. Ren, Z.; Lin, T.; Feng, K.; Zhu, Y.; Liu, Z.; Yan, K. A systematic review on imbalanced learning methods in intelligent fault diagnosis. IEEE Trans. Instrum. Meas. 2023, 72, 3508535. [Google Scholar] [CrossRef]
  9. Dai, J.; Tian, L.; Chang, H. An Intelligent Diagnostic Method for Wear Depth of Sliding Bearings Based on MGCNN. Machines 2024, 12, 266. [Google Scholar] [CrossRef]
  10. Qian, Q.; Zhang, B.; Li, C.; Mao, Y.; Qin, Y. Federated transfer learning for machinery fault diagnosis: A comprehensive review of technique and application. Mech. Syst. Signal Process. 2025, 223, 111837. [Google Scholar] [CrossRef]
  11. Wang, J.; Yang, S.; Liu, Y.; Wen, G. Deep subdomain transfer learning with spatial attention ConvLSTM network for fault diagnosis of wheelset bearing in high-speed trains. Machines 2023, 11, 304. [Google Scholar] [CrossRef]
  12. Xiao, Y.; Shao, H.; Yan, S.; Wang, J.; Peng, Y.; Liu, B. Domain generalization for rotating machinery fault diagnosis: A survey. Adv. Eng. Inform. 2025, 64, 103063. [Google Scholar] [CrossRef]
  13. Davoodabadi, A.; Behzad, M.; Arghand, H.A.; Mohammadi, S.; Gelman, L. Intelligent Diagnosis of Rolling Element Bearings Under Various Operating Conditions Using an Enhanced Envelope Technique and Transfer Learning. Machines 2025, 13, 351. [Google Scholar] [CrossRef]
  14. Ma, S.; Leng, J.; Zheng, P.; Chen, Z.; Li, B.; Li, W.; Liu, Q.; Chen, X. A digital twin-assisted deep transfer learning method towards intelligent thermal error modeling of electric spindles. J. Intell. Manuf. 2025, 36, 1659–1688. [Google Scholar] [CrossRef]
  15. Tian, J.; Han, D.; Li, M.; Shi, P. A multi-source information transfer learning method with subdomain adaptation for cross-domain fault diagnosis. Knowl.-Based Syst. 2022, 243, 108466. [Google Scholar] [CrossRef]
  16. Qian, Q.; Qin, Y.; Luo, J.; Wang, Y.; Wu, F. Deep discriminative transfer learning network for cross-machine fault diagnosis. Mech. Syst. Signal Process. 2023, 186, 109884. [Google Scholar] [CrossRef]
  17. Huo, C.; Jiang, Q.; Shen, Y.; Zhu, Q.; Zhang, Q. Enhanced transfer learning method for rolling bearing fault diagnosis based on linear superposition network. Eng. Appl. Artif. Intell. 2023, 121, 105970. [Google Scholar] [CrossRef]
  18. Li, J.; Zhang, X.; Yue, K.; Chen, J.; Chen, Z.; Li, W. An auto-regulated universal domain adaptation network for uncertain diagnostic scenarios of rotating machinery. Expert Syst. Appl. 2024, 249, 123836. [Google Scholar] [CrossRef]
  19. Xia, J.; Huang, R.; Chen, Z.; He, G.; Li, W. A novel digital twin-driven approach based on physical-virtual data fusion for gearbox fault diagnosis. Reliab. Eng. Syst. Saf. 2023, 240, 109542. [Google Scholar] [CrossRef]
  20. Li, X.; Zhang, W.; Ma, H.; Luo, Z.; Li, X. Domain generalization in rotating machinery fault diagnostics using deep neural networks. Neurocomputing 2020, 403, 409–420. [Google Scholar] [CrossRef]
  21. Zhang, Q.; Zhao, Z.; Zhang, X.; Liu, Y.; Sun, C.; Li, M.; Wang, S.; Chen, X. Conditional adversarial domain generalization with a single discriminator for bearing fault diagnosis. IEEE Trans. Autom. Sci. Eng. 2021, 70, 3514515. [Google Scholar] [CrossRef]
  22. Chen, L.; Li, Q.; Shen, C.; Zhu, J.; Wang, D.; Xia, M. Adversarial domain-invariant generalization: A generic domain-regressive framework for bearing fault diagnosis under unseen conditions. IEEE Trans. Ind. Inform. 2021, 18, 1790–1800. [Google Scholar] [CrossRef]
  23. Ragab, M.; Chen, Z.; Zhang, W.; Eldele, E.; Wu, M.; Kwoh, C.-K.; Li, X. Conditional contrastive domain generalization for fault diagnosis. IEEE Trans. Autom. Sci. Eng. 2022, 71, 3506912. [Google Scholar] [CrossRef]
  24. Li, J.; Wang, Y.; Zi, Y.; Zhang, H.; Li, C. Causal consistency network: A collaborative multimachine generalization method for bearing fault diagnosis. IEEE Trans. Ind. Inform. 2022, 19, 5915–5924. [Google Scholar] [CrossRef]
  25. Jia, L.; Chow, T.W.S.; Yuan, Y. Causal disentanglement domain generalization for time-series signal fault diagnosis. Neural Netw. 2024, 172, 106099. [Google Scholar] [CrossRef]
  26. Zhao, C.; Shen, W. Imbalanced domain generalization via semantic-discriminative augmentation for intelligent fault diagnosis. Adv. Eng. Inform. 2024, 59, 102262. [Google Scholar] [CrossRef]
  27. Zhu, Q.; Liu, H.; Bao, C.; Zhu, J.; Mao, X.; He, S.; Peng, F. Decoupled interpretable robust domain generalization networks: A fault diagnosis approach across bearings, working conditions, and artificial-to-real scenarios. Adv. Eng. Inform. 2024, 61, 102445. [Google Scholar] [CrossRef]
  28. Pang, S. Stacked maximum independence autoencoders: A domain generalization approach for fault diagnosis under various working conditions. Mech. Syst. Signal Process. 2024, 208, 111035. [Google Scholar] [CrossRef]
  29. Xu, M.; Zhang, Y.; Lu, B.; Liu, Z.; Sun, Q. A novel domain-private-suppress meta-recognition network based universal domain generalization for machinery fault diagnosis. Knowl.-Based Syst. 2025, 309, 112775. [Google Scholar] [CrossRef]
  30. Lu, F.; Tong, Q.; Jiang, X.; Du, X.; Xu, J.; Huo, J. Prior knowledge embedding convolutional autoencoder: A single-source domain generalized fault diagnosis framework under small samples. Comput. Ind. 2025, 164, 104169. [Google Scholar] [CrossRef]
  31. Xiao, Y.; Shao, H.; Wang, J.; Yan, S.; Liu, B. Bayesian variational transformer: A generalizable model for rotating machinery fault diagnosis. Mech. Syst. Signal Process. 2024, 207, 110936. [Google Scholar] [CrossRef]
  32. Gretton, A.; Borgwardt, K.; Rasch, M.; Schölkopf, B.; Smola, A.J. A kernel method for the two-sample-problem. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 4–7 December 2006. [Google Scholar]
  33. Li, D.; Zhang, J.; Yang, Y.; Liu, C.; Song, Y.-Z.; Hospedales, T. Episodic training for domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  34. Zhao, C.; Zio, E.; Shen, W. Domain generalization for cross-domain fault diagnosis: An application-oriented perspective and a benchmark study. Reliab. Eng. Syst. Saf. 2024, 245, 109964. [Google Scholar] [CrossRef]
  35. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
Figure 1. Overview of EODG.
Figure 2. The schematic diagram of episodic training.
Figure 3. The diagnosis procedures of EODG.
Figure 4. Test rig of HUST bearing dataset.
Figure 5. Test rig of CNC bearing dataset. (a) CNC picture. (b) Structure diagram.
Figure 6. Structure of ResBlock.
Figure 7. Accuracy curves (%) of HUST bearing dataset. (a) Accuracy curve of Target Domain 1. (b) Accuracy curve of Target Domain 2. (c) Accuracy curve of Target Domain 3. (d) Average accuracy curve of all target domains.
Figure 8. Accuracy histogram (%) of HUST bearing dataset.
Figure 9. Accuracy curves (%) of CNC bearing dataset. (a) Accuracy curve of Target Domain 1. (b) Accuracy curve of Target Domain 2. (c) Accuracy curve of Target Domain 3. (d) Accuracy curve of Target Domain 4. (e) Average accuracy curve of all target domains.
Figure 10. Accuracy histogram (%) of CNC bearing dataset.
Figure 11. Feature visualization of task C7.
Figure 12. Accuracy histogram (%) of CNC bearing dataset ablation experiment.
Table 1. Details of HUST bearing dataset.

| Label | Bearing State | Mark | Rotating Speeds (Hz) |
|-------|---------------|------|----------------------|
| 1 | Normal | N | 20, 25, 30, 35, 40, and varying speed (0-40-0) |
| 2 | Inner-race fault | IRF | (same as above) |
| 3 | Outer-race fault | ORF | (same as above) |
| 4 | Ball fault | BF | (same as above) |
| 5 | Inner- and outer-race fault | ComF | (same as above) |
Table 2. Detailed specifications of HUST bearing.

| Pitch Diameter (mm) | Ball Diameter (mm) | Number of Balls | Contact Angle (°) |
|---------------------|--------------------|-----------------|-------------------|
| 38.52 | 7.94 | 9 | 0 |
Table 3. Details of CNC bearing dataset.

| Label | Bearing State | Mark | Working Condition | Spindle Speeds (rpm) |
|-------|---------------|------|-------------------|----------------------|
| 1 | Normal | N | Aluminum cutting | 6000, 7000, 8000, 9000, 10,000, 11,000, 12,000 |
| 2 | Inner-race fault | IRF | (same as above) | (same as above) |
| 3 | Outer-race fault | ORF | (same as above) | (same as above) |
| 4 | Cage fault | CF | (same as above) | (same as above) |
Table 4. Detailed specifications of CNC bearing.

| Pitch Diameter (mm) | Ball Diameter (mm) | Number of Balls | Contact Angle (°) |
|---------------------|--------------------|-----------------|-------------------|
| 54 | 5.6 | 22 | 18 |
Table 5. Network structures.

| Modules | Layer Type | Activation Function | Kernel Size | Output |
|---------|------------|---------------------|-------------|--------|
| G, G_d, G_is | Input | / | / | (1, 2048) |
| | Conv1 | ReLU | 16 × 129 | (16, 1920) |
| | MaxPooling1 | / | 8 | (16, 240) |
| | ResBlock1 | ReLU | 32 × 5 | (32, 240) |
| | MaxPooling2 | / | 4 | (32, 60) |
| | ResBlock2 | ReLU | 32 × 5 | (32, 60) |
| | MaxPooling3 | / | 4 | (32, 15) |
| | Conv2 | ReLU | 64 × 15 | (64, 1) |
| | Flatten | / | / | (64) |
| C, D, C_is | Linear1 | Leaky_ReLU | 32 | (32) |
| | Linear2 | Leaky_ReLU | 16 | (16) |
| | Linear3 (Output) | SoftMax | n_c / n_s | (n_c / n_s) |
Table 6. Experimental setting of HUST bearing dataset.

| Task | S1 | S2 | S3 | Target Domains |
|------|----|----|----|----------------|
| H1 | 20 | 25 | 30 | 35, 40, VS |
| H2 | 30 | 35 | 40 | 20, 25, VS |
| H3 | 20 | 30 | 40 | 25, 35, VS |
| H4 | 20 | 25 | 40 | 30, 35, VS |
| H5 | 20 | 35 | 40 | 25, 30, VS |
| H6 | 25 | 30 | 35 | 20, 40, VS |

Domains are denoted by rotating speed (Hz); VS is the varying-speed condition (0-40-0). In every task, S1 contains (N IRF BF ComF), S2 contains (N IRF ORF BF), S3 contains (N ORF BF ComF), and the target domains contain (N IRF ORF BF ComF).
Table 7. Experimental setting of CNC bearing dataset.

| Task | S1 | S2 | S3 | Target Domains |
|------|----|----|----|----------------|
| C1 | 6k | 7k | 8k | 9k, 10k, 11k, 12k |
| C2 | 10k | 11k | 12k | 6k, 7k, 8k, 9k |
| C3 | 6k | 9k | 12k | 7k, 8k, 10k, 11k |
| C4 | 8k | 9k | 10k | 6k, 7k, 11k, 12k |
| C5 | 7k | 9k | 11k | 6k, 8k, 10k, 12k |
| C6 | 7k | 8k | 9k | 6k, 10k, 11k, 12k |
| C7 | 9k | 10k | 11k | 6k, 7k, 8k, 12k |
| C8 | 6k | 7k | 12k | 8k, 9k, 10k, 11k |
| C9 | 6k | 11k | 12k | 7k, 8k, 9k, 10k |

Domains are denoted by spindle speed (rpm; "k" = ×1000). In every task, S1 contains (N ORF CF), S2 contains (N IRF ORF), S3 contains (N IRF CF), and the target domains contain (N IRF ORF CF).
Table 8. Diagnosis results (%) of HUST bearing dataset.

| Tasks | CNN | CGANs | ADIG | CCDG | CCN | Proposed |
|-------|-----|-------|------|------|-----|----------|
| H1 | 67.84 ± 3.08 | 66.89 ± 2.49 | 78.10 ± 5.04 | 66.71 ± 3.05 | 72.29 ± 2.48 | **81.91 ± 3.95** |
| H2 | 56.46 ± 3.25 | 64.81 ± 4.44 | **81.20 ± 5.25** | 65.84 ± 5.25 | 69.19 ± 2.04 | 80.65 ± 3.12 |
| H3 | 80.66 ± 3.01 | 84.28 ± 3.20 | 86.63 ± 3.52 | 82.45 ± 2.67 | 82.55 ± 1.22 | **94.47 ± 1.16** |
| H4 | 78.56 ± 3.69 | 79.87 ± 2.79 | 85.70 ± 3.10 | 78.69 ± 2.53 | 71.53 ± 2.53 | **89.92 ± 2.57** |
| H5 | 82.15 ± 1.87 | 83.78 ± 4.75 | 88.63 ± 3.70 | 81.26 ± 3.78 | 81.21 ± 0.76 | **95.57 ± 0.78** |
| H6 | 69.55 ± 4.14 | 78.93 ± 1.59 | 82.17 ± 2.97 | 74.59 ± 4.38 | 72.56 ± 0.72 | **82.31 ± 2.76** |
| Avg. | 72.54 ± 8.98 | 76.43 ± 7.74 | 83.74 ± 3.58 | 74.92 ± 6.60 | 74.89 ± 5.08 | **87.47 ± 6.12** |

The highest accuracy of each row is marked in bold.
Table 9. Diagnosis results (%) of CNC bearing dataset.

| Tasks | CNN | CGANs | ADIG | CCDG | CCN | Proposed |
|-------|-----|-------|------|------|-----|----------|
| C1 | 65.18 ± 1.16 | 67.57 ± 0.42 | 77.60 ± 1.94 | 68.65 ± 4.67 | 73.47 ± 2.50 | **86.38 ± 0.61** |
| C2 | 66.77 ± 2.70 | 59.88 ± 2.61 | 73.69 ± 3.11 | 66.44 ± 3.48 | 76.55 ± 3.29 | **80.52 ± 2.20** |
| C3 | 67.64 ± 2.15 | 73.70 ± 2.68 | **80.30 ± 1.25** | 73.19 ± 2.45 | 73.18 ± 1.88 | 80.03 ± 0.90 |
| C4 | 74.80 ± 3.12 | 80.49 ± 1.09 | 79.43 ± 0.16 | 74.66 ± 2.65 | 82.10 ± 3.58 | **89.45 ± 2.42** |
| C5 | 68.22 ± 1.71 | 79.68 ± 3.79 | **87.95 ± 1.98** | 73.87 ± 0.94 | 68.96 ± 0.64 | 87.18 ± 1.83 |
| C6 | 73.42 ± 2.96 | 75.45 ± 2.71 | **87.72 ± 0.52** | 83.27 ± 2.28 | 73.40 ± 4.81 | 86.22 ± 0.59 |
| C7 | 85.95 ± 3.00 | 69.76 ± 2.57 | 86.56 ± 4.94 | 84.89 ± 1.78 | 80.38 ± 2.13 | **94.67 ± 1.06** |
| C8 | 72.07 ± 1.05 | 79.33 ± 1.66 | 78.49 ± 1.37 | 75.83 ± 1.43 | 74.22 ± 1.69 | **84.73 ± 2.14** |
| C9 | 63.51 ± 1.50 | 65.84 ± 3.01 | 76.19 ± 2.42 | 66.43 ± 2.58 | 73.16 ± 1.89 | **80.87 ± 3.32** |
| Avg. | 70.84 ± 6.43 | 72.41 ± 6.73 | 80.88 ± 4.96 | 74.14 ± 6.25 | 75.05 ± 3.81 | **85.56 ± 4.48** |

The highest accuracy of each row is marked in bold.
Table 10. Ablation experiment results (%) of CNC bearing dataset.

| Tasks | FO | EPI | FT | EPI + FO | FT + FO | FT + EPI | Proposed |
|-------|----|-----|----|----------|---------|----------|----------|
| C1 | 66.94 ± 1.63 | 73.41 ± 2.71 | 79.24 ± 1.33 | 70.91 ± 0.94 | 79.42 ± 2.21 | 82.46 ± 1.91 | **86.38 ± 0.61** |
| C2 | 64.76 ± 2.25 | 66.51 ± 1.83 | 71.54 ± 1.75 | 68.29 ± 2.36 | 64.82 ± 0.99 | 69.10 ± 4.94 | **80.52 ± 2.20** |
| C3 | 73.31 ± 1.86 | 71.81 ± 2.34 | 71.56 ± 1.91 | 71.91 ± 1.38 | 70.22 ± 1.50 | 76.35 ± 1.37 | **80.03 ± 0.90** |
| C4 | 83.02 ± 0.99 | 82.75 ± 1.39 | 78.18 ± 1.86 | 82.06 ± 3.89 | 76.30 ± 3.58 | 80.68 ± 2.23 | **89.45 ± 2.42** |
| C5 | 73.03 ± 1.90 | 81.64 ± 2.47 | 77.68 ± 3.72 | 79.20 ± 5.63 | 76.09 ± 2.18 | 76.89 ± 2.90 | **87.18 ± 1.83** |
| C6 | 78.17 ± 4.20 | 80.92 ± 0.87 | 83.30 ± 1.66 | 81.30 ± 1.17 | 81.87 ± 1.68 | 82.64 ± 1.37 | **86.22 ± 0.59** |
| C7 | 89.23 ± 2.38 | 86.92 ± 0.97 | 92.61 ± 2.13 | 91.64 ± 2.60 | 90.53 ± 0.70 | 93.69 ± 2.09 | **94.67 ± 1.06** |
| C8 | 77.38 ± 2.48 | 77.34 ± 2.16 | 80.92 ± 1.81 | 77.13 ± 1.02 | 79.08 ± 2.48 | 80.96 ± 1.19 | **84.73 ± 2.14** |
| C9 | 66.49 ± 3.40 | 69.55 ± 2.60 | 71.24 ± 2.05 | 70.50 ± 1.22 | 68.75 ± 2.29 | 72.32 ± 3.22 | **80.87 ± 3.32** |
| Avg. | 74.70 ± 7.68 | 76.76 ± 6.44 | 78.47 ± 6.48 | 76.99 ± 7.03 | 76.34 ± 7.28 | 79.45 ± 6.67 | **85.56 ± 4.48** |

The highest accuracy of each row is marked in bold.

Share and Cite

Liao, Y.; Zhou, S.; Liu, Y.; Pang, K.; Li, J.; Li, C.; Zhao, L. Episodic Training and Feature Orthogonality-Driven Domain Generalization for Rotating Machinery Fault Diagnosis Under Unseen Working Conditions. Machines 2025, 13, 563. https://doi.org/10.3390/machines13070563