Article

A Hybrid Optimization Model for Transformer Fault Diagnosis Based on Gas Classification

1 Qingyuan Power Supply Bureau, Guangdong Power Grid Corporation Limited, No. 38, Beijiang Road, Qingcheng District, Qingyuan 511500, China
2 College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
* Author to whom correspondence should be addressed.
Digital 2026, 6(1), 24; https://doi.org/10.3390/digital6010024
Submission received: 14 January 2026 / Revised: 3 March 2026 / Accepted: 5 March 2026 / Published: 10 March 2026

Abstract

Dissolved gas analysis (DGA) provides valuable information for transformer condition monitoring, yet accurate multi-class fault identification remains challenging due to overlapping gas patterns and the sensitivity of classifier hyperparameters. This study proposes a hybrid optimization framework that combines Particle Swarm Optimization and Grey Wolf Optimization to tune the hyperparameters of a Support Vector Machine (SVM) for transformer fault diagnosis based on gas classification. The model is evaluated on a DGA dataset using a strict protocol that separates cross-validation–based tuning from held-out test assessment. Experimental results show that the proposed hybrid PSO-GWO-SVM achieves superior diagnostic performance and more stable convergence compared with representative single-optimizer baselines, demonstrating its potential for practical transformer fault identification.

1. Introduction

Transformers are among the most critical components in power systems, playing a vital role in the stable operation of the electrical grid, and are commonly monitored through multi-dimensional condition-sensing approaches, including electrical, chemical, thermal, and acoustic voiceprint-based techniques. Incorrect or delayed fault diagnosis may lead to unexpected outages, accelerated insulation deterioration, and even catastrophic transformer failures, with substantial repair and replacement costs, loss of revenue due to service interruption, and increased safety and environmental risks. As electrical networks grow in complexity, early fault detection and diagnosis are therefore paramount for condition-based maintenance and reliable grid operation. Effective fault diagnosis reduces downtime, prevents further damage, and improves the overall maintenance strategy of transformers, which directly contributes to grid security and operational continuity [1,2].
Transformers, particularly oil-immersed types, can develop faults over time, leading to the generation of gases as a byproduct of these faults. These gases, typically hydrogen (H2), methane (CH4), ethane (C2H6), ethylene (C2H4), and acetylene (C2H2), are dissolved in the transformer oil and can provide valuable insight into the fault mechanisms. The type and concentration of these gases are closely related to specific fault types, such as partial discharge, overheating, or electrical arcing [3]. Therefore, monitoring the gases dissolved in transformer oil becomes a crucial method for diagnosing faults.
Dissolved Gas Analysis (DGA) is a widely adopted technique used to detect and analyze the dissolved gases in transformer oil. Through DGA, it is possible to detect the concentration of specific gases and identify the fault type in a transformer. Several methods, including the IEEE standard and the Rogers’ ratio, have been proposed for interpreting DGA results and diagnosing transformer faults [4,5,6,7].
However, as fault patterns become more complex and subtle, traditional diagnostic methods may not offer sufficient accuracy. This has led to the development of advanced diagnostic tools based on artificial intelligence (AI) and machine learning (ML) techniques.
In recent years, many scholars across various countries have focused on the application of machine learning algorithms for fault classification using DGA data. Several studies have explored the use of Support Vector Machines (SVM), Artificial Neural Networks (ANN), Decision Trees (DT), and Random Forest (RF) for diagnosing transformer faults. These methods have shown varying degrees of success in classifying faults based on gas concentration data [8,9,10]. However, the performance of these models is highly dependent on the selection of features, optimization of model parameters, and the handling of class imbalances in the data [11,12].
In the broader field of condition monitoring and fault diagnostics, statistical learning has also been applied to other types of condition indicators. For example, Niola et al. employed discriminant analysis to characterize and classify the vibrational behavior of a gas micro-turbine under different fuel conditions, highlighting the effectiveness of data-driven classification for machine monitoring [13]. In addition, recent surveys have discussed the opportunities and challenges of large models (e.g., large language/multimodal models) for machine monitoring and fault diagnostics, which may further promote more adaptive and interpretable diagnostic frameworks in the future [14].
To improve classification accuracy and robustness, hybrid models combining optimization algorithms with machine learning classifiers have been proposed. For example, Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and Artificial Bee Colony (ABC) algorithms have been integrated with SVM to optimize hyperparameters and enhance the model’s classification performance [15,16]. Despite these advancements, challenges such as local minima and the need for more computationally efficient algorithms still persist [17,18].
Furthermore, many existing models have shown limitations when applied to diverse datasets or in real-time fault detection scenarios [19].
This study introduces a hybrid optimization model combining PSO and GWO with SVM, referred to as PSO-GWO-SVM. This model aims to improve the accuracy of transformer fault classification by optimizing the parameters of the SVM, thereby enhancing its ability to classify faults based on dissolved gas concentration data. PSO is employed to explore the parameter space efficiently, while GWO strengthens exploitation around promising solutions, providing a balance between exploration and exploitation. SVM is adopted as the base classifier due to its strong generalization capability and robustness on small-to-medium tabular datasets, which makes it suitable for DGA-based fault classification. Since the diagnostic performance of SVM is sensitive to its hyperparameters, metaheuristic optimizers are introduced to perform data-driven hyperparameter tuning. The optimizers considered in this study (PSO, GWO, IGWO, and ISSA) are representative population-based methods that have been widely used for model parameter optimization in fault diagnosis applications.
The proposed PSO-GWO hybrid is motivated by a complementary exploration–exploitation mechanism: PSO’s velocity-driven updates help preserve population diversity and improve global exploration, whereas GWO’s elite-guided encircling strengthens local exploitation and refinement around promising regions. Combining these complementary behaviors is expected to reduce premature convergence and improve the stability of hyperparameter search. Based on this rationale, the manuscript evaluates PSO-SVM, GWO-SVM, IGWO-SVM, ISSA-SVM, and the hybrid PSO-GWO-SVM under the same evaluation protocol to verify the effectiveness of the proposed combination.
Preliminary experiments have demonstrated the superior performance of the PSO-GWO-SVM model in comparison to other hybrid models such as PSO-SVM, GWO-SVM, and IGWO-SVM.
This paper contributes to the field by proposing an advanced hybrid model for transformer fault diagnosis, addressing the challenges faced by traditional machine learning algorithms. The proposed approach not only optimizes the classification process but also provides a more robust and reliable tool for the early detection of transformer faults, which is critical for ensuring the security and stability of power systems.
To clearly summarize the objective and contributions of this work, the key points are as follows:
(1) A hybrid PSO-GWO strategy is proposed to tune SVM hyperparameters for DGA-based multi-class transformer fault diagnosis.
(2) The hybrid search is designed to balance exploration and exploitation, improving convergence stability and diagnostic accuracy.
(3) A strict evaluation protocol is adopted, in which hyperparameters are optimized via cross-validation on the training set and final performance is assessed on a held-out test set.
(4) Comparative experiments are conducted against representative single-optimizer baselines, with performance evaluated using accuracy and confusion-matrix analysis.
The remainder of this paper is organized as follows: Section 2 presents the proposed method, Section 3 describes the dataset and experimental settings, Section 4 reports the experimental results, and Section 5 concludes the paper and outlines future work.

2. PSO-GWO-SVM Hybrid Model

2.1. Principles of Particle Swarm Optimization

Particle Swarm Optimization is a population-based optimization algorithm proposed by Kennedy and Eberhart in 1995 [20]. PSO simulates the collective behavior of natural systems such as bird flocks and fish schools, where particles communicate and share information to achieve global optimization.
The position of each particle in the search space is represented as:

x_i^t = (x_{i1}^t, x_{i2}^t, …, x_{id}^t)

where d is the problem dimension, and x_i^t is the position of particle i at time t. The velocity update rule for each particle is given by:

v_i^{t+1} = w · v_i^t + c_1 · r_1 · (p_i − x_i^t) + c_2 · r_2 · (g − x_i^t)

where w is the inertia weight, controlling the balance between exploration and exploitation; c_1 and c_2 are the acceleration coefficients, determining the influence of the personal and global best positions; r_1 and r_2 are random numbers typically drawn from [0, 1]; p_i is the personal best position of particle i; and g is the global best position found by the swarm.

The position update rule is given by:

x_i^{t+1} = x_i^t + v_i^{t+1}
In high-dimensional and complex constraint spaces, PSO may suffer from premature convergence to local optima, thus limiting its search performance. PSO has been widely applied in various fields, including function optimization, neural network training, data mining, and path planning [21,22,23].
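The velocity and position updates above can be sketched in a few lines of NumPy. The hyperparameter values (w, c1, c2) and the toy swarm below are illustrative assumptions, not the settings used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for the whole swarm.

    x, v, pbest : arrays of shape (n_particles, d)
    gbest       : array of shape (d,), best position found so far
    """
    n, d = x.shape
    r1 = rng.random((n, d))  # r1, r2 ~ U[0, 1], fresh each iteration
    r2 = rng.random((n, d))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new

# Toy usage: 5 particles in a 2-dimensional search space.
x = rng.uniform(-1, 1, (5, 2))
v = np.zeros((5, 2))
x, v = pso_step(x, v, pbest=x.copy(), gbest=np.zeros(2))
```

In a full optimizer this step is repeated for a fixed number of iterations, with pbest and gbest refreshed from the fitness values after each update.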

2.2. Principles of Grey Wolf Optimization

Grey Wolf Optimization is a nature-inspired optimization algorithm introduced by Mirjalili et al. in 2014 [24]. It is based on the hunting behavior and social hierarchy of grey wolves, which have an organized social structure. In this algorithm, wolves are considered as search agents, and their movements are influenced by both their individual experiences and the experiences of other wolves in the pack. The social hierarchy of the grey wolf pack is shown in Figure 1.
The position of each wolf in the search space is represented as:

x_i^t = (x_{i1}^t, x_{i2}^t, …, x_{id}^t)

where d is the problem dimension, and x_i^t is the position of the i-th wolf at time t. The position of a wolf is updated by encircling the prey, represented by the alpha wolf:

D_α = |C_α · X_α − X_i(t)|

X_i(t+1) = X_α − A_α · D_α

where D_α denotes the scaled distance between the current wolf position X_i(t) and the alpha wolf position X_α; A_α and C_α are coefficient vectors that control the exploration and exploitation balance, updated dynamically based on a random value and the current iteration number. The process of encircling, chasing, and attacking helps the wolves explore the search space and converge towards the global optimum.
Grey Wolf Optimization has been successfully applied to a wide range of optimization problems, including multi-objective optimization, feature selection, machine learning, and engineering design optimization [25,26,27].
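The alpha-guided encircling update above can be sketched as follows. The decay schedule for the control parameter a (linearly from 2 to 0) follows the standard GWO formulation; note that the full algorithm averages analogous updates guided by the alpha, beta, and delta wolves, while this sketch shows only the alpha-guided step presented in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(X, X_alpha, a):
    """Alpha-guided GWO position update.

    X       : (n_wolves, d) current positions
    X_alpha : (d,) position of the alpha (best) wolf
    a       : scalar decreasing from 2 to 0 over the iterations
    """
    n, d = X.shape
    A = 2 * a * rng.random((n, d)) - a   # A_alpha in [-a, a]
    C = 2 * rng.random((n, d))           # C_alpha in [0, 2]
    D_alpha = np.abs(C * X_alpha - X)    # encircling distance
    return X_alpha - A * D_alpha

# Toy usage: 5 wolves in 2 dimensions at iteration t of T = 100.
t, T = 0, 100
X = rng.uniform(-1, 1, (5, 2))
X = gwo_step(X, X_alpha=np.zeros(2), a=2 * (1 - t / T))
```

As a shrinks, |A| tends below 1 and the pack converges toward the alpha (exploitation); while |A| > 1, wolves can be pushed away from the alpha (exploration).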

2.3. Principles of Support Vector Machine

Support Vector Machine is a supervised machine learning algorithm primarily used for classification and regression tasks. It was first introduced by Cortes and Vapnik in 1995 [28]. SVM works by finding a hyperplane that best separates the data into distinct classes. In a two-dimensional space, this hyperplane is simply a line that divides the dataset into two classes. The goal of SVM is to find the optimal hyperplane that maximizes the margin between the two classes. The margin is defined as the distance between the hyperplane and the nearest data points from each class, called support vectors, as shown in Figure 2.
In higher dimensions, the separating boundary generalizes to a hyperplane. The objective of SVM is to find a hyperplane w^T x + b = 0 that best separates the classes while maximizing the margin. The decision function is given by:

f(x) = w^T x + b

where w is the normal vector to the hyperplane, and b is the bias term. The margin is:

2 / ‖w‖

and the objective is to maximize this margin, which is equivalent to minimizing ‖w‖ subject to the classification constraints, a convex optimization problem.
SVM can handle non-linearly separable data by using the kernel trick. The idea is to map the input data into a higher-dimensional feature space where a linear hyperplane can separate the data. The kernel function computes the inner product of the data points in the high-dimensional space without explicitly transforming the data, allowing SVM to construct complex, non-linear decision boundaries. Common kernels include the linear kernel:

K(x_i, x_j) = x_i^T x_j

the polynomial kernel:

K(x_i, x_j) = (x_i^T x_j + c)^d

and the radial basis function (RBF) kernel:

K(x_i, x_j) = exp(−γ ‖x_i − x_j‖²)
This allows SVM to handle complex decision boundaries and improve its performance in non-linear classification tasks [23].
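The three kernels above can be written directly from their definitions; the parameter values (c, d, γ) below are arbitrary examples:

```python
import numpy as np

def linear_kernel(xi, xj):
    # K(xi, xj) = xi^T xj
    return xi @ xj

def poly_kernel(xi, xj, c=1.0, d=3):
    # K(xi, xj) = (xi^T xj + c)^d
    return (xi @ xj + c) ** d

def rbf_kernel(xi, xj, gamma=0.5):
    # K(xi, xj) = exp(-gamma * ||xi - xj||^2)
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

xi = np.array([1.0, 0.0])
xj = np.array([0.0, 1.0])
# Note that rbf_kernel(xi, xi) = 1.0 for any gamma: the RBF kernel
# measures similarity, peaking at identical inputs.
```

In practice the kernel matrix K with entries K(x_i, x_j) over all training pairs is what the SVM solver consumes.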
SVM is known for its ability to perform well with high-dimensional data and in cases where the number of dimensions exceeds the number of data points. However, it is sensitive to the choice of kernel function, regularization parameters, and other hyperparameters. Additionally, SVM training can be computationally expensive, especially for large datasets, and it can be sensitive to noisy data and outliers, which can influence the optimal hyperplane [24]. Despite these limitations, SVM has been widely applied in various fields such as image recognition, bioinformatics, text classification, and speech recognition. Recent studies have explored its applications and further refinements to improve its performance in these areas [29].

2.4. Multi-Model Hybrid Approach

A multi-model hybrid approach combines the strengths of different optimization algorithms to improve overall performance. In the case of GWO, PSO, and SVM, the hybrid model leverages the complementary search behaviors of GWO and PSO to optimize the hyperparameters of the SVM, thereby enhancing classification accuracy. In this approach, GWO and PSO work together to find optimal values for the SVM hyperparameters, which are crucial for achieving good classification performance.
The flow of the hybrid model algorithm is shown in Figure 3.
By combining the exploration capabilities of PSO with the elite-guided exploitation of GWO, the hybrid approach ensures a more thorough search of the hyperparameter space: PSO maintains global exploration through its velocity-driven updates, while GWO refines the search around promising solutions. This hybrid approach improves both the efficiency and accuracy of the SVM model.
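One plausible realization of such a cooperative iteration is sketched below: a PSO velocity update for exploration followed by a GWO-style pull toward the elite solution for exploitation. This is an illustrative assumption; the exact coupling used in the proposed model is given by the flow in Figure 3, and all parameter values here are placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

def hybrid_step(x, v, pbest, gbest, a, w=0.7, c1=1.5, c2=1.5):
    """One hypothetical hybrid PSO-GWO iteration.

    x, v, pbest : (n, d) positions, velocities, personal bests
    gbest       : (d,) elite solution; a : GWO control parameter
    """
    n, d = x.shape
    # PSO stage: velocity-driven exploration step.
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    # GWO stage: elite-guided encircling around the current best.
    A = 2 * a * rng.random((n, d)) - a
    C = 2 * rng.random((n, d))
    x = gbest - A * np.abs(C * gbest - x)
    return x, v

# Toy usage: 5 candidates in a 2-dimensional hyperparameter space.
x = rng.uniform(-1, 1, (5, 2))
v = np.zeros((5, 2))
x, v = hybrid_step(x, v, pbest=x.copy(), gbest=np.zeros(2), a=1.0)
```

After each such iteration, the fitness of every candidate is evaluated (here, the cross-validation accuracy of the corresponding SVM) and pbest/gbest are refreshed before the next step.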
To avoid information leakage and ensure an unbiased evaluation, the dataset is first split into a training set (70%) and an independent test set (30%) using stratified sampling. The test set is strictly held out and is not involved in any stage of preprocessing parameter estimation, hyperparameter tuning, or model selection.
During the hybrid GWO-PSO optimization, the fitness of each candidate SVM parameter set is computed only on the training set using stratified 5-fold cross-validation. In each fold, the normalization (or other preprocessing) parameters are fitted on the fold-training subset and then applied to the fold-validation subset. The fitness is defined as the mean classification performance (e.g., accuracy) across the 5 folds.
After the optimization converges, the SVM is retrained on the full training set using the optimal hyperparameters. Finally, the trained model is evaluated once on the independent test set to report the final generalization performance.
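The leakage-free protocol described above can be sketched with scikit-learn (the study itself uses MATLAB, so this is a stand-in; the synthetic data, sample counts, and the fixed C value are all illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 5-gas DGA features and 7 fault classes.
X, y = make_classification(n_samples=700, n_features=5, n_informative=4,
                           n_redundant=0, n_classes=7,
                           n_clusters_per_class=1, random_state=0)

# 70/30 stratified split; the test set is never touched during tuning.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Scaling lives inside the pipeline, so each CV fold fits its own
# normalization parameters on the fold-training subset only.
model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", C=10.0, gamma="scale"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fitness = cross_val_score(model, X_tr, y_tr, cv=cv).mean()

# After tuning converges, refit on the full training set and
# evaluate exactly once on the held-out test set.
model.fit(X_tr, y_tr)
test_acc = model.score(X_te, y_te)
```

Placing the scaler inside the pipeline is what prevents fold-validation data from influencing the preprocessing parameters, mirroring the per-fold fitting described above.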
Although PSO and GWO are both well-known optimizers, the proposed PSO-GWO hybrid is motivated by their complementary search behaviors. PSO maintains population diversity through velocity-driven updates, which helps preserve global exploration and reduces premature convergence. In contrast, GWO performs exploitation by encircling and refining the search around elite solutions. By combining these two roles in a cooperative manner, the hybrid strategy achieves a more balanced exploration–exploitation trade-off for SVM hyperparameter tuning, which in turn improves the stability of the optimization process and enhances the generalization performance on multi-class transformer fault diagnosis.
To improve reproducibility and methodological transparency, we summarize the key settings of the training protocol and the optimization algorithms in Table 1. The SVM classifier is implemented as an ECOC multi-class model with an RBF kernel. Hyperparameter tuning is performed in the log10 domain, i.e., C = 10 x 1 (BoxConstraint) and S = 10 x 2 (KernelScale), where x 1 and x 2 are optimized by the corresponding metaheuristic algorithm. The objective function is the 5-fold cross-validation accuracy on the training set, while the test set is strictly held out and used only for the final evaluation.
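The log10-domain decoding above can be illustrated as follows. The helper names are hypothetical, and the mapping from MATLAB's KernelScale S to a scikit-learn-style gamma (gamma = 1/S², since fitcsvm computes exp(−‖x−z‖²/S²)) is an assumption of this sketch:

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def decode(x):
    """Map metaheuristic decision variables (x1, x2) to SVM
    hyperparameters in the log10 domain, as in Table 1."""
    C = 10.0 ** x[0]        # BoxConstraint
    S = 10.0 ** x[1]        # KernelScale
    gamma = 1.0 / S ** 2    # assumed equivalent RBF width
    return C, gamma

def fitness(x, X_train, y_train):
    """Objective for the optimizer: mean 5-fold CV accuracy
    of the decoded SVM on the training set only."""
    C, gamma = decode(x)
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(clf, X_train, y_train, cv=5).mean()
```

Searching in the log10 domain lets the optimizer cover several orders of magnitude of C and S with a modest, roughly uniform search range for x1 and x2.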

3. Data Processing and Evaluation Indicators

3.1. Data Analysis and Treatment of Dissolved Gases in Transformer Oils

In this study, the data used for analysis is derived from multiple measurement datasets of dissolved gases in transformer oils. These datasets include the concentration values of five common gases typically found in transformer oil: hydrogen (H2), methane (CH4), ethane (C2H6), ethylene (C2H4), and acetylene (C2H2). These gases are important indicators of the health and performance of electrical transformers, as their concentrations can reveal underlying faults or potential failures within the transformer [25].
The dataset used in this study is publicly available at: https://github.com/m-karimanmajd/Transformer-Oil-DGA-Dataset-with-Gaussian-Copula-Augmentation.git (accessed on 4 March 2026). The repository provides synthetic (augmented) transformer-oil DGA data generated via a Gaussian Copula approach and states that the underlying statistical patterns are derived from the IEC TC 10 database (the original IEC data are not included).
The dissolved gases in transformer oils are primarily produced by the decomposition of oil insulation material under electrical and thermal stresses. The concentration of these gases can vary depending on the type and severity of the fault occurring within the transformer [26].
The partial data of dissolved gases in oil and their corresponding fault types are shown in Table 2.
The variation of characteristic gas concentration at different fault stages of the transformer is shown in Figure 4.

3.2. Data Sources and Measurement Uncertainty

Although the DGA dataset and fault categories are introduced above, it is important to clarify practical factors that may affect gas concentration measurements. In real-world DGA practice, gas concentrations can vary with sampling and laboratory procedures (e.g., oil sampling location and timing, sample sealing and storage, transportation time, and instrument calibration), and are typically quantified by gas chromatography following standard utility/laboratory protocols. Such factors may introduce measurement uncertainty and variability across samples. In addition, the recorded DGA values may be affected by background operating conditions (load and temperature), oil aging and moisture, and occasional contamination, which can lead to noise and potential bias in the measured gas patterns.
From a data-analysis perspective, two types of uncertainty are most relevant: (i) random noise (repeatability error and small fluctuations in measured concentrations) and (ii) systematic bias (differences caused by instrument calibration, laboratory procedures, or site-specific operating conditions). To reduce the influence of scale differences and improve robustness, we apply consistent feature preprocessing before model training and adopt a strict evaluation protocol that separates 5-fold cross-validation (for hyperparameter optimization on the training set) from final testing (on a held-out test set). This design helps avoid optimistic bias due to information leakage and provides a more realistic estimate of generalization performance.

3.3. Evaluation Metrics for Model Performance

To evaluate the performance of the classification model, this study employs several commonly used metrics, including accuracy, confusion matrix, and the accuracy-iteration curve during the optimization process.
The dataset used consists of 4200 samples, covering seven fault categories, with 600 instances per class, ensuring a balanced distribution. The data is split into a training set (70%) and a test set (30%) using stratified sampling, which maintains the proportion of each class in both subsets. This stratification is essential for reducing sampling bias and ensuring reliable evaluation of model performance [26]. Unless otherwise specified, all reported metrics (accuracy and confusion matrix) are computed on the held-out test set. The 5-fold cross-validation accuracy is used only as the fitness function during the hyperparameter optimization on the training set, and it is not mixed with the test-set evaluation.
The sample distribution of the test set and training set is shown in Table 3.
Accuracy is defined as the proportion of correctly classified samples among all samples. It provides an overall measure of model performance but may not fully reflect class-wise performance, especially in imbalanced datasets. However, since the current dataset is class-balanced, accuracy serves as a reliable metric in this case.
To give a more detailed picture of class-wise classification performance, the confusion matrix is adopted. This matrix shows the number of true positives, false positives, false negatives, and true negatives for each category. It helps identify which categories are more prone to classification errors and provides insight into specific patterns of model errors [30].
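The construction of the confusion matrix and the metrics derived from it can be sketched as follows; the labels and injected errors are purely hypothetical, intended only to show how the reported quantities are computed:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Toy labels for seven fault categories (0..6), balanced as in the study.
y_true = np.repeat(np.arange(7), 10)       # 10 samples per class
y_pred = y_true.copy()
y_pred[::13] = (y_pred[::13] + 1) % 7      # inject a few misclassifications

# Row i counts samples of true class i; column j counts predictions j,
# so off-diagonal cells localize the class confusions.
cm = confusion_matrix(y_true, y_pred)

# Macro averages weight each class equally; with a class-balanced test
# set they coincide with the weighted averages.
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred,
                                                   average="macro")
```

Per-class precision and recall can also be read directly off the matrix: precision for class j is cm[j, j] divided by the column-j sum, and recall is cm[j, j] divided by the row-j sum.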
During the hyperparameter optimisation phase using each algorithm, the cross-validation accuracy is recorded and visualised for each iteration. This accuracy-iteration curve reflects the improvement in model performance over time and helps to identify convergence behaviour, stagnation points and possible overfitting [10].

4. Experimental Results

4.1. Accuracy Comparison Among Different Models

In this study, the classification performance of multiple models for gas data was evaluated using MATLAB R2024a (MathWorks, Natick, MA, USA), and the results were imported into Origin 2024 (OriginLab Corporation, Northampton, MA, USA) for visualization. To evaluate the effectiveness of the proposed hybrid optimization approach, the classification performance of five different models is compared, including GWO-SVM, IGWO-SVM, ISSA-SVM, PSO-SVM, and the proposed PSO-GWO-SVM.
All models are evaluated using the same dataset and experimental settings, including a stratified train–test split and 5-fold cross-validation. The overall classification accuracy on the test set is used as the primary metric for performance comparison. The results of the experiment are summarised in Table 4.
The specific classification results of each classification algorithm are shown in Figure 5. The results in Table 4 and Figure 5 are evaluated on the independent test set, while cross-validation is only used within the training set for hyperparameter optimization.
Figure 5 provides a sample-level visualization of the classification results. By comparing the predicted labels (red) with the ground-truth labels (blue) across the test samples, the figure directly shows where misclassifications occur and whether errors are concentrated in certain fault categories. This visualization complements the overall accuracy and confusion-matrix results by offering an intuitive interpretation of prediction consistency at the individual-sample level.
The PSO-GWO-SVM model achieved the highest test accuracy of 96.98%, outperforming PSO-SVM (93.57%), GWO-SVM (92.94%), IGWO-SVM (93.17%), and ISSA-SVM (91.03%) under the same evaluation protocol. In particular, the accuracy gains of PSO-GWO-SVM over these baselines are 3.41, 4.04, 3.81, and 5.95 percentage points, respectively. These improvements indicate that the proposed hybrid optimization provides a more effective hyperparameter search for SVM tuning than the compared single-optimizer baselines and improved variants.
To quantify variability caused by the stochastic nature of the metaheuristic optimization (random population initialization and random coefficients in PSO/GWO updates), the experiment was repeated 10 times under the same data split and experimental settings. The resulting accuracy is 97.24% ± 0.33% (mean ± standard deviation), and the 95% confidence interval for the mean accuracy across runs is [97.01%, 97.47%].
While PSO-SVM achieved an accuracy of 93.57%, and GWO-SVM reached 92.94%, the hybrid PSO-GWO-SVM significantly improved performance by leveraging the strengths of both algorithms. PSO is known for its excellent exploration capabilities, while GWO excels at balancing exploration and exploitation in optimization tasks [31,32]. The combination of these algorithms enables PSO-GWO-SVM to achieve faster convergence and a more effective search of the parameter space, resulting in superior classification accuracy [33,34]. Previous studies have demonstrated that hybrid optimization methods, like PSO-GWO, outperform single optimization algorithms, particularly in handling complex, high-dimensional datasets [35]. Thus, the PSO-GWO-SVM model provides a significant improvement in parameter optimization, making it highly effective for classification tasks.

4.2. Confusion Matrix Comparison

In this paper, the confusion matrix is used as a primary evaluation tool. A confusion matrix is a tabular representation of classification performance that summarizes the numbers of true positives, true negatives, false positives, and false negatives for each class, thereby providing detailed insight into class-wise prediction behavior beyond overall accuracy [30]. Based on the confusion matrix, performance metrics such as precision, recall, and F1-score can be further derived. The confusion matrix of each algorithm shows the prediction results between different categories, including the number of correctly classified samples and the number of misclassified samples. The following is an analysis of the confusion matrices of various algorithms for the seven categories of ‘normal, low energy discharge, high energy discharge, medium-low temperature overheat, high temperature overheat, medium temperature overheat, and low temperature overheat’. The results of the confusion matrix evaluation are shown in Figure 6.
In addition to accuracy, macro-averaged precision, recall, and F1-score are reported to provide a more complete evaluation of multi-class diagnostic performance. These metrics are computed from the confusion matrices. Since the test set is class-balanced, macro-averaged and weighted-averaged results are identical. The detailed performance comparison of the five models on the test set is presented in Table 5.
The results of the confusion matrix analysis reveal that the PSO-GWO-SVM model outperforms the other algorithms in terms of classification accuracy. This hybrid approach combining Particle Swarm Optimization and Grey Wolf Optimization allows for improved parameter optimization, reducing misclassification rates across all categories. Compared to the other models, PSO-GWO-SVM demonstrated fewer misclassifications, especially for the Normal, T1/T2, and T3 classes, highlighting its robustness in handling complex classification tasks. The GWO-SVM and IGWO-SVM models also performed well, especially in the Normal and T3 categories, but they showed slightly higher misclassification rates in D1 and T1/T2 compared to PSO-GWO-SVM. ISSA-SVM performed reasonably well but had higher error rates in the D1 and T1/T2 classes, which impacted its overall performance. These findings are consistent with recent literature that emphasizes the effectiveness of hybrid optimization strategies [36,37].
The superior performance of the hybrid PSO-GWO-SVM is not only reflected by faster convergence, but also by the quality and robustness of the hyperparameter search. The objective landscape induced by cross-validation accuracy is generally non-convex and may contain multiple local optima. In this context, single-optimizer strategies may suffer from either insufficient exploration (prematurely converging to a suboptimal region) or insufficient exploitation (failing to refine around promising regions). The proposed hybrid strategy leverages the complementary behaviors of PSO and GWO: PSO’s velocity-driven update helps maintain population diversity and facilitates escaping local optima, while GWO’s elite-guided encircling mechanism strengthens exploitation and refines the search around high-quality candidate solutions. This cooperative exploration–exploitation balance increases the probability of locating better-performing hyperparameter configurations, leading to improved generalization on the held-out test set. As a result, the hybrid model yields more reliable decision boundaries for multi-class DGA fault discrimination, which is consistent with the observed improvements in accuracy and confusion-matrix performance compared with PSO-SVM, GWO-SVM, IGWO-SVM, and ISSA-SVM.

4.3. Convergence Speed Analysis

In this study, the optimization iteration curves are plotted using the mean 5-fold cross-validation accuracy on the training set, which serves as the fitness value during the optimization process. The independent test set is not used to monitor the optimization progress and is only employed for the final evaluation after the optimal hyperparameters are obtained, as shown in Figure 7.
The PSO-GWO-SVM algorithm demonstrates the best performance, achieving the highest classification accuracy most rapidly compared to other models. This hybrid approach, combining Particle Swarm Optimization for global exploration and Grey Wolf Optimization for local exploitation, allows the model to converge quickly to an optimal solution, reaching a high accuracy level early in the iterations. The PSO-SVM shows a steady improvement over iterations. The GWO-SVM also shows a gradual increase in accuracy, but it requires more iterations to catch up to the hybrid method, reflecting a slower convergence rate. The ISSA-SVM does not perform as well as the other models in terms of final classification accuracy. Finally, the IGWO-SVM shows similar behavior to GWO-SVM, with improvements over time but without reaching the level of performance seen in the hybrid model. These results suggest that hybrid optimization algorithms, such as PSO-GWO-SVM, are highly effective for transformer fault diagnosis, as they balance exploration and exploitation to find optimal solutions more efficiently than single optimization approaches.
A possible reason for the relatively slower convergence observed in some baseline optimizers is the exploration–exploitation imbalance and parameter sensitivity inherent to stochastic metaheuristics. For example, certain algorithms may maintain excessive exploration (slow refinement near promising regions) or become prematurely trapped in suboptimal regions if the population diversity decreases too quickly. In addition, the cross-validation accuracy used as the optimization objective can induce a non-smooth and multi-modal search landscape, making it difficult for some optimizers to consistently improve within limited iterations. By contrast, the proposed PSO-GWO hybrid benefits from complementary behaviors—diversity-preserving exploration and elite-guided exploitation—thereby improving convergence stability and efficiency.
The improvements in classification accuracy and convergence speed also have practical relevance for transformer condition monitoring and maintenance. First, higher diagnostic accuracy can reduce misclassification risk (including false alarms and missed detections), which supports more reliable maintenance prioritization and targeted inspections, especially when multiple units compete for limited maintenance resources. Second, faster and more stable hyperparameter tuning facilitates model updating when new DGA samples become available, making the approach more suitable for periodic re-training in an online/near-online monitoring pipeline and thereby improving early-warning responsiveness. Third, although the current study focuses on fault-type identification, the SVM decision scores (e.g., margins or class-wise confidence scores from the ECOC model) can be further utilized as a proxy for diagnostic confidence; combined with trend analysis of key gases, these scores can support practical alarm thresholding and preliminary fault severity discrimination (e.g., higher confidence and persistent abnormal trends may indicate higher urgency for maintenance actions).
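As a concrete sketch of the confidence-proxy idea above, per-class decision scores can be normalized and compared against an alarm threshold. The softmax normalization and the 0.8 threshold are illustrative assumptions for this sketch, not a procedure taken from the paper.

```python
import math

def confidence_from_scores(scores, alarm_threshold=0.8):
    """Turn per-class decision scores (e.g. ECOC negated losses or SVM margins)
    into normalized confidences via a softmax, and flag a high-urgency alarm
    when the top-class confidence exceeds an illustrative threshold."""
    exp_s = {lab: math.exp(s) for lab, s in scores.items()}
    z = sum(exp_s.values())
    conf = {lab: v / z for lab, v in exp_s.items()}
    top = max(conf, key=conf.get)
    return top, conf[top], conf[top] >= alarm_threshold
```

In practice such a confidence score would be combined with gas-trend analysis before triggering maintenance actions, as discussed above.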

5. Discussion

In this paper, various optimization algorithms integrated with Support Vector Machines are investigated for transformer fault diagnosis, including PSO-SVM, ISSA-SVM, GWO-SVM, IGWO-SVM, and a hybrid model, PSO-GWO-SVM. Through rigorous evaluation using accuracy comparisons, confusion matrix analysis, and optimization iteration curves, the results demonstrate that the proposed hybrid PSO-GWO-SVM algorithm achieves superior performance, obtaining the highest classification accuracy (96.98%) and the most efficient convergence behavior among the tested models.
The hybrid algorithm effectively leverages the global search capabilities of Particle Swarm Optimization and the local exploitation strengths of Grey Wolf Optimization, providing balanced and robust parameter optimization for the SVM.
The confusion matrix analysis further validated the superiority of the PSO-GWO-SVM model, exhibiting fewer misclassifications across different fault categories compared to single optimization-based models. Moreover, the optimization iteration curves confirmed that PSO-GWO-SVM quickly converges to optimal or near-optimal solutions, highlighting its practical applicability in transformer fault diagnosis. In practical substation monitoring scenarios involving multi-source condition perception, including transformer operational data and acoustic voiceprint-based detection, the proposed method can serve as a reliable decision-support component for fault trend analysis and abnormal state identification.
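The macro-averaged precision, recall, and F1 reported alongside the confusion matrices can be recomputed directly from a confusion matrix. The sketch below uses a small illustrative two-class matrix rather than the study's data; macro averaging weights every fault class equally regardless of its sample count.

```python
def macro_metrics(cm, labels):
    """Macro-averaged precision, recall and F1 from a confusion matrix
    cm[true][pred], the same metric family reported in Table 5."""
    ps, rs, fs = [], [], []
    for lab in labels:
        tp = cm[lab][lab]
        fp = sum(cm[t][lab] for t in labels if t != lab)   # predicted lab, wrongly
        fn = sum(cm[lab][p] for p in labels if p != lab)   # true lab, missed
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        ps.append(p)
        rs.append(r)
        fs.append(2 * p * r / (p + r) if p + r else 0.0)
    k = len(labels)
    return sum(ps) / k, sum(rs) / k, sum(fs) / k
```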
It is worth noting that the superiority of the hybrid PSO-GWO-SVM is not merely due to faster convergence. The hybrid search tends to retain broader exploration in early iterations while providing stronger refinement around elite regions in later iterations, thereby lowering the risk of being trapped in local optima. This complementary behavior offers a plausible explanation for the observed improvement in test accuracy over the single-optimizer baselines.
Recent advances in fault diagnosis increasingly emphasize incorporating mechanism-related constraints into feature extraction and model design to improve physical consistency and interpretability. For example, Cheng et al. proposed CFFsBD by explicitly exploiting candidate fault frequencies to guide blind deconvolution for bearing fault feature enhancement [38], and further developed an improved envelope spectrum via candidate fault frequency optimization-gram to adaptively select informative spectral bands, both demonstrating the benefit of embedding mechanism-related priors into the diagnostic pipeline. Inspired by these studies, future work on DGA-based transformer diagnosis may consider integrating mechanism-guided constraints (e.g., gas-generation signatures and physically meaningful ratio/consistency rules) into feature construction or learning objectives to further enhance interpretability [16,39].
Despite the improved accuracy and convergence behavior, several limitations should be acknowledged. First, the current evaluation is conducted on a dataset of limited scale, and real-world DGA data can be larger and class-imbalanced; therefore, future work will investigate scalability and imbalance-aware learning (e.g., cost-sensitive training, re-sampling strategies, and more extensive cross-site validation). Second, DGA patterns may be influenced by operating conditions (load, temperature, oil aging, moisture) and measurement variability, which may cause distribution shifts; future studies will consider robustness evaluation under varying operating regimes and potential domain adaptation/normalization strategies [40,41]. Third, while this paper focuses on offline diagnosis, practical deployment often requires online/continuous monitoring; future work will explore incremental model updating, drift detection, and computationally efficient implementation for near-real-time early warning.
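As one concrete instance of the cost-sensitive option mentioned above, inverse-frequency class weights (the common "balanced" heuristic, offered here as an illustration rather than a method used in this paper) can be computed as:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Cost-sensitive class weights for imbalanced DGA data: each class gets
    weight n_samples / (n_classes * n_class_samples), so rare fault types
    contribute proportionally more to the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {lab: n / (k * c) for lab, c in counts.items()}
```

These weights would then scale the misclassification penalty of each class during SVM training.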
In future work, deep learning can be incorporated mainly as a feature-learning module for DGA inputs (e.g., a lightweight MLP/1D-CNN to learn more discriminative representations), and the proposed PSO-GWO strategy can still be used for robust hyperparameter tuning of the subsequent classifier. Moreover, deep learning–based techniques such as class-balanced training and domain adaptation may help mitigate limitations related to data imbalance and operating-condition variability, and incremental updating can facilitate online monitoring.

6. Conclusions

This study proposed a hybrid PSO-GWO-SVM framework for transformer fault diagnosis based on dissolved gas analysis (DGA) gas classification. By integrating PSO and GWO for SVM hyperparameter tuning, the proposed method achieved improved diagnostic performance and stable convergence compared with representative single-optimizer baselines. Under a strict evaluation protocol that separates cross-validation–based tuning from held-out test assessment, the PSO-GWO-SVM model obtained the best classification accuracy (96.98%) among the tested methods, and the confusion-matrix results further indicated fewer misclassifications across fault categories.
Future work will focus on integrating deep learning for enhanced feature extraction and exploring adaptive and multi-objective optimization strategies for dynamic monitoring scenarios [33]. In addition, limitations related to dataset scale/imbalance, operating-condition variability, and online deployment will be addressed through larger cross-site validation, robustness evaluation under distribution shifts, and efficient incremental updating for near-real-time early warning.

Author Contributions

Conceptualization, J.L. and D.W.; methodology, F.X. and Y.X.; software, J.L.; validation, D.W., Y.X. and C.Y.; formal analysis, C.Y.; investigation, D.W.; resources, J.L.; data curation, Y.C., Q.Z. and C.Y.; writing—original draft preparation, J.L. and F.X.; writing—review and editing, D.W., F.X., Y.X., Q.Z. and C.Y.; visualization, Y.C. and C.Y.; supervision, Y.C.; project administration, Y.C., Q.Z. and C.Y.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This project is funded by the scientific research project of Guangdong Power Grid Co., Ltd. (GDKJXM20240405).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author due to ethical and legal reasons.

Conflicts of Interest

Authors J.L., D.W., F.X. and Y.X. were employed by the company Guangdong Power Grid Corporation Limited. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Zheng, H.; Shioya, R. A Comparison between Artificial Intelligence Method and Standard Diagnosis Methods for Power Transformer Dissolved Gas Analysis Using Two Public Databases. IEEJ Trans. Electr. Electron. Eng. 2020, 15, 1305–1311. [Google Scholar] [CrossRef]
  2. Zhong, M.; Yi, S.; Fan, J.; Zhang, Y.; He, G.; Cao, Y.; Feng, L.; Tan, Z.; Mo, W. Power transformer fault diagnosis based on a self-strengthening offline pre-training model. Eng. Appl. Artif. Intell. 2023, 126, 107142. [Google Scholar] [CrossRef]
  3. Li, J.; Zhang, Q.; Wang, K.; Wang, J.; Zhou, T.; Zhang, Y. Optimal dissolved gas ratios selected by genetic algorithm for power transformer fault diagnosis based on support vector machine. IEEE Trans. Dielectr. Electr. Insul. 2016, 23, 1198–1206. [Google Scholar] [CrossRef]
  4. IEEE Std C57.91-2011 (Revision of IEEE Std C57.91-1995); IEEE Guide for Loading Mineral-Oil-Immersed Transformers and Step-Voltage Regulators. The Institute of Electrical and Electronics Engineers, Inc.: New York, NY, USA, 2012.
  5. Thango, B.A. Dissolved Gas Analysis and Application of Artificial Intelligence Technique for Fault Diagnosis in Power Transformers: A South African Case Study. Energies 2022, 15, 9030. [Google Scholar] [CrossRef]
  6. Hoballah, A.; Mansour, D.E.A.; Taha, I.B.M. Hybrid Grey Wolf Optimizer for Transformer Fault Diagnosis Using Dissolved Gases Considering Uncertainty in Measurements. IEEE Access 2020, 8, 139176–139187. [Google Scholar] [CrossRef]
  7. Shang, H.; Liu, Z.; Wei, Y.; Zhang, S. A Novel Fault Diagnosis Method for a Power Transformer Based on Multi-Scale Approximate Entropy and Optimized Convolutional Networks. Entropy 2024, 26, 186. [Google Scholar] [CrossRef]
  8. Yu, S.; Tan, W.; Zhang, C.; Fang, Y.; Tang, C.; Hu, D. Research on hybrid feature selection method of power transformer based on fuzzy information entropy. Adv. Eng. Inform. 2021, 50, 101433. [Google Scholar] [CrossRef]
  9. Abdelwahab, S.A.M.; Taha, I.B.M.; Fahim, R.; Ghoneim, S.S.M. Transformer fault diagnose intelligent system based on DGA methods. Sci. Rep. 2025, 15, 8263. [Google Scholar] [CrossRef]
  10. Zhang, Y.; Tang, Y.; Liu, Y.; Liang, Z. Fault diagnosis of transformer using artificial intelligence: A review. Front. Energy Res. 2022, 10, 1006474. [Google Scholar] [CrossRef]
  11. Taha, I.B.M.; Ibrahim, S.; Mansour, D.E.A. Power Transformer Fault Diagnosis Based on DGA Using a Convolutional Neural Network With Noise in Measurements. IEEE Access 2021, 9, 111162–111170. [Google Scholar] [CrossRef]
  12. Wu, Y.; Sun, X.; Dai, B.; Yang, P.; Wang, Z. A transformer fault diagnosis method based on hybrid improved grey wolf optimization and least squares-support vector machine. IET Gener. Transm. Distrib. 2022, 16, 1950–1963. [Google Scholar] [CrossRef]
  13. Niola, V.; Savino, S.; Quaremba, G.; Cosenza, C.; Nicolella, A.; Spirto, M. Discriminant Analysis of the Vibrational Behavior of a Gas Micro-Turbine as a Function of Fuel. Machines 2022, 10, 925. [Google Scholar] [CrossRef]
  14. Chen, X.; Lei, Y.; Li, Y.; Parkinson, S.; Li, X.; Liu, J.; Lu, F.; Wang, H.; Wang, Z.; Yang, B.; et al. Large Models for Machine Monitoring and Fault Diagnostics: Opportunities, Challenges and Future Direction. J. Dyn. Monit. Diagn. 2025, 4, 76–90. [Google Scholar] [CrossRef]
  15. Baker, E.; Nese, S.V.; Dursun, E. Hybrid Condition Monitoring System for Power Transformer Fault Diagnosis. Energies 2023, 16, 1151. [Google Scholar] [CrossRef]
  16. Rajesh, K.N.V.P.S.; Rao, U.M.; Fofana, I.; Rozga, P.; Paramane, A. Influence of Data Balancing on Transformer DGA Fault Classification with Machine Learning Algorithms. IEEE Trans. Dielectr. Electr. Insul. 2023, 30, 385–392. [Google Scholar] [CrossRef]
  17. Kari, T.; Gao, W.; Zhao, D.; Abiderexiti, K.; Mo, W.; Wang, Y.; Luan, L. Hybrid feature selection approach for power transformer fault diagnosis based on support vector machine and genetic algorithm. IET Gener. Transm. Distrib. 2018, 12, 5672–5680. [Google Scholar] [CrossRef]
  18. Taha, I.B.M.; Dessouky, S.S.; Ghoneim, S.S.M. Transformer fault types and severity class prediction based on neural pattern-recognition techniques. Electr. Power Syst. Res. 2021, 191, 106899. [Google Scholar] [CrossRef]
  19. Yavuz, L.; Soran, A.; Önen, A.; Li, X.; Muyeen, S.M. Adaptive Fault Detection Scheme Using an Optimized Self-healing Ensemble Machine Learning Algorithm. CSEE J. Power Energy Syst. 2022, 8, 1145–1156. [Google Scholar] [CrossRef]
  20. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November 1995–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  21. Chen, F.; Xu, J.; Zhu, J.; Lei, W.; An, Y. An Adaptive Multi-Objective Particle Swarm Optimization Method with Self-Learning Strategy. In Proceedings of the 2024 IEEE 7th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 20–22 September 2024; pp. 673–678. [Google Scholar]
  22. Gad, A.G. Particle Swarm Optimization Algorithm and Its Applications: A Systematic Review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  23. Chen, T.Y.; Chen, W.N.; Wei, F.F.; Hu, X.M.; Zhang, J. Multiagent Swarm Optimization with Adaptive Internal and External Learning for Complex Consensus-Based Distributed Optimization. IEEE Trans. Evol. Comput. 2025, 29, 906–920. [Google Scholar] [CrossRef]
  24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  25. Zhang, H.; Li, Y.; Shi, S.; Xia, Q.; Chai, M. A novel multi-objective robust optimization method based on improved Gray Wolf optimizer and Kriging model. Proc. Inst. Mech. Eng. Part. C J. Mech. Eng. Sci. 2024, 239, 1872–1892. [Google Scholar] [CrossRef]
  26. Tu, B.; Wang, F.; Huo, Y.; Wang, X. A hybrid algorithm of grey wolf optimizer and harris hawks optimization for solving global optimization problems with improved convergence performance. Sci. Rep. 2023, 13, 22909. [Google Scholar] [CrossRef] [PubMed]
  27. Al-Tashi, Q.; Md Rais, H.; Abdulkadir, S.J.; Mirjalili, S.; Alhussian, H. A Review of Grey Wolf Optimizer-Based Feature Selection Methods for Classification. In Evolutionary Machine Learning Techniques: Algorithms and Applications; Mirjalili, S., Faris, H., Aljarah, I., Eds.; Springer: Singapore, 2020; pp. 273–286. [Google Scholar]
  28. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  29. Seyyedabbasi, A.; Kiani, F. I-GWO and Ex-GWO: Improved algorithms of the Grey Wolf Optimizer to solve global optimization problems. Eng. Comput. 2021, 37, 509–532. [Google Scholar] [CrossRef]
  30. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437. [Google Scholar] [CrossRef]
  31. Du, J.; Liu, Y.; Yu, Y.; Yan, W. A Prediction of Precipitation Data Based on Support Vector Machine and Particle Swarm Optimization (PSO-SVM) Algorithms. Algorithms 2017, 10, 57. [Google Scholar] [CrossRef]
  32. Sulaiman, M.H.; Mustaffa, Z.; Mohamed, M.R.; Aliman, O. Using the gray wolf optimizer for solving optimal reactive power dispatch problem. Appl. Soft Comput. 2015, 32, 286–292. [Google Scholar] [CrossRef]
  33. Otair, M.; Ibrahim, O.T.; Abualigah, L.; Altalhi, M.; Sumari, P. An enhanced Grey Wolf Optimizer based Particle Swarm Optimizer for intrusion detection system in wireless sensor networks. Wirel. Netw. 2022, 28, 721–744. [Google Scholar] [CrossRef]
  34. El-kenawy, E.-S.; Eid, M. Hybrid Gray Wolf and Particle Swarm Optimization for Feature Selection. Int. J. Innov. Comput. Inf. Control 2020, 16, 831–844. [Google Scholar] [CrossRef]
  35. Dai, J.; Song, H.; Sheng, G.; Jiang, X. Dissolved gas analysis of insulating oil for power transformer fault diagnosis with deep belief network. IEEE Trans. Dielectr. Electr. Insul. 2017, 24, 2828–2835. [Google Scholar] [CrossRef]
  36. Azevedo, B.F.; Rocha, A.M.A.C.; Pereira, A.I. Hybrid approaches to optimization and machine learning methods: A systematic literature review. Mach. Learn. 2024, 113, 4055–4097. [Google Scholar] [CrossRef]
  37. Yao, J.; Luo, X.; Li, F.; Li, J.; Dou, J.; Luo, H. Research on hybrid strategy Particle Swarm Optimization algorithm and its applications. Sci. Rep. 2024, 14, 24928. [Google Scholar] [CrossRef]
  38. Cheng, Y.; Zhou, N.; Wang, Z.; Chen, B.; Zhang, W. CFFsBD: A Candidate Fault Frequencies-Based Blind Deconvolution for Rolling Element Bearings Fault Feature Enhancement. IEEE Trans. Instrum. Meas. 2023, 72, 3506412. [Google Scholar] [CrossRef]
  39. Cheng, Y.; Wang, S.; Chen, B.; Mei, G.; Zhang, W.; Peng, H.; Tian, G. An improved envelope spectrum via candidate fault frequency optimization-gram for bearing fault diagnosis. J. Sound. Vib. 2022, 523, 116746. [Google Scholar] [CrossRef]
  40. Khodayar, M.; Liu, G.; Wang, J.; Khodayar, M.E. Deep learning in power systems research: A review. CSEE J. Power Energy Syst. 2021, 7, 209–220. [Google Scholar] [CrossRef]
  41. Mashifane, L.D.; Mendu, B.; Monchusi, B.B. State-of-the-Art Fault Detection and Diagnosis in Power Transformers: A Review of Machine Learning and Hybrid Methods. IEEE Access 2025, 13, 48156–48172. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the social structure of the wolf pack.
Figure 2. SVM Principle Diagram.
Figure 3. Flowchart of the hybrid model algorithm.
Figure 4. Changes in characteristic gas concentrations at different stages of transformer failure.
Figure 5. Classification prediction results of each model. The x-axis denotes the sample index in the test set, and the y-axis denotes the corresponding fault category. For each sample, the blue marker indicates the ground-truth label, while the red marker indicates the predicted label produced by the model. If the red and blue markers overlap, the sample is correctly classified; otherwise, a misclassification occurs. If shown, the vertical connector highlights the discrepancy between the predicted and true labels. For example, at x = 100, the ground-truth label is Normal (blue), whereas the predicted label is D1 (red), indicating a misclassification. Subfigures (ae) correspond to the five models evaluated in this study.
Figure 6. Confusion matrix for different classification algorithms.
Figure 7. Comparison of optimization curves (mean 5-fold CV accuracy on the training set).
Table 1. Optimizer-specific settings and search bounds.

| Model | Max Iterations | Optimized Variables (log10) | Key Optimizer Hyperparameters | Additional Operators/Stopping |
|---|---|---|---|---|
| PSO-SVM | 20 | (x1, x2) = (log10 C, log10 S) | Inertia weight w = 0.7; learning factors c1 = 1.5, c2 = 1.5 | Early stopping if the best CV accuracy shows no meaningful improvement for 5 consecutive iterations (threshold 10^−4); boundary handling by clipping |
| ISSA-SVM | 20 | (x1, x2) = (log10 C, log10 S) | Control coefficient a linearly decreases from 2 to 0; standard α/β/δ encircling update | Boundary handling by clipping |
| GWO-SVM | 20 | (x1, x2) = (log10 C, log10 S) | a decreases 2 → 0 (linear); elite-guided encircling update | Chaotic initialization (Logistic map); Lévy-flight perturbation (scale 0.01, β = 1.5); boundary clipping |
| IGWO-SVM | 20 | (x1, x2) = (log10 C, log10 S) | PD = 0.7, SD = 0.2, ST = 0.8; adaptive weight w(t) = 0.4cos(πt/(2T)) | Lévy flight (β = 1.5) and Cauchy mutation (scouts); boundary clipping |
| PSO-GWO-SVM | 20 | (x1, x2) = (log10 C, log10 S) | GWO update with a decreasing 2 → 0 + PSO velocity term (w = 0.7, c1 = c2 = 1.5) | Boundary clipping |
Table 2. Selected data on dissolved gases in oil and their corresponding fault types. Gas columns give the volume fraction of each characteristic gas (μL/L).

| H2 | CH4 | C2H6 | C2H4 | C2H2 | Fault Type | Corresponding Code |
|---|---|---|---|---|---|---|
| 7.5 | 5.7 | 3.4 | 2.6 | 3.2 | Normal | Normal |
| 46.81 | 36.88 | 8.51 | 7.52 | 0.28 | Normal | Normal |
| 550 | 53 | 3 | 4 | 200 | Low energy discharge | D1 |
| 80 | 20 | 6 | 20 | 62 | Low energy discharge | D1 |
| 540 | 55 | 61 | 570 | 128 | High energy discharge | D2 |
| 940 | 440 | 3 | 530 | 460 | High energy discharge | D2 |
| 4.32 | 193 | 118 | 125 | 0 | Medium-low temperature overheating | T1/T2 |
| 170 | 320 | 535 | 20 | 3.2 | Medium-low temperature overheating | T1/T2 |
| 56 | 286 | 96 | 928 | 7 | Overheating | T3 |
| 274 | 376 | 55 | 1002 | 17 | Overheating | T3 |
| 300 | 490 | 180 | 360 | 95 | Medium temperature overheating | T2 |
| 27 | 90 | 42 | 63 | 0.2 | Medium temperature overheating | T2 |
| 150 | 60 | 40 | 70 | 1 | Low temperature overheating | T1 |
| 18 | 126 | 241 | 28 | 0 | Low temperature overheating | T1 |
Table 3. Sample distribution of the dataset.

| Corresponding Code | Normal | D1 | D2 | T1/T2 | T3 | T2 | T1 |
|---|---|---|---|---|---|---|---|
| Total sample size | 600 | 600 | 600 | 600 | 600 | 600 | 600 |
| Number of training samples | 420 | 420 | 420 | 420 | 420 | 420 | 420 |
| Number of test samples | 180 | 180 | 180 | 180 | 180 | 180 | 180 |
Table 4. Accuracy on the independent test set for different models.

| Model | Accuracy |
|---|---|
| PSO-SVM | 0.9357 |
| ISSA-SVM | 0.9103 |
| GWO-SVM | 0.9294 |
| IGWO-SVM | 0.9317 |
| PSO-GWO-SVM | 0.9698 |
Table 5. Performance comparison on the test set.

| Model | Precision (Macro) | Recall (Macro) | F1-Score (Macro) |
|---|---|---|---|
| PSO-SVM | 0.9367 | 0.9357 | 0.9352 |
| ISSA-SVM | 0.9120 | 0.9103 | 0.9285 |
| GWO-SVM | 0.9299 | 0.9293 | 0.9313 |
| IGWO-SVM | 0.9314 | 0.9319 | 0.9097 |
| PSO-GWO-SVM | 0.9700 | 0.9698 | 0.9697 |

Share and Cite

Lai, J.; Weng, D.; Xian, F.; Xie, Y.; Chen, Y.; Zhou, Q.; Yuan, C. A Hybrid Optimization Model for Transformer Fault Diagnosis Based on Gas Classification. Digital 2026, 6, 24. https://doi.org/10.3390/digital6010024
