Article

A Novel Hybrid Adaptive Multi-Resolution Feature Extraction Method for Power Quality Disturbance Detection

by
Musaed Alrashidi
Department of Electrical Engineering, College of Engineering, Qassim University, Buraydah 52571, Saudi Arabia
Mathematics 2026, 14(5), 784; https://doi.org/10.3390/math14050784
Submission received: 9 January 2026 / Revised: 7 February 2026 / Accepted: 18 February 2026 / Published: 26 February 2026
(This article belongs to the Section E: Applied Mathematics)

Abstract

Monitoring power quality (PQ) and classifying disturbances are essential for guaranteeing the reliable operation of contemporary electrical systems. Nonetheless, deriving discriminative features from PQ signals poses difficulties due to the complexity and non-stationary characteristics of disturbances. Therefore, this research introduces a novel Hybrid Adaptive Multi-Resolution Feature Extraction (HAMRFE) approach for classifying power quality disturbances (PQDs). The proposed HAMRFE framework incorporates six synergistic techniques: adaptive signal decomposition, multi-resolution wavelet analysis, time–frequency analysis, morphological feature extraction, entropy-based feature extraction, and feature selection optimization. Experiments are performed on a dataset consisting of fifteen types of PQDs with differing noise levels. In addition, the performance of five classification algorithms is assessed, including Support Vector Machine (SVM), Artificial Neural Networks, Random Forest, Extreme Gradient Boosting, and K-nearest neighbor. The results indicate the exceptional efficacy of SVM utilizing HAMRFE features, with classification accuracies of 99.86% for noiseless signals, 99.85% at 40 dB, 99.82% at 30 dB, 99.74% at 20 dB, and 97.92% at 10 dB noise levels. Additionally, an analysis of different feature set sizes reveals that the set comprising 125 features is optimal at all noise levels, achieving a balance between computational efficiency and classification accuracy. Finally, the proposed HAMRFE approach exhibits remarkable resilience to noise and offers a thorough framework for classifying PQDs in practical applications.

1. Introduction

1.1. Background

Power quality (PQ) has become a critical concern in modern electrical networks due to the increasing integration of sensitive electronic equipment, renewable energy sources, and power electronic devices. PQ denotes the characteristics of electricity at a particular location inside an electrical system, evaluated against predetermined reference criteria. PQ disturbances (PQDs) are deviations of these characteristics from the baseline values, alterations that are unfavorable to the operators of the electrical power grid. PQDs, including voltage sags, swells, interruptions, harmonics, and transients, can cause significant damage to equipment, disrupt industrial operations, and lead to substantial economic losses [1,2]. Therefore, accurate detection and classification of these disturbances are essential for implementing appropriate mitigation strategies and ensuring the reliable operation of electrical systems.
Signal processing and pattern recognition are important to modern power systems, enhancing dependability, efficiency, and security. Advanced signal processing techniques offer accurate diagnosis and characterization of PQDs by extracting substantial information from signals at various resolution levels [3]. These technologies provide real-time monitoring systems that continuously assess power quality metrics and implement appropriate mitigation methods upon detecting anomalies. Conversely, pattern recognition algorithms employ this processed data to classify disturbances and subsequently facilitate the implementation of suitable responses. Hence, the incorporation of signal processing for feature extraction and pattern recognition for classification enhances the detection of PQDs, thereby strengthening the stability of power systems.
For feature extraction, various signal processing techniques have been established for the classification of PQDs. For instance, time-domain analysis techniques derive characteristics including root mean square (RMS) values, peak amplitude, and duration. On the other hand, frequency-domain methods utilize the Fourier Transform (FT) to examine harmonic content and spectral properties [4]. Time–frequency analysis techniques, such as Short-Time Fourier Transform (STFT) [5], Wavelet Transform (WT) [6], and S-transform [7], have become increasingly popular for their capacity to simultaneously capture temporal and spectral information. In addition, the classification of PQD signals has extensively utilized various machine learning (ML) methods, including Support Vector Machine (SVM) [8], Artificial Neural Networks (ANNs) [9], decision trees (DT) [10], and Random Forest (RF) [11], which have exhibited differing levels of efficacy. However, the efficacy of these classifiers significantly depends on the quality of the extracted features.

1.2. Related Works

The WT is widely employed for PQ analysis due to its multi-resolution capabilities. The Discrete Wavelet Transform (DWT) decomposes signals into approximation and detail coefficients at several scales, enabling the extraction of information across various frequency bands. For instance, Dekhandji, F.Z. [12] utilizes the DWT to detect and classify PQDs, such as sags, swells, transients, and harmonics. The approach involves decomposing signals with the DWT to identify disturbance characteristics, alongside employing ML techniques such as ANNs and SVM for classification. The results demonstrate that the proposed approach enables rapid and accurate identification of disturbances, with detection times of less than a quarter of a cycle. In addition, the study by [13] aims to enhance the identification and classification of PQDs via a dual-stage methodology that combines DWT and the Extreme Learning Machine (ELM). The methodology employs multi-resolution signal analysis with DWT for feature extraction, followed by the ELM for rapid classification, executed on a Xilinx Zynq-7000 SoC FPGA for real-time processing. The system attained a classification accuracy of 99.69%, surpassing conventional methods such as STFT and Fast Fourier Transform (FFT). Nevertheless, standard wavelet analysis is limited in its ability to analyze the local features of non-stationary signals.
Empirical Mode Decomposition (EMD) is an adaptive signal processing technique that separates signals into intrinsic mode functions (IMFs) based on their local oscillatory characteristics. Unlike the WT, EMD is data-driven and does not require a preexisting basis function, making it particularly suitable for non-linear and non-stationary signals, therefore rendering it advantageous for several research avenues. In electrical power system applications, the combination of EMD and the Hilbert–Huang Transform (HHT) was first employed by Kurbatsky et al. in [14,15]. In 2011, Kurbatsky et al. introduced a two-stage adaptive forecasting methodology wherein a time series is initially decomposed into basis functions using the HHT, and the resultant components are then utilized by an ANN to precisely forecast highly variable active power flows [14]. In addition, in 2014, Kurbatsky et al. developed a hybrid methodology for predicting non-stationary time series, specifically power prices, which combined the HHT with EMD [15]. Following these pioneering initiatives, several studies have employed EMD for power system analysis, such as for PQD classification. For instance, the authors in [16] use EMD in conjunction with the HHT for feature extraction to categorize the sources of voltage sags. They utilize Probabilistic Neural Network (PNN) and Multi-layer Neural Network (MLNN) for classification, emphasizing the efficacy of the EMD approach. The results demonstrate that the EMD approach with a PNN attains an overall efficiency of 98.63%, surpassing other classifiers in identifying the source of voltage sag. Furthermore, the study in [17] aims to classify ten PQD types, utilizing a PNN and evaluating its efficacy against a MLNN. The proposed method employs EMD to extract features from ten PQ signals, which are used as inputs into a PNN for classification. 
The findings indicate that the PNN effectively differentiates ten types of disturbances, achieving a classification accuracy of 98.3%. Nonetheless, despite the effectiveness of EMD in acquiring the properties of PQDs, EMD alone may not capture all relevant attributes of complex PQDs.
Conventional feature extraction methods are sometimes insufficient for detecting the distinct characteristics of several disturbance classes. The vast dimensionality of raw signal data demands efficient feature extraction techniques to reduce computational complexity while preserving discriminative information. Hence, in recent decades, several studies have concentrated on hybrid methodologies that integrate various feature extraction strategies to enhance classification efficacy. For instance, Sexena et al. [18] propose an integrated Hilbert Transform (HT) and WT for the classification of transient disturbances. The results demonstrate that SVM, employing data from both HT and WT, attains superior classification accuracy relative to other models. On the other hand, Xu et al. [19] develop an effective methodology for the identification and classification of five PQDs through a hybrid approach that integrates Wavelet Packet Transform (WPT) and Local Mean Decomposition (LMD). The results reveal that the LMD efficiently identifies signal anomalies, while their Extreme Learning Machine (ELM) classifier exhibits enhanced generalization capabilities and resilience to noise in comparison to conventional techniques such as backpropagation neural networks and SVM. Despite the improved effectiveness of these hybrid methods, they sometimes lack a systematic framework for integrating complementary techniques and selecting the most discriminative features.

1.3. Main Contributions

According to the preceding discussion, several obstacles remain in the classification of PQDs, including the need for feature extraction that accurately represents various disturbance types, the reduction in computational complexity while preserving discriminative information, and the maintenance of classification effectiveness with different noise types. This research presents a novel Hybrid Adaptive Multi-Resolution Feature Extraction (HAMRFE) framework for the classification of PQDs. The HAMRFE framework integrates six complementary methodologies: adaptive signal decomposition using EMD, multi-resolution analysis through WT, time–frequency analysis employing modified S-transform, morphological feature extraction, entropy-based feature extraction, and feature selection optimization. Therefore, the subsequent points are the main contributions of this study in relation to other published works in the literature:
  • An extensive feature extraction framework that integrates various complementary strategies to identify the unique attributes of diverse PQDs.
  • A systematic feature selection process that identifies the most distinguishing features for the classification of PQD signals.
  • A comprehensive assessment of the developed methodology utilizing various classifiers and noise levels.
  • Examination of the optimum feature set size for optimizing computational efficiency and classification accuracy.
The subsequent sections of this work are structured as follows: Section 2 outlines the comprehensive approach of the proposed HAMRFE architecture. Section 3 highlights the experimental configuration, including the development of the PQD dataset, classification methods, and performance evaluation metrics. Section 4 presents the results and discussion, including a comparison of classifier performance, an evaluation of noise robustness, optimization of feature set size, and an assessment of feature relevance. Finally, Section 5 concludes the study and explores the limitations of this work and potential research directions.

2. Materials and Methods

This section thoroughly explains the research framework and provides an overview of the HAMRFE algorithm.

2.1. HAMRFE Framework Overview

The Hybrid Adaptive Multi-Resolution Feature Extraction (HAMRFE) framework incorporates six complementary algorithms to extract distinctive features from PQD data: adaptive signal decomposition, multi-resolution wavelet analysis, time–frequency analysis, morphological feature extraction, entropy-based feature extraction, and feature selection and optimization. Each component addresses distinct attributes of PQD signals, collectively offering a thorough description that encompasses both time-domain and frequency-domain features, along with signal complexity and irregularity patterns. The framework is engineered to be adaptable, autonomously modifying itself to the local characteristics of non-stationary signals, which is especially crucial for the classification of PQDs. Figure 1 depicts the comprehensive architecture of the HAMRFE framework.
The HAMRFE algorithm employs features that are both physically and mathematically interpretable, explicitly constructed across various domains: time, frequency, time–frequency, morphology (signal shape), and entropy (signal complexity). Hence, the HAMRFE algorithm executes a multi-view signal representation rather than a single-mechanism representation. This multi-view design makes the proposed algorithm more interpretable and thus more useful for power system engineers.
The HAMRFE method’s workflow, seen in Figure 2, comprises five principal steps: signal preprocessing, feature extraction, feature selection, classification, and performance evaluation. These steps are explained as follows:
  • Step 1: Signal Preprocessing: The HAMRFE method starts with generating synthetic PQD signals. Thereafter, the signals should be processed before feeding them to the feature extraction components. The preprocessing procedures encompass noise augmentation (for robustness evaluation), normalization, and segmentation.
  • Step 2: Feature Extraction: The PQD signals are then fed to the six HAMRFE components for feature extraction, shown in Figure 1. Each of the PQD signals is processed by each component, accompanied by typical outputs from each approach. These features are statistical, spectral, morphological, and entropy-based.
  • Step 3: Feature Ranking: The features extracted in Step 2 are then ranked by importance, which reflects the relative value of each feature. Optimal feature subsets of differing sizes (5, 10, 20, 25, 50, 75, 100, 125, and 148 features) are then selected from this ranking.
  • Step 4: Classification Model: The considered ML algorithms, including SVM, ANN, RF, Extreme Gradient Boosting (XGBoost) version 1.0.9, and KNN, are trained using the training dataset. Each of the algorithms is exposed to the same training and testing datasets.
  • Step 5: Performance Evaluation: The developed classification models are evaluated using the testing dataset. The evaluation metrics, including accuracy, precision, recall, F1-Score, and confusion matrix, are employed to evaluate classification performance.
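As an illustration of Step 1, the noise-augmentation and normalization operations can be sketched in a few lines of Python. The function names, the seeding scheme, and the peak-normalization choice below are illustrative assumptions, not details taken from the paper:

```python
import math
import random

def add_awgn(signal, snr_db, seed=0):
    """Corrupt a signal with white Gaussian noise at a target SNR (in dB).

    Sketch of the noise-augmentation step used for robustness evaluation
    (e.g., the 10-40 dB levels considered in the paper).
    """
    rng = random.Random(seed)
    p_signal = sum(s * s for s in signal) / len(signal)   # average signal power
    p_noise = p_signal / (10 ** (snr_db / 10))            # noise power for target SNR
    sigma = math.sqrt(p_noise)
    return [s + rng.gauss(0.0, sigma) for s in signal]

def normalize(signal):
    """Peak-normalize a signal to [-1, 1] (per-unit scaling)."""
    peak = max(abs(s) for s in signal) or 1.0
    return [s / peak for s in signal]
```

A synthetic 50 Hz waveform passed through `add_awgn(clean, 20)` yields an empirical SNR close to 20 dB, which is how the noisy variants of the dataset can be generated.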
The subsequent subsections offer a comprehensive discussion of each element of the HAMRFE framework.

2.2. Adaptive Signal Decomposition

The initial element of HAMRFE utilizes EMD to adaptively partition the signal into IMFs. The EMD algorithm was introduced in [20] and is an adaptive data-driven technique tailored for the analysis of non-linear and non-stationary signals, such as PQD signals. The fundamental concept of EMD is to deconstruct a complex signal into a limited collection of IMFs and a residual component, each signifying oscillatory modes at various scales.
Given a PQD signal $V(t)$, the EMD process decomposes it into a finite set of IMFs $c_i(t)$ and a residue $r_K(t)$:
$V(t) = \sum_{i=1}^{K} c_i(t) + r_K(t)$  (1)
where $K$ is the number of extracted IMFs. The decomposition proceeds as follows [21]:
  1. Determine Extrema: Identify all local maxima and minima of the raw PQD signal $V(t)$.
  2. Create Envelopes: Use cubic spline interpolation to connect the local maxima and the local minima separately, constructing the upper envelope $e_{max}(t)$ and the lower envelope $e_{min}(t)$. The mean of the two envelopes is $m(t) = \frac{e_{max}(t) + e_{min}(t)}{2}$.
  3. Extract the Detail Component: Subtract the mean envelope $m(t)$ from the original signal to obtain a candidate component $f(t) = V(t) - m(t)$. This step isolates the oscillatory mode contained in the signal.
  4. Verify IMF Conditions: The candidate function $f(t)$ is checked against the two defining conditions for an IMF:
     • the numbers of extrema and zero-crossings of $f(t)$ must be equal or differ by at most one;
     • the mean of the upper and lower envelopes must be zero everywhere along the signal.
  5. If $f(t)$ meets these criteria, it becomes the first intrinsic mode function, $imf_1(t)$. Otherwise, the sifting process (steps 1–4) is repeated on $f(t)$ until the IMF criteria are satisfied.
  6. Extract IMF and Repeat: After an IMF is extracted, subtract it from the signal to produce a residual $r(t) = V(t) - imf_1(t)$. The residual then becomes the new signal, and steps 1–4 are repeated to derive further IMFs until the residual is monotonic or has fewer than two extrema, marking the end of the decomposition.
  7. Reconstruction: The original signal is recovered by summing all IMFs and the final residual: $V(t) = \sum_{i=1}^{K} imf_i(t) + r_K(t)$.
The benefits of EMD stem from its adaptive qualities and physical interpretability, as each IMF represents an intrinsic oscillatory mode at a certain time scale, rendering it particularly effective for studying signals with non-linear and non-stationary properties, such as PQD data. After the PQD signal is decomposed into a set of IMFs, the extracted features include the IMF energy distribution, statistical measures (mean, standard deviation, kurtosis, and skewness), zero-crossing rates, instantaneous frequency characteristics, and inter-IMF correlations.
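The per-IMF features listed above are straightforward to compute once the IMFs are available. A minimal sketch, assuming the IMFs have already been obtained from an EMD implementation (e.g., the PyEMD package); the function name and the dictionary layout are illustrative:

```python
import math

def imf_features(imfs):
    """Per-IMF features used by the EMD component of HAMRFE (a sketch).

    `imfs` is a list of IMFs, each a list of floats. Returns the energy,
    standard deviation, and zero-crossing rate of each IMF.
    """
    feats = []
    for imf in imfs:
        n = len(imf)
        energy = sum(s * s for s in imf)
        mean = sum(imf) / n
        std = math.sqrt(sum((s - mean) ** 2 for s in imf) / n)
        # fraction of adjacent sample pairs with a sign change
        zcr = sum(1 for a, b in zip(imf, imf[1:]) if a * b < 0) / (n - 1)
        feats.append({"energy": energy, "std": std, "zcr": zcr})
    return feats
```

Kurtosis, skewness, and inter-IMF correlations would follow the same per-IMF loop pattern.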
Figure 3 illustrates the results of the EMD component in adaptively decomposing PQD signals (voltage transient oscillation) into IMFs. The figure displays the progression of the original signal by the EMD process, resulting in several IMF components, each representing distinct oscillatory modes of the signal. In addition, the IMF features, such as energy, standard deviation, and zero-crossing rate, are shown for such a signal.

2.3. Multi-Resolution Wavelet Analysis

The second element employs the Discrete Wavelet Transform (DWT). The DWT is an effective signal processing method that examines signals across many scales and resolutions by dividing the original signal into distinct frequency components [22]. This multiresolution approach is especially appropriate for non-stationary signals like PQDs, as it successfully captures both time and frequency information.
The DWT employs a designated wavelet function $\psi(t)$, referred to as the mother wavelet, to produce a set of wavelet basis functions via dilations and translations. The scaled and translated wavelets are expressed as $\psi_{j,k}(t) = 2^{j/2}\,\psi(2^{j}t - k)$, where $j$ denotes the scale (decomposition level) and $k$ the position (translation). The aim of the DWT is to decompose the original signal $V(t)$ into approximation and detail coefficients at various levels [23]:
  • Approximation coefficients ($A_j$): represent the low-frequency, coarse characteristics of the signal.
  • Detail coefficients ($D_j$): represent the high-frequency, fine-grained characteristics at each level.
The complete signal can be reconstructed from these coefficients via the inverse wavelet series $f(t) = \sum_{j=-\infty}^{+\infty} \sum_{k=-\infty}^{+\infty} d_{j,k}\,\psi_{j,k}(t)$, where $d_{j,k}$ is the wavelet coefficient, obtained as follows:
$d_{j,k} = \int_{\mathbb{R}} V(t)\,\psi_{j,k}(t)\,dt$  (2)
The wavelet basis functions are discretized variants derived from ψ ( t ) , sampled at particular scales and locations. The DWT utilizes filter banks—low-pass and high-pass filters—to iteratively divide the signal into approximation and detail components. At each level, the approximation coefficients are split into finer scales, facilitating a multi-level layered analysis.
The DWT facilitates the analysis of signals over many frequency bands with varying resolutions, effectively capturing both transient and steady-state characteristics [24]. This delivers accurate information about the timing of specific frequency components, which is crucial for PQD research. The filter bank implementation also enables fast computation, making the method suitable for real-time applications such as PQD detection.
Our proposed method employs the Daubechies wavelet (db4) at a decomposition level of 5, which has demonstrated efficacy in PQD analysis. Hence, the features are extracted from the approximation and detail coefficients, and these features are related to wavelet energy distribution, the entropy of wavelet coefficients, statistical measures of coefficients, and energy ratios between decomposition levels. To exhibit the wavelet decomposition process, Figure 4 demonstrates the analysis of signals, such as voltage sag, swell, and impulse transient, across various resolution levels. The hierarchical structure demonstrates the division into approximation and detailed coefficients over several scales.
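The filter-bank recursion described above can be illustrated with the Haar wavelet, whose two-tap filters keep the sketch short. The paper itself uses db4 at decomposition level 5; the code below is a simplified stand-in with the same split-the-approximation structure:

```python
import math

def haar_dwt(signal):
    """One level of the DWT with the Haar wavelet (a simplified stand-in
    for the db4 filters used in the paper; same filter-bank structure)."""
    s = math.sqrt(2.0)
    half = len(signal) // 2
    approx = [(signal[2 * k] + signal[2 * k + 1]) / s for k in range(half)]  # low-pass
    detail = [(signal[2 * k] - signal[2 * k + 1]) / s for k in range(half)]  # high-pass
    return approx, detail

def wavelet_decompose(signal, levels=5):
    """Iterate the filter bank: at each level, split the approximation again."""
    details = []
    a = signal
    for _ in range(levels):
        if len(a) < 2:
            break
        a, d = haar_dwt(a)
        details.append(d)
    return a, details
```

The orthonormal scaling by $\sqrt{2}$ preserves signal energy across levels, which is what makes the wavelet-energy and energy-ratio features well defined.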

2.4. Time–Frequency Analysis

The third phase of the proposed framework is utilizing the S-transform. The S-transform is an advanced time–frequency analysis method intended for the examination of non-stationary signals, characterized by their evolving statistical features across time. The S-transform was introduced by Stockwell [25], and it enhances the STFT and WT by integrating the benefits of both, facilitating improved localization in both time and frequency domains.
The continuous S-transform of a signal $V(t)$ is mathematically defined as an integral incorporating a frequency-dependent Gaussian window $w(t-\tau, f)$. The fundamental concept is to calculate a localized FT using a window whose width adjusts with frequency, facilitating multi-resolution analysis. The continuous S-transform is given by:
$S(\tau, f) = \int_{-\infty}^{+\infty} V(t)\,\frac{|f|}{\sqrt{2\pi}}\,e^{-\frac{(t-\tau)^2 f^2}{2}}\,e^{-j 2\pi f t}\,dt$  (3)
where $\tau$ is the time of spectral localization, $f$ is the frequency, and $\frac{|f|}{\sqrt{2\pi}}\,e^{-\frac{(t-\tau)^2 f^2}{2}}$ is the Gaussian window function.
The discrete S-transform of a signal sampled at intervals $T$ calculates the Fourier coefficients using a frequency-dependent Gaussian window, encapsulating both amplitude and phase information across time and frequency. The transformation generates a complex matrix $S(\tau, f)$, with each entry representing the magnitude and phase at a particular time $\tau$ and frequency $f$. The magnitude indicates the signal's energy at a given time and frequency, while the phase conveys detailed information about the waveform's composition.
To improve the time–frequency resolution, the modified S-transform is employed in this study using an adjustable Gaussian window:
$S_{\gamma}(\tau, f) = \int_{-\infty}^{+\infty} V(t)\,\frac{|f|}{\gamma\sqrt{2\pi}}\,e^{-\frac{(t-\tau)^2 f^2}{2\gamma^2}}\,e^{-j 2\pi f t}\,dt$  (4)
where γ is a parameter that regulates the resolution. When γ < 1, time resolution enhances while frequency resolution diminishes; conversely, when γ > 1, frequency resolution improves at the cost of time resolution.
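A direct discrete evaluation of the modified S-transform can be sketched from Equation (4). The frequency-domain formulation below (shift the spectrum by $n$, weight with a Gaussian of width $\propto \gamma/n$, inverse-transform back to time) and its $O(N^3)$ loop structure are illustrative simplifications, not the paper's implementation:

```python
import cmath
import math

def modified_s_transform(x, gamma=1.0):
    """Discrete modified S-transform of a real signal x (a hedged sketch).

    Returns a dict mapping each frequency index n (1..N//2) to a list of
    N complex values over tau. Direct evaluation; for illustration only.
    """
    N = len(x)
    # DFT of the signal: X[m] = sum_k x[k] e^{-j 2 pi m k / N}
    X = [sum(x[k] * cmath.exp(-2j * math.pi * m * k / N) for k in range(N))
         for m in range(N)]
    S = {}
    for n in range(1, N // 2 + 1):                  # skip the DC row
        row = []
        for tau in range(N):
            acc = 0j
            for m in range(-N // 2, N // 2):        # symmetric spectral window
                g = math.exp(-2.0 * (math.pi * m * gamma / n) ** 2)
                acc += X[(m + n) % N] * g * cmath.exp(2j * math.pi * m * tau / N)
            row.append(acc / N)
        S[n] = row
    return S
```

For a pure sinusoid, the row whose frequency index matches the tone carries almost all of the energy, which is the localization property the classifier features rely on.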

Extraction of Time–Frequency Matrix

The extraction of the time–frequency matrix is an essential procedure in the preparation of PQD signals for classification by ML techniques [26]. This method entails converting raw signals into a structured representation that accurately represents their temporal and spectral attributes, while also enhancing computational performance.
Upon applying the S-transform to a power disturbance signal, a two-dimensional (2D) matrix is produced, depicting the signal’s time–frequency distribution. This matrix represents the amplitude (or energy) information across different frequencies and time periods, with rows denoting specific frequency components and columns indicating time instances. The matrix offers an extensive feature map that represents the signal’s dynamic behavior. The features extracted are related to global time–frequency characteristics, frequency band energies, energy ratios between frequency bands, and time–frequency pattern metrics. Figure 5 demonstrates the modified S-transform procedure, depicting the conversion from time-domain to time–frequency representation. The colorful spectrograms in the subplots illustrate the unique patterns shown by several disturbances, including voltage harmonics and flickers, within the time–frequency domain.

2.5. Morphological Features Extraction

Morphological features are a crucial category of signal attributes utilized in the classification of PQDs since they proficiently encapsulate the shape, structure, and temporal aspects of disturbance waveforms. These features examine the geometric and topological characteristics of PQD signals, offering supplementary information to spectral and entropy-based features.
In this study, the signal envelope is extracted using the Hilbert transform: $H[V(t)] = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{V(\tau)}{t-\tau}\,d\tau$. The analytic signal is then formed as $z(t) = V(t) + jH[V(t)] = a(t)e^{j\phi(t)}$, where $a(t)$ is the instantaneous amplitude (envelope) and $\phi(t)$ the instantaneous phase. From the envelope, the following features are extracted [27,28]:
  1. Peak and Crest Factors:
     • Peak Factor: The ratio of the signal's maximum amplitude to its RMS value. It signals the presence of transient spikes or abnormal peaks typical of specific disturbances:
$\text{Peak Factor} = \frac{\max|V(t)|}{\sqrt{\frac{1}{N}\sum_{i=1}^{N} V(t_i)^2}}$  (5)
     • Crest Factor: Analogous to the peak factor, it quantifies the severity of waveform peaks relative to the RMS value. Elevated crest factors frequently indicate transient or impulsive disturbances.
  2. Duration and Rise Time:
     • Duration: The total time during which the signal deviates beyond a specified threshold, essential for distinguishing brief transient disturbances from prolonged faults.
     • Rise Time: The time required for the signal to rise from a defined low level to a higher level during a disturbance event, reflecting the character of the disturbance onset.
  3. Zero-Crossing Features: The zero-crossing rate and its distribution capture how often the signal changes polarity, indicating the oscillatory character of the waveform. Abrupt surges in zero-crossings may signal impulsive events or transient disruptions.
  4. Symmetry and Skewness of Waveforms: Waveform symmetry distinguishes symmetrical from asymmetrical disturbances. Skewness measures the asymmetry of the amplitude distribution, whereas kurtosis evaluates the sharpness of the waveform's peak.
  5. Shape Characterizations and Morphological Operations:
     • Envelope Characteristics: The amplitude envelope is analyzed for its peaks, troughs, and overall contour, revealing the transient dynamics of the disturbance.
     • Morphological Transformations: Operations such as dilation and erosion from mathematical morphology are applied to the waveform to extract features associated with peaks, valleys, and structural changes, emphasizing particular disturbance patterns.
  6. Area and Energy of Disturbance Segments: The area under the waveform over a disturbance interval provides insight into its shape and energy content. These quantities differentiate disturbance types according to their amplitude and duration.
  7. Number and Distribution of Local Extrema: The count of local maxima and minima, together with their temporal distribution, helps characterize the waveform's complexity and transient behavior.
Figure 6 depicts the analysis of signal morphology and envelope attributes. The figure illustrates the signal contours, peak detection, and gradient-based features that represent the geometric characteristics of the pure voltage signal, as well as those of voltage interruptions, harmonics, notching, sags with harmonics, and oscillations.

2.6. Entropy-Based Feature Extraction

Entropy-based characteristics are extensively utilized in PQD classification because they effectively measure the complexity, unpredictability, and disorder present in signals. These qualities encapsulate the inherent informational content of PQDs and are proficient in differentiating various forms of anomalies. The primary categories of entropy features utilized encompass Shannon entropy, spectral entropy, sample entropy, permutation entropy, wavelet energy entropy, and IMF entropy [29,30].
  1. Shannon Entropy: Shannon entropy quantifies the total uncertainty or disorder within a signal, determined by its amplitude distribution. A probability distribution of the signal's amplitude values is first constructed, and the Shannon entropy is then computed using Equation (6):
$H = -\sum_{i} p_i \log_2 p_i$  (6)
where $p_i$ denotes the probability associated with the $i$-th amplitude level. A greater Shannon entropy signifies increased unpredictability, which is beneficial for identifying significantly irregular disturbances.
  2. Spectral Entropy: Spectral entropy generalizes Shannon entropy to the frequency domain. After the power spectral density of the signal is computed, spectral entropy assesses how the spectral power is distributed, offering a view of the complexity of the signal's frequency content. The spectral entropy is given in Equation (7):
$H_{\text{spectral}} = -\sum_{f} P(f) \log_2 P(f)$  (7)
where $P(f)$ denotes the normalized spectral power at frequency $f$. Spectral entropy quantifies spectral uniformity or concentration, differentiating between steady-state and transient disturbances.
  3. Sample Entropy: Sample entropy assesses the regularity and predictability of a signal by evaluating the probability that similar sequences remain similar at subsequent points. Sequences within the data are analyzed to determine the likelihood that patterns of a specified length remain close when extended by one additional point. Low sample entropy indicates regular, predictable signals, whereas high values indicate complexity or randomness. The sample entropy is defined in Equation (8):
$SampEn(m, r, N) = -\ln \frac{A^m(r)}{B^m(r)}$  (8)
where $A^m(r)$ denotes the probability that two sequences of length $m+1$ match within a tolerance $r$, whereas $B^m(r)$ is the probability that two sequences of length $m$ match within the same tolerance $r$.
  4. Permutation Entropy: Permutation entropy analyzes the temporal structure of a signal by evaluating the permutation patterns of amplitude values across sliding windows, assessing complexity according to the variety of ordinal patterns. The permutation entropy is given in Equation (9):
$H_{\text{perm}} = -\sum_{\text{patterns}} p_{\text{pattern}} \log p_{\text{pattern}}$  (9)
where $p_{\text{pattern}}$ denotes the probability of a particular ordinal pattern. Permutation entropy is computationally efficient and resilient to noise, rendering it beneficial for assessing the complexity of power disturbances.
5.
Wavelet Energy Entropy: Wavelet energy entropy assesses the distribution of a signal’s energy among wavelet coefficients at different scales. Through the application of the WT to decompose the signal, the energy at each scale is calculated, and the entropy of this energy distribution indicates the complexity of the signal. The wavelet energy entropy formula is shown in Equation (10).
H_{\mathrm{wavelet}} = -\sum_{i} E_i \log E_i
where E_i denotes the normalized energy at scale i. This measure encapsulates the multi-resolution attributes of power disturbances, facilitating the identification of transitory occurrences.
6.
IMF Entropy: Originating from EMD, IMF entropy evaluates the complexity of the intrinsic mode functions derived from the signal. Upon decomposing the signal into IMFs, the energy distribution across these modes is examined, and the calculated entropy indicates the signal’s irregularity. Elevated IMF entropy signifies more intricate or disrupted signals. The IMF entropy formula is shown in Equation (11).
\mathrm{IMFE} = -\sum_{i=1}^{K} RE_i \log_2 RE_i
where RE_i is the relative energy of the i-th IMF.
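For illustration, three of the entropy measures above (Shannon, spectral, and permutation entropy) can be sketched directly with NumPy. This is a minimal sketch, not the paper's implementation; the function names, histogram bin count, and embedding parameters (m, tau) are illustrative choices:

```python
import numpy as np

def shannon_entropy(signal, bins=32):
    """Shannon entropy of the amplitude histogram."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before log
    return -np.sum(p * np.log2(p))

def spectral_entropy(signal):
    """Entropy of the normalized power spectral density."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def permutation_entropy(signal, m=3, tau=1):
    """Entropy of ordinal patterns over sliding windows."""
    n = len(signal) - (m - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(signal[i:i + (m - 1) * tau + 1:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values())) / n
    return -np.sum(p * np.log(p))

# A noisy sinusoid is less regular than a clean one, so its
# permutation and spectral entropies should both be higher.
t = np.arange(640) / 3200.0           # 10 cycles of 50 Hz at 3200 Hz
clean = np.sin(2 * np.pi * 50 * t)
rng = np.random.default_rng(0)
noisy = clean + 0.5 * rng.standard_normal(len(t))
assert permutation_entropy(noisy) > permutation_entropy(clean)
assert spectral_entropy(noisy) > spectral_entropy(clean)
```

The pure sinusoid concentrates its spectral power in one frequency bin and cycles through few ordinal patterns, while added noise spreads power across the spectrum and randomizes local orderings, which is exactly the discriminative behavior the entropy features exploit.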
Figure 7 presents different entropy metrics employed to assess the complexity of the pure voltage signal: voltage interruption, harmonic, notching, and sag with harmonic and oscillation. The figure provides representations of Shannon entropy, sample entropy, and permutation entropy calculations, illustrating their distinct capacities to capture various aspects of each of the signal irregularities.

2.7. Feature Selection and Optimization

The final component identifies the most distinguishing features and refines the feature set for classification purposes. Finding suitable features is essential for minimizing computing complexity and enhancing classification performance by concentrating on the most pertinent features and eliminating redundant or irrelevant ones [31].
In this work, the modified ReliefF algorithm is utilized for the purpose of feature ranking [32]. The conventional ReliefF method evaluates the efficacy of features by assessing their capacity to differentiate across proximate examples. For each instance, the method identifies its k nearest neighbors from the same class (nearest hits) and k nearest neighbors from each distinct class (nearest misses). Subsequently, it revises the quality assessment for each characteristic by evaluating the disparity between the instance and its closest hits and misses.
The improved ReliefF algorithm integrates class-specific weights to address class imbalance [33]:
  • Feature weight initialization: W[A] = 0 for every feature A.
  • For i = 1 to m (the total number of iterations), select an instance R_i at random and identify its k nearest hits H_j and k nearest misses M_j(C) for each category C ≠ class(R_i). Hence, for every attribute A:
W[A] = W[A] - \sum_{j=1}^{k} \frac{\mathrm{diff}(A, R_i, H_j)}{m \cdot k} + \sum_{C \neq \mathrm{class}(R_i)} \frac{P(C)}{1 - P(\mathrm{class}(R_i))} \sum_{j=1}^{k} \frac{\mathrm{diff}(A, R_i, M_j(C))}{m \cdot k}
where diff(A, R_i, H_j) denotes the difference between R_i and its j-th neighbor of the same category H_j (j = 1, 2, …, k) with respect to feature A, and diff(A, R_i, M_j(C)) denotes the difference between R_i and its j-th neighbor of an alternate category M_j(C) (j = 1, 2, …, k) with respect to feature A. P(C) represents the ratio of samples in category C to the total number of samples, while P(class(R_i)) denotes the corresponding ratio for the category to which sample R_i is assigned.
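The class-weighted update above can be sketched in NumPy as follows. This is an illustrative sketch under simplifying assumptions (L1 distances, range-normalized feature differences); all function names are hypothetical and not the paper's code:

```python
import numpy as np

def modified_relieff(X, y, k=3, m=None, rng=None):
    """Class-weighted ReliefF: score features by nearest hits/misses."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    m = m or n
    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / n))
    span = X.max(axis=0) - X.min(axis=0)   # normalize diffs to [0, 1]
    span[span == 0] = 1.0
    W = np.zeros(d)
    for _ in range(m):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                   # exclude the instance itself
        for c in classes:
            idx = np.where(y == c)[0]
            idx = idx[np.argsort(dist[idx])][:k]
            diff = np.abs(X[idx] - X[i]) / span
            if c == y[i]:                  # nearest hits: penalize spread
                W -= diff.sum(axis=0) / (m * k)
            else:                          # nearest misses: class-weighted reward
                w = prior[c] / (1.0 - prior[y[i]])
                W += w * diff.sum(axis=0) / (m * k)
    return W

# Toy check: an informative feature should outrank pure noise.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),   # informative
                     rng.standard_normal(100)])            # noise
W = modified_relieff(X, y, k=5, rng=rng)
```

A feature that separates the classes produces small hit differences and large miss differences, so its weight grows; a noise feature contributes roughly equally to both sums and stays near zero.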
Following the ranking of features, a wrapper-based method is employed with cross-validation to identify the optimal subset of features.
  • Arrange features according to their ReliefF weights in descending order.
  • For various feature subset sizes (5, 10, 20, 50, 75, 100, 125, and 148), select the highest-ranked features and assess classification efficacy by 10-fold cross-validation. Finally, compute the accuracy, precision, recall, and F1-Score.
  • Determine the feature subset size that optimally balances performance and computational complexity.
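The wrapper loop above can be sketched as follows. As a minimal stand-in for the paper's classifiers, a nearest-centroid model is used here purely for illustration; all names and the toy data are hypothetical:

```python
import numpy as np

def kfold_accuracy(X, y, fit_predict, folds=10, rng=None):
    """Mean k-fold cross-validated accuracy of a classifier."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(y))
    scores = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        pred = fit_predict(X[train], y[train], X[test])
        scores.append(np.mean(pred == y[test]))
    return float(np.mean(scores))

def nearest_centroid(X_tr, y_tr, X_te):
    """Stand-in classifier (the paper evaluates SVM, ANN, RF, XGBoost, KNN)."""
    classes = np.unique(y_tr)
    cent = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_te[:, None, :] - cent[None], axis=2)
    return classes[d.argmin(axis=1)]

def best_subset_size(X, y, ranking, sizes):
    """Evaluate top-ranked subsets of each size; return the best size."""
    accs = {s: kfold_accuracy(X[:, ranking[:s]], y, nearest_centroid)
            for s in sizes}
    return max(accs, key=accs.get), accs

# Toy data: one informative feature followed by three noise features.
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal((100, 3))])
best, accs = best_subset_size(X, y, ranking=[0, 1, 2, 3], sizes=(1, 2, 4))
```

The same loop applied to the ReliefF ranking over subset sizes 5–148 yields the accuracy-versus-size curve analyzed in Section 4.3.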
The feature selection procedure guarantees the inclusion of the most discriminative features from each component of the HAMRFE architecture in the final feature set while eliminating redundant or unnecessary characteristics. Algorithm 1 presents a comprehensive algorithmic representation of the HAMRFE flow.
Algorithm 1: Algorithm for Hybrid Adaptive Multi-Resolution Feature Extraction (HAMRFE)
Input: Power quality disturbance signals [signals] <- Signals in time-domain format
      SNR_level  <- Signal-to-noise ratio level for noise addition (optional)
Output: feature_vector           <- Extracted features for classification
    selected_features            <- Optimized feature subset
    classification_result          <- Final label of classification
1:function HAMRFE (signals [], SNR_level)
2:       features = []
———— Add noise if SNR_level is specified ————
3:   if (SNR_level is not Inf) then
4:    signals = add_gaussian_noise (signals, SNR_level)
5:   END If
————  Process each signal ————
6:  for (each signal in signals []) do
———— Component 1: Adaptive Signal Decomposition ————
7:    IMFs [] = EMD_decomposition (signal)
8:    IMF_features = extract_IMF_statistics (IMFs [])
9:    IMF_features.append(compute_IMF_correlations (IMFs []))
————  Component 2: Multi-Resolution Wavelet Analysis ————
10:    [approximation, details []] = wavelet_decomposition (signal, ‘db4’, 5)
11:    wavelet_features = extract_wavelet_features (approximation, details [])
————  Component 3: Time–Frequency Analysis ————
12:    S_matrix = modified_S_transform (signal, gamma = 0.9)
13:     stransform_features = extract_time_frequency_features (S_matrix)
————  Component 4: Morphological Feature Extraction ————
14:    envelope = extract_signal_envelope (signal)
15:    gradient = compute_gradient (signal)
16:     morphological_features = extract_morphological_features (signal, envelope, gradient)
————  Component 5: Entropy-Based Feature Extraction ————
17:     entropy_features = compute_entropy_measures (signal, IMFs [], details [])
————  Combine all features ————
18:    signal_features = concatenate (IMF_features, wavelet_features, stransform_features,
19:    morphological_features, entropy_features)
20:    features.append (signal_features)
21: END For
———— Component 6: Feature Selection and Optimization ————
22:   feature_weights = apply_modified_reliefF (features)
23:   selected_features = select_optimal_feature_subset (features, feature_weights)
————  Train classifier and predict ————
24:  model = train_SVM (selected_features, RBF_kernel, C = 100, gamma = 0.01)
25:  classification_result = model.predict (selected_features)
26:  feature_vector = features
27:  return feature_vector, selected_features, classification_result
28:END HAMRFE

3. Experimental Setup

To assess the efficacy of the proposed HAMRFE technique, a synthetic dataset is generated, consisting of 15 classes of PQDs, namely pure voltage signal (C1), voltage sag (C2), voltage swell (C3), voltage interruption (C4), voltage harmonics (C5), transient oscillation (C6), voltage flicker (C7), impulse transient (C8), voltage notch (C9), voltage sag with harmonics (C10), voltage swell with harmonics (C11), voltage sag with oscillation (C12), voltage swell with oscillation (C13), voltage sag with harmonics and oscillation (C14), and voltage swell with harmonics and oscillation (C15). The synthetic data generation procedure enables controlled testing with diverse disturbance settings and noise levels. Each disturbance type is described using mathematical equations that model real-world power quality problems. A total of 52,500 signals were generated with varying properties. Each signal has 3500 data points, with a fundamental frequency of 50 Hz, a sampling rate of 3200 Hz, and a duration of 10 cycles per signal. To evaluate the resilience of the HAMRFE method against noise, white Gaussian noise is introduced to the signals at various signal-to-noise ratio (SNR) levels: noiseless (Inf dB), 40 dB, 30 dB, 20 dB, and 10 dB. The original voltage signal is depicted as a pure sinusoidal waveform:
V_0(t) = A \sin(2\pi f_0 t)
where A represents the amplitude (set to 1.0 p.u.) and f_0 denotes the fundamental frequency (50 Hz). Table 1 lists the 15 classes of PQDs contained in the dataset. The generated data D is divided into two subsets, the training subset D_Train and the testing subset D_Test, such that D = D_Train ∪ D_Test. In this work, 80% of the data are used for the training process, while the remaining 20% are employed for testing. This ratio is a widely recommended practice in ML, as it provides sufficient data for a classification algorithm to learn the underlying patterns while preserving enough unseen data to evaluate its performance.
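The pure-signal model and the SNR-controlled noise addition can be sketched as follows. The sag shape and its parameters are illustrative (the paper's exact parameterization in Table 1 is not reproduced); only the fundamental frequency, sampling rate, and cycle count follow the stated setup:

```python
import numpy as np

F0, FS = 50.0, 3200.0   # fundamental frequency (Hz) and sampling rate (Hz)

def pure_signal(a=1.0, cycles=10):
    """V_0(t) = A sin(2*pi*f0*t), sampled over `cycles` periods."""
    t = np.arange(int(cycles * FS / F0)) / FS
    return a * np.sin(2 * np.pi * F0 * t), t

def voltage_sag(depth=0.5, t1=0.04, t2=0.12):
    """Illustrative sag: amplitude reduced by `depth` on [t1, t2)."""
    v, t = pure_signal()
    v[(t >= t1) & (t < t2)] *= 1.0 - depth
    return v, t

def add_awgn(v, snr_db, rng=None):
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    rng = rng or np.random.default_rng(0)
    p_noise = np.mean(v ** 2) / 10 ** (snr_db / 10.0)
    return v + np.sqrt(p_noise) * rng.standard_normal(len(v))

v, t = voltage_sag()
noisy = add_awgn(v, snr_db=20.0)
# The realized SNR should sit close to the 20 dB target.
measured_snr = 10 * np.log10(np.mean(v ** 2) / np.mean((noisy - v) ** 2))
```

Scaling the noise power to the measured signal power, rather than using a fixed noise variance, is what keeps the stated SNR levels comparable across disturbance classes of different amplitudes.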

3.1. Classification Algorithms

This research presents multiple machine learning methods to create a classification model for PQDs. ML is a subset of artificial intelligence that uses statistical methods to facilitate computers in learning from data, identifying patterns, and making predictions or decisions. ML approaches can be broadly categorized into two types: supervised and unsupervised learning. Supervised learning uses labeled data to forecast or classify new data, employing widely used methods such as RF and SVM. Conversely, unsupervised learning utilizes unlabeled data to identify unique patterns or structures, applying methodologies like clustering, principal component analysis, and autoencoders. In this research, supervised learning methodologies are employed to develop the classification models. The following subsection offers a detailed description of the supervised machine learning techniques applied in the study.

3.1.1. Support Vector Machine

SVM is a commonly employed supervised learning technique for classification, regression, and outlier identification tasks [34]. The key objective is to identify the ideal hyperplane that can differentiate the data into various groups. The approach functions by partitioning the training dataset into discrete categories; hence, it optimizes the separation between them. When unique data is assessed, it is categorized into the nearest classification based on specific estimations. For data with linear separation, the algorithm utilizes a hyperplane; conversely, for non-linear data, it applies multiple kernel functions according to the data type.

3.1.2. Artificial Neural Networks

ANNs are computational structures designed to emulate the architecture and functionality of biological neural networks seen in the human brain. They comprise interlinked layers of nodes or neurons that analyze data by transmitting signals through weighted connections [35]. ANNs can learn complex structures and connections in data, rendering them appropriate for various tasks, such as classification, regression, pattern recognition, and prediction.
An ANN has an input layer for data acquisition, one or more hidden layers for feature extraction and transformation, and an output layer that produces the final outcome. Neurons in hidden layers utilize activation functions (such as sigmoid, ReLU, or tanh) to incorporate non-linearity, allowing the network to represent intricate, non-linear interactions. ANNs obtain knowledge during a training phase, in which the network adjusts the weights of its connections to minimize the difference between its predictions and the actual outcomes, often utilizing algorithms such as backpropagation together with optimization techniques like gradient descent. This learning method incrementally adjusts the weights throughout several epochs until the network's performance stabilizes.

3.1.3. Random Forest

RF is a widely utilized ensemble learning technique grounded in decision trees. It amalgamates several decision trees to enhance predictive accuracy for novel data and mitigate overfitting [36]. It is capable of managing highly dimensional data and is frequently employed for classification, regression, and feature selection applications. It represents a more sophisticated iteration of decision trees and is considered an enhancement over bagged decision trees. The algorithm constructs each decision tree by randomly choosing a subset of characteristics and samples from the training dataset. The ultimate prediction is derived by consolidating the forecasts of all the trees. RF has demonstrated efficacy in numerous classification and regression tasks [37].

3.1.4. Extreme Gradient Boosting

XGBoost is a widely utilized framework for enhancing forecasting precision in tasks such as classification and regression. It accomplishes this by employing a combination of gradient-boosting techniques and an ensemble of decision trees. It utilizes a recursive approach to integrate models until the target performance metrics are attained, rendering it an enhanced iteration of tree gradient-boosting methods [38]. XGBoost is exceptionally efficient and adaptable, enabling the optimization of memory and hardware resources for managing large-scale models. Moreover, it adeptly manages sparse data and employs a weighted quantile sketch for approximate learning.

3.1.5. The K-Nearest Neighbor

The KNN algorithm is a straightforward and efficient non-parametric method for classification and regression in ML. It operates on the principle of similarity, assessing a data point’s neighbors inside the feature space to ascertain its classification. KNN, in its most basic form, ascertains the classification of an unclassified sample by evaluating its proximity to the nearest surrounding classes [39]. The quantity of neighbors taken into account is represented by “K” in KNN, and the technique identifies the nearest K points by computing distances, typically employing Euclidean or alternative metrics. A majority vote among these neighbors is necessary for categorization, and the unclassified data point is assigned the class label that is most prevalent among them. In regression, the prediction of a continuous value is achieved by averaging the values of the K-nearest neighbors. KNN is a straightforward technique because of its interpretability; nonetheless, its implementation necessitates meticulous selection of a distance metric and an appropriate K value to achieve a balance between accuracy and complexity.

3.2. Performance Evaluation Metrics

To assess the efficacy of the ML algorithms, the confusion matrix is computed, and the subsequent metrics are derived [40,41,42]:
  • Overall Accuracy: It quantifies the total number of correct forecasts made by a particular model. The formula for accuracy is as follows:
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
  • Recall: It evaluates the effectiveness of the algorithm for each class. It assesses the ratio of true positives that were accurately recognized.
\mathrm{Recall} = \frac{TP}{TP + FN}
  • Precision (P): It quantifies the algorithm’s accuracy in classifying each class. It measures the ratio of accurate positive identifications.
P = \frac{TP}{TP + FP}
  • F1-Score: It assesses the equilibrium between precision and recall.
\mathrm{F1\text{-}Score} = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}}
where TP denotes true positives, FP represents false positives, TN indicates true negatives, and FN signifies false negatives.
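For a multiclass problem such as the 15 PQD classes, these metrics follow from the confusion matrix by treating each class as "positive" in turn. A minimal sketch with illustrative labels:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: true class; columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_metrics(cm):
    """Accuracy plus per-class precision, recall, and F1."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp            # predicted as c but not c
    fn = cm.sum(axis=1) - tp            # truly c but missed
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
acc, p, r, f1 = per_class_metrics(cm)   # acc = 4/6
```

The diagonal of the matrix holds the true positives for each class, which is why the confusion matrices in Figures 8–10 expose exactly which disturbance pairs are confused.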

4. Results and Discussion

This section compares the classification models based on specific criteria to evaluate their efficacy in predicting PQD classes. The outcomes of the best-performing models, as evaluated by accuracy, precision, recall, F1-Score, and confusion matrix, are listed in Table 2, Table 3 and Table 4. Furthermore, Figure 8, Figure 9 and Figure 10 illustrate the confusion matrices for the SVM, XGBoost, and KNN algorithms examined in this research under different noise levels.
In this study, five ML algorithms are tested for the PQD classification objective. These ML algorithms were selected over alternative algorithms due to their strong performance in the PQD prediction task. The SVM effectively handles intricate decision boundaries, rendering it appropriate for classification challenges. The ANN algorithm is particularly flexible in modeling complex patterns in PQD signals and is suitable for simultaneously classifying various disturbance types, whereas KNN is straightforward to implement and effective for different classification tasks. In contrast, RF utilizes a bagging approach and the ensemble nature of multiple decision trees, enhancing resilience to noise in power measurements and minimizing overfitting during classification. Finally, the XGBoost algorithm employs gradient boosting and an ensemble of decision trees, rendering it suitable for diverse dataset sizes while achieving high-accuracy predictions.
For these models' configurations, the hyperparameters and architectural settings are appropriately selected to avoid overfitting. The SVM is executed within a multiclass framework with a linear kernel function that maps HAMRFE features into a higher-dimensional space for separation. The soft-margin parameter, C, of the linear kernel function is tuned by an empirical trial-and-error approach to find the value that yields optimal validation accuracy and generalization performance. The ANN is designed as a multilayer perceptron (MLP) trained with the backpropagation algorithm, using the Levenberg–Marquardt method as the training function. Three layers are considered for the MLP, namely input, hidden, and output. The number of neurons in the hidden layer is set equal to the feature set size; for instance, 10 neurons are used with the set that contains the best 10 features. Also, the ANN employs dropout (rate 0.3) and L2 weight regularization (λ = 0.001) to prevent neuronal co-adaptation and to penalize excessive weights. The RF contains 100 decision trees trained by bootstrap aggregation with Gini impurity for split optimization, whereas the XGBoost model utilizes sequential tree learning with a maximum of 100 splits to regulate tree depth and mitigate overfitting. The KNN algorithm employs k = 5 and Euclidean distance to achieve a balance between sensitivity to noise and rapid decision-making in localized regions. Furthermore, in the experimental setup, we employed parameter-isolated splitting to ensure that models are evaluated on novel combinations of disturbance parameters. This guards against overfitting, ensuring generalization rather than memorization.

4.1. Classifier Performance Comparison

The efficacy of various classification methods is evaluated utilizing the HAMRFE features derived from the original signals. The initial evaluation considers all features obtained with the proposed HAMRFE method. A total of 148 features is generated, and Table 2 displays the classification accuracy results for each classifier based on these 148 features at different SNR levels. In Section 4.3, an investigation is conducted to examine the impact of selecting different sets of features on the overall classification accuracy.
In this study, the bootstrap method is used to evaluate the stability and uncertainty of the classification algorithms. In the bootstrap implementation, for every SNR level and the five classifiers analyzed, the method conducts 20 iterations of sampling with replacement from the original dataset to create “pseudo-datasets”, which are then utilized to train and assess these classifiers. In each bootstrap iteration, the resampled data is divided into training (80%) and testing (20%) sets. Subsequently, each classifier is exposed to the bootstrap training set and validated on the bootstrap test set. This guarantees that the accuracy value considers both the data variation and the classifier’s stability. This procedure provides a distribution of outcomes for the accuracy, from which the mean and a 95% confidence interval (determined by the standard deviation (STD) as a margin of error) are calculated.
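The bootstrap procedure can be sketched as follows. Here `evaluate` is a placeholder for training and testing one classifier on the resampled split (the paper's classifiers are not reproduced), and the margin of error mirrors the STD-based 95% interval described above:

```python
import numpy as np

def bootstrap_accuracy_ci(evaluate, data, n_boot=20, train_frac=0.8, rng=None):
    """Bootstrap mean accuracy and a 95% margin of error (1.96 * STD).

    `evaluate(train_idx, test_idx)` is a user-supplied callable that
    trains on the first indices and returns accuracy on the second.
    """
    rng = rng or np.random.default_rng(0)
    n = len(data)
    accs = []
    for _ in range(n_boot):
        # Pseudo-dataset: sample n indices with replacement, then
        # split it 80/20 (note: train and test may share samples).
        sample = rng.integers(0, n, size=n)
        cut = int(train_frac * n)
        accs.append(evaluate(sample[:cut], sample[cut:]))
    accs = np.asarray(accs)
    return float(accs.mean()), float(1.96 * accs.std(ddof=1))

# Toy evaluator: "accuracy" is the fraction of even-valued test items.
data = np.arange(100)
mean, margin = bootstrap_accuracy_ci(
    lambda tr, te: np.mean(data[te] % 2 == 0), data)
```

Repeating this for every classifier at every SNR level yields the mean ± margin entries reported in Table 2, capturing both data variation and classifier stability.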
Table 2 indicates that the SVM achieved the best performance across all SNR levels, with an accuracy of 99.83% ± 0.0003 (noiseless), 99.82% ± 0.0005 (40 dB), 99.80% ± 0.0005 (30 dB), 99.72% ± 0.0005 (20 dB), and 97.31% ± 0.0020 (10 dB). This outstanding performance is due to SVM's capacity to identify optimal decision boundaries in high-dimensional feature spaces, which is especially beneficial for the HAMRFE features that reflect complex patterns in PQDs. For the noiseless signals, the ANN achieved the second highest accuracy of 99.76% ± 0.0005, followed by RF at 99.68% ± 0.0006, XGBoost at 99.41% ± 0.0011, and KNN at 91.06% ± 0.0034. The suboptimal performance of KNN can be attributed to its tendency to overfit in high-dimensional feature spaces. In addition, Figure 8, Figure 9 and Figure 10 illustrate the confusion matrices for the SVM classifier with the noiseless signals, the XGBoost classifier at the 40 dB noise level, and the KNN classifier at the 20 dB noise level. These figures offer a comprehensive overview of the classification efficacy for each type of disturbance.
Furthermore, Table 2 illustrates that SVM is the most stable classifier, demonstrating minimal variation across all SNR levels (STDs ranging from 0.0003 to 0.0020). This indicates that it performs effectively even at low SNR (high-noise) levels. The ANN exhibits a slightly increased STD at low SNR (0.0059 at 10 dB), yet it remains very stable at higher SNRs (≤0.0006). RF remains consistently steady, with low STD values ranging from 0.0004 to 0.0013. XGBoost exhibits considerable variability, with the STD decreasing as the SNR increases, from 0.0039 at 10 dB to 0.0011 for the noiseless signal. KNN exhibits the largest STD values (0.0027–0.0038), indicating that its performance is more susceptible to noise and data sampling variations. This aligns with the observation that it exhibits reduced mean accuracy on noisy signals.
The confusion matrices in Figure 8, Figure 9 and Figure 10 indicate that the majority of misclassifications appear between analogous disturbance classes, specifically between sag with harmonics (C10) and sag with harmonics and oscillation (C14), as well as between swell with harmonics (C11) and swell with harmonics and oscillation (C15). This is anticipated, as these disturbances possess similar properties, rendering them more difficult to differentiate.

4.2. Noise Robustness Analysis

To determine the resilience of the proposed HAMRFE approach against noise, the performance of the SVM classifier is tested using data at varying noise levels. Table 2 displays the classification accuracy corresponding to each SNR level. The results illustrate the remarkable resilience of the HAMRFE approach to noise. Even at a severe noise level of 10 dB, the classification accuracy exceeds 97%, which is exceptional for PQD classification. According to the accuracy results listed in Table 2, prediction remains satisfactory at a noise level of 30 dB: SVM achieves 99.80% ± 0.0005, ANN 99.63% ± 0.0006, RF 99.61% ± 0.0004, XGBoost 99.02% ± 0.0014, and KNN 89.25% ± 0.0038. This performance is due to the complementary characteristics of the features obtained by the several components of the HAMRFE framework, which offer redundancy and tolerance to noise. Table 3 and Table 4 list the classification performance evaluation metrics for each type of disturbance with SVM, ANN, and RF at SNR levels of 30 dB and 10 dB, respectively.
Table 3 and Table 4 show that the SVM continuously surpasses other classifiers at all noise levels. The difference in performance between the SVM and other classifiers expands with increasing noise levels, underscoring the enhanced noise resilience of the SVM when utilized with HAMRFE features. In addition, the results exhibit that certain disturbance types, such as voltage swell (C3), interruption (C4), and notching (C9), exhibit greater resilience to noise, preserving good accuracy even at a 10 dB SNR. Conversely, complex disturbances such as sag with harmonics and oscillation (C14) and swell with harmonics and oscillation (C15) exhibit a more pronounced decline in accuracy as noise levels rise. This is anticipated, as the noise may affect the shape of such signals, making these complex disturbances difficult to differentiate.

4.3. Feature Set Size Optimization

The effectiveness of the SVM classifier is assessed using various feature set sizes to ascertain the best number of features for classification. Unlike Section 4.1, which includes all 148 features, this subsection examines the performance of the SVM model with varying sets of features. The SVM is selected due to its superior accuracy with 148 features compared to the other considered classifiers. The features were ranked with the modified ReliefF method outlined in Section 2.7, and the highest-ranked features were chosen for each specified set size. In this study, eight feature sets are considered, comprising the best 5, 10, 20, 50, 75, 100, 125, and 148 features, where "best 5" denotes the set containing the five highest-ranked features according to the modified ReliefF method. The intermediate set sizes are selected to identify the "elbow" in the performance curve as the number of features increases. Bootstrap analysis is conducted to construct 95% confidence intervals for each set of features. Table 5 displays the classification accuracy outcomes for these feature set sizes using SVM at different SNR levels.
Table 5 exhibits that classification accuracy rises with an increase in features; however, the rate of enhancement decreases as further features are incorporated, particularly from 75 to 148 features. Table 5 also shows that the set that contains 125 features is the optimal set size for all the SNR levels, followed by 148 and 100 features, respectively. The SVM classifier attains an accuracy of 99.86% ± 0.0002 with 125 features, while the accuracy is found to be 99.83% ± 0.0003 with 148 features for noiseless signals. For the SNR level of 20 dB, the table demonstrates that the SVM led to accuracy values of 76.72% ± 0.0052 (5 features), 90.41% ± 0.0032 (10 features), 93.79% ± 0.0022 (20 features), 98.90% ± 0.0010 (50 features), 99.53% ± 0.0005 (75 features), 99.71% ± 0.0006 (100 features), 99.74% ± 0.0003 (125 features), and 99.72% ± 0.0005 (148 features). Furthermore, Table 5 reveals that the accuracy curves start to stabilize between 75 and 125 features, implying that this range offers an optimal balance between computational complexity and classification efficacy. This discovery is especially significant for real-time applications with limited computational resources.
In addition, the feature importance scores derived from the modified ReliefF algorithm are examined to identify the best discriminative features for classifying PQDs. Figure 11, Figure 12 and Figure 13 display the 20 most significant features with SNR levels of noiseless, 30 dB and 10 dB. For each of the considered SNRs, the top 20 features are accompanied by their importance scores and the relevant HAMRFE component.
Figure 11, Figure 12 and Figure 13 reveal that all components of the HAMRFE framework contribute to the top-ranked features, confirming the synergistic interaction among the different feature extraction techniques. The time–frequency analysis, multi-resolution wavelet analysis, and adaptive signal decomposition components yield the most discriminative features, highlighting the necessity of acquiring both intrinsic mode functions and time–frequency characteristics of PQDs. The energy of the S-transform within the fundamental frequency band (45–55 Hz) and the wavelet energy at the first decomposition level are the most significant features, followed by the first IMF, which encompasses the highest-frequency components of the signal.
Table 6 lists some of the studies in the literature to compare them with the proposed HAMRFE method. The comparison of HAMRFE is conducted with existing hybrid algorithms that use various methods and datasets for PQD classification. In addition, the novelty of HAMRFE lies not merely in its ability to aggregate existing technologies but in its capacity to integrate them in a manner that enhances the efficacy and adaptability of the feature extraction system. HAMRFE employs a cross-domain correlation methodology rather than the conventional approach to information integration. For example, the instantaneous frequency derived from the IMFs of EMD is validated against the spectral centroid obtained from the S-transform. This ensures that the extracted features are not redundant but collaboratively illustrate the physics of the disturbance from several perspectives, such as energy, shape, complexity, and spectral content.

4.4. Analysis of Execution Time

In this study, the computational software utilized is MATLAB R2024a, and the simulation system comprises an Intel Core i9-13900 CPU operating at 5.4 GHz, an NVIDIA GeForce RTX 5070 GPU, and 64 GB of RAM. Analyzing accuracy in relation to computational cost indicates that the SVM, ANN, and RF obtain the best accuracies (about 99.8% with noiseless signals), but with significantly varying time demands; see Table 2 and Figure 14. The ANN achieves high accuracy at the cost of a long training duration, which stems from iterative backpropagation and the large number of neurons in the input layer. In addition, the SVM and RF achieved comparable performance with minimal training time, owing to their quadratic programming (SVM) and ensemble tree formation (RF). XGBoost is the fastest algorithm but yields lower accuracy, which can be attributed to its gradient-boosted trees prioritizing speed and regularization over precise decision boundaries. Finally, the KNN results demonstrate minimal training time but high testing time, since all distance calculations occur during inference. In general, the results show that high accuracy and extensive training are associated with the more complex models (ANN, SVM, and RF), whereas the simpler models (XGBoost and KNN) trade some accuracy for low total execution time.

5. Conclusions

This study introduced a novel Hybrid Adaptive Multi-Resolution Feature Extraction (HAMRFE) framework for the classification of power quality disturbances (PQDs). The proposed approach incorporates six complementary techniques: adaptive signal decomposition, multi-resolution wavelet analysis, time–frequency analysis, morphological feature extraction, entropy-based feature extraction, and feature selection optimization. Each component addresses distinct attributes of PQD signals, offering a thorough representation that encompasses time and frequency domain features, signal complexity, and irregularity patterns.
The efficacy of the HAMRFE approach was assessed with a simulated dataset consisting of 15 classes of PQDs with differing noise levels. Five classification algorithms, namely Support Vector Machine (SVM), Artificial Neural Networks (ANNs), Random Forest (RF), Extreme Gradient Boosting (XGBoost), and K-nearest neighbor (KNN), were developed to predict these PQD signals. Furthermore, eight feature sets were evaluated, comprising the optimal 5, 10, 20, 50, 75, 100, 125, and 148 features to determine the most effective number of features for classification. Upon studying the results obtained from the most efficient prediction model and assessing the efficacy of classification algorithms, the conclusions can be summarized as follows:
- Regarding the performance of the prediction algorithms, the SVM outperforms all other classifiers at every noise level. Its margin over the other classifiers widens as noise increases, highlighting the SVM's superior noise resilience when paired with HAMRFE features.
- The SVM with HAMRFE features achieved classification accuracies of 99.86% for noiseless signals, 99.85% at 40 dB, 99.82% at 30 dB, 99.74% at 20 dB, and 97.92% at 10 dB noise levels.
- The optimization of feature set size revealed that a set of 125 features is optimal across all SNR levels, followed by the 148- and 100-feature sets, offering an effective balance between computational complexity and classification accuracy. This finding is important for real-time applications with potentially limited processing resources.
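A feature-set-size sweep of the kind behind this conclusion can be sketched as follows. Mutual-information ranking stands in here for the paper's Relief-based feature selection [31,32], and the random synthetic matrix stands in for the 148 HAMRFE features, so the printed numbers are illustrative only.

```python
# Sketch of a feature-count sweep: rank features on the training folds, keep
# the top k, and score an SVM on each subset. All data and settings are
# placeholders for the actual HAMRFE feature matrix and protocol.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=148, n_informative=60,
                           n_classes=15, n_clusters_per_class=1, random_state=0)

for k in (5, 50, 148):
    clf = make_pipeline(SelectKBest(mutual_info_classif, k=k),
                        StandardScaler(), SVC(kernel="rbf"))
    acc = cross_val_score(clf, X, y, cv=3).mean()
    print(f"top-{k:3d} features: mean CV accuracy = {acc:.3f}")
```

Placing the selector inside the pipeline ensures the ranking is recomputed on each training fold, avoiding selection bias in the cross-validated estimate.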
Despite these encouraging outcomes, the present study has some limitations and avenues for improvement. The assessment was performed on synthetic data, which may not fully reflect the complexity and diversity of actual power quality events; subsequent research should include validation on real-world datasets from diverse power systems and contexts. Moreover, the present implementation of the HAMRFE approach is computationally demanding, especially the EMD process and the modified S-transform. Optimization methods and parallel processing could be investigated to improve computational efficiency for real-time applications.
Finally, the proposed HAMRFE framework offers a thorough and resilient methodology for the classification of PQDs, and several directions remain for improving classification accuracy. This study used conventional ML algorithms for the classification tasks; future work could investigate integrating HAMRFE features with deep learning frameworks, such as convolutional and recurrent neural networks, which may enhance classification efficacy for complex disturbances. Another avenue of expansion is applying the HAMRFE framework to emerging power quality challenges in contemporary power systems, including those associated with renewable energy integration, microgrids, and electric vehicle charging, as well as to the more complex disturbances that may arise in these advanced systems.

Funding

The researcher would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2026).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Saeed, F.; Aldera, S.; Alkhatib, M.; Al-Shamma’a, A.A.; Hussein Farh, H.M. A Data-Driven Convolutional Neural Network Approach for Power Quality Disturbance Signal Classification (DeepPQDS-FKTNet). Mathematics 2023, 11, 4726. [Google Scholar] [CrossRef]
  2. Ding, Z.; Ji, T.; Li, M. Improved Hadamard Decomposition and Its Application in Data Compression in New-Type Power Systems. Mathematics 2025, 13, 671. [Google Scholar] [CrossRef]
  3. Madgula, S.; Veeramsetty, V.; Durgam, R. Signal Processing Approaches for Power Quality Disturbance Classification: A Comprehensive Review. Results Eng. 2025, 25, 104569. [Google Scholar] [CrossRef]
  4. Wang, T.; Zhuo, J.; Hou, Y.; Lu, Z.; Li, Y. Power Quality Disturbance Classification via a Time-Frequency Feature-Fused Transformer Model with Cross-Attention Mechanism. Electr. Power Syst. Res. 2026, 251, 112330. [Google Scholar] [CrossRef]
  5. Satyanrayana, M.; Veeramsetty, V.; Rajababu, D. The Analysis of Short Duration Power Quality Disturbances Using Short Time Fourier Transform. In Proceedings of the IEEE 1st International Conference on Smart and Sustainable Developments in Electrical Engineering (SSDEE), Dhanbad, India, 28 February–2 March 2025; pp. 1–6. [Google Scholar] [CrossRef]
  6. Medina-Molina, J.A.; Reyes-Archundia, E.; Gutiérrez-Gnecchi, J.A.; Rodríguez-Herrejón, J.A.; Chávez-Báez, M.V.; Olivares-Rojas, J.C.; Guerrero-Rodríguez, N.F. Optimal Selection of Sampling Rates and Mother Wavelet for an Algorithm to Classify Power Quality Disturbances. Computers 2025, 14, 138. [Google Scholar] [CrossRef]
  7. Li, J.; Liu, H.; Wang, D.; Bi, T. Classification of Power Quality Disturbance Based on S-Transform and Convolution Neural Network. Front. Energy Res. 2021, 9, 708131. [Google Scholar] [CrossRef]
  8. Lin, W.M.; Wu, C.H. Fast Support Vector Machine for Power Quality Disturbance Classification. Appl. Sci. 2022, 12, 11649. [Google Scholar] [CrossRef]
  9. Patil, P.; Muley, K.; Agrawal, R. Identification of Power Quality Disturbance Using Neural Network. In Proceedings of the 3rd International conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 12–14 June 2019; pp. 990–996. [Google Scholar] [CrossRef]
  10. Akbarpour, A.; Nafar, M.; Simab, M. Multiple Power Quality Disturbances Detection and Classification with Fluctuations of Amplitude and Decision Tree Algorithm. Electr. Eng. 2022, 104, 2333–2343. [Google Scholar] [CrossRef]
  11. Liu, J.; Song, H.; Sun, H.; Zhao, H. High-Precision Identification of Power Quality Disturbances under Strong Noise Environment Based on FastICA and Random Forest. IEEE Trans. Industr. Inform. 2021, 17, 377–387. [Google Scholar] [CrossRef]
  12. Dekhandji, F.Z. Detection of Power Quality Disturbances Using Discrete Wavelet Transform. In Proceedings of the 5th International Conference on Electrical Engineering—Boumerdes (ICEE-B), Boumerdes, Algeria, 29–31 October 2017; pp. 1–5. [Google Scholar] [CrossRef]
  13. Molu, R.J.J.; Mbasso, W.F.; Saatong, K.T.; Dzonde Naoussi, S.R.; Alruwaili, M.; Elrashidi, A.; Nureldeen, W. Enhancing Power Quality Monitoring with Discrete Wavelet Transform and Extreme Learning Machine: A Dual-Stage Pattern Recognition Approach. Front. Energy Res. 2024, 12, 1435704. [Google Scholar] [CrossRef]
  14. Kurbatskii, V.G.; Sidorov, D.N.; Spiryaev, V.A.; Tomin, N.V. On the Neural Network Approach for Forecasting of Nonstationary Time Series on the Basis of the Hilbert-Huang Transform. Autom. Remote Control 2011, 72, 1405–1414. [Google Scholar] [CrossRef]
  15. Kurbatsky, V.G.; Sidorov, D.N.; Spiryaev, V.A.; Tomin, N.V. Forecasting Nonstationary Time Series Based on Hilbert-Huang Transform and Machine Learning. Autom. Remote Control 2014, 75, 922–934. [Google Scholar] [CrossRef]
  16. Manjula, M.; Mishra, S.; Sarma, A.V.R.S. Empirical Mode Decomposition with Hilbert Transform for Classification of Voltage Sag Causes Using Probabilistic Neural Network. Int. J. Electr. Power Energy Syst. 2013, 44, 597–603. [Google Scholar] [CrossRef]
  17. Manjula, M.; Sarma, A.V.R.S. Assessment of Power Quality Events by Empirical Mode Decomposition Based Neural Network. Proceeding World Congr. Eng. 2012, 2, 4–6. [Google Scholar]
  18. Saxena, A.; Alshamrani, A.M.; Alrasheedi, A.F.; Alnowibet, K.A.; Mohamed, A.W. A Hybrid Approach Based on Principal Component Analysis for Power Quality Event Classification Using Support Vector Machines. Mathematics 2022, 10, 2780. [Google Scholar] [CrossRef]
  19. Xu, Q.; Zhu, F.; Jiang, W.; Pan, X.; Li, P.; Zhou, X.; Wang, Y. Efficient Identification Method for Power Quality Disturbance: A Hybrid Data-Driven Strategy. Processes 2024, 12, 1395. [Google Scholar] [CrossRef]
  20. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Snin, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis. Proc. R. Soc. A Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  21. Huang, B.; Kunoth, A. An Optimization Based Empirical Mode Decomposition Scheme. J. Comput. Appl. Math. 2013, 240, 174–183. [Google Scholar] [CrossRef]
  22. Priyadarshini, M.S.; Bajaj, M.; Prokop, L.; Berhanu, M. Perception of Power Quality Disturbances Using Fourier, Short-Time Fourier, Continuous and Discrete Wavelet Transforms. Sci. Rep. 2024, 14, 3443. [Google Scholar] [CrossRef]
  23. Balubaid, M.; Sattari, M.A.; Taylan, O.; Bakhsh, A.A.; Nazemi, E. Applications of Discrete Wavelet Transform for Feature Extraction to Increase the Accuracy of Monitoring Systems of Liquid Petroleum Products. Mathematics 2021, 9, 3215. [Google Scholar] [CrossRef]
  24. Xu, W.; Wang, R.; Zhang, Y.; Wang, J.; Wang, Z.; Wu, X.; Li, W.; Li, X.; Zhang, M.; Sun, L. A Power Quality Disturbance Classification Method Using a Hybrid Transformer and Discrete Wavelet Transform Model. Electr. Power Syst. Res. 2026, 253, 112547. [Google Scholar] [CrossRef]
  25. Stockwell, R.G.; Mansinha, L.; Lowe, R.P. Localization of the Complex Spectrum: The S Transform. IEEE Trans. Signal Process. 1996, 44, 998–1001. [Google Scholar] [CrossRef]
  26. Samal, L.; Palo, H.K.; Sahu, B.N.; Samal, D. The Classification of Power Quality Disturbances Using Statistical S-Transform and Probabilistic Neural Network. In Proceedings of the 1st Odisha International Conference on Electrical Power Engineering, Communication and Computing Technology(ODICON), Bhubaneswar, India, 8–9 January 2021; pp. 1–6. [Google Scholar] [CrossRef]
  27. Yong, H.; Yongqiang, L.; Zhiping, H. Detection and Location of Power Quality Disturbances Based on Mathematical Morphology and Hilbert-Huang Transform. In Proceedings of the 9th International Conference on Electronic Measurement and Instruments, Beijing, China, 16–19 August 2009; pp. 2319–2324. [Google Scholar] [CrossRef]
  28. Ebrahimpour Moghaddam Tasouj, P.; Soysal, G.; Eroğul, O.; Yetkin, S. ECG Signal Analysis for Detection and Diagnosis of Post-Traumatic Stress Disorder: Leveraging Deep Learning and Machine Learning Techniques. Diagnostics 2025, 15, 1414. [Google Scholar] [CrossRef]
  29. Perez-Anaya, E.; Saucedo-Dorantes, J.J.; Jaen-Cuellar, A.Y.; Romero-Troncoso, R.d.J.; Elvira-Ortiz, D.A. An Entropy–Envelope Approach for the Detection and Quantification of Power Quality Disturbances. Appl. Sci. 2025, 15, 12101. [Google Scholar] [CrossRef]
  30. Sharma, R.; Pachori, R.B.; Acharya, U.R. Application of Entropy Measures on Intrinsic Mode Functions for the Automated Identification of Focal Electroencephalogram Signals. Entropy 2015, 17, 669–691. [Google Scholar] [CrossRef]
  31. Urbanowicz, R.J.; Meeker, M.; La Cava, W.; Olson, R.S.; Moore, J.H. Relief-Based Feature Selection: Introduction and Review. J. Biomed. Inform. 2018, 85, 189–203. [Google Scholar] [CrossRef]
  32. Kononenko, I. Estimating Attributes: Analysis and Extensions of RELIEF. In European Conference on Machine Learning; Springer: Berlin/Heidelberg, Germany, 1994; pp. 171–182. [Google Scholar] [CrossRef]
  33. Chen, S.; Li, Z.; Pan, G.; Xu, F. Power Quality Disturbance Recognition Using Empirical Wavelet Transform and Feature Selection. Electronics 2022, 11, 174. [Google Scholar] [CrossRef]
  34. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A Comprehensive Survey on Support Vector Machine Classification: Applications, Challenges and Trends. Neurocomputing 2020, 408, 189–215. [Google Scholar] [CrossRef]
  35. Saravanan, K.; Sasithra, S. Review on Classification Based on Artificial Neural Networks. Int. J. Ambient Syst. Appl. 2014, 2, 11–18. [Google Scholar] [CrossRef]
  36. Yang, D.; Lü, S.; Wei, J.; Zheng, L.; Gao, Y. Detection and Classification of Power Quality Disturbances Based on Improved Adaptive S-Transform and Random Forest. Energies 2025, 18, 4088. [Google Scholar] [CrossRef]
  37. Chowdhury, A.R.; Chatterjee, T.; Banerjee, S. A Random Forest Classifier-Based Approach in the Detection of Abnormalities in the Retina. Med. Biol. Eng. Comput. 2018, 57, 193–203. [Google Scholar] [CrossRef] [PubMed]
  38. Yu, J.; Yu, Z.; Ye, W. Enhanced Recognition of Power Quality Disturbances through an Augmented S-Transform and XGBOOST Algorithm. In Proceedings of the 3rd International Conference on Energy, Power and Electrical Technology (ICEPET), Chengdu, China, 17–19 May 2024; pp. 1052–1056. [Google Scholar] [CrossRef]
  39. Syriopoulos, P.K.; Kalampalikis, N.G.; Kotsiantis, S.B.; Vrahatis, M.N. KNN Classification: A Review. Ann. Math. Artif. Intell. 2023, 93, 43–75. [Google Scholar] [CrossRef]
  40. Caicedo, J.E.; Agudelo-Martínez, D.; Rivas-Trujillo, E.; Meyer, J. A Systematic Review of Real-Time Detection and Classification of Power Quality Disturbances. Prot. Control Mod. Power Syst. 2023, 8, 1–37. [Google Scholar] [CrossRef]
  41. Li, Z.; Liu, H.; Zhao, J.; Bi, T.; Yang, Q. A Power System Disturbance Classification Method Robust to PMU Data Quality Issues. IEEE Trans. Industr. Inform. 2022, 18, 130–142. [Google Scholar] [CrossRef]
  42. Jiang, Z.; Wang, Y.; Li, Y.; Cao, H. A New Method for Recognition and Classification of Power Quality Disturbances Based on IAST and RF. Electr. Power Syst. Res. 2024, 226, 109939. [Google Scholar] [CrossRef]
  43. Majumdar, S.; Mishra, A.K. Empirical Mode Decomposition with Wavelet Transform Based Analytic Signal for Power Quality Assessment. Int. J. Electron. Commun. Eng. 2018, 12, 329–334. [Google Scholar] [CrossRef]
  44. Mishra, V.; Singh, V.K.; Pachori, R.B. Automated Power Quality Assessment Using IEVDHM Technique. In Proceedings of the 10th International Conference on Signal Processing and Communication (ICSC), Noida, India, 20–22 February 2025; pp. 642–647. [Google Scholar] [CrossRef]
Figure 1. The architecture of the HAMRFE framework.
Figure 2. The flowchart of the HAMRFE method proposed in this study.
Figure 3. The results of the EMD component in decomposing the voltage transient oscillation signal into five IMFs.
Figure 4. The DWT results in analyzing the voltage sag, swell, and impulse transient signals across various resolution levels.
Figure 5. The S-transform procedure for voltage harmonics and flicker signals shows the conversion from time-domain to time–frequency representation.
Figure 6. Signal contours, peak detection, and gradient-based features that include the geometric aspects of the pure voltage signal, voltage interruptions, harmonics, notching, and sags with harmonics and oscillations.
Figure 7. The results of different entropy metrics with pure voltage signal, voltage interruption, harmonic, notching, and sag with harmonic and oscillation.
Figure 8. The confusion matrix for the SVM algorithm with noiseless signals.
Figure 9. The confusion matrix for the XGBoost algorithm with a 40 dB noise level.
Figure 10. The confusion matrix for the KNN algorithm with a 20 dB noise level.
Figure 11. The 20 most significant features for noiseless signals.
Figure 12. The 20 most significant features at an SNR level of 30 dB.
Figure 13. The 20 most significant features at an SNR level of 10 dB.
Figure 14. Execution time consumption of the five ML algorithms.
Table 1. The numerical equations of the 15 classes of PQDs contained in the dataset.

| PQ Class | Numerical Equation |
| --- | --- |
| Pure sine (C1) | $V(t) = A\sin(\omega t)$ |
| Sag (C2) | $V(t) = [1 - \alpha(u(t-t_1) - u(t-t_2))]\sin(\omega t)$ |
| Swell (C3) | $V(t) = [1 + \alpha(u(t-t_1) - u(t-t_2))]\sin(\omega t)$ |
| Interruption (C4) | $V(t) = [1 - \alpha(u(t-t_1) - u(t-t_2))]\sin(\omega t)$ |
| Harmonics (C5) | $V(t) = \alpha_1\sin(\omega t) + \alpha_3\sin(3\omega t) + \alpha_5\sin(5\omega t) + \alpha_7\sin(7\omega t)$ |
| Oscillatory transient (C6) | $V(t) = \sin(\omega t) + \alpha e^{-(t-t_1)/\tau}\sin(\omega_n(t-t_1))(u(t-t_1) - u(t-t_2))$ |
| Flicker (C7) | $V(t) = [1 + \alpha_f\sin(\beta\omega t)]\sin(\omega t)$ |
| Impulse transient (C8) | $V(t) = \sin(\omega t) + \mathrm{sign}(\sin(\omega t)) \times \sum_{n=0}^{9} K[u(t-(t_1+0.02n)) - u(t-(t_2+0.02n))]$ |
| Notch (C9) | $V(t) = \sin(\omega t) - \mathrm{sign}(\sin(\omega t)) \times \sum_{n=0}^{9} K[u(t-(t_1+0.02n)) - u(t-(t_2+0.02n))]$ |
| Sag with Harmonics (C10) | $V(t) = [1 - \alpha(u(t-t_1) - u(t-t_2))][\alpha_1\sin(\omega t) + \alpha_3\sin(3\omega t) + \alpha_5\sin(5\omega t) + \alpha_7\sin(7\omega t)]$ |
| Swell with Harmonics (C11) | $V(t) = [1 + \alpha(u(t-t_1) - u(t-t_2))][\alpha_1\sin(\omega t) + \alpha_3\sin(3\omega t) + \alpha_5\sin(5\omega t) + \alpha_7\sin(7\omega t)]$ |
| Sag with oscillation (C12) | $V(t) = A\sin(\omega t - \phi)[1 - \alpha(u(t-t_1) - u(t-t_2))] + \beta e^{-(t-t_3)/\tau}\sin(\omega_n(t-t_3) - \nu)(u(t-t_4) - u(t-t_3))$ |
| Swell with oscillation (C13) | $V(t) = A\sin(\omega t - \phi)[1 + \alpha(u(t-t_1) - u(t-t_2))] + \beta e^{-(t-t_3)/\tau}\sin(\omega_n(t-t_3) - \nu)(u(t-t_4) - u(t-t_3))$ |
| Sag with Harmonics and Oscillations (C14) | $V(t) = A[1 - \alpha(u(t-t_1) - u(t-t_2))]\sum_{n=0}^{5}\alpha_n\sin(n\omega t - \nu_n) + \beta e^{-(t-t_3)/\tau}\sin(\omega_n(t-t_3) - \nu)(u(t-t_4) - u(t-t_3))$ |
| Swell with Harmonics and Oscillations (C15) | $V(t) = A\sin(\omega t - \nu)[1 + \beta(u(t-t_1) - u(t-t_2))] + \sum_{n=0}^{5}\alpha_n\sin(n\omega t - \nu_n) + \beta e^{-(t-t_3)/\tau}\sin(\omega_n(t-t_3) - \nu)(u(t-t_4) - u(t-t_3))$ |
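For illustration, two of the disturbance classes above (sag, C2; swell, C3) can be synthesized directly from their parametric models, together with additive white Gaussian noise at a target SNR as used in the noisy experiments. The 3.2 kHz sampling rate, the event window, and the depth alpha = 0.5 below are example values, not the paper's parameters.

```python
# Synthesizing sag (C2) and swell (C3) signals from Table 1's models and
# adding white Gaussian noise at a chosen SNR. Parameter values are examples.
import numpy as np

FS, F0 = 3200.0, 50.0              # sampling rate and fundamental (assumed)
t = np.arange(0, 0.2, 1 / FS)      # ten cycles of a 50 Hz waveform

def u(x):
    """Unit step u(x)."""
    return (x >= 0).astype(float)

def sag(t, alpha=0.5, t1=0.04, t2=0.12, w=2 * np.pi * F0):
    # C2: V(t) = [1 - alpha*(u(t-t1) - u(t-t2))] * sin(w t)
    return (1 - alpha * (u(t - t1) - u(t - t2))) * np.sin(w * t)

def swell(t, alpha=0.5, t1=0.04, t2=0.12, w=2 * np.pi * F0):
    # C3: V(t) = [1 + alpha*(u(t-t1) - u(t-t2))] * sin(w t)
    return (1 + alpha * (u(t - t1) - u(t - t2))) * np.sin(w * t)

def add_awgn(v, snr_db, seed=0):
    """Add white Gaussian noise so the signal-to-noise ratio is snr_db."""
    rng = np.random.default_rng(seed)
    p_noise = np.mean(v ** 2) / 10 ** (snr_db / 10)
    return v + rng.normal(0.0, np.sqrt(p_noise), v.shape)

noisy_sag = add_awgn(sag(t), snr_db=20)    # one 20 dB sag example
```

The same pattern extends to the remaining classes by composing the step, exponential, and harmonic terms in Table 1.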
Table 2. The classification results (Accuracy Mean ± STD) for each classifier with 148 features attained from the proposed HAMRFE method.

| Classifier | Noiseless | 40 dB | 30 dB | 20 dB | 10 dB |
| --- | --- | --- | --- | --- | --- |
| SVM | 99.83% ± 0.0003 | 99.82% ± 0.0005 | 99.80% ± 0.0005 | 99.72% ± 0.0005 | 97.31% ± 0.0020 |
| ANN | 99.76% ± 0.0005 | 99.66% ± 0.0004 | 99.63% ± 0.0006 | 99.59% ± 0.0006 | 97.26% ± 0.0059 |
| RF | 99.68% ± 0.0006 | 99.64% ± 0.0006 | 99.61% ± 0.0004 | 99.55% ± 0.0005 | 96.87% ± 0.0013 |
| XGBoost | 99.41% ± 0.0011 | 99.32% ± 0.0012 | 99.02% ± 0.0014 | 98.63% ± 0.0015 | 92.70% ± 0.0039 |
| KNN | 91.06% ± 0.0034 | 90.72% ± 0.0027 | 89.25% ± 0.0038 | 87.38% ± 0.0036 | 79.10% ± 0.0032 |
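The mean ± STD accuracies in Table 2 suggest repeated randomized evaluation. A common protocol for producing such estimates is repeated stratified train/test splits, sketched below; the 70/30 ratio, 10 repetitions, classifier settings, and synthetic data are assumptions for illustration, not the study's exact protocol.

```python
# Estimating mean +/- standard deviation of accuracy over repeated stratified
# splits. All settings here are assumed placeholders, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.svm import SVC

X, y = make_classification(n_samples=1200, n_features=50, n_informative=30,
                           n_classes=15, n_clusters_per_class=1, random_state=0)

splitter = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
scores = []
for tr, te in splitter.split(X, y):
    clf = SVC(kernel="rbf").fit(X[tr], y[tr])
    scores.append(clf.score(X[te], y[te]))

print(f"accuracy = {np.mean(scores):.4f} +/- {np.std(scores):.4f}")
```

Stratification keeps the 15 class proportions identical across splits, which matters when per-class sample counts are equal by design.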
Table 3. The performance evaluation metrics results of the SVM, ANN, and RF at an SNR level of 30 dB.

| Class | SVM Precision | SVM Recall | SVM F1 | ANN Precision | ANN Recall | ANN F1 | RF Precision | RF Recall | RF F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| C1 | 1 | 1 | 1 | 0.9997 | 1 | 0.9999 | 1 | 1 | 1 |
| C2 | 0.9999 | 0.9996 | 0.9997 | 0.9996 | 0.9983 | 0.9989 | 1 | 0.9999 | 0.9999 |
| C3 | 0.9994 | 0.9999 | 0.9996 | 0.9993 | 1 | 0.9996 | 0.9997 | 1 | 0.9999 |
| C4 | 0.9996 | 1 | 0.9998 | 0.9987 | 1 | 0.9994 | 0.9999 | 1 | 0.9999 |
| C5 | 0.9984 | 1 | 0.9992 | 0.9946 | 0.9997 | 0.9972 | 1 | 1 | 1 |
| C6 | 0.9970 | 0.9991 | 0.9981 | 0.9959 | 0.9990 | 0.9974 | 0.9989 | 1 | 0.9994 |
| C7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| C8 | 0.9996 | 0.9997 | 0.9996 | 0.9993 | 0.9994 | 0.9994 | 1 | 1 | 1 |
| C9 | 1 | 1 | 1 | 0.9997 | 1 | 0.9999 | 1 | 1 | 1 |
| C10 | 0.9909 | 0.9949 | 0.9929 | 0.9911 | 0.9874 | 0.9893 | 0.9676 | 0.9924 | 0.9798 |
| C11 | 0.9881 | 0.9989 | 0.9935 | 0.9905 | 0.9920 | 0.9912 | 0.9604 | 0.9917 | 0.9758 |
| C12 | 0.9991 | 0.9967 | 0.9979 | 0.9987 | 0.9954 | 0.9971 | 1 | 0.9989 | 0.9994 |
| C13 | 0.9991 | 0.9994 | 0.9993 | 0.9991 | 0.9993 | 0.9992 | 0.9999 | 0.9997 | 0.9998 |
| C14 | 0.9964 | 0.9909 | 0.9936 | 0.9926 | 0.9909 | 0.9917 | 0.9922 | 0.9667 | 0.9793 |
| C15 | 0.9988 | 0.9873 | 0.9930 | 0.9923 | 0.9896 | 0.9909 | 0.9914 | 0.9590 | 0.9749 |
Table 4. The performance evaluation metrics results of the SVM, ANN, and RF at an SNR level of 10 dB.

| Class | SVM Precision | SVM Recall | SVM F1 | ANN Precision | ANN Recall | ANN F1 | RF Precision | RF Recall | RF F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| C1 | 0.9085 | 0.9670 | 0.9368 | 0.9137 | 0.9621 | 0.9373 | 0.8990 | 0.9650 | 0.9308 |
| C2 | 0.9581 | 0.9446 | 0.9512 | 0.9612 | 0.9367 | 0.9488 | 0.9711 | 0.8996 | 0.9339 |
| C3 | 0.9881 | 0.9984 | 0.9933 | 0.9894 | 0.9973 | 0.9933 | 0.9947 | 0.9967 | 0.9957 |
| C4 | 0.9777 | 0.9893 | 0.9835 | 0.9749 | 0.9880 | 0.9814 | 0.9384 | 0.9943 | 0.9655 |
| C5 | 0.9656 | 0.9789 | 0.9722 | 0.9626 | 0.9824 | 0.9724 | 0.9552 | 0.9797 | 0.9673 |
| C6 | 0.9741 | 0.9836 | 0.9788 | 0.9777 | 0.9827 | 0.9802 | 0.9587 | 0.9837 | 0.9710 |
| C7 | 0.9745 | 0.9550 | 0.9646 | 0.9701 | 0.9529 | 0.9614 | 0.9715 | 0.9469 | 0.9590 |
| C8 | 0.9920 | 0.9750 | 0.9834 | 0.9903 | 0.9799 | 0.9851 | 0.9940 | 0.9760 | 0.9849 |
| C9 | 0.9981 | 0.9904 | 0.9943 | 0.9977 | 0.9954 | 0.9966 | 0.9989 | 0.9947 | 0.9968 |
| C10 | 0.9478 | 0.9560 | 0.9519 | 0.9505 | 0.9471 | 0.9488 | 0.9168 | 0.9451 | 0.9307 |
| C11 | 0.9591 | 0.9866 | 0.9726 | 0.9633 | 0.9814 | 0.9723 | 0.9395 | 0.9849 | 0.9616 |
| C12 | 0.9825 | 0.9629 | 0.9726 | 0.9809 | 0.9673 | 0.9740 | 0.9801 | 0.9491 | 0.9644 |
| C13 | 0.9933 | 0.9881 | 0.9907 | 0.9939 | 0.9904 | 0.9921 | 0.9931 | 0.9924 | 0.9928 |
| C14 | 0.9872 | 0.9579 | 0.9723 | 0.9808 | 0.9623 | 0.9714 | 0.9826 | 0.9204 | 0.9505 |
| C15 | 0.9867 | 0.9533 | 0.9697 | 0.9824 | 0.9590 | 0.9705 | 0.9820 | 0.9333 | 0.9570 |
Table 5. The classification results (Accuracy Mean ± STD) for the eight feature set sizes using SVM at different SNR levels.

| Feature No. | Noiseless | 40 dB | 30 dB | 20 dB | 10 dB |
| --- | --- | --- | --- | --- | --- |
| 5 | 89.67% ± 0.0026 | 88.19% ± 0.0031 | 78.49% ± 0.0038 | 76.72% ± 0.0052 | 67.51% ± 0.0053 |
| 10 | 93.09% ± 0.0023 | 93.06% ± 0.0029 | 90.55% ± 0.0023 | 90.41% ± 0.0032 | 83.19% ± 0.0040 |
| 20 | 95.73% ± 0.0018 | 95.61% ± 0.0014 | 94.38% ± 0.0021 | 93.79% ± 0.0022 | 88.95% ± 0.0030 |
| 50 | 99.13% ± 0.0011 | 99.01% ± 0.0009 | 98.99% ± 0.0008 | 98.90% ± 0.0010 | 96.03% ± 0.0021 |
| 75 | 99.70% ± 0.0004 | 99.67% ± 0.0004 | 99.63% ± 0.0005 | 99.53% ± 0.0005 | 97.35% ± 0.0013 |
| 100 | 99.80% ± 0.0005 | 99.79% ± 0.0005 | 99.75% ± 0.0006 | 99.71% ± 0.0006 | 97.67% ± 0.0014 |
| 125 | 99.86% ± 0.0002 | 99.85% ± 0.0002 | 99.82% ± 0.0004 | 99.74% ± 0.0003 | 97.92% ± 0.0012 |
| 148 | 99.83% ± 0.0003 | 99.82% ± 0.0005 | 99.80% ± 0.0005 | 99.72% ± 0.0005 | 97.79% ± 0.0020 |
Table 6. A comparison between HAMRFE and other studies in classifying PQDs.

| Study | Dataset Used | No. of Classes | Methods Used | Accuracy |
| --- | --- | --- | --- | --- |
| M. Manjula and A. V. R. S. Sarma (2012) [17] | Synthetic data, 1500 samples | 10 | EMD-HT | 98.3% |
| S. Majumdar et al. (2018) [43] | Synthetic data | 4 | EMD-WT | Overall classification accuracy of 97.5% |
| Saxena et al. (2022) [18] | Synthetic data, 2500 samples (500 per class) | 5 | HT-WT | 96.2% |
| Molu et al. (2024) [13] | Synthetic data and real-time validation on Xilinx Zynq-7000 SoC FPGA | 7 | DWT-ELM | Overall classification accuracy of 99.69% |
| Xu et al. (2024) [19] | Synthetic data, 1000 samples per class | 6 | WPT-LMD | Up to 99.4% at SNR = 40 dB |
| V. Mishra et al. (2025) [44] | Synthetic data, 1000 signals per class | 19 | Improved eigenvalue decomposition of Hankel matrix (IEVDHM)-HT | 92.48% with noiseless signals and between 91.07% and 88.01% with noisy signals |
| Proposed method (HAMRFE) (2026) | Synthetic data, 3500 samples per class | 15 | Hybrid Adaptive Multi-Resolution Feature Extraction | 99.86% with noiseless signals and between 99.85% and 97.92% with noisy signals |