A Novel Hybrid Adaptive Multi-Resolution Feature Extraction Method for Power Quality Disturbance Detection
Abstract
1. Introduction
1.1. Background
1.2. Related Works
1.3. Main Contributions
- An extensive feature extraction framework that integrates various complementary strategies to identify the unique attributes of diverse PQDs.
- A systematic feature selection process that identifies the most distinguishing features for the classification of PQD signals.
- A comprehensive assessment of the developed methodology utilizing various classifiers and noise levels.
- Examination of the optimal feature set size that balances computational efficiency and classification accuracy.
2. Materials and Methods
2.1. HAMRFE Framework Overview
- Step 1: Signal Preprocessing: The HAMRFE method starts with generating synthetic PQD signals, which are then preprocessed before being fed to the feature extraction components. The preprocessing procedures encompass noise augmentation (for robustness evaluation), normalization, and segmentation.
- Step 2: Feature Extraction: The preprocessed PQD signals are fed to the six HAMRFE components for feature extraction, shown in Figure 1 together with typical outputs from each approach. Every signal is processed by each component, yielding statistical, spectral, morphological, and entropy-based features.
- Step 3: Feature Ranking: The features extracted in Step 2 are ranked by feature importance, which reflects their relative discriminative value, and optimal feature subsets of differing sizes (5, 10, 20, 25, 50, 75, 100, 125, and 148 features) are selected.
- Step 4: Classification Model: The considered ML algorithms, including SVM, ANN, RF, Extreme Gradient Boosting (XGBoost) version 1.0.9, and KNN, are trained using the training dataset. Each of the algorithms is exposed to the same training and testing datasets.
- Step 5: Performance Evaluation: The developed classification models are evaluated using the testing dataset. The evaluation metrics, including accuracy, precision, recall, F1-Score, and confusion matrix, are employed to evaluate classification performance.
2.2. Adaptive Signal Decomposition
- Step 1: Determine Extrema: Identify all local maxima and minima of the raw PQD signal $x(t)$.
- Step 2: Create Envelopes: Utilize cubic spline interpolation to independently connect the local maxima and minima, constructing the upper envelope $e_u(t)$ and the lower envelope $e_l(t)$. The average of these two envelope lines is calculated as $m(t) = \frac{e_u(t) + e_l(t)}{2}$.
- Step 3: Extract the Detail Component: Subtract the mean envelope from the original signal to derive a candidate component, $h(t) = x(t) - m(t)$. This stage seeks to isolate the oscillatory mode contained within the signal.
- Step 4: Verify IMF Conditions: The candidate function $h(t)$ is evaluated to ascertain its compliance with the two predetermined conditions for IMFs:
  - The number of extrema and the number of zero-crossings in $h(t)$ must be equal or differ by no more than one.
  - The mean of the upper and lower envelopes must be zero at every point of the signal.
- Step 5: If $h(t)$ meets these criteria, it is designated the first intrinsic mode function, $c_1(t)$. If not, the sifting process (Steps 1–4) is repeated on $h(t)$ until the IMF criteria are satisfied.
- Step 6: Extract IMF and Repeat: After extracting an IMF, subtract it from the original signal to produce a residual, $r_1(t) = x(t) - c_1(t)$. The residual then becomes the new signal, and Steps 1–4 are reiterated to derive additional IMFs until the residual is either a monotonic function or possesses fewer than two extrema, indicating the conclusion of the decomposition process.
- Step 7: Reconstruction: The original signal can be restored by aggregating all IMFs and the final residual: $x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t)$.
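The sifting procedure above can be sketched in a few lines of NumPy/SciPy. The stopping tolerance, iteration caps, and function names below are illustrative choices, not the paper's settings:

```python
# Minimal EMD sketch following Steps 1-7 above (illustrative, not the
# paper's implementation).
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def mean_envelope(x):
    """Steps 1-2: cubic-spline envelopes through local maxima/minima."""
    t = np.arange(len(x))
    mx = argrelextrema(x, np.greater)[0]
    mn = argrelextrema(x, np.less)[0]
    if len(mx) < 4 or len(mn) < 4:       # too few extrema to build splines
        return None
    upper = CubicSpline(mx, x[mx])(t)
    lower = CubicSpline(mn, x[mn])(t)
    return 0.5 * (upper + lower)

def sift(x, max_sift=50, tol=0.05):
    """Steps 3-5: repeat sifting until the candidate behaves like an IMF."""
    h = x.copy()
    for _ in range(max_sift):
        m = mean_envelope(h)
        if m is None:
            break
        h_new = h - m                    # Step 3: h(t) = x(t) - m(t)
        # Step 4 proxy: stop when the mean envelope is near zero everywhere
        if np.mean(m**2) < tol * np.mean(h**2):
            return h_new
        h = h_new
    return h

def emd(x, max_imfs=6):
    """Steps 6-7: extract IMFs, subtract, stop on a (near-)monotonic residual."""
    imfs, r = [], np.asarray(x, dtype=float)
    for _ in range(max_imfs):
        n_extrema = (len(argrelextrema(r, np.greater)[0])
                     + len(argrelextrema(r, np.less)[0]))
        if n_extrema < 2:                # residual is monotonic: stop
            break
        imf = sift(r)
        imfs.append(imf)
        r = r - imf                      # Step 6: r(t) = x(t) - c(t)
    return imfs, r

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)
imfs, residual = emd(x)
# Step 7: summing all IMFs and the residual recovers the signal
reconstruction = sum(imfs) + residual
```

The reconstruction identity holds exactly by construction, since each IMF is subtracted from the running residual.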
2.3. Multi-Resolution Wavelet Analysis
- Approximation coefficients $a_L$: represent the low-frequency, coarse characteristics of the signal.
- Detail coefficients $d_j$ ($j = 1, \dots, L$): represent the high-frequency, fine-scale characteristics at each decomposition level.
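A minimal sketch of this multi-resolution split into approximation and detail coefficients, using the simple Haar filter pair for brevity; the paper uses db4 at level 5, which in practice would be obtained with PyWavelets via `pywt.wavedec(signal, 'db4', level=5)`:

```python
# Multi-resolution decomposition sketch with orthonormal Haar filters
# (stand-in for the paper's db4, level-5 decomposition).
import numpy as np

def haar_step(x):
    """One decomposition level: approximation (low-pass) and detail (high-pass)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavedec(x, level):
    """Return the coarse approximation a_L and detail coefficients d_1..d_L."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(level):
        a, d = haar_step(a)
        details.append(d)
    return a, details

x = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 1024, endpoint=False))
approx, details = wavedec(x, 5)
# orthonormal filters preserve total energy across all sub-bands
total_energy = np.sum(approx**2) + sum(np.sum(d**2) for d in details)
```

Energy preservation across sub-bands is what makes per-level energies (and the wavelet energy entropy of Section 2.6) meaningful features.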
2.4. Time–Frequency Analysis
Extraction of Time–Frequency Matrix
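As a hedged illustration of extracting scalar features from a time–frequency matrix, the sketch below uses SciPy's standard STFT as a stand-in for the modified S-transform of Algorithm 1 (the γ-scaled Gaussian window is omitted); the sampling rate, window length, and feature names are assumed values:

```python
# Time-frequency matrix sketch: STFT stand-in for the modified S-transform.
import numpy as np
from scipy.signal import stft

fs = 3200                                 # assumed: 64 samples per 50 Hz cycle
t = np.arange(0, 0.2, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)
x[len(x) // 2:] *= 0.5                    # sag-like amplitude drop halfway through

f, tau, Z = stft(x, fs=fs, nperseg=256)
S = np.abs(Z)                             # time-frequency magnitude matrix

# typical scalar descriptors extracted from the matrix
features = {
    "max_magnitude": S.max(),
    "mean_magnitude": S.mean(),
    "dominant_frequency": f[np.argmax(S.sum(axis=1))],
}
```

With a 256-sample window at 3200 Hz, the frequency resolution is 12.5 Hz, so the 50 Hz fundamental falls exactly on a bin.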
2.5. Morphological Features Extraction
1. Peak and Crest Factors:
   - Peak Factor: Quantifies the ratio of the signal's maximum amplitude to its RMS value. It signifies the occurrence of transitory spikes or anomalous peaks, typical of specific disturbances.
   - Crest Factor: Analogous to the peak factor, it denotes the severity of waveform peaks in relation to the RMS value. Increased crest factors frequently indicate transient or impulsive disturbances.
2. Duration and Rise Time:
   - Duration: The total time during which the signal deviates beyond a specified threshold, which is essential for distinguishing between brief transient disturbances and prolonged faults.
   - Rise Time: The duration required for the signal to ascend from a defined low level to a higher level during a disturbance event, reflecting the features of the disturbance onset.
3. Zero-Crossing Features: The zero-crossing rate and distribution are employed to examine the frequency of signal polarity alterations, indicating the oscillatory characteristics of the waveform. Abrupt surges in zero-crossings may signify impulsive occurrences or transitory disruptions.
4. Symmetry, Skewness, and Kurtosis of Waveforms: Evaluate the symmetry of the waveforms to distinguish between symmetrical and asymmetrical disturbances. Skewness measures the asymmetry of the amplitude distribution, whereas kurtosis evaluates the sharpness of the waveform's peak.
5. Shape Characterizations and Morphological Procedures:
   - Envelope Characteristics: The amplitude envelope can be examined for its peaks, troughs, and overall contour, elucidating the transitory dynamics of the disturbance.
   - Morphological Transformations: Techniques such as dilation and erosion from mathematical morphology are applied to waveform signals to extract features associated with signal peaks, valleys, and structural alterations, emphasizing particular disturbance patterns.
6. Area and Energy of Disturbance Segments: Computing the area beneath the waveform during a disturbance interval yields insight into its shape and energy characteristics. These parameters can differentiate between disturbance types according to their amplitude and duration.
7. Number and Distribution of Local Extrema: Assessing the quantity of local maxima and minima, as well as their temporal distribution, aids in delineating the waveform's intricacy and transient characteristics.
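Several of the morphological descriptors above reduce to short NumPy/SciPy computations. A sketch (the feature names and definitions are illustrative, not the paper's exact formulations):

```python
# Sketch of selected morphological descriptors: crest factor, zero
# crossings, skewness/kurtosis, Hilbert envelope, and local-maxima count.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis, skew

def morphological_features(x):
    rms = np.sqrt(np.mean(x**2))
    envelope = np.abs(hilbert(x))            # amplitude envelope
    interior = x[1:-1]
    return {
        "crest_factor": np.max(np.abs(x)) / rms,
        "zero_crossings": int(np.sum(x[:-1] * x[1:] < 0)),
        "skewness": skew(x),
        "kurtosis": kurtosis(x),
        "envelope_peak": envelope.max(),
        "n_local_maxima": int(np.sum((interior > x[:-2]) & (interior > x[2:]))),
    }

t = np.linspace(0, 0.2, 2000, endpoint=False)    # 10 cycles of a 50 Hz sine
feats = morphological_features(np.sin(2 * np.pi * 50 * t + 0.1))
```

For a pure sine, the crest factor is √2, with two zero crossings and one local maximum per cycle, which is a useful sanity check before applying the same descriptors to disturbance signals.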
2.6. Entropy-Based Feature Extraction
1. Shannon Entropy: Shannon entropy quantifies the total uncertainty or disorder within a signal, determined by its amplitude distribution. The calculation involves initially creating a probability distribution of the signal's amplitude values, followed by the use of the Shannon entropy formula shown in Equation (6).
2. Spectral Entropy: Spectral entropy generalizes the notion of Shannon entropy to the frequency domain. Following the calculation of the power spectral density of the signal, spectral entropy assesses the distribution of spectral power, offering a perspective on the complexity of the signal's frequency content. The spectral entropy formula is shown in Equation (7).
3. Sample Entropy: Sample entropy assesses the regularity and predictability of a signal by evaluating the probability that analogous sequences retain their similarity in consecutive points. This entails analyzing sequences within the data and determining the likelihood that structures of a specified length remain proximate when augmented by an additional point. Reduced sample entropy signifies regular, predictable signals, whereas elevated values indicate complexity or randomness. The sample entropy formula is shown in Equation (8).
4. Permutation Entropy: Permutation entropy analyzes the temporal structure of a signal by evaluating the permutation patterns of amplitude values across sliding windows. It assesses complexity according to the variety of ordinal patterns. The permutation entropy formula is shown in Equation (9).
5. Wavelet Energy Entropy: Wavelet energy entropy assesses the distribution of a signal's energy among wavelet coefficients at different scales. Through the application of the WT to decompose the signal, the energy at each scale is calculated, and the entropy of this energy distribution indicates the complexity of the signal. The wavelet energy entropy formula is shown in Equation (10).
6. IMF Entropy: Originating from EMD, IMF entropy evaluates the complexity of the intrinsic mode functions derived from the signal. Upon decomposing the signal into IMFs, the energy distribution across these modes is examined, and the calculated entropy indicates the signal's irregularity. Elevated IMF entropy signifies more intricate or disrupted signals. The IMF entropy formula is shown in Equation (11).
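Three of the entropy measures above (Shannon, spectral, and permutation entropy) can be sketched as follows; the histogram bin count and embedding order are illustrative choices:

```python
# Entropy-feature sketches in the spirit of Equations (6), (7), and (9).
import numpy as np

def shannon_entropy(x, bins=32):
    """Shannon entropy of the amplitude histogram: H = -sum p_i log2 p_i."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log2(p))

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def permutation_entropy(x, order=3):
    """Entropy of ordinal (argsort) patterns over sliding windows."""
    n = len(x) - order + 1
    patterns = np.array([np.argsort(x[i:i + order]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / n
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)                      # broadband, irregular
tone = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 4000, endpoint=False))
```

White noise is spectrally flat and ordinally unstructured, so both its spectral and permutation entropies far exceed those of a pure tone, which is exactly the contrast these features exploit between clean and disturbed waveforms.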
2.7. Feature Selection and Optimization
- Feature weight initialization: set $W[f] = 0$ for every feature $f$.
- For $i = 1$ to $m$ (the total number of iterations), select an instance $R_i$ at random and identify its $k$ nearest hits $H_j$ (same class) and, for each other class $C$, its $k$ nearest misses $M_j(C)$. Hence, for every attribute $f$, update the weight as
  $W[f] \leftarrow W[f] - \sum_{j=1}^{k} \frac{\operatorname{diff}(f, R_i, H_j)}{m\,k} + \sum_{C \neq \operatorname{class}(R_i)} \frac{P(C)}{1 - P(\operatorname{class}(R_i))} \sum_{j=1}^{k} \frac{\operatorname{diff}(f, R_i, M_j(C))}{m\,k}$
- Arrange features according to their ReliefF weights in descending order.
- For various feature subset sizes (5, 10, 20, 25, 50, 75, 100, 125, and 148), select the highest-ranked features and assess classification efficacy by 10-fold cross-validation, computing the accuracy, precision, recall, and F1-Score.
- Determine the feature subset size that optimally balances performance and computational complexity.
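A compact ReliefF sketch following these steps, with a range-normalized absolute difference as diff(·) for numeric features; names and defaults are illustrative, and this is the standard algorithm rather than the paper's modified variant:

```python
# Minimal ReliefF sketch: weight initialization, random-instance sampling,
# k nearest hits/misses per class, prior-weighted weight updates.
import numpy as np

def relieff(X, y, n_iter=100, k=5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # for diff normalization
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / n
    w = np.zeros(d)                                # step 1: W[f] = 0
    for _ in range(n_iter):                        # step 2: random instances
        i = rng.integers(n)
        diffs = np.abs(X - X[i]) / span
        dist = diffs.sum(axis=1)
        dist[i] = np.inf                           # exclude the instance itself
        hits = np.argsort(np.where(y == y[i], dist, np.inf))[:k]
        w -= diffs[hits].mean(axis=0) / n_iter
        p_same = priors[classes == y[i]][0]
        for c, p_c in zip(classes, priors):
            if c == y[i]:
                continue
            misses = np.argsort(np.where(y == c, dist, np.inf))[:k]
            w += (p_c / (1 - p_same)) * diffs[misses].mean(axis=0) / n_iter
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 5))
y = (X[:, 0] > 0).astype(int)          # only feature 0 determines the class
w = relieff(X, y)
```

On this toy problem, the informative feature receives a clearly positive weight while the irrelevant ones stay near zero, so ranking by weight recovers the correct feature ordering.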
```
Algorithm 1: Hybrid Adaptive Multi-Resolution Feature Extraction (HAMRFE)

Input:  signals[]  <- PQD signals in time-domain format
        SNR_level  <- signal-to-noise ratio level for noise addition (optional)
Output: feature_vector        <- extracted features for classification
        selected_features     <- optimized feature subset
        classification_result <- final classification labels

function HAMRFE(signals[], SNR_level)
    feature_vector = []
    // Add noise if SNR_level is specified
    if SNR_level is not Inf then
        signals = add_gaussian_noise(signals, SNR_level)
    end if
    // Process each signal
    for each signal in signals[] do
        // Component 1: Adaptive Signal Decomposition
        IMFs[] = EMD_decomposition(signal)
        IMF_features = extract_IMF_statistics(IMFs[])
        IMF_features.append(compute_IMF_correlations(IMFs[]))
        // Component 2: Multi-Resolution Wavelet Analysis
        [approximation, details[]] = wavelet_decomposition(signal, 'db4', 5)
        wavelet_features = extract_wavelet_features(approximation, details[])
        // Component 3: Time-Frequency Analysis
        S_matrix = modified_S_transform(signal, gamma = 0.9)
        stransform_features = extract_time_frequency_features(S_matrix)
        // Component 4: Morphological Feature Extraction
        envelope = extract_signal_envelope(signal)
        gradient = compute_gradient(signal)
        morphological_features = extract_morphological_features(signal, envelope, gradient)
        // Component 5: Entropy-Based Feature Extraction
        entropy_features = compute_entropy_measures(signal, IMFs[], details[])
        // Combine all features
        signal_features = concatenate(IMF_features, wavelet_features, stransform_features,
                                      morphological_features, entropy_features)
        feature_vector.append(signal_features)
    end for
    // Component 6: Feature Selection and Optimization
    feature_weights = apply_modified_reliefF(feature_vector)
    selected_features = select_optimal_feature_subset(feature_vector, feature_weights)
    // Train classifier and predict
    model = train_SVM(selected_features, RBF_kernel, C = 100, gamma = 0.01)
    classification_result = model.predict(selected_features)
    return feature_vector, selected_features, classification_result
end function
```
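The noise-injection step of Algorithm 1 scales zero-mean white Gaussian noise so that the ratio of signal power to noise power matches the target SNR in dB. A minimal sketch (the function name follows the pseudocode; the paper's exact implementation is not shown):

```python
# Sketch of add_gaussian_noise: SNR_dB = 10*log10(P_signal / P_noise),
# so P_noise = P_signal / 10^(SNR_dB / 10).
import numpy as np

def add_gaussian_noise(signal, snr_db, seed=0):
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal**2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

t = np.linspace(0, 0.2, 2000, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)
noisy = add_gaussian_noise(clean, snr_db=20)
measured_snr = 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2))
```

The measured SNR of the generated signal lands close to the 20 dB target, deviating only by the sampling variability of the noise-power estimate.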
3. Experimental Setup
3.1. Classification Algorithms
3.1.1. Support Vector Machine
3.1.2. Artificial Neural Networks
3.1.3. Random Forest
3.1.4. Extreme Gradient Boosting
3.1.5. The K-Nearest Neighbor
3.2. Performance Evaluation Metrics
- Overall Accuracy: It quantifies the proportion of correct predictions made by a particular model: $\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$.
- Recall: It evaluates the effectiveness of the algorithm for each class, assessing the ratio of true positives that were accurately recognized: $\mathrm{Recall} = \frac{TP}{TP + FN}$.
- Precision (P): It quantifies the algorithm's accuracy in classifying each class, measuring the ratio of accurate positive identifications: $\mathrm{Precision} = \frac{TP}{TP + FP}$.
- F1-Score: It assesses the equilibrium between precision and recall as their harmonic mean: $F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$.
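All four metrics can be computed directly from the multi-class confusion matrix, with precision and recall evaluated per class in one-vs-rest fashion. A minimal NumPy sketch (function name is illustrative):

```python
# Metrics from a multi-class confusion matrix (rows: true, cols: predicted).
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for yt, yp in zip(y_true, y_pred):
        cm[yt, yp] += 1
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # TP / (TP + FP)
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # TP / (TP + FN)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(
    [0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], n_classes=3)
```

On this toy example, four of six predictions are correct (accuracy 2/3), class 2 is predicted only when true (precision 1.0), and every true instance of class 1 is recovered (recall 1.0).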
4. Results and Discussion
4.1. Classifier Performance Comparison
4.2. Noise Robustness Analysis
4.3. Feature Set Size Optimization
4.4. Analysis of Execution Time
5. Conclusions
- Regarding the performance of the prediction algorithms, the SVM outperforms all other classification algorithms across all noise levels. The performance gap between the SVM and the other classifiers grows as noise levels increase, highlighting the SVM's superior noise resilience when employed with HAMRFE features.
- The findings indicated the exceptional efficacy of the SVM utilizing HAMRFE features, with classification accuracies of 99.86% for noiseless signals, 99.85% at 40 dB, 99.82% at 30 dB, 99.74% at 20 dB, and 97.92% at 10 dB noise levels.
- The optimization of feature set size revealed that a set of 125 features is optimal across all SNR levels, followed by sets of 148 and 100 features, offering an effective balance between computational complexity and classification accuracy. This finding is crucial for real-time applications with potentially limited processing resources.
Funding
Data Availability Statement
Conflicts of Interest
References
- Saeed, F.; Aldera, S.; Alkhatib, M.; Al-Shamma’a, A.A.; Hussein Farh, H.M. A Data-Driven Convolutional Neural Network Approach for Power Quality Disturbance Signal Classification (DeepPQDS-FKTNet). Mathematics 2023, 11, 4726. [Google Scholar] [CrossRef]
- Ding, Z.; Ji, T.; Li, M. Improved Hadamard Decomposition and Its Application in Data Compression in New-Type Power Systems. Mathematics 2025, 13, 671. [Google Scholar] [CrossRef]
- Madgula, S.; Veeramsetty, V.; Durgam, R. Signal Processing Approaches for Power Quality Disturbance Classification: A Comprehensive Review. Results Eng. 2025, 25, 104569. [Google Scholar] [CrossRef]
- Wang, T.; Zhuo, J.; Hou, Y.; Lu, Z.; Li, Y. Power Quality Disturbance Classification via a Time-Frequency Feature-Fused Transformer Model with Cross-Attention Mechanism. Electr. Power Syst. Res. 2026, 251, 112330. [Google Scholar] [CrossRef]
- Satyanrayana, M.; Veeramsetty, V.; Rajababu, D. The Analysis of Short Duration Power Quality Disturbances Using Short Time Fourier Transform. In Proceedings of the IEEE 1st International Conference on Smart and Sustainable Developments in Electrical Engineering (SSDEE), Dhanbad, India, 28 February–2 March 2025; pp. 1–6. [Google Scholar] [CrossRef]
- Medina-Molina, J.A.; Reyes-Archundia, E.; Gutiérrez-Gnecchi, J.A.; Rodríguez-Herrejón, J.A.; Chávez-Báez, M.V.; Olivares-Rojas, J.C.; Guerrero-Rodríguez, N.F. Optimal Selection of Sampling Rates and Mother Wavelet for an Algorithm to Classify Power Quality Disturbances. Computers 2025, 14, 138. [Google Scholar] [CrossRef]
- Li, J.; Liu, H.; Wang, D.; Bi, T. Classification of Power Quality Disturbance Based on S-Transform and Convolution Neural Network. Front. Energy Res. 2021, 9, 708131. [Google Scholar] [CrossRef]
- Lin, W.M.; Wu, C.H. Fast Support Vector Machine for Power Quality Disturbance Classification. Appl. Sci. 2022, 12, 11649. [Google Scholar] [CrossRef]
- Patil, P.; Muley, K.; Agrawal, R. Identification of Power Quality Disturbance Using Neural Network. In Proceedings of the 3rd International conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 12–14 June 2019; pp. 990–996. [Google Scholar] [CrossRef]
- Akbarpour, A.; Nafar, M.; Simab, M. Multiple Power Quality Disturbances Detection and Classification with Fluctuations of Amplitude and Decision Tree Algorithm. Electr. Eng. 2022, 104, 2333–2343. [Google Scholar] [CrossRef]
- Liu, J.; Song, H.; Sun, H.; Zhao, H. High-Precision Identification of Power Quality Disturbances under Strong Noise Environment Based on FastICA and Random Forest. IEEE Trans. Industr. Inform. 2021, 17, 377–387. [Google Scholar] [CrossRef]
- Dekhandji, F.Z. Detection of Power Quality Disturbances Using Discrete Wavelet Transform. In Proceedings of the 5th International Conference on Electrical Engineering—Boumerdes (ICEE-B), Boumerdes, Algeria, 29–31 October 2017; pp. 1–5. [Google Scholar] [CrossRef]
- Molu, R.J.J.; Mbasso, W.F.; Saatong, K.T.; Dzonde Naoussi, S.R.; Alruwaili, M.; Elrashidi, A.; Nureldeen, W. Enhancing Power Quality Monitoring with Discrete Wavelet Transform and Extreme Learning Machine: A Dual-Stage Pattern Recognition Approach. Front. Energy Res. 2024, 12, 1435704. [Google Scholar] [CrossRef]
- Kurbatskii, V.G.; Sidorov, D.N.; Spiryaev, V.A.; Tomin, N.V. On the Neural Network Approach for Forecasting of Nonstationary Time Series on the Basis of the Hilbert-Huang Transform. Autom. Remote Control 2011, 72, 1405–1414. [Google Scholar] [CrossRef]
- Kurbatsky, V.G.; Sidorov, D.N.; Spiryaev, V.A.; Tomin, N.V. Forecasting Nonstationary Time Series Based on Hilbert-Huang Transform and Machine Learning. Autom. Remote Control 2014, 75, 922–934. [Google Scholar] [CrossRef]
- Manjula, M.; Mishra, S.; Sarma, A.V.R.S. Empirical Mode Decomposition with Hilbert Transform for Classification of Voltage Sag Causes Using Probabilistic Neural Network. Int. J. Electr. Power Energy Syst. 2013, 44, 597–603. [Google Scholar] [CrossRef]
- Manjula, M.; Sarma, A.V.R.S. Assessment of Power Quality Events by Empirical Mode Decomposition Based Neural Network. In Proceedings of the World Congress on Engineering, London, UK, 2012; Volume 2. [Google Scholar]
- Saxena, A.; Alshamrani, A.M.; Alrasheedi, A.F.; Alnowibet, K.A.; Mohamed, A.W. A Hybrid Approach Based on Principal Component Analysis for Power Quality Event Classification Using Support Vector Machines. Mathematics 2022, 10, 2780. [Google Scholar] [CrossRef]
- Xu, Q.; Zhu, F.; Jiang, W.; Pan, X.; Li, P.; Zhou, X.; Wang, Y. Efficient Identification Method for Power Quality Disturbance: A Hybrid Data-Driven Strategy. Processes 2024, 12, 1395. [Google Scholar] [CrossRef]
- Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis. Proc. R. Soc. A Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
- Huang, B.; Kunoth, A. An Optimization Based Empirical Mode Decomposition Scheme. J. Comput. Appl. Math. 2013, 240, 174–183. [Google Scholar] [CrossRef]
- Priyadarshini, M.S.; Bajaj, M.; Prokop, L.; Berhanu, M. Perception of Power Quality Disturbances Using Fourier, Short-Time Fourier, Continuous and Discrete Wavelet Transforms. Sci. Rep. 2024, 14, 3443. [Google Scholar] [CrossRef]
- Balubaid, M.; Sattari, M.A.; Taylan, O.; Bakhsh, A.A.; Nazemi, E. Applications of Discrete Wavelet Transform for Feature Extraction to Increase the Accuracy of Monitoring Systems of Liquid Petroleum Products. Mathematics 2021, 9, 3215. [Google Scholar] [CrossRef]
- Xu, W.; Wang, R.; Zhang, Y.; Wang, J.; Wang, Z.; Wu, X.; Li, W.; Li, X.; Zhang, M.; Sun, L. A Power Quality Disturbance Classification Method Using a Hybrid Transformer and Discrete Wavelet Transform Model. Electr. Power Syst. Res. 2026, 253, 112547. [Google Scholar] [CrossRef]
- Stockwell, R.G.; Mansinha, L.; Lowe, R.P. Localization of the Complex Spectrum: The S Transform. IEEE Trans. Signal Process. 1996, 44, 998–1001. [Google Scholar] [CrossRef]
- Samal, L.; Palo, H.K.; Sahu, B.N.; Samal, D. The Classification of Power Quality Disturbances Using Statistical S-Transform and Probabilistic Neural Network. In Proceedings of the 1st Odisha International Conference on Electrical Power Engineering, Communication and Computing Technology (ODICON), Bhubaneswar, India, 8–9 January 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Yong, H.; Yongqiang, L.; Zhiping, H. Detection and Location of Power Quality Disturbances Based on Mathematical Morphology and Hilbert-Huang Transform. In Proceedings of the 9th International Conference on Electronic Measurement and Instruments, Beijing, China, 16–19 August 2009; pp. 2319–2324. [Google Scholar] [CrossRef]
- Ebrahimpour Moghaddam Tasouj, P.; Soysal, G.; Eroğul, O.; Yetkin, S. ECG Signal Analysis for Detection and Diagnosis of Post-Traumatic Stress Disorder: Leveraging Deep Learning and Machine Learning Techniques. Diagnostics 2025, 15, 1414. [Google Scholar] [CrossRef]
- Perez-Anaya, E.; Saucedo-Dorantes, J.J.; Jaen-Cuellar, A.Y.; Romero-Troncoso, R.d.J.; Elvira-Ortiz, D.A. An Entropy–Envelope Approach for the Detection and Quantification of Power Quality Disturbances. Appl. Sci. 2025, 15, 12101. [Google Scholar] [CrossRef]
- Sharma, R.; Pachori, R.B.; Acharya, U.R. Application of Entropy Measures on Intrinsic Mode Functions for the Automated Identification of Focal Electroencephalogram Signals. Entropy 2015, 17, 669–691. [Google Scholar] [CrossRef]
- Urbanowicz, R.J.; Meeker, M.; La Cava, W.; Olson, R.S.; Moore, J.H. Relief-Based Feature Selection: Introduction and Review. J. Biomed. Inform. 2018, 85, 189–203. [Google Scholar] [CrossRef]
- Kononenko, I. Estimating Attributes: Analysis and Extensions of RELIEF. In European Conference on Machine Learning; Springer: Berlin/Heidelberg, Germany, 1994; pp. 171–182. [Google Scholar] [CrossRef]
- Chen, S.; Li, Z.; Pan, G.; Xu, F. Power Quality Disturbance Recognition Using Empirical Wavelet Transform and Feature Selection. Electronics 2022, 11, 174. [Google Scholar] [CrossRef]
- Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A Comprehensive Survey on Support Vector Machine Classification: Applications, Challenges and Trends. Neurocomputing 2020, 408, 189–215. [Google Scholar] [CrossRef]
- Saravanan, K.; Sasithra, S. Review on Classification Based on Artificial Neural Networks. Int. J. Ambient Syst. Appl. 2014, 2, 11–18. [Google Scholar] [CrossRef]
- Yang, D.; Lü, S.; Wei, J.; Zheng, L.; Gao, Y. Detection and Classification of Power Quality Disturbances Based on Improved Adaptive S-Transform and Random Forest. Energies 2025, 18, 4088. [Google Scholar] [CrossRef]
- Chowdhury, A.R.; Chatterjee, T.; Banerjee, S. A Random Forest Classifier-Based Approach in the Detection of Abnormalities in the Retina. Med. Biol. Eng. Comput. 2018, 57, 193–203. [Google Scholar] [CrossRef] [PubMed]
- Yu, J.; Yu, Z.; Ye, W. Enhanced Recognition of Power Quality Disturbances through an Augmented S-Transform and XGBOOST Algorithm. In Proceedings of the 3rd International Conference on Energy, Power and Electrical Technology (ICEPET), Chengdu, China, 17–19 May 2024; pp. 1052–1056. [Google Scholar] [CrossRef]
- Syriopoulos, P.K.; Kalampalikis, N.G.; Kotsiantis, S.B.; Vrahatis, M.N. KNN Classification: A Review. Ann. Math. Artif. Intell. 2023, 93, 43–75. [Google Scholar] [CrossRef]
- Caicedo, J.E.; Agudelo-Martínez, D.; Rivas-Trujillo, E.; Meyer, J. A Systematic Review of Real-Time Detection and Classification of Power Quality Disturbances. Prot. Control Mod. Power Syst. 2023, 8, 1–37. [Google Scholar] [CrossRef]
- Li, Z.; Liu, H.; Zhao, J.; Bi, T.; Yang, Q. A Power System Disturbance Classification Method Robust to PMU Data Quality Issues. IEEE Trans. Industr. Inform. 2022, 18, 130–142. [Google Scholar] [CrossRef]
- Jiang, Z.; Wang, Y.; Li, Y.; Cao, H. A New Method for Recognition and Classification of Power Quality Disturbances Based on IAST and RF. Electr. Power Syst. Res. 2024, 226, 109939. [Google Scholar] [CrossRef]
- Majumdar, S.; Mishra, A.K. Empirical Mode Decomposition with Wavelet Transform Based Analytic Signal for Power Quality Assessment. Int. J. Electron. Commun. Eng. 2018, 12, 329–334. [Google Scholar] [CrossRef]
- Mishra, V.; Singh, V.K.; Pachori, R.B. Automated Power Quality Assessment Using IEVDHM Technique. In Proceedings of the 10th International Conference on Signal Processing and Communication (ICSC), Noida, India, 20–22 February 2025; pp. 642–647. [Google Scholar] [CrossRef]
| PQ Class | Numerical Equation |
|---|---|
| Pure sine (C1) | |
| Sag (C2) | |
| Swell (C3) | |
| Interruption (C4) | |
| Harmonics (C5) | |
| Oscillatory transient (C6) | |
| Flicker (C7) | |
| Impulse transient (C8) | |
| Notch (C9) | |
| Sag with Harmonics (C10) | |
| Swell with Harmonics (C11) | |
| Sag with oscillation (C12) | |
| Swell with oscillation (C13) | |
| Sag with Harmonics and Oscillations (C14) | |
| Swell with Harmonics and Oscillations (C15) |
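The numerical equations in the table above did not survive extraction. For illustration, the commonly used parametric model for the sag class (C2) can be sketched as a step-modulated sinusoid; the parameter names and ranges below are assumptions, not the paper's definitions:

```python
# Parametric sag model sketch (class C2):
#   v(t) = [1 - alpha*(u(t - t1) - u(t - t2))] * sin(2*pi*f*t)
# with assumed sag depth 0.1 <= alpha <= 0.9 and duration t2 - t1.
import numpy as np

def sag_signal(alpha=0.5, t1=0.04, t2=0.12, f=50, fs=3200, T=0.2):
    t = np.arange(0, T, 1 / fs)
    gain = 1 - alpha * ((t >= t1) & (t < t2))   # unit-step amplitude drop
    return t, gain * np.sin(2 * np.pi * f * t)

t, v = sag_signal()
in_sag = np.max(np.abs(v[(t >= 0.04) & (t < 0.12)]))
outside = np.max(np.abs(v))
```

Inside the sag window the amplitude is scaled by (1 − α), while the rest of the waveform retains unit amplitude; the other classes in the table follow analogous parametric templates.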
| Classifier | Accuracy (Noiseless) | Accuracy (40 dB) | Accuracy (30 dB) | Accuracy (20 dB) | Accuracy (10 dB) |
|---|---|---|---|---|---|
| SVM | 99.83% ± 0.0003 | 99.82% ± 0.0005 | 99.80% ± 0.0005 | 99.72% ± 0.0005 | 97.31% ± 0.0020 |
| ANN | 99.76% ± 0.0005 | 99.66% ± 0.0004 | 99.63% ± 0.0006 | 99.59% ± 0.0006 | 97.26% ± 0.0059 |
| RF | 99.68% ± 0.0006 | 99.64% ± 0.0006 | 99.61% ± 0.0004 | 99.55% ± 0.0005 | 96.87% ± 0.0013 |
| XGBoost | 99.41% ± 0.0011 | 99.32% ± 0.0012 | 99.02% ± 0.0014 | 98.63% ± 0.0015 | 92.70% ± 0.0039 |
| KNN | 91.06% ± 0.0034 | 90.72% ± 0.0027 | 89.25% ± 0.0038 | 87.38% ± 0.0036 | 79.10% ± 0.0032 |
Per-class performance at SNR = 30 dB:

| PQ Class | SVM Precision | SVM Recall | SVM F1-Score | ANN Precision | ANN Recall | ANN F1-Score | RF Precision | RF Recall | RF F1-Score |
|---|---|---|---|---|---|---|---|---|---|
| C1 | 1 | 1 | 1 | 0.9997 | 1 | 0.9999 | 1 | 1 | 1 |
| C2 | 0.9999 | 0.9996 | 0.9997 | 0.9996 | 0.9983 | 0.9989 | 1 | 0.9999 | 0.9999 |
| C3 | 0.9994 | 0.9999 | 0.9996 | 0.9993 | 1 | 0.9996 | 0.9997 | 1 | 0.9999 |
| C4 | 0.9996 | 1 | 0.9998 | 0.9987 | 1 | 0.9994 | 0.9999 | 1 | 0.9999 |
| C5 | 0.9984 | 1 | 0.9992 | 0.9946 | 0.9997 | 0.9972 | 1 | 1 | 1 |
| C6 | 0.9970 | 0.9991 | 0.9981 | 0.9959 | 0.9990 | 0.9974 | 0.9989 | 1 | 0.9994 |
| C7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| C8 | 0.9996 | 0.9997 | 0.9996 | 0.9993 | 0.9994 | 0.9994 | 1 | 1 | 1 |
| C9 | 1 | 1 | 1 | 0.9997 | 1 | 0.9999 | 1 | 1 | 1 |
| C10 | 0.9909 | 0.9949 | 0.9929 | 0.9911 | 0.9874 | 0.9893 | 0.9676 | 0.9924 | 0.9798 |
| C11 | 0.9881 | 0.9989 | 0.9935 | 0.9905 | 0.9920 | 0.9912 | 0.9604 | 0.9917 | 0.9758 |
| C12 | 0.9991 | 0.9967 | 0.9979 | 0.9987 | 0.9954 | 0.9971 | 1 | 0.9989 | 0.9994 |
| C13 | 0.9991 | 0.9994 | 0.9993 | 0.9991 | 0.9993 | 0.9992 | 0.9999 | 0.9997 | 0.9998 |
| C14 | 0.9964 | 0.9909 | 0.9936 | 0.9926 | 0.9909 | 0.9917 | 0.9922 | 0.9667 | 0.9793 |
| C15 | 0.9988 | 0.9873 | 0.9930 | 0.9923 | 0.9896 | 0.9909 | 0.9914 | 0.9590 | 0.9749 |
Per-class performance at SNR = 10 dB:

| PQ Class | SVM Precision | SVM Recall | SVM F1-Score | ANN Precision | ANN Recall | ANN F1-Score | RF Precision | RF Recall | RF F1-Score |
|---|---|---|---|---|---|---|---|---|---|
| C1 | 0.9085 | 0.9670 | 0.9368 | 0.9137 | 0.9621 | 0.9373 | 0.8990 | 0.9650 | 0.9308 |
| C2 | 0.9581 | 0.9446 | 0.9512 | 0.9612 | 0.9367 | 0.9488 | 0.9711 | 0.8996 | 0.9339 |
| C3 | 0.9881 | 0.9984 | 0.9933 | 0.9894 | 0.9973 | 0.9933 | 0.9947 | 0.9967 | 0.9957 |
| C4 | 0.9777 | 0.9893 | 0.9835 | 0.9749 | 0.9880 | 0.9814 | 0.9384 | 0.9943 | 0.9655 |
| C5 | 0.9656 | 0.9789 | 0.9722 | 0.9626 | 0.9824 | 0.9724 | 0.9552 | 0.9797 | 0.9673 |
| C6 | 0.9741 | 0.9836 | 0.9788 | 0.9777 | 0.9827 | 0.9802 | 0.9587 | 0.9837 | 0.9710 |
| C7 | 0.9745 | 0.9550 | 0.9646 | 0.9701 | 0.9529 | 0.9614 | 0.9715 | 0.9469 | 0.9590 |
| C8 | 0.9920 | 0.9750 | 0.9834 | 0.9903 | 0.9799 | 0.9851 | 0.9940 | 0.9760 | 0.9849 |
| C9 | 0.9981 | 0.9904 | 0.9943 | 0.9977 | 0.9954 | 0.9966 | 0.9989 | 0.9947 | 0.9968 |
| C10 | 0.9478 | 0.9560 | 0.9519 | 0.9505 | 0.9471 | 0.9488 | 0.9168 | 0.9451 | 0.9307 |
| C11 | 0.9591 | 0.9866 | 0.9726 | 0.9633 | 0.9814 | 0.9723 | 0.9395 | 0.9849 | 0.9616 |
| C12 | 0.9825 | 0.9629 | 0.9726 | 0.9809 | 0.9673 | 0.9740 | 0.9801 | 0.9491 | 0.9644 |
| C13 | 0.9933 | 0.9881 | 0.9907 | 0.9939 | 0.9904 | 0.9921 | 0.9931 | 0.9924 | 0.9928 |
| C14 | 0.9872 | 0.9579 | 0.9723 | 0.9808 | 0.9623 | 0.9714 | 0.9826 | 0.9204 | 0.9505 |
| C15 | 0.9867 | 0.9533 | 0.9697 | 0.9824 | 0.9590 | 0.9705 | 0.9820 | 0.9333 | 0.9570 |
| Feature No. | Accuracy (Noiseless) | Accuracy (40 dB) | Accuracy (30 dB) | Accuracy (20 dB) | Accuracy (10 dB) |
|---|---|---|---|---|---|
| 5 | 89.67% ± 0.0026 | 88.19% ± 0.0031 | 78.49% ± 0.0038 | 76.72% ± 0.0052 | 67.51% ± 0.0053 |
| 10 | 93.09% ± 0.0023 | 93.06% ± 0.0029 | 90.55% ± 0.0023 | 90.41% ± 0.0032 | 83.19% ± 0.0040 |
| 20 | 95.73% ± 0.0018 | 95.61% ± 0.0014 | 94.38% ± 0.0021 | 93.79% ± 0.0022 | 88.95% ± 0.0030 |
| 50 | 99.13% ± 0.0011 | 99.01% ± 0.0009 | 98.99% ± 0.0008 | 98.90% ± 0.0010 | 96.03% ± 0.0021 |
| 75 | 99.70% ± 0.0004 | 99.67% ± 0.0004 | 99.63% ± 0.0005 | 99.53% ± 0.0005 | 97.35% ± 0.0013 |
| 100 | 99.80% ± 0.0005 | 99.79% ± 0.0005 | 99.75% ± 0.0006 | 99.71% ± 0.0006 | 97.67% ± 0.0014 |
| 125 | 99.86% ± 0.0002 | 99.85% ± 0.0002 | 99.82% ± 0.0004 | 99.74% ± 0.0003 | 97.92% ± 0.0012 |
| 148 | 99.83% ± 0.0003 | 99.82% ± 0.0005 | 99.80% ± 0.0005 | 99.72% ± 0.0005 | 97.79% ± 0.0020 |
| Study | Dataset Used | No. of Classes | Methods Used | Accuracy |
|---|---|---|---|---|
| M. Manjula and A. V. R. S. Sarma (2012) [17] | Synthetic data 1500 samples | 10 | EMD-HT | 98.3% |
| S. Majumdar et al. (2018) [43] | Synthetic data | 4 | EMD-WT | Overall classification accuracy of 97.5% |
| Saxena et al. (2022) [18] | Synthetic data 2500 samples (500 per class) | 5 | HT-WT | 96.2% |
| Molu et al. (2024) [13] | Synthetic data and real-time validation on Xilinx Zynq-7000 SoC FPGA | 7 | DWT-ELM | Overall classification accuracy of 99.69% |
| Xu et al. (2024) [19] | Synthetic data 1000 samples per class | 6 | WPT-LMD | Up to 99.4% at SNR = 40 dB. |
| V. Mishra et al. (2025) [44] | Synthetic data 1000 signals per class | 19 | Improved eigenvalue decomposition of Hankel matrix (IEVDHM)-HT | 92.48% with noiseless signals and between 91.07% and 88.01% with noisy signals |
| Proposed method (HAMRFE) (2026) | Synthetic data 3500 samples per class | 15 | Hybrid Adaptive Multi-Resolution Feature Extraction | 99.86% with noiseless signals and between 99.85% and 97.92% with noisy signals |
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Alrashidi, M. A Novel Hybrid Adaptive Multi-Resolution Feature Extraction Method for Power Quality Disturbance Detection. Mathematics 2026, 14, 784. https://doi.org/10.3390/math14050784
