Article

A Lightweight and Efficient Deep Learning Model for Detection of Sector and Region in Three-Level Inverters

by
Fatih Özen
1,*,
Rana Ortaç Kabaoğlu
2 and
Tarık Veli Mumcu
2
1
Department of Electronics and Automation, Vocational Higher School of Technical Sciences, Tekirdağ Namık Kemal University, 59030 Tekirdağ, Turkey
2
Department of Electrical-Electronics Engineering, Faculty of Engineering, İstanbul University-Cerrahpaşa, 34320 İstanbul, Turkey
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(19), 3876; https://doi.org/10.3390/electronics14193876
Submission received: 27 August 2025 / Revised: 22 September 2025 / Accepted: 26 September 2025 / Published: 29 September 2025
(This article belongs to the Special Issue Application of Machine Learning in Power Electronics)

Abstract

In three-level inverters, high-accuracy, low-latency sector and region detection is of great importance for control and monitoring processes. This study aims to overcome the limitations of traditional methods and develop a model that can operate in real time in industrial applications. Various deep learning (DL) architectures are systematically evaluated, and a comprehensive performance comparison is performed to automate sector and region detection for inverter systems. The proposed approach aims to detect sectors (6 classes) and regions (3 classes) with high accuracy using Deep Neural Network (DNN), 1D Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) based DL architectures. The performance of the considered DL approaches was systematically evaluated with cross-validation, confusion matrices, and statistical tests. The proposed GRU-based model offers both computational efficiency and high classification performance with a low number of parameters compared to other models. It achieved 99.27% and 97.62% accuracy in sector and region detection, respectively, and provided a more optimized solution than many heavily structured state-of-the-art DL models. The results show that the GRU model exhibits statistically significant superior performance and support its potential for easy integration into hardware-based systems due to its low computational complexity. The comprehensive results show that DL-based approaches can be effectively used for sector and region detection in inverter systems, and the GRU architecture in particular is a promising method.

1. Introduction

Inverters, with their advantages such as high efficiency, low harmonic distortion, and improved output waveform, are often used in renewable energy systems, motor drives, electric vehicles, and industrial applications where power quality plays an important role [1,2]. The performance and reliability of inverters depend on the precise specification and control of output voltage levels [3]. Multilevel inverter structures are used due to the switching losses of semiconductor elements at high power levels [4]. In multilevel inverters, the output voltage vector can be represented in a reference frame divided into sectors and regions. Accurate identification of the operating sector and region is essential for applying appropriate pulse width modulation (PWM) techniques and ensuring correct switching of power electronic components. Incorrect sector or region detection can lead to overloads, increased harmonic distortion, and voltage imbalances [5,6]. One of the most important steps for inverters to generate signals with the desired amplitude and frequency is to determine the sector and region in which the system operates relative to the reference vector [7,8].
The detection of sectors and regions in inverters is usually performed using mathematical models and threshold-based approaches. Although these methods work for certain system parameters, increasing data density and complexity degrade the performance of traditional methods [9]. Increasing switching states and large data set requirements for multilevel inverters limit the effectiveness of conventional methods and increase the need for deep learning-based solutions [10]. Deep learning methods are used to reduce the computational effort for complex mathematical models and to predict the behavior of systems [11]. Deep learning methods in the development of inverters offer effective solutions for optimizing important parameters such as output current and voltage values and harmonic distortion.
Multilevel inverters are increasingly favored to increase the efficiency of systems and improve power quality. There are various studies in the literature that aim to obtain the desired signal at the output by reducing the switching losses in inverters with three or more levels. These studies include both classical methods and modern approaches based on deep learning. Yupeng Si et al. [12] used the attention collaborative stacked long short-term memory (ASLSTM) method, one of the deep learning methods, to increase the efficiency of neutral point clamped (NPC) type inverters and build models for fault diagnosis. With this successful study, they performed fault diagnosis in the inverter under different conditions. Matias Aguirre et al. [13] proposed a new strategy based on deep learning as an alternative to operating a 3-level NPC-type inverter with classical algorithms. With this new algorithm, the frequency of the inverter is controlled, and the values of the current output and DC capacitance voltage are brought to the desired level. Hussan et al. [14] proposed a hybrid ANN–backstepping framework for MPPT in photovoltaic systems, optimized using a combination of Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA). This approach achieved high tracking accuracy (99.8%), low RMS error (0.103%), and reduced power loss (0.2%) while ensuring system stability and robustness under dynamic conditions. Fabiola P. et al. [15] used machine learning algorithms to monitor and manage inverters in photovoltaic solar power plants and to classify, improve, predict, and forecast inverter failures based on historical data. The main objective of these studies is to demonstrate the importance of using advanced techniques in multilevel inverters. However, expanding research in this area will make it possible to make energy conversion systems more efficient, particularly through the further application of harmonic reduction and optimization techniques.
Accurate identification of sectors and regions in NPC inverters is essential for applying PWM techniques and ensuring correct switching of power electronic components [16]. This process uses parameters such as DC input voltage, modulation index, switching frequency, and phase angle to determine the operating sector and region. Traditional calculation methods based on mathematical models and decision tables can be complex, time-consuming, and less accurate for multilevel inverters. Deep learning classifiers provide a robust alternative, enabling real-time recognition, high accuracy with large datasets, and improved generalization, reducing the need for complex modeling [17,18,19].
The aim of this study is to model and analyze a three-level Neutral Point Clamped (NPC) type inverter using Space Vector Pulse Width Modulation (SVPWM). The inverter model involves applying the output signals obtained by varying parameters such as input voltage, switching frequency, and modulation index to an induction motor load. Deep learning techniques are used in this modeling process to reduce the computational load of the system and improve computational efficiency. Thus, sector and region predictions based on the input parameters of the system are realized with high accuracy. The motivation and contributions of this work can be summarized as follows:
  • An innovative and efficient model is proposed based on the lightweight, low-depth, and high-accuracy DL architecture developed for sector and region detection in three-level NPC inverters.
  • The proposed GRU-based model offers computational efficiency with a low number of parameters, high classification performance, and a more optimized solution compared to many heavy-structured DL models in the literature.
  • The comprehensive performance comparisons and statistical validation results reveal that the proposed sector and region detection models have the potential to be easily integrated into hardware-based systems due to their low computational complexity.
The rest of this article is structured as follows: Section 2 describes in detail the general working principle of a three-level inverter, dataset preparation, deep learning techniques, and performance evaluation metrics. Section 3 presents the comprehensive results and findings. Section 4 discusses the implications and potential of the study. Finally, Section 5 concludes the paper.

2. Materials and Methods

In this paper, we propose a deep learning-based prediction system that performs sector and region prediction to analyze the operating states of multiphase power electronic systems. Figure 1 illustrates the overall pipeline of the proposed system. The system consists of five main components: Data acquisition and processing, calculation of characteristic quantities, modeling with deep learning architectures, hyperparameter optimization, and final model prediction.

2.1. 3-Level NPC Inverter

Three-level Neutral Point Clamped (NPC) type inverters are preferred due to their high efficiency and low harmonic values [20]. Three different voltage levels, $+V_{DC}/2$, $0$, and $-V_{DC}/2$, are generated at the output of the inverter [21]. Space vector pulse width modulation (SVPWM) is a switching technique used to achieve the desired waveform and high efficiency. Figure 2 shows a 3-phase 3-level diode-clamped inverter feeding an induction motor [22]. The SVPWM technique is used to control the output voltage and frequency of the inverter.

2.2. Data Acquisition and Preparation

Since it is easier to recognize sectors and regions in a two-phase coordinate system, the Clarke transformation is used first. The Clarke transformation converts the three-phase voltage or current components at the output of the inverter into two-phase axes [23]. For the sector and region identification model, which is one of the focal points of this study, the data is obtained from the inverter based on the SVPWM technique.
In a three-phase system, the phase voltages or currents are expressed as $V_a$, $V_b$, and $V_c$. These are converted into a two-axis ($\alpha$, $\beta$) coordinate system using the Clarke transformation.
$$V_\alpha = \frac{2}{3}\left(V_a - \frac{V_b}{2} - \frac{V_c}{2}\right)$$
$$V_\beta = \frac{2}{3}\left(\frac{\sqrt{3}\,V_b}{2} - \frac{\sqrt{3}\,V_c}{2}\right)$$
After the Clarke transformation, the values of $V_{ref}$ and the angle ($\theta$), which are used as parameters for the detection of sectors and regions, are determined. These values are expressed as follows:
$$V_{ref} = \sqrt{V_\alpha^2 + V_\beta^2}$$
$$\theta = \tan^{-1}\left(\frac{V_\beta}{V_\alpha}\right)$$
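The two transformations above can be sketched in Python as a minimal illustration (the function names are our own, not part of the paper's implementation; `atan2` is used so the angle is valid in all four quadrants):

```python
import math

def clarke_transform(va, vb, vc):
    """Clarke transformation: three-phase quantities to the (alpha, beta) frame."""
    v_alpha = (2.0 / 3.0) * (va - vb / 2.0 - vc / 2.0)
    v_beta = (2.0 / 3.0) * (math.sqrt(3.0) * vb / 2.0 - math.sqrt(3.0) * vc / 2.0)
    return v_alpha, v_beta

def reference_vector(v_alpha, v_beta):
    """Magnitude and angle (degrees, 0-360) of the reference voltage vector."""
    v_ref = math.hypot(v_alpha, v_beta)
    theta = math.degrees(math.atan2(v_beta, v_alpha)) % 360.0
    return v_ref, theta
```

For a balanced instant such as $V_a = 1$, $V_b = V_c = -0.5$, this yields $V_\alpha = 1$, $V_\beta = 0$, so $V_{ref} = 1$ at $\theta = 0°$.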
The space vector diagram of a three-level inverter of type NPC is shown in Figure 3. The space vector has a hexagonal structure with six sectors and four regions within each sector.
Since each phase has three states, there are 27 switching states. There are a total of 24 active voltage vectors: 12 small vectors with magnitude $V_{DC}/3$, 6 medium vectors with magnitude $\sqrt{3}\,V_{DC}/3$, and 6 large vectors with magnitude $2V_{DC}/3$ [24]. In this way, the output signal is almost sinusoidal, and the efficiency of the inverter is higher. In the space vector diagram, each sector spans a 60° angle. The relationship between the sector and the angle ($\theta$) is shown in Table 1.
In the space vector diagram, there are 4 triangular regions in each sector. The x and y components are used to determine the region in which the voltage vector $V_{ref}$ is located. The relationship between the region and the components $V_{ref,x}$ and $V_{ref,y}$ is given in Table 2.
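Since each sector spans 60°, the angle-to-sector mapping of Table 1 can be sketched as follows (a minimal illustration assuming Sector 1 covers 0–60°; the exact boundary convention is defined by Table 1, which is not reproduced here):

```python
def sector_from_angle(theta_deg):
    """Sector index (1-6) of the reference vector, assuming Sector 1 spans 0-60 deg."""
    # Wrap the angle into [0, 360) so negative angles are handled too.
    return int(theta_deg % 360.0 // 60.0) + 1
```

For example, an angle of 90° falls in Sector 2, while −30° wraps to 330° and falls in Sector 6.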
In a three-level NPC inverter, the space vector pulse width modulation (SVPWM) technique is used to determine the switching sequences of the power electronic components. The switching sequences and times are optimized according to the position of the reference voltage vector in the space vector diagram [25,26]. The dwell times are given as $T_a$, $T_b$, and $T_c$ according to the position of the reference vector in the subregion, and $T_s$ is the sum of the dwell times of the individual vectors.
As can be seen in Figure 4, the reference voltage vector in the lower region is synthesized from three voltage vectors: $V_1$ for the switching states [1 0 0]/[0 −1 −1], $V_2$ for the switching state [1 0 −1], and $V_3$ for the switching states [1 1 0]/[0 0 −1]. Accordingly,
$$V_{ref} \cdot T_s = T_a \cdot V_1 + T_b \cdot V_2 + T_c \cdot V_3$$
$$T_s = T_a + T_b + T_c$$

2.3. Pre-Processing

The dataset consists of the phase voltages $V_a$, $V_b$, and $V_c$ and the phase currents $i_a$, $i_b$, and $i_c$ collected from a three-phase voltage source inverter system. Prior to model training, these raw time-series signals were subjected to several pre-processing steps to ensure reliability and suitability for deep learning applications. The applied steps are summarized as data quality check, down-sampling, and data normalization. The data quality check ensures that corrupted, missing, or physically meaningless values are removed. Next, down-sampling optimizes processing time by reducing data density without losing information. Finally, all input values (DC input voltage, modulation index, switching frequency, and angle) were normalized into the range [0, 1] using min–max normalization so that the model can learn more robustly.
Although the inverter system initially provides the three-phase voltages ($V_a$, $V_b$, $V_c$) and currents ($i_a$, $i_b$, $i_c$), these signals are not directly used as model inputs. Instead, they are processed to extract the fundamental operating variables of the system. From the measured phase voltages and currents, the DC voltage ($V_{dc}$), modulation index ($m$), switching frequency ($f_s$), and reference angle ($\theta$) are derived. The modulation index is defined as the ratio between the reference voltage ($V_{ref}$) amplitude and the DC voltage. The reference voltage and angle are obtained using the Clarke transformation to convert the phase voltages into stationary $\alpha$–$\beta$ components. These four quantities serve as compact and physically meaningful descriptors of the inverter state and are therefore selected as the final input variables for sector and region classification.
The success of deep learning models depends directly on the structure and distribution of the data. Therefore, statistical analysis of the dataset is one of the most important steps that must be performed before building the model. The dataset contains a total of 360,000 samples covering six sector classes (A–F) and four region classes (1–4). The main statistical measures for each feature of the dataset are listed in Table 3. These statistics show the numerical distribution of each feature and form the basis for all pre-processing before the data is fed into the model.
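The down-sampling and min–max normalization steps described above can be sketched as a minimal illustration (the `step=12` default matches the one-in-twelve sub-sampling mentioned in Section 2.7; the function name is our own):

```python
import numpy as np

def preprocess(features, step=12):
    """Down-sample by keeping every `step`-th row, then min-max normalize to [0, 1]."""
    x = np.asarray(features, dtype=float)[::step]   # reduce data density
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min)            # per-feature min-max scaling
```

In practice the normalization statistics would be computed on the training folds only and reused on the test fold to avoid information leakage.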

2.4. Deep Learning Classifiers

Recognizing sectors and regions in the NPC inverters is crucial for the correct operation of the system. While traditional methods can be computationally intensive and slow, deep learning-based classifiers can make predictions in real time with high accuracy. DNN, 1D-CNN, LSTM, and GRU-based deep learning models can be used for the classification of sectors and regions in NPC inverters. These models are described in detail below, with the most suitable model being selected based on the system requirements and optimized for real-time use.

2.4.1. Deep Neural Network

DNN is a deeper version of the traditional Multi-Layer Perceptron (MLP) model [27]. It is a flexible and powerful model that can be used independently of the data type. It consists of fully connected layers, in which each neuron is connected to all neurons of the previous layer [28]. A DNN layer performs the following operation:
$$z^{l} = W^{l} a^{l-1} + b^{l}$$
where $z^l$ is the pre-activation of layer $l$, $W^l$ is the weight matrix of the layer, and $b^l$ is the bias vector. Using activation functions such as ReLU, sigmoid, and softmax, the output of the layer can be expressed as follows:
$$a^{l} = \sigma\left(z^{l}\right)$$
where σ and a denote the activation function and the output of the function, respectively.
In DNN classifiers that use the softmax activation function, probability estimation is performed as follows.
$$P(y = k \mid x) = \frac{e^{z_k}}{\sum_{j=1}^{K} e^{z_j}}$$
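A minimal sketch of the layer operation and the softmax probability estimate above (illustrative only, not the paper's implementation):

```python
import numpy as np

def dense_forward(a_prev, W, b, activation):
    """One fully connected layer: z = W a_prev + b, followed by an activation."""
    z = W @ a_prev + b
    return activation(z)

def softmax(z):
    """Numerically stable softmax turning logits into class probabilities."""
    e = np.exp(z - z.max())   # subtract the max to avoid overflow
    return e / e.sum()
```

For the sector task, the final layer would map to six logits, and softmax would yield the six class probabilities.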

2.4.2. 1D Convolutional Neural Network

Convolutional neural networks (CNNs) are popular and powerful models, especially in image processing and time-series analysis [29,30]. 1D-CNN models, which work with one-dimensional data, are widely used in areas such as signal processing, speech recognition, EEG analysis, and time-series classification [31,32]. The model creates feature maps by sliding convolution filters over the input data. The convolution operation is mathematically defined as follows:
$$h_i = \sigma\left(\sum_{j=0}^{k} w_j\, x_{i-j} + b\right)$$
where x stands for the input data, w for the weights of the convolution kernel, and h for the filtered output (feature map). In addition, k and b stand for the filter size and the bias term, respectively, while σ stands for the activation function.
1D-CNN models usually apply maximum pooling or average pooling after convolution and activation [33]. Pooling is used to reduce the size of the feature map and to condense important information. In the last layer of the model, classification is performed using Global Average Pooling (GAP) or Fully Connected layers [34].
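The convolution equation above can be sketched directly (a toy illustration for a single filter; real 1D-CNN layers in frameworks such as TensorFlow stack many filters and channels):

```python
import numpy as np

def conv1d(x, w, b=0.0, activation=np.tanh):
    """Valid 1D convolution h_i = act(sum_j w_j * x_{i-j} + b), one filter."""
    k = len(w)
    return np.array([activation(sum(w[j] * x[i - j] for j in range(k)) + b)
                     for i in range(k - 1, len(x))])
```

With the identity filter `w = [1, 0]` and a linear activation, the output simply reproduces the input shifted to valid positions, which is a quick sanity check of the indexing.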

2.4.3. Long Short-Term Memory

LSTM is an improved version of the recurrent neural network (RNN) and is effective in learning long-term dependencies [35,36]. The LSTM cell consists of three basic components: the cell state ($C_t$), which stores the long-term memory; the hidden state ($h_t$), which carries information from the previous time step; and the input ($i_t$), forget ($f_t$), and output ($o_t$) gates, which control the flow of information. The input gate, which determines the new information to be stored, the forget gate, which determines the information to be discarded, and the output gate, which determines the information to be output, can be expressed mathematically as follows [37,38]:
$$i_t = \sigma\left(W_i [h_{t-1}, x_t] + b_i\right)$$
$$f_t = \sigma\left(W_f [h_{t-1}, x_t] + b_f\right)$$
$$o_t = \sigma\left(W_o [h_{t-1}, x_t] + b_o\right)$$
The candidate cell state, the updated cell state, and the hidden state are expressed as follows:
$$\tilde{C}_t = \tanh\left(W_C [h_{t-1}, x_t] + b_C\right)$$
$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$$
$$h_t = o_t \odot \tanh\left(C_t\right)$$
where $W$ denotes the learned weight matrices and $b$ the bias terms, and $\sigma$ and $\tanh$ denote the sigmoid and hyperbolic tangent activation functions, respectively.
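A single LSTM time step implementing the equations above can be sketched as follows (a toy illustration with the four gate/candidate blocks stacked into one weight matrix; not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step; W stacks the i, f, o, and candidate blocks over [h_{t-1}, x_t]."""
    n = h_prev.size
    z = W @ np.concatenate([h_prev, x_t]) + b
    i_t, f_t, o_t = (sigmoid(z[g * n:(g + 1) * n]) for g in range(3))
    c_tilde = np.tanh(z[3 * n:])           # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde     # updated cell state
    h_t = o_t * np.tanh(c_t)               # hidden state
    return h_t, c_t
```

With zero weights all gates evaluate to 0.5 and the candidate to 0, so the cell state is simply halved, which makes the update rule easy to verify by hand.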

2.4.4. Gated Recurrent Unit

GRU is a simplified version of the LSTM and requires less computation, as it combines the forget and input gates into a single update gate [39]. GRU is therefore faster, as it reduces computational cost by using fewer parameters [40]. In contrast to the LSTM, the GRU has no cell state and uses only the hidden state [41]. The basic components of a GRU cell are the update gate ($z_t$), which determines how much prior information is carried forward; the reset gate ($r_t$), which controls how much prior information is forgotten; and the hidden state ($h_t$). These components can be expressed mathematically as follows:
$$z_t = \sigma\left(W_z [h_{t-1}, x_t] + b_z\right)$$
$$r_t = \sigma\left(W_r [h_{t-1}, x_t] + b_r\right)$$
$$\tilde{h}_t = \tanh\left(W_h [r_t \odot h_{t-1}, x_t] + b_h\right)$$
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$$
where $W$ and $b$ denote the learned weight matrices and bias terms, and $\sigma$ and $\tanh$ the sigmoid and hyperbolic tangent activation functions, respectively.
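The GRU equations above can likewise be sketched as one time step (illustrative only; the function name and weight layout are our own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wz, Wr, Wh, bz, br, bh):
    """One GRU step following the update-gate/reset-gate equations above."""
    hx = np.concatenate([h_prev, x_t])
    z_t = sigmoid(Wz @ hx + bz)                                      # update gate
    r_t = sigmoid(Wr @ hx + br)                                      # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([r_t * h_prev, x_t]) + bh)
    return (1.0 - z_t) * h_prev + z_t * h_tilde                      # new hidden state
```

Note the single interpolation between $h_{t-1}$ and $\tilde{h}_t$, which replaces the LSTM's separate forget and input gates and is the source of the parameter savings.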

2.5. Proposed Deep Learning Architectures

Sector and region estimation is a complex problem; input parameters such as $V_{dc}$ (DC input voltage), $m$ (modulation index), $f_s$ (switching frequency), and $\theta$ (phase angle) require deep learning models that can accurately learn the multidimensional relationships among them. DNN (Deep Neural Network), 1D-CNN (One-Dimensional Convolutional Neural Network), LSTM (Long Short-Term Memory), and GRU (Gated Recurrent Unit) based models are used in this study.
The DNN model is a traditional deep learning model consisting of fully connected layers. This model has been used to learn complex and multidimensional relationships and improve accuracy on tasks such as sector and region prediction. The 1D-CNN model is particularly effective in analyzing time series and is capable of extracting features through convolutional filters. The architecture of the proposed 1D-CNN model for sector and region prediction is shown in Figure 5.
The LSTM model is used to capture long-term dependencies in time series data. It is optimized to make the best prediction while preserving the temporal patterns of the input data. The GRU model works similarly to the LSTM but has an optimized structure that requires less computational effort. This model is preferred for learning time series patterns with fewer parameters. Figure 6 shows the proposed LSTM/GRU-based model architecture for predicting sectors and regions.
Each of these models was chosen because it offers different advantages and can work with time-series and signal data. Each model is optimized with the grid search algorithm to select the best hyperparameters, and several architectural designs are proposed. Adam was chosen as the optimizer for all proposed models. The number of epochs and the batch size were set to 100 and 64, respectively. The optimal number of layers, number of units in each layer, dropout rate, and learning rate selected by the grid search for the sector and region prediction models are shown in Table 4.
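The parameter advantage of the GRU over the LSTM can be illustrated by counting the weights of a single recurrent layer: in the formulations of Sections 2.4.3 and 2.4.4, the LSTM has four gate/candidate blocks while the GRU has three. The 64 units and 4 inputs below are illustrative values only (not those of Table 4), and implementation-specific bias variants are ignored:

```python
def recurrent_params(n_units, n_inputs, n_gates):
    """Weights + biases of a recurrent layer with n_gates gate/candidate blocks."""
    return n_gates * (n_units * (n_units + n_inputs) + n_units)

lstm_p = recurrent_params(64, 4, n_gates=4)   # i, f, o gates + cell candidate
gru_p = recurrent_params(64, 4, n_gates=3)    # update, reset gates + candidate
```

Under these assumptions the GRU layer carries exactly three quarters of the LSTM layer's parameters, consistent with the lightweight design goal stated above.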

2.6. Model Evaluation

In this study, metrics such as accuracy (Acc), recall (Rec), precision (Pre), F1-score, and the Matthews Correlation Coefficient (MCC) [42,43] are used to evaluate the performance of each DL model in identifying sectors and regions in NPC inverters. The confusion matrix is used to calculate these performance measures. The confusion matrix $C$ for $n$ classes is given by the following:
$$C = \begin{bmatrix} C_{1,1} & C_{1,2} & \cdots & C_{1,n} \\ C_{2,1} & C_{2,2} & \cdots & C_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n,1} & C_{n,2} & \cdots & C_{n,n} \end{bmatrix}$$
For multi-class classification, the performance metrics accuracy, precision, recall, F1-score, and MCC are mathematically defined as follows, where k represents the true class:
$$\text{Accuracy} = \frac{\sum_{i=1}^{n} C_{i,i}}{\sum_{i=1}^{n}\sum_{j=1}^{n} C_{i,j}}$$
$$\text{Precision}_k = \frac{C_{k,k}}{\sum_{i=1}^{n} C_{i,k}}$$
$$\text{Recall}_k = \frac{C_{k,k}}{\sum_{j=1}^{n} C_{k,j}}$$
$$F1_k = \frac{2 \cdot \text{Precision}_k \cdot \text{Recall}_k}{\text{Precision}_k + \text{Recall}_k}$$
$$MCC = \frac{N \sum_i C_{i,i} - \sum_i T_i P_i}{\sqrt{N^2 - \sum_i T_i^2}\,\sqrt{N^2 - \sum_i P_i^2}}$$
where $T_i$ and $P_i$ represent the number of samples of the actual class $i$ and the number of samples predicted as class $i$, respectively, and $N$ is the total number of samples.
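These definitions can be computed directly from a confusion matrix (a minimal sketch; macro-averaging over classes is assumed for precision, recall, and F1):

```python
import numpy as np

def metrics_from_confusion(C):
    """Accuracy, macro precision/recall/F1, and multi-class MCC from matrix C."""
    C = np.asarray(C, dtype=float)
    N = C.sum()                       # total number of samples
    T = C.sum(axis=1)                 # actual samples per class (rows)
    P = C.sum(axis=0)                 # predicted samples per class (columns)
    acc = np.trace(C) / N
    prec = np.diag(C) / P
    rec = np.diag(C) / T
    f1 = 2 * prec * rec / (prec + rec)
    mcc = (N * np.trace(C) - (T * P).sum()) / (
        np.sqrt(N**2 - (T**2).sum()) * np.sqrt(N**2 - (P**2).sum()))
    return acc, prec.mean(), rec.mean(), f1.mean(), mcc
```

A perfectly diagonal matrix yields 1.0 for every metric, which is a convenient sanity check.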
In order to assess the generalizability of the DL models for the prediction of sectors and regions more precisely, a cross-validation (CV) is carried out. In this way, overfitting of the models can be avoided, and a more robust performance measure can be obtained. The results of the model comparison obtained with CV are evaluated with the Friedman test and the subsequent Nemenyi post-hoc test.
The Friedman test is a non-parametric statistical test [44]. When several models (e.g., DNN, 1D-CNN, LSTM, GRU) are evaluated on the same data folds, the rankings of their performance are used to test whether a significant difference exists between them. The Friedman test statistic is calculated as follows:
$$\chi_F^2 = \frac{12N}{k(k+1)} \sum_{j=1}^{k} R_j^2 - 3N(k+1)$$
where $k$ and $N$ denote the number of models and the number of folds, respectively, and $R_j$ is the average rank of the $j$-th model across the folds. If the $\chi_F^2$ value is greater than the critical value, it is concluded that there is a significant difference (p < 0.05) between the models.
If the Friedman test indicates a significant difference, the Nemenyi post hoc test is applied to identify which models differ [45]. If the difference between the average ranks of two models is greater than the critical difference (CD), their performance is considered statistically different [46]. The critical difference can be expressed mathematically as follows:
$$CD = q_\alpha \sqrt{\frac{k(k+1)}{6N}}$$
where $q_\alpha$ denotes the critical value taken from the Studentized range distribution table; for $\alpha = 0.05$ and $k = 4$ models, $q_\alpha \approx 2.569$. If the rank difference is greater than CD, the performance of the models is significantly different.
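The Friedman statistic and the Nemenyi critical difference can be sketched as follows (a minimal illustration that assumes no tied accuracies within a fold; the function names are our own):

```python
import numpy as np

def friedman_statistic(scores):
    """scores: (N folds x k models) accuracy table; returns (chi2_F, mean ranks)."""
    scores = np.asarray(scores, dtype=float)
    N, k = scores.shape
    # Rank models within each fold: rank 1 = best (highest accuracy); no tie handling.
    ranks = np.array([k - np.argsort(np.argsort(row)) for row in scores])
    R = ranks.mean(axis=0)
    chi2 = 12.0 * N / (k * (k + 1)) * (R ** 2).sum() - 3.0 * N * (k + 1)
    return chi2, R

def critical_difference(k, N, q_alpha=2.569):
    """Nemenyi critical difference; q_alpha = 2.569 corresponds to k = 4, alpha = 0.05."""
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
```

For $k = 4$ models compared over $N = 5$ folds this gives CD ≈ 2.10, matching the order of magnitude of the CD value reported in Section 3.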
The CV technique and statistical testing procedure used in the performance measurement of the DL models for predicting sectors and regions are shown in Figure 7. The 5-fold CV technique was used to measure the performance of the models, and the model results were evaluated using the Friedman test followed by the Nemenyi post-hoc test. If the Friedman test showed a significant difference (p < 0.05), the Nemenyi post-hoc test was used to select the best model with a significant difference.

2.7. Experimental Setup

In this study, DNN, 1D-CNN, LSTM, and GRU-based DL models are evaluated for the prediction of sectors and regions for a three-level NPC inverter. The input DC voltage (Vdc), modulation index (m), switching frequency (fs), and angle are used as input variables. The data used in this study were obtained through simulation of a three-level NPC inverter model in MATLAB/Simulink, where input voltage, modulation index, switching frequency, and angle were systematically varied to generate a comprehensive dataset. The experiments were conducted considering six sectors (S1–S6) and four subregions (R1–R4). However, the reference voltage vector obtained according to the input parameters contains three regions (R2–R4) in the space vector diagram. Therefore, only three regions were used in the training and testing of the prediction models. This allowed the classification of six sectors and three regions for the inverter. In this study, the input variables are processed differently depending on the network architecture. For the DNN model, the variables are provided as static feature vectors, allowing the model to learn relationships directly from individual data points. In contrast, CNN, LSTM, and GRU architectures are trained on sequentially structured inputs, where the same features are arranged as time-series windows. This enables CNN to capture local spatial patterns, while LSTM and GRU are capable of modeling long-term temporal dependencies. Such a distinction ensures that each architecture is utilized in line with its strengths in feature extraction and sequence modeling.
In developing the classification models for predicting sectors and regions, a dataset of approximately 360,000 rows was created. Subsequently, one out of every 12 samples was retained through down-sampling, optimizing the processing time by reducing data density. A normalization process was applied to the final dataset, and all input variables were scaled into a fixed range to ensure more stable learning. The 5-fold CV technique is used to train and test the models, and the performance of the models is evaluated with the metrics of accuracy, precision, recall, F1-score, and MCC. The Friedman statistical test, followed by a Nemenyi post hoc test, was applied to the fold accuracies to select the best model with a significant difference. Finally, all DL classification models were implemented in Python 3.11 using the scikit-learn and TensorFlow libraries.

3. Results

In this section, the results obtained by using DNN, 1D-CNN, LSTM, and GRU deep learning models to solve the sector prediction problem are evaluated. The performance of the prediction models is analyzed using metrics such as accuracy, precision, recall, F1-score, and MCC in the experiments conducted using the 5-fold cross-validation method.
The training loss plots of the models against the epochs for the sector and region predictions are shown in Figure 8a,b. From these plots, it can be seen that the training loss for the deep GRU and LSTM models reaches a stable level for both the sector and region predictions, especially between epochs 80 and 100. Thus, the LSTM and GRU models achieve the lowest training loss for tasks with time series or sequential data. The training loss for the region prediction is generally higher than for the sector prediction. This indicates that the regional data may be more complex or that the model is more difficult to learn with regional data.
The classification results obtained with 5-fold cross-validation for sector prediction are shown in Table 5. The performance evaluation of the DL models was performed comprehensively by considering each fold and the average over all folds.
According to the results shown in Table 5, all models have a very high accuracy, but the GRU model performs best overall. The average accuracy of the GRU model is 99.27%, which is higher than that of the LSTM model (98.94%), which achieves the second-highest accuracy compared to the other models. The GRU model also achieves the best result in the MCC metric with an average MCC value of 0.991, which means that this model has the lowest error rate.
The LSTM model performed similarly well to the GRU model with an average accuracy of 98.94%, but slight variations in accuracy and MCC values were observed in some folds. In particular, the slightly lower performance in Fold 4 compared to the other folds (Accuracy: 98.43%) indicates that the model may be inconsistent in certain situations.
When analyzing the DNN and 1D-CNN models, the DNN model achieved a successful result with an average accuracy of 98.33%, but performed relatively worse than the LSTM and GRU models, as also reflected in the MCC metric. The 1D-CNN model outperformed the DNN model with an average accuracy of 98.61%, but with larger variations across folds. The lowest accuracy of this model was 97.96% in Fold 3, and these variations indicate that the model performs unevenly on certain folds.
The results of the 5-fold cross-validation of the deep learning models used for region prediction are shown in Table 6. The results show that the GRU model achieves the highest performance with 97.62% accuracy. The LSTM model has a similar performance with 97.57% accuracy and is particularly strong in learning time-dependent information. The DNN model performed relatively well with an accuracy of 96.21%, but with a lower performance than LSTM and GRU. The 1D-CNN model showed the lowest performance with 94.97% accuracy and a significant drop in accuracy, especially in Fold 3.
The MCC results show that the GRU (0.964) and LSTM (0.962) models have the highest correlation, indicating that these models have high predictive power overall. The metrics for precision, recall, and F1 score show a similar trend, with the GRU and LSTM models having the highest values. As a result, the GRU model provides the best performance in the prediction of regions, while the LSTM model can be considered a strong alternative.
The confusion matrices of the GRU-based models with the highest performance for predicting sectors and regions are shown in Figure 9. As can be seen in Figure 9a, the classification performance of the model is quite high for the six sector classes (S1–S6). The model correctly distinguishes the sector samples, and there is minimal confusion between the classes. The most successful classes are S1, S2, and S6, with precision and recall values of over 99%. For class S4, the confusion with S5 is particularly noteworthy; the fact that some instances of S4 were incorrectly predicted as S5 suggests that these two classes may have similar characteristics. Figure 9b shows the confusion matrix for the three region classes (R2–R4). While the accuracy in this classification task is high, more confusion between the classes can be observed; in particular, classes R2 and R4 are confused with each other. In both tasks, the GRU model successfully learned the sequential structure of the time-series data and discriminated between the classes with high performance. While the F1-score for the sector classification was 99%, it was 97% for the region classification. This difference provides important evidence of the similarities between the classes and the overall generalizability of the model.
The Friedman test was applied to compare the performance of the four models (DNN, 1D-CNN, LSTM, GRU) for sector and region prediction and to determine whether they differ significantly. For the sector detection models, the Friedman statistic is 8.28 with a p-value of 0.0406; since this is below 0.05, there is a statistically significant difference among the four models in recognizing sectors. Similarly, for the region detection models, the Friedman statistic is 12.60 with a p-value of 0.0056, again indicating a statistically significant difference between the models' performance.
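As a check, feeding the per-fold region accuracies from Table 6 to SciPy's Friedman test reproduces the statistic reported above:

```python
from scipy.stats import friedmanchisquare

# Per-fold region accuracies from Table 6 (one list per model, 5 folds each).
dnn  = [96.18, 95.93, 96.20, 97.14, 95.58]
cnn  = [95.85, 95.70, 92.50, 94.74, 96.05]
lstm = [97.10, 97.73, 97.13, 98.44, 97.43]
gru  = [98.13, 97.50, 96.61, 97.63, 98.23]

stat, p = friedmanchisquare(dnn, cnn, lstm, gru)
print(round(stat, 2), round(p, 4))  # 12.6 0.0056
```

The test ranks the four models within each fold and rejects the null hypothesis that all models perform equally when the rank sums diverge enough.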
After the Friedman test, the Nemenyi post hoc test was used to determine which models differ. This test checks each pair of models for a significant difference; pairs with p < 0.05 are considered significantly different, which identifies the best model. The Nemenyi post hoc test results for the sector and region detection models are given in Table 7.
The Nemenyi post hoc test, based on mean ranks and the critical difference (CD), was applied to evaluate the statistical significance of the performance differences among the models. For sector recognition, GRU achieved the best mean rank (1.4), followed by LSTM (2.2) and by DNN and 1D-CNN (both 3.0). However, no pairwise rank difference exceeded the CD (2.096), so none of the differences is statistically significant: the DNN–GRU comparison is only marginally significant (p = 0.068), and no significant differences were observed between LSTM and GRU (p = 0.88) or between 1D-CNN and the other models (p > 0.05). For region recognition, LSTM obtained the best mean rank (1.2), followed by GRU (1.8), DNN (2.8), and 1D-CNN (3.6). Here, LSTM and GRU show statistically significant improvements over 1D-CNN (p = 0.017 and p = 0.036, respectively), while their differences with DNN are not significant (p > 0.05). Overall, the statistical analysis indicates that GRU generally performs best for sector recognition and LSTM for region recognition.
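The CD used in this analysis follows from the standard Nemenyi formula; a minimal sketch (the q value is the standard table entry for k = 4 models at α = 0.05):

```python
import math

def nemenyi_cd(k, n, q_alpha):
    """Critical difference for mean ranks: CD = q_alpha * sqrt(k(k+1) / (6n))."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# k = 4 models, n = 5 folds; q_0.05 for k = 4 is 2.569 (studentized range / sqrt(2)).
cd = nemenyi_cd(k=4, n=5, q_alpha=2.569)
print(round(cd, 3))  # ~2.098: two mean ranks differ significantly if their gap exceeds this
```

With only five folds the CD is large relative to the possible rank gaps (at most 3 for four models), which is why several pairwise comparisons fall short of significance despite clear mean-rank orderings.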

4. Discussion

The present study investigated the use of deep learning (DL) architectures, including DNN, 1D-CNN, LSTM, and GRU, for predicting sectors and regions of a three-level NPC inverter. The experimental results demonstrate that all models achieved high classification performance, with recurrent models generally providing superior accuracy and robustness compared to feedforward and convolutional networks. This indicates that the temporal dependencies inherent in inverter signals are better captured by architectures specifically designed for sequence learning.
An important consideration is the origin of the ground-truth labels. In this study, the sector and region labels were generated automatically from the SVPWM logic using analytical thresholds. While these labels can be computed directly through deterministic rules, the application of DL is motivated by several practical advantages. Rule-based thresholding methods are inherently sensitive to measurement noise, sensor inaccuracies, and parameter variations that commonly arise in inverter operation. In contrast, DL models are capable of learning discriminative patterns from data, which enables more robust classification under noisy or uncertain conditions.
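As an illustration of this deterministic labeling, the thresholds of Tables 1 and 2 can be encoded in a few lines; the function names and the degree-based interface are our assumptions, not the paper's implementation:

```python
def sector_label(theta_deg):
    """Map the reference-vector angle (degrees) to sector A..F per Table 1."""
    return "ABCDEF"[int(theta_deg % 360) // 60]

def region_label(vref_x, vref_y):
    """Map normalized reference-voltage components to region 1..4 per Table 2.

    Order matters: the component thresholds take precedence over the sum.
    """
    if vref_x > 0.5:
        return 4
    if vref_y > 0.5:
        return 3
    if vref_x + vref_y > 0.5:
        return 2
    return 1

print(sector_label(30), region_label(0.3, 0.3))  # → A 2
```

It is exactly this rule set's sensitivity to noise near the 0.5 and 60° boundaries that motivates replacing it with a learned classifier.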
Moreover, once trained, DL models provide very fast inference, making them suitable for real-time applications. Unlike rule-based approaches that may require recalibration of thresholds when operating conditions change, DL models can generalize to unseen scenarios without additional tuning. This adaptability is particularly valuable in applications such as fault detection, predictive maintenance, and adaptive modulation, where inverter operating conditions deviate from ideal assumptions. The use of DL in sector and region classification, therefore, not only validates its feasibility for fundamental inverter state estimation but also lays the groundwork for extending these models to more advanced power electronic applications.
Among the models, the GRU achieved the best balance of accuracy, robustness, and computational efficiency. Table 8 summarizes the approximate parameter counts and indicative inference latencies for all models. GRU, with approximately 150,000 parameters, achieved the highest test accuracy and F1-score while maintaining a lower parameter count and latency than LSTM. Its simplified gating mechanism allows efficient learning of temporal dependencies, making it particularly suitable for real-time embedded applications. By comparison, the DNN and 1D-CNN, with fewer parameters (52 k and 120 k, respectively), performed slightly worse in accuracy and were less capable of capturing temporal dependencies.
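Parameter counts of the kind reported in Table 8 can be sanity-checked from the standard GRU formula. The sketch below is ours; the exact total depends on the input dimensionality, the classifier head, and the framework's bias variant (e.g., Keras's reset_after option adds 3h extra biases per layer), so the 4-feature input used here is an assumption:

```python
def gru_layer_params(input_dim, hidden, reset_after=False):
    """3 gates, each with (input + recurrent) weight matrices and bias(es)."""
    bias = 2 * hidden if reset_after else hidden
    return 3 * (hidden * (input_dim + hidden) + bias)

def stacked_gru_params(input_dim, hidden_sizes, reset_after=False):
    """Total recurrent parameters of a stacked GRU (classifier head excluded)."""
    total, prev = 0, input_dim
    for h in hidden_sizes:
        total += gru_layer_params(prev, h, reset_after)
        prev = h
    return total

# Sector model of Table 4 ({128, 64, 32} units), assuming a 4-feature input.
print(stacked_gru_params(4, [128, 64, 32]))  # → 97440, i.e., on the order of 10^5
```

An LSTM of the same widths has four gates instead of three, which is why it carries roughly a third more parameters than the GRU at equal hidden sizes, consistent with the 200 k vs. 150 k figures in Table 8.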

5. Conclusions

In this study, DL approaches with high accuracy and precision are proposed to automatically recognize the sector and region for three-level NPC inverters. DNN, 1D-CNN, LSTM, and GRU-based DL methods are used to analyze and evaluate the sectors and regions of the inverters. The classification performance of the DL models is evaluated using cross-validation, performance metrics, confusion matrices, and statistical tests. The GRU model showed the highest classification accuracy in recognizing sectors and regions compared to the other models: the proposed GRU-based model achieves an accuracy of 99.27% for predicting sectors (six classes) and 97.62% for predicting regions (three classes). The statistical test results show that the GRU-based approach differs significantly from the other models in automatically recognizing the sector and region classes and provides better classification performance than traditional methods. Because it has a small number of layers and parameters, the proposed DL-based approach can be implemented in hardware-based systems for three-level NPC inverters.

Author Contributions

Conceptualization, F.Ö. and R.O.K.; methodology, F.Ö., R.O.K. and T.V.M.; software, F.Ö.; validation, F.Ö.; formal analysis, F.Ö. and T.V.M.; investigation, F.Ö.; resources, F.Ö. and R.O.K.; data curation, F.Ö.; writing—original draft preparation, F.Ö., R.O.K. and T.V.M.; writing—review and editing, F.Ö., R.O.K. and T.V.M.; visualization, F.Ö.; supervision, R.O.K. and T.V.M.; project administration, R.O.K.; funding acquisition, F.Ö. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

This manuscript was prepared using part of Fatih Özen’s PhD thesis, conducted at Istanbul University-Cerrahpasa.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Framework of a deep learning-based approach for sector and region detection models.
Figure 2. Three-phase three-level diode-clamped inverter feeding an induction motor.
Figure 3. Space vector diagram of a three-level inverter.
Figure 4. Space vector diagram of sector A of a three-level inverter.
Figure 5. Proposed 1D-CNN architecture for sector and region prediction.
Figure 6. Proposed LSTM/GRU-based model architecture for sector and region prediction.
Figure 7. Schematic representation of the framework incorporating the CV technique and statistical testing for performance evaluation of DL models.
Figure 8. Training loss curves of deep learning models (a) for the sector, (b) for the region.
Figure 9. Confusion matrices of the GRU deep learning model (a) for the sector, (b) for the region.
Table 1. Angle and sector relationship for the inverter.
Sector | Angle (θ)
A | 0° ≤ θ ≤ 60°
B | 60° ≤ θ ≤ 120°
C | 120° ≤ θ ≤ 180°
D | 180° ≤ θ ≤ 240°
E | 240° ≤ θ ≤ 300°
F | 300° ≤ θ ≤ 360°
Table 2. Reference voltage and region relationship for the inverter.
Region | Vref_x | Vref_y | Vref_x + Vref_y
1 | <0.5 | <0.5 | <0.5
2 | <0.5 | <0.5 | >0.5
3 | – | >0.5 | –
4 | >0.5 | – | –
Table 3. Statistical distribution of the dataset for sector and region forecasting.
Variable | Mean | Std | Min | 25% | 50% | 75% | Max
Vdc | 400 | 0 | 400 | 400 | 400 | 400 | 400
m | 0.55 | 0.227 | 0.30 | 0.30 | 0.50 | 0.85 | 0.85
f | 36.66 | 12.47 | 20 | 20 | 40 | 50 | 50
θ | 3.14 | 1.81 | 0 | 1.57 | 3.14 | 4.71 | 6.28
Table 4. Selected the best parameters for deep learning models on sector and region prediction.
Models | Parameters | Sector | Region
DNN | # of layers | 3 | 3
 | # of units in each layer | {128, 64, 32} | {512, 256, 128}
 | dropout rate | 0.3 | 0.1
 | learning rate | 0.01 | 0.01
1D-CNN | # of layers | 3 | 3
 | # of units in each layer | {256, 128, 64} | {512, 256, 128}
 | dropout rate | 0.5 | 0.1
 | learning rate | 0.001 | 0.001
LSTM | # of layers | 3 | 3
 | # of units in each layer | {128, 64, 32} | {512, 256, 128}
 | dropout rate | 0.3 | 0.1
 | learning rate | 0.001 | 0.001
GRU | # of layers | 3 | 3
 | # of units in each layer | {128, 64, 32} | {512, 256, 128}
 | dropout rate | 0.3 | 0.1
 | learning rate | 0.001 | 0.001
Table 5. Sector prediction results obtained using deep learning-based models with 5-Fold CV.
Models | Metric | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | μ ± σ
DNN | Accuracy (%) | 98.03 | 99.00 | 98.40 | 98.41 | 97.81 | 98.33 ± 0.40
 | Precision (%) | 98.04 | 99.00 | 98.41 | 98.42 | 97.86 | 98.35 ± 0.39
 | Recall (%) | 98.04 | 98.99 | 98.39 | 98.41 | 97.76 | 98.33 ± 0.41
 | F1-score (%) | 98.03 | 98.99 | 98.40 | 98.41 | 97.79 | 98.32 ± 0.40
 | MCC | 0.976 | 0.988 | 0.980 | 0.981 | 0.973 | 0.980 ± 0.005
1D-CNN | Accuracy (%) | 99.33 | 98.93 | 97.96 | 98.16 | 98.68 | 98.61 ± 0.49
 | Precision (%) | 99.32 | 98.95 | 97.99 | 98.18 | 98.67 | 98.62 ± 0.48
 | Recall (%) | 99.33 | 98.92 | 97.96 | 98.16 | 98.67 | 98.61 ± 0.50
 | F1-score (%) | 99.32 | 98.93 | 97.96 | 98.16 | 98.66 | 98.61 ± 0.49
 | MCC | 0.992 | 0.987 | 0.975 | 0.978 | 0.984 | 0.983 ± 0.006
LSTM | Accuracy (%) | 99.28 | 99.25 | 98.84 | 98.43 | 98.88 | 98.94 ± 0.31
 | Precision (%) | 99.30 | 99.25 | 98.88 | 98.45 | 98.91 | 98.96 ± 0.30
 | Recall (%) | 99.27 | 99.25 | 98.84 | 98.44 | 98.86 | 98.93 ± 0.30
 | F1-score (%) | 99.28 | 99.24 | 98.84 | 98.43 | 98.87 | 98.93 ± 0.31
 | MCC | 0.991 | 0.991 | 0.986 | 0.981 | 0.986 | 0.987 ± 0.004
GRU | Accuracy (%) | 99.11 | 99.31 | 99.26 | 99.30 | 99.35 | 99.27 ± 0.08
 | Precision (%) | 99.12 | 99.33 | 99.26 | 99.31 | 99.35 | 99.28 ± 0.08
 | Recall (%) | 99.11 | 99.32 | 99.26 | 99.30 | 99.34 | 99.27 ± 0.08
 | F1-score (%) | 99.11 | 99.32 | 99.26 | 99.30 | 99.34 | 99.27 ± 0.08
 | MCC | 0.990 | 0.992 | 0.991 | 0.991 | 0.992 | 0.991 ± 0.001
Table 6. Region prediction results obtained using deep learning-based models with 5-Fold CV.
Models | Metric | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | μ ± σ
DNN | Accuracy (%) | 96.18 | 95.93 | 96.20 | 97.14 | 95.58 | 96.21 ± 0.52
 | Precision (%) | 96.02 | 95.63 | 96.16 | 97.01 | 95.45 | 96.06 ± 0.54
 | Recall (%) | 96.09 | 95.71 | 95.98 | 97.05 | 95.52 | 96.01 ± 0.60
 | F1-score (%) | 96.05 | 95.67 | 96.06 | 97.03 | 95.32 | 96.03 ± 0.57
 | MCC | 0.942 | 0.938 | 0.942 | 0.956 | 0.932 | 0.942 ± 0.008
1D-CNN | Accuracy (%) | 95.85 | 95.70 | 92.50 | 94.74 | 96.05 | 94.97 ± 1.31
 | Precision (%) | 96.01 | 95.65 | 92.86 | 95.65 | 95.57 | 94.95 ± 1.14
 | Recall (%) | 95.28 | 95.17 | 91.69 | 94.39 | 95.91 | 94.49 ± 1.48
 | F1-score (%) | 95.59 | 95.39 | 92.15 | 94.49 | 95.73 | 94.67 ± 1.33
 | MCC | 0.937 | 0.934 | 0.886 | 0.920 | 0.939 | 0.923 ± 0.019
LSTM | Accuracy (%) | 97.10 | 97.73 | 97.13 | 98.44 | 97.43 | 97.57 ± 0.49
 | Precision (%) | 97.01 | 97.74 | 96.98 | 98.51 | 97.28 | 97.51 ± 0.57
 | Recall (%) | 96.92 | 97.68 | 96.91 | 98.30 | 97.33 | 97.43 ± 0.52
 | F1-score (%) | 96.96 | 97.71 | 96.94 | 98.40 | 97.30 | 97.47 ± 0.54
 | MCC | 0.955 | 0.965 | 0.956 | 0.976 | 0.960 | 0.962 ± 0.007
GRU | Accuracy (%) | 98.13 | 97.50 | 96.61 | 97.63 | 98.23 | 97.62 ± 0.57
 | Precision (%) | 98.18 | 97.40 | 96.57 | 97.48 | 97.89 | 97.51 ± 0.54
 | Recall (%) | 98.03 | 97.30 | 96.22 | 97.63 | 98.32 | 97.50 ± 0.72
 | F1-score (%) | 98.10 | 97.34 | 96.38 | 97.54 | 98.09 | 97.49 ± 0.63
 | MCC | 0.972 | 0.962 | 0.948 | 0.964 | 0.973 | 0.964 ± 0.008
Table 7. Nemenyi Post-Hoc test results for sector and region detection models.
Nemenyi post hoc test results (p-values)
Models | Sector: DNN | 1D-CNN | LSTM | GRU | Region: DNN | 1D-CNN | LSTM | GRU
DNN | 1.000 | 0.994 | 0.316 | 0.068 | 1.000 | 0.883 | 0.122 | 0.203
1D-CNN | 0.994 | 1.000 | 0.456 | 0.122 | 0.883 | 1.000 | 0.017 | 0.036
LSTM | 0.316 | 0.456 | 1.000 | 0.883 | 0.122 | 0.017 | 1.000 | 0.995
GRU | 0.068 | 0.122 | 0.883 | 1.000 | 0.203 | 0.036 | 0.995 | 1.000
Table 8. Approximate parameter counts and indicative inference latencies for all models.
Model | Parameters | Approx. latency (ms, CPU/GPU)
DNN | 52 k | 5 / 1
1D-CNN | 120 k | 8 / 2
LSTM | 200 k | 15 / 4
GRU | 150 k | 10 / 3