Article

A Hierarchical RNN-LSTM Model for Multi-Class Outage Prediction and Operational Optimization in Microgrids

by Nouman Liaqat, Muhammad Zubair, Aashir Waleed, Muhammad Irfan Abid and Muhammad Shahid

1
Department of Electrical Engineering & Technology, Riphah International University, Faisalabad Campus, Faisalabad 38000, Pakistan
2
Department of Electrical Electronics and Telecommunication Engineering, University of Engineering and Technology, Faisalabad Campus, Lahore 38000, Pakistan
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work and share first authorship.
Electricity 2025, 6(4), 55; https://doi.org/10.3390/electricity6040055
Submission received: 13 August 2025 / Revised: 9 September 2025 / Accepted: 29 September 2025 / Published: 1 October 2025

Abstract

Microgrids are becoming an integral part of modern energy systems, as they provide locally sourced, resilient energy and enable efficient energy sourcing. However, microgrid operations can be greatly affected by sudden environmental changes, fluctuating demand, and unexpected outages. In particular, extreme climatic events expose the vulnerability of microgrid infrastructure and resilience, often leading to increased risk of system-wide outages. Thus, successful microgrid operation relies on timely and accurate outage predictions. This research proposes a data-driven machine learning framework for the optimized operation of a microgrid and predictive outage detection using a Recurrent Neural Network–Long Short-Term Memory (RNN-LSTM) architecture built around inherently temporal modeling. A time-aware embedding and masking strategy is employed to handle categorical and sparse temporal features, while mutual information-based feature selection ensures only the most relevant and interpretable inputs are retained for prediction. Moreover, the model addresses rapid power fluctuations by learning long-term dependencies within historical and real-time observation streams. Two datasets are utilized: a locally developed real-time dataset collected from a 5 MW microgrid of the Maple Leaf Cement Factory in Mianwali and a 15-year national power outage dataset obtained from Kaggle. Both datasets underwent extensive preprocessing, normalization, and tokenization to transform raw readings into machine-readable sequences. The proposed approach attained an accuracy of 86.52% on the real-time dataset and 84.19% on the Kaggle dataset, outperforming conventional models in detecting sequential outage patterns. It also achieved a precision of 86%, a recall of 86.20%, and an F1-score of 86.12%, surpassing the performance of other models such as CNN, XGBoost, SVM, and various static classifiers. In contrast to these traditional approaches, the RNN-LSTM’s ability to leverage temporal context makes it a more effective and intelligent choice for real-time outage prediction and microgrid optimization.

1. Introduction

Microgrids are proposed to improve the resilience and reliability of the grid by taking over essential loads from the main grid in the event of a failure. The ability to predict and respond to outages in microgrids carries significant operational and economic value. According to the U.S. Government Accountability Office (GAO), total annual outage costs to utility customers in the United States were approximately $55 billion based on 2006–2019 data and could exceed $480 billion per year by 2080–2099, absent aggressive resilience measures [1]. In addition to economic losses, outages can affect critical services such as health, safety, and emergency response, highlighting the value of resilient microgrid operation. For this reason, there has also been a strategic emphasis on hardening microgrids against outages. The U.S. Department of Energy (DOE) set aggressive targets to improve resilience for critical loads by 98% and reduce the time they rely on non-microgrid solutions by 2020 [2]. Setting such high standards clearly indicates the priority given to outage avoidance in the design of microgrids. In short, improving reliability through data-driven outage prediction could provide substantial value through (a) reduced unplanned downtime, (b) improved power quality, and (c) continued service to critical infrastructure facilities. Studies of microgrid predictive maintenance systems show that timely failure prediction improves uptime and efficiency [3].
The concept of a microgrid is illustrated in Figure 1, which presents a novel paradigm for maximizing the utilization of distributed energy resources (DERs), including energy storage systems, small-scale gas and oil-based generators, and variable renewable energy sources such as solar and wind. Microgrids aim to reduce dependence on the main power grid by intelligently coordinating controllable distributed generation, thereby increasing the penetration of renewable energy [4]. However, the inherent variability of renewable resources and the limitations of accurate demand forecasting often render deterministic scheduling models ineffective during real-time operation. This problem may result in curtailment of renewable energy, commonly referred to as wind or solar abandonment, and inefficient load management. Therefore, achieving optimal scheduling under uncertain and dynamic conditions has become a critical research challenge [5].
A microgrid typically comprises local loads, distributed generation, and energy storage units [6]. It can be broadly defined as a small to medium-scale distribution network that includes distributed energy generators, end-users, and supporting infrastructure. Battery storage systems are often integrated to compensate for the intermittent nature of renewable sources such as wind and solar power. In order to ensure coordinated operation, microgrids require robust communication and control systems. They can operate in both grid-connected and islanded modes [6], adapting flexibly to changes in external supply or internal demand. Depending on the nature of their energy sources, microgrids may employ direct current, alternating current, or hybrid transmission systems.
Microgrids are low-voltage distribution networks including varying types of generation technology, controllable energy resources, and DERs (as illustrated in Figure 2) [7]. The energy industry across the globe has started to embrace the three Ds (Decentralization, Decarbonization, and Democratization) as the foundational principles for grid modernization and the energy transition towards clean energy; this modernization of the grid relies heavily on trust between people and technology.
Fluctuating electricity demand, industrialization and technological advancements, and obstacles associated with the adoption of renewable energy sources have created new and unprecedented challenges for modern power systems on a global scale. Greater emphasis has therefore been placed on microgrids and their ability to aid energy reliability, renewable energy integration, and decentralized energy management [5]. However, unexpected power outages disrupt the smooth operation and maintenance of microgrids and cause equipment damage and economic losses.
Traditional data collection and outage prediction are typically performed manually, which can result in missing or corrupted data, compromising the quality of outage predictions and limiting real-time operational optimization. This highlights the need for intelligent and automated prediction systems to enhance the resiliency and agility of microgrid infrastructures. The rise of smart grid technology has led to massive data generation through sensors, actuators, and intelligent monitoring devices. This data captures critical information about grid behavior, load variations, equipment status, and environmental factors. However, traditional data analysis tools often fall short in extracting meaningful patterns from such complex, high-dimensional, and time-dependent datasets. These limitations necessitate the use of advanced machine learning and deep learning techniques to uncover actionable insights for real-time decision-making and outage prediction [8]. In the past few years, AI and ML have helped boost power system analysis and forecasting, as discussed in [9,10]. In particular, RNNs and their advanced variant, LSTM networks, have demonstrated significant success in modeling sequential data due to their ability to capture temporal dependencies. These architectures are well suited to power outage prediction, as they can learn from historical patterns and retain long-term contextual information critical for forecasting future events.
This work differentiates itself from earlier studies by combining temporal deep learning with real-time and historical outage data under a classification framework; its key contributions are outlined as follows:
  • Proposes an enhanced temporal learning framework based on RNN-LSTM architecture for multi-class outage prediction in microgrids. The model achieves a high accuracy of 86.52%, with a precision of 86%, a recall of 86.20%, and an F1-score of 86.12% on real-time microgrid data, outperforming conventional models including CNN, XGBoost, SVM, and Random Forest.
  • Applies mutual information-based feature selection to retain the most relevant and interpretable features for outage prediction, ensuring transparency and physical traceability essential for real-time microgrid operations.
  • Introduces time-aware embedding and masking strategy to preprocess categorical and sparse temporal features. This allows the model to learn rich, low-dimensional representations of categorical outage causes while ignoring padding tokens, enhancing generalization without compromising temporal continuity.
  • Implements hierarchical temporal learning by structuring LSTM layers to operate at multiple time scales like 5-min, hourly, and daily resolutions. This enables the model to simultaneously learn micro-events such as short-term fluctuations and macro-behaviors like seasonal outage trends, improving forecasting depth and robustness.
  • Formulates outage prediction as a fine-grained multiclass classification task rather than binary classification. The model distinguishes between diverse outage types such as equipment failures, cyber-attacks, and weather-induced disruptions, enabling more actionable predictions compared to traditional approaches.
  • Presents a unified comparative study of temporal deep learning (RNN-LSTM) and ensemble machine learning models (e.g., Random Forest, XGBoost) across two heterogeneous datasets, offering a holistic performance benchmark under both real-time and historical outage conditions.
  • Highlights the RNN-LSTM’s capability to capture long-term temporal dependencies through gated memory units, which retain critical patterns over extended sequences. This significantly improves predictive accuracy for time-dependent outage events that unfold over hours or days.
  • Validates the proposed framework using two real-world datasets: (i) a high-frequency, real-time telemetry dataset from a 5 MW microgrid at the Maple Leaf Cement Factory, and (ii) a 15-year national power outage dataset from Kaggle. This dual-source validation demonstrates the model’s adaptability across local and national reliability contexts.

2. Literature Review

At first, fuzzy logic methods were commonly used by smart grid controllers to deal with variables that were uncertain and could not be expressed by simple equations. Although fuzzy logic can handle uncertain information much as people do, it struggles in current grid situations where data is plentiful. A downside is that fuzzy inference relies on human judgment and rarely adapts over time. The accuracy of a fuzzy system depends on expertly defined membership functions and rules; these are not learned from data but fixed a priori, which makes reasoning under novel uncertainty challenging [11]. In large-scale power systems with high-dimensional data streams, a fuzzy logic controller can struggle to cope. Scalability is an issue: increasing the number of rules or input variables grows complexity exponentially, often leading to slower performance and difficulty in maintenance [11]. Moreover, fuzzy logic lacks an inherent mechanism to learn or adapt from new data. As noted by Małolepsza et al. [11], a fuzzy system cannot automatically update its rules with incoming information, limiting its effectiveness in dynamic environments. Additionally, although fuzzy controllers handle approximate reasoning, they may perform poorly when extreme or rare events occur. By smoothing outputs, they risk underestimating high-impact outlier scenarios, a critical drawback for grid reliability applications. In large data environments, the computational cost of evaluating numerous fuzzy rules can also be prohibitive, making real-time control difficult [12]. In short, while fuzzy logic played a role in earlier grid control strategies, its inability to scale, adapt, and learn in the face of big data and evolving uncertainty has motivated the search for more data-driven, intelligent techniques.
Driven by the proliferation of sensors and advanced metering, modern smart grids generate vast data that can be harnessed by AI and ML methods. Over the past decade, AI/ML techniques have been widely applied to enhance grid management and operation [13,14]. In particular, load forecasting has benefited greatly from machine learning algorithms. Short-term electricity demand prediction using ML has shown improved accuracy over traditional methods, enabling better balance of supply and demand [15]. Accurate forecasts allow grid operators to optimize unit commitment and reduce operating costs. Another critical application is fault and outage prediction. Machine learning classifiers can learn patterns from historical outage data and real-time sensor streams to predict faults or incipient failures in distribution networks [16]. Such predictive analytics enable proactive maintenance and self-healing actions, thereby improving reliability.
In the realm of AI, deep learning has emerged as a powerful method for modeling the nonlinear behavior evident in energy systems. Unlike many statistical methods, deep learning models (typically multi-layer neural networks) can automatically identify salient features in data; therefore, they are useful for microgrid operational tasks with complex spatiotemporal patterns [17]. One benefit of deep learning is its ability to model nonlinear relationships and time-dependent dynamics in datasets. For example, deep neural networks can represent, in nonlinear terms, how a microgrid’s load or generation at a certain hour relates to a variety of factors (weather variables, values from previous hours, etc.), something that traditional physics-based or linear models typically cannot accomplish. Of the many deep learning approaches, RNNs have featured most prominently in forecasting work applied to microgrids [18]. RNNs are well-suited to sequential data because their internal state can store information from previous time steps. This makes RNNs good candidates for modeling energy-system time series, since current grid conditions depend on characteristics of historical behavior. Standard RNN architectures can, in theory, learn dependencies across arbitrary sequence lengths; in practice, however, they struggle with very long-term dependencies due to vanishing or exploding gradients. The LSTM model addresses this limitation through intrinsic gates (input, output, and forget) that regulate the flow of information over many time steps and preserve long-range context. LSTMs have proven more effective than traditional modeling approaches in microgrid forecasting tasks such as day-ahead load prediction and renewable generation forecasting, as they can capture short-term fluctuations alongside longer-term seasonality [19]. For example, the work in [20] used an LSTM for short-term load forecasting in a smart microgrid and showed that it retains long-term information from the load sequence even when forecasting 24 h ahead. In general, LSTM networks yield superior performance on time series with long-term dependencies, such as estimating the probability of extended outages or the remaining useful life of microgrid components from real-time sensor streams.
Recent literature provides detailed evidence of these strengths. Recurrent networks have been shown to effectively capture temporal features relevant to microgrid operations. For instance, one study notes that RNNs, by virtue of their recursive structure, can learn the time-step correlations and complex time-varying relationships between external factors (like weather) and grid demand. Building on that, LSTMs further allow learning of persistent influences; they remember information from far-back time steps, e.g., consumption from the previous day or week, that might influence future states [21]. This capability is crucial for microgrids, which often exhibit daily and weekly cycles. An additional advantage in the microgrid context is LSTM’s robustness to noise and irregular time patterns. Microgrid load data can be more volatile (especially in small-scale systems) than aggregate grid load. Studies have observed that LSTM models perform reliably even on irregular demand patterns in small regions, whereas traditional models struggle [22]. In a nutshell, RNN-based models, particularly LSTMs, provide a powerful tool for microgrid forecasting problems, as they naturally account for temporal dependencies and can model complex sequential dynamics that static learning algorithms or statistical methods might miss.

3. Methodology

The proposed approach, shown in Figure 3, encompasses comprehensive data preprocessing, time-aware embedding and masking, and an RNN-LSTM architecture optimized to capture both short-term fluctuations and long-term outage precursors.

3.1. Dataset Overview

In this study, we use two primary datasets. The first is a custom real-time collection from a 5 MW microgrid at the Maple Leaf Cement Factory in Mianwali, in which operational metrics such as irradiation and load demand are recorded at frequent intervals. A sample excerpt of this dataset is shown in Table 1. The second dataset is sourced from Kaggle and comprises 15 years of nationwide power outage records compiled by the U.S. Department of Energy [23]. This outage dataset spans a broad timeframe (approximately 2000–2014) and includes information such as date of restoration, geographic areas, and demand loss, as shown in Table 2. Together, these datasets provide both local high-resolution microgrid data and a broader context of grid reliability events, forming a comprehensive basis for training the machine learning model.

3.2. Data Preprocessing

Before training the model, we performed an extensive data preprocessing pipeline to ensure the data’s quality and suitability. This pipeline began with data purification to handle missing and irrelevant information. For handling missing or incomplete entries, we applied various imputation strategies depending on the data type and frequency. Continuous variables such as voltage, frequency, and power were imputed using linear interpolation if less than 10% of values were missing within a sequence. For isolated gaps exceeding this threshold, we applied forward-fill imputation for real-time operational data and mean imputation for the historical dataset. Records with missing target labels were excluded from the data.
To detect outliers, we use the modified z-score approach, calculated as:

$$ z_i = \frac{0.6745\,(x_i - \mathrm{median}(X))}{\mathrm{MAD}} $$

where $x_i$ is the value of the $i$-th data point, $\mathrm{median}(X)$ is the median of the dataset $X$, $\mathrm{MAD} = \mathrm{median}(|x_i - \mathrm{median}(X)|)$ is the median absolute deviation of the dataset, and $z_i$ is the modified z-score of the $i$-th data point. Data points with $|z_i| > 3.5$ were flagged as outliers and removed. For example, voltage spikes exceeding ±30% of the nominal value (beyond 1.3 p.u.) or load anomalies exceeding three standard deviations from the mean were excluded. This step was especially important in the real-time microgrid dataset, which occasionally recorded sensor artifacts during transient conditions. The next step was the transformation of categorical or text-based fields into numerical form. We tokenized textual attributes such as outage cause descriptions and categorical labels by converting them into discrete tokens and mapping those tokens to integer codes. All continuous features were normalized using min-max scaling to the [0, 1] range prior to model training.
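The outlier filter can be summarized in a few lines of Python. The sketch below (pandas) is a minimal illustration of the modified z-score rule above; the column name is illustrative rather than taken from the dataset schema.

```python
import pandas as pd

def remove_outliers_modified_z(series: pd.Series, threshold: float = 3.5) -> pd.Series:
    """Drop points whose modified z-score exceeds the threshold."""
    median = series.median()
    mad = (series - median).abs().median()      # median absolute deviation
    if mad == 0:                                # guard against constant segments
        return series
    z = 0.6745 * (series - median) / mad
    return series[z.abs() <= threshold]

# Example (hypothetical column name): clean a voltage channel before scaling
# df["voltage"] = remove_outliers_modified_z(df["voltage"])
```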
We further applied mutual information-based feature selection to retain features with the highest statistical dependency on the target outage class while reducing redundancy. This method evaluates the nonlinear mutual dependence between each input feature and the output labels, allowing us to identify variables that contribute most meaningfully to predictive performance. Feature-label mutual information is estimated using a nonparametric k-nearest-neighbors estimator (mutual_info_classif, k = 5), accommodating mixed continuous and tokenized categorical variables. To control redundancy, minimum-redundancy maximum-relevance (mRMR) greedy forward selection is applied, maximizing $I(x;Y) - \frac{1}{|S|}\sum_{s \in S} I(x;s)$, where $S$ is the set of already selected features. To protect minority classes, mutual information is computed on class-balanced resamples and aggregated using inverse-frequency weights so that rare but critical outage contexts are preserved. Unlike dimensionality reduction techniques such as Principal Component Analysis (PCA), which transform features into untraceable latent components, this approach preserves the physical interpretability of inputs such as time of day, power demand, or weather indicators. This transparency is critical in real-time operational settings, enabling grid operators to understand which specific conditions are driving a prediction and thereby make more informed control decisions. After converting all features into a numeric format, we split the data into 80% for training and 20% for testing. This ratio was chosen to ensure a large training set for effective model learning, particularly beneficial in high-dimensional feature spaces, while keeping a representative test set for reliable generalization assessment [24]. The 20% test portion is sufficient for stable evaluation metrics, and random splitting helps preserve the data distribution and prevent sampling bias. This balance supports both model robustness and performance reliability on unseen data.
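For illustration, a minimal sketch of the greedy mRMR selection built around scikit-learn’s kNN mutual information estimators is shown below. It treats tokenized categorical columns as numeric and omits the class-balanced resampling and inverse-frequency weighting described above, so it should be read as a simplified approximation of the procedure, not the exact implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, n_features=10, n_neighbors=5, random_state=0):
    """Greedy minimum-redundancy maximum-relevance selection with kNN MI estimates.

    X: array (n_samples, n_candidate_features); y: integer outage class labels.
    Returns the indices of the selected feature columns.
    """
    relevance = mutual_info_classif(X, y, n_neighbors=n_neighbors,
                                    random_state=random_state)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        best_j, best_score = None, -np.inf
        for j in remaining:
            # average redundancy between candidate j and the features already chosen
            redundancy = np.mean([
                mutual_info_regression(X[:, [s]], X[:, j],
                                       n_neighbors=n_neighbors,
                                       random_state=random_state)[0]
                for s in selected
            ]) if selected else 0.0
            score = relevance[j] - redundancy      # I(x; Y) - mean_s I(x; s)
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```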

3.3. Model Architecture

The core predictive model is a deep sequence learning architecture based on stacked LSTM networks, designed to capture multi-scale temporal dependencies from time-series microgrid data. The complete model structure with layers is shown in Figure 4 and is discussed below.
(a) 
Time-Aware Embedding
Each input sample is structured as a window of temporally ordered features, comprising real-time sensor readings and time-stamped categorical variables. To effectively incorporate temporal context into the categorical inputs, we implement a time-aware embedding mechanism. Specifically, each categorical field is encoded using a 32-dimensional embedding layer that is aligned with its corresponding time step in the sequence. This approach allows the model to learn temporal dynamics within the embedding space, capturing how the relevance of a category may evolve over time. By embedding categories in a trainable vector space and associating them directly with time-indexed observations, the model gains a richer understanding of sequential dependencies than standard one-hot encoding or static embeddings could provide. This enables improved generalization and enhanced detection of temporal patterns associated with different outage types. In practice, categorical fields (e.g., outage cause, equipment type, weather condition) are mapped to 32-dimensional learned embeddings, with rare categories merged into an OOV (“other”) token. Embeddings are aligned to each timestep and trained end-to-end with embedding dropout (0.1) for robustness. We add cyclical encodings for hour-of-day, day-of-week, and month-of-year to capture daily, weekly, and seasonal periodicities. A time-gap feature $\Delta t$ (minutes since the previous observation) is included after a $\log(1+\Delta t)$ transform and scaling, and is also bucketized into quantile bins with an auxiliary 8-dimensional embedding to indicate long gaps between observations.
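The sketch below illustrates one way to wire up such time-aware inputs in Keras: a masked 32-dimensional embedding for a tokenized categorical channel, sin/cos cyclical encodings for periodic time features, and concatenation with the continuous sensor channels. Vocabulary size, sequence length, and feature counts are illustrative placeholders, not values from the paper.

```python
import numpy as np
import tensorflow as tf

def cyclical_encode(values: np.ndarray, period: float) -> np.ndarray:
    """Encode a periodic feature (e.g., hour-of-day with period 24) as sin/cos pairs."""
    angle = 2.0 * np.pi * values / period
    return np.stack([np.sin(angle), np.cos(angle)], axis=-1)

VOCAB_SIZE, SEQ_LEN, EMB_DIM, N_CONT = 50, 288, 32, 12   # illustrative sizes

# Tokenized categorical channel (PAD token = 0, so a mask is generated automatically)
cat_in = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32", name="outage_cause_tokens")
cat_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(cat_in)
cat_emb = tf.keras.layers.Dropout(0.1)(cat_emb)           # stands in for embedding dropout

# Continuous channel: scaled sensor readings plus cyclical encodings and log(1 + Δt)
cont_in = tf.keras.Input(shape=(SEQ_LEN, N_CONT), name="continuous_features")

# The embedding mask propagates through the concatenation to the recurrent layers
merged = tf.keras.layers.Concatenate(axis=-1)([cat_emb, cont_in])
```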
(b) 
Masking Layer
To accommodate variable-length input sequences within a fixed-size training framework, sequences are padded to a common length T (continuous channels padded with 0.0, categorical channels with a dedicated PAD token, index 0). A masking layer is applied prior to the recurrent units, tf.keras.layers.Masking(mask_value = 0.0) for continuous inputs and Embedding(..., mask_zero = True) for categorical inputs, so that padded timesteps are ignored by the network. The mask is propagated through all stacked LSTM layers and the hierarchical fusion module, ensuring that artificially added padding does not influence hidden-state transitions or gradient flow during backpropagation through time (BPTT), nor the computation of loss and metrics. To handle irregular sampling, observations are aligned to a 5-min grid; gaps exceeding a safety threshold (e.g., 30 min) are left unfilled and treated as masked rather than forward-filled. Additionally, per-feature missingness indicators and a time-gap feature $\Delta t$ accompany the masked inputs so the model can distinguish true zeros from missing values and modulate memory accordingly, preserving the integrity of learned temporal patterns and maintaining consistent alignment between time-aware embeddings and real sensor observations.
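As a concrete illustration of the grid alignment and missingness handling, the pandas sketch below resamples irregular telemetry to a 5-minute grid, leaves long gaps unfilled, and appends per-feature missingness indicators plus a log-scaled time-gap feature. The function and column handling are illustrative assumptions, not the paper’s exact pipeline.

```python
import numpy as np
import pandas as pd

def align_to_grid(df: pd.DataFrame, freq: str = "5min", max_gap_min: float = 30.0) -> pd.DataFrame:
    """Resample irregular telemetry (DatetimeIndex, numeric columns) to a fixed grid.

    Short gaps are interpolated; gaps longer than max_gap_min stay NaN and are later
    masked (filled with the mask value 0.0). Missingness indicators and a log(1 + Δt)
    time-gap feature are appended so the model can tell true zeros from missing data.
    """
    df = df.sort_index()
    grid = df.resample(freq).mean()                              # numeric channels on the grid
    missing = grid.isna().astype("float32").add_suffix("_missing")
    gap_min = df.index.to_series().diff().dt.total_seconds().div(60.0)
    grid["delta_t_log"] = np.log1p(gap_min.resample(freq).max()).fillna(0.0)
    short = grid.interpolate(limit=int(max_gap_min // 5), limit_direction="forward")
    return pd.concat([short.fillna(0.0), missing], axis=1)       # 0.0 matches the mask value
```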
(c) 
Stacked LSTM Layers
We employ a 3-layer stacked LSTM architecture to enhance the model’s temporal abstraction capability and capture both short-term and long-term dependencies within the outage sequences:
i.
LSTM Layer 1: Comprises 128 memory cells and receives the embedded input sequence. This layer captures short-term temporal dependencies and low-level time-series fluctuations using gated memory mechanisms.
ii.
LSTM Layer 2: Contains 64 memory cells and builds on the output of the first layer to learn intermediate temporal abstractions, capturing transitions and evolving patterns related to outage precursors.
iii.
LSTM Layer 3: Includes 32 memory cells and functions as a high-level temporal aggregator, integrating long-range dependencies and consolidating sequence-level context before passing the representation to the dense classifier.
This structure was chosen empirically after evaluating 1 to 5 LSTM layers and 32 to 256 units per layer. While deeper networks (>3 LSTM layers) showed slight gains in training performance, they also exhibited higher variance on validation loss and longer convergence times. Three layers with decreasing cell size (128 to 32) provided a strong trade-off between learning capacity and generalization. Each LSTM cell contains input, output, and forget gates, allowing selective memory retention across time steps. This architecture enables the network to model long-range dependencies, such as gradual voltage decay or prolonged frequency oscillations, that precede certain outage types.
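A minimal Keras sketch of the three-layer stack described in (c), together with the dropout and dense classifier of items (e) and (f), is given below. Layer sizes and dropout rates follow the text; the input dimensions and the single-branch layout (hierarchical fusion is sketched separately after item (d)) are simplifying assumptions.

```python
import tensorflow as tf

def build_stacked_lstm(seq_len: int, n_features: int, n_classes: int) -> tf.keras.Model:
    """Stacked LSTM trunk (128 -> 64 -> 32 cells) with dropout and a softmax classifier."""
    inputs = tf.keras.Input(shape=(seq_len, n_features))
    x = tf.keras.layers.Masking(mask_value=0.0)(inputs)          # ignore padded timesteps
    x = tf.keras.layers.LSTM(128, return_sequences=True)(x)      # short-term fluctuations
    x = tf.keras.layers.LSTM(64, return_sequences=True)(x)       # intermediate abstractions
    x = tf.keras.layers.LSTM(32)(x)                              # sequence-level context
    x = tf.keras.layers.Dropout(0.3)(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```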
(d) 
Hierarchical Temporal Fusion
The model supports hierarchical temporal learning, wherein three LSTM branches process time-series data at different resolutions (5-min, 1-h, and 1-day averages). Outputs from these branches are concatenated before entering the dense layers, allowing the model to learn both short-term micro-events and long-term macro-behaviors, a crucial feature for outage prediction where both rapid events and seasonal trends matter.
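A sketch of the fusion idea is shown below: three LSTM branches, one per resolution, whose outputs are concatenated before the dense classifier. The window lengths and per-branch unit counts are illustrative assumptions; the paper does not specify them for the individual branches.

```python
import tensorflow as tf

def build_hierarchical_model(n_classes: int, n_features: int,
                             window_steps=(288, 24, 30),        # 5-min, hourly, daily windows
                             branch_units=(128, 64, 32)) -> tf.keras.Model:
    """Fuse LSTM branches that read the same signal at different temporal resolutions."""
    inputs, branch_outputs = [], []
    for steps, units in zip(window_steps, branch_units):
        inp = tf.keras.Input(shape=(steps, n_features))
        branch_outputs.append(tf.keras.layers.LSTM(units)(inp))
        inputs.append(inp)
    fused = tf.keras.layers.Concatenate()(branch_outputs)        # joint micro/macro representation
    fused = tf.keras.layers.Dropout(0.3)(fused)
    x = tf.keras.layers.Dense(64, activation="relu")(fused)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```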
(e) 
Dropout Regularization
To mitigate overfitting, we incorporate two dropout layers with rates of 0.3 and 0.2 between the LSTM output and the dense classifier. Dropout randomly disables neurons during training, encouraging redundant representations and improving generalization. The dropout rate was optimized via grid search between 0.1 and 0.5.
(f) 
Dense Classifier
Following the recurrent layers, a fully connected dense layer with 64 units and ReLU activation projects the temporal feature representation into a lower-dimensional latent space. The final softmax output layer has dimensionality equal to the number of outage classes and outputs class-wise probabilities.
(g) 
Optimization and Training
The proposed RNN-LSTM model is optimized using the Adam optimizer, selected for its adaptive learning rate adjustment and superior convergence behavior on sparse gradients and noisy datasets. The optimizer’s parameters were configured as follows:
  • Learning rate (η): 0.001, determined through empirical tuning to balance fast convergence and gradient stability.
  • Exponential decay rates for moment estimates (β1 and β2): Default values of 0.9 and 0.999, respectively.
  • Epsilon (ϵ): 1 × 10−8 to prevent division by zero during parameter updates.
To ensure stable training dynamics, a mini-batch size of 64 was used. This value provided an optimal trade-off between gradient estimation accuracy and computational efficiency, particularly when leveraging GPU acceleration. The model was trained using the Categorical Cross-Entropy loss function, defined as:
$$ \mathcal{L} = -\sum_{i=1}^{C} y_i \log \hat{y}_i $$

where $C$ is the number of outage classes, $y_i$ is the true label, and $\hat{y}_i$ is the predicted softmax probability. This loss is well suited for multi-class classification tasks with mutually exclusive labels. Early stopping was employed as a regularization technique to prevent overfitting. The validation loss was monitored across epochs, and training was halted if no improvement was observed for 10 consecutive epochs. This approach conserves computational resources while preserving model generalization.
The training process consisted of 30 epochs, during which 10-fold cross-validation was performed on the training set. This validation strategy ensured robustness against data partitioning bias and provided reliable model selection by averaging performance across multiple splits. The remaining 20% test set, completely unseen during training and validation, was then used for final evaluation of model generalization performance.
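The training configuration above maps directly onto a few Keras calls, sketched below with the stated hyperparameters (Adam with η = 0.001, β1 = 0.9, β2 = 0.999, ϵ = 1e−8, batch size 64, 30 epochs, early stopping with patience 10). The build_stacked_lstm helper is the sketch from item (c), the data arrays are assumed to be prepared as described earlier, and the 10-fold cross-validation loop is reduced here to a single validation split for brevity.

```python
import tensorflow as tf

# Illustrative dimensions; the real values come from the preprocessed sequences
model = build_stacked_lstm(seq_len=288, n_features=20, n_classes=8)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9,
                                       beta_2=0.999, epsilon=1e-8),
    loss="categorical_crossentropy",                  # multi-class, one-hot labels
    metrics=["accuracy"],
)

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                              restore_best_weights=True)

history = model.fit(X_train, y_train_onehot,          # assumed prepared arrays
                    validation_split=0.1,             # stands in for 10-fold CV
                    epochs=30, batch_size=64,
                    callbacks=[early_stop])
```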

4. Results

To ensure a fair and unbiased comparison, the proposed approach and other state-of-the-art (SOTA) models, including SVM, KNN, and CNN, were trained using the same set of features selected through the mutual information-based feature selection method described earlier. This ensured that each model operated on a comparable and physically meaningful input space. Hyperparameter tuning was conducted independently for each model using grid search with 10-fold cross-validation on the training set. For instance, the number of trees and maximum depth were optimized for ensemble models like Random Forest and XGBoost, while regularization parameters were tuned for SVM and Logistic Regression. Neural models were tuned for the number of layers, hidden units, learning rate, and dropout. All models were evaluated using the same training and validation split to eliminate sampling bias and allow for consistent performance comparison. Evaluation metrics, including accuracy, precision, recall, and F1-score, were calculated using macro-averaging to reflect balanced performance across all outage classes.
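For reference, the macro-averaged metrics can be computed with scikit-learn as in the short sketch below; the model and test arrays are assumed to come from the training sketch above.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_pred = np.argmax(model.predict(X_test), axis=1)     # predicted class indices
y_true = np.argmax(y_test_onehot, axis=1)             # one-hot labels back to indices

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={accuracy_score(y_true, y_pred):.4f}  "
      f"macro precision={precision:.4f}  recall={recall:.4f}  F1={f1:.4f}")
```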
On the microgrid dataset, the RNN-LSTM achieved an accuracy of 86.52%, with a corresponding loss of 13.48%. The model’s training behavior is shown in Figure 5a, demonstrating effective convergence and learning of temporal outage patterns in data. When evaluated on the larger and more heterogeneous Kaggle dataset, the model achieved a slightly lower accuracy of 84.19%, as shown in Figure 5b. This modest reduction in performance is expected due to the increased variability, broader geographic scope, and noise inherent in the national dataset. Nevertheless, an 84.19% accuracy on such a diverse dataset underscores the model’s strong generalization capabilities and robustness across long-term, large-scale outage scenarios.
The performance comparison of the proposed approach and SOTA models is shown in Figure 6 and Table 3, listing the accuracy achieved by various models including SVM, Logistic Regression, k-Nearest Neighbors (KNN), Naïve Bayes, Decision Tree, Random Forest, the Stochastic Gradient Descent (SGD) classifier, Extreme Gradient Boosting (XGBoost), Gradient Boosting, CNN, and the proposed RNN-LSTM. Although ensemble methods such as XGBoost (85.5%) and Gradient Boosting (85.9%) achieved comparable accuracy, the RNN-LSTM’s advantage lies in its temporal modeling capacity, which enables better recall and F1-score, especially for minority outage classes. The statistical significance of the performance gains was confirmed using the Wilcoxon signed-rank test across the 10 cross-validation folds. Results showed that the RNN-LSTM significantly outperformed CNN (p = 0.00195) and XGBoost (p = 0.00586) in F1-score, with p-values < 0.05 indicating a statistically significant improvement.
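The paired significance test can be reproduced with SciPy as sketched below, assuming the per-fold macro F1 scores of two models were obtained on identical cross-validation splits.

```python
from scipy.stats import wilcoxon

def compare_folds(f1_model_a, f1_model_b, alpha=0.05):
    """Paired Wilcoxon signed-rank test on fold-level F1 scores from the same CV splits."""
    statistic, p_value = wilcoxon(f1_model_a, f1_model_b)
    return p_value, p_value < alpha

# Example usage (hypothetical arrays of 10 fold-level F1 scores per model):
# p, significant = compare_folds(f1_rnn_lstm_folds, f1_xgboost_folds)
```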
Among the evaluated models, the RNN-LSTM architecture consistently outperforms all others, achieving the highest accuracy of 86.52%, along with a superior precision of 86%, recall of 86.20%, and F1-score of 86.12%. Traditional machine learning models such as Logistic Regression, Naive Bayes, and KNN yielded relatively modest performance, with accuracy values ranging from 71.1% to 79.7%. The simpler classifiers, such as SVM, Logistic Regression, KNN, and Naïve Bayes, generally showed lower accuracy, roughly in the low-70% to low-80% range, reflecting the greater difficulty these models have in capturing the temporal, nonlinear patterns of outage data without extensive feature engineering [15]. On the other hand, ensemble-based models such as Random Forest with 84.43% accuracy, XGBoost with 85.50% accuracy, and the Gradient Boosting Classifier with 85.90% accuracy showed strong improvements, especially in handling class imbalance and capturing nonlinear decision boundaries; nevertheless, they remained slightly behind the deep learning approaches. Notably, CNN, typically strong in spatial pattern recognition, achieved a competitive accuracy of 86.20%. However, it was the RNN-LSTM model that delivered the most outstanding results, highlighting the value of temporal modeling in this domain. RNNs, and particularly their LSTM variant, are designed to learn from sequential data by maintaining internal memory across time steps. This allows them to effectively capture long-term dependencies in the outage data, such as historical patterns of failures, delayed effects of environmental factors, and recurring system behaviors.
The confusion matrices for the Random Forest, XGBoost, CNN, and RNN-LSTM models provide deeper insight into class-wise prediction performance across multiple outage event types, as shown in Figure 7 and Figure 8. As observed, Random Forest and XGBoost exhibit strong diagonal dominance, indicating robust classification accuracy, particularly for well-represented classes such as equipment failure, extreme cold, and thunderstorm. XGBoost shows slightly better uniformity across classes, reducing misclassification in lower-frequency events like cyber-attacks and vandalism. The CNN model further improves event separation by learning spatial hierarchies in the input data, which is reflected in fewer off-diagonal misclassifications. However, the RNN-LSTM heatmap demonstrates the most balanced and accurate class-wise predictions. Thanks to its ability to model temporal dependencies, the RNN-LSTM more effectively captures sequential relationships between events, leading to enhanced recognition of complex and overlapping outage types such as intentional attacks and natural gas shortages. This consistent diagonal dominance and reduced class confusion in the RNN-LSTM heatmap reaffirm its superior performance for time-dependent outage prediction tasks.

5. Discussion

This research reinforces the importance of deep learning for microgrid analysis, particularly when dealing with volatile renewable generation and complex loads. Recent studies have explored various approaches for outage forecasting and load optimization. For example, Elsaraiti and Merabet [22] applied an ARIMA model and a basic LSTM to wind energy forecasting, reporting an accuracy of around 82%. Ghasemkhani B. et al. [25] compared classical machine learning models like Random Forest and Gradient Boosting for short-term load prediction, achieving accuracies between 84.28% and 85.51%. XGBoost, in particular, has been highlighted as a high-performing ensemble method in many works, often yielding superior static classification accuracy. However, these models generally rely on static feature engineering and lack the intrinsic ability to model sequential dependencies unless explicitly structured with lag features. Moreover, they often require separate preprocessing steps for time-series flattening, which can result in loss of contextual information.
Compared to these approaches, our RNN-LSTM model offers a more holistic and scalable solution by inherently preserving the temporal structure of the data. Unlike traditional models, LSTM networks can dynamically adapt to varying input lengths and capture both short-term and long-term behavioral patterns, making them particularly suitable for time-series tasks in microgrid environments. Although the accuracy of our model is only slightly higher than that of XGBoost, Random Forest, or CNN, its contextual understanding and generalizability of temporal behaviors position the RNN-LSTM as the more appropriate solution for real-time, sequence-driven applications. Our model’s reliance on time-dependent features allows it to predict not only the occurrence of outages but also the likely precursors and time windows in which disruptions may arise. It outperformed several of the baseline methods and demonstrated that a deep learning approach can successfully learn from the outage data. More importantly, the RNN-LSTM offers a distinct advantage in that it learns the temporal dependencies inherent in time-series outage data. Unlike tree-based or regression models that treat each data point largely independently, the RNN-LSTM’s recurrent architecture enables it to consider the sequence of events and historical context leading up to an outage. This means that the RNN-LSTM can capture patterns such as daily load fluctuations, weather-related sequences, or cascading effects in the grid that precede outages, factors that are often not explicitly captured by models like XGBoost or Random Forest without manual creation of time-lag features. The benefit of this sequential modeling is evident in scenarios where the timing and order of events carry critical information about the likelihood of an outage. For instance, the RNN-LSTM can inherently recognize if a certain combination of precursor events over several hours or days is building up to a failure, whereas a static classifier might only see the aggregate feature snapshot at a given time. Thus, the RNN-LSTM’s additional computational cost must be weighed against its strength in modeling sequence-driven behavior, which can be crucial for reliable outage forecasting.
The dataset was split into 80% for training and 20% for testing to maximize the training data volume for the RNN-LSTM model, ensuring robust learning of temporal dependencies in sequential outage data, while reserving a statistically sufficient test set of 20% for unbiased evaluation. This split aligns with common practices in deep learning for time-series tasks, where larger training sets mitigate overfitting risks inherent in complex architectures [24]. A 70/30 split would enlarge the test set but could compromise the model’s ability to capture rare outage patterns due to diminished training samples, potentially lowering accuracy [26].
Nonetheless, this work is not without limitations. The performance of deep learning models like RNN-LSTM is highly dependent on the volume and quality of training data. In our case, the custom real-time dataset, while valuable, was limited in scale and may not fully capture seasonal and extreme-event scenarios. The Kaggle dataset, though extensive, lacked certain microgrid-specific attributes such as voltage stability and inverter performance metrics. Additionally, the model was trained under a supervised learning framework, which limits its ability to adapt autonomously to novel, unseen conditions without retraining. The computational cost of training deep learning models is also nontrivial and may pose challenges for deployment in low-resource microgrid control systems.
The framework will be extended to support a broader range of microgrid configurations and operational scenarios in future work. This includes incorporating additional data modalities such as weather forecasts, grid frequency fluctuations, and real-time electricity pricing to improve situational awareness and predictive accuracy. Although the current study validates the model using a high-resolution dataset from a 5 MW industrial microgrid and a national outage archive, we recognize the limitation of relying on a single real-time deployment. Microgrids vary in scale, topology, and control strategies, and this variability may influence outage dynamics. To improve generalizability, future experiments will include cross-microgrid validation across different system types, potentially through utility partnerships or publicly available datasets. Techniques such as transfer learning and domain adaptation may also be used to adapt the model to new environments with minimal retraining effort. Additionally, synthetic outage scenarios generated via simulation tools (e.g., GridLAB-D, OpenDSS) can supplement limited real-world data. Enhancements to the model architecture, such as integrating temporal attention mechanisms or hybridizing LSTM with ensemble methods like XGBoost, are expected to further improve both predictive performance and interpretability. To support autonomous, intelligent control, the framework may be extended with unsupervised anomaly detection and reinforcement learning for real-time decision-making. Finally, deploying the model on edge-computing platforms and integrating it with existing microgrid management systems or SCADA infrastructure will be critical for achieving practical, on-site resilience and reliability.

Practical Implications for Microgrid Operations

From a practical standpoint, the integration of this predictive model into microgrid operations has clear implications for cost reduction, reliability enhancement, and decision support. Accurate classification of outage types, such as cyberattacks, equipment failure, or weather-related disruptions, can enable operators to implement targeted preemptive actions, such as dynamic reconfiguration, priority load shedding, or isolation protocols. This facilitates faster recovery and minimizes downtime, ultimately reducing operational and economic losses. Additionally, the model’s forecasts can be linked to automated control systems or SCADA dashboards, where alerts and suggested mitigation actions can be generated in real time. To move beyond a black-box model, future work will incorporate explainability tools such as SHAP or integrated attention mechanisms, which can highlight which features or time steps contributed most to a specific outage prediction.
From a deployment standpoint, the model offers several advantages for modern microgrid systems:
  • Early Warning System: The model’s ability to classify outage types in advance, such as cyberattacks or weather-related failures, enables operators to initiate differentiated mitigation protocols, improving response time and resource allocation.
  • Dynamic Scheduling and Load Shedding: Accurate forecasts can be integrated into energy management systems (EMS) to dynamically adjust load prioritization, storage dispatch, or demand response actions during high-risk periods.
  • SCADA Integration: The model can be integrated into existing SCADA or monitoring dashboards, where it processes incoming data streams and outputs real-time predictions through lightweight RESTful APIs (see the sketch after this list).
  • Grid Resilience Planning: Historical insights from the model can inform resilience strategies, maintenance scheduling, and investment prioritization like reinforcing nodes with recurrent weather-related failures.
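As an illustration of such a lightweight prediction endpoint, the sketch below exposes the trained model behind a small REST API that a SCADA dashboard could poll. FastAPI, the saved-model path, the request schema, and the class names are illustrative assumptions, not components described in the paper.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import numpy as np
import tensorflow as tf

app = FastAPI()
model = tf.keras.models.load_model("rnn_lstm_outage.keras")          # hypothetical saved model
CLASS_NAMES = ["equipment failure", "severe weather", "vandalism"]   # illustrative subset

class Window(BaseModel):
    # one preprocessed input window (seq_len x n_features), already scaled/tokenized
    features: list[list[float]]

@app.post("/predict")
def predict(window: Window):
    x = np.asarray(window.features, dtype="float32")[None, ...]      # add batch dimension
    probs = model.predict(x, verbose=0)[0]
    top = int(np.argmax(probs))
    return {"predicted_class": CLASS_NAMES[top], "probability": float(probs[top])}
```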

6. Conclusions

This study presented a data-driven approach for optimizing the operation of microgrids and predicting power outages using a deep learning model based on RNN-LSTM. By leveraging two datasets, a custom real-time dataset from a 5 MW microgrid and a 15-year historical outage dataset from Kaggle, we demonstrated the capability of the RNN-LSTM model to accurately capture temporal patterns and dependencies in microgrid operation. The model achieved an accuracy of 86.52% on the real-time dataset and 84.19% on the historical dataset, confirming its effectiveness in both high-resolution and large-scale forecasting scenarios. Compared to traditional machine learning methods, which often require extensive manual feature engineering and lack temporal context, the proposed RNN-LSTM architecture offers a more adaptive and context-aware solution. Its ability to process sequential data makes it particularly well-suited for dynamic environments such as microgrids, where accurate, time-sensitive decisions are critical. The findings highlight the potential of deep learning in enhancing microgrid resilience, reliability, and operational efficiency. While the model exhibits strong predictive performance, future work is needed to address limitations related to data diversity, real-time adaptability, and deployment in low-resource settings. Expanding the framework to include additional contextual features and hybrid learning strategies could further improve accuracy and robustness. Ultimately, the integration of intelligent forecasting models such as RNN-LSTM into microgrid control systems represents a vital step toward the realization of smarter, more sustainable energy networks.

Author Contributions

Conceptualization, N.L. and M.Z.; methodology, M.Z.; software, M.Z.; validation, M.Z. and M.I.A.; formal analysis, M.S. and N.L.; investigation, N.L.; resources, A.W.; data curation, A.W. and M.S.; writing—original draft preparation, N.L. and M.Z.; writing—review and editing, A.W. and M.S.; visualization, M.I.A.; supervision, A.W. and M.I.A.; project administration, A.W.; funding acquisition, A.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by funding from the Department of EETE, UET Faisalabad Campus.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. “GAO Urges Action to Address ‘Far-Reaching Effects’ of Climate Change on US Grid”. S&P Global. Available online: https://www.spglobal.com/market-intelligence/en/news-insights/articles/2021/3/gao-urges-action-to-address-far-reaching-effects-of-climate-change-on-us-grid-63081807 (accessed on 8 September 2025).
  2. Feng, W.; Jin, M.; Liu, X.; Bao, Y.; Marnay, C.; Yao, C.; Yu, J. A review of microgrid development in the United States—A decade of progress on policies, demonstrations, controls, and software tools. Appl. Energy 2018, 228, 1656–1668. [Google Scholar] [CrossRef]
  3. Newman, S.; Shiozawa, K.; Follum, J.; Barrett, E.; Douville, T.; Hardy, T.; Solana, A. A comparison of PV resource modeling for sizing microgrid components. Renew. Energy 2020, 162, 831–843. [Google Scholar] [CrossRef]
  4. Arafat, M.Y.; Hossain, M.J.; Alam, M.M. Machine learning scopes on microgrid predictive maintenance: Potential frameworks, challenges, and prospects. Renew. Sustain. Energy Rev. 2024, 190, 114088. [Google Scholar] [CrossRef]
  5. Votava, J.; Saker, M.; Ghanem, S.; Rehabi, M.; Fandi, G.; Müller, Z.; Krepl, V.; Čábelková, I.; Smutka, L.; Tlustý, J.; et al. Sustainable energy management system for microgrids assisted by IOT and improved by AI. Results Eng. 2025, 27, 106158. [Google Scholar] [CrossRef]
  6. Tariq, H.; Kazmi, S.A.A.; Hassan, M.; Muhammed Ali, S.A.; Anwar, M. Analysis of fuel cell integration with hybrid microgrid systems for clean energy: A comparative review. Int. J. Hydrogen Energy 2024, 52, 1005–1034. [Google Scholar] [CrossRef]
  7. Abbasi, M.; Abbasi, E.; Li, L.; Aguilera, R.P.; Lu, D.; Wang, F. Review on the Microgrid Concept, Structures, Components, Communication Systems, and Control Methods. Energies 2023, 16, 484. [Google Scholar] [CrossRef]
  8. Shaier, A.A.; Elymany, M.M.; Enany, M.A.; Elsonbaty, N.A.; Tharwat, M.M.; Ahmed, M.M. An efficient and resilient energy management strategy for hybrid microgrids inspired by the honey badger’s behavior. Results Eng. 2024, 24, 103161. [Google Scholar] [CrossRef]
  9. Hafez, S.; Alkhedher, M.; Ramadan, M.; Gad, A.; Alhalabi, M.; Yaghi, M.; Jama, M.; Ghazal, M. Advancements in grid resilience: Recent innovations in AI-driven solutions. Results Eng. 2025, 26, 105042. [Google Scholar] [CrossRef]
  10. Zubair, M.; Waleed, A.; Rehman, A.; Ahmad, F.; Islam, M.; Javed, S. Machine Learning Insights into Retail Sales Prediction: A Comparative Analysis of Algorithms. In Proceedings of the 2024 Horizons of Information Technology and Engineering (HITE), Lahore, Pakistan, 15–16 October 2024; Available online: https://ieeexplore.ieee.org/abstract/document/10777132/ (accessed on 14 June 2025).
  11. Małolepsza, O.; Mikołajewski, D.; Prokopowicz, P. Using Fuzzy Logic to Analyse Weather Conditions. Electronics 2024, 14, 85. [Google Scholar] [CrossRef]
  12. Maged, N.A.; Hasanien, H.M.; Ebrahim, E.A.; Tostado-Véliz, M.; Turky, R.A.; Jurado, F. Optimal Real-time implementation of fuzzy logic control strategy for performance enhancement of autonomous microgrids. Int. J. Electr. Power Energy Syst. 2023, 151, 109140. [Google Scholar] [CrossRef]
  13. Henao, F.; Edgell, R.; Sharma, A.; Olney, J. AI in power systems: A systematic review of key matters of concern. Energy Inform. 2025, 8, 76. [Google Scholar] [CrossRef]
  14. Fatima, K.; Shareef, H.; Costa, F.B.; Bajwa, A.A.; Wong, L.A. Machine learning for power outage prediction during hurricanes: An extensive review. Eng. Appl. Artif. Intell. 2024, 133, 108056. [Google Scholar] [CrossRef]
  15. Balamurugan, M.; Narayanan, K.; Raghu, N.; Kumar, G.B.A.; Trupti, V.N. Role of artificial intelligence in smart grid—A mini review. Front. Artif. Intell. 2025, 8, 1551661. [Google Scholar] [CrossRef] [PubMed]
  16. Akhtar, S.; Shahzad, S.; Zaheer, A.; Ullah, H.S.; Kilic, H.; Gono, R.; Jasiński, M.; Leonowicz, Z. Short-Term Load Forecasting Models: A Review of Challenges, Progress, and the Road Ahead. Energies 2023, 16, 4060. [Google Scholar] [CrossRef]
  17. Lei, C.; Zhang, H.; Wang, Z.; Miao, Q. Deep Learning for Demand Forecasting: A Framework Incorporating Variational Mode Decomposition and Attention Mechanism. Processes 2025, 13, 594. [Google Scholar] [CrossRef]
  18. Dewangan, F.; Abdelaziz, A.Y.; Biswal, M. Load Forecasting Models in Smart Grid Using Smart Meter Information: A Review. Energies 2023, 16, 1404. [Google Scholar] [CrossRef]
  19. Ibrahim, B.; Rabelo, L.; Gutierrez-Franco, E.; Clavijo-Buritica, N. Machine Learning for Short-Term Load Forecasting in Smart Grids. Energies 2022, 15, 8079. [Google Scholar] [CrossRef]
  20. El Bourakadi, D.; Yahyaouy, A.; Boumhidi, J. Intelligent energy management for micro-grid based on deep learning LSTM prediction model and fuzzy decision-making. Sustain. Comput. Inform. Syst. 2022, 35, 100709. [Google Scholar] [CrossRef]
  21. Pinheiro, M.G.; Madeira, S.C.; Francisco, A.P. Short-term electricity load forecasting—A systematic approach from system level to secondary substations. Appl. Energy 2023, 332, 120493. [Google Scholar] [CrossRef]
  22. Elsaraiti, M.; Merabet, A. A Comparative Analysis of the ARIMA and LSTM Predictive Models and Their Effectiveness for Predicting Wind Speed. Energies 2021, 14, 6782. [Google Scholar] [CrossRef]
  23. EDA: 15 Years of Power Outages. Kaggle. Available online: https://www.kaggle.com/datasets/autunno/15-years-of-power-outages (accessed on 6 September 2025).
  24. Boser, A. Validating spatio-temporal environmental machine learning models: Simpson’s paradox and data splits. Environ. Res. Commun. 2024, 6, 031003. [Google Scholar] [CrossRef]
  25. Ghasemkhani, B.; Kut, R.A.; Yilmaz, R.; Birant, D.; Arıkök, Y.A.; Güzelyol, T.E.; Kut, T. Machine Learning Model Development to Predict Power Outage Duration (POD): A Case Study for Electric Utilities. Sensors 2024, 24, 4313. [Google Scholar] [CrossRef] [PubMed]
  26. Bashkari, M.; Sami, A.; Rastegar, M. Outage cause detection in power distribution systems based on data mining. IEEE Trans. Ind. Inform. 2021, 17, 640–649. Available online: https://ieeexplore.ieee.org/abstract/document/8959134/ (accessed on 19 July 2025). [CrossRef]
Figure 1. Schematic structure of a typical microgrid showing interconnected local loads, distributed generators, and energy storage units.
Figure 2. Illustrative configuration of a microgrid integrating renewable energy sources and distributed energy resources (DERs).
Figure 3. Workflow diagram of the data preprocessing and model development process for microgrid outage prediction.
Figure 4. Architecture of the proposed RNN-LSTM model, detailing key layers including embedding, LSTM, dense, and dropout layers.
Figure 5. (a): Model training performance of RNN-LSTM on real-time microgrid dataset—accuracy across epochs. (b): Model training performance of RNN-LSTM on national outage dataset—accuracy over training iterations.
Figure 6. Performance comparison of models on the outage prediction task using accuracy, precision, recall, and F1-score. RNN-LSTM shows the highest scores across all metrics.
Figure 7. Confusion matrices comparing the classification performance of (a) Random Forest and (b) XGBoost models on outage event prediction.
Figure 8. Confusion matrices comparing the classification performance of (a) CNN model (b) RNN-LSTM models on outage event prediction.
Table 1. Sample records from the real-time operational dataset collected from a 5 MW microgrid.
Time Event Began | Irradiation (W/m2) | Total Power (kW) | Insolation (kWh/m2) | Total Energy (kWh) | Expected Energy (kWh) | Lost Energy (kWh) | Tags
8/1/2022 5:50 | 13.2 | 14.9 | 1.1 | 1.2 | 4.4 | 0 | severe weather, thunderstorm
8/1/2022 5:55 | 13.2 | 30.8 | 1.1 | 2.6 | 4.4 | 0 | severe weather, thunderstorm
8/1/2022 6:00 | 13.2 | 31.1 | 1.1 | 2.6 | 4.4 | 0 | severe weather, thunderstorm
8/1/2022 6:05 | 13.2 | 31.4 | 1.1 | 2.6 | 4.4 | 0 | fuel supply emergency, coal
8/1/2022 6:10 | 37.5 | 70.2 | 3.1 | 5.8 | 12.5 | 0 | vandalism, physical
8/1/2022 6:15 | 37.5 | 99.5 | 3.1 | 8.3 | 12.5 | 0 | vandalism, physical
8/1/2022 6:20 | 37.5 | 100.7 | 3.1 | 8.4 | 12.5 | 0 | vandalism, physical
8/1/2022 6:25 | 37.5 | 102.1 | 3.1 | 8.5 | 12.5 | 0 | severe weather, thunderstorm
8/1/2022 6:30 | 37.5 | 103.7 | 3.1 | 8.6 | 12.5 | 0 | severe weather, thunderstorm
8/6/2022 14:10 | 793.3 | 353.8 | 66.1 | 29.5 | 264 | −29.5 | severe weather, wind, rain
28/8/2022 14:55 | 716.4 | 1359.1 | 59.7 | 113.3 | 238.4 | −113.3 | equipment failure
Table 2. Sample entries from the 15-year national power outage dataset used for model training and validation.
Date Event Began | Date of Restoration | Geographic Areas | Demand Loss (kW) | Customers Affected (count) | Time Event Began (HH:MM:SS) | Time of Restoration (HH:MM:SS) | Tags
6/30/2014 | 7/2/2014 | Illinois | −999 | 420,000 | 20:00:00 | 18:30:00 | severe weather, thunderstorm
6/30/2014 | 7/1/2014 | North Central Indiana | −999 | 127,000 | 23:20:00 | 17:00:00 | severe weather, thunderstorm
6/30/2014 | 7/1/2014 | Southeast Wisconsin | 424 | 120,000 | 17:55:00 | 2:53:00 | severe weather, thunderstorm
6/27/2014 | −999 | Wisconsin | −999 | −999 | 13:21:00 | −999 | fuel supply emergency, coal
6/24/2014 | 6/24/2014 | Nashville, Tennessee | −999 | −999 | 14:54:00 | 14:55:00 | vandalism, physical
6/19/2014 | 6/19/2014 | Nashville, Tennessee | −999 | −999 | 8:47:00 | 8:48:00 | vandalism, physical
6/18/2014 | 6/18/2014 | Washington | −999 | −999 | 9:52:00 | 19:00:00 | vandalism, physical
6/18/2014 | 6/20/2014 | Southeast Michigan | −999 | 138,802 | 17:00:00 | 15:00:00 | severe weather, thunderstorm
6/15/2014 | 6/15/2014 | Central Minnesota | −999 | 55,951 | 0:00:00 | 1:00:00 | severe weather, thunderstorm
6/12/2014 | 6/12/2014 | Somervell County, Texas | −999 | −999 | 9:10:00 | 9:11:00 | vandalism, physical
Table 3. Performance comparison of traditional and deep learning models for outage classification across key metrics: accuracy, precision, recall, and F1-score.
Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
Logistic Regression | 71.1 | 70 | 69.5 | 69.75
Naive Bayes (NB) | 78.2 | 76.8 | 77.5 | 77.15
KNN | 79.7 | 78.4 | 78 | 78.2
Stochastic Gradient Descent (SGD) | 80.2 | 79.1 | 79.5 | 79.3
SVM | 80.9 | 80 | 80.4 | 80.2
Decision Tree (DT) | 81.32 | 81 | 80.6 | 80.8
Random Forest (RF) | 84.43 | 83.5 | 84 | 83.75
XGBoost | 85.5 | 85.1 | 84.8 | 84.94
Gradient Boosting Classifier | 85.9 | 85.6 | 85.3 | 85.45
CNN | 86.20 | 85.90 | 85.80 | 85.85
RNN-LSTM | 86.52 | 86 | 86.20 | 86.12
