1. Introduction
For the Ukrainian electricity market to function effectively, market participants, primarily Distribution System Operators (DSOs) and the Transmission System Operator (TSO), are obligated to purchase electricity to compensate for technical and non-technical losses within the power grid [
1]. These purchases are necessary to maintain the reliability and balance of the electrical system, ensuring uninterrupted supply to end-users. The volume and cost of such energy losses are substantial, particularly in aging or overloaded infrastructure, which increases the financial burden on network operators [
2]. Consequently, the expenses incurred from procuring electricity to cover these losses are embedded in the tariff structures regulated by national authorities [
3]. As a result, these costs directly affect the final electricity prices paid by consumers, making energy loss management a key component of tariff policy and economic efficiency in the sector [
2,
4].
According to the latest publicly available reports of the energy regulator of Ukraine for the first three quarters of 2021, the average total losses in the distribution network were 10.02%. For some distribution network operators, this percentage reaches 20%. This is significantly higher than in most European countries, where typical loss levels range between 4% and 6% [
3,
4]. Such high losses indicate inefficiencies in network operation, aging infrastructure, and insufficient implementation of modern monitoring and control systems [
5]. According to market rules [
6], Distribution and Transmission System Operators purchase electricity to cover their own losses on the wholesale electricity market. The earlier electricity is purchased before the delivery date, the cheaper it is; if the actual energy consumed does not match the energy purchased, an imbalance arises, which is settled at the most expensive price [
7]. According to the analysis conducted in [
8], the calculation of the cost of forecast error for the New England power system market showed that the average annual losses over the period 2004–2014 for a company with a peak capacity of 1000 MW were estimated at USD 300 thousand per 1% increase in short-term forecast error. For European markets, the annual cost of forecast error in 2017 reached EUR 890 thousand on OTE (Czech Republic), EUR 340 thousand on EPEX SPOT (Western Europe), and EUR 400 thousand on Nord Pool (Northern Europe and the Baltic States). Calculations conducted for the Ukrainian market in 2019 showed that the cost of loss forecast error is ~EUR 8/MWh [
1]; it is now much higher, because the day-ahead market price has increased fourfold and the hryvnia has devalued against the euro by a factor of 1.7. During winter peak load periods, when energy consumption surges and grid stress intensifies, improving the accuracy of energy loss estimation and forecasting becomes especially critical for ensuring system reliability and economic efficiency [
9]. Therefore, improving the accuracy of loss forecasts reduces imbalance volumes and, with them, procurement costs, which in turn can lower the cost of operators' services to end-users.
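As a rough illustration of how much higher the error cost has become, the two factors just cited can be combined in a back-of-envelope estimate; this is an illustrative sketch, not a figure from the cited sources, and it assumes the error cost scales linearly with the UAH-denominated day-ahead price before conversion at the devalued exchange rate:

```python
# Back-of-envelope update of the 2019 estimate (~EUR 8/MWh), assuming the
# error cost scales with the UAH-denominated day-ahead price (x4) and is
# converted at the devalued UAH/EUR rate (x1.7). Illustrative only.
cost_2019 = 8.0           # EUR per MWh of loss forecast error (2019)
dam_price_growth = 4.0    # day-ahead market price increase, in UAH terms
uah_per_eur_growth = 1.7  # devaluation of UAH against EUR
updated = cost_2019 * dam_price_growth / uah_per_eur_growth
print(round(updated, 1))  # 18.8
```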
High levels of electrical energy losses in distribution networks not only lead to economic inefficiencies but also contribute significantly to environmental degradation [
10]. In systems with outdated infrastructure, such as in parts of Ukraine, these losses result in the unnecessary generation of additional electricity to compensate for inefficiencies, often sourced from fossil fuel-based power plants [
11]. This excess generation increases greenhouse gas emissions, accelerates resource depletion, and elevates the overall carbon footprint of the energy sector. Moreover, the inability to accurately forecast and manage losses may hinder the integration of renewable energy sources through competition for the spinning reserves of thermal power plants, which are necessary to compensate for the stochasticity of renewable generation, thereby prolonging dependency on non-renewable and environmentally harmful energy systems [
12]. Addressing these environmental challenges through the application of smart forecasting methods, such as deep neural networks, supports a more sustainable energy transition by minimizing waste, improving grid efficiency, and enabling smarter, cleaner energy management [
13,
14].
Rapid changes in the topology of electrical networks can significantly impact the accuracy of loss forecasts, as these changes introduce more variability and complexity into the system [
1,
15]. When network topology undergoes sudden shifts, it disrupts the predictable patterns of energy flow, making it difficult to forecast energy losses as a single time series [
16]. This increased unpredictability leads to larger forecast errors, as traditional models may not account for such dynamic alterations in network structure [
17]. As a result, the efficiency of network management is compromised, since operators may not have accurate information to make informed decisions regarding load balancing, energy procurement, or loss mitigation strategies [
18,
19]. To address this challenge, more advanced forecasting techniques that can adapt to these topological changes are needed to enhance the reliability and effectiveness of network management practices [
15,
20].
Realizing this goal involves solving the following tasks:
The detection and replacement of anomalous values in data;
The development of short-term forecasting models for aggregated values of electrical energy losses in the electrical network based on retrospective data, using deep learning artificial neural networks, which increases the accuracy of loss determination, without performing an explicit loss calculation, compared with existing methods;
The development of short-term forecasting models for node loads based on deep learning artificial neural networks, which increases the accuracy of the calculated loss values in the electrical network by incorporating forecasted node loads into the calculation. With this approach, changes in network topology can be taken into account at the loss calculation stage.
Considering the necessity of accurate load forecasting, the idea of using advanced forecasting methods to determine electrical energy losses in electrical networks has gained increasing attention [
21]. Forecasting energy losses is crucial for efficient grid management, as it allows operators to anticipate and mitigate potential inefficiencies. In this regard, artificial neural networks (ANNs) have emerged as effective forecasting methods, offering significant advantages over traditional models [
22]. These networks can identify complex patterns and relationships in large datasets, making them well-suited for predicting both load demands and associated energy losses [
23]. As machine learning models, ANNs can continuously improve their accuracy through training with historical data, adapting to changing grid conditions over time. Given their ability to handle large volumes of data and predict nonlinear behaviors, artificial neural networks have become recognized as highly effective prediction models for forecasting electrical energy losses in modern distribution networks [
21,
23].
It is crucial that reliable load forecasting results are available at the nodes of the calculation scheme for various forecasting horizons, as they play a significant role in enhancing the efficiency of electricity network management. In addition to reducing electricity purchase costs, accurate forecasting enables better decision-making regarding load distribution, minimizing operational inefficiencies, and optimizing the overall performance of the network [
24]. This becomes especially relevant in the context of modernizing the Ukrainian electrical grid, where the implementation of Smart Grid systems can further improve the management of network modes and ensure greater stability and reliability in the face of fluctuating demand and supply conditions [
24,
25].
Classical forecasting methods include autoregressive integrated moving average (ARIMA) time series methods, as well as Holt–Winters exponential smoothing methods. Methods based on artificial neural networks are considered modern and have become more popular owing to their accuracy, speed, and efficient learning. In the article [
26], a combined neural network based on a multilayer perceptron and an autoregression algorithm was constructed for single-factor forecasting, with the autoregression algorithm used for data preprocessing. The validation was carried out on PJM power system data from the USA, covering 96 nodes from 2014 to 2015 at hourly resolution. Another example of a combined neural network is described in [
27], consisting of multiple neural networks and a fuzzy logic module (PROTREN) for trend detection. The data for the study included information from the isolated power system of the island of Crete, as well as air temperature data.
For single-factor forecasting of node loads, the Support Vector Machine (SVM) method can be employed, as described in [
28], where its application during the management of the energy system in the Shandong province of China is discussed. The model takes into account the relationship between the total active load of the system and the node, as well as the relationship between the active and reactive power of the node, and the connection among the average power values of all nodes. To assess the effectiveness, the Support Vector Machine method was compared with a nonlinear autoregressive neural network [
29] for forecasting the active load of the node and an adaptive Kalman filter for predicting node power coefficients.
Studies on energy management in electrical distribution networks have increasingly explored the use of machine learning techniques, particularly deep neural networks (DNNs), for load forecasting and fault detection. Research demonstrates the effectiveness of deep learning models, such as Long Short-Term Memory (LSTM) and convolutional neural networks (CNN), in capturing complex temporal dependencies in power consumption data [
30,
31]. These works primarily focus on predictive accuracy without addressing real-time control or energy loss management [
1]. Other approaches, including optimization-based and rule-based systems, have been employed to reduce energy losses, but often lack adaptability and scalability when dealing with dynamic network conditions [
32]. Furthermore, most existing studies treat load forecasting and loss mitigation as separate tasks, thereby missing the opportunity for synergistic improvement through integrated models [
30,
32,
33].
2. Methodology—Forecasting of Nodal Load
The methodology for forecasting nodal load in distribution networks using deep neural networks (DNNs) combines advanced machine learning techniques with domain-specific data to improve accuracy in energy loss prediction [
14]. This study employs Long Short-Term Memory (LSTM) networks and the recently developed eResNet architecture, specifically designed for handling the complex dynamics of electrical networks [
14,
20,
34]. LSTM networks, known for their ability to capture temporal dependencies, are utilized to predict nodal load over short-term horizons, taking into account the historical load data from Ukrainian distribution networks. The eResNet architecture, a specialized model for energy systems, is used for more efficient processing of the grid’s varying topologies and energy flows [
15,
35]. These models are trained on historical data from the CIGRE medium-voltage network, covering the years 2017–2019, with a focus on 30 min intervals to account for the granularity of real-time grid operations [
15].
The forecasting process involves a two-stage approach to handle missing or anomalous data within the dataset [
14]. The first stage applies the DBSCAN clustering method to detect and replace anomalous values, ensuring a cleaner dataset for model training. After preprocessing, the trained models are tested and validated against real-world data provided by the Ukrainian Distribution System Operator, with an emphasis on minimizing forecasting errors [
14,
34,
36]. The methodology also explores the integration of these forecasting models within Smart Grid systems, focusing on how real-time, accurate load predictions can enable more efficient energy loss management and decision-making in energy procurement and energy saving [
15,
37]. This comprehensive approach not only enhances forecasting accuracy but also contributes to the development of smarter, more sustainable energy networks in Ukraine and beyond [
1,
14,
15].
To reduce the cost of energy losses, the utilization of modern methods for calculating and forecasting electrical energy losses using deep learning artificial neural networks is proposed. Various architectures of deep learning neural networks were employed to assess the effectiveness of forecasting electrical energy losses. Two approaches to forecasting losses were developed to test the accuracy of the prediction models:
Forecasting the load at each node simultaneously, followed by data consolidation and loss calculation based on the forecasted data (
Figure 1);
Directly calculating and then forecasting the total electrical energy losses (
Figure 2).
Previous versions of algorithms for predicting and calculating electricity losses without analysis blocks and replacing anomalous values are given in the article [
33].
A recurrent neural network of the type LSTM (Long Short-Term Memory), described in [
34], was used to forecast all load nodes. This is a combined neural network architecture based on a recurrent LSTM module and a multilayer perceptron with two hidden layers. The SELU (scaled exponential linear unit) function was used as the activation function. A shortcut connection was used to mitigate the vanishing-gradient problem (
Figure 3).
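A minimal PyTorch sketch of such a combined architecture is given below; the layer widths, input window, and forecast head are illustrative assumptions rather than the exact configuration of [34]:

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Sketch of a combined LSTM + two-hidden-layer perceptron with SELU
    activations and a shortcut connection. Layer widths, the input window,
    and the output head are illustrative assumptions."""

    def __init__(self, n_nodes: int = 16, hidden: int = 64, horizon: int = 48):
        super().__init__()
        self.lstm = nn.LSTM(n_nodes, hidden, batch_first=True)
        self.fc1 = nn.Linear(hidden, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.act = nn.SELU()
        self.head = nn.Linear(hidden, horizon * n_nodes)
        self.horizon, self.n_nodes = horizon, n_nodes

    def forward(self, x):                 # x: (batch, seq_len, n_nodes)
        _, (h, _) = self.lstm(x)          # final hidden state: (1, batch, hidden)
        h = h.squeeze(0)
        z = self.act(self.fc2(self.act(self.fc1(h))))
        z = z + h                         # shortcut mitigates vanishing gradients
        return self.head(z).view(-1, self.horizon, self.n_nodes)

model = LSTMForecaster()
pred = model(torch.randn(4, 336, 16))    # a week of 30-min history per sample
print(tuple(pred.shape))                 # (4, 48, 16): one day ahead, 16 nodes
```

Note that a single head forecasts all 16 nodes jointly, matching the unified multi-node setup discussed later.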
The neural network eResNet was used to forecast the time series of losses. This network was tested in the task of forecasting electrical load [
34]. It includes three autoencoder blocks. Each block contains two dense layers with SELU activation and an identity shortcut. The output of the block is the element-wise sum of the autoencoder output and the shortcut (
Figure 4).
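The block structure described above can be sketched as follows; the input window, bottleneck width, and forecast head are our assumptions, not the published eResNet hyper-parameters:

```python
import torch
import torch.nn as nn

class AEBlock(nn.Module):
    """One eResNet-style block: two dense layers (encoder/decoder) with SELU
    activation; the block output is the element-wise sum of the autoencoder
    output and the identity shortcut."""

    def __init__(self, width: int, bottleneck: int):
        super().__init__()
        self.enc = nn.Linear(width, bottleneck)
        self.dec = nn.Linear(bottleneck, width)
        self.act = nn.SELU()

    def forward(self, x):
        return x + self.act(self.dec(self.act(self.enc(x))))

class EResNetSketch(nn.Module):
    """Three stacked autoencoder blocks followed by a linear forecast head."""

    def __init__(self, window: int = 336, horizon: int = 48, bottleneck: int = 32):
        super().__init__()
        self.blocks = nn.Sequential(*(AEBlock(window, bottleneck) for _ in range(3)))
        self.head = nn.Linear(window, horizon)

    def forward(self, x):                  # x: (batch, window) of past losses
        return self.head(self.blocks(x))

net = EResNetSketch()
out = net(torch.randn(8, 336))             # a week of 30-min loss history
print(tuple(out.shape))                    # (8, 48): next-day loss profile
```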
The stability of the proposed forecasting models was indirectly validated through the inclusion of perturbed data in the training and testing process. Specifically, the forecasting models were trained on datasets that included both verified and anomalous values, allowing them to learn from a variety of real-world scenarios, including load spikes, missing entries, and operational inconsistencies [
14,
34]. The use of the DBSCAN-based preprocessing framework further enhanced model robustness by cleansing the data of extreme outliers while preserving realistic fluctuations in load behavior. As shown in
Table 1, the LSTM-based model maintained a low MAPE of 3.29% even after being tested on verified data derived from real network operations with inherent anomalies. Moreover, the architecture of the eResNet model, featuring residual connections and autoencoder blocks, enables stable learning and avoids gradient vanishing in deep networks, which is critical when processing abnormal or noisy inputs [
35,
38]. Importantly, the system’s ability to recalculate losses using forecasted nodal loads allows for the reflection of topological changes without requiring model retraining, which significantly contributes to operational stability under emergency conditions [
15].
In a recent study, an adapted version of eResNet was evaluated against several forecasting models, including a linear regression stack, a multilayer perceptron (MLP), ε-support vector regression (ε-SVR), and the seasonal autoregressive integrated moving average (SARIMA) model. The results showed that eResNet achieved significantly lower root mean square error (RMSE) and maximum absolute error than the other models. This confirms the strong potential of deep residual networks for short-term renewable energy forecasting tasks [
33].
Using a single network to forecast the load of all nodes of the power system simultaneously has several advantages. The complexity of the model in terms of the number of parameters increases more slowly with an increase in the number of nodes, which reduces the required number of computational resources. In addition, a unified model can learn cross-node dependencies implicitly, whereas isolated models treat each series as independent and therefore discard useful information embedded in the joint dynamics. However, the presence of significant correlation between nodes can lead to overfitting of the network, so special attention should be paid to the use of techniques to prevent overfitting. A single model implies one set of hyper-parameters, one retraining trigger, and one version to validate and deploy, lowering engineering overhead and the probability of configuration drift. In [
34], it is shown that the use of a network based on LSTM gives comparable accuracy indicators to the forecast of the load of each node by a separate model, but achieves a shorter training time, and this effect will be even more noticeable when predicting a larger number of nodes.
Both forecasting methods were adapted to retrospective data from “Vinnytsiaoblenergo”, which provided a detailed dataset for load and energy loss calculations. The training process for both algorithms was conducted using the ADAM optimization algorithm, known for its efficiency in handling large datasets and improving model performance. The use of the ADAM algorithm ensures that the models achieve higher accuracy by optimizing the learning rate and minimizing errors during the training process.
Training the LSTM-based neural network took an average of 15 min (over 30 runs) on an Intel Core i3-10100 3.6 GHz processor with 16 GB of RAM; inference takes a few milliseconds. This poses no problem, given that the resolution of the time series is 30 min and the forecast horizon is 24 h. Training one eResNet network took an average of 8 min (over 30 runs).
3. Results—Network Development and Retrospective Data Analysis
This research focused on developing a representative model of the electrical distribution network to enable accurate forecasting and loss estimation. A thorough analysis of retrospective load data was conducted to identify trends and anomalies that affect neural network efficiency. This data served as the foundation for training and validating forecasting models based on deep neural networks. The results demonstrated that integrating accurate modeling with historical data significantly improves the prediction of energy losses and overall network performance.
The calculation of losses and the forecasting of electrical energy were carried out using retrospective data provided by a Ukrainian Distribution System Operator (DSO). This dataset includes measurements from 16 load nodes, covering the period from 2017 to 2019, with a time resolution of 30 min. The total volume of processed data amounts to 48,048 individual values, ensuring a substantial foundation for model training and validation. Such detailed temporal and spatial data enable accurate modeling of load behavior and improve the reliability of loss prediction in electrical networks.
For modeling the electrical network used in loss calculation, the CIGRE [
39] benchmark medium-voltage level network was selected as the foundational structure for the test power system. This standardized model is widely recognized for its applicability in analyzing the integration of distributed energy resources and assessing network behavior under varying load conditions. It provides a well-documented topology and parameters that support accurate simulation and validation of forecasting algorithms. The schematic layout of the adopted test system is illustrated in
Figure 5, highlighting key nodes and network components used in the analysis.
The topology of the distribution network directly influences the pattern and level of electrical energy consumption at various nodes, as it determines the flow paths and voltage profiles across the system. Nodes located farther from the source or in branches with higher impedance tend to exhibit greater losses and fluctuating consumption behavior due to voltage drops and load concentration [
40]. Analyzing energy consumption in relation to network topology allows for the identification of critical sections where optimization or reinforcement can lead to significant reductions in technical losses and improved energy efficiency.
The Python (version 3.13.0) programming language and the Pandapower library were utilized for the construction and analysis of the test power system. Pandapower is a standalone set of tools for the development and analysis of electrical power systems, encompassing a wide array of network models, including numerous test systems and examples of CIGRE power systems [
41].
The test power system consists of two 40 MVA 110/20 kV transformers, 15 nodes, 18 loads (two of which were removed to align with the retrospective data), 14 lines (12 cable lines and two overhead lines), and a switch. All elements used in this network are elements of the Pandapower library.
This test network was developed to simulate the operating modes of the electrical network and determine electrical energy losses; this network is presented in the article [
42]. Since this network is built based on a real European network, it differs significantly in terms of load composition and network parameters from the Ukrainian network. Therefore, to ensure the proper functioning of the loss calculation algorithm, the calculation scheme was adapted to use the available retrospective data.
Taking into account the differences in load magnitude between the CIGRE power system and DSO data, cable and overhead lines were replaced with lines of larger cross-sections for the proper functioning of the test power system. Additionally, the node load data from the DSO was distributed according to the load magnitudes of the nodes in the CIGRE power system.
After conducting preliminary data analysis, it was identified that the data contains missing values and anomalous values that significantly deviate from normal values. To identify and replace missing values and anomalous data [
38,
43], a two-stage data validation algorithm was employed using the DBSCAN [
44] clustering method. The proposed validation algorithm for a single node is presented in Algorithm 1.
Algorithm 1. Two-stage data validation algorithm for a single node
Input: Continuous load time series R^(n×1)
Output: Validated and corrected load time series

1: Split the continuous load time series into time slices R^(n×1) → R^((n/24)×24)
2: for each time slice do
3:     Apply the DBSCAN clustering method to detect rough anomalous values
4:     if a value does not belong to the first cluster then
5:         Mark the value as anomalous
6:     end if
7:     Replace anomalous values using linear interpolation
8: end for
9: Merge the corrected time slices back into the continuous load series R^((n/24)×24) → R^(n×1)
10: Decompose the time series into trend, seasonal, and residual components
11: Apply DBSCAN to detect anomalous values in the residual component of the time series
12: for each detected anomaly do
13:     Replace the value using linear interpolation
14: end for
To estimate local data density, the algorithm relies on two hyper-parameters: ε, the radius of the neighborhood centered on a point p, and minPts, the minimum number of observations that must fall within that radius for p to be treated as a core point. In this study, the DBSCAN clustering routine was configured with minPts = 5 and ε = 0.5.
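A sketch of the first (rough) validation stage with these settings is given below; standardizing each daily slice before clustering and treating the largest cluster as "normal" are our implementation assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def validate_daily_slices(series, slice_len=24, eps=0.5, min_pts=5):
    """Stage 1 of the validation algorithm: cluster each daily slice with
    DBSCAN, flag points outside the dominant cluster, and patch them by
    linear interpolation. Per-slice standardisation and taking the largest
    cluster as 'normal' are our implementation assumptions."""
    out = np.asarray(series, dtype=float).copy()
    for d in range(len(out) // slice_len):
        sl = out[d * slice_len:(d + 1) * slice_len]          # view into `out`
        z = (sl - sl.mean()) / (sl.std() + 1e-9)
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(z.reshape(-1, 1))
        if (labels >= 0).any():
            main = np.bincount(labels[labels >= 0]).argmax()  # dominant cluster
            bad = labels != main
            if bad.any() and not bad.all():
                idx = np.arange(slice_len)
                sl[bad] = np.interp(idx[bad], idx[~bad], sl[~bad])
    return out

load = np.sin(np.linspace(0, 12 * np.pi, 240)) + 1.5   # smooth daily-like signal
load[37] = 25.0                                        # inject an obvious spike
clean = validate_daily_slices(load)
print(abs(clean[37] - 25.0) > 10)                      # True: spike replaced
```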
In addition, a two-sided moving-average decomposition with variable window widths was applied to the load time series. This decomposition—implemented as an adaptive model with stable seasonality and a seasonal lag of seven periods—enhanced the detection of anomalous values while reducing false positives in nodes that exhibit strong daily and weekly cycles.
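The decomposition-based second stage could be sketched as follows; the fixed-width centred moving average and the stable per-phase seasonal profile are simplifications of the adaptive, variable-window scheme described above:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def residual_anomalies(series, period=7, eps=0.5, min_pts=5):
    """Stage 2 sketch: remove the trend with a centred (two-sided) moving
    average, subtract a stable per-phase seasonal profile at lag `period`,
    and run DBSCAN on the standardised residual. The fixed window width is
    a simplification of the adaptive scheme in the text."""
    series = np.asarray(series, dtype=float)
    n, half = len(series), period // 2
    trend = np.convolve(series, np.ones(period) / period, mode="valid")
    detrended = series[half:n - half] - trend          # interior points only
    phase = np.arange(half, n - half) % period
    seasonal = np.array([detrended[phase == p].mean() for p in range(period)])
    residual = detrended - seasonal[phase]
    z = (residual - residual.mean()) / (residual.std() + 1e-9)
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(z.reshape(-1, 1))
    return np.flatnonzero(labels == -1) + half         # map back to original index

rng = np.random.default_rng(0)
x = np.tile([1, 2, 3, 2, 1, 0.5, 0.5], 20) + 0.05 * rng.standard_normal(140)
x[70] += 8.0                                  # inject an anomaly
print(residual_anomalies(x))                  # the spike at index 70 is flagged
```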
The retrospective nodal load data was divided into training and testing datasets for two categories: one containing anomalous values and another without anomalies (following validation and preprocessing). This categorization was implemented to evaluate the robustness and adaptability of the forecasting models under both normal and perturbed operating conditions. The training datasets included all nodal load values except for a subset of 336 values, which represented one full week (7 days) of data. These 336 values were extracted and designated as the testing set. This approach enabled a comprehensive assessment of the model’s forecasting accuracy and generalization capacity in realistic scenarios, including periods characterized by unexpected fluctuations or abnormal load behaviors.
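In code, the hold-out split described above amounts to the following; the array here is synthetic stand-in data with the stated dimensions (48,048 values, i.e., 3003 half-hour steps across 16 nodes):

```python
import numpy as np

# Synthetic stand-in for the nodal load matrix: 48,048 values in total,
# i.e. 3003 half-hour steps across 16 nodes. The final 336 steps
# (7 days x 48 intervals) form the hold-out test set.
rng = np.random.default_rng(42)
loads = rng.uniform(0.5, 5.0, size=(48_048 // 16, 16))

TEST_LEN = 336
train, test = loads[:-TEST_LEN], loads[-TEST_LEN:]
print(train.shape, test.shape)   # (2667, 16) (336, 16)
```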
The results of forecasting electrical energy losses for both daily and weekly horizons are presented in
Figure 6,
Figure 7 and
Figure 8. These visualizations illustrate the predictive performance of the neural network models under different temporal conditions. The comparative analysis of short-term (1 day) and medium-term (7 days) forecasts enables a deeper understanding of the model’s accuracy and adaptability across various load behavior patterns.
To assess the accuracy of the presented approaches for forecasting electrical energy losses, the Mean Absolute Percentage Error (MAPE) metric was employed. This evaluation criterion is widely used due to its interpretability and effectiveness in measuring prediction accuracy in percentage terms.
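The metric can be computed as follows; this is the standard MAPE definition, with the handling of zero actual values omitted for brevity:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error:
    MAPE = (100 / n) * sum(|actual_i - forecast_i| / |actual_i|).
    Zero actual values would need special handling, omitted here."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

print(round(mape([100, 200, 400], [103, 194, 412]), 2))   # 3.0
```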
Table 1 provides a summary of the forecast errors for both daily and weekly loss prediction horizons. The results reflect the performance of two distinct forecasting approaches applied to retrospective nodal load data. Lower MAPE values indicate better model accuracy, helping to identify the more efficient forecasting method under varying network conditions.
Having carried out forecasting and loss calculation with both approaches, it is evident that load forecasting with subsequent loss calculation yields a lower error in loss predictions than direct forecasting of total losses. This approach has proven to be more accurate in estimating energy losses in the distribution network. Additionally, the application of advanced data analysis techniques, such as the detection and replacement of anomalous values, significantly enhances the accuracy of the forecasting results. This highlights the importance of data quality in improving the reliability and precision of the predictions.
The research findings are an essential part of the broader effort to develop a Smart Grid model, which aims to optimize the operational management of distribution networks. By incorporating artificial intelligence methods into the management processes, the results of this study contribute to the refinement of Smart Grid technologies, enabling more efficient and reliable energy distribution. These advancements are particularly relevant for enhancing grid management in modern energy systems.
4. Discussion
The results of this research reveal key insights into the modeling and forecasting of electrical energy losses within a distribution network. By using deep learning techniques, this study successfully identified how load forecasting, followed by loss calculation, could reduce forecasting errors [
1,
14,
15,
20]. Specifically, the use of a detailed historical dataset provided by a Ukrainian DSO, combined with advanced data validation methods, significantly improved the accuracy of energy loss predictions. The comparison of forecasting approaches, particularly those involving data with and without anomalies, showed that preprocessing and data quality directly impact the forecasting outcomes [
20,
28,
39]. This highlights the critical role of effective data handling, such as the identification and replacement of anomalous values, in enhancing the reliability of predictive models.
Furthermore, the integration of the CIGRE benchmark network model allowed for a standardized and robust platform to simulate loss calculation and evaluate different forecasting methods [
15,
39,
41]. The results of the daily and weekly forecasting demonstrated that including node-specific data and applying different modeling techniques could account for fluctuations and anomalies inherent in real-world scenarios. This is important for distribution network operators as it provides more accurate projections, which are essential for improving operational efficiency. In addition, the study’s findings contribute to the growing body of research supporting the development of Smart Grid systems, where AI-driven management strategies can be used to optimize network performance and reduce energy losses [
7,
13,
18]. These advancements will play a significant role in enhancing the overall stability and efficiency of modern electrical grids.
Recent advances in deep reinforcement learning (DRL) for energy systems have demonstrated promising results in enhancing the real-time efficiency of complex infrastructure. For instance, recent research has developed a real-time energy management strategy for a smart traction power supply system using Deep Q-Learning, achieving improved adaptability under dynamic operational conditions [
45]. Similarly, another study proposed a multi-timescale reward-based DRL approach for managing energy in regenerative braking energy storage systems, highlighting the potential of reinforcement learning in multi-agent and time-sensitive scenarios [
46]. While these studies demonstrate the applicability of DRL in transportation-related energy systems, their focus remains domain-specific and does not address broader challenges within electric distribution networks, particularly the simultaneous tasks of load forecasting and real-time energy loss management. In contrast, our approach is situated within the distribution-level Smart Grid domain, integrating deep learning not only for predictive load modeling but also for the real-time identification of energy loss patterns. This positions our work as a complementary yet distinct advancement within the broader landscape of intelligent energy management solutions.
The design of the proposed neural network architecture incorporates several techniques to mitigate overfitting and ensure robust generalization across unseen data. First, both the LSTM-based and eResNet models utilize shortcut (residual) connections, which not only facilitate gradient flow but also act as implicit regularizers that prevent the network from overfitting to noise in the training data [
43]. Second, a unified model is used for simultaneous forecasting across multiple nodes, which inherently reduces the number of parameters compared to training separate models per node. This architectural decision minimizes the risk of overfitting by decreasing model complexity while preserving its ability to capture cross-node dependencies [
34]. Furthermore, DBSCAN-based preprocessing removes anomalous or extreme outliers, helping the network focus on learning meaningful patterns rather than noise [
14,
44]. Additionally, model performance was validated on a hold-out test set representing one week of unseen data, confirming its capacity to generalize beyond training inputs. During training, optimization was performed using the ADAM algorithm with appropriate learning rate schedules to avoid excessive parameter updates that could lead to memorization. Together, these design principles and validation strategies confirm that the proposed models are resistant to overfitting and capable of stable performance in real-world operational settings.
Classical forecasting techniques, including ARIMA and Holt–Winters exponential smoothing, have traditionally been employed for load and loss prediction due to their simplicity and interpretability [26]. However, these methods rely on linear assumptions and have limited capacity to model the complex, non-stationary behavior of modern distribution networks, especially under abrupt load fluctuations, topological reconfigurations, or incomplete datasets [27]. In contrast, the deep learning approaches proposed in this study, specifically LSTM-based networks and the eResNet architecture, demonstrate a superior ability to learn the temporal dependencies and nonlinear relationships inherent in power system data [14,34]. These models not only outperform classical approaches in predictive accuracy (achieving a MAPE as low as 3.29%) but also exhibit enhanced robustness to anomalous or missing values through preprocessing with DBSCAN [14,44]. Additionally, while classical methods often require retraining when network conditions change, the proposed neural networks can accommodate such changes at the loss calculation stage without retraining the forecasting model [15]. Furthermore, the unified neural network architecture supports simultaneous forecasting across multiple nodes, improving scalability and the learning of cross-node dependencies, capabilities not feasible with conventional univariate models [34,43]. Therefore, from both theoretical and practical standpoints, the proposed AI-based methods provide a more flexible, adaptive, and accurate framework for energy loss forecasting in Smart Grid environments.
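The combination of a residual shortcut with unified multi-node forecasting can be sketched as below. This is a minimal PyTorch illustration, not the paper's exact LSTM or eResNet configuration: the 24-hour input window, hidden size, and the persistence-plus-correction form of the shortcut are assumptions made for the example.

```python
# Illustrative sketch (not the paper's exact architecture): an LSTM
# forecaster with a residual shortcut that predicts next-hour load for
# all network nodes simultaneously. Layer sizes are assumptions.
import torch
import torch.nn as nn


class ResidualLSTMForecaster(nn.Module):
    def __init__(self, n_nodes: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_nodes, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_nodes)

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, 24, n_nodes) -- past 24 hourly loads per node
        out, _ = self.lstm(window)
        correction = self.head(out[:, -1, :])  # learned residual term
        # Shortcut: forecast = last observed loads + learned correction,
        # so the network only models the deviation from persistence.
        return window[:, -1, :] + correction


model = ResidualLSTMForecaster(n_nodes=10)
forecast = model(torch.zeros(4, 24, 10))  # 4 sample windows, 10 nodes
```

Because one model emits a forecast for every node in a single pass, the parameter count stays fixed as nodes are added to the output layer, rather than multiplying with one model per node.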
By adopting these methods, grid operators can better anticipate load behavior and optimize resource allocation, leading to more sustainable energy consumption practices. Future research should further refine these models, exploring additional network configurations and forecast horizons to increase the model’s generalizability. Additionally, integrating real-time data and considering additional environmental factors may further enhance the predictive accuracy and operational effectiveness of the system.
Although this approach significantly reduces the loss forecast error, its robustness under dynamic conditions still needs to be assessed. One of its advantages is that a change in network topology does not require retraining the load forecasting models, except in particularly extreme cases where an emergency-driven topology change limits the load at the nodes.
The proposed approach also has significant drawbacks that limit its practical application. Effective use requires long-term load data for each node, and such per-node data may be stored for a shorter period than data on total losses.
5. Conclusions
This research presents an effective method for short-term forecasting of aggregated electrical energy loss values in the power grid, utilizing deep learning artificial neural networks. This method significantly improves accuracy in predicting energy losses compared to traditional techniques that rely on direct loss calculations. By leveraging retrospective data, this approach eliminates the need for physical loss measurements, offering a more efficient and scalable solution for forecasting energy losses in distribution networks. The integration of advanced deep learning algorithms enhances the precision of these forecasts, ensuring more reliable predictions for grid management.
To forecast losses, this research also proposes methods for short-term forecasting of node loads based on deep learning artificial neural networks. By forecasting node loads, the model improves the accuracy of loss predictions by incorporating forecasted load values during the calculation process. This dual approach, forecasting both node loads and energy losses, proves highly effective in improving the reliability and accuracy of loss estimations. The incorporation of forecasted load data further refines the overall model, enabling more precise calculations of electrical energy losses in the power grid.
This study also emphasizes the importance of data verification and anomaly detection methods in improving the accuracy of forecasting models. By identifying and replacing anomalous values within the retrospective load data, this research demonstrates how additional data preprocessing can significantly reduce forecasting errors. The results show a marked improvement in loss predictions when using verified data, as opposed to forecasting based on raw data that includes anomalous values. This highlights the importance of maintaining high-quality, clean datasets for accurate predictive modeling in power grid management.
The forecast error analysis shows that, when using artificial neural networks, the Mean Absolute Percentage Error (MAPE) of loss forecasts based on verified node load data is as low as 3.29%. In contrast, forecasting the loss series directly yields significantly higher errors: a MAPE of 20.24% for verified loss data and 20.08% for unverified loss data. These findings underscore the effectiveness of data verification methods in enhancing the precision of forecasting models. Overall, the proposed forecasting techniques offer a promising solution for optimizing power grid operations and improving energy loss management.
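As a minimal illustration of the error metric used above, MAPE can be computed as follows; the hourly loss values in the example are hypothetical, chosen only to demonstrate the calculation.

```python
# MAPE as used in the error analysis: mean absolute percentage error
# between actual and forecast values. Example values are made up.
import numpy as np


def mape(actual, forecast) -> float:
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)


# Hypothetical hourly loss values (MWh) and their forecasts:
actual = np.array([12.0, 15.0, 10.0, 11.0])
forecast = np.array([12.6, 14.4, 10.5, 10.45])
error_pct = mape(actual, forecast)
```

Note that MAPE is undefined where actual values are zero, so for loss series it is typically applied to aggregated intervals with strictly positive losses.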