Article

GRU-Based Short-Term Forecasting for Microgrid Operation: Modeling and Simulation Using Simulink

M.Eng. Program “Industry 4.0: Automation, Robotics & 3D Manufacturing”, SRH Berlin University of Applied Sciences (Berlin Campus), Sonnenallee 221, 12059 Berlin, Germany
* Authors to whom correspondence should be addressed.
Algorithms 2026, 19(2), 116; https://doi.org/10.3390/a19020116
Submission received: 14 November 2025 / Revised: 21 January 2026 / Accepted: 27 January 2026 / Published: 2 February 2026

Abstract

This paper examines how hour-ahead forecasting uncertainty propagates to microgrid operation under intermittent renewable generation. Using hourly public data for Ontario and focusing on the FSA K0K in 2018, we evaluate four representative months (January, April, July, and December) to capture seasonal dynamics. We benchmark three univariate forecasting approaches for load demand, photovoltaic (PV) generation, and wind generation under a consistent 24-to-1 input setup: GRU, LSTM, and a persistence baseline. We report point-forecast metrics (RMSE, MAE, and R²) and provide 90% prediction intervals (PI90) using conformal calibration to quantify uncertainty. To assess downstream impact, forecasts are coupled with a dual-branch MATLAB/Simulink microgrid model. One branch uses True profiles and the other uses forecast-driven Pred inputs, while both branches share the same rule-based EMS and BESS constraints. System performance is evaluated using time-series comparisons and monthly key performance indicators (KPIs) covering grid import and export, grid peak power, battery throughput, and state-of-charge (SoC) statistics. We further report an illustrative cost sensitivity under a flat tariff and a throughput-based degradation proxy. Results show that forecasting performance is target dependent. GRU achieves the best overall point accuracy for load and PV, whereas wind is strongly driven by short-term persistence at the one-hour horizon; in this measurement-only setup without meteorological covariates, the persistence baseline can match or outperform the deep-learning models. In the microgrid simulations, Pred and True trajectories remain qualitatively consistent, and SoC-related indicators and peak power remain comparatively stable across months. In contrast, energy-flow indicators, especially grid export and battery throughput, show larger deviations and dominate the observed cost sensitivity.
Overall, the findings suggest that compact hour-ahead forecasts can be adequate to preserve operational reliability under a constraint-driven EMS, while forecast improvements mainly translate into economic efficiency gains rather than reliability-critical benefits.

1. Introduction

As PV and wind become more widespread, local power systems face greater short-term variability, which complicates operational planning [1,2]. Microgrids that combine distributed generation, battery energy storage systems (BESS), and grid interconnection are a practical way to improve reliability and flexibility, but their performance depends on the quality of short-horizon forecasts used for scheduling and control [3,4,5,6,7].
Short-term forecasting has evolved from classical statistical models to deep learning. LSTM and GRU are still widely used for load and renewable forecasting because they capture nonlinear temporal dependencies with moderate complexity [8,9,10,11]. More recently, transformer-based architectures have been explored for energy time-series forecasting and have shown competitive results in several load and renewable settings [12,13]. However, the training and tuning overhead of larger models is not always a good fit when the goal is a lightweight and reproducible hour-ahead benchmark that can be deployed in measurement-only settings. Point forecasts alone are not enough for risk-aware operation, so uncertainty reporting has become more common. For this reason, prediction intervals and probabilistic outputs are increasingly reported together with RMSE and MAE [14,15].
Despite these advances, fewer studies explicitly connect forecast errors to downstream microgrid behavior and aggregated operational outcomes. Many papers focus on improving forecasting accuracy, while the operational question is whether forecast improvements actually translate into better grid exchange, battery cycling, or cost outcomes under a given energy management strategy [16,17,18]. Rather than proposing a new forecasting architecture, we frame this work as an end-to-end coupling study that quantifies how hour-ahead forecast errors propagate into time-series behavior, monthly KPIs, and cost-sensitive outcomes under a fixed EMS.
We address this need by coupling hour-ahead forecasts with a dual-branch MATLAB/Simulink (R2024a) microgrid model that isolates the effect of forecast uncertainty. Using hourly public data for Ontario and focusing on the FSA K0K, we model load demand, wind generation, and PV generation in 2018 and evaluate four representative months (January, April, July, and December) to capture seasonal patterns. We intentionally adopt a univariate, measurement-only setup to keep the forecasting stage reproducible and to isolate how forecast errors affect downstream operation, while noting that meteorological covariates can further improve PV and wind forecasting. We benchmark GRU, LSTM, and a persistence baseline under a consistent 24-to-1 formulation. We report point-forecast metrics and provide 90% prediction intervals using conformal calibration to characterize uncertainty with empirical coverage. The downstream simulations keep the EMS and all model parameters fixed and compare operation under True inputs and forecast-driven Pred inputs.
The main contributions are:
  • A reproducible hour-ahead forecasting benchmark for load, PV, and wind using GRU, LSTM, and a persistence baseline under a consistent 24-to-1 univariate setup.
  • Conformal PI90 reporting that complements point metrics and supports uncertainty-aware extensions.
  • A dual-branch Simulink coupling that quantifies how forecast uncertainty propagates to time-series behavior, monthly KPIs, and an illustrative flat-tariff cost sensitivity with a throughput-based degradation proxy.

2. Methodology

The overall workflow follows a five-stage pipeline (Figure 1) consisting of (a) data acquisition, (b) data preprocessing, (c) forecasting setup, (d) microgrid simulation coupling, and (e) evaluation. Forecasting models and baselines were implemented in Python 3.12.4 (NumPy, pandas, scikit-learn, TensorFlow/Keras), and the resulting hourly forecasts were exported and integrated into a dual-branch Simulink simulation (True vs. Pred) to quantify how forecast errors propagate to EMS operation and system-level KPIs.

2.1. Data Acquisition

We collected hourly electricity time series from the Independent Electricity System Operator (IESO) of Ontario, Canada, at the Forward Sortation Area (FSA) level [19]. Similar microgrid-relevant time series have been analyzed using machine-learning approaches in prior work [7]. In this study, we focus on the FSA K0K and use 2018 data to ensure that demand and renewable generation are temporally aligned. The dataset includes hourly load demand, wind generation, and solar generation.
To capture seasonal variation without expanding the scope excessively, we selected four representative months. These are January (winter), April (spring transition), July (summer), and December (early winter). This selection follows common practice in short-term load and net-load forecasting, where model performance is reported under contrasting seasonal conditions [20]. Hourly load was taken from the IESO “Hourly Electricity Consumption Data” report [21]. Wind and solar generation were obtained from the IESO “Generator Output by Fuel Type Hourly Report” (XML) and compiled into a unified PV/WT table (PV_WT_value). This preprocessing step ensures consistent timestamp alignment and supports the downstream modeling workflow [22].

Descriptive Statistics and Seasonal Characteristics

Table 1 summarizes descriptive statistics (mean, standard deviation, minimum, and maximum) for hourly load in the K0K area together with the corresponding wind (WT) and photovoltaic (PV) generation profiles for four representative months (January, April, July, and December) [8]. Seasonal differences are evident. Average load is higher in winter (January and December) than in April and July. PV is strongest in summer, with July exhibiting the largest mean and peak values, while winter months remain low due to shorter daylight hours and long nighttime periods with zero generation. WT also varies substantially by season, with higher average output in winter than in July.
WT and PV are rescaled to the K0K microgrid scope using fixed scaling factors, and PV peak caps are applied to ensure physical plausibility. The scaling procedure is described in Section 2.2.3, and the peak-cap rules are given in Section 2.2.4.
Figure 2 visualizes seasonal characteristics through average diurnal (hour-of-day) profiles [23]. Load exhibits a clear daily structure, whereas PV shows the expected daytime-only generation with a pronounced midday peak in July and shorter daylight windows in winter months. Wind demonstrates weaker diurnal regularity and varies primarily by season.

2.2. Data Preprocessing

Before model training, we cleaned and harmonized the raw hourly time series to ensure consistent timestamps and units across load, wind (WT), and photovoltaic (PV). We then scaled the renewable generation profiles to a representative K0K microgrid scope using fixed scaling factors and physically motivated peak constraints.

2.2.1. Temporal Alignment and Quality Control

Hourly timestamps were created by combining the reported date and hour fields into a unified DATETIME index. In the IESO load files, the hour field (1–24) was first converted to 0–23 and load was then aggregated at the hourly level. Wind and solar outputs were compiled into a single hourly table and synchronized to the load series by retaining only the hours present in both datasets. This ensures consistent scaling and prevents time-shifted inputs in downstream modeling. Any missing renewable entries were set to zero to maintain a continuous hourly sequence.
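As a sketch of this alignment step, the following helper follows the description above; the column names (`DATE`, `HOUR`, `LOAD_MW`, `WT_MW`, `PV_MW`) are illustrative placeholders, not the actual field names in the IESO files:

```python
import pandas as pd

def align_hourly(load_df, gen_df):
    """Align IESO-style load and generation tables on a shared hourly index.

    Assumes load_df has DATE, HOUR (1-24), and LOAD_MW columns, and gen_df is
    already indexed by hourly timestamps with WT_MW / PV_MW columns in MW.
    """
    # Convert the 1-24 hour convention to 0-23 and build a DATETIME index.
    ts = pd.to_datetime(load_df["DATE"]) + pd.to_timedelta(load_df["HOUR"] - 1, unit="h")
    load = load_df.assign(DATETIME=ts).groupby("DATETIME")["LOAD_MW"].sum()
    # Keep only the hours present in both datasets (inner join on timestamps).
    merged = pd.concat([load, gen_df], axis=1, join="inner")
    # Missing renewable entries become zero to keep a continuous hourly series.
    merged[["WT_MW", "PV_MW"]] = merged[["WT_MW", "PV_MW"]].fillna(0.0)
    return merged
```

This mirrors the quality-control rules in the text: hourly aggregation, retention of common hours only, and zero-filling of missing renewable entries.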

2.2.2. Unit Harmonization (MW)

All series were expressed in MW. Load was converted to MW based on the reported unit. Wind and PV outputs were treated as MW by default; when reported in kW, they were converted to MW (1 MW = 1000 kW). This avoids unit inconsistencies when exporting forecasts to the microgrid simulation.

2.2.3. Scaling Renewables to the K0K Microgrid Scope

Because the renewable generation series are provided at the Ontario-wide level and cannot be directly filtered to the FSA K0K, wind and PV profiles were scaled to a K0K-representative level using fixed scaling factors that emulate a fixed installed-capacity assumption [24]. Specifically, fixed scaling factors α WT and α PV were computed across the selected months as
$$\alpha_{\mathrm{WT}} = \rho_{\mathrm{WT}}\,\frac{\bar{L}}{\bar{W}}, \qquad \alpha_{\mathrm{PV}} = \rho_{\mathrm{PV}}\,\frac{\bar{L}}{\bar{S}},$$
where $\bar{L}$ is the mean hourly load in K0K, $\bar{W}$ and $\bar{S}$ are the mean Ontario-wide wind and solar outputs over the aligned hours, and $\rho_{\mathrm{WT}}$ and $\rho_{\mathrm{PV}}$ denote target penetration ratios [25] (set to 0.50 and 0.30, respectively). The scaled renewable series were then obtained as $W^{*}(t) = \alpha_{\mathrm{WT}}\, W(t)$ and $S^{*}(t) = \alpha_{\mathrm{PV}}\, S(t)$, preserving temporal patterns while adjusting magnitudes to a realistic microgrid scale.
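The scaling step can be sketched directly from these definitions (array names are ours, not from the paper's code):

```python
import numpy as np

def scale_renewables(load, wind, solar, rho_wt=0.50, rho_pv=0.30):
    """Scale Ontario-wide wind/solar series to the K0K microgrid scope.

    load, wind, solar: aligned hourly arrays in MW. The defaults rho_wt=0.50
    and rho_pv=0.30 are the target penetration ratios from Section 2.2.3.
    """
    alpha_wt = rho_wt * load.mean() / wind.mean()   # alpha_WT = rho_WT * Lbar / Wbar
    alpha_pv = rho_pv * load.mean() / solar.mean()  # alpha_PV = rho_PV * Lbar / Sbar
    # Fixed factors preserve the temporal pattern and rescale the magnitude.
    return alpha_wt * wind, alpha_pv * solar
```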

2.2.4. Physical Plausibility Constraints (Peak Caps)

To keep the scaled renewable profiles consistent with a K0K-scale microgrid, we cap monthly peaks relative to the monthly peak load. PV is capped at $0.8 \times \max(L)$, while wind is capped at $2.0 \times \max(L)$ to reflect its more volatile peaks. This step only affects occasional extremes, and the diurnal and seasonal timing of the profiles is preserved.
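Applied per month, the cap amounts to an element-wise clip against the monthly peak load (illustrative sketch):

```python
import numpy as np

def apply_peak_caps(load, pv, wt, pv_factor=0.8, wt_factor=2.0):
    """Cap scaled PV/WT peaks relative to the monthly peak load (Section 2.2.4).

    load, pv, wt are one month of hourly values in MW; only occasional
    extremes are clipped, so diurnal/seasonal timing is preserved.
    """
    pv_capped = np.minimum(pv, pv_factor * load.max())  # PV <= 0.8 * max(L)
    wt_capped = np.minimum(wt, wt_factor * load.max())  # WT <= 2.0 * max(L)
    return pv_capped, wt_capped
```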

2.3. Forecasting Setup

We consider hour-ahead forecasting for three hourly time series, including load demand, wind generation (WT), and photovoltaic generation (PV). Each target is modeled separately using its own historical observations to keep the forecasting pipeline consistent across variables and to simplify downstream coupling with the microgrid simulation.

2.3.1. Forecasting Task and Input Construction

Let $x(t)$ denote an hourly time series for a given target (Load, WT, or PV), where $t$ is the hourly time index. The task is one-step-ahead forecasting, predicting the next-hour value $x(t+1)$ from recent observations. Inputs are constructed using a sliding window of length $w$. With $w = 24$, each input sample consists of the previous 24 hourly values $\{x(t-23), \ldots, x(t)\}$ and the label is $x(t+1)$.
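The window construction described above can be sketched as:

```python
import numpy as np

def make_windows(x, w=24):
    """Build (inputs, labels) for one-step-ahead forecasting (Section 2.3.1).

    Each input row holds the previous w hourly values {x(t-w+1), ..., x(t)}
    and the corresponding label is x(t+1), matching the 24-to-1 setup.
    """
    X = np.array([x[i : i + w] for i in range(len(x) - w)])
    y = np.array([x[i + w] for i in range(len(x) - w)])
    return X, y
```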

2.3.2. Train–Calibration–Test Split

We split each monthly dataset chronologically into training, calibration, and test subsets using a 60/20/20 ratio. The training set is used to fit the forecasting models. The calibration set is reserved for prediction-interval calibration [26] (Section 2.3.4), and all point and interval metrics are reported on the held-out test set.
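A minimal sketch of the chronological split (index arithmetic only, no shuffling):

```python
def chrono_split(n, train=0.6, cal=0.2):
    """Chronological 60/20/20 train/calibration/test split (Section 2.3.2).

    Returns three slice objects over a series of length n, preserving time
    order so that no future information leaks into training or calibration.
    """
    i_tr = int(n * train)
    i_cal = int(n * (train + cal))
    return slice(0, i_tr), slice(i_tr, i_cal), slice(i_cal, n)
```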

2.3.3. Models and Baselines

We evaluate two recurrent neural network models, GRU and LSTM, for hour-ahead forecasting [27]. Both models use the same input window ($w = 24$) and output a one-step-ahead point forecast to ensure a fair comparison. To keep the evaluation controlled, we use a fixed model configuration across targets and months rather than extensive hyperparameter tuning; near-optimal tuning and additional model families (e.g., hybrid CNN–RNN or tree-based methods) are left for future work. As a simple but strong baseline, we include persistence, defined as $\hat{x}(t+1) = x(t)$.
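The trainable-parameter counts reported later (12,929 for GRU vs. 16,961 for LSTM) are consistent with a single 64-unit recurrent layer followed by a one-unit dense output on a univariate input; the layer sizing is our inference, not stated explicitly in the text. A quick arithmetic check using the standard Keras parameter formulas:

```python
def gru_params(units, n_in):
    # Keras GRU (reset_after=True): 3 gates, each with kernel, recurrent
    # kernel, and two bias vectors.
    return 3 * (units * (units + n_in) + 2 * units)

def lstm_params(units, n_in):
    # LSTM: 4 gates, each with kernel, recurrent kernel, and one bias vector.
    return 4 * (units * (units + n_in) + units)

dense = 64 + 1  # Dense(1) head on 64 hidden units: weights + bias
print(gru_params(64, 1) + dense)   # 12929, matching the reported GRU count
print(lstm_params(64, 1) + dense)  # 16961, matching the reported LSTM count
```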

2.3.4. Prediction Intervals via Conformal Calibration

Along with point forecasts, we report 90% prediction intervals (PI90) using conformal calibration [14]. After training, we compute absolute residuals on the calibration set,
$$e(t) = \big|\, x(t) - \hat{x}(t) \,\big|.$$
We then take the 90th percentile of $\{e(t)\}$ as a constant half-width $q$ and form PI90 on the test set as $[\hat{x}(t) - q,\ \hat{x}(t) + q]$. For wind generation, we update the half-width over time using a rolling window of the most recent 168 residuals to better track non-stationary behavior [28]. We also apply a mild 1.10× inflation to improve empirical coverage in winter months.
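The constant-width case can be sketched as follows; for brevity this sketch omits the rolling 168-hour update used for wind, and the `inflate` argument reproduces the 1.10× winter inflation:

```python
import numpy as np

def conformal_pi90(y_cal, yhat_cal, yhat_test, inflate=1.0):
    """Split-conformal 90% prediction intervals (Section 2.3.4).

    The half-width q is the 90th percentile of absolute calibration
    residuals, optionally inflated (e.g., inflate=1.10 for winter wind).
    Returns the lower and upper PI90 bounds on the test forecasts.
    """
    res = np.abs(np.asarray(y_cal) - np.asarray(yhat_cal))
    q = inflate * np.percentile(res, 90)
    yhat_test = np.asarray(yhat_test)
    return yhat_test - q, yhat_test + q
```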

2.3.5. Evaluation Metrics

Point-forecast accuracy is evaluated on the test set using RMSE and MAE (MW), together with R² to quantify goodness of fit. For prediction intervals, we report 90% prediction intervals (PI90) and evaluate them using the prediction interval coverage probability (PICP@90) and the normalized average width (PINAW). In addition, we report the number of trainable parameters for GRU and LSTM as a compact measure of model complexity.
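The two interval metrics follow standard definitions and can be computed directly from the bounds:

```python
import numpy as np

def picp(y, lo, hi):
    """Prediction interval coverage probability: fraction of observations
    that fall inside [lo, hi]; values near 0.90 match the nominal PI90 level."""
    return np.mean((y >= lo) & (y <= hi))

def pinaw(y, lo, hi):
    """Prediction interval normalized average width: mean interval width
    divided by the observed target range, so smaller means tighter intervals."""
    return np.mean(hi - lo) / (np.max(y) - np.min(y))
```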

2.3.6. Modeling Choice and Scope

We adopt a univariate setup to isolate how hour-ahead forecast errors propagate to downstream microgrid operation, while keeping the forecasting stage reproducible and deployable with measurement-only inputs [29]. Meteorological covariates (e.g., irradiance, temperature, wind speed) are commonly used to improve forecasting performance in multivariate settings [30], but they are not available at the same spatial granularity as the FSA-level series in the public dataset used here. Our emphasis is therefore on operational sensitivity to forecast uncertainty in microgrid energy management and costs [31]. Extending the forecasting stage to multivariate inputs and direct net-load forecasting is left for future work.

2.4. Microgrid Simulation Coupling

To quantify how forecast errors propagate to downstream operation, we couple the forecasting stage to a simplified microgrid energy-flow model implemented in MATLAB/Simulink. The simulation is evaluated under two input settings: (i) measured True time series and (ii) forecast (Pred) time series, while keeping the microgrid topology and controller parameters identical. The overall simulation workflow and the resulting True vs. Pred comparison are illustrated in Figure 3.

2.4.1. Microgrid Architecture and Components

The microgrid consists of an aggregated demand (Load), renewable generation from photovoltaic (PV) and wind turbine (WT), a battery energy storage system (BESS), and an external utility grid. All signals are represented in MW with an hourly sampling step [32]. The model architecture and signal flow are shown in Figure 4.
At each time step t, the net load is computed as
$$P_{\mathrm{net}}(t) = P_{\mathrm{load}}(t) - P_{\mathrm{pv}}(t) - P_{\mathrm{wt}}(t).$$
The BESS dispatch $P_{\mathrm{bess}}(t)$ is then applied, and the grid exchange power is obtained by power balance:
$$P_{\mathrm{grid}}(t) = P_{\mathrm{net}}(t) + P_{\mathrm{bess}}(t),$$
where $P_{\mathrm{grid}}(t) > 0$ indicates grid import and $P_{\mathrm{grid}}(t) < 0$ indicates export. We adopt the sign convention $P_{\mathrm{bess}}(t) > 0$ for charging and $P_{\mathrm{bess}}(t) < 0$ for discharging.

2.4.2. Two-Branch Design for Pred vs. True

To isolate how forecast errors affect downstream operation [33], we run the Simulink microgrid in two branches that share the same topology and controller settings. The only difference is the input data. The True branch is driven by measured time series ( P load , P pv , P wt ) . The Pred branch uses the corresponding forecast series. Since all parameters and initialization steps are the same, any differences in trajectories or KPIs can be attributed to forecast uncertainty.
During data loading, the forecast outputs are read from .mat files with aligned vectors such as y_true and y_pred. We then perform a simple unit sanity check. If a unit tag is available, it is used directly. Otherwise, we apply a practical scaling rule to convert large-magnitude series from kW to MW when needed. If the imported series do not have the same length, we truncate them to the minimum common horizon for that month.

2.4.3. Energy Management Strategy and BESS Control

We use a lightweight rule-based energy management strategy for the BESS that balances the net load while respecting operational constraints [4,34]. We define the net load as $P_{\mathrm{net}}(t) = P_{\mathrm{load}}(t) - P_{\mathrm{WT}}(t) - P_{\mathrm{PV}}(t)$, so positive values indicate a deficit (grid import) and negative values indicate a surplus (grid export). At each time step, the controller applies a symmetric deadband around zero to avoid frequent switching. If $P_{\mathrm{net}}(t) > \mathrm{deadband}$, the BESS discharges to reduce grid import, subject to discharge power limits and SoC constraints. If $P_{\mathrm{net}}(t) < -\mathrm{deadband}$, the BESS charges to absorb renewable surplus and reduce grid export, subject to charge limits and available SoC headroom. If $|P_{\mathrm{net}}(t)| \le \mathrm{deadband}$, the BESS remains idle. Discharging is additionally disabled when the SoC falls below a reserve threshold $\mathrm{SoC}_{\mathrm{res}}$.
With an hourly sampling interval $\Delta t = T_s$, the SoC is updated using an energy balance with charge and discharge efficiencies and then saturated within the allowable bounds:
$$\mathrm{SoC}_{t+1} = \mathrm{sat}_{[\mathrm{SoC}_{\min},\, \mathrm{SoC}_{\max}]}\!\left( \mathrm{SoC}_{t} + \frac{\eta_c\, P_{\mathrm{bess}}^{+}(t) + \tfrac{1}{\eta_d}\, P_{\mathrm{bess}}^{-}(t)}{E_{\mathrm{cap}}}\, \Delta t \right),$$
where $P_{\mathrm{bess}}^{+}(t) = \max(P_{\mathrm{bess}}(t), 0)$ and $P_{\mathrm{bess}}^{-}(t) = \min(P_{\mathrm{bess}}(t), 0)$. The operator $\mathrm{sat}_{[a,b]}(\cdot)$ clips the SoC to the interval $[a, b]$.
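One time step of this controller and SoC update can be sketched as follows; the default parameter values here are illustrative placeholders, not the Table 2 settings:

```python
def ems_step(p_net, soc, p_ch_max, p_dis_max, e_cap, dt=1.0,
             deadband=0.05, soc_min=0.1, soc_max=0.9, soc_res=0.2,
             eta_c=0.95, eta_d=0.95):
    """One step of the rule-based EMS (Section 2.4.3).

    Sign conventions follow the paper: p_net > 0 is a deficit and
    p_bess > 0 is charging. Returns (p_bess, p_grid, next_soc).
    """
    if p_net > deadband and soc > soc_res:
        # Discharge to reduce grid import, limited by power and by the
        # usable energy above soc_min (discharge efficiency applies).
        avail = (soc - soc_min) * e_cap * eta_d / dt
        p_bess = -min(p_net, p_dis_max, avail)
    elif p_net < -deadband:
        # Charge to absorb renewable surplus, limited by power and headroom.
        room = (soc_max - soc) * e_cap / (eta_c * dt)
        p_bess = min(-p_net, p_ch_max, room)
    else:
        p_bess = 0.0  # within the deadband, or SoC below reserve: stay idle
    p_grid = p_net + p_bess
    # SoC energy balance with efficiencies, saturated to [soc_min, soc_max].
    delta = (eta_c * max(p_bess, 0.0) + min(p_bess, 0.0) / eta_d) / e_cap * dt
    next_soc = min(max(soc + delta, soc_min), soc_max)
    return p_bess, p_grid, next_soc
```

Because limits and saturation are applied identically in both branches, any Pred-vs.-True differences in dispatch stem solely from the input series, as intended by the two-branch design.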

2.4.4. BESS Parameters and EMS Rules

Table 2 lists the BESS parameters used in both branches. Table 3 summarizes the rule-based EMS logic used in the Simulink microgrid model.

2.4.5. Simulation Settings and Exported Outputs

All simulations run at an hourly resolution with a fixed step size. For each month, the corresponding input series are loaded into the MATLAB base workspace and assigned to a consistent set of model input variables. This allows the same Simulink model to be executed across months without changing the model structure. The inputs follow a unified naming scheme, for example ts_true and ts_pred for load, and ts_pv_true, ts_pv_pred, ts_wt_true, ts_wt_pred for PV and WT.
Model outputs are collected from the SimulationOutput object rather than through logsout. For both the True and Pred branches, we export grid power, battery power, and battery state of charge. The exported signals are P grid ( t ) , P bess ( t ) , and SoC ( t ) . These time series are then used to compute system-level KPIs and to compare Pred against True.

2.4.6. Evaluation Metrics and System-Level KPIs

We compute system-level KPIs from the hourly simulation outputs to quantify downstream operational effects and cost sensitivity. Let $\Delta t$ denote the time step in hours. We adopt the sign convention that $P_{\mathrm{grid}}(t) > 0$ indicates grid import and $P_{\mathrm{grid}}(t) < 0$ indicates grid export. Grid import and export energies are obtained from the positive and negative parts of the grid power:
$$E_{\mathrm{imp}} = \sum_t \max\big(P_{\mathrm{grid}}(t),\, 0\big)\, \Delta t,$$
$$E_{\mathrm{exp}} = \sum_t \max\big(-P_{\mathrm{grid}}(t),\, 0\big)\, \Delta t.$$
Grid peak power is defined as
$$P_{\mathrm{grid,peak}} = \max_t \big| P_{\mathrm{grid}}(t) \big|.$$
Battery utilization is summarized by throughput and peak power:
$$E_{\mathrm{bess,abs}} = \sum_t \big| P_{\mathrm{bess}}(t) \big|\, \Delta t,$$
$$P_{\mathrm{bess,peak}} = \max_t \big| P_{\mathrm{bess}}(t) \big|.$$
Battery operating state is characterized by the SoC range and mean:
$$\mathrm{SoC}_{\mathrm{range}} = \max_t \mathrm{SoC}(t) - \min_t \mathrm{SoC}(t),$$
$$\overline{\mathrm{SoC}} = \frac{1}{T} \sum_t \mathrm{SoC}(t).$$
To compare Pred and True, we report the absolute percentage deviation for each KPI:
$$\Delta(\%) = 100 \times \frac{\big|\mathrm{KPI}_{\mathrm{Pred}} - \mathrm{KPI}_{\mathrm{True}}\big|}{\max\big(|\mathrm{KPI}_{\mathrm{True}}|,\, \epsilon\big)},$$
with $\epsilon = 10^{-9}$ to prevent division by zero.
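These KPI and deviation definitions translate directly into code (a sketch; dictionary keys are ours):

```python
import numpy as np

def monthly_kpis(p_grid, p_bess, soc, dt=1.0):
    """System-level KPIs from hourly trajectories (Section 2.4.6)."""
    return {
        "E_imp": np.sum(np.maximum(p_grid, 0.0)) * dt,     # grid import energy
        "E_exp": np.sum(np.maximum(-p_grid, 0.0)) * dt,    # grid export energy
        "P_grid_peak": np.max(np.abs(p_grid)),
        "E_bess_abs": np.sum(np.abs(p_bess)) * dt,         # battery throughput
        "P_bess_peak": np.max(np.abs(p_bess)),
        "SoC_range": np.max(soc) - np.min(soc),
        "SoC_mean": np.mean(soc),
    }

def kpi_deviation(kpi_pred, kpi_true, eps=1e-9):
    """Absolute percentage deviation of Pred vs. True for each KPI."""
    return {k: 100.0 * abs(kpi_pred[k] - kpi_true[k]) / max(abs(kpi_true[k]), eps)
            for k in kpi_true}
```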
Economic Deviation Under a Flat Tariff and a Throughput-Based Degradation Proxy
To express forecast-driven operational differences in monetary terms, we compute an illustrative monthly cost under constant purchase and feed-in prices [35]. Let $\pi_{\mathrm{buy}}$ and $\pi_{\mathrm{sell}}$ denote flat electricity purchase and feed-in tariffs in €/MWh. The net energy cost is
$$J_{\mathrm{energy}} = \pi_{\mathrm{buy}}\, E_{\mathrm{imp}} - \pi_{\mathrm{sell}}\, E_{\mathrm{exp}}.$$
To capture battery usage intensity, we use a throughput-based degradation proxy with a linear coefficient $c_{\mathrm{deg}}$ in €/MWh [35,36]:
$$J_{\mathrm{deg}} = c_{\mathrm{deg}}\, E_{\mathrm{bess,abs}}.$$
The total operating cost is then
$$J_{\mathrm{total}} = J_{\mathrm{energy}} + J_{\mathrm{deg}}.$$
This simplified cost model is used to compare Pred and True under identical assumptions rather than to reproduce a specific market tariff structure. We report deviations for J energy and J total as an indicator of economic sensitivity under the flat-tariff assumption.
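The cost model is a three-line computation; the default prices below are the illustrative values used later in Section 3 (100/50/10 EUR/MWh):

```python
def flat_tariff_cost(e_imp, e_exp, e_bess_abs,
                     pi_buy=100.0, pi_sell=50.0, c_deg=10.0):
    """Illustrative monthly cost under flat tariffs (Section 2.4.6), in EUR.

    e_imp, e_exp, e_bess_abs are monthly energies in MWh; returns
    (J_energy, J_deg, J_total).
    """
    j_energy = pi_buy * e_imp - pi_sell * e_exp  # net energy cost
    j_deg = c_deg * e_bess_abs                   # throughput-based proxy
    return j_energy, j_deg, j_energy + j_deg
```

Applying this to the True and Pred KPI sets under identical prices yields the monetary deviations reported in the results.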

3. Results

3.1. Forecasting Performance and Model Selection

We compare three forecasting approaches, GRU, LSTM, and a persistence baseline, for hour-ahead prediction of load, PV, and wind power over four representative months (January, April, July, and December). Point-forecast accuracy is measured using RMSE, MAE, and R². To reflect predictive uncertainty, we additionally evaluate 90% prediction intervals (PI90) using the empirical coverage probability (PICP@90) and the prediction interval normalized average width (PINAW). PICP values closer to 0.9 indicate coverage closer to the nominal level, while a smaller PINAW corresponds to tighter intervals at the same nominal coverage.
Table 4 reports results averaged across the four months. For both load and PV, GRU achieves the lowest RMSE and MAE among the tested models and also yields the strongest R², indicating consistently better point-forecast performance. For wind power, the persistence baseline performs best in terms of RMSE, MAE, and R², suggesting that short-term persistence captures much of the hour-ahead wind variability in this dataset.
Across targets, PI90 calibration is generally close to the nominal level. For load and PV, PICP@90 stays around 0.92 and remains accompanied by relatively narrow intervals for GRU, especially for PV where GRU attains the smallest PINAW. For wind, GRU intervals are slightly under-covering (PICP@90 below 0.9), while LSTM and persistence are closer to nominal coverage but with wider intervals. Overall, GRU provides a good balance between accuracy and model complexity, using fewer trainable parameters than LSTM under the same configuration (12,929 vs. 16,961).
Figure 5, Figure 6, Figure 7 and Figure 8 illustrate representative trajectories for the four months. The examples visually confirm the summary trends in Table 4. GRU and LSTM closely follow the load and PV dynamics, while wind trajectories show stronger short-term persistence, which explains the strong baseline results.
Based on the averaged performance and the lower parameter count, we select GRU as the representative forecasting model for the downstream coupling. In the following microgrid simulations, GRU forecasts are used as the Pred inputs so that the operation-level analysis focuses on how forecast uncertainty propagates to trajectories and system-level KPIs.

3.2. Microgrid Trajectories Under True vs. Pred Inputs

To show how forecasting errors translate into operational differences, we compare simulated microgrid trajectories under True inputs and Pred inputs using the same EMS and identical model parameters. For each representative month (January, April, July, and December), we display a 120-h window to highlight short-term dynamics and controller responses. We focus on grid exchange power P grid ( t ) , battery dispatch P bess ( t ) , and the battery state of charge SoC ( t ) .
Across all months, the Pred and True trajectories show the same qualitative operating pattern. Deviations are most visible in P grid ( t ) around periods where the net load changes rapidly or crosses zero, because small forecast errors in load or renewable generation can flip the sign of the residual power balance and therefore alter the import/export profile. The battery controller compensates part of these discrepancies by adjusting P bess ( t ) , which reduces the propagation of forecast errors into the battery state. As a result, SoC ( t ) remains closely aligned between Pred and True in the shown windows, and we do not observe sustained saturation at the SoC bounds. Overall, these trajectory-level comparisons indicate that the rule-based EMS buffers part of the hour-ahead uncertainty in the GRU inputs. Figure 9, Figure 10, Figure 11 and Figure 12 compare 120-h trajectory windows under True and Pred inputs for January, April, July, and December, respectively. The remaining differences are mainly reflected in grid exchange profiles and therefore motivate the system-level KPI and economic deviation analysis in the next subsection.
We summarize system-level KPI deviations between True and Pred simulations using the monthly metrics defined in Section 2.4.6. Deviations are reported as absolute percentage differences, which show how hour-ahead forecast uncertainty propagates from trajectory-level behavior to aggregated operational outcomes.
Figure 13 summarizes the absolute percentage deviations of the main system-level KPIs between Pred and True across the four representative months.
Overall, grid exchange KPIs are more sensitive than battery state indicators. Grid export energy shows the largest deviations, which suggests that forecast errors mainly affect the import and export balance rather than triggering unstable control behavior. Battery utilization also changes with forecast inputs, as reflected by throughput deviations, while the SoC mean and range remain relatively close between the two branches. The BESS peak power deviation is zero because the same power limits are enforced in both branches, which caps the peak dispatch in an identical way. Table 5 reports the corresponding absolute percentage KPI deviations across the four representative months.
These results indicate that forecast errors primarily influence energy exchange and cycling intensity, which motivates reporting economic-cost deviations under the flat-tariff assumption in the next subsection.
We report an illustrative cost-sensitivity analysis under the flat-tariff setting introduced in Section 2.4.6. We use constant purchase and feed-in prices of $\pi_{\mathrm{buy}} = 100$ EUR/MWh and $\pi_{\mathrm{sell}} = 50$ EUR/MWh. We map battery usage to a throughput-based degradation proxy with $c_{\mathrm{deg}} = 10$ EUR/MWh of throughput. These price levels are illustrative and are used to express operational deviations on a monetary scale under a consistent flat-tariff setting rather than to reproduce a specific market tariff. For each month, we compute the energy cost $J_{\mathrm{energy}}$, the degradation proxy $J_{\mathrm{deg}}$, and the total cost $J_{\mathrm{total}} = J_{\mathrm{energy}} + J_{\mathrm{deg}}$ for both True and Pred trajectories.
Figure 14 summarizes the absolute percentage deviations. The largest total-cost deviation occurs in July (19.44%), while April shows the smallest deviation (1.66%). Across months, the total-cost deviation largely tracks the energy-cost deviation, suggesting that the cost sensitivity is mainly driven by changes in grid import and export. The degradation proxy is smaller in magnitude, but it captures differences in BESS throughput and becomes more noticeable in months with larger dispatch differences. Table 6 provides the month-by-month cost values for True and Pred, together with the corresponding deviations.

3.3. Summary of Key Findings

Across four representative months, GRU achieves the best overall point-forecast performance for load and PV among the tested models while using fewer trainable parameters than LSTM, so we select it as the forecasting model for the downstream coupling. For wind power, the persistence baseline yields the strongest point metrics, indicating that short-term autocorrelation explains a large part of hour-ahead wind variation in this dataset. Prediction-interval results are generally close to the nominal 90% level, with GRU providing competitive coverage and relatively tight intervals for load and PV.
In the microgrid simulations, True and Pred trajectories remain qualitatively consistent within the 120 h windows. Deviations are most visible in grid exchange power around rapid net-load changes and zero-crossings, while the rule-based EMS buffers part of the forecast errors through battery dispatch so that SoC ( t ) stays closely aligned and we do not observe sustained saturation at the SoC bounds.
At the system level, deviations are more pronounced for energy-exchange metrics than for battery-state metrics. Grid export energy is the most sensitive KPI, with the largest deviation in December (31.98%) and elevated deviations also in July (23.69%). Battery throughput is more affected than SoC indicators, reaching its largest deviation in July (14.85%), whereas mean SoC and SoC range show smaller deviations.
Under the illustrative flat-tariff setting with a throughput-based degradation proxy, economic deviations are dominated by changes in energy cost, with total-cost deviations ranging from 1.66% (April) to 19.44% (July). The degradation proxy contributes a smaller share overall, but it becomes more visible in months where throughput differences increase.

4. Discussion

4.1. Key Findings and Implications

This study examined how hour-ahead forecasting uncertainty propagates to microgrid operation and operating cost under a rule-based energy management strategy. Across four representative months (January, April, July, and December), GRU achieved the best overall point-forecast accuracy for Load and PV among the tested models. In contrast, wind power remains strongly driven by short-term persistence at the one-hour horizon, and the persistence baseline outperformed the recurrent models on the main point metrics. Taken together, the results indicate that model choice is target-dependent and that a simple baseline can be difficult to beat for wind under the present data and horizon.
Despite these differences in forecasting performance, the microgrid simulations show that reliability- and constraint-related indicators exhibit smaller deviations between Pred and True. The BESS peak power deviation is 0% in all months, which reflects the fact that battery dispatch is frequently governed by the same actuator limits and rule thresholds in both branches. Differences are more pronounced in energy-exchange and cost-related outcomes. Grid export energy is the most sensitive KPI, with the largest deviation in December (31.98%), and the total operating cost shows its largest deviation in July (19.44%). Overall, under a constraint-driven rule-based EMS, short-horizon forecast errors tend to affect efficiency and economic outcomes more than constraint-related indicators such as peak power and SoC behavior.

4.2. Interpreting Forecasting Performance Across Targets

The averaged forecasting metrics show that load and PV behave differently from wind at the one-hour horizon. For Load and PV, GRU performs best among the tested models in both RMSE and R². It also uses fewer trainable parameters than LSTM, which makes it a reasonable choice when computational budget and reproducibility matter.
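For reference, the parameter counts reported in Table 4 are consistent with single-layer recurrent models with 64 hidden units, a univariate input, and a dense output head; this architecture is inferred from the counts (the 64-unit width and Keras-style GRU bias layout are our assumptions), so the sketch below is a consistency check rather than a restatement of the paper's implementation.

```python
# Parameter-count check for single-layer GRU/LSTM with a univariate input
# and a Dense(1) head. Assumes 64 hidden units and a Keras-style GRU bias
# layout (reset_after=True, i.e. two bias vectors per gate set) -- these
# architectural details are inferred, not stated in the paper.

def gru_params(hidden, inputs=1):
    # 3 gates: input kernel, recurrent kernel, and two bias vectors
    return 3 * (inputs * hidden + hidden * hidden + 2 * hidden)

def lstm_params(hidden, inputs=1):
    # 4 gates: input kernel, recurrent kernel, and one bias vector
    return 4 * (inputs * hidden + hidden * hidden + hidden)

def dense_params(hidden, outputs=1):
    return hidden * outputs + outputs  # weights + bias

H = 64
print(gru_params(H) + dense_params(H))   # 12929, as in Table 4
print(lstm_params(H) + dense_params(H))  # 16961, as in Table 4
```

Under these assumptions the counts match Table 4 exactly, which supports the observation that GRU attains its accuracy with roughly 24% fewer trainable parameters than LSTM.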
The prediction-interval results broadly align with the point metrics. PI90 coverage (PICP@90) stays close to the nominal level across targets, and GRU often achieves comparable coverage with narrower intervals (lower PINAW) than the alternatives, especially for Load and PV. This suggests that the uncertainty estimates are sufficiently consistent for the downstream analysis and can support future uncertainty-aware control extensions.
In contrast, WT forecasting exhibits a regime where persistence is difficult to beat at the one-hour horizon. The persistence baseline achieves a lower RMSE (6.287) and a higher R² (0.909) than GRU and LSTM, which is consistent with strong lag-1 dependence in wind generation. Under the univariate setup used here, recurrent models have limited additional structure to exploit because meteorological covariates are not available at the same spatial granularity. As a result, the added model complexity does not necessarily translate into better generalization for hour-ahead WT.
Practically, this motivates keeping a simple baseline in the benchmarking suite and cautions against assuming that a more complex model will automatically translate into operational gains for every target series.
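Keeping the baseline in the suite costs almost nothing: hour-ahead persistence and the point metrics of Table 4 reduce to a few lines, as the sketch below illustrates.

```python
# Hour-ahead persistence baseline and the RMSE / R^2 point metrics used in
# Table 4. The forecast for hour t is simply the observation at t-1.
import numpy as np

def persistence_forecast(y):
    """Return (yhat, y_true) aligned pairs; the first hour has no forecast."""
    return y[:-1], y[1:]

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

For a strongly lag-1-dependent series such as the wind trace studied here, this zero-parameter forecaster already explains most of the variance, which is exactly the regime where added model complexity buys little.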

4.3. How Forecast Errors Propagate Through EMS Logic

A key contribution of this work is to connect forecasting errors to operational trajectories and, ultimately, to monthly KPIs and costs. The KPI deviations show that errors do not propagate uniformly. SoC-related indicators show smaller deviations, while throughput becomes more sensitive in months with larger control adjustments, reaching 14.85% in July and 8.54% in December. This suggests that differences mainly accumulate through cycling intensity rather than instantaneous extremes.
This pattern is consistent with rule-based dispatch under SoC bounds and power limits. The constraints cap extreme responses, which helps explain the zero deviation in BESS peak power across months. By contrast, deviations grow when the system operates near switching conditions such as a near-zero net power balance, where small forecast errors can flip import and export decisions. Grid export energy is therefore the most sensitive KPI, deviating by 19.81% in January and 31.98% in December.
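The KPI deviations discussed in this subsection are simple functionals of the simulated trajectories; a sketch under an assumed sign convention (grid import positive, export negative; the paper's internal convention is not restated here):

```python
# Monthly KPIs from hourly trajectories and the absolute percentage
# deviation between the Pred- and True-driven branches (Table 5). Sign
# convention assumed: p_grid > 0 is import, p_grid < 0 is export.
import numpy as np

def monthly_kpis(p_grid, p_bess, soc, ts_h=1.0):
    """p_grid, p_bess in MW; soc in [0, 1]; ts_h is the sampling step in hours."""
    return {
        "E_imp": float(np.sum(np.clip(p_grid, 0, None)) * ts_h),   # MWh imported
        "E_exp": float(-np.sum(np.clip(p_grid, None, 0)) * ts_h),  # MWh exported
        "P_grid_peak": float(np.max(np.abs(p_grid))),
        "E_bess_abs": float(np.sum(np.abs(p_bess)) * ts_h),        # throughput
        "P_bess_peak": float(np.max(np.abs(p_bess))),
        "SoC_range": float(soc.max() - soc.min()),
        "SoC_mean": float(soc.mean()),
    }

def deviation_pct(true_v, pred_v):
    """Absolute percentage deviation relative to the True branch."""
    return abs(pred_v - true_v) / abs(true_v) * 100.0
```

Note that throughput accumulates every charge/discharge action over the month, whereas peak power records only the single largest action, which is one reason throughput is the more sensitive battery-side indicator.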

4.4. Sensitivity of Operational KPIs and Operating Costs

A useful way to interpret the KPI outcomes is to separate reliability and constraint-related indicators from energy-flow and economics-related indicators. Reliability and constraint-related quantities show smaller deviations between Pred and True. BESS peak power shows 0% deviation in all months, and SoC indicators stay comparatively close, with SoC range deviations peaking at 5.11% in December. Grid peak power shows moderate sensitivity in higher-variability months such as July (12.16%) and April (9.36%), but we do not observe systematic constraint violations.
Energy-flow indicators show clearer amplification. Grid import energy deviations range from 1.92% (April) to 15.95% (July), while grid export energy is more sensitive and reaches 31.98% in December. These deviations translate into cost outcomes. The largest discrepancy occurs in July, where J_energy deviates by 19.78% and J_total deviates by 19.44%. The degradation proxy becomes more visible when cycling differs, with J_deg deviating by 14.85% in July, consistent with the throughput deviation.
The July sensitivity can be interpreted as a regime effect. With higher PV output and more near-zero net power balance, small forecast shifts can flip import and export decisions, and these sign changes accumulate into larger monthly energy deviations. Under the flat tariff, the effect appears mainly in J_energy, with an additional contribution from cycling intensity through J_deg.
In addition, high-frequency fluctuations in the forecast-driven residual power can interact with the EMS deadband setting (0.02 MW) when the net power balance operates close to zero. Crossing the deadband more frequently may lead to additional short charge and discharge actions. While these small actions may have limited impact on headline SoC indicators at the hourly scale, they can accumulate over the month and increase battery throughput, which may help explain the more visible throughput and J_deg deviations in July.
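The cost decomposition discussed above mirrors J_total = J_energy + J_deg from Table 6. A minimal sketch follows; the tariff and degradation coefficients are placeholders for illustration only, since the paper's actual values are not restated here.

```python
# Illustrative flat-tariff operating cost with a throughput-based
# degradation proxy, mirroring the J_total = J_energy + J_deg split of
# Table 6. price_imp, price_exp, and c_deg are placeholder values, NOT the
# coefficients used in the paper.
def operating_cost(e_imp_mwh, e_exp_mwh, e_bess_abs_mwh,
                   price_imp=50.0, price_exp=30.0, c_deg=1.0):
    """Energies in MWh, prices in EUR/MWh; returns (J_energy, J_deg, J_total)."""
    j_energy = price_imp * e_imp_mwh - price_exp * e_exp_mwh  # import cost minus export revenue
    j_deg = c_deg * e_bess_abs_mwh                            # cycling-proportional proxy
    return j_energy, j_deg, j_energy + j_deg
```

Because J_deg is linear in throughput, any month where cycling intensity diverges between Pred and True (July here) shows the degradation term deviating by exactly the throughput deviation.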

4.5. Practical Takeaways for Microgrid Operation

From an operator perspective, three points stand out. First, hour-ahead forecasts of load and PV from a compact recurrent model such as GRU can be sufficient for stable operation when a constraint-first EMS is used, because actuation limits and SoC dynamics buffer short-term errors. Second, at the one-hour horizon, wind power may be handled adequately by a persistence baseline in univariate and data-limited settings, and added model complexity does not guarantee operational gains unless additional predictors such as meteorological variables are available. Third, the main benefit of better forecasts is likely economic rather than reliability-related. Improvements primarily reduce unnecessary battery cycling and correct import and export allocation, especially in periods with frequent net-balance crossings.

4.6. Limitations and Future Work

Several limitations should be considered. The forecasting stage is univariate, which keeps the pipeline reproducible but limits performance for weather-driven resources such as wind and solar. The EMS is rule-based and does not optimize decisions under forecast uncertainty, so the results emphasize robustness and error propagation rather than optimal scheduling. The economic analysis uses a flat tariff together with a simplified throughput-based degradation proxy, which is suitable for comparison but does not reflect time-varying prices or detailed battery aging. Finally, the evaluation covers four representative months in one year, and broader multi-year testing would strengthen generalizability.
Future work could incorporate exogenous covariates and multivariate learning, and couple forecasts with uncertainty-aware dispatch such as robust control or MPC. Additional sensitivity studies over battery capacity, renewable penetration, and EMS thresholds would help identify regimes where improved forecasting yields the highest operational and economic value.

5. Conclusions

This paper investigated how hour-ahead forecasting uncertainty propagates to downstream microgrid operation and operating costs in a simulation-based energy management setting. Using four representative months (January, April, July, and December), we benchmarked GRU, LSTM, and a persistence baseline for Load, PV, and WT forecasting, and then compared microgrid trajectories, KPIs, and cost components under Pred versus True inputs.
Three main conclusions follow. First, forecasting performance is target-dependent. GRU provides the strongest accuracy for Load and PV, with Load RMSE 3.926 and R² 0.903, and PV RMSE 6.207 and R² 0.889. For WT, a persistence baseline remains difficult to beat at the one-hour horizon, achieving WT RMSE 6.287 and R² 0.909. Second, under the constraint-driven rule-based EMS, reliability- and constraint-related indicators show smaller deviations under forecast errors. BESS peak power is unchanged between Pred and True across all months, and SoC-related metrics remain bounded, indicating that dispatch constraints and SoC limits buffer short-horizon uncertainty. Third, the main impact of forecast errors appears in energy-flow and economic outcomes. Grid import and export energies, battery throughput, and operating cost are more sensitive, with the largest total-cost deviation in July reaching 19.44%. Export energy is particularly sensitive when the net balance frequently crosses zero, with the largest export deviation occurring in December (31.98%).
The study has several limitations. The forecasting stage is univariate and does not include meteorological predictors, which can limit performance for weather-driven generation. The EMS is rule-based and not explicitly uncertainty-aware, and the economic formulation uses simplified assumptions including flat tariffs and a throughput-based degradation proxy. Finally, although four months span seasonal variation, broader multi-year testing would strengthen generalizability.
Future work will focus on multivariate forecasting with exogenous covariates, uncertainty-aware dispatch, and sensitivity studies to identify regimes where improved forecasts deliver the highest operational and economic value.

Author Contributions

Conceptualization, Y.-K.L., G.R. and S.I.; methodology, Y.-K.L.; software, Y.-K.L.; validation, Y.-K.L., G.R. and S.I.; formal analysis, Y.-K.L.; investigation, Y.-K.L.; resources, G.R. and S.I.; data curation, Y.-K.L.; writing—original draft preparation, Y.-K.L.; writing—review and editing, G.R. and S.I.; visualization, Y.-K.L.; supervision, G.R. and S.I.; project administration, S.I. and G.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are derived from the publicly available IESO Ontario dataset (FSA K0K, 2018). Processed datasets are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. Overview of the proposed workflow. The lettered labels (a)–(e) indicate sequential steps within the same diagram and are not subfigure labels.
Figure 2. Average diurnal profiles (hour-of-day mean) of load (K0K), scaled wind (WT), and scaled photovoltaic (PV) generation for January, April, July, and December 2018. WT/PV are scaled using fixed factors, and PV is peak-capped to 0.8× the monthly peak load.
Figure 3. Simulation workflow under True vs. Pred inputs.
Figure 4. Simplified microgrid architecture and signal flow.
Figure 5. Monthly forecasting example (January). Load, PV, and wind trajectories comparing True against GRU, LSTM, and the persistence baseline.
Figure 6. Monthly forecasting example (April). Load, PV, and wind trajectories comparing True against GRU, LSTM, and the persistence baseline.
Figure 7. Monthly forecasting example (July). Load, PV, and wind trajectories comparing True against GRU, LSTM, and the persistence baseline.
Figure 8. Monthly forecasting example (December). Load, PV, and wind trajectories comparing True against GRU, LSTM, and the persistence baseline.
Figure 9. Trajectory comparison under True vs. Pred inputs for January (120-h window): P_grid(t), P_bess(t), and SoC(t).
Figure 10. Trajectory comparison under True vs. Pred inputs for April (120-h window): P_grid(t), P_bess(t), and SoC(t).
Figure 11. Trajectory comparison under True vs. Pred inputs for July (120-h window): P_grid(t), P_bess(t), and SoC(t).
Figure 12. Trajectory comparison under True vs. Pred inputs for December (120-h window): P_grid(t), P_bess(t), and SoC(t).
Figure 13. System-level KPI deviations between Pred and True across four representative months.
Figure 14. Economic deviations under a flat tariff and a throughput-based degradation proxy. Bars report absolute percentage deviations between Pred and True for J_energy, J_deg, and J_total.
Table 1. Descriptive statistics (MW) of hourly load (K0K) and scaled renewable generation (WT, PV) across four representative months in 2018. WT/PV are scaled using fixed factors; PV is peak-capped to 0.8× the monthly peak load.

| Month | Target | Mean | Std | Min | Max |
|---|---|---|---|---|---|
| January | Load | 94.47 | 18.07 | 55.39 | 146.60 |
| January | WT | 59.66 | 32.28 | 1.09 | 116.91 |
| January | PV | 9.48 | 18.59 | 0.00 | 117.28 |
| April | Load | 71.71 | 12.71 | 42.12 | 110.59 |
| April | WT | 39.82 | 27.73 | 0.50 | 110.70 |
| April | PV | 19.12 | 24.96 | 0.00 | 88.47 |
| July | Load | 74.38 | 18.91 | 40.90 | 119.63 |
| July | WT | 20.35 | 16.37 | 0.06 | 81.56 |
| July | PV | 28.42 | 31.83 | 0.00 | 95.70 |
| December | Load | 84.04 | 13.47 | 52.59 | 114.48 |
| December | WT | 42.61 | 28.72 | 2.44 | 110.99 |
| December | PV | 5.56 | 11.86 | 0.00 | 91.59 |
Table 2. BESS parameters used in the Simulink microgrid model.

| Parameter | Value |
|---|---|
| Sampling step T_s | 1 h |
| Energy capacity E_cap | 80 MWh |
| Charge power limit P_chg_max | 6 MW |
| Discharge power limit P_dis_max | 4 MW |
| SoC bounds (SoC_min, SoC_max) | (0.20, 0.80) |
| SoC reserve threshold SoC_res | 0.35 |
| Charge efficiency η_c | 0.95 |
| Discharge efficiency η_d | 0.95 |
| Deadband | 0.02 MW |
| Initial SoC SoC_0 | 0.40 |
Table 3. Summary of the rule-based EMS logic used in the Simulink microgrid model.

| Item | Rule |
|---|---|
| Deadband | If the residual power stays within the deadband (0.02 MW), the BESS command is set to zero to avoid frequent small switching actions. |
| Charge | If the residual power indicates surplus generation, the controller charges the battery subject to the charge power limit P_chg_max and the SoC upper bound SoC_max. |
| Discharge | If the residual power indicates a deficit, the controller discharges the battery subject to the discharge power limit P_dis_max, the SoC lower bound SoC_min, and the reserve threshold SoC_res. |
| Saturation | Charge and discharge commands are saturated by the configured power limits and SoC bounds to respect the configured constraints. |
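The rules in Table 3, combined with the Table 2 parameters, can be sketched as a single dispatch step. The reserve-threshold handling and sign conventions below are our reading of the rules, not the exact Simulink block diagram.

```python
# One dispatch step of the rule-based EMS summarized in Table 3, with the
# Table 2 parameters as defaults. A sketch of our interpretation (reserve
# handling and sign conventions are assumptions), not the Simulink model.
def ems_step(p_res, soc, ts_h=1.0, e_cap=80.0,
             p_chg_max=6.0, p_dis_max=4.0,
             soc_min=0.20, soc_max=0.80, soc_res=0.35,
             eta_c=0.95, eta_d=0.95, deadband=0.02):
    """p_res > 0 means surplus generation, p_res < 0 a deficit (MW).
    Returns (p_bess, p_grid, new_soc); p_bess > 0 charges the battery,
    p_grid > 0 imports from the grid."""
    if abs(p_res) <= deadband:
        p_bess = 0.0                                # suppress small switching
    elif p_res > 0:
        # charge, limited by power rating and SoC headroom
        headroom = (soc_max - soc) * e_cap / (eta_c * ts_h)
        p_bess = min(p_res, p_chg_max, max(headroom, 0.0))
    else:
        # discharge, limited by power rating and a reserve-aware SoC floor
        floor = soc_res if soc > soc_res else soc_min
        available = (soc - floor) * e_cap * eta_d / ts_h
        p_bess = -min(-p_res, p_dis_max, max(available, 0.0))
    # efficiency-aware SoC update
    if p_bess >= 0.0:
        soc = soc + eta_c * p_bess * ts_h / e_cap
    else:
        soc = soc + p_bess * ts_h / (eta_d * e_cap)
    p_grid = -(p_res - p_bess)                      # grid absorbs the remainder
    return p_bess, p_grid, soc
```

The hard min/max saturations are what decouple peak power from forecast quality: both the True and Pred branches hit the same 6 MW / 4 MW caps, which is consistent with the 0% BESS peak-power deviation reported in Table 5.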
Table 4. Forecasting performance averaged over four months (January, April, July, and December). Lower is better for RMSE, MAE, and PINAW, while higher is better for R². For PI90, PICP close to 0.9 indicates good calibration.

| Target | Model | RMSE | MAE | R² | PICP@90 | PINAW | Params |
|---|---|---|---|---|---|---|---|
| Load | GRU | 3.926 | 3.022 | 0.903 | 0.921 | 0.295 | 12,929 |
| Load | LSTM | 4.523 | 3.517 | 0.865 | 0.915 | 0.330 | 16,961 |
| Load | Persistence | 4.734 | 3.784 | 0.866 | 0.915 | 0.337 | — |
| PV | GRU | 6.207 | 3.655 | 0.889 | 0.925 | 0.241 | 12,929 |
| PV | LSTM | 7.103 | 4.342 | 0.860 | 0.923 | 0.274 | 16,961 |
| PV | Persistence | 8.252 | 4.717 | 0.811 | 0.901 | 0.301 | — |
| WT | GRU | 7.263 | 5.541 | 0.873 | 0.895 | 0.304 | 12,929 |
| WT | LSTM | 8.503 | 6.547 | 0.837 | 0.903 | 0.349 | 16,961 |
| WT | Persistence | 6.287 | 4.730 | 0.909 | 0.913 | 0.265 | — |
Table 5. Absolute percentage deviations of monthly KPIs between Pred and True.

| KPI | Jan | Apr | Jul | Dec |
|---|---|---|---|---|
| *Grid-related* | | | | |
| Grid import energy E_imp (%) | 5.32 | 1.92 | 15.95 | 7.48 |
| Grid export energy E_exp (%) | 19.81 | 8.43 | 23.69 | 31.98 |
| Grid peak power P_grid,peak (%) | 1.74 | 9.36 | 12.16 | 1.87 |
| *Battery-related* | | | | |
| BESS throughput E_bess,abs (%) | 1.16 | 0.39 | 14.85 | 8.54 |
| BESS peak power P_bess,peak (%) | 0.00 | 0.00 | 0.00 | 0.00 |
| SoC range (%) | 1.79 | 1.36 | 0.61 | 5.11 |
| Mean SoC (%) | 0.40 | 0.29 | 4.59 | 2.54 |
Table 6. Monthly operating-cost comparison under the flat-tariff assumption. Costs are reported for True and Pred trajectories, together with the absolute percentage deviation (relative to True). All costs in EUR.

| Month | J_energy True | J_energy Pred | Dev. (%) | J_deg True | J_deg Pred | Dev. (%) | J_total True | J_total Pred | Dev. (%) |
|---|---|---|---|---|---|---|---|---|---|
| Jan | 288,646 | 269,429 | 6.66 | 2524 | 2495 | 1.16 | 291,171 | 271,924 | 6.61 |
| Apr | 267,978 | 272,494 | 1.69 | 3059 | 3047 | 0.39 | 271,037 | 275,541 | 1.66 |
| Jul | 326,254 | 261,713 | 19.78 | 3213 | 3690 | 14.85 | 329,467 | 265,403 | 19.44 |
| Dec | 350,672 | 315,911 | 9.91 | 1393 | 1512 | 8.54 | 352,066 | 317,424 | 9.84 |

Share and Cite

MDPI and ACS Style

Liu, Y.-K.; Rafajlovski, G.; Islam, S. GRU-Based Short-Term Forecasting for Microgrid Operation: Modeling and Simulation Using Simulink. Algorithms 2026, 19, 116. https://doi.org/10.3390/a19020116


