1. Introduction
The electric power system is undergoing a profound transformation driven by the fast and widespread penetration of DERs—including photovoltaic (PV) generation, battery energy storage systems (BESS), electric vehicles (EVs), hydrogen-based technologies, and flexible loads—together with the electrification of end uses that were traditionally supplied by thermal processes. This change is pushing distribution grids toward the smart grid concept, with bidirectional flows, active prosumers, local energy communities, and an unprecedented density of manageable devices [1,2]. This creates new opportunities, but it also introduces significant operational challenges: increased variability and uncertainty due to weather-dependent generation, voltage and congestion problems at the distribution-grid level, more complex interactions between transmission and distribution systems, and the need to coordinate large numbers of small assets, sometimes in (near) real time. A response to these challenges is the concept of flexibility, which refers to the capability of different assets to adjust their consumption or generation profiles in response to external signals, such as prices, grid constraints, or system needs [3,4,5]. Flexibility can be leveraged for two complementary objectives: internally, to optimize the operation of the customer’s facilities (e.g., minimizing operating costs, maximizing self-consumption, or reducing pollutant emissions), and externally, to provide support services to grid operators [6].
To unlock this flexibility in a systematic, scalable, and reliable way, modern installations require advanced control and optimization layers that can translate physical, electrical, and economic models into actionable control decisions. In this context, Energy Management Systems (EMS) and Model Predictive Control (MPC) frameworks play a central role as the “brain” of flexible assets [7]. An EMS is generally responsible for supervising and optimizing the operation of a specific energy system—such as a building, microgrid, industrial site, or aggregated portfolio of assets—by scheduling controllable resources over an optimization horizon while enforcing technical and operational constraints. MPC provides a rigorous formulation for EMS operation, solving a finite-horizon optimization problem in a receding-horizon fashion, using updated measurements and forecasts of demand, renewable generation, or energy prices to compute optimal setpoints that are then applied to the real system [8].
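For reference, the finite-horizon problem solved at each execution can be written in the standard receding-horizon form below; this is a generic textbook formulation rather than one taken from a specific reference, with the stage cost $\ell$, dynamics $f$, and constraint sets standing in for the concrete models discussed later:

$$
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell\left(x_k, u_k, \hat{w}_k\right) \\
\text{s.t.} \quad & x_{k+1} = f\left(x_k, u_k, \hat{w}_k\right), \quad x_0 = x(t), \\
& x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}, \quad k = 0,\dots,N-1,
\end{aligned}
$$

where $x_k$ are system states (e.g., storage state of charge), $u_k$ are controllable set-points, and $\hat{w}_k$ are forecasts of exogenous inputs (demand, renewable generation, prices). Only the first move $u_0$ is applied to the real system; at the next sampling instant the horizon shifts forward and the problem is re-solved with updated measurements and forecasts.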
In practical implementations of EMS and MPC, several families of optimization techniques can be used, ranging from deterministic methods—such as linear programming (LP), quadratic programming (QP), convex nonlinear programming, and mixed-integer formulations (MILP/MIQP)—to heuristic and metaheuristic approaches, including genetic algorithms, particle swarm optimization, simulated annealing, tabu search, or evolutionary strategies [9]. Exact or deterministic methods exploit the algebraic structure of the problem by operating on a well-defined cost function and constraints to return a proven optimum (for LP/QP/convex problems) or a sub-optimal solution with a certified optimality gap (for MILP/MIQP and many non-convex cases) [10,11]. This approach offers several advantages in the context of EMS/MPC. First, constraint satisfaction is guaranteed by construction, which is critical when constraints represent thermal limits, state-of-charge ranges, or comfort and process quality requirements [12]. Second, deterministic solvers ensure reproducibility of results: re-solving the same problem with the same inputs yields identical solutions. Third, they provide valuable by-products such as dual variables, sensitivities, and reduced costs, which can be used to analyze congestion, marginal values of flexibility, or to derive simplified rules for asset sizing and flexibility assessment. Finally, modern LP/MILP solvers are now fast and robust enough for rolling-horizon execution with granularities of 5–15 min [13]. Notably, when the physical and economic models are well calibrated, research suggests that the primary source of performance degradation in MPC-based EMS is the forecast error in exogenous inputs, such as load or prices [14,15,16].
In contrast, heuristic and metaheuristic methods are advantageous when problems are poorly structured, highly non-convex, or treated as black boxes. Both families are often easy to implement, relatively tolerant to modelling imperfections, and can explore large combinatorial spaces using stochastic search. However, these methods do not guarantee optimality or feasibility. Their performance is sensitive to tuning parameters (such as population sizes, mutation rates, and cooling schedules) [9], and different executions may yield different solutions even with identical input data or models. These characteristics are problematic in EMS/MPC applications.
Furthermore, research in MPC and hybrid systems shows that, when the dynamics and constraints of the system are correctly modelled, quadratic and mixed-integer formulations can enforce hard operational limits while providing stability and performance guarantees [12]. In the energy domain, compact MILP formulations, initially developed for unit commitment, show that large-scale problems can be solved with certified optimality gaps and practical computation times, provided the model is tightly formulated [17]. This synergy between feasibility, optimal control, and interpretability is challenging to replicate with purely heuristic approaches, whose convergence and solution quality are more difficult to certify in safety-critical and grid-connected environments.
The versatility of deterministic optimization for EMS is demonstrated by its successful deployment across residential, commercial, and industrial sectors, in both grid-connected and islanded configurations, utilizing a broad range of technologies, including PV, batteries, EVs, and hydrogen systems. In the residential sector, mixed-integer formulations have been used to optimize appliance and EV scheduling under real-time prices [15], to implement day-ahead home energy management with PV and batteries, including degradation-aware operation [18], and to extend the same principles to community energy management systems with peer-to-peer energy trading [19]. Similar MILP-based approaches have been demonstrated for off-grid homes with PV/BESS, showing that deterministic scheduling can reduce operating costs while extending battery life [20]. Additionally, they have been applied to multi-objective HEMS that coordinate PV, storage, EVs, and hydrogen production/consumption in vehicle-to-home/grid setups [21]. In commercial buildings, deterministic MILP and QP-based MPC have been shown to optimize HVAC plants and multi-zone climate control while respecting comfort constraints [22,23]. Field studies confirm that MPC with deterministic solvers can reliably deliver demand flexibility and cost savings in large office buildings [24]. At a larger scale, MILP-based distributed energy management has been applied to interconnected microgrids [25], industrial parks and other areas with integrated energy systems [26], district heating [27], industrial facilities [28], and microgrids that combine short-term battery storage with long-term hydrogen storage [29,30]. Collectively, these examples demonstrate that deterministic optimization methods are not merely theoretical constructs but a practical and scalable option for EMS/MPC in real DER systems, capable of handling mixed technologies and operating modes while providing the transparency and reliability essential for both internal optimization and external grid-support services.
Deterministic, optimization-based EMS and MPC formulations not only provide high-quality operating schedules but also serve as a powerful analytical framework for deriving simplified design rules and characterizations of flexibility that can be used in planning and market participation. Joint sizing and operation studies for microgrids and integrated energy systems employ MILP techniques that are essentially identical to those used for EMS, extended to include investment variables and long-term horizons [31]. By co-optimizing investment (PV capacity, BESS energy and power ratings, hydrogen storage size) alongside operational decisions, these models reveal marginal values and binding constraints that yield practical sizing guidelines (e.g., BESS energy-to-power ratios or PV-to-load ratios) [32]. Similar HEMS/CEMS formulations have been utilized in both grid-connected and off-grid settings, demonstrating that the same deterministic core can be parameterized to explore alternative technology combinations and obtain techno-economic envelopes for residential and community systems incorporating PV, BESS, EVs, and hydrogen [33]. Regarding flexibility, recent research demonstrates that flexibility “products” (upward/downward capacity, duration, activation limits) can be directly inferred from families of optimal trajectories computed by MPC/EMS under different price and constraint scenarios. This leads to the creation of flexibility envelopes and key performance indicators (KPIs) grounded in the same deterministic optimization framework used in daily operations [34,35,36]. Furthermore, data-driven methods leveraging these optimization outputs can expedite flexibility quantification by training surrogate models on deterministic EMS runs. This approach preserves the physical consistency and interpretability inherent in the underlying optimization framework [37].
The deterministic EMS/MPC framework can be seamlessly integrated with local flexibility markets (LFMs) and peer-to-peer (P2P) energy platforms by incorporating market outcomes—such as accepted bids, activation volumes, and negotiated exchanges—as additional constraints or targets within the optimization horizon. Systematic reviews of European LFMs describe clear information flows from market platforms to aggregators and local controllers, where accepted flexibility offers are translated into activation quantities and time windows that an EMS must satisfy [38,39]. Similarly, in P2P and transactive energy models, the clearing engine determines agreed energy exchanges and prices among peers, which are subsequently embedded as equality or inequality constraints (e.g., scheduled imports/exports, minimum traded volumes) in deterministic EMS/MPC formulations at the building, microgrid, or community level [40]. Real-world hierarchical (two-level) implementations, where remote solvers such as GAMS 25.0.3/CPLEX 12.8.0.0 compute optimal 24–72 h schedules while local controllers adapt setpoints in real time, demonstrate that deterministic EMS can reliably incorporate market-driven setpoints. By reconciling these signals with local technical constraints, these systems provide a practical blueprint for market-integrated, flexibility-enabled operations [41].
When a complete physical and economic model of the system is available, and optimization is performed using exact deterministic methods (LP/QP/MILP/MIQP), a consistent pattern emerges [15]: the dominant source of performance degradation is not the optimizer itself, but rather the uncertainty in exogenous forecasts and data discretization. In building applications, studies on MPC for climate control explicitly show that, for a well-identified building model, the closed-loop performance is primarily driven by the accuracy of weather and load predictions rather than by limitations of the control algorithm [42]. This is reinforced by recent work that quantifies the value of information for building load forecasts: when the underlying model and MPC formulation are kept fixed, degrading the load forecast directly increases operating cost and distorts storage schedules, clearly identifying the forecast as the primary source of error [43]. Similar conclusions are obtained in simulation campaigns that inject realistic weather forecast errors into predictive controllers while keeping the microgrid model and optimization unchanged, observing significant increases in cost and discomfort compared to the ideal case with perfect forecasts [44]. In systems with large shares of variable renewable generation, wind and PV forecasting studies further confirm that forecast error distributions, especially infrequent but large deviations, are the primary driver of sub-optimal commitment and dispatch decisions when the network and unit models are otherwise accurate [45,46]. In parallel, reviews of electricity price forecasting emphasize that the difficulty of predicting prices translates directly into operational and arbitrage inefficiencies whenever deterministic optimization is used downstream [47].
The same separation between “perfect” optimization and imperfect information is also evident in microgrid and system-level studies. In microgrid EMS, stochastic MPC and stochastic security-constrained planning are introduced precisely because the main source of operational risk is the uncertainty in demand and renewable generation, while the network and device models are treated as fixed and reliable [48,49]. Rolling-horizon frameworks for joint energy supply and demand planning make this explicit: the deterministic formulation (constraints, cost function, physical model) is held constant, and performance improvements are achieved by periodically updating forecasts and re-optimizing, not by changing the underlying model [16]. Comparative studies on microgrid energy management demonstrate that, with a deterministic model and forecasts updated in a rolling horizon, economic performance can match or even surpass more complex uncertainty-modelling approaches, underscoring that forecast handling is the key lever once the model is well defined [50]. Fitted rolling-horizon control schemes follow a similar logic, utilizing repeated deterministic optimizations to compensate for forecast errors as new measurements become available [51]. Finally, real two-level EMS implementations—where remote deterministic optimization (e.g., GAMS/CPLEX) computes optimal 24–72 h schedules and local controllers adapt setpoints in real time—explicitly rely on updated demand, PV, and price forecasts as the main source of variation between the planned and realized operation, while the physical/economic model and optimization method remain unchanged [52]. Together, these results support the claim that, once the system is correctly modelled and a complete deterministic optimizer is used, the principal—and in many practical cases, almost the only—source of operational error is the quality of the forecasts for demand, renewable generation, and energy prices.
Although the references in the previous paragraphs have shown that rolling-horizon optimization and frequent forecast updates are effective ways to mitigate the impact of uncertainty in EMS/MPC, most studies focus on reporting improvements in cost, comfort, or reliability without explicitly analyzing how forecast errors degrade the value of the optimal objective compared to an ideal perfect-forecast benchmark [43]. Existing microgrid and building-control studies typically treat the choice of time step and optimization horizon as secondary design parameters; they may observe that finer granularity or shorter horizons improve robustness, but they do not systematically characterize how the combination of step size and horizon length shapes the gap between the theoretical optimum and realized performance under realistic forecast errors [50,51]. Likewise, hierarchical two-level implementations—where a remote optimizer computes day-ahead or multi-day schedules and local controllers adapt setpoints in (near) real time—have primarily aimed to demonstrate operational feasibility or the provision of grid-support services, rather than quantifying their ability to attenuate the degradation of the objective function induced by faulty forecasts [52]. In contrast, the present work explicitly uses the optimal objective value as a primary performance metric. It shows that a two-stage deterministic EMS—with a carefully selected granularity and horizon at the upper level, and a fast local corrective layer at the lower level—can measurably reduce the impact of demand, renewable generation, and price forecast errors on the achievable optimum, thereby providing a quantitatively grounded design guideline for cloud–edge EMS architectures. A further differentiating point is that, while the second EMS stage is always located at the edge, close to the manageable microgrid assets, the first stage can be executed remotely in the cloud or locally at the edge, depending on the computational needs of the optimization algorithm.
Although the time-scale mismatch between high-level energy management and fast physical dynamics has been widely acknowledged in the literature, this challenge remains largely unresolved in practical microgrid deployments. The root causes are structural: market-based scheduling, forecasting processes, and optimization-based EMS are typically formulated at coarse temporal resolutions to ensure tractability and robustness, while real-world generation, storage, and loads evolve at much faster time scales. Fully addressing this mismatch through multi-level or high-frequency optimization often leads to prohibitive computational complexity, increased modelling requirements, and limited feasibility for real-time or edge implementation. As a result, most operational EMS frameworks implicitly accept this mismatch and rely on simplified assumptions, leaving residual tracking errors and inefficiencies unaddressed. This work builds upon this observation and proposes a minimal yet practical two-stage architecture that explicitly targets the consequences of time-scale mismatch, rather than attempting to eliminate it, thereby improving robustness and operational performance under realistic constraints.
The rest of this paper is organized as follows. Section 2 presents the proposed two-stage deterministic EMS architecture, detailing the upper-level rolling-horizon optimization model, the local corrective control layer, and the simulation framework used to evaluate different granularities and execution platforms (cloud, edge, and mixed configurations). Section 3 presents the numerical results, divided into two parts: first, the effect of temporal granularity, forecast uncertainty, and execution platform on the objective function obtained by the upper-level optimization; second, a quantitative assessment of discretization error and its compensation through fast local adaptation using real high-resolution PV data. Section 4 provides a comprehensive discussion of the results, highlighting the trade-off between granularity, computational needs, and local corrective capability, and analyzing the implications for practical cloud–edge EMS deployments. Finally, Section 5 summarizes the main conclusions and outlines future research directions regarding adaptive temporal resolution and the co-design of forecasting, optimization, and edge-level control.
2. Methodology
In this work, a hierarchical two-stage EMS architecture is proposed to coordinate the operation of complex microgrids with multiple DERs (see Figure 1). The upper or first stage is devoted to calculating optimal set-points over a multi-day optimization horizon (three days in the case study, although this can be adapted to each application), typically with an economic objective such as minimizing operating costs, maximizing self-consumption, or reducing emissions under technical and operational constraints. This stage aggregates forecasts of demand, PV generation, EV availability, energy prices, and other variables, and applies them to detailed models of PV, BESS, hydrogen systems, and grid interaction, forming a deterministic optimization problem (e.g., LP/QP/MILP/MIQP) that is solved in a rolling-horizon framework. From a functional point of view, Stage 1 acts as the planner: it produces a time-stamped schedule (set-points) for the main controllable assets (e.g., BESS charge/discharge, EV charging power, H2-related set-points, and grid exchange where applicable), to be followed during the next control interval until the next re-optimization. In the rolling-horizon implementation, this planning step is periodically repeated using updated measurements and refreshed forecasts, so that forecast mismatches do not accumulate over long horizons. The second or lower stage acts as a corrective layer that adapts these scheduled set-points (calculated in the first stage) to the actual state of the microgrid and applies them to the physical devices, compensating for forecast errors, discretization effects, and device-level anomalies while preserving the main optimization objectives. Functionally, Stage 2 is a fast tracking and feasibility layer: it runs at a higher refresh rate than Stage 1, uses local measurements to compute the deviation between scheduled and real operation, and updates the real-time set-points sent to the devices to (i) enforce hard constraints (e.g., SoC and power limits) and (ii) keep the operation as close as possible to the planned objective. In this way, uncertainty is turned into small local corrective actions rather than objective-function degradation or constraint violations. The main uncertainty sources in deterministic EMS deployments include: (i) forecast errors in exogenous inputs (load, PV generation, energy prices, and EV availability/demand), (ii) structural discretization error introduced by coarse time steps, and (iii) unexpected device-level events (e.g., unavailability, communication/actuation issues, or model/parameter mismatch). The proposed two-level EMS compensates for these uncertainties through two complementary principles: (1) rolling-horizon re-optimization at the upper layer to continuously refresh the plan as new data arrive, and (2) fast local closed-loop set-point adaptation at the lower layer to correct residual mismatches between re-optimizations and preserve feasibility.
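A minimal sketch of this rolling-horizon chaining logic is shown below. It is illustrative only: `solve_stage1` stands in for the deterministic LP/MILP planner of Section 2.1 (replaced here by a trivial surplus-following rule so that the snippet runs), and the horizon length, time step, battery parameters, and function names are assumptions, not the deployed implementation.

```python
import numpy as np

HORIZON = 3 * 96   # three-day horizon at 15 min resolution
DT_H = 0.25        # time step, hours

def solve_stage1(pv_fc, load_fc, soc0):
    """Placeholder for the deterministic optimizer: returns a BESS power
    schedule over the horizon (kW, charge > 0). The real Stage 1 solves an
    LP/MILP; this toy rule just charges with PV surplus and covers deficits."""
    return np.clip(pv_fc - load_fc, -5.0, 5.0)

def run_rolling_horizon(pv_fc_all, load_fc_all, soc0, eta=0.95, e_cap=20.0):
    soc, applied = soc0, []
    for t in range(len(pv_fc_all) - HORIZON):
        # Stage 1: re-plan with refreshed forecasts and the measured SoC
        plan = solve_stage1(pv_fc_all[t:t + HORIZON],
                            load_fc_all[t:t + HORIZON], soc)
        p = plan[0]                                  # apply only the first set-point
        soc += (eta * max(p, 0.0) + min(p, 0.0) / eta) * DT_H
        soc = min(max(soc, 0.0), e_cap)              # chained SoC seeds the next run
        applied.append(p)
    return np.array(applied)
```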
Unlike many existing hierarchical EMS implementations, where the computation platform of each control layer is fixed a priori (for example, day-ahead optimization always in the cloud and local control always at the edge), the present architecture explicitly decouples the functional role of the first stage from its execution platform. The second stage is assumed to run on local edge controllers, as close as possible to the microgrid assets, to guarantee fast reaction, high availability, and independence from external communication links. In practice, this lower-layer placement is critical (and effectively mandatory) because it operates on sub-minute time scales, directly interfaces with measurements and actuators, and must remain available to enforce hard constraints even under communication delays or loss of cloud connectivity. By contrast, the first stage is designed to be deployable either remotely in the cloud or locally on an edge or on-premise server. The choice between cloud, edge, or hybrid cloud–edge execution depends on the complexity of the optimization problem (system model, granularity, horizon, and number of assets) and the available computational resources on each platform [53]. Cloud execution typically offers higher computational power and easier scalability at the expense of additional communication latency and dependence on connectivity, while edge execution reduces latency and increases resilience but must respect stricter constraints on CPU, memory, and execution time; hybrid arrangements seek to combine both advantages by placing heavy optimization tasks in the cloud and fast corrective control at the edge. Overall, the combined use of both stages mitigates uncertainty through two complementary mechanisms: Stage 1 reduces the planning error via rolling-horizon re-optimization with updated data, while Stage 2 reduces the tracking error by closing the loop locally and compensating for residual mismatches between re-optimizations.
2.1. Stage 1—Microgrid Optimization
To carry out the tests shown in Section 3.1, a general microgrid model was developed, featuring solar photovoltaic installations for energy generation (or other renewable energy sources, such as wind or hydro), BESS, EV charging points, hydrogen systems, and energy consumption points, both manageable and non-manageable (Figure 2). This general microgrid model is tailored to the target installation through a configuration file that specifies the existing devices, as well as the grid-connected or isolated operation, the integration of DC or AC elements, and other operational aspects, such as the possibility of feeding energy back into the grid.
The main operational characteristics of the microgrid components (power, storage capacity, efficiency, etc.) are incorporated through another configuration file. Forecasts (such as solar generation, energy prices, and energy demand) that largely determine the system’s performance are integrated through a forecasts file. This file also includes forecasts of equipment availability, enabling the first level of the EMS to calculate and generate setpoints adapted to the microgrid’s state, a feature that not all examples in the literature incorporate.
This optimization algorithm has been programmed in the Python environment, using several standard libraries.
It is worth noting that this optimization code has been successfully tested in various execution environments, including cloud and edge environments, as well as in different real-world microgrids [54,55].
The following paragraphs summarize the mathematical model implemented, and Appendix A lists all the parameters and variables used in this model.
In this case, the objective function (1) minimizes the operating and maintenance costs of the system components, including the electric batteries, EV charging, the H2 system, and the grid connection:

$$
\min \; \mathrm{Cost}_{\mathrm{bat}} + \mathrm{Cost}_{\mathrm{ev}} + \mathrm{Cost}_{\mathrm{h2}} + \mathrm{Cost}_{\mathrm{grid}} \qquad (1)
$$

where Cost_bat represents the costs associated with battery operation and management (2), Cost_ev the costs associated with charging electric vehicles (3), Cost_h2 the costs associated with the hydrogen system (4), and Cost_grid the costs of buying and selling electricity from the grid (5).
The first term of the sum in Equation (2) allows for the imputation of the usage cost of the batteries. The second term is a mathematical artifice to penalize highly variable charging or discharging profiles. The weight α is very small, so it has no decisive effect on the result, but it does prevent peaks in charging or discharging caused by non-homogeneous or time-varying processes.
The first term of Equation (3) allows for the imputation of the usage cost of EV batteries, which is essential in V2G operation modes. The second term accounts for the income from the sale of energy to EVs at charging stations; in the case of private or domestic facilities, for example, this term would be zero. The last term, as in the case of the BESS, is a mathematical artifice to avoid an irregular or peaky charging or discharging process.
In this work, battery and electric vehicle operational costs are modelled as an equivalent cost per unit of energy throughput, derived from the asset investment cost, nominal capacity, and expected lifetime, to account for degradation effects within an operational optimization framework.
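One common way to build such an equivalent throughput cost, consistent with the description above (the symbols and the exact expression are ours, and the model implementation may differ), is:

$$
c^{\mathrm{thr}} = \frac{C_{\mathrm{inv}}}{N_{\mathrm{cyc}} \cdot \mathrm{DoD} \cdot E_{\mathrm{nom}}} \quad [\text{EUR/kWh}],
$$

where $C_{\mathrm{inv}}$ is the investment cost, $E_{\mathrm{nom}}$ the nominal capacity, $\mathrm{DoD}$ the usable depth of discharge, and $N_{\mathrm{cyc}}$ the expected number of equivalent full cycles over the asset lifetime, so that the denominator is the total energy throughput the asset can deliver before replacement.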
The first term of the sum in Equation (4) accounts for the cost of using the hydrogen facility, both for the electrolyser and the fuel cell. The second term accounts for the cost of replenishing the hydrogen storage by external means, specifically by truck. The last term accounts for the revenue from supplying hydrogen to vehicles.
Equation (5) represents the costs associated with the interaction between the microgrid and the external electricity network. The first term represents the contracted power charge, while the second term accounts for the variable cost of energy imported from or exported to the grid. Although the contracted power charge can be fixed and therefore does not affect the optimization result, it is retained to preserve the generality of the model, as contracted power may become a decision variable in demand-side management, flexibility-oriented, or system sizing applications.
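To make the structure of (1)–(5) concrete, the snippet below sketches a stripped-down version of the objective with only the battery-throughput and grid-energy terms, using the open-source PuLP library. This is a hedged illustration rather than the paper’s code: the modelling library, all parameter values, and the variable names are assumptions.

```python
import pulp

T = range(96)                 # one day at 15 min resolution
DT = 0.25                     # h
price = [0.10 if t % 96 < 32 else 0.25 for t in T]   # EUR/kWh, toy tariff
C_BAT = 0.03                  # EUR/kWh equivalent battery throughput cost

prob = pulp.LpProblem("ems_objective_sketch", pulp.LpMinimize)
p_ch  = pulp.LpVariable.dicts("p_ch",  T, lowBound=0, upBound=10)   # kW
p_dis = pulp.LpVariable.dicts("p_dis", T, lowBound=0, upBound=10)
p_imp = pulp.LpVariable.dicts("p_imp", T, lowBound=0, upBound=20)
p_exp = pulp.LpVariable.dicts("p_exp", T, lowBound=0, upBound=20)

cost_bat  = pulp.lpSum(C_BAT * (p_ch[t] + p_dis[t]) * DT for t in T)            # cf. (2), usage term
cost_grid = pulp.lpSum(price[t] * (p_imp[t] - 0.8 * p_exp[t]) * DT for t in T)  # cf. (5), energy term
prob += cost_bat + cost_grid                                                    # cf. (1), two terms only

for t in T:   # minimal DC balance (cf. (6)) with a fixed 5 kW load and no PV
    prob += p_imp[t] + p_dis[t] == 5.0 + p_ch[t] + p_exp[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("objective [EUR]:", pulp.value(prob.objective))
```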
The main constraint of the system, in this example, is the energy balance in the DC grid. Equation (6) has been established based on Kirchhoff’s law, considering the efficiencies of the different system elements and their connections to the DC grid; in the equation, the power flows are affected by the converters’ efficiencies.
Equation (7) sets the maximum and minimum power that can be extracted from the photovoltaic installation at any given time, while Equation (8) limits the power that can be extracted at each moment of the simulation, which is always less than or equal to the estimated prediction.
Equation (9) indicates that the initial charge state of the batteries is entered as input data into the optimization process.
Equation (10) represents the maximum and minimum operating capacity of the battery. If a minimum capacity is set (e.g., 10% of SoC), this is reflected in the batsoc_min parameter.
On the other hand, Equations (11) and (12) determine the maximum and minimum charging power of the battery. Equation (13) prevents the battery from being charged and discharged simultaneously, and Equation (14) sets the battery’s state of charge for the following period, considering the state of charge associated with the charging and discharging power for that period.
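Since the equation images are not reproduced here, a plausible concrete form of (10)–(14), consistent with the verbal description and with standard mixed-integer battery models, is the following (our notation; the actual formulation may differ in details such as where the efficiencies are applied):

$$
\begin{aligned}
& \mathrm{soc}_{\min} \le \mathrm{soc}(t) \le \mathrm{soc}_{\max}, \\
& 0 \le P_{\mathrm{ch}}(t) \le y(t)\, P_{\mathrm{ch}}^{\max}, \qquad 0 \le P_{\mathrm{dis}}(t) \le \left(1 - y(t)\right) P_{\mathrm{dis}}^{\max}, \qquad y(t) \in \{0,1\}, \\
& \mathrm{soc}(t+1) = \mathrm{soc}(t) + \left(\eta_{\mathrm{ch}}\, P_{\mathrm{ch}}(t) - \frac{P_{\mathrm{dis}}(t)}{\eta_{\mathrm{dis}}}\right) \Delta t,
\end{aligned}
$$

where the binary variable $y(t)$ prevents simultaneous charging and discharging, as required by Equation (13).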
Finally, Equations (15)–(18) accumulate the power difference between consecutive periods: two variables count the positive difference and two count the negative difference. In all cases, the variables are binary. These values are used in (2) to avoid an irregular or peaky charging or discharging process.
Equation (19) indicates that the initial charge state of the EV batteries will be zero if no vehicle is connected to the charging station. Equation (20) limits the minimum and maximum state of charge of the EV battery. In this case, if there is no vehicle connected, as indicated by evavailability(t), the minimum value is 0; otherwise, it is the established minimum value. Equation (21) limits the maximum and minimum charging power.
Equation (22) determines the state of charge of the vehicle battery, taking into account the previous state, the vehicle’s availability, the state of charge upon arrival if a new vehicle has been connected, and the energy managed during charging in the previous period.
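One plausible form of (22), consistent with this description (our notation, hedged in the same way as above), is:

$$
\mathrm{soc}_{\mathrm{ev}}(t+1) = \mathrm{av}(t+1)\left[\mathrm{av}(t)\,\mathrm{soc}_{\mathrm{ev}}(t) + \left(1 - \mathrm{av}(t)\right)\mathrm{soc}_{\mathrm{arr}}(t+1) + \eta_{\mathrm{ev}}\, P_{\mathrm{ev}}(t)\, \Delta t\right],
$$

where $\mathrm{av}(t) \in \{0,1\}$ is the availability input evavailability(t) and $\mathrm{soc}_{\mathrm{arr}}$ is the state of charge upon arrival. Because availability is input data rather than a decision variable, these products keep the formulation linear.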
Finally, Equations (23) and (24) calculate the power difference between consecutive periods: one variable counts the positive difference and another counts the negative difference. Both variables are always non-negative. These values are used in the smoothing term of (3) to avoid an irregular or peaky charging or discharging process.
The first two equations determine the operating power limits of both the fuel cell (25) and the electrolyser (26). Equations (27)–(30) define the characteristics of hydrogen storage. Following a similar pattern to electric batteries, the initial state of charge (27), the minimum and maximum storage capacity (28), the impossibility of charging and discharging simultaneously (29), and the state of charge at the next instant (30) are determined. To do this, the hydrogen inputs from either the electrolyser or the auxiliary truck, as well as the hydrogen output for the fuel cell or vehicle recharging, are considered. Equations (31) and (32) correspond to the fuel cell and electrolyser models, which determine the amount of hydrogen required to consume or generate a certain amount of electrical power, respectively.
Finally, Equations (33) and (34) associate the hydrogen generated by the electrolyser and consumed by the fuel cell with the system’s hydrogen storage, limiting the amount that can be consumed and/or generated.
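Analogously, the hydrogen storage balance (30) can plausibly be written as (again our notation, not necessarily the exact implemented form):

$$
m_{\mathrm{H_2}}(t+1) = m_{\mathrm{H_2}}(t) + \left(\dot m_{\mathrm{ely}}(t) + \dot m_{\mathrm{truck}}(t) - \dot m_{\mathrm{fc}}(t) - \dot m_{\mathrm{veh}}(t)\right) \Delta t,
$$

with the electrolyser and truck terms as inflows and the fuel-cell and vehicle-refuelling terms as outflows, bounded through (33) and (34).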
The power exchanged with the grid must be less than or equal to the contracted power, both when purchasing energy (35) and when selling energy (36).
2.2. Stage 2—Set-Points Adaptation to the Actual State of the Microgrid
Several studies have shown that discretizing high-frequency energy time series with coarser temporal resolutions introduces a structural error that increases with the length of the sampling interval. This effect has been documented for PV generation, household demand, and hybrid PV–storage systems, where increasing the time step leads to significant distortions in peak power, energy balance, self-consumption indices, and optimal sizing decisions [54,55,56,57,58,59]. For example, ref. [53] demonstrated that time-averaging domestic loads smooths peaks and alters key statistics relevant for on-site generation assessments, while ref. [57] showed that PV–load matching errors increase systematically with the simulation time step. Similar behaviours have been observed in real PV facilities, where transitioning from sub-minute to hourly data introduces non-negligible deviations in self-consumption and self-sufficiency calculations [58]. Additionally, in hybrid PV–battery assessments, averaging windows of up to one hour can bias energetic and economic indicators by several percent [59]. These findings confirm that the first-stage EMS, which necessarily operates on discretized and forecast-based signals, inevitably injects an aggregation error into the control chain. Consequently, a fast second-stage EMS is required to correct these set-points at the edge, adapting them to the actual microgrid state and mitigating both discretization-induced deviations and the associated degradation of the objective function.
A two-stage EMS structure with a fast local adaptation layer is established in the literature as essential for reliable and high-performance microgrid operation under uncertainty. Numerous studies have demonstrated that upper-level optimization, typically performed with PV production, demand, EV availability, or market price forecasts, inevitably diverges from real-time conditions due to prediction errors, discretization errors, and device-level anomalies. To mitigate these mismatches, a second, local control layer is commonly introduced to adjust set-points to the measured state of the microgrid, ensuring feasibility and maintaining the economic or technical objective despite forecasting imperfections [60,61]. Some sources highlight that this local layer not only compensates for forecast uncertainty but also corrects the effect of temporal resolution or sample-time mismatch between the upper-level schedule and fast system dynamics. The authors in [53] show that their real-time controller compensates for both “forecast uncertainties and sample-time resolution” when tracking the reference trajectory in a home microgrid, while [62] emphasizes that coarse day-ahead or half-hourly set-points require intra-hour local adjustment to maintain system stability and minimize deviations. This hierarchical structure is further supported in stochastic MPC frameworks, where the first stage provides a robust schedule and the second stage operates at a finer resolution to correct realization-specific deviations [63,64].
Beyond uncertainty, the local adaptation layer is also the only mechanism capable of reacting to unexpected device-level events, such as temporary disconnection or derating of PV inverters, BESS units, or EV chargers, without recomputing the whole optimization problem. By relying on real-time measurements rather than forecasts, the second stage absorbs such anomalies, preserving the continuity of operation and preventing constraint violations.
This paper extends this line of research by quantitatively analyzing the propagation of discretization error from the first-stage EMS to the real system, using real 5 s PV generation data. Whereas previous studies primarily focus on the qualitative need for a second stage, this study explicitly measures how the discretization of set-points at 1 min, 5 min, 15 min, and 1 h resolutions affects tracking accuracy and how this error accumulates in practice. Furthermore, the amount of local flexibility required—expressed as the nominal power P_nom and energy capacity E_cap of a BESS—to fully absorb the mismatch induced by each granularity is computed. This provides, to our knowledge, one of the first empirical quantifications of the flexibility required to counteract the combined effects of discretization error, forecasting uncertainty, and measurement delay using a simple local controller at the edge. These results complement the existing literature by demonstrating not only that a second stage is necessary, as already recognized, but also the extent of local corrective capability required to ensure the microgrid operates as intended while respecting hardware limits and maintaining the EMS objective.
Hierarchical control architectures consistently emphasize that the lowest layer must operate at short cycle times—typically seconds or sub-seconds—to ensure stability, enforce operational limits, and track set-points that may deviate from upper-level forecasts due to uncertainty or disturbances [65,66]. At these time scales, only fast-responding resources, such as BESS, supercapacitors, or inverter-interfaced DERs, are suitable; traditional large-scale storage technologies, such as pumped-hydro or compressed-air systems, exhibit response times of minutes and are incapable of performing such rapid corrections [67,68]. Numerous studies have shown that BESS units, in particular, can absorb short-term PV fluctuations, regulate grid exchange, or maintain EV charging profiles thanks to their millisecond-scale response and high controllability [69,70,71]. Because these fast corrections must be performed close to the hardware and within strict timing constraints, the local controller is typically implemented with simple, robust, low-complexity rules rather than computationally intensive optimization, a design principle validated in several real EMS deployments and rule-based architectures [72,73]. This is also the approach followed in CIRCE’s multi-technology DSM control structure, where 15 min set-points derived from an upper-level optimizer are converted locally into 5 s commands through a lightweight decision matrix executed in an embedded industrial device, ensuring real-time correction of deviations and the ability to react to unexpected behaviour such as a sudden disconnection or derating of batteries, PV inverters, or local loads (see the Energy Box control logic on page 2 and Table I in [42]). Accordingly, this paper evaluates a second EMS stage that applies a fast, rule-based corrective layer.
While previous works demonstrate the conceptual need for such fast local adaptation, the contribution presented in this paper goes further by quantifying the magnitude of discretization error, its propagation across several temporal granularities (1 min, 5 min, 15 min, and 1 h), and the concrete BESS flexibility needs required to absorb those deviations under realistic measurement delays and forecast errors. This analysis, therefore, complements existing hierarchical EMS designs by providing empirical evidence of the corrective capability required from the second stage and demonstrating that even simple rule-based controllers, when supported by appropriately sized BESS units, are sufficient to maintain high-quality tracking at the edge despite the combined effects of discretization, forecasting uncertainty, and local disturbances.
3. Results
In this section, the main research results are presented in two subsections, each related to a specific EMS stage: Stage 1, which involves the optimal scheduling of microgrid operations, and Stage 2, which involves set-point adaptation to the actual grid state. Although the analyzed scenarios adopt a simple structure, all simulations are based on real facilities and real high-resolution operational data, ensuring that the observed behaviours and conclusions are representative of practical EMS deployments rather than synthetic or idealized scenarios.
3.1. Stage 1—Microgrid Optimization and Error Effect Evaluation
As previously justified in this work, a two-stage EMS architecture is particularly suitable for coordinating complex microgrids, with the upper level responsible for optimal resource scheduling and the lower level for fast corrective control. The upper stage computes optimal operating schedules based on a detailed physical and economic model. In contrast, the lower stage adapts set-points in (near) real time to cope with measurement deviations, unforeseen events, and equipment behaviour. In addition to forecast errors, another relevant source of mismatch between the planned and the realized operation is the discretization of inherently continuous signals (e.g., solar PV generation or energy demand) when they are represented with a finite temporal resolution in the optimization problem; this discretization effect is partly analyzed in a separate section, but it is ultimately absorbed at local level by the second stage. Depending on the size and complexity of the optimization problem, as well as on the available computational capabilities, the upper stage can be executed either locally on an edge device or remotely in the cloud. Temporal granularity and rolling-horizon execution play a central role in this context: shorter time steps and frequent re-optimization reduce both forecast and discretization errors and allow the EMS to react to changes in the microgrid configuration, such as the disconnection of assets, unexpected variations in demand or generation, or equipment operating outside its nominal regime.
To analyze these trade-offs, a set of numerical experiments is conducted on a real test microgrid, whose main characteristics are summarized in Table 1 and whose schema is shown in Figure 3. The system, isolated from the grid, includes a photovoltaic plant, a controllable electric vehicle charging point, a hydrogen refuelling station, and a battery energy storage system. Since there is no electrolyser, hydrogen is supplied to the system by external trucks, and the optimization model also determines when to import hydrogen; a fuel cell is present to produce electricity from the stored hydrogen.
Two series of simulations are performed over one month for several temporal granularities and execution platforms: in the first series, each optimization execution starts from a fixed state of charge for the BESS and hydrogen storage; in the second series, the initial state of charge is set equal to the SoC of the storage systems at the end of the first step of the previous execution, emulating continuous real-life operation and propagating the impact of past decisions.
The first set of tests is used to evaluate the effect of temporal granularity on the objective function for the different execution platforms (cloud and edge). The objective function minimizes operating costs, so negative values of the objective imply net income, in this case derived from vehicle charging.
As shown in Table 2, reducing the time step improves the objective function on both platforms, and the results across platforms are very similar, confirming that finer granularity in the first stage of the EMS yields better results regardless of where it is executed.
The results in Table 3 indicate that finer granularities yield better results, albeit at the cost of increased computational requirements. For the 10 min granularity executed on the server, and for both cases executed on the edge, maximum execution times of 300 s (5 min) are observed, which leaves little time for the other processes required for EMS execution, such as communication with external platforms, data collection, forecast calculation and, in the case of the cloud, communication with the second level of the EMS. In the edge executions, this effect is accentuated by the fact that the maximum execution time was capped at 300 s. These results confirm that finer granularities yield a better objective function but also require more capable execution platforms; moreover, some granularities may become invalid in practice as the execution time approaches the length of the time step.
The effect of forecast error, specifically in the case of solar PV generation, on the optimization process has also been evaluated. Table 4 and Table 5 show the effect of adding an error to the solar forecasts on the performance of the optimization system with granularities of 15 min and 1 h. In both cases, the error was introduced as normally distributed noise with a mean of zero and standard deviations of 5%, 15%, and 25%.
As can be seen, when the error increases, the maximum, minimum, and average values of the objective function change erratically, with no clear trend apparent. This may be due to the random, zero-mean, symmetric nature of the injected error, which modifies solar production values both upwards and downwards. Systematic forecasting errors that consistently underestimate or overestimate the prediction would lead to different results.
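The snippet below shows how such a perturbation can be generated; whether the percentage refers to the instantaneous forecast value or to the nominal plant power is an implementation choice, and the relative-to-forecast variant used here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_pv_forecast(pv_fc, sigma):
    """Zero-mean Gaussian error with std = sigma (fraction of the forecast)."""
    noisy = pv_fc * (1.0 + rng.normal(0.0, sigma, size=len(pv_fc)))
    # clipping keeps PV power physical but slightly biases the mean upward
    return np.clip(noisy, 0.0, None)

pv_fc = np.maximum(0.0, np.sin(np.linspace(0, np.pi, 96))) * 50.0  # toy 50 kW day
scenarios = {s: perturb_pv_forecast(pv_fc, s) for s in (0.05, 0.15, 0.25)}
```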
When comparing the 15 min and hourly executions, it is confirmed that the objective function yields better results for the finer granularity.
Overall, the deviation from the perfect-forecast benchmark (Error = 0%) increases as the injected prediction error grows, and finer granularity generally reduces this degradation compared with hourly discretization. A limited non-monotonic effect may appear in the 15 min case at the highest error level, where the objective happens to be closer to the benchmark than for intermediate errors; this should not be interpreted as a systematic benefit of higher uncertainty, but as the outcome of a particular error realization combined with rolling-horizon re-optimization.
These observations suggest that reducing the time step, together with more frequent forecast updates, improves the results of the optimization process. The disadvantage of this strategy is that it requires more computing power or longer execution times, so a balance must be struck between accuracy and execution time.
3.2. Stage 2—Set-Points Adaptation to the Actual State of the Microgrid and Error Effect
In a deterministic two-stage EMS, a local corrective layer is required for several reasons. First, the upper-level optimization relies on forecasts of demand, renewable generation, EV availability, and energy needs and prices. Even with accurate physical and economic models, these forecasts are imperfect, and as a consequence, the calculated operation set-points will deviate from the perfect, optimal schedule. A fast local controller can partially compensate for these deviations in real time to preserve the original objective (e.g., minimize operational costs, maximize self-consumption, or reduce pollutant emissions). Second, the simple fact of discretizing continuous-time signals into finite time steps (1 min, 5 min, 15 min, 1 h, etc.) introduces a structural “sampling error” between the continuous underlying trajectories and the step-constant set-points used in the first stage. This discretization error can also be absorbed at the local level by adjusting set-points at a higher frequency. Third, local adaptation is the only way to accommodate device-level anomalies or unmodelled behaviour (e.g., a BESS or inverter temporarily out of service, or an electric vehicle unexpectedly connected to the charging point) without recalculating the whole optimization problem; the next rolling-horizon re-optimization then incorporates these unexpected phenomena. In this section, a simple yet representative PV/BESS example—similar to implementations already deployed in real-world projects and operational edge environments—is used to illustrate how a second-stage EMS can mitigate these effects through a straightforward, rapid, and effective local correction mechanism.
The numerical analysis is based on real power measurements from a single PV inverter over two consecutive days in a real solar PV facility (see Figure 4). The raw data consist of active power values sampled every 5 s, which serve as the reference. From this 5 s series, four block-wise discretizations are constructed with time steps of 1 min, 5 min, 15 min, and 1 h, respectively. These discretized profiles emulate the set-points that would be produced by a first-stage EMS operating at different temporal granularities. All subsequent calculations—errors, corrective actions, and equivalent BESS requirements—are performed at the original 5 s resolution to capture the fast dynamics of the inverter.
As a first evaluation, the pure tracking error between the 5 s reference and each discretized profile is quantified, without any local correction. Considering only solar hours (non-zero PV output), the mean absolute error (MAE) and root mean square error (RMSE) grow significantly with coarser granularity; see Table 6.
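The block-wise discretization and the error metrics of Table 6 can be reproduced with a few lines of NumPy, as sketched below; the real inverter data are not public, so a synthetic 5 s profile is used here as a stand-in.

```python
import numpy as np

def block_discretize(p, block):
    """Replace each block of samples by its mean, held constant (step-wise set-point)."""
    n = len(p) // block * block
    return np.repeat(p[:n].reshape(-1, block).mean(axis=1), block)

DT_S = 5                                   # sampling period of the reference, s
n = 86400 // DT_S                          # one synthetic day
p_ref = np.maximum(0.0, np.sin(np.linspace(0.0, np.pi, n))) * 4000.0   # W
p_ref = np.clip(p_ref + np.random.default_rng(0).normal(0, 150, n), 0.0, None)

for step_s in (60, 300, 900, 3600):
    sp = block_discretize(p_ref, step_s // DT_S)
    err = p_ref[:len(sp)] - sp
    sun = p_ref[:len(sp)] > 0.0            # solar hours only, as in Table 6
    mae = np.mean(np.abs(err[sun]))
    rmse = np.sqrt(np.mean(err[sun] ** 2))
    print(f"{step_s:>5} s steps: MAE = {mae:6.1f} W, RMSE = {rmse:6.1f} W")
```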
As can be seen, using 15 min granularity at the first level introduces an RMSE of ~260 W relative to the actual 5 s output of this inverter; therefore, the second level must be able to cover at least that correction range. Conversely, reducing the time step to 1 min decreases the error significantly (RMSE ~85 W), but increases the size of the optimization problem and the communication requirements at the first EMS level.
Assuming a “perfect” second stage that knows the 5 s trajectory in advance and can act instantaneously, a BESS could theoretically cancel this tracking error entirely. In that idealized case, the corrective power and energy capacity required from the BESS to remove 95%, 99%, or all deviations are shown in Table 7.
For this case study, the use of hourly set-points at the first level introduces errors of up to ~380 W RMSE and requires the second level to provide approximately 2.6 kW/0.3 kWh of local flexibility per inverter to compensate for the time mismatch between the planning resolution (1 h) and the control resolution (5 s). Reducing the granularity to 15 min relaxes these requirements.
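One plausible way to derive such flexibility requirements from the deviation series (an assumption, since the paper does not detail the exact computation) is to take percentiles of the instantaneous corrective power for P_nom and the peak-to-peak excursion of the integrated deviation for E_cap:

```python
import numpy as np

def bess_requirements(err_w, dt_s=5, quantile=100.0):
    """err_w: deviation series (W) the BESS must absorb (+) or supply (-)."""
    p_nom = np.percentile(np.abs(err_w), quantile)   # power rating, W
    energy = np.cumsum(err_w) * dt_s / 3600.0        # running energy content, Wh
    e_cap = energy.max() - energy.min()              # peak-to-peak swing, Wh
    return p_nom, e_cap

# requirements to cover 95%, 99% or 100% of the deviations, e.g.:
# for q in (95, 99, 100):
#     print(q, bess_requirements(err, quantile=q))
```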
As indicated, a fast and straightforward rule-based correction is proposed for application at this second level of the EMS. The local corrective mechanism focuses on compensating for the mismatch between the discretized PV generation set-points produced by the first stage and the actual high-frequency PV output. This compensation is implemented through the controlled charging or discharging of the BESS: when the discretized set-point is lower than the actual PV generation, the battery is charged to absorb the excess power; conversely, when the discretized value is higher than the actual generation, the battery is discharged to compensate for the deficit. In this way, the local stage counteracts the discretization and forecast-induced error while preserving the operating objective of the upper-level EMS.
In reality, the second stage operates with a delay of at least one sampling interval: the local controller only observes the deviation once a new measurement arrives and can react within the next 5 s interval. To account for this, a proportional compensation mechanism with gain K has been evaluated, applied to the action computed to compensate for the deviation observed in the previous time step. Values of K < 1 correspond to partial compensation, whereas K = 1 applies the full corrective action computed from the last observed error. In this study, the values K = 0, 0.5, and 1 are intentionally selected as representative limiting cases (no correction, partial correction, and full correction) to provide a clear first assessment of whether applying none, part, or all of the corrective action is beneficial in general. It is acknowledged that, in real deployments, the optimal K may be site- and data-dependent and should be tuned over a finer set of values (or even made adaptive) using longer time series and broader simulation campaigns. The objective here is not to identify the optimal K for a specific situation, but to identify the overall trend and quantify the potential benefit of a partial corrective action versus a complete one.
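The rule itself is compact enough to state in code. The sketch below implements the delayed proportional correction on the 5 s grid, with the BESS power rating enforced by clipping; the signal names and the charge-positive sign convention are ours, not taken from the deployed controller.

```python
import numpy as np

def delayed_proportional_correction(p_actual, p_setpoint, K, p_max_w,
                                    dt_h=5.0 / 3600.0):
    """Stage 2 rule: at each 5 s step, apply K times the deviation observed
    one step earlier, clipped to the BESS power rating (charge > 0)."""
    u_prev = 0.0
    residual = np.empty_like(p_actual)
    energy_wh = 0.0                                    # net energy moved by the BESS
    for t in range(len(p_actual)):
        dev = p_actual[t] - p_setpoint[t]              # PV excess over the schedule
        u = float(np.clip(u_prev, -p_max_w, p_max_w))  # delayed corrective power
        residual[t] = dev - u                          # mismatch still seen by the grid
        energy_wh += u * dt_h
        u_prev = K * dev                               # next step reacts to this deviation
    return residual, energy_wh

# K = 0 (no correction), 0.5 (partial) and 1 (full) reproduce the cases studied.
```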
Therefore, the analysis is repeated assuming a one-step delay and a simple proportional correction (K) based on the last observed deviation. Table 8 and Table 9 show the effect of the proposed compensation procedure for the 1 min and 1 h discretizations.
The effect of a 5 s delay in the compensation signal has been evaluated for full application of the compensation (K = 1), partial application (K = 0.5), and no compensation (K = 0). It can be seen that, in this case, even with the one-step (5 s) delay, the full application of the compensation computed for the previous step reduces the tracking error. This behaviour is also observed at coarser granularities (5 min, 15 min, and 1 h). Therefore, full error compensation, even with a delay, is a suitable solution for mitigating the effect of data discretization at the first level of the EMS.
The effects of a forecasting error combined with the local correction have also been evaluated. To this end, a file with a normally distributed forecasting error was generated, using a mean of μ = 0 and a standard deviation of σ = 5%; the effect is shown in Table 10 and Table 11.
To complete the analysis of the correction effect, tests were conducted with larger prediction errors (see Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17). The reference errors were extracted from the SOLCAST web platform [74]. Table 12 shows the case of a solar PV generation forecast error (MAPE = 10%), with the 5 s delay effect and 1 min discretization.
As shown in Figure 5, in environments with high forecast error, solutions with full gain (K = 1) remain superior to those with partial compensation (K = 0.5) or no compensation (K = 0), although the error dilutes the difference between full and partial compensation. With coarser granularity, such as hourly rather than minute-by-minute discretization, full compensation is clearly more effective than partial or no compensation.
4. Discussion
In this section, the preceding theoretical analysis and the main research results are discussed. The discussion follows the same two-stage EMS structure used in the previous sections.
4.1. Stage 1—Microgrid Optimization to Reduce Forecast Error Effect
In the first stage of the EMS, the numerical results clearly illustrate the classical trade-off between temporal granularity, objective function, and computational burden. For both cloud and edge executions, reducing the time step from 1 h to 15 or 10 min systematically improves the objective function, leading to more favourable operating costs (more negative values in this case, as the objective aggregates costs and revenues from EV charging, grid exchanges, BESS operation, and the hydrogen system). In the fixed-SoC and chained-SoC experiments over one month, average objective values at 10–15 min remain close to the “ideal” perfect-forecast case, whereas hourly optimization shows a significantly degraded average, reflecting a poorer exploitation of PV generation, storage, and EV charging flexibility. At the same time, execution times grow noticeably as the time step is reduced: cloud runs move from average times of around 1 s at 1 h resolution to tens of seconds at 10–15 min intervals, and edge executions frequently hit the imposed 300 s cap for the most demanding configurations. This confirms that finer granularity is an effective way to reduce the impact of both forecast and discretization errors on the objective function, but only at the cost of significantly higher computational requirements and longer solver runtimes.
The sensitivity tests with synthetic PV forecast errors further reinforce this picture. Forecasts were perturbed with zero-mean Gaussian noise of increasing standard deviation (σ ≈ 5–25% of nominal PV power), representing typical day-ahead and intraday PV prediction uncertainty. For each configuration, the degradation of the objective function was quantified by comparing against the perfect-forecast baseline using normalized error metrics (NMAE and NRMSE). At 1 h granularity, these indicators grow from small values for low forecast error to substantially higher levels for large forecast deviations, whereas at 15 min the relative degradation is consistently smaller. This shows that finer granularity not only yields a better optimum but also makes that optimum more robust to forecast noise: shorter time steps limit the temporal aggregation of errors, allowing the rolling-horizon EMS to react earlier to deviations and effectively constraining how much a given forecast error can distort the achievable objective value. However, the same reduction in time step amplifies the number of decision variables and constraints and increases the dependency on high-resolution input data, which must be taken into account when designing real EMS deployments.
From an architectural standpoint, the results highlight that the choice of execution platform for the first stage of the EMS is not intrinsically “cloud vs. edge”, but rather a design decision driven by computational capabilities and timing constraints. For moderate granularities (e.g., 15 min) and problem sizes, both cloud and edge executions achieve similar objective values, and average runtimes remain compatible with a 15 min rolling horizon; in such cases, an edge–edge deployment (both stages local) is technically viable provided the on-site hardware can sustain the required worst-case execution time. For more aggressive granularities (e.g., 10 min with complex models), the observed maximum runtimes close to 300 s leave very little slack within each optimization interval for data acquisition, forecast generation, communication, and synchronization with the second stage, making those configurations impractical on constrained edge devices and better suited to cloud or high-end on-premise servers. Reduced granularity and frequent rolling-horizon updates are powerful levers to limit the impact of forecast and discretization errors on the objective function. Still, they must be co-designed with the available computing platform: cloud, edge, or hybrid arrangements are all valid options, as long as the first stage can systematically deliver solutions within the allocated time budget dictated by the chosen granularity.
4.2. Stage 2—Set-Points Adaptation to the Actual State of the Microgrid to Reduce the Forecast Error Effect
The results obtained in Section 3.2 confirm that a fast local adaptation layer is structurally necessary in a deterministic two-stage EMS. Even when the upper level employs a well-calibrated physical and economic model and a deterministic optimization method, the combination of forecast errors and temporal discretization inevitably creates a gap between the scheduled set-points and feasible operation. The second stage acts as a safety and performance filter, projecting these coarse, forecast-based set-points onto the real microgrid state while preserving the objective function as much as possible and ensuring that device and system limits are respected.
The example shows that the local stage of an EMS can effectively absorb different error types simultaneously. First, it compensates for the structural error introduced by coarse temporal granularity at the upper level. Without local correction, the tracking error increases significantly as the step size grows from 1 min to 1 h; however, a simple proportional compensator with a one-sample delay can reduce the residual MAE. Second, it mitigates the impact of forecast uncertainty. When a realistic, normally distributed forecast error is added on top of discretization, the local controller still reduces MAE and RMSE compared with a “no-action” baseline. Third, the same mechanism would naturally react to unmodelled local events (e.g., temporary disconnection or derating of a PV inverter or BESS, unexpected EV connection), since it acts on the measured deviation rather than on the forecast itself.
At the same time, the results highlight an essential design balance between temporal granularity at the upper level and the burden placed on the local stage. Finer granularity (e.g., 1 min set-points) reduces the intrinsic discretization error and, consequently, the range of corrections that the local controller must provide; however, it also increases the size of the optimization problem, the dependence on high-resolution forecasts, and the communication requirements. Coarser granularity (e.g., 15 min or 1 h) makes the upper-level optimization more manageable and scalable. Still, it pushes more variability onto the second stage and requires sufficient local flexibility (in this case, a BESS of the order of a few kilowatts and several kilowatt-hours per inverter) to compensate for the mismatch.
From an implementation perspective, the second stage must also respect the practical constraints of edge devices. Local controllers typically run on hardware with limited computational resources compared to cloud or server-based platforms, and they must operate under strict timing constraints. In this work, a deliberately simple strategy is proposed and tested: a proportional correction based on the deviation observed in the previous 5 s interval and applied with a one-cycle delay. Despite its simplicity, this scheme proves to be fast, robust, and effective in reducing the impact of both discretization and forecast errors. In practical applications, K should be calibrated using site-specific data; here, we restrict K to three representative values to highlight the general behaviour and the benefit of applying partial versus full correction. More sophisticated local controllers could, in principle, further improve performance; however, they must be carefully designed to fit within the sensing, actuation, and computation cycles of edge devices. The achievable cycle time will depend on sensor sampling rates, actuator response times, communication latencies, and the processing capabilities of the embedded platform.