Author Contributions
Conceptualization, U.E.E., T.T.L. and M.A.M.; Methodology, U.E.E., T.T.L. and M.A.M.; Software, U.E.E.; Validation, U.E.E.; Formal analysis, U.E.E.; Investigation, U.E.E.; Data curation, U.E.E.; Writing—original draft, U.E.E.; Writing—review and editing, U.E.E., T.T.L. and M.A.M.; Visualization, T.T.L.; Supervision, T.T.L. and M.A.M. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Conceptual workflow of the multi-source adaptive rolling-window TRM estimation framework.
Figure 2.
Single-line diagram of the modified IEEE 30-bus test system showing the exporting area (Area 1, shaded blue) and the importing area (Area 2, shaded green).
Figure 3.
System profiles over a 24-h study horizon: (a) load profile with actual and forecast values showing diurnal variation with midday peak demand; (b) wind power output; (c) net load (load minus wind generation); (d) net load forecast error.
Figure 4.
Illustration of the rolling-window statistical framework applied to the forecast error time series. A fixed-length window slides forward in time, and the rolling mean and standard deviation are computed over the W most recent samples {t − W + 1, …, t}.
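The rolling statistics described in the Figure 4 caption can be sketched in a few lines; this is a minimal NumPy version, assuming the sample (ddof = 1) standard deviation is used, which the caption does not specify:

```python
import numpy as np

def rolling_stats(errors, W):
    """Rolling mean and standard deviation of a forecast-error series,
    each computed over the W most recent samples {t - W + 1, ..., t}.
    Returns one (mean, std) pair per fully populated window position."""
    errors = np.asarray(errors, dtype=float)
    means, stds = [], []
    for t in range(W - 1, len(errors)):
        window = errors[t - W + 1 : t + 1]   # most recent W samples ending at t
        means.append(window.mean())
        stds.append(window.std(ddof=1))      # sample std; paper may use ddof=0
    return np.array(means), np.array(stds)
```

With a 1 min time step, each new sample shifts the window forward by one position, so the statistics are refreshed at every update.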
Figure 5.
Time-varying adaptive confidence factor and the corresponding rolling standard deviation over the 24 h simulation horizon. Subplot (a) shows the evolution of the adaptive confidence factor, while subplot (b) presents the associated rolling standard deviation of the forecast error, highlighting periods of elevated system uncertainty.
Figure 6.
Adaptive window length W(t) and corresponding volatility change rate |R(t)| over the 24 h simulation horizon. (a) Time-varying adaptive window size W(t). (b) Volatility change rate |R(t)|, illustrating how the window contracts and expands in response to real-time variability governed by Equations (15)–(18).
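The caption refers to Equations (15)–(18), which are not reproduced in this back matter. As an illustrative sketch only (the linear contraction law below is an assumption, not the paper's rule), the window can shrink toward its lower bound as the normalized volatility change rate |R(t)| grows, using the bounds from Table 3 (W ∈ [10, 60] min) and |R|max from Table 7:

```python
import numpy as np

def adaptive_window(R_t, W_min=10, W_max=60, R_max=0.1963):
    """Illustrative window-adaptation rule (assumed form, NOT the
    paper's Equations (15)-(18)): contract the window toward W_min
    as |R(t)| approaches the 95th-percentile volatility rate R_max,
    and relax it toward W_max when the series is calm."""
    vol = min(abs(R_t) / R_max, 1.0)          # normalized volatility in [0, 1]
    W = W_max - (W_max - W_min) * vol         # linear contraction with volatility
    return int(np.clip(round(W), W_min, W_max))
```

Any monotone mapping with the same endpooint behavior would reproduce the qualitative contraction/expansion pattern shown in the figure; the exact shape is governed by Equations (15)–(18) in the main text.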
Figure 7.
LHS convergence analysis: (a) estimated standard deviation σ versus scenario count K; (b) computation time versus scenario count K. The selected configuration K = 100 is highlighted.
Figure 8.
Window size sensitivity analysis: (a) average RMSE with standard-deviation error bars for each candidate window size, and (b) rolling RMSE time series over the 24 h horizon.
Figure 9.
Reliability performance across TRM methods: (a) coverage probability and (b) Loss of Load Probability (LOLP) for all four methods. The dashed red line indicates the 95% coverage target (equivalently, 5% LOLP).
Figure 10.
Computation time per 1 min update for all evaluated TRM methods (logarithmic scale).
Figure 11.
24 h TRM trajectories for static, fixed-K rolling, MCS-based, and proposed adaptive methods.
Figure 12.
ATC profiles over the 24 h horizon for the four methods (Static, Fixed-K Rolling, MCS-based, and Proposed method).
Figure 13.
Summary comparison across methods: (a) mean TRM, (b) mean ATC, (c) reliability coverage, and (d) adaptation ratio for the static, fixed-K rolling, MCS-based, and proposed adaptive methods. The dashed red line in (c) indicates the target reliability coverage level of 95%.
Figure 14.
Effect of adaptive confidence adjustment: (a) TRM trajectories for Fixed-K (adaptive W, K = 1.645) versus the proposed method (adaptive W and K); (b) confidence factor comparison between fixed K = 1.645 and adaptive K(t).
Table 1.
Comparison of major probabilistic methods for TRM assessment.
| Method | Description | Strengths | Limitations |
|---|---|---|---|
| Monte Carlo Simulation (MCS) | Random sampling to obtain ATC/TRM distributions | Rigorous; widely validated | Computationally heavy; impractical for real-time |
| Latin Hypercube Sampling (LHS) | Stratified sampling of uncertainty distributions | High efficiency; reduced variance | Still offline; assumes static distributions |
| Bootstrap-Based Methods | Resampling historical data or assumed distributions | Statistically robust | Limited adaptiveness under changing conditions |
| Bayesian Networks | Probabilistic dependency modeling | Captures correlations; supports updating | High computation and data requirements |
| Stochastic Process Models (Markov chains, Petri nets) | Temporal correlation modeling via state transitions | Captures sequential dependencies | Parameter validation challenges; execution time |
| Interval/Robust Methods | Bounded uncertainty sets without full distributions | Conservative guarantees; computationally tractable | No explicit confidence levels; potentially over-conservative |
Table 2.
Modified IEEE 30-bus test system configuration.
| Parameter | Value | Description |
|---|---|---|
| Buses | 30 | Standard IEEE 30-bus topology |
| Generators | 6 | Conventional units at buses 1, 2, 5, 8, 11, 13 (Exporting Area) |
| Wind farms | 2 | Integrated on buses 14 and 16 (Importing Area) |
| Wind capacity | 80 MW | 40 MW each on buses 14 and 16 |
| Total load | 283.4 MW | Base case loading |
| Branches | 41 | Transmission lines and transformers |
| Solver | MATPOWER | AC power flow using Newton-Raphson method |
| Transfer corridor | Area 1 → 2 | Tie-lines: 12–15, 12–16, 14–15, 15–18, 15–23 |
Table 3.
Summary of complete simulation parameters.
| Parameter | Value | Description |
|---|---|---|
| Time horizon | 24 h | One full operational day |
| Time step (Δt) | 1 min | High-resolution temporal granularity |
| LHS scenarios | 100 | M = 10 load × N = 10 wind intervals |
| Baseline window (Wbase) | 15 | Selected from sensitivity analysis |
| Baseline confidence (Kbase) | 1.645 | 95% confidence level |
| Window bounds | 10–60 min | Wmin to Wmax |
Table 4.
Base case power flow results.
| Parameter | Value |
|---|---|
| Total system load | 189.2 MW |
| Wind generation | 80 MW |
| Wind penetration | 42.3% |
| Base transfer (Export → Import) | −56.50 MW |
| Total Transfer Capability (TTC) | 216 MW |
| Number of tie-lines identified | 9 |
Table 5.
LHS convergence analysis.
| M | N | K (Total) | σ (MW) | Relative Change (%) | Time (ms) |
|---|---|---|---|---|---|
| 5 | 5 | 25 | 15.89 | 0 | 12.13 |
| 10 | 10 | 100 | 12.41 | 21.86 | 6.33 |
| 15 | 15 | 225 | 12.93 | 4.15 | 5.48 |
| 20 | 20 | 400 | 13.20 | 2.10 | 6.38 |
| 25 | 25 | 625 | 13.29 | 0.69 | 12.50 |
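Table 5 reports K = M × N total scenarios (e.g., 10 × 10 = 100), which suggests a full grid over stratified load and wind intervals rather than classic one-to-one LHS pairing. A minimal sketch under that reading, drawing one uniform sample per stratum on [0, 1) for each dimension (mapping to physical load/wind values is omitted):

```python
import numpy as np

def lhs_scenarios(M, N, rng):
    """Stratified scenario grid: one uniform draw per load stratum
    (M intervals) and per wind stratum (N intervals), combined into
    K = M * N scenario pairs. Illustrative sketch only."""
    load_u = (np.arange(M) + rng.random(M)) / M   # one sample in each load stratum
    wind_u = (np.arange(N) + rng.random(N)) / N   # one sample in each wind stratum
    rng.shuffle(load_u)                           # decorrelate stratum ordering
    rng.shuffle(wind_u)
    return np.array([(l, w) for l in load_u for w in wind_u])
```

The stratification guarantees every load and wind interval is represented exactly once per dimension, which is what drives the variance reduction relative to plain Monte Carlo sampling at the same K.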
Table 6.
Window size sensitivity results.
| Window Size (min) | Mean RMSE (MW) | Std RMSE (MW) |
|---|---|---|
| 10 | 11.60 | 3.10 |
| 15 | 11.76 | 2.61 |
| 30 | 11.91 | 1.88 |
| 45 | 11.94 | 1.63 |
| 60 | 11.95 | 1.46 |
Table 7.
Derived adaptive parameters.
| Parameter | Symbol | Value | Description |
|---|---|---|---|
| Wind variability ratio | β | 0.4618 | Ratio of wind to net-load variability |
| Reference std dev. | | 11.49 MW | Daily average of the rolling standard deviation |
| Window adjustment factor | A | 25.47 | Controls window contraction rate |
| Max volatility rate | |R(max)| | 0.1963 | 95th percentile of |R(t)| |
| Historical forecast error std | | 12.02 MW | Overall forecast error std dev. |
Table 8.
Comparative performance of TRM estimation methods.
| Metric | Static | Fixed-K | MCS | Proposed Method |
|---|---|---|---|---|
| Mean TRM (MW) | 16.20 | 18.77 | 24.01 | 19.19 |
| TRM Std. (MW) | 0.00 | 4.46 | 0.76 | 6.65 |
| TRM Span (MW) | 0.00 | 31.50 | 5.71 | 47.62 |
| Mean ATC (MW) | 199.80 | 197.23 | 191.99 | 196.81 |
| Coverage (%) | 81.80 | 88.40 | 94.80 | 88.0 |
| LOLP (%) | 18.20 | 11.60 | 5.20 | 12.0 |
| Adaptation Ratio | 1.0:1 | 8.6:1 | 1.3:1 | 18.8:1 |
| Hours Enhanced ATC | 0 | 6.1 | 0.0 | 7.50 |
| Computation time (ms) | 0.02 | 2.26 | 261.75 | 1.82 |
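The coverage and LOLP rows in Table 8 are complementary (e.g., 94.80% and 5.20% for MCS). Assuming the usual empirical definitions, in which an interval is "covered" when the TRM is at least the realized absolute forecast error, they can be computed as:

```python
import numpy as np

def coverage_and_lolp(trm, abs_error):
    """Empirical reliability metrics (assumed definitions): coverage
    is the percentage of intervals where the TRM covers the realized
    |forecast error|; LOLP is its complement, so the two always sum
    to 100%."""
    covered = np.asarray(trm, dtype=float) >= np.asarray(abs_error, dtype=float)
    coverage = 100.0 * covered.mean()
    return coverage, 100.0 - coverage
```

Against a 95% target, a method whose coverage falls short (static at 81.80%, fixed-K at 88.40%) under-margins, while exceeding it substantially would indicate over-conservative TRM and forgone ATC.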