1. Introduction
With the global transition toward sustainable energy, nations worldwide are significantly increasing the penetration of renewable energy and low-carbon power generation technologies [
1]. To mitigate greenhouse gas emissions, countries are actively accelerating the integration of renewable energy sources (RESs). The share of electricity produced by RESs, predominantly solar and wind [
2], is expected to grow from 37% in 2020 to more than 60% by 2030 in Europe. In 2024, renewable energy made up 50.39% of the EU’s gross electricity consumption [
3]. Concurrently, China has witnessed exponential growth in its installed power capacity, with renewable energy installations leading this expansion. By the end of 2024, the country’s new energy power generation capacity reached 131 million kW, marking significant progress toward its carbon peak and neutrality goals [
4]. Moreover, long-distance power transmission and electricity trading play a pivotal role in optimizing power resource allocation [
5], facilitating renewable energy integration, and enhancing power supply reliability. China’s substantial investments in transmission infrastructure are exemplified by the completion of 20 AC and 20 DC ultra-high voltage transmission lines by 2024, forming the backbone of its West-to-East Electricity Transmission Project [
6]. To achieve optimal resource allocation and efficient operation of the power industry, the electricity market’s transactions are becoming more frequent [
7]. China is currently establishing a unified national electricity market framework to enhance cross-regional power trading mechanisms.
Available transfer capability (ATC) serves as a fundamental parameter in power system operation and electricity market transactions, providing critical assessment criteria for transaction feasibility [
8]. It represents the maximum amount of additional power that can be transferred over the transmission network without violating system security constraints. In order to ensure sufficient transmission capacity under various uncertainties, the North American Electric Reliability Council gave the definition of ATC and the calculation method in 1996 [
9]. ATC is mathematically defined as the total transfer capability (TTC) minus the sum of the transmission reliability margin (TRM), existing transmission commitments (ETC), and capacity benefit margin (CBM) [
10]. Accurately determining TRM is therefore a key step in calculating ATC. TRM accounts for uncertainties in cross-regional power transmission and market transactions, ensuring compliance with tie-line capacity constraints and maintaining the security of regional networks. Its objective is to guarantee the reliability of transmission in the presence of various uncertain factors in power systems [
11], such as renewable generation output, load, and power contract execution deviations influenced by new energy sources. Accurately evaluating TRM is thus a crucial problem for the economic and secure management of cross-regional electricity trading under the various uncertainties that electricity markets inevitably encounter [
12]. An accurate assessment of TRM is essential when calculating ATC. In addition, TRM provides buffer space for these uncertainties by reserving part of the total transferable capacity, preventing loss of transmission efficiency [
13], voltage instability [
14], and other power crises [
15]. The scientific setting of TRM parameters can maximize the tradable capacity of the power market under the premise of ensuring system security [
16]. The accurate evaluation of TRM under multi-source uncertainty remains a critical issue in modern power systems. Variability in renewable generation, fluctuations in load demand, and deviations in contract execution collectively determine the stochastic characteristics of interregional power flows. The statistical dependence among these uncertainties can further increase transmission risk, and ignoring such correlations may result in biased margin estimation and suboptimal operational decisions.
Current research on TRM focuses on the following approaches. The first approach is called the systematic case-based method [
17]. This approach evaluates all the factors that may affect the transmission capability, such as thermal stability, voltage constraints, and transient stability, which yields larger safety margins [
18]. However, this also means that it is computationally intensive and has poor real-time performance. When it comes to frequent sudden load changes or dramatic uncertainties, such as wind and solar [
19,
20], it is difficult to handle rapidly fluctuating system states [
21]. The second approach is the fixed margin method [
22]. This approach uses fixed TRM values based on operational experience. The fixed value may sometimes exceed the necessary margin, resulting in a small ATC and reduced market efficiency. Conversely, during periods of high uncertainty, the fixed value may be far from sufficient, and safety cannot be guaranteed. Regarding probabilistic computation of TRM, Ref. [
23] proposes a data-driven approach for estimating TRM using artificial neural networks. The method is designed for short-term, hourly TRM estimation in deregulated power systems. Ref. [
24] proposed a method in which TRM is estimated using a probabilistic approach rather than fixed-percentage assumptions. The authors introduce load uncertainties by applying a normally distributed 2% deviation to generate 100 different load scenarios for each hour. In addition, there are other approaches to estimating TRM, such as the stochastic response surface method (SRSM), which uses statistical and probability distribution concepts to estimate TRM accurately [
25].
In conclusion, existing TRM assessment methods have struggled to achieve both computational efficiency and evaluation accuracy [
26]. To overcome these challenges, this study develops a correlation-aware probabilistic framework for TRM assessment. The framework characterizes renewable generation, load variation, and contract execution deviation as correlated uncertainty sources through a Gaussian mixture formulation, forming the basis for subsequent stochastic analysis. The method uses a Beta distribution to simulate renewable generation, load, and power contract execution deviations, allowing flexible control of the disturbance range and skewness. It accurately captures the joint distribution of disturbances, supporting probabilistic modeling under non-Gaussian and nonlinear conditions. By modeling the uncertainties and quantifying their impact on line power flows, the TRM is determined accordingly. This achieves fast, robust, and high-precision probabilistic power flow and transmission margin assessment, more accurately reflecting the impact of uncertainties in actual operation on grid safety margins.
The paper’s main contributions are summarized as follows:
- (1)
Comprehensive consideration of multi-source uncertainties: This study extends TRM assessment by jointly considering renewable generation variability, load fluctuation, and power contract execution deviation. Incorporating transaction execution deviations captures the physical–market interactions often neglected in conventional analyses, thereby improving the realism and completeness of uncertainty representation in power system operation.
- (2)
Correlation-aware probabilistic modeling for TRM evaluation: A correlation-aware Gaussian mixture model (GMM) is developed to represent the joint distribution of multiple uncertainties. This approach captures multimodal characteristics and statistical dependencies among renewable, load, and contractual factors, enabling accurate and efficient probabilistic evaluation of the transmission reliability margin under correlated disturbances.
The rest of the paper is organized as follows: The multi-source uncertainty modeling approach is presented in
Section 2, along with its application to capturing uncertainties from multiple sources. The formulas of probabilistic AC power flow calculation are presented in
Section 3, where they are employed to obtain the probabilistic assessment results. The test case and results are included in
Section 4. Finally, the conclusion on the obtained results is given in
Section 5.
2. Correlation-Aware Modeling of Multi-Source Uncertainty in Inter-Regional Electricity Trading
2.1. Uncertainty Resources in Inter-Regional Electricity Trading
Cross-regional electricity trading is influenced by three primary sources of uncertainty: renewable generation, load variation, and contract execution deviation. These uncertainties collectively affect nodal power injections and serve as fundamental inputs for probabilistic TRM assessment [
27].
Among these uncertainties, variability and forecasting errors in wind and photovoltaic (PV) generation introduce significant randomness in power output. Due to physical constraints and meteorological conditions, wind and solar resources typically follow bounded and non-symmetric probability distributions.
In the case of PV generation, cloud cover under conditions such as cloudy, rainy, or snowy weather can cause a sudden drop in irradiation intensity in a short period of time, resulting in sudden fluctuation changes in PV output [
28]. Ref. [
29] reported that the intensity of solar radiation reaching the ground can be approximated by a Beta distribution over short time intervals, which characterizes the probability density of short-term PV output. Ref. [
30] also proposed a typical probability distribution model for photovoltaic power generation.
Accordingly, the Beta distribution is employed to represent renewable generation deviations:

$$f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}, \quad 0 \le x \le 1 \tag{1}$$

where $\alpha$ and $\beta$ are the shape parameters controlling the asymmetry of the distribution, and $B(\alpha,\beta)$ is the Beta function that normalizes the probability density.
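As a numerical illustration of this bounded, skewed disturbance model, the following Python sketch draws Beta-distributed samples and rescales them to a megawatt disturbance band; the shape parameters and the ±50 MW range are hypothetical values chosen only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Beta shape parameters for a right-skewed renewable output deviation
a, b = 2.0, 5.0

# Draw normalized deviations on [0, 1] and map them onto a +/-50 MW disturbance band
u = rng.beta(a, b, size=10_000)
deviation_mw = (u - 0.5) * 100.0

print(f"mean = {deviation_mw.mean():.2f} MW, std = {deviation_mw.std():.2f} MW")
```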
For load demand, deviations are primarily caused by the uncertainty of consumption behaviors and their sensitivity to socio-economic and environmental conditions. In recent years, emerging load types, particularly those driven by the rapid development of the new energy vehicle industry, have further intensified such uncertainties. Large numbers of electric vehicles connected on the user side strongly affect the load-side power flow: because charging is concentrated in the evening peak, it superimposes on the original load to form a 'peak on peak', exacerbating load fluctuations in the power grid and affecting the economic operation of the system. The random access of many electric vehicles in the distribution network can also shift node voltages, or even push them beyond limits, endangering power quality and supply security. At the same time, the charging load under high penetration increases line currents and losses, aggravating the network operation burden [
31]. Ref. [
32] notes that the typical probability model for composite loads assumes a normal distribution:

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right) \tag{2}$$

where $\mu$ and $\sigma$ represent the mean and standard deviation of the normal distribution, respectively. These parameters define the central tendency and variability of the random variable.
Deviations in electricity trading contracts mainly result from the mismatch between forecast-based commitments and the actual realization of renewable generation. The intermittent and spatially uneven characteristics of renewable resources cause persistent supply–demand imbalances, which in turn necessitate long-distance power transmission and frequent interregional transactions. In current markets, medium- and long-term contracts (annual, monthly, or weekly) account for more than 70% of traded electricity. However, the variability of renewable generation often leads to discrepancies between scheduled and actual outputs, introducing additional uncertainty and increasing balancing pressures in the spot market. Contract execution deviations are further affected by behavioral and market-related factors, including user participation, bidding strategies, and the flexibility of contract fulfillment. Consequently, conventional probabilistic models seldom capture the coupled physical and market uncertainties inherent in contract execution.
Considering these coupled uncertainties, the Beta distribution is employed to represent both source–load fluctuations and transaction execution deviations. Its use is justified by the inherent physical boundaries and statistical characteristics of these variables.
First, the relevant stochastic variables, including normalized renewable generation, load ratio, and transaction execution rate, are inherently bounded within the closed interval [0, 1]. Since the Beta distribution is defined on the same bounded domain, it provides a theoretically consistent representation for these variables.
Second, these uncertainties exhibit evident non-Gaussian and asymmetric patterns. Renewable generation often shows right-skewed profiles under high irradiance or strong wind conditions, whereas load ratios and contract execution rates tend to be left-skewed during stressed or congested market periods [
33]. The two shape parameters $\alpha$ and $\beta$ of the Beta distribution offer sufficient flexibility to describe a broad range of skewness and kurtosis, enabling accurate probabilistic characterization under diverse operating scenarios.
Third, the suitability of the Beta distribution for energy-related stochastic processes has been extensively verified in previous studies. Empirical results demonstrate that the clearness index of solar radiation, the normalized wind-power output, and the normalized load demand closely follow Beta-type distributions [
34]. Similarly, observed transaction execution rates exhibit bounded and asymmetric probability structures that are well approximated by the Beta family.
Consequently, the Beta distribution provides both statistical robustness and physical interpretability, forming a unified probabilistic foundation for modeling multi-source uncertainties in generation, load, and market transactions. This serves as the basis for the Gaussian-mixture modeling introduced in
Section 2.2.
2.2. Correlation-Aware Gaussian Mixture Modeling of Uncertainty Sources
A Gaussian mixture model is a commonly used probabilistic model for representing a data set that is 'mixed' from multiple Gaussian (normal) distributions. It is particularly suitable for modeling complex, multi-peaked data distributions and is widely used in clustering, density estimation, anomaly detection, and signal modeling. Each Gaussian distribution is called a mixture component, and a linear combination of the mixture components is fitted to the entire probability distribution [
35], which can in theory approximate any probability distribution. The GMM can be represented by the following function:

$$f(\boldsymbol{x}) = \sum_{k=1}^{K} \omega_{k}\, \mathcal{N}(\boldsymbol{x} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \tag{3}$$

where $K$ denotes the number of mixture components contained in the model, $\omega_{k}$ denotes the weight of each component, $\boldsymbol{\mu}_{k}$ and $\boldsymbol{\Sigma}_{k}$ denote the mean vector and covariance matrix of each mixture component, respectively, and the component density $\mathcal{N}(\boldsymbol{x} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k})$ is given by the following equation:

$$\mathcal{N}(\boldsymbol{x} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) = \frac{1}{(2\pi)^{M/2}\,|\boldsymbol{\Sigma}_{k}|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_{k})^{\mathrm{T}} \boldsymbol{\Sigma}_{k}^{-1} (\boldsymbol{x}-\boldsymbol{\mu}_{k})\right) \tag{4}$$

The parameters of the GMM, including the mixture weights $\omega_{k}$, mean vectors $\boldsymbol{\mu}_{k}$, and covariance matrices $\boldsymbol{\Sigma}_{k}$, are estimated using the maximum likelihood estimation (MLE) approach. Given a set of observed data $\{\boldsymbol{x}_{1},\dots,\boldsymbol{x}_{N}\}$ and a predefined number of Gaussian mixture components, the likelihood function can be expressed as:

$$L = \prod_{n=1}^{N} \sum_{k=1}^{K} \omega_{k}\, \mathcal{N}(\boldsymbol{x}_{n} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \tag{5}$$

The corresponding log-likelihood function is obtained by taking the natural logarithm of Equation (5), as shown below:

$$\ln L = \sum_{n=1}^{N} \ln\!\left[\sum_{k=1}^{K} \omega_{k}\, \mathcal{N}(\boldsymbol{x}_{n} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k})\right] \tag{6}$$
The GMM parameters are estimated by maximizing the log-likelihood function. Since the logarithmic term involves a summation, a closed-form analytical solution does not exist. The estimation is therefore carried out iteratively using the Expectation-Maximization (EM) algorithm [
36].
Because the perturbation variables in power systems are often non-Gaussian, asymmetric, or even multimodal, and because a GMM can theoretically approximate complex or irregular distributions by combining multiple Gaussian components, it is particularly suitable for modeling these complex perturbation characteristics [
37]. Therefore, this paper adopts the GMM to model these perturbation variables.
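Since the GMM fitting in this work is implemented with Scikit-learn (see Section 4.1), a minimal sketch of the EM-based parameter estimation is given below; the Beta-distributed input samples and the choice of ten components are illustrative only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic disturbance samples drawn from a Beta distribution (illustrative stand-in
# for normalized renewable/load/contract deviations)
samples = rng.beta(2.0, 5.0, size=(5000, 1))

# Fit a K-component GMM; scikit-learn runs the EM algorithm internally
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
gmm.fit(samples)

print("weights :", np.round(gmm.weights_, 3))
print("means   :", np.round(gmm.means_.ravel(), 3))
print("log-lik :", gmm.score(samples) * len(samples))  # total log-likelihood
```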
2.3. Joint Probability Distribution Modeling
The fluctuations in source load and contract execution deviations both manifest as variations in nodal power injection. A Gaussian mixture model is established to represent the nodal power injection fluctuations under the combined influence of these two factors. The main innovation of this study is that it considers not only source–load fluctuations but also contract execution deviations in the uncertainty modeling process. A correlation-aware Gaussian mixture model is employed to capture the statistical dependence among these uncertainty sources and to quantify their joint impact on nodal power injection variations. This approach extends traditional single-variable GMM applications to a system-level analysis, providing a probabilistic basis for the subsequent modeling of power deviation distributions. First, the random fluctuation data of the output power of each unit and load are obtained; the output data are decomposed into a steady-state component and a fluctuation-deviation component, and only the fluctuation-deviation component is retained, denoted as $\Delta\boldsymbol{P}_{\mathrm{SL}}$. With the number of Gaussian mixture components selected as $K$, the probability density function of the nodal injected power deviation caused by source–load fluctuations is expressed as the following GMM:

$$f_{\mathrm{SL}}(\Delta\boldsymbol{P}_{\mathrm{SL}}) = \sum_{k=1}^{K} \omega_{k}\, \mathcal{N}(\Delta\boldsymbol{P}_{\mathrm{SL}} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \tag{7}$$

where $\omega_{k}$ denotes the weight coefficient of the kth Gaussian mixture component, $\boldsymbol{\mu}_{k}$ and $\boldsymbol{\Sigma}_{k}$ are the expectation vector and covariance matrix of the kth Gaussian mixture component, respectively, and $\mathcal{N}(\Delta\boldsymbol{P}_{\mathrm{SL}} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k})$ is the probability density function of the kth Gaussian mixture component:

$$\mathcal{N}(\Delta\boldsymbol{P}_{\mathrm{SL}} \mid \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) = \frac{1}{(2\pi)^{M/2}\,|\boldsymbol{\Sigma}_{k}|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\Delta\boldsymbol{P}_{\mathrm{SL}}-\boldsymbol{\mu}_{k})^{\mathrm{T}} \boldsymbol{\Sigma}_{k}^{-1} (\Delta\boldsymbol{P}_{\mathrm{SL}}-\boldsymbol{\mu}_{k})\right) \tag{8}$$

where $M$ is the dimension of the source–load fluctuation power $\Delta\boldsymbol{P}_{\mathrm{SL}}$. With $\Delta\boldsymbol{P}_{\mathrm{C}}$ denoting the deviation vector of the execution of each trading contract and the number of Gaussian mixture components again chosen as $K$, the probability density function of the nodal injection power deviation caused by contract execution deviations is expressed as the following Gaussian mixture model:

$$f_{\mathrm{C}}(\Delta\boldsymbol{P}_{\mathrm{C}}) = \sum_{k=1}^{K} \tilde{\omega}_{k}\, \mathcal{N}(\Delta\boldsymbol{P}_{\mathrm{C}} \mid \tilde{\boldsymbol{\mu}}_{k}, \tilde{\boldsymbol{\Sigma}}_{k}) \tag{9}$$

For computational tractability, the source–load fluctuation $\Delta\boldsymbol{P}_{\mathrm{SL}}$ and the contract execution deviation $\Delta\boldsymbol{P}_{\mathrm{C}}$ are assumed to be statistically independent but not identically distributed. The internal correlations within each group are captured through the covariance matrices of their respective GMM mixture components. Accordingly, the joint probability density function is formulated as the product of the two marginal GMM-based densities, as shown in Equation (10):

$$f(\Delta\boldsymbol{P}_{\mathrm{SL}}, \Delta\boldsymbol{P}_{\mathrm{C}}) = f_{\mathrm{SL}}(\Delta\boldsymbol{P}_{\mathrm{SL}})\, f_{\mathrm{C}}(\Delta\boldsymbol{P}_{\mathrm{C}}) \tag{10}$$

where $f_{\mathrm{SL}}(\cdot)$ and $f_{\mathrm{C}}(\cdot)$ denote the marginal GMM densities of the source–load fluctuation and the contract execution deviation, respectively.
In this study, the correlation of interest mainly refers to the internal dependencies among multiple elements within each type of disturbance. Specifically, the output fluctuations of different renewable generation units may exhibit statistical correlations, and the deviations of different transaction lines may also influence each other. Such internal correlations are captured through the covariance structure of the Gaussian mixture model, providing a more accurate characterization of the disturbance distributions.
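A minimal sketch of this product-form joint density is given below, assuming two independently fitted GMMs for the source–load and contract-deviation groups; the sample dimensions and Beta parameters are arbitrary placeholders rather than case-study values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Illustrative samples: 3-dimensional source-load deviations and 5-dimensional
# contract execution deviations (dimensions chosen arbitrarily for this sketch)
dP_sl = rng.beta(2.0, 5.0, size=(4000, 3)) - 0.3
dP_c  = rng.beta(4.0, 2.0, size=(4000, 5)) - 0.6

gmm_sl = GaussianMixture(n_components=10, covariance_type="full", random_state=0).fit(dP_sl)
gmm_c  = GaussianMixture(n_components=10, covariance_type="full", random_state=0).fit(dP_c)

# Under the independence assumption of Equation (10), the joint density is the
# product of the marginals, i.e. the joint log-density is the sum of log-densities.
def joint_log_pdf(x_sl, x_c):
    return gmm_sl.score_samples(x_sl) + gmm_c.score_samples(x_c)

print(joint_log_pdf(dP_sl[:3], dP_c[:3]))
```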
2.3.1. Visual Assessment of Goodness of Fit
To verify the representational capability of the proposed GMM, synthetic random samples are generated from theoretical Beta distributions with various parameters. Multiple independent samples are drawn to eliminate random bias. These synthetic data are used solely for numerical validation of the model’s approximation ability rather than for real power-system data analysis.
A visual assessment of the goodness-of-fit is first performed.
Figure 1 illustrates the decomposition of a univariate Beta random variable by Gaussian mixture models with different numbers of mixture components $K$. The blue line represents the theoretical Beta density, the orange line denotes the fitted GMM density, and the colored thin curves show the individual Gaussian mixture components. As $K$ increases, the fitting flexibility improves, and when K = 10, the GMM nearly coincides with the target Beta curve, confirming its strong representational capacity.
Figure 2 presents the joint probability density surface of a two-dimensional Beta distribution fitted with a GMM of K = 20 components. The color gradient represents the magnitude of the probability density function, transitioning from dark blue (low density) to bright yellow (high density). The model accurately reproduces the multimodal and skewed characteristics of the target distribution. These visual results confirm that the proposed GMM can effectively approximate both univariate and multivariate distributions.
To further assess the statistical goodness of fit, formal tests including the BIC, K-S, and A-D methods are applied in
Section 2.3.2.
2.3.2. Statistical Assessment of Goodness of Fit
To evaluate the goodness of fit of the proposed GMM to the Beta distribution, three classical statistical tests are applied: the Bayesian Information Criterion (BIC), the Kolmogorov–Smirnov (K-S) test, and the Anderson–Darling (A-D) test. These tests assess the model’s performance in terms of complexity, overall distributional consistency, and accuracy in fitting the tail regions, providing a rigorous statistical validation of the proposed approach. For this analysis, Monte Carlo (MC) samples are independently generated from the fitted distributions using the parameters estimated in
Section 2.2. All samples are independent and identically distributed, ensuring consistency between the theoretical and empirical distributions used in the K-S and A-D tests.
- (1)
Bayesian Information Criterion
The BIC is a commonly used model selection criterion that introduces a penalty term for model complexity based on the log-likelihood function [
38]. Its mathematical expression is given by:

$$\mathrm{BIC} = -2\ln\hat{L} + p\ln N$$

where $\ln\hat{L}$ denotes the maximum log-likelihood of the model, $p$ represents the number of estimated parameters, and $N$ is the sample size. A smaller BIC value indicates a better trade-off between model fitting accuracy and model complexity.
Table 1 reports the BIC values for different numbers of GMM mixture components K. As K increases, the BIC value first decreases and then rises. When K = 5, the BIC reaches its minimum of −9555.32, indicating that the model achieves an optimal balance between fitting accuracy and model complexity. For K = 10, the BIC value is −9422.86, differing from the minimum by only 1.4%. The subsequent A-D test indicates that the configuration with K = 10 yields a more accurate fit in the tail regions of the distribution. Considering the need to characterize extreme disturbance conditions in power-system applications, K = 10 is selected as the standard configuration to ensure an appropriate balance between model simplicity and tail-fitting precision.
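The following sketch illustrates how this BIC-based selection of K can be carried out with Scikit-learn's built-in criterion; the synthetic Beta samples and candidate values of K are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
samples = rng.beta(2.0, 5.0, size=(5000, 1))  # synthetic Beta-distributed data

# Evaluate the BIC for candidate numbers of mixture components (lower is better)
for k in (2, 5, 10, 15):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(samples)
    print(f"K = {k:2d}  BIC = {gmm.bic(samples):10.2f}")
```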
- (2)
Kolmogorov–Smirnov test
The Kolmogorov–Smirnov (K-S) test is a nonparametric statistical method used to evaluate the maximum deviation between the empirical cumulative distribution function (CDF) of the sample data and the theoretical distribution [
39]. The test statistic is defined as follows:

$$D_{n} = \sup_{x}\left| F_{n}(x) - F(x) \right|$$

where $F_{n}(x)$ denotes the empirical cumulative distribution function of the sample, and $F(x)$ represents the theoretical cumulative distribution function. A smaller value of $D_{n}$ indicates a closer agreement between the sample distribution and the theoretical distribution.
Figure 3 illustrates the results of the K-S test, where the horizontal axis represents the range of the random variable and the vertical axis denotes the cumulative probability. The blue solid line corresponds to the theoretical CDF of the Beta distribution, the orange solid line represents the CDF obtained from the GMM fitting, and the gray dashed line shows the empirical CDF of the original samples. As shown in the figure, the CDF of the GMM aligns closely with that of the theoretical Beta distribution, with a maximum deviation of $D_{n} = 0.0057$ and a corresponding p-value of 0.9035, which is much greater than the significance level of 0.05. This indicates that the null hypothesis cannot be rejected, confirming that the GMM provides an excellent fit to the target Beta distribution. The red circles mark the position of the maximum deviation, while the red dashed lines highlight the distance between the two CDF curves at that point, visually demonstrating the rigor of the test.
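A minimal sketch of such a K-S comparison is shown below, testing samples drawn from a fitted GMM against the theoretical Beta CDF with SciPy; the Beta parameters and sample size are assumed for illustration only.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
a, b = 2.0, 5.0                              # hypothetical Beta parameters
data = rng.beta(a, b, size=(5000, 1))

gmm = GaussianMixture(n_components=10, random_state=0).fit(data)

# Draw samples from the fitted GMM and test them against the theoretical Beta CDF
gmm_samples, _ = gmm.sample(5000)
D, p_value = stats.kstest(gmm_samples.ravel(), stats.beta(a, b).cdf)
print(f"K-S statistic D = {D:.4f}, p-value = {p_value:.4f}")
```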
- (3)
Anderson–Darling test
The Anderson–Darling (A-D) test is an improved version of the K-S test that assigns greater weight to deviations occurring in the tails of the distribution [
40]. The test statistic is defined as follows:

$$A^{2} = -N - \frac{1}{N}\sum_{i=1}^{N}(2i-1)\left[\ln F\!\left(x_{(i)}\right) + \ln\!\left(1 - F\!\left(x_{(N+1-i)}\right)\right)\right]$$

where $x_{(i)}$ denotes the ordered sample observations, and $F(\cdot)$ represents the theoretical cumulative distribution function. A smaller value of $A^{2}$ indicates a better overall fit, particularly in the tail regions of the distribution.
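Because SciPy's built-in Anderson–Darling routine supports only a few named distributions, the statistic can be evaluated directly from the formula above, as in the following sketch; the sample data and Beta parameters are placeholders.

```python
import numpy as np
from scipy import stats

def anderson_darling(samples, cdf):
    """Compute the A-D statistic of `samples` against a theoretical CDF callable."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    F = np.clip(cdf(x), 1e-12, 1 - 1e-12)        # guard against log(0)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

rng = np.random.default_rng(4)
samples = rng.beta(2.0, 5.0, size=5000)          # stand-in for GMM-generated samples
A2 = anderson_darling(samples, stats.beta(2.0, 5.0).cdf)
print(f"A-D statistic A^2 = {A2:.4f}")
```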
Table 2 presents the A-D test statistics and corresponding p-values under different numbers of GMM mixture components K. The results show that when $K \geq 5$, all p-values exceed 0.92, indicating that the null hypothesis cannot be rejected at the 5% significance level, and thus the GMM and the Beta distribution are statistically indistinguishable. In particular, when K = 10, the test statistic is $A^{2} = 0.1316$ with a p-value of 0.9817, which is significantly higher than the 0.9221 obtained when K = 5. This demonstrates that increasing the number of mixture components further improves the tail-fitting performance. Compared with the K-S test, the A-D test imposes stricter requirements on the tail behavior of the fitted distribution. The proposed GMM exhibits excellent performance under this more rigorous criterion, further confirming its effectiveness in modeling extreme disturbance scenarios.
The GMM is employed in this study to characterize multi-source uncertainties. Based on a comprehensive evaluation using the BIC, K-S test, and the A-D test, the number of GMM mixture components is determined as K = 10. Although the BIC indicates an optimal value at K = 5, the A-D test yields a p-value of 0.9817 for K = 10, which is significantly higher than 0.9221 for K = 5. This result demonstrates that K = 10 provides a superior fitting performance in the tail regions of the distribution, corresponding to extreme disturbance scenarios that are critical for power system reliability assessment.
3. Probabilistic TRM Assessment
3.1. Concept and Definition of TRM
To maximize the role of interregional transmission in resource allocation under secure grid operation, it is necessary to conduct accurate assessments of the ATC of transmission lines. In 1996, the North American Electric Reliability Council systematically defined ATC and proposed its calculation methodology. The ATC of a transmission line refers to the remaining transfer capacity available for power transactions, under the premise of ensuring system security and reliability and respecting existing transmission schedules. Mathematically, ATC is defined as the TTC minus the TRM, the ETC, and the CBM, as expressed by the following equation:

$$\mathrm{ATC} = \mathrm{TTC} - \mathrm{TRM} - \mathrm{ETC} - \mathrm{CBM}$$
As illustrated in
Figure 4, the system comprises generation, loads, and contract execution deviations. On the source side, renewable generation such as wind and photovoltaic exhibits significant forecast uncertainty and fluctuation ranges. On the load side, demand also shows prediction errors and stochastic variations, which jointly affect the system power balance. In the transaction process, electricity flows along the transaction path are constrained by TTC, TRM, and CBM, resulting in the ATC. Meanwhile, discrepancies often exist between the ETC and the actual contractual transactions, further amplifying operational uncertainty.
This framework therefore provides a comprehensive view of how uncertainties from sources, loads, and contract execution influence system operation, serving as a foundation for subsequent modeling and optimization.
3.2. Probabilistic Assessment of TRM
The calculation methods of TRM can be categorized into two types: deterministic methods and uncertainty-based methods. This paper adopts a deterministic framework while performing a probabilistic assessment of TRM [
16].
The stochastic disturbances originating from generation and transaction execution must first be converted into nodal active-power injection deviations for subsequent power-flow and reliability analysis [
41]. The mapping procedure is summarized as follows. Let $\Delta P_{i}^{\mathrm{G}}$ denote the active-power injection deviation caused by generator $i$ at its connection node, and let $\Delta T_{k}$ represent the corresponding deviation due to transaction $k$.
The slack generator at node $s$ maintains overall power balance, and its injection deviation equals the negative sum of all non-slack generator disturbances:

$$\Delta P_{s}^{\mathrm{G}} = -\sum_{i \neq s} \Delta P_{i}^{\mathrm{G}}$$

For each bilateral transaction $k$ connecting node $s_{k}$ (source) and node $r_{k}$ (sink), the corresponding nodal injection deviations are

$$\Delta P_{s_{k},k}^{\mathrm{T}} = +\Delta T_{k}, \qquad \Delta P_{r_{k},k}^{\mathrm{T}} = -\Delta T_{k}$$

The total active-power injection deviation at node $n$ is the sum of its generation and transaction contributions:

$$\Delta P_{n} = \sum_{i \in \mathcal{G}_{n}} \Delta P_{i}^{\mathrm{G}} + \sum_{k \in \mathcal{T}_{n}} a_{nk}\, \Delta T_{k}$$

where $\mathcal{G}_{n}$ is the set of generators connected to node $n$, $\mathcal{T}_{n}$ is the set of transactions incident to node $n$, and the incidence coefficient $a_{nk}$ is defined as

$$a_{nk} = \begin{cases} +1, & \text{node } n \text{ is the source of transaction } k \\ -1, & \text{node } n \text{ is the sink of transaction } k \\ 0, & \text{otherwise} \end{cases}$$

This definition ensures global power-balance conservation:

$$\sum_{n} \Delta P_{n} = 0$$
By stacking all generator and transaction disturbances, define the disturbance vector:

$$\boldsymbol{x} = \left[\Delta\boldsymbol{P}_{\mathrm{G}}^{\mathrm{T}},\ \Delta\boldsymbol{T}^{\mathrm{T}}\right]^{\mathrm{T}}$$

where $\Delta\boldsymbol{P}_{\mathrm{G}}$ and $\Delta\boldsymbol{T}$ contain the generator and transaction disturbance components, respectively. The nodal-injection deviation vector can then be compactly expressed as:

$$\Delta\boldsymbol{P} = \boldsymbol{M}\boldsymbol{x}$$

with the node–disturbance mapping matrix defined by

$$\boldsymbol{M} = \left[\boldsymbol{M}_{\mathrm{G}}\ \ \boldsymbol{M}_{\mathrm{T}}\right]$$

Here, $\boldsymbol{M}_{\mathrm{G}}$ maps each non-slack generator disturbance to its corresponding node and enforces the balance at the slack bus, and $\boldsymbol{M}_{\mathrm{T}}$ assigns +1 to the source node and −1 to the sink node of each transaction. Each column of $\boldsymbol{M}$ thus represents the nodal injection pattern associated with one elementary disturbance.

The resulting stochastic nodal injection vector $\Delta\boldsymbol{P}$ serves as the probabilistic input for the subsequent power-flow computation. Substituting it into the network equations gives:

$$\boldsymbol{P} = \boldsymbol{P}_{0} + \boldsymbol{M}\boldsymbol{x}$$

where $\boldsymbol{P}_{0}$ denotes the nominal operating point of the system, and $\boldsymbol{M}\boldsymbol{x}$ represents the stochastic perturbation induced by generation and transaction uncertainties modeled through the Gaussian mixture framework.
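The construction of the node–disturbance mapping matrix can be illustrated with the small sketch below, which follows the sign conventions defined above; the four-node layout, generator placement, and transaction list are hypothetical and do not correspond to the case-study data.

```python
import numpy as np

n_nodes = 4                      # nodes 0-2 host generators, node 3 is the slack/balance node
gen_nodes = [0, 1, 2]            # non-slack generators and their connection nodes
slack = 3
transactions = [(0, 3), (1, 3), (2, 3), (0, 1), (1, 2)]   # (source node, sink node) pairs

# Generator block: each disturbance is injected at its own node and balanced at the slack bus
M_G = np.zeros((n_nodes, len(gen_nodes)))
for col, node in enumerate(gen_nodes):
    M_G[node, col] = 1.0
    M_G[slack, col] = -1.0       # slack generator absorbs the imbalance

# Transaction block: +1 at the source node, -1 at the sink node
M_T = np.zeros((n_nodes, len(transactions)))
for col, (src, snk) in enumerate(transactions):
    M_T[src, col] = 1.0
    M_T[snk, col] = -1.0

M = np.hstack([M_G, M_T])        # node-disturbance mapping matrix
print(M)
print("column sums (should all be 0):", M.sum(axis=0))
```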
The joint probability density function of the line power flows is calculated based on the Gaussian mixture model of the random variable $\boldsymbol{x}$. The network equations of the AC power system are modeled under the DC power flow framework for computational efficiency:

$$\boldsymbol{P} = \boldsymbol{B}\boldsymbol{\theta}$$

where $\boldsymbol{B}$ is the nodal susceptance matrix and $\boldsymbol{\theta}$ is the nodal voltage phase-angle vector. The active power flow on each branch can therefore be expressed as a linear combination of the nodal power injections through the sensitivity coefficients derived from the inverse of $\boldsymbol{B}$. This relationship reflects how stochastic nodal injection deviations are propagated through the network, allowing the evaluation of the branch power flows required for TRM analysis.

The power flow of line $m$ can then be obtained from the following network equation:

$$P_{L,m} = b_{m}\,\boldsymbol{A}_{m}^{\mathrm{T}}\boldsymbol{\theta}$$

where $b_{m}$ is the admittance of branch $m$ and $\boldsymbol{A}_{m}$ is the m-th column of the node–branch incidence matrix $\boldsymbol{A}$. The relationship between the line power flows and the random variable $\boldsymbol{x}$ is then derived by combining the preceding equations:

$$P_{L,m} = b_{m}\,\boldsymbol{A}_{m}^{\mathrm{T}}\boldsymbol{B}^{-1}\left(\boldsymbol{P}_{0} + \boldsymbol{M}\boldsymbol{x}\right)$$

To simplify the expression, we define the coefficient matrix $\boldsymbol{H}$ as:

$$\boldsymbol{H} = \boldsymbol{B}_{L}\boldsymbol{A}^{\mathrm{T}}\boldsymbol{B}^{-1}\boldsymbol{M}$$

where $\boldsymbol{B}_{L}$ is the diagonal matrix of branch admittances. Thus, the vector of line power flows can be rewritten in a compact form as:

$$\boldsymbol{P}_{L} = \boldsymbol{P}_{L,0} + \boldsymbol{H}\boldsymbol{x}$$

where $\boldsymbol{P}_{L,0} = \boldsymbol{B}_{L}\boldsymbol{A}^{\mathrm{T}}\boldsymbol{B}^{-1}\boldsymbol{P}_{0}$ denotes the nominal line power flows.
According to the properties of the Gaussian distribution, if $\boldsymbol{x} \sim \mathcal{N}(\boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k})$, then the random variable after the linear transformation $\boldsymbol{H}\boldsymbol{x}$ still obeys a Gaussian distribution, with $\boldsymbol{H}\boldsymbol{x} \sim \mathcal{N}(\boldsymbol{H}\boldsymbol{\mu}_{k}, \boldsymbol{H}\boldsymbol{\Sigma}_{k}\boldsymbol{H}^{\mathrm{T}})$. Accordingly, the joint probability density function $f(\boldsymbol{P}_{L})$ of the critical line power flows $\boldsymbol{P}_{L}$ is obtained, expressed as the following Gaussian mixture model [
35]:

$$f(\boldsymbol{P}_{L}) = \sum_{k=1}^{K} \omega_{k}\, \mathcal{N}\!\left(\boldsymbol{P}_{L} \mid \boldsymbol{P}_{L,0} + \boldsymbol{H}\boldsymbol{\mu}_{k},\ \boldsymbol{H}\boldsymbol{\Sigma}_{k}\boldsymbol{H}^{\mathrm{T}}\right)$$
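A sketch of this component-wise propagation is given below: each Gaussian component of the disturbance GMM is mapped through the linear relation to a Gaussian component of the line-flow mixture. The sensitivity matrix, nominal flows, and fitted GMM are illustrative placeholders, not the case-study values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Illustrative disturbance data: 8 elementary disturbances (3 generators + 5 transactions)
x_data = rng.beta(2.0, 5.0, size=(5000, 8)) - 0.3
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0).fit(x_data)

# Placeholder sensitivity matrix H (n_lines x n_disturbances) and nominal line flows
H = rng.normal(scale=0.4, size=(5, 8))
P_L0 = np.array([120.0, 95.0, 60.0, 150.0, 80.0])   # MW, illustrative

# Each Gaussian component maps to a Gaussian component of the line-flow mixture:
#   mean_k -> P_L0 + H mu_k,   cov_k -> H Sigma_k H^T,   weights unchanged
flow_means = P_L0 + gmm.means_ @ H.T                                # (K, n_lines)
flow_covs = np.einsum("ij,kjl,ml->kim", H, gmm.covariances_, H)     # (K, n_lines, n_lines)
print(flow_means.shape, flow_covs.shape, gmm.weights_.shape)
```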
The cumulative distribution function (CDF) of the power flow on line $m$ is then calculated as:

$$F(P_{L,m}) = \sum_{k=1}^{K} \omega_{k}\, \Phi\!\left(\frac{P_{L,m} - \mu_{L,m,k}}{\sigma_{L,m,k}}\right)$$

where $\Phi(\cdot)$ is the standard normal CDF, and $\mu_{L,m,k}$ and $\sigma_{L,m,k}$ are the mean and standard deviation of the kth mixture component of the flow on line $m$. The transmission reliability margin of a critical line is then calculated by taking the confidence level as $1-\alpha$ and defining the transmission reliability margin as [
42]:

$$\mathrm{TRM}_{m} = F^{-1}(1-\alpha) - P_{L,m,0}$$

This means that, under this transmission reliability margin, the probability that the actual power flow of the line does not exceed the maximum transmission capacity is $1-\alpha$, taking into account source–load fluctuations and the uncertainty of contract execution deviations; $\alpha = 5\%$ can be taken as a typical value in practical calculation applications.
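A minimal sketch of this quantile-based TRM computation for a single line is given below, inverting a one-dimensional mixture CDF by root finding; the component weights, means, and standard deviations are illustrative and not taken from the case study.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Illustrative 1-D mixture parameters for the flow on one line (weights, means in MW, stds in MW)
weights = np.array([0.5, 0.3, 0.2])
means = np.array([118.0, 127.0, 140.0])
stds = np.array([6.0, 9.0, 14.0])
P_L0 = 120.0          # nominal (scheduled) flow on the line, MW
alpha = 0.05          # risk level, i.e. 95% confidence

def mixture_cdf(p):
    return np.sum(weights * norm.cdf((p - means) / stds))

# Invert the mixture CDF numerically to obtain the (1 - alpha) quantile of the flow
lo, hi = means.min() - 10 * stds.max(), means.max() + 10 * stds.max()
quantile = brentq(lambda p: mixture_cdf(p) - (1.0 - alpha), lo, hi)

TRM = quantile - P_L0
print(f"95% quantile = {quantile:.1f} MW, TRM = {TRM:.1f} MW")
```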
This study accounts for both source–load fluctuations and transaction execution deviations, while explicitly incorporating the correlations among uncertain factors. By doing so, it improves the accuracy of TRM evaluation and provides more reliable ATC for coordinated cross-regional power trading and secure grid operation. Furthermore, the proposed method is derived analytically, avoiding the high computational cost of Monte Carlo simulations.
4. Case Study
4.1. Simulation Setup
In this study, the calculation example uses the four-region system shown in
Figure 5. Only the transmission capacity of inter-regional links is considered, so the intra-regional network can be ignored and each region can be simplified to a node. The sources and loads within each region are connected to this node. The 4-region system has the advantages of a simple structure, complete functionality, ease of analysis, and good scalability. The system includes reference nodes, multiple PQ nodes, AC interconnection branches, and typical power generation and trading disturbances, covering the core aspects of power flow calculation, sensitivity analysis, and TRM assessment. The system is of moderate scale, making it easy to control variables and visualize results, and is particularly suitable for method validation and result interpretation. Additionally, the structure has good scalability, allowing it to be smoothly expanded into larger-scale regional power grids or provincial power market models, laying the foundation for future research on AC/DC hybrid systems and power trading optimization.
The wind generation data used in this study are obtained from real operational measurements of a coastal wind farm in Jiangsu Province, China, recorded at hourly intervals. The data reflect the typical output fluctuations of large-scale wind power integration in East China. The load profiles and network parameters are adapted from provincial grid statistics and standard IEEE benchmark systems to ensure both realism and reproducibility of the case study.
The computational framework was developed in MATLAB R2023a, integrating the YALMIP optimization environment with the Gurobi 10.0 solver for model formulation and solution. Power-flow analysis and network initialization were conducted using the MATPOWER 7.1 toolbox. The statistical characterization of uncertainties, including Gaussian mixture model (GMM) fitting and Beta-distribution sampling, was implemented in Python 3.10 employing the Scikit-learn and NumPy libraries. This hybrid MATLAB–Python workflow ensures numerical robustness, computational efficiency, and reproducibility of the proposed probabilistic TRM assessment framework.
4.2. Illustrative Simulation of TRM
The stochastic data used in this section are synthetic samples generated from the fitted probability distributions of real operational data described in
Section 4.1. These samples preserve the statistical characteristics of actual wind generation and transaction deviations while enabling controlled simulation of TRM scenarios.
Regions 1, 2, and 3 each include a renewable generation unit and a load with correlated fluctuations. Their variations exhibit statistical consistency, reflecting the influence of common meteorological patterns and operational conditions. The probability density functions of the source–load fluctuations are shown in
Figure 6, and Region 4 serves as the power-balance node. Disturbance magnitude (MW) is plotted on the x-axis, and probability density is shown on the y-axis. These statistical characteristics form the basis for subsequent stochastic modeling and TRM evaluation.
There are a total of five inter-area trading contracts, and the contract execution deviation probability density function is shown in
Figure 7.
Figure 8 shows the dynamic output adjustments of generation units over time under concurrent generation and market trading disturbances, taking their temporal correlation characteristics into account. Each subplot corresponds to one renewable generator. The plotted curves represent the sum of the baseline generation and its stochastic disturbance modeled by a Beta distribution.
A Gaussian mixture model is employed to represent the joint probability distribution of the stochastic variables, including the source–load power deviations and the contract execution deviations. The resulting probability density functions of the line power-flow fluctuations are illustrated in
Figure 9. The red dashed line indicates the TRM for the branch, which is used to assess the system's resilience to uncertain perturbations at a given confidence level.
4.3. Comparative Accuracy Analysis of TRM with Different Methods
This section compares the TRM evaluation results obtained by different methods, thereby verifying the superiority of the proposed approach in terms of assessment accuracy.
The conventional fixed 5% TRM approach (
Table 3, Col. 3) offers simplicity in implementation, but its empirical and static nature prevents it from accurately capturing the stochastic fluctuations in renewable generation and market-induced power flow variations. Consequently, it either overestimates the necessary margin—resulting in wasted transmission capacity—or underestimates it, potentially compromising system reliability.
In contrast, the proposed correlation-aware GMM-based probabilistic TRM method (
Table 3, Col. 2) represents the core innovation of this study. The GMM framework captures the multi-modal and correlated uncertainties stemming from renewable generation, load, and contract execution deviations. As shown in
Table 3, the GMM-based TRM values (ranging from 25.5 MW to 117.3 MW) are consistently higher than those obtained by the fixed-margin approach, reflecting a more realistic and risk-aware allocation of transmission capacity. This ensures adequate security margins under uncertain operating conditions while avoiding excessive conservatism. Moreover, the correlation-aware GMM method provides superior accuracy and adaptability, enabling dynamic reliability evaluation aligned with system operation states, which is particularly valuable for power systems with high renewable penetration.
The Point Estimation Method (PEM) (
Table 3, Col. 4) serves as a computationally efficient comparative benchmark. By approximating probabilistic characteristics through a small number of representative points, PEM achieves faster evaluation but assumes symmetrical distributions and weak inter-variable correlation. As reflected in the results, PEM yields TRM values between 21.4 MW and 102.4 MW, moderately lower than those from GMM yet much higher than the fixed-margin results. This indicates that PEM captures first- and second-order statistical effects but fails to represent the complex correlations and non-Gaussian features inherent in real-world systems. Therefore, while PEM provides a reasonable balance between speed and accuracy, the GMM method delivers a fundamentally more comprehensive and robust TRM assessment, validating its significance as the main methodological advancement of this study.
This study conducted a comprehensive statistical analysis of 10,000 Monte Carlo samples incorporating system disturbances, comparing three margin-setting methodologies: the conventional fixed 5% margin approach, the proposed correlation-aware GMM-based probabilistic margin method, and the Point Estimation Method (2 m). The analysis focused on two key indicators: the proportion of samples exceeding the prescribed margin (
Table 4) and the average magnitude of margin violations (
Table 5).
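The two indicators can be computed from Monte Carlo samples as in the following sketch, where the violation magnitude is interpreted as the average overshoot beyond the margin (an assumption) and the flow samples and margin value are placeholders rather than case-study results.

```python
import numpy as np

rng = np.random.default_rng(6)

# Placeholder Monte Carlo samples of the disturbed flow on one branch (MW)
flow_samples = 120.0 + rng.normal(scale=12.0, size=10_000)

P_L0 = 120.0       # nominal flow on the branch, MW
margin = 20.0      # TRM under test (e.g. GMM-based or fixed 5%), MW

limit = P_L0 + margin
violations = flow_samples[flow_samples > limit]

violation_rate = len(violations) / len(flow_samples)
avg_violation = (violations - limit).mean() if len(violations) else 0.0

print(f"violation rate = {violation_rate:.2%}, average violation = {avg_violation:.2f} MW")
```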
The results reveal that the traditional fixed 5% margin method exhibits substantial limitations. Several branches show extremely high violation rates—35.60%, 58.81%, and 62.85% for Branches 3, 4, and 5, respectively—far exceeding the intended 5% confidence threshold. This clearly indicates that the fixed-margin rule cannot represent the stochastic and asymmetric characteristics of actual system disturbances, leading to either insufficient protection or overly conservative reserve settings that reduce transmission utilization efficiency.
In contrast, the proposed correlation-aware GMM-based probabilistic method demonstrates markedly superior performance. As shown in
Table 4, the GMM method maintains violation probabilities close to the theoretical 5% target for all branches (4.79–5.18%), achieving a nearly perfect match between statistical confidence and operational safety. Moreover,
Table 5 shows that the GMM approach yields the smallest average violation magnitudes (13.86–21.78 MW), confirming its capability to simultaneously ensure system reliability and minimize unnecessary reserve capacity. By capturing the multi-modal, correlated, and non-Gaussian nature of renewable and transaction uncertainties, the GMM framework produces adaptive reliability margins that align with real operating conditions—demonstrating both physical interpretability and robustness.
The Point Estimation Method (2 m) provides a useful benchmark between computational simplicity and probabilistic accuracy. It reduces the average violation rate compared with the fixed-margin approach (7.32–10.67%), yet its results remain notably less consistent with the 5% confidence target than those of the GMM. Likewise, the average magnitude of margin violations (19.28–33.24 MW) is higher than that of the GMM, indicating that while PEM can capture first- and second-order effects of random variations, it fails to fully model the correlation and nonlinearity among uncertainties.
Overall, the comparison highlights that the GMM-based probabilistic framework provides the most accurate and physically meaningful TRM estimation. It achieves the intended reliability level, minimizes margin violations, and offers a dynamic, correlation-aware alternative to static or semi-probabilistic methods—representing a significant advancement for secure and economic inter-regional transmission operation under uncertainty.