An Efficient Framework for Adequacy Evaluation through Extraction of Rare Load Curtailment Events in Composite Power Systems

Abstract: With the growing robustness of modern power systems, load curtailment events are becoming increasingly rare. Hence, the simulation of these events constitutes a challenge in the assessment of adequacy indices. Due to the rarity of load curtailment events, the standard Monte Carlo simulation (MCS) estimator of adequacy indices is not practical. Therefore, a framework based on the enhanced cross-entropy-based importance sampling (ECE-IS) method is introduced in this paper for computing the adequacy indices. The framework comprises two stages. Using the proposed ECE-IS method, the first stage identifies the samples or states of the nodal generation and load that are most significant to the adequacy indices estimators. In the second stage, the density of the input variables conditional on the load curtailment domain, obtained in the first stage, is used to compute the nodal and system adequacy indices. The performance of the ECE-IS method is verified through a comparison with the standard MCS method and with recent rare-event simulation techniques from the literature. The results confirm that the proposed method provides accurate estimates of the nodal and system adequacy indices (loss of load probability (LOLP) and expected power not supplied (EPNS)) with an appropriate convergence value and a low computation time.


Introduction
Power system reliability evaluation plays a crucial role in the decision-making process of power system development planning. The reliability evaluation process invariably involves consideration of adequacy and security concepts [1,2]. Security is defined as "the measure of how an electric power system can withstand sudden disturbances such as electric short circuits or unanticipated loss of system components", while adequacy is defined as "a measure of the ability of a bulk power system to supply the aggregate electrical demand and energy requirements of the customers at all times".
One way to reduce the computational burden of adequacy assessment is to approximate the limit-state function with a surrogate model, so that the reliability results depend on the accuracy of the surrogate model. Therefore, the accuracy of the surrogate model needs to be ensured to avoid adding uncertainty to the already existing input uncertainty. The main disadvantage of this technique is the difficulty of measuring the impact of the modeling errors on the reliability results. Compared with such approximation methods, sampling-based methods have the advantages of being insensitive to the complexity of limit-state functions, avoiding errors from approximations of the limit-state function, and being straightforward to apply. Thus, more efficient sampling techniques for overcoming the MCS computation burden are adopted in this paper.
In the context of reducing the number of states that need to be evaluated, variance reduction techniques enable us to extract the set of states that make a significant contribution to the evaluation of the adequacy indices. Since the variance of the MCS estimate is inversely proportional to the failure probability [5], variance reduction methods [21-36] have been developed to reduce the variance of the MCS estimate by generating samples that explore the rare load curtailment events, thereby shortening the computation time needed to obtain an accurate estimate of the adequacy indices. The term variance reduction covers various techniques, such as subset sampling [16], importance sampling [17-19], control variates [27], antithetic variates [28], stratified sampling [29], line sampling [30,31], and directional sampling [32]. For the sake of conciseness, subset simulation (SS) and importance sampling (IS) are reviewed in this paper, as they are the most substantial variance reduction approaches applied in adequacy studies.
SS is based on splitting the failure domain into a series of partial failure domains. This allows the probability of the failure event to be described as a product of conditional probabilities of the partial failure events. The main advantage of SS is its capability to handle complex limit-state functions (e.g., nonlinear, with possibly multiple failure regions). On the other hand, SS has some drawbacks. Firstly, the variance estimator is not directly calculated by an analytical formula, as in the MCS and IS techniques, but must be evaluated by repetition. Secondly, even though SS provides a variance reduction compared to MCS, the number of samples needed to achieve convergence is larger than that needed with IS techniques [16]. The IS techniques propose an alternate sampling density, called the ISD, which approximates the density of the input variables conditional on the failure domain.
The optimal selection of the ISD can result in zero variance of the estimate of the failure probability. In practice, however, sampling from the theoretically optimal ISD is not feasible, because it requires knowing the failure probability and the failure domain in advance. To overcome this problem, the cross-entropy (CE) [33,34] and sequential importance sampling (SIS) [35,36] techniques were developed to approximate the optimal ISD in a sequential manner.
The CE method determines the ISD iteratively by defining a sequence of more frequent events (intermediate failure events) in several probability spaces that gradually approach the target theoretical ISD of the small failure event. The densities of the intermediate failure events are chosen from a parametric family; typically, they belong to the same family as the density of the input random variables, and the initial intermediate sampling density is chosen as the original density of the random variables. For each intermediate failure event k, the CE method identifies the parameter of the chosen density model by minimizing the Kullback-Leibler (KL) divergence between the optimal ISD of the k-th intermediate failure event and the chosen probability density. The optimal ISD of the k-th intermediate failure event is defined based on the intermediate failure threshold ζk ≥ 0, which is estimated such that a fraction θ ∈ [0.01, 0.1] of the limit-state function values of the samples from the fitted parametric density obtained at the previous sampling step fall below this threshold ζk. Starting from an initial sampling density, the density parameter updating is executed until the threshold ζk becomes non-positive (i.e., until at least a fraction θ of the limit-state function values of the samples fall in the actual failure domain). In this case, the target optimal ISD is approximated well enough by the current parametric density. One advantage of the CE-IS approach is that analytical updating formulas can be derived for the density parameters when the probability densities belong to the natural exponential family. Another advantage is that, similarly to the standard MCS, the estimation error is directly controlled using the estimator of the variance [34]. However, the major difficulty is to construct efficient intermediate densities that allow the adaptive sampling process to approach the target optimal ISD.
This paper presents an improved version of CE-IS incorporating two enhancements. The first is a new updating scheme for the parameter of the intermediate density. In the proposed method, the indicator function of the intermediate failure events is defined by a smooth approximation function instead of the step function used in the traditional CE-IS method. This allows all the samples from the intermediate sampling levels to be exploited in the density parameter updating, contrary to the traditional CE-IS method, which uses only a small portion of the samples. In addition, a smooth shifting of the optimal ISD of the intermediate failure events towards the target optimal ISD of the small failure event is achieved. This effective use of the intermediate samples leads to a better estimate of the density parameter and hence to a smaller sampling error in the corresponding probability estimate. The second enhancement is the use, as a stopping criterion, of the coefficient of variation of the weights of the smooth approximation of the optimal ISD of the intermediate failure events with respect to the target optimal ISD, which improves the robustness of convergence. These modifications contribute to obtaining an accurate optimal ISD of the nodal generations and loads, so that the nodal and system adequacy indices are computed accurately. We compare the performance of the proposed method with the traditional CE-IS and with recently proposed techniques in the literature, namely sequential importance sampling and subset simulation.
The paper is structured as follows. After this introduction, Section 2 illustrates the principles of adequacy assessment in power systems and defines the mathematical objective function and constraints of the optimal power flow (OPF) algorithm. Section 3 describes the framework of adequacy indices evaluation based on the proposed ECE-IS method and presents its mathematical description. Section 4 depicts the case study and numerical results. The results are discussed in Section 5. Section 6 outlines the main findings.

Principles of Adequacy Evaluation in Power Systems
For adequacy evaluation purposes, a bulk power system model is normally represented by a group of areas or market nodes connected by transmission lines. Then, the uncertainties of the total available generation capacity and the total required load are depicted for each area or node. The available capacity of the generators in each node can be defined by a binomial distribution. Figure 1 shows the binomial PDF for the failure of m generators in a power plant comprising y generators with the same probability of failure (q = 5%). The probability of failure is calculated using Equation (1):

P(m) = (y!/(m!(y − m)!)) q^m (1 − q)^(y−m). (1)

For a large number of generators, according to the de Moivre-Laplace theorem, the binomial distribution can be approximated by a Gaussian one; this approach is adopted in this paper. The Gaussian distribution is also used to describe the aggregated load connected to each node.
In order to calculate realistic adequacy indices for each node, it is necessary to take into account both the load curtailment strategy and the physical constraints [37,38]. For each sample within the simulation method, a power flow model must be solved to detect the state of the transmission system. If there are transmission lines outside their loading limits, an OPF algorithm is executed to apply corrective actions (e.g., generation rescheduling and/or load curtailment) by solving an optimization problem. The objective of the optimization problem is to minimize the total amount of load curtailment, subject to the operating limits of the generating units and transmission circuits and to the power flow equations. As each sampled state requires several evaluation procedures, a linear representation of the power flow equations is often adopted in HL2 adequacy studies to significantly decrease the computational effort [39]. This representation cannot analyze the impact of bus voltages and reactive power on the system adequacy.
This simplification is acceptable, since adequacy studies mainly deal with long-term power system analysis [16]. Hence, a DC-power-flow-based optimization method is used, because it is simpler (being a linear model) and computationally faster than AC power flow.
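The generation model described above can be sketched in a few lines; the following compares the binomial outage PMF of Equation (1) with its de Moivre-Laplace Gaussian approximation (the plant size of 40 units is a hypothetical example, not taken from the paper):

```python
import math

def outage_pmf(m, y, q):
    """Probability that exactly m of y identical generators are on outage,
    each failing independently with probability q (binomial PMF)."""
    return math.comb(y, m) * q**m * (1 - q)**(y - m)

def gaussian_approx(m, y, q):
    """De Moivre-Laplace approximation of the same PMF:
    a N(mu = y*q, sigma^2 = y*q*(1-q)) density evaluated at m."""
    mu, var = y * q, y * q * (1 - q)
    return math.exp(-(m - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# For a hypothetical plant with 40 units and q = 5%, the two curves are close.
y, q = 40, 0.05
for m in range(5):
    print(m, round(outage_pmf(m, y, q), 4), round(gaussian_approx(m, y, q), 4))
```

The approximation error shrinks as the number of units grows, which is why the Gaussian model is adopted for large aggregated generation and load.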

Objective Function
A load curtailment sharing philosophy is considered in this work: unserved demand is shared across all the nodes of the power system, and a priority order policy for the nodes is implemented through weight factors in the objective function [18]. The objective function is presented in a quadratic form:

min Σ_{d=1}^{n} ωd L²curt,d, (2)

where ωd is the weight (priority) factor of node d and Lcurt,d is the curtailed load at node d. In addition to the objective function, the power system physical constraints are of paramount importance and strongly affect the resulting adequacy indices.

Constraints
The main aim of adequacy evaluation is to calculate the adequacy indices of a power system, without paying special attention to voltage drops, congestion, and frequency problems of the grid [2]. The nodal power balance constraint can be written in the following form:

Gd − (Ld − Lcurt,d) = Σ_{l ∈ Ωd} Pl, d = 1, . . . , n, (3)

where Gd is the generation at node d, Ld the required load, Lcurt,d the curtailed load, and Ωd the set of lines connected to node d. In addition to (3), a system power balance equation is used:

Σ_{d=1}^{n} Gd = Σ_{d=1}^{n} (Ld − Lcurt,d) + Ploss. (4)

The power flows vector is calculated using a simplified form of the DC power flow equations:

Pl = (θi − θj)/xl, (5)

where θi and θj are the voltage angles at the terminal nodes of line l and xl is its reactance. Power losses for each Monte Carlo simulation sample are calculated based on the assumption that all nodal voltages are equal to the nominal ones. Power flows through transmission lines are limited by the maximum transmission line capacities:

−Pl_max ≤ Pl ≤ Pl_max. (6)

The generation and curtailed load in each node are restricted by the supply and demand bid limits:

0 ≤ Gd ≤ Gd_max, 0 ≤ Lcurt,d ≤ Ld. (7)
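The interaction of these constraints can be illustrated with a deliberately reduced two-bus case (pure Python, no OPF solver; all numbers are hypothetical and the single-line topology removes the need for an angle-based flow solution):

```python
def two_bus_curtailment(gen_cap, load, line_cap):
    """Minimal two-bus illustration of the corrective actions: all generation
    sits at bus 1, all load at bus 2, joined by a single line. The deliverable
    power is limited by both the available generation and the line capacity;
    any shortfall becomes load curtailment (nodal balance is kept implicitly)."""
    deliverable = min(gen_cap, line_cap)      # generation limit and line limit
    curtailment = max(0.0, load - deliverable)
    flow = load - curtailment                 # power actually delivered to bus 2
    return flow, curtailment

# A surplus bus whose export is locked by the line limit:
flow, curt = two_bus_curtailment(gen_cap=100.0, load=80.0, line_cap=50.0)
print(flow, curt)  # 50.0 delivered, 30.0 MW curtailed
```

In the full model the same trade-off is resolved jointly over all nodes and lines by the DC-OPF, which is why curtailment can appear even when total generation exceeds total load.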

A Framework of Adequacy Indices Evaluation
The framework includes two stages. In the first stage, the samples or states of the nodal available generations and required loads that lead to load curtailment events are extracted. These samples are used in the second stage to compute the adequacy indices. The adequacy indices adopted here are loss of load probability (LOLP) and expected power not supplied (EPNS) for system and nodes.
The load curtailment event is defined as the inability to satisfy the loads at all nodes without violating the system operating constraints, such as the limited capacity of transmission lines. Thus, a load curtailment event may result from low available generation, high required load, the limited capacity of transmission lines, or a union of some (or all) of the above. To carry out the first stage's aim, the occurrence of load curtailment is verified at each sampled state of the uncertain inputs x_i = [G1, . . . , Gn, L1, . . . , Ln], i = 1, . . . , N, where N is the number of samples and n is the number of nodes. Suppose the set F is the failure domain in the input parameter space, i.e., the set of states x_i that lead to a load curtailment in the system. The failure domain F is expressed by a limit-state function S(x_i) as follows:

F = {x : S(x) ≤ 0}.

The function S(x_i) defines the degree of deficiency of system state x_i. It equals the negative of the amount of load curtailment when the state fails, i.e., when Σ_{d=1}^{n} Lcurt,d(x_i) > 0; otherwise, the system deficiency is defined as the sum of the differences between the nodal available generation capacities and the required loads, as shown in (10):

S(x_i) = −Σ_{d=1}^{n} Lcurt,d(x_i) if Σ_{d=1}^{n} Lcurt,d(x_i) > 0; S(x_i) = Σ_{d=1}^{n} (Gd(x_i) − Ld(x_i)) otherwise. (10)

Since the target load curtailment event has a small probability in the original sample space, the limit-state function is used to define a sequence of more frequent events (intermediate failure events) in several probability spaces. This is illustrated in detail in the following subsections. In (10), the amount of load curtailment, or power not supplied (PNS), is computed through the DC-OPF algorithm. The OPF algorithm includes two actions, generation redispatch and load curtailment, that aim to minimize the system load curtailment and satisfy the security constraints of the transmission network, as illustrated in the previous section.
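A minimal sketch of the limit-state evaluation described above; the sign convention (failure domain S ≤ 0, so that load curtailment corresponds to S(x) < 0, consistent with the results section) is an interpretation of the text:

```python
def limit_state(gen, load, pns):
    """Sketch of the limit-state function S(x), assuming failure domain S <= 0:
    - if the OPF reports unserved power (pns > 0), S is the negative of the
      total load curtailment;
    - otherwise S is the total nodal generation surplus.
    `gen` and `load` are per-node lists; `pns` is the OPF's total curtailment."""
    if pns > 0:
        return -pns
    return sum(g - l for g, l in zip(gen, load))

print(limit_state([100, 50], [60, 40], pns=0.0))   # healthy state: S = 50
print(limit_state([100, 50], [60, 40], pns=12.5))  # failed state: S = -12.5
```

With this convention, the thresholds ζk introduced below can be driven down towards zero: states with small positive S are "almost failing" and define the intermediate events.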
For estimating the target failure probability (load curtailment probability), the 2n-variate normal density of the uncertain inputs is expressed by g(x; w), which is characterized by the mean µ and covariance Σ, so that w = [µ; Σ]. To simplify the notation, the set of vectors {x_i, i = 1, . . . , N} is denoted by x. Hence, the probability of failure can be computed by the following expression:

P_F = E_g[L_F(x)] = ∫ L_F(x) g(x; w) dx,

in which E_g indicates that the expectation is taken with respect to the density g, and L_F(x) denotes the indicator function of the failure domain, i.e., L_F(x) = 1 if x ∈ F and 0 otherwise. The MCS estimator for P_F is

P̂_F = (1/N) Σ_{i=1}^{N} L_F(x_i),

where the x_i are independent samples from g(x; w). The estimator is unbiased, and its variance is

Var(P̂_F) = P_F(1 − P_F)/N.

The coefficient of variation CV is considered as the error measure for the P̂_F estimator. The squared CV is given by

CV² = Var(P̂_F)/P_F² = (1 − P_F)/(N P_F).

Hence, the CV of MCS is approximately (N P_F)^(−1/2), and so, for a small P_F, the number of samples N needed to obtain an accurate estimate is large. To improve the efficiency of MCS, the proposed ECE-IS method has been developed as a variance reduction technique that reduces the variance of the MCS estimator. We first revise the traditional CE-IS as the basis of the ECE-IS.
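The MCS estimator and its CV can be sketched as follows; the one-dimensional limit-state function used here is a toy stand-in for the system model, chosen so that the exact answer is known:

```python
import random, math

def mcs_failure_probability(limit_state, sampler, n):
    """Standard MCS estimate of P_F = P(S(x) <= 0) and its estimated CV."""
    hits = sum(1 for _ in range(n) if limit_state(sampler()) <= 0)
    p_hat = hits / n
    cv = math.sqrt((1 - p_hat) / (n * p_hat)) if hits else float("inf")
    return p_hat, cv

# Toy example: S(x) = x + 2 with x ~ N(0, 1), so P_F = P(x <= -2) ~ 0.0228.
random.seed(1)
p, cv = mcs_failure_probability(lambda x: x + 2, lambda: random.gauss(0, 1), 200_000)
print(p, cv)
```

For this moderately rare event 200,000 crude samples already give a CV of a few percent; for the much rarer curtailment events of a robust system the required N grows as 1/P_F, which motivates the IS machinery below.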

Implementation of CE-IS Method
IS introduces an alternate sampling density q(x), termed the ISD. A proper selection of q(x) is required to represent the failure domain of the inputs accurately. The probability of failure is computed with respect to q(x) and rewritten in the following manner:

P_F = E_q[L_F(x) W(x)],

where W(x) = g(x; w)/q(x) is the likelihood ratio or importance weight function, expressed as the ratio of the original and proposal densities. The IS estimate of P_F is given by:

P̂_F = (1/N) Σ_{i=1}^{N} L_F(x_i) W(x_i),

in which {x_i, i = 1, . . . , N} are identically distributed samples drawn from q(x). The optimal ISD q*(x) is the one that minimizes the variance of the P̂_F estimator. The theoretically optimal ISD, leading to zero variance, is given by the following equation [5]:

q*(x) = L_F(x) g(x; w)/P_F.

Since the optimal ISD depends on the unknown quantities P_F and L_F(x), as shown above, it cannot be computed directly. The CE sampling technique is used to find a near-optimal ISD by fitting a parametric density model; it exploits the samples from the intermediate sampling levels to fit the selected parametric density. The parameter vector u is defined by minimizing the cross entropy, or KL divergence, between the unknown optimal ISD q*(x) and the selected probability density q(x; u). In this work, the selected q(x; u) is a multivariate normal probability distribution with u = [µ_IS; Σ_IS]. The cross entropy between q*(x) and q(x; u) can be described as follows (see [17]):

D(q*, q(·; u)) = E_{q*}[ln q*(x) − ln q(x; u)].

Then, the optimization problem can be expressed in the following manner:

u* = argmax_u E_{q*}[ln q(x; u)].

Substituting the expression for q*(x) yields

u* = argmax_u (1/P_F) E_g[L_F(x) ln q(x; u)].

The CE method solves this optimization problem iteratively by defining a series of intermediate densities q(x; u_k), k = 1, . . . , NT, that gradually approach the target density representing the failure region well, as shown in Figure 2. The intermediate failure domain F_k is defined using a threshold ζk as stated in (19):

F_k = {x : S(x) ≤ ζk}. (19)
ζk is calculated as the θ-quantile of the limit-state function values S(x_i), sorted from smallest to largest, of the samples from the fitted parametric density obtained at the previous step, q(x; u_{k−1}). For each intermediate failure event k, the optimization problem is solved to obtain its optimal parameter u_k using the samples distributed according to q(x; u_{k−1}). The aim is to obtain the final parameter vector u_NT, which approximates the solution of the CE optimization problem above.
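Before the adaptive loop, the basic fixed-proposal IS estimator can be illustrated on a one-dimensional toy problem; here the shift mu_is = 4 is chosen by hand, which is exactly the choice the CE iterations automate:

```python
import random, math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def is_estimate(n, gamma=4.0, mu_is=4.0):
    """Fixed-density IS estimate of P(x > gamma) for x ~ N(0, 1), using the
    shifted proposal q = N(mu_is, 1); W(x) = g(x)/q(x) is the importance weight."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(mu_is, 1.0)
        if x > gamma:                                               # indicator
            total += normal_pdf(x, 0, 1) / normal_pdf(x, mu_is, 1)  # weight W(x)
    return total / n

random.seed(2)
p_is = is_estimate(20_000)
print(p_is)   # close to the exact value 3.17e-5
```

With only 20,000 samples this estimator resolves a probability of order 1e-5, which crude MCS would need millions of samples to see at all; the catch is that a badly placed proposal mean would be just as cheap and badly wrong, hence the adaptive construction.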
Starting from the initial parameter vector u0, each subsequent u_k is evaluated by solving the CE optimization problem (20), with the target optimal ISD set to q_k*(x), the optimal ISD of the k-th intermediate failure event. u0 is taken as the nominal parameter vector of the input variables, i.e., w:

u_k = argmax_u (1/N) Σ_{i=1}^{N} W_k(x_i) ln q(x_i; u), (20)

in which W_k(x_i) = q_k*(x_i)/q(x_i; u_{k−1}). The procedure is repeated until ζk becomes negative, i.e., until at least θN samples fall in the actual failure domain, where θ ∈ [0.01, 0.1] [17]. Then, NT is set to the current event k, the optimal ISD is approximated by the density q(x; u_{NT−1}), and the probability of failure is estimated as:

P̂_F = (1/N) Σ_{i=1}^{N} L_F(x_i) W(x_i), with x_i drawn from q(x; u_{NT−1}).
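A compact one-dimensional sketch of this adaptive CE-IS loop, assuming a Gaussian parametric family with a mean-only update (the full method updates both the mean and covariance of a 2n-variate density):

```python
import random, math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def ce_is(limit_state, n=2000, theta=0.1, max_iter=50):
    """Toy CE-IS: proposals stay in the family N(mu, 1); mu plays the role of u_k."""
    mu = 0.0                                       # u_0 = nominal parameter
    for _ in range(max_iter):
        xs = [random.gauss(mu, 1.0) for _ in range(n)]
        svals = sorted(limit_state(x) for x in xs)
        zeta = max(svals[int(theta * n)], 0.0)     # theta-quantile, floored at 0
        # weighted mean over the "elite" samples S(x) <= zeta (CE update)
        num = den = 0.0
        for x in xs:
            if limit_state(x) <= zeta:
                w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, mu, 1.0)
                num += w * x
                den += w
        mu = num / den
        if zeta == 0.0:                            # enough samples already fail
            break
    # final IS estimate with the last fitted proposal
    total = 0.0
    for _ in range(n):
        x = random.gauss(mu, 1.0)
        if limit_state(x) <= 0.0:
            total += normal_pdf(x, 0.0, 1.0) / normal_pdf(x, mu, 1.0)
    return total / n

# Rare event S(x) = 4 - x <= 0, i.e. P(x >= 4) ~ 3.17e-5 for x ~ N(0, 1).
random.seed(3)
p_ce = ce_is(lambda x: 4.0 - x)
print(p_ce)
```

On this toy problem the loop typically reaches the failure region in three or four levels, after which the final proposal concentrates near x = 4 and the weighted estimate recovers the rare-event probability from a few thousand samples per level.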

Implementation of ECE-IS Method
In the proposed ECE-IS method, the step indicator function of the intermediate failure events is replaced by a smooth approximation. The indicator failure function can be defined as follows:

L_Fk(x; δk) = Φ(−S(x)/δk),

where δk is the control parameter of the function bandwidth, and Φ is the standard normal CDF. When δk approaches zero, the smooth function approaches the target step indicator function. Hence, δ0 > δ1 > · · · > δNT > 0 defines a decreasing series of bandwidths, as shown in Figure 3.
Using the smooth function L_Fk, the optimization problem (20) becomes:

u_k = argmax_u (1/N) Σ_{i=1}^{N} W_k(x_i; u_{k−1}, δk) ln q(x_i; u), with W_k(x; u_{k−1}, δk) = Φ(−S(x)/δk) g(x; w)/q(x; u_{k−1}).

δk is determined such that the optimal ISD q_k*(x) is approximated well enough by the samples drawn from q(x; u_{k−1}), i.e., such that the variance of the importance weights W_k(x; u_{k−1}, δk) is small. This is done by minimizing the difference between the CV of the weights W_k(x_i; u_{k−1}, δk), i = 1, . . . , N, and the specified CV_target at each intermediate event k. As shown in Algorithm 1, starting with δ0 = ∞ and u0 as the nominal parameter vector, this procedure is iterated and stopped when the CV of the weights of the present smooth approximation of the optimal ISD of the intermediate failure events, taken with respect to the target optimal ISD, is lower than CV_target. Then, NT is set to the current event k, and the optimal ISD is approximated well enough by the density q(x; u_{NT−1}). Utilizing the CV of the weights as the stopping criterion, instead of the parametric density q(x; u_{k−1}), improves the robustness of the method's convergence, as will be shown in the results section.
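The smooth indicator and its bandwidth behaviour can be sketched as follows (Φ is evaluated through the error function):

```python
import math

def smooth_indicator(s, delta):
    """Smooth approximation of the failure indicator I(S(x) <= 0):
    Phi(-s/delta), with Phi the standard normal CDF (via erf).
    As delta -> 0 the function tends to the step indicator."""
    return 0.5 * (1.0 + math.erf(-s / (delta * math.sqrt(2.0))))

for delta in (2.0, 0.5, 0.01):
    # value at a slightly "safe" state S = 0.3, for shrinking bandwidths
    print(delta, round(smooth_indicator(0.3, delta), 4))
print(smooth_indicator(0.0, 1.0))   # exactly 0.5 at the limit state, any delta
```

With a wide bandwidth every sample receives a non-negligible weight, which is precisely why all intermediate samples contribute to the parameter update instead of only the θ-fraction of elite samples used by the step indicator.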
The samples from the density q(x; u_{NT−1}) are used in the second stage, as shown in Algorithm 1, to compute the adequacy indices (LOLP, EPNS) for each node and for the system. With W(x_i) = g(x_i; w)/q(x_i; u_{NT−1}), the system LOLP and EPNS can be expressed as follows:

LOLP = (1/N) Σ_{i=1}^{N} L_F(x_i) W(x_i), (25)

EPNS = (1/N) Σ_{i=1}^{N} (Σ_{d=1}^{n} Lcurt,d(x_i)) W(x_i). (26)

For the nodes, the LOLP and EPNS can be rewritten as follows:

LOLP_d = (1/N) Σ_{i=1}^{N} L_{F,d}(x_i) W(x_i), (27)

EPNS_d = (1/N) Σ_{i=1}^{N} Lcurt,d(x_i) W(x_i), (28)

in which L_{F,d}(x_i) = 1 if Lcurt,d(x_i) > 0 and 0 otherwise.
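At the system level the second-stage estimators reduce to weighted sums over the extracted samples; a minimal sketch, with hypothetical weights and curtailment values:

```python
def adequacy_indices(samples):
    """Second-stage estimators: `samples` is a list of (weight, pns) pairs,
    where weight = g(x_i; w)/q(x_i; u_NT-1) and pns is the total unserved
    power of that sampled state (0 for healthy states)."""
    n = len(samples)
    lolp = sum(w for w, pns in samples if pns > 0) / n
    epns = sum(w * pns for w, pns in samples) / n
    return lolp, epns

# Four hypothetical weighted states, two of them with curtailment:
states = [(0.02, 12.0), (0.05, 0.0), (0.01, 30.0), (0.04, 0.0)]
lolp, epns = adequacy_indices(states)
print(lolp, epns)   # LOLP ~ 0.0075, EPNS ~ 0.135 (up to float rounding)
```

Because the extracted states lie mostly in the curtailment domain, the weights are small but almost every sample contributes to the sums, which is the source of the variance reduction.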

Finally, the LOLP and EPNS indices and their coefficients of variation (convergence) are calculated for each node and for the system, as shown in Equations (25) to (28).


Results
The proposed method is tested and evaluated on the five-bus test scheme presented in Figure 4. The data for the buses and transmission lines are presented in Table 1. In terms of the mean values of the generation and demand capacities, the test scheme includes two surplus buses, one balanced bus, and two deficient buses. The probabilistic nature of the generation and demand can result in different combinations of operational states of the power system buses. All the buses are connected by transmission lines of the same transmission capacity but different resistance and reactance. All tests are carried out using MATLAB 2017 on an Intel Core i5 personal computer with 8 GB of memory. The standard MCS method is used as the benchmark. The maximum number of simulation samples for MCS is 5 × 10^4, and a coefficient of variation (convergence) of 5% for both system adequacy indices (LOLP, EPNS) is used as the stopping criterion. The rare events simulation techniques (CE-IS [33], SIS [35], SS [16], ECE-IS) use the following parameter values: θ = 0.1, CV_target = 0.05, maximum number of iterations = 50, and number of samples per iteration = 2000.
In Figure 5, the PDFs of the system limit-state function S(x) are shown for the different methods (CE-IS, SIS, SS, ECE-IS) and MCS. The proposed ECE-IS method has the largest probability of S(x) < 0, namely 80%, in comparison with 23%, 55%, 32%, and 20% for MCS, CE-IS, SIS, and SS, respectively. This means that the samples extracted by the ECE-IS method, 80% of the total sampled system states, are more likely to cause load curtailment events. Once the samples from the optimal distributions of the nodal generations and loads are obtained, the adequacy indices (EPNS, LOLP) can be evaluated. For test purposes, the EPNS and LOLP are calculated for all the buses. The histogram in Figure 6 shows the computation time and the EPNS values obtained by the different methods. The computation time includes the time spent in the first stage extracting the rare load curtailment events.

Table 1. Bus and transmission line data of the five-bus test scheme (column labels inferred from the extracted values).

Bus  Generation mean (MW)  Generation SD (MW)  Load mean (MW)  Load SD (MW)
1    1700                  170                 600             60
2    50                    5                   50              5
3    1440                  144                 900             90
4    1200                  120                 1500            150
5    1500                  150                 2000            200

Line (from-to)  Resistance  Reactance  Capacity (MW)
1-4             5           50         500
2-3             1           10         500
2-4             3           33         500
3-4             2           10         500
4-5             5           50         500

The load curtailment sharing philosophy results in relatively small EPNS values at the potentially deficient buses 4 and 5. At the same time, the limited transmission line capacity does not allow unlimited power supply from the surplus buses, and so the greatest EPNS occurs at bus 5. From the simulation results, it is clear that all the discussed methods significantly reduce the computational burden, with the CE-IS method being the most computationally effective. It can also be seen that the method used to extract the load curtailment events can significantly affect the results. Figure 7 shows the LOLP values for all the buses of the test scheme.
Theoretically, in the case of unlimited transmission line capacity, the objective function (1) must result in an equal LOLP at all buses. However, in terms of mean values, the power surplus of bus 1 is 10% higher than the transmission capacity of the connected lines. Part of the possible power supply from bus 1 is therefore locked, and as a result, in some deficient test-scheme state samples, the demand at the bus is not curtailed. The histogram shows that the LOLP of bus 1 is relatively small in comparison with the other buses. Figures 8 and 9 show the errors of the methods relative to the standard MCS method. For almost all buses, the EPNS values computed using the proposed method are accurate to within 8%. Even though the ECE-IS method is less computationally effective than the other rare-event methods, it provides significantly more accurate results; compared with the traditional MCS approach, it is nearly eleven times faster. The ECE-IS approach also generally yields the smallest LOLP errors. Note that, for LOLP, even small deviations introduce a large relative error.
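The weighted estimators used in the second stage can be sketched in a few lines. The single-area Gaussian model, the distorted mean, and all numeric values below are illustrative assumptions, not the paper's test scheme; the point is only the likelihood-ratio correction of samples drawn from a distorted density:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-area model (illustrative parameters, not the paper's data):
# S(x) = available generation - load; load is curtailed when S(x) < 0.
MEAN_G, STD_G = 100.0, 10.0   # MW
MEAN_L, STD_L = 70.0, 7.0     # MW

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Draw generation from a density distorted toward deficit states (the
# distorted mean stands in for the first stage's output), then correct
# each sample with the likelihood ratio w = f(x) / g(x).
N = 50_000
MEAN_G_IS = 75.0              # assumed distorted mean
gen = rng.normal(MEAN_G_IS, STD_G, N)
load = rng.normal(MEAN_L, STD_L, N)
w = normal_pdf(gen, MEAN_G, STD_G) / normal_pdf(gen, MEAN_G_IS, STD_G)

s = gen - load                                      # limit-state values
curt = s < 0                                        # curtailment indicator
lolp = float(np.mean(w * curt))                     # weighted LOLP estimator
epns = float(np.mean(w * np.where(curt, -s, 0.0)))  # weighted EPNS (MW)
```

Because most of the distorted samples fall in the curtailment domain while the weights stay bounded there, the weighted estimators converge with far fewer samples than crude MCS would need for the same event.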

Discussion
A trade-off between computational accuracy and computational efficiency has led to a wide range of approaches for power system adequacy evaluation. Therefore, the comparison among methods must include the number of samples, or the computation time, required to reach accurate adequacy indices with acceptable convergence. In terms of the number of samples, the system LOLP and EPNS indices and their convergence using the standard MCS method and the different rare-event simulation approaches are shown in Figures 10-13. For MCS, the number of samples needed to reach a convergence of 5% for both the system LOLP and EPNS indices is 38,945. The CE-IS and SS methods have the fastest convergence rates, but they do not produce accurate LOLP values. The ECE-IS performs better, reaching a LOLP of 0.0099 with 4.7% convergence and an EPNS of 8.95 MW with 2.4% convergence, while the standard MCS achieves 0.0102 with 5% and 8.8 MW with 2.5% for LOLP and EPNS, respectively. Therefore, the ECE-IS can achieve the same accuracy as the standard MCS method for both the nodal indices, as shown in Figures 8 and 9, and the system indices, as shown in Figures 10 and 12. In terms of computational efficiency, the ECE-IS needs seven iterations to extract 2000 samples representing the load curtailment events, so the total number of samples is 14,000. Hence, the ECE-IS achieves accurate results with a smaller number of samples and lower computation time; as shown in Figure 6, an 11-times speedup is achieved.
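The 5% convergence figure is the usual coefficient-of-variation (beta) stopping rule for the crude MCS estimator, beta = sqrt((1 − p̂)/(N·p̂)). Solving for beta = 0.05 at p ≈ 0.01 gives N ≈ (1 − p)/(beta²·p) ≈ 3.96 × 10⁴, consistent in magnitude with the 38,945 samples reported. A minimal sketch, with an assumed Bernoulli draw standing in for the evaluation of each sampled system state:

```python
import numpy as np

rng = np.random.default_rng(1)

# Crude MCS with the coefficient-of-variation (beta) stopping rule.
# The Bernoulli draw stands in for evaluating one sampled system state;
# P_TRUE is an assumed rare-event probability, not the paper's system.
P_TRUE = 0.01
TARGET_BETA = 0.05                # 5% convergence criterion

n = hits = 0
beta = float("inf")
while beta > TARGET_BETA:
    n += 1
    hits += rng.random() < P_TRUE            # 1 if this state sheds load
    if hits >= 10:                           # avoid spurious early stops
        p_hat = hits / n
        beta = np.sqrt((1.0 - p_hat) / (n * p_hat))  # CoV of the estimator
```

The 1/sqrt(N·p) scaling of beta is exactly why crude MCS becomes impractical as load curtailment events grow rarer.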

For comparing the robustness of the ECE-IS method with the other rare-event simulation techniques, the system LOLP, LOLP relative bias, and LOLP convergence values are listed in Table 2 for different numbers of samples per iteration. The sample-analyzing times differ among the methods; hence, the computation time is also included in Table 2. Taking the LOLP value (0.0102) obtained by the standard MCS method as the reference, the relative bias of the LOLP values is computed as bias (%) = |LOLP − LOLP_MCS| / LOLP_MCS × 100. From the results in Table 2, convergence acceleration for all methods is achieved by increasing the number of samples. However, there is a significant accuracy loss with a small number of samples for the CE-IS and SS methods, while the SIS achieves a proper LOLP bias at 1000 samples but poor convergence. The ECE-IS, in contrast, remains effective for a small number of samples: it achieves a small LOLP bias (17%) with 21% convergence at 250 samples. To verify the computational accuracy and efficiency of the proposed method as the dimension of the power network and the number of random variables grow, the adequacy-index results are also presented for the IEEE-RTS 79 system. The system includes 24 buses and 32 generators divided among 14 generating stations, totaling 3405 MW of installed capacity. The annual system peak load is 2850 MW; more information on the IEEE-RTS 79 can be found in [40]. The mean and standard deviation values of the loads were taken as the peak value and ±10% of the peak value, respectively. A coefficient of variation (convergence) of 5% for both system adequacy indices (LOLP, EPNS) is used as the stopping criterion.
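As a quick numerical check of this definition (LOLP values taken from the text; the helper name is ours):

```python
# Relative bias of a method's LOLP against the MCS reference (0.0102),
# as defined in the text. The function name is ours, for illustration.
LOLP_REF = 0.0102

def lolp_relative_bias(lolp, ref=LOLP_REF):
    return abs(lolp - ref) / ref * 100.0   # percent

ece_is_bias = lolp_relative_bias(0.0099)   # ECE-IS system LOLP from the text
```

For the ECE-IS value of 0.0099 this gives a relative bias of about 2.9%, which is why small absolute LOLP deviations translate into large percentage errors.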
For the ECE-IS and MCS methods, Table 3 shows the results of the system LOLP and EPNS indices, the number of samples, and the computation time. The computation time includes the time spent in the first stage extracting the rare load curtailment events. The results of the ECE-IS method are 0.00121 and 0.16 MW for LOLP and EPNS, respectively, while the standard MCS achieves 0.00119 and 0.154 MW. Therefore, the ECE-IS achieves the same accuracy as the standard MCS method for both system indices. Moreover, the ECE-IS method is considerably more efficient: it needs only a small fraction (23%) of the samples required by the MCS. However, in contrast to MCS, the ECE-IS method, like other rare-event simulation techniques, comes with its own set of tuning parameters: the target coefficient of variation of the importance weight function and the number of samples per iteration. The proper tuning of these parameters affects the efficiency of the technique.
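A textbook cross-entropy iteration illustrates the kind of adaptive loop these tuning parameters control. This toy sketch (one Gaussian generation variable, fixed load, mean-only updates, all numbers our assumptions) is not the authors' ECE-IS, which additionally monitors the coefficient of variation of the importance weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy cross-entropy iteration: shift the sampling mean of one Gaussian
# generation variable toward the curtailment domain S(x) = G - load < 0.
MEAN_G, STD_G = 100.0, 10.0   # original generation density (assumed)
LOAD = 85.0                   # fixed load for the sketch (assumed)
N_PER_ITER = 1000             # tuning parameter: samples per iteration
RHO = 0.1                     # elite fraction

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

mu = MEAN_G                   # start from the original density
for _ in range(20):
    x = rng.normal(mu, STD_G, N_PER_ITER)
    s = x - LOAD                               # limit-state values
    gamma = max(np.quantile(s, RHO), 0.0)      # adaptive level toward 0
    elite = s <= gamma                         # worst (elite) samples
    w = normal_pdf(x[elite], MEAN_G, STD_G) / normal_pdf(x[elite], mu, STD_G)
    mu = float(np.sum(w * x[elite]) / np.sum(w))   # CE mean update
    if gamma == 0.0:                           # target level reached
        break
```

A smaller elite fraction or fewer samples per iteration speeds each step but makes the update noisier, which is the efficiency/robustness trade-off the text refers to.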

Conclusions
In this paper, a framework for adequacy indices evaluation in composite generation and transmission power systems has been proposed. The main purpose of the framework is to obtain accurate adequacy indices while enhancing the computational efficiency of the standard MCS method. This is achieved by integrating rare-event simulation methods in the framework's first stage to identify the approximately optimal distortion of the nodal generation and load distributions, making rare load curtailment events more likely to be drawn. As a result of this integration, a new approach named ECE-IS is proposed that is more efficient and robust in extracting the optimal distortion than other methods, such as the traditional CE-IS, SIS, and SS. The reported results of the framework's second stage show that the proposed method accurately evaluates the adequacy indices (LOLP, EPNS) and further enhances the convergence of the indices in comparison with the other methods. Moreover, a great speed-up with respect to the standard MCS method was shown in terms of computation time.
The implementation of the proposed method in adequacy evaluation could allow the use of more detailed power system models that more accurately reflect real power system operation. The impact of different network topologies (i.e., transmission network contingencies), the non-linearity of the power flow equations, and the chronological characteristics of power systems, such as time- and spatially-correlated load models and the time dependency of renewable energy resources, could be included in the power system model. The method can also be applied to many problems in both the operation and planning phases, such as the evaluation of spinning reserve margins and the integration of renewable energy resources. Using adequacy index assessments and knowing the critical nodes during system disturbances, planners can better manage the penetration of renewable sources, ensuring sustainable and reliable operation at both the system and nodal levels. In the context of power system operation, operators schedule the generating units and allocate sufficient generation reserve to keep the adequacy index (the probability of load curtailment) below the maximum allowed. These issues will be addressed in future work.