Bayesian Estimation of Load Model Coefficients of ZIP and Induction Motor Models

Parameter identification in load models is critical for power system computation, simulation, and prediction, as well as for stability and reliability analysis. Conventional point-estimation-based composite load modeling approaches suffer from disturbances and noise, and provide limited information about the system dynamics. In this work, a Bayesian-estimation-based distribution estimation approach is proposed for both static and dynamic load models. When dealing with multiple parameters, the Gibbs sampling method is employed. The proposed method samples all parameters in each iteration, updating one parameter at a time while the others remain fixed. The proposed method provides a distribution estimate for the load model coefficients and is robust to measurement errors. The proposed parameter identification approach is generic and can be applied to both transmission and distribution networks. Simulations using a 33-bus test feeder illustrate the efficiency and robustness of the proposal.

Measurement-based approaches calculate the parameters using algorithms such as least squares (LS) or genetic algorithms (GA), based on a selected model structure, by minimizing an objective function as follows.
Obtaining comprehensive load composition information is another factor that needs to be carefully considered when implementing component-based approaches.

Bayesian probability is one of several interpretations of probability theory. To evaluate the probability of a hypothesis, Bayesian probability specifies a prior probability, which is then updated to a posterior probability in light of new, relevant data (the likelihood). The formula of Bayesian estimation can be written as

p(θ|x) = p(x|θ) p(θ) / p(x),

where p(θ) is the prior, p(θ|x) is the posterior given an observation x, and p(x|θ) is the likelihood. The denominator is called the normalization factor. Bayesian estimation gives the probability density function (PDF) of a parameter conditioned on the already known observations. The generic Gibbs sampling procedure is stated in Algorithm 1.

Algorithm 1 Gibbs Sampling
1. Get the measurement x.
2. Draw initial samples θ(0) ∼ q(θ), where q(θ) is the prior and θ = {θ1, θ2, ..., θM}.
3. For iteration k = 1, 2, ..., K: for each m = 1, ..., M, calculate the full conditional p(θm | θ1(k), ..., θm−1(k), θm+1(k−1), ..., θM(k−1), x) and draw θm(k) from it. End For
4. The distribution estimate is the histogram of θ(k), k = Kb, ..., K, where K is the preset maximum iteration number and Kb is the burn-in number. The samples θ(k), k = 1, ..., Kb − 1, are burn-in data and are discarded.
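Steps 1–4 above can be sketched for a toy two-parameter target — a zero-mean bivariate normal with correlation ρ, chosen purely for illustration (it is not the load model). For this target each full conditional is itself normal, which is exactly what makes Gibbs sampling convenient:

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.8, K=40000, burn_in=5000, seed=0):
    """Gibbs sampling for a zero-mean bivariate normal with correlation rho.
    Each full conditional is normal: theta1 | theta2 ~ N(rho*theta2, 1 - rho^2)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)              # step 2: initial sample
    samples = np.empty((K, 2))
    for k in range(K):               # step 3: update one parameter at a time
        theta[0] = rng.normal(rho * theta[1], np.sqrt(1 - rho**2))
        theta[1] = rng.normal(rho * theta[0], np.sqrt(1 - rho**2))
        samples[k] = theta
    return samples[burn_in:]         # step 4: discard burn-in data

samples = gibbs_bivariate_normal()
print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
```

The retained samples approximate the joint distribution: their empirical correlation recovers ρ even though each draw only ever conditions on the other parameter's latest value.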

The following contributions are made in this work.

• To the best of the authors' knowledge, the proposed Bayesian approach has not been investigated for load model parameter identification.
• Numerical experiments illustrate that the proposal provides robust distribution estimation despite existing noise. Furthermore, the proposed estimator is more accurate in point estimation than conventional algorithms.

The rest of the paper is organized as follows. In section 2, a ZIP model for the steady state is presented, and the Gibbs sampling is derived. In section 3, an induction motor model in the context of the dynamic state is presented, and a modified version of Gibbs sampling is proposed. Numerical experiments are carried out in section 4 to illustrate the effectiveness and robustness of the proposed algorithms.

Section 5 discusses existing difficulties in applying the proposal to real-time operation, and section 6 concludes the paper.

The ZIP model describes how the load power changes as the voltage varies in the steady-state condition. The ZIP model is presented as follows:

P = P0 [ α1 (V/V0)^2 + α2 (V/V0) + α3 ],
Q = Q0 [ β1 (V/V0)^2 + β2 (V/V0) + β3 ],

where α1 + α2 + α3 = 1 and β1 + β2 + β3 = 1, P and Q are the real and reactive power, V is the voltage magnitude at the load terminal, and P0, Q0, V0 are their nominal values. We will discuss only the real power as an example in the following; the identification of the reactive power parameters follows the same procedure.
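The standard ZIP real-power expression, P = P0(α1 (V/V0)^2 + α2 (V/V0) + α3) with α1 + α2 + α3 = 1, can be evaluated directly; the coefficient values below are hypothetical, not taken from the paper's case study:

```python
def zip_power(V, P0, alpha, V0=1.0):
    """Real power of a ZIP load: constant-impedance (Z), constant-current (I),
    and constant-power (P) fractions. alpha = (a1, a2, a3) with sum(alpha) == 1."""
    a1, a2, a3 = alpha
    v = V / V0
    return P0 * (a1 * v**2 + a2 * v + a3)

# Hypothetical coefficient set for illustration
P = zip_power(V=0.95, P0=100.0, alpha=(0.4, 0.3, 0.3))
print(P)
```

At nominal voltage (V = V0) the expression reduces to P0, which is a quick sanity check on any candidate coefficient set.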

Define the regressor x and the measurement y so that the ZIP model takes a linear form y = αᵀx + ε, where the noise ε is assumed to follow a normal distribution N(0, 1/τ) with precision τ. The reasons for making this assumption are: 1) according to the central limit theorem, a normal distribution is the best representation of the noise characteristics if the number of experiments is large enough; 2) since the normal distribution is a conjugate distribution, model parameter updating is easier when implementing Gibbs sampling.
The prior distribution of τ is assumed to be a gamma distribution G(a, b). It can be shown that after each iteration of Gibbs sampling, the posterior distributions of these three parameters remain in the same form.

Gibbs sampling of the ZIP model is stated in Algorithm 2. The detailed derivation of the algorithm can be found in Appendix A.

Algorithm 2 Gibbs Sampling in the ZIP Model
1. Get measurements of x, y.
2. Draw initial samples α1 ∼ N(µ1(0), 1/τ1(0)), α2 ∼ N(µ2(0), 1/τ2(0)), and τ ∼ G(a(0), b(0)).
3. For iteration k = 1, 2, ..., K: update the posterior of each parameter conditioned on the current samples of the others, and draw new samples from these posteriors. End For
4. Discard the burn-in data; the distribution estimate is the histogram of the remaining samples.

The dynamic part of the load is usually represented by an IM model, which is discussed below [6].
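Under the stated assumptions (Gaussian noise with precision τ, normal priors on the coefficients, gamma prior on τ), the conjugate Gibbs updates can be sketched as follows. The synthetic data, prior values, and the two-coefficient linear form y = α1 x1 + α2 x2 + ε are illustrative stand-ins, not the paper's exact Appendix A equations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measurements" from a two-coefficient linear model with Gaussian noise
n = 2000
x1 = rng.uniform(0.8, 1.2, n) ** 2          # e.g. a (V/V0)^2-like regressor
x2 = rng.uniform(0.8, 1.2, n)               # e.g. a (V/V0)-like regressor
y = 0.6 * x1 + 0.4 * x2 + rng.normal(0.0, 0.01, n)   # true coefficients (0.6, 0.4)

mu0, tau0 = 0.0, 1.0                        # vague normal prior N(mu0, 1/tau0)
a0, b0 = 0.01, 0.01                         # vague gamma prior G(a0, b0) on tau
alpha1, alpha2, tau = 0.0, 0.0, 1.0         # step 2: initial samples
K, burn = 4000, 500
out = np.empty((K, 3))
for k in range(K):                          # step 3: conjugate full-conditional updates
    prec = tau0 + tau * np.sum(x1 ** 2)
    mean = (tau0 * mu0 + tau * np.sum(x1 * (y - alpha2 * x2))) / prec
    alpha1 = rng.normal(mean, 1.0 / np.sqrt(prec))
    prec = tau0 + tau * np.sum(x2 ** 2)
    mean = (tau0 * mu0 + tau * np.sum(x2 * (y - alpha1 * x1))) / prec
    alpha2 = rng.normal(mean, 1.0 / np.sqrt(prec))
    r = y - alpha1 * x1 - alpha2 * x2
    tau = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum(r ** 2)))
    out[k] = alpha1, alpha2, tau
est = out[burn:, :2].mean(axis=0)           # step 4: point estimate after burn-in
print(est)
```

Each coefficient's full conditional stays normal and τ's stays gamma, so every draw is a cheap closed-form sample; note that NumPy's `gamma` takes a scale (1/rate), hence the reciprocal.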

Induction Motor Model and Coefficient Estimation
where T' = (Xr + Xm)/Rr, X = Xs + Xm, X' = Xs + XmXr/(Xm + Xr), and A + B + C = 1. In (6), Rs is the motor stator winding resistance, Xs is the motor stator leakage reactance, Xm is the motor magnetizing reactance, Rr is the rotor resistance, Xr is the rotor leakage reactance, H is the rotor inertia constant, ω is the rotor speed, and Id and Iq are the stator currents in the d and q axes.

The differential equation in (7) is non-linear, and the derivatives of Ed, Eq, and ω cannot be measured directly. These features make the procedure used for the ZIP model unsuitable for IM model parameter identification and complicate the estimation process. In this work, the state of the IM model is assumed to follow the transitions below.
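The derived constants in (6) can be computed directly from the physical motor parameters; the per-unit values below are hypothetical, chosen only to exercise the formulas:

```python
def im_derived_params(Rs, Xs, Xm, Rr, Xr):
    """Derived constants of the induction motor model, following the definitions
    in (6): T' = (Xr + Xm)/Rr, X = Xs + Xm, X' = Xs + Xm*Xr/(Xm + Xr)."""
    T = (Xr + Xm) / Rr              # rotor open-circuit time constant
    X = Xs + Xm                     # open-circuit reactance
    Xp = Xs + Xm * Xr / (Xm + Xr)   # transient reactance
    return T, X, Xp

# Hypothetical per-unit motor data for illustration (Rs unused by these constants)
T, X, Xp = im_derived_params(Rs=0.031, Xs=0.1, Xm=3.2, Rr=0.018, Xr=0.18)
print(T, X, Xp)
```

Because Xm dominates Xr, X' comes out much smaller than X, which is the usual relationship between the transient and open-circuit reactances.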
The prior distribution of these parameters is assumed to be as follows.
That is, the transition of the IM model is assumed to be disturbed by Gaussian noise, and the prior distributions of the parameters are Gaussian. In this case, given measurements of the states, the posterior of each parameter can again be updated in conjugate form.

Simulation studies are carried out in this section for the ZIP and IM models separately. In this work, the maximum number of Gibbs sampling runs K is set to 40000 and the burn-in length Kb is chosen as 5000 [23].
Algorithm 3 Gibbs Sampling in the IM Model
1. Get measurements of the states and inputs.
2. Draw initial samples of the parameters from their Gaussian priors.
3. For iteration k = 1, 2, ..., K: update each parameter according to equation (A13). End For
4. Discard the burn-in data; the distribution estimate is the histogram of the remaining samples.

A 33-bus test feeder is used to generate the testing data. The load at bus 18 is replaced by a ZIP model. Randomness is added to the loads at the other buses by multiplying each by a random factor drawn from the uniform distribution U[0.1, 4.5], as shown in (8).
where Pi is the original load at node i in the 33-bus test feeder and Pw,i is the randomized load applied at node i. The dashed line in Figure 4 indicates the voltage at bus 18 using the real ZIP coefficients, and the solid line is the voltage measured at the same bus using the estimated coefficients.
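The randomization in (8) amounts to elementwise scaling of the base loads; the base values below are placeholders rather than the actual 33-bus feeder data:

```python
import numpy as np

rng = np.random.default_rng(42)
P = np.full(33, 0.1)                 # placeholder base loads P_i (MW), one per node
w = rng.uniform(0.1, 4.5, P.size)    # random factors drawn from U[0.1, 4.5]
P_w = w * P                          # randomized loads P_w,i = w_i * P_i, cf. (8)
print(P_w.shape)
```

Drawing an independent factor per node perturbs the power-flow operating point on every sample, which is what exposes the ZIP coefficients at bus 18 to a range of voltages.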

The comparison shown in Figure 5 is of the distributions of the voltage and active power differences, ∆V and ∆P. It can be seen that the differences are on the order of 10^-4 and 10^-3, respectively. Figure 6 shows the observed burn-in process. The process is very fast, and no significant burn-in period can be observed from this figure.

The estimation of the parameters in the IM model follows the same procedure. A dynamic model is built in Matlab/Simulink to generate the data used in (7), as shown in Figure 7. The sampling results are shown in Figure 9 and compared with the real data in Table 1. According to Table 1, the estimation error is less than 6% with a 5% measurement error. It is worth noting that the estimation of the parameters relies heavily on the prior distributions; a good prior can significantly increase the estimation accuracy and shorten the burn-in period. A comparison of the active power response using the test parameters and the identified parameters is shown in Figure 8.

Some difficulties encountered during the identification are discussed below.

1. It is assumed that the noise in the ZIP and IM models follows a normal distribution. In practice, the noise may deviate from this assumption.

Appendix A

This appendix shows that the Gaussian distribution assumption makes the posterior distributions of the three parameters remain in the same form as their priors.
In the kth iteration, samples α2(k) and τ(k) are first drawn from the prior distributions. Applying Bayes' rule yields

p(α1 | x, y, α2(k), τ(k)) ∝ p(x, y | α1, α2(k), τ(k)) p(α1), (A1)

where p(α1 | x, y, α2(k), τ(k)) is the posterior probability given the samples of x, y, α2, and τ, p(x, y | α1, α2(k), τ(k)) is the likelihood in (2), and p(α1) is the prior following (3). Taking the log form on both sides of (A1) yields (A2). Taking the log form converts the multiplications of the probabilities into summations, which significantly simplifies the calculations when updating the posteriors. For a normal distribution y ∼ N(µ, 1/τ), the log dependence on y is −(τ/2)(y − µ)^2 ∝ −(τ/2)y^2 + τµy.

The right-hand side of equation (A2), omitting the terms not related to α1, can be rearranged into a quadratic form in α1, which yields the posterior

α1 | x, y, α2(k), τ(k) ∼ N(µ1(k), 1/τ1(k)).

A new α1 is sampled from the estimated distribution N(µ1(k), 1/τ1(k)). Following a similar procedure, the posterior of α2 can be derived in the same form.
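The completed-square result above has the standard normal-normal conjugate form. Under the stated assumptions (prior α1 ∼ N(µ0, 1/τ0), Gaussian likelihood with precision τ, and residuals r_i collecting the terms not involving α1), it reads — as the generic conjugate update, not a verbatim copy of the paper's appendix equations:

```latex
\tau_1^{(k)} = \tau_0 + \tau^{(k)} \sum_{i} x_{1,i}^2, \qquad
\mu_1^{(k)}  = \frac{\tau_0\,\mu_0 + \tau^{(k)} \sum_{i} x_{1,i}\, r_i}{\tau_1^{(k)}}, \qquad
\alpha_1^{(k)} \sim N\!\left(\mu_1^{(k)},\, 1/\tau_1^{(k)}\right)
```

The posterior precision is the prior precision plus the data precision scaled by the regressor energy, and the posterior mean is the precision-weighted average of the prior mean and the data fit.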
Similarly, for the IM model, the posterior β3 | yEd, yEq, yω, Eq, Ed and the posterior αb | yEd, yEq, yω, Eq, Ed, Id, Iq, Ud, Uq, τ(k) can be derived in the same conjugate form (A8).