# Distributing Load Flow Computations Across System Operators Boundaries Using the Newton–Krylov–Schwarz Algorithm Implemented in PETSc


## Abstract


## 1. Introduction

## 2. Power Flow

`mpc`). Demand and production profiles are known as a result of the hourly clearing of an electricity market. Once the power flow problem is solved, power losses, current flows and power flows can then be determined easily. The power flow problem thus consists of determining $2(N-G-1)$ unknowns for the PQ buses and $G$ unknowns for the PV buses, resulting in a total number of unknowns of $2(N-1)-G$. An equal number of equations that do not introduce further unknowns must be defined to solve the power flow problem. These equations are namely the active and reactive power balances for the PQ buses and the active power balance for the PV buses.
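As a toy illustration of the power balance equations just described, the Python sketch below computes the complex power injected at each bus from the bus admittance matrix; the mismatch equations compare these injections against the scheduled values. All network data here are invented for illustration and are not from the paper:

```python
import numpy as np

def power_injections(Ybus, V):
    """Complex power injected at each bus: S_i = V_i * conj(sum_k Y_ik V_k)."""
    return V * np.conj(Ybus @ V)

# Toy 3-bus admittance matrix (values are illustrative only).
Ybus = np.array([[10 - 30j, -5 + 15j, -5 + 15j],
                 [-5 + 15j, 10 - 30j, -5 + 15j],
                 [-5 + 15j, -5 + 15j, 10 - 30j]])

# Bus voltages as complex phasors (magnitude * exp(j*angle)).
V = np.array([1.00 * np.exp(1j * 0.00),
              0.98 * np.exp(-1j * 0.02),
              0.99 * np.exp(-1j * 0.01)])

S = power_injections(Ybus, V)
P, Q = S.real, S.imag  # active/reactive injections; mismatches are P - P_scheduled, etc.
```

At a flat voltage profile (all magnitudes 1, all angles 0) this toy network carries no power, so the injections vanish, which is a handy sanity check on the admittance matrix.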

- Step 1: solve (approximately) $J({\mathbf{x}}^{(k)})\Delta {\mathbf{x}}^{(k)}=-\mathbf{F}({\mathbf{x}}^{(k)})$;
- Step 2: update ${\mathbf{x}}^{(k+1)}={\mathbf{x}}^{(k)}+{\lambda}^{(k)}\Delta {\mathbf{x}}^{(k)}$.
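The two steps above can be sketched in a few lines. The following is a minimal Python illustration of a damped (line-search) Newton iteration on a small made-up system, not the PETSc implementation; the backtracking rule shown is the simplest sufficient-decrease test:

```python
import numpy as np

def newton_line_search(F, J, x0, tol=1e-10, max_iter=50):
    """Damped Newton: solve J(x) dx = -F(x), then halve lambda
    until the residual norm decreases."""
    x = x0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(J(x), -Fx)              # Step 1: Newton direction
        lam = 1.0
        while np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(Fx) and lam > 1e-8:
            lam *= 0.5                               # backtracking line search
        x = x + lam * dx                             # Step 2: damped update
    return x

# Example: intersect a circle and a line (illustrative system, not power flow).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = newton_line_search(F, J, np.array([1.0, 0.5]))
```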

## 3. Linear Solvers

#### 3.1. Krylov Subspace Methods

#### 3.2. Preconditioned Generalized Minimal RESidual Method
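The (preconditioned) GMRES method named in this section builds an Arnoldi basis of the Krylov subspace and picks the iterate that minimizes the residual norm over it. The NumPy sketch below is illustrative only (the paper relies on PETSc's KSP implementation); `M_inv`, standing for the application of an approximate inverse used as a left preconditioner, is an assumption of this sketch:

```python
import numpy as np

def gmres(A, b, M_inv=None, m=50, tol=1e-10):
    """GMRES sketch (zero initial guess): Arnoldi basis of the Krylov
    subspace, residual minimized via least squares on the small
    Hessenberg system. M_inv, if given, acts as a left preconditioner."""
    prec = M_inv if M_inv is not None else (lambda v: v)
    r0 = prec(b)
    beta = np.linalg.norm(r0)
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for k in range(m):
        w = prec(A @ Q[:, k])                     # next Krylov vector
        for i in range(k + 1):                    # modified Gram-Schmidt
            H[i, k] = Q[:, i] @ w
            w = w - H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(w)
        e1 = np.zeros(k + 2); e1[0] = beta
        # Minimize ||beta*e1 - H y|| over the current subspace.
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        res = np.linalg.norm(H[:k + 2, :k + 1] @ y - e1)
        if res < tol or H[k + 1, k] < 1e-14:      # converged or lucky breakdown
            return Q[:, :k + 1] @ y
        Q[:, k + 1] = w / H[k + 1, k]
    return Q[:, :m] @ y

# Illustrative use on a small well-conditioned system, with a Jacobi preconditioner.
rng = np.random.default_rng(0)
A = 4 * np.eye(20) + 0.1 * rng.standard_normal((20, 20))
b = rng.standard_normal(20)
x = gmres(A, b, M_inv=lambda v: v / np.diag(A))
```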

#### 3.3. Domain Decomposition
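The overlapping (additive) Schwarz idea of this section can be sketched in a few lines of Python: restrict the residual to overlapping index blocks, solve each local problem, and sum the prolongated corrections. The stationary iteration on a toy 1D Laplacian below is illustrative only and uses damping, since the undamped sum of corrections can overshoot in the overlaps; in practice (and in the paper) the preconditioner is applied inside a Krylov method instead:

```python
import numpy as np

def additive_schwarz(A, r, blocks):
    """One application of the additive Schwarz preconditioner:
    z = sum_i R_i^T (R_i A R_i^T)^{-1} R_i r over overlapping index blocks."""
    z = np.zeros_like(r, dtype=float)
    for idx in blocks:
        idx = np.asarray(list(idx))
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])  # local solve
    return z

def overlapping_blocks(n, nblocks, s):
    """Contiguous blocks of 0..n-1, each extended by an overlap of s."""
    cuts = np.linspace(0, n, nblocks + 1, dtype=int)
    return [range(max(0, cuts[i] - s), min(n, cuts[i + 1] + s))
            for i in range(nblocks)]

# Damped Richardson iteration preconditioned by additive Schwarz (toy 1D Laplacian).
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = overlapping_blocks(n, nblocks=4, s=3)
x = np.zeros(n)
for _ in range(300):
    x = x + 0.5 * additive_schwarz(A, b - A @ x, blocks)
```

Note that with a single block covering all indices, one application is an exact solve; the interest of the method is that the local solves are independent and can run on different processors.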

#### 3.4. Newton–Krylov–Schwarz

## 4. Distributed Power Flow

#### 4.1. Parallel Implementation Using Message Passing

#### 4.2. Software—PETSc

`PETSc` (see Appendix A for more details) is a collection of numerical libraries for efficiently solving complex mathematical problems. The hierarchy is shown in Figure 3, which highlights in red the PETSc objects used for the distributed calculation. For the context of this work, `PETSc` has been selected for mainly three reasons. (1) The PETSc libraries are implemented to work in parallel and manage inter-process communication on their own through the underlying message passing interface (MPI) protocol [24]. (2) PETSc offers an implementation of the NKS as well as an already implemented code for solving AC power flow equations, named `power.c`. The `power.c` code must first be compiled to produce the executable `power`; this can be done by simply typing `make power` in its containing directory. If the `power` program is called sequentially (i.e., by typing `./power` from the terminal), by default it makes use of the nonlinear solver class SNES to carry out the nonlinear line-search Newton iteration (see Section 2) for the solution of Equation (9). The KSP and PC objects are then called by SNES to solve the Jacobian's linear system. If the user wants different linear and nonlinear solvers, they can simply be specified at run-time (this includes the NKS solver). Parallel runs of `power.c` are carried out by means of the MPI command `mpiexec`: in the folder containing the `power` executable, the user types `mpiexec -n <np> ./power`, where `np` is the number of processes to use. (3) PETSc offers specific sub-classes to handle structured and unstructured grids (in our context, electricity grids) for modelling and managing the topology and the physics of large-scale networks (an application example can be found in [25]). In this work, these are the DMPlex and DMNetwork sub-classes, built on top of the DM class, which `power.c` actually uses to manage electricity grids. These abstractions provide several routines and features for network system composition and decomposition. In `power.c`, the decomposition is managed by the function DMNetworkDistribute and follows an edge partitioning among processors; the nodes and all the attached components are then split according to the edge ownership.
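As a concrete illustration of the run-time solver selection described above, the invocations below show a sequential run and a parallel run that requests the NKS combination (line-search Newton, GMRES, additive Schwarz). The option names are standard PETSc run-time options; the overlap value and process count are examples, not values from the paper:

```shell
# Sequential run with the default solvers:
./power

# Parallel run on 2 processes, selecting the NKS solver stack at run-time:
mpiexec -n 2 ./power -snes_type newtonls \
                     -ksp_type gmres \
                     -pc_type asm -pc_asm_overlap 2 \
                     -snes_monitor -ksp_monitor
```

The `-snes_monitor` and `-ksp_monitor` options print the nonlinear and linear residual norms per iteration, which is how convergence behaviour like that in Section 6 can be inspected.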

`PETSc` decomposes the grid in such a way that all the processors own approximately the same number of elements, hence not respecting the ownership boundaries of the grid operators. Our application code (named `dpower.c` to reflect the improved distributed attitude of the code) uses `power.c` as a template and has been developed to use a shell partitioner, i.e., to partition the network as desired, following topological criteria that reflect the ownership of the zones of the grid operators.

#### 4.3. Hardware—Cluster Computing

`machine1`) is an 8-node 32 GB RAM machine, each node being an Intel Xeon E3-1275 3.50 GHz (Santa Clara, CA, USA), running an Ubuntu 17.10 64-bit GNU/Linux distribution and mounting a 1 Gbit/s network card. The second workstation (`machine2`) is a 4-node 4 GB RAM machine, each node being an Intel Core i3-3110M 2.4 GHz, running a GNU/Linux Mint 18.3 64-bit distribution and mounting a 100 Mbit/s network card. Notice that different GNU/Linux distributions have been used on purpose to guarantee the portability of the model. Moreover, a rather low-performing network card is used on `machine2` to purposely act as a bottleneck, so as to also consider an environment that is not extremely high-performance. This is because, in reality, grid operators would likely coordinate through a lower-performance wide area network (WAN).

## 5. Case Study

#### 5.1. Policy Motivation

#### 5.2. Test Case

`MATPOWER` case out of `case9241pegase.m` (see Figure 4), the latter being a case that can be found in the `MATPOWER` libraries. This case models the high-voltage EU transmission network of 24 member states, detailing 9241 buses, 1445 generators, 16,049 branches and 1319 transformers. There are nine voltage levels: 750, 400, 380, 330, 220, 154, 150, 120 and 110 kV. The `case9241pegase.m` case is divided into zones, each zone representing a national transmission grid. Our test case is built by extracting two zones from `case9241pegase.m`, namely Zone 4 and Zone 5 (which correspond to Zones D and E in Figure 4). To avoid confusion, Zone 4 will simply be called Zone 1 and Zone 5 will be called Zone 2.

## 6. Numerical Results

`shell` partitioner splits the network according to the ownership of Zones 1 and 2. Results in terms of simulation time are reported in Table 1 (s represents the number of overlapping variables). Notice that each simulation has been repeated up to 10 times and the results averaged (see Section 4.3) to control the impact of network traffic.

## 7. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. PETSc

The Portable, Extensible Toolkit for Scientific Computation (`PETSc`) is a set of data structures and routines that provide an interface for the implementation of large-scale application codes on parallel and serial architectures. Application codes can be written in Fortran, C, C++, Python and `MATLAB`. Advantages and possibilities include combined electromechanical-electromagnetic transient simulations, combined transmission-distribution analysis, electromechanical transient simulation, power flow applications, power system optimization, and electromagnetic transient simulation.

`PETSc` in order to minimize the inputs of the user in this sense. The MPI package and BLAS/LAPACK (a package for basic matrix-vector operations) are the foundations of `PETSc`, onto which increasingly complex packages are then built. The KSP and PC packages manage the iterative linear solvers (among them: conjugate gradient (CG), generalized minimal residual method (GMRES), bi-conjugate gradient method (BICG), minimal residual method (MINRES) and many others) and the preconditioners (among them: block Jacobi (BJACOBI), additive Schwarz (ASM), incomplete LU factorization (ILU) and many others), respectively. The SNES class instead includes the nonlinear solvers (among them: line-search Newton (NEWTONLS), trust-region Newton-Raphson (NEWTONTR) and many others), which, in turn, call the KSP and PC sub-classes as needed.

## References

- Gerard, H.; Puente, E.I.R.; Six, D. Coordination between transmission and distribution system operators in the electricity sector: A conceptual framework. Util. Policy
**2018**, 50, 40–48. [Google Scholar] [CrossRef] - General Guidelines for Reinforcing the Cooperation between TSOs and DSOs. 2015. Available online: https://www.entsoe.eu (accessed on 1 October 2018).
- Zegers, A.; Brunner, H. TSO–DSO interaction: An overview of current interaction between transmission and distribution system operators and an assessment of their cooperation in Smart Grids. In ISGAN Discussion Paper, Annex 6 Power T&D Systems, Task 5; ISGAN (International Smart Grid Action Network): Madrid, Spain, 2014; Available online: http://www.iea-isgan.org (accessed on 1 October 2018).
- Hadush, S.Y.; Meeus, L. DSO-TSO cooperation issues and solutions for distribution grid congestion management. Energy Policy
**2018**, 120, 610–621. [Google Scholar] [CrossRef] - Tinney, W.F.; Hart, C.E. Power flow solution by Newton’s method. IEEE Trans. Power Appl. Syst.
**1967**, 11, 1449–1460. [Google Scholar] [CrossRef] - De León, F.; Semlyen, A. Iterative solvers in the Newton power flow problem: Preconditioners, inexact solutions and partial Jacobian updates. Proc. Inst. Electr. Eng. Gen. Transm. Distrib.
**2002**, 4, 479–484. [Google Scholar] [CrossRef] - Flueck, A.J.; Chiang, H.D. Solving the nonlinear power flow equations with an inexact Newton method using GMRES. IEEE Trans. Power Syst.
**1998**, 13, 267–273. [Google Scholar] [CrossRef] - Van den Eshof, J.; Sleijpen, G.L.G. Inexact Krylov Subspace Methods for Linear Systems. SIAM J. Matrix Anal. Appl.
**2004**, 26, 125–153. [Google Scholar] [CrossRef][Green Version] - Idema, R.; Lahaye, D.J.P.; Vuik, C.; Sluis, L. Scalable Newton–Krylov solver for very large power flow problems. IEEE Trans. Power Syst.
**2012**, 27, 390–396. [Google Scholar] [CrossRef] - Gander, M.J. Overlapping Schwarz for Parabolic Problems. In Proceedings of the Ninth International Conference on Domain Decomposition Methods, Hardanger, Norway, 4–7 June 1996; pp. 97–104. Available online: https://pdfs.semanticscholar.org/1498/15e123af0c06faca1e63abd57f7bba44549a.pdf (accessed on 1 October 2018).
- Balay, S.; Abhyankar, S.; Adams, M.; Brown, J.; Brune, P.; Buschelman, K.; Dalcin, L.D.; Eijkhout, V.; Gropp, W.; Kaushik, D.; et al. PETSc Users Manual; ANL-95/11—Revision 3.9; Argonne National Laboratory: Lemont, IL, USA, 2018.
- Balay, S.; Gropp, W.D.; McInnes, L.C.; Smith, B.F. Efficient management of parallelism in object oriented numerical software libraries. In Modern Software Tools in Scientific Computing; Birkhäuser Press: Boston, MA, USA, 1997; pp. 163–202. [Google Scholar]
- Zimmerman, R.D.; Murillo-Sánchez, C.E.; Thomas, R.J. Matpower: Steady-state operations, planning and analysis tools for power systems research and education. IEEE Trans. Power Syst.
**2011**, 26, 12–19. [Google Scholar] [CrossRef] - Murillo-Sánchez, C.E.; Zimmerman, R.D.; Anderson, C.L.; Thomas, R.J. Secure planning and operations of systems with stochastic sources, energy storage and active demand. IEEE Trans. Smart Grid
**2013**, 4, 2220–2229. [Google Scholar] [CrossRef] - Josz, C.; Fliscounakis, S.; Maeght, J.; Panciatici, P. AC Power Flow Data in MATPOWER and QCQP Format: iTesla, RTE Snapshots, and PEGASE. 2016. Available online: http://arxiv.org/abs/1603.01533 (accessed on 1 October 2018).
- Fliscounakis, S.; Panciatici, P.; Capitanescu, F.; Wehenkel, L. Contingency ranking with respect to overloads in very large power systems taking into account uncertainty, preventive, and corrective actions. IEEE Trans. Power Syst.
**2013**, 28, 4909–4917. [Google Scholar] [CrossRef] - Marconato, R. Electric Power Systems, 1; Comitato Elettrotecnico Italiano (CEI): Milano, Italy, 2002; ISBN 8843200143. [Google Scholar]
- Saad, Y. Iterative Methods for Sparse Linear Systems; Society of Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 2000; ISBN 0898715342. [Google Scholar]
- Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Comput.
**1986**, 7, 856–869. [Google Scholar] [CrossRef] - Zhang, Y.S.; Chiang, H.D. Fast Newton-FGMRES solver for large-scale power flow study. IEEE Trans. Power Syst.
**2010**, 25, 769–776. [Google Scholar] [CrossRef] - Chen, T.H.; Chung-Ping, C. Efficient Large-scale power grid analysis based on preconditioned Krylov-subspace iterative methods. In Proceedings of the 38th Annual Design Automation Conference, Las Vegas, NV, USA, 18–22 June 2001; ACM: New York, NY, USA, 2001; pp. 559–562. [Google Scholar]
- Zhang, J. Preconditioned Krylov subspace methods for solving nonsymmetric matrices from CFD applications. Comput. Methods Appl. Mech. Eng.
**2000**, 189, 825–840. [Google Scholar] [CrossRef][Green Version] - Cai, X. Overlapping domain decomposition methods. In Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar] [CrossRef]
- Message Passing Interface Forum. MPI: A Message-Passing Interface Standard; University of Tennessee: Knoxville, TN, USA, 1994. [Google Scholar]
- Maldonado, D.A.; Abhyankar, S.; Smith, B.; Zhang, H. Scalable Multiphysics Network Simulation Using PETSc DMNetwork; Argonne National Laboratory: Lemont, IL, USA, 2017.
- Hendrickson, B.; Lelandy, R. The Chaco User’s Guide Version; Sandia National Laboratories: Albuquerque, NM, USA, 1995. [CrossRef]
- Karypis, G.; Schloegel, K.; Kumar, V. Parmetis: Parallel Graph Partitioning and Sparse Matrix Ordering Library; Department of Computer Science and Engineering, University of Minnesota: Minneapolis, MN, USA, 2013. [Google Scholar]
- KU Leuven Energy Institute. Cross-Border Electricity Trading: Towards Flow-Based Market Coupling; EI-FACT SHEET 2015-02; KU Leuven Energy Institute: Heverlee, Belgium, 2015; Available online: https://set.kuleuven.be/ei/factsheet9/at_download/file (accessed on 1 October 2018).
- Jacottet, A. Cross-Border Electricity Interconnections for a Well-Functioning EU Internal Electricity Market; Oxford Institute for Energy Studies: Oxford, UK, 2012. [Google Scholar]

**Figure 3.** Numerical libraries of `PETSc`, with emphasis on those used for the distributed power flow (DPF) model [11].

**Figure 4.** The partitioning of the `case9241pegase.m` test case into zones. This test case was taken from `MATPOWER`. Zones **D** and **E** can be identified. From [15].

Preconditioner | (a) | (b) | (c) |
---|---|---|---|
Block Jacobi | 1.40 | 8.1 | 2.95 |
ASM (s = 1) | 1.40 | 7.59 | 2.96 |
ASM (s = 2) | 1.39 | 7.42 | 2.95 |
ASM (s = 3) | 1.39 | 7.65 | 2.95 |
ASM (s = 4) | 1.40 | 7.66 | 2.96 |
ASM (s = 5) | 1.40 | 7.71 | 2.96 |
ASM (s = 6) | 1.40 | 7.96 | 2.98 |
ASM (s = 7) | 1.40 | 8.05 | 3.01 |

Preconditioner | (a) | (b) | (c) |
---|---|---|---|
Block Jacobi | 5.42 | 285 | 48.9 |
ASM (s = 1) | 1.76 | 41 | 7.9 |
ASM (s = 2) | 1.33 | 11.8 | 3.21 |
ASM (s = 3) | 1.29 | 9.96 | 3.02 |
ASM (s = 4) | 1.28 | 9.73 | 2.95 |
ASM (s = 5) | 1.27 | 9.03 | 2.87 |
ASM (s = 6) | 1.27 | 9.08 | 2.87 |
ASM (s = 7) | 1.28 | 9.16 | 2.87 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Rinaldo, S.G.; Ceresoli, A.; Lahaye, D.J.P.; Merlo, M.; Cvetković, M.; Vitiello, S.; Fulli, G.
Distributing Load Flow Computations Across System Operators Boundaries Using the Newton–Krylov–Schwarz Algorithm Implemented in PETSc. *Energies* **2018**, *11*, 2910.
https://doi.org/10.3390/en11112910
