# An Integrated Energy System Operating Scenarios Generator Based on Generative Adversarial Network


## Abstract


## 1. Introduction

- An adversarially trained IES data generator is introduced, which learns the distribution of real operating data and generates new data following the same distribution. The generator addresses the scarcity of real operating scenarios that limits data-driven methods for IES: a large volume of IES operating data can be generated to supplement the limited raw data set. Furthermore, operating data can be obtained for various operating states, such as normal operation, fault conditions, and edge conditions, enabling the study of control strategies that cover more operating conditions (e.g., quick-response strategies in the event of a system failure).
- A model-free control method for a renewable energy and storage system is proposed, in which the generated data are used to improve control performance. Q-learning is a decision-making method that copes with high randomness and uncertainty; the agent learns to produce the control strategy with the highest reward under different IES operating conditions. In addition, we train the agent on a data set extended by the generator, achieving better performance across diverse operating scenarios.
- We provide both qualitative and quantitative results for the IES operating-scenario generator and the control strategy for the renewable and storage system, showing their superiority over using the original data alone.

## 2. Materials and Methods

#### 2.1. Objectives

**Generator:** The objective of $G$ is to minimize $E_{z\sim p_z(z)}\left[\log\left(1-D(G(z))\right)\right]$; that is, we want the data generated by $G$ to be as close to the real data as possible. The loss function for the generator is:

$$L_G = E_{z\sim p_z(z)}\left[\log\left(1-D(G(z))\right)\right].$$

**Discriminator:** The objective of $D$ is to maximize $E_{x\sim p_d(x)}\left[\log D(x)\right]+E_{z\sim p_z(z)}\left[\log\left(1-D(G(z))\right)\right]$, i.e., the discriminator maximizes the gap between the probabilities it assigns to real and generated data. The loss function for the discriminator is:

$$L_D = -E_{x\sim p_d(x)}\left[\log D(x)\right]-E_{z\sim p_z(z)}\left[\log\left(1-D(G(z))\right)\right].$$
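As a quick numerical check (not from the paper), note that at the GAN equilibrium the optimal discriminator outputs $1/2$ everywhere, so the objective value collapses to $\log\frac{1}{2}+\log\frac{1}{2}=-\log 4$. A minimal sketch:

```python
import numpy as np

def value_fn(d_real, d_fake):
    """GAN value V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))],
    estimated from discriminator outputs on real and fake batches."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# At equilibrium the discriminator outputs 1/2 on every sample,
# so V collapses to log(1/2) + log(1/2) = -log 4 ~ -1.386.
d_half = np.full(1000, 0.5)
v_eq = value_fn(d_half, d_half)

# A discriminator that separates real (D ~ 0.9) from fake (D ~ 0.1)
# achieves a strictly larger value, which is what D ascends toward.
v_sep = value_fn(np.full(10, 0.9), np.full(10, 0.1))
```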

1. Input a batch of random noise $z$ to $G$ to obtain a set of fake sample data $G(z)$;
2. Sample $m$ points $\left\{x^{1},x^{2},\dots,x^{m}\right\}$ randomly from the target data;
3. Calculate the values of the objectives for the discriminator and generator;
4. Back-propagate and update the network parameters $\theta$; and
5. Repeat steps 1–4 until convergence.
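The five steps above can be exercised end-to-end on a one-dimensional toy problem. This is an illustrative sketch, not the paper's networks: $G$ and $D$ are a single affine map and a logistic unit with hand-derived gradients, and it uses the common non-saturating generator update (ascending $E[\log D(G(z))]$) rather than directly minimizing the loss above.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))  # logistic function

# Toy models: D(x) = sigmoid(w*x + b), G(z) = a*z + c; real data ~ N(2, 0.5).
w, b, a, c = 1.0, 0.0, 1.0, 0.0
lr, m = 0.05, 64

for step in range(2000):
    # (1) fake batch from noise; (2) real batch from the target data
    z = rng.standard_normal(m)
    g = a * z + c
    x = rng.normal(2.0, 0.5, m)
    # (3) discriminator outputs on both batches (objective terms)
    dx, dg = sig(w * x + b), sig(w * g + b)
    # (4) gradient ascent on the discriminator objective ...
    w += lr * (np.mean((1 - dx) * x) - np.mean(dg * g))
    b += lr * (np.mean(1 - dx) - np.mean(dg))
    # ... and non-saturating ascent for the generator on E[log D(G(z))]
    dg = sig(w * g + b)
    a += lr * np.mean((1 - dg) * w * z)
    c += lr * np.mean((1 - dg) * w)
# (5) after many iterations, E[G(z)] = c should drift toward the real mean (~2)
gen_mean = c
```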

#### 2.2. Networks

#### 2.3. Q-Learning Algorithm Based on Extended Data

#### 2.4. Problem Statement

**Objective**: The objective function is given by maximizing the cumulative return over one operating period:

**Penalty costs**: The renewable energy system must pay penalty fees when the combined renewable-storage system cannot meet the demands of power grid dispatching. The penalty costs are defined as follows:
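The paper's exact penalty formula is not reproduced here; as a hypothetical illustration, one plausible shape charges the reserve price $\mu$ (scaled by an assumed penalty coefficient `k`) on the unmet part of the dispatch schedule:

```python
def penalty_cost(p_scheduled, p_delivered, price_mu, k=2.0):
    """Hypothetical penalty shape (not the paper's exact formula):
    charge k times the reserve price mu on the shortfall between the
    scheduled dispatch and the power actually delivered."""
    shortfall = max(0.0, p_scheduled - p_delivered)
    return k * price_mu * shortfall

# No penalty when the schedule is met or exceeded:
print(penalty_cost(10.0, 11.0, price_mu=0.5))  # -> 0.0
# Penalty grows linearly with the shortfall:
print(penalty_cost(10.0, 7.0, price_mu=0.5))   # -> 3.0
```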

#### 2.5. Q-Learning

**Data:** Every training episode involves many parameter settings, such as photovoltaic and wind power output or energy prices. Sufficient scenarios are therefore needed in the training set; otherwise, the agent will not learn control strategies for the missing scenarios. We thus expand the training set using the data-generation method presented in Section 2.

**Environment:** The state includes the power generation ${P}_{t}$ of the renewable energy source and the stored energy ${E}_{t}$. First, ${P}_{t}$ is discretized into $N$ intervals: $[0,\Delta P)$, $[\Delta P, 2\Delta P)$, …, $[P_{new}^{max}-\Delta P, P_{new}^{max}]$. Then, ${E}_{t}$ is discretized into $M$ intervals: $[0,\Delta E)$, $[\Delta E, 2\Delta E)$, …, $[E^{max}-\Delta E, E^{max}]$.
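This uniform binning can be sketched in a few lines (the maximum value and bin count below are illustrative, not the paper's settings):

```python
def discretize(value, v_max, n_bins):
    """Map a continuous value in [0, v_max] to a bin index in {0, ..., n_bins-1}.
    All intervals are half-open except the last, so value == v_max falls
    into the final (closed) interval, matching [v_max - delta, v_max]."""
    idx = int(value // (v_max / n_bins))
    return min(idx, n_bins - 1)

# Example: P_t in [0, 100] kW with N = 10 bins (delta_P = 10 kW)
print(discretize(0.0, 100.0, 10))    # -> 0  (interval [0, 10))
print(discretize(37.5, 100.0, 10))   # -> 3  (interval [30, 40))
print(discretize(100.0, 100.0, 10))  # -> 9  (closed interval [90, 100])
```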

**Action:** Similar discretization is applied to the action set, which includes the charging/discharging rate ${P}_{stored}$ and the reserve capacities ${P}_{up}$ and ${P}_{down}$.
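Once states and actions are discretized, the learning step itself is the standard tabular Q-learning update, $Q(s,a) \leftarrow Q(s,a) + \alpha\,[r + \gamma \max_{a'} Q(s',a') - Q(s,a)]$, with $\beta$ as the exploration factor. A minimal sketch on a hypothetical toy chain environment (the states, transitions, and rewards are invented for illustration, not the storage system's):

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2           # toy chain: action 1 moves right, 0 stays
alpha, gamma, beta = 0.5, 0.9, 0.2   # learning rate, discount, exploration factor
Q = np.zeros((n_states, n_actions))  # the Q-table over discretized (s, a)

for episode in range(1000):
    s = 0
    for t in range(20):
        # beta-greedy exploration over the discretized action set
        a = rng.integers(n_actions) if rng.random() < beta else int(np.argmax(Q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else s
        r = 1.0 if s_next == n_states - 1 else 0.0  # reward only at the far end
        # tabular Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)  # greedy policy should move right in states 0..3
```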

## 3. Results

#### 3.1. Data Set

#### 3.2. Data Generation

#### 3.2.1. Time-Series Over One Day

#### 3.2.2. Long-Term Data Generation

#### 3.3. Q-Learning Based on Generated Data

#### Income of Energy Storage System

## 4. Discussion

## Author Contributions

## Funding

## Conflicts of Interest

## Nomenclature

| Symbol | Meaning |
---|---|
| ${p}_{d}$ | Target data distribution |
| $\theta$ | Neural network parameters |
| $G$ | Generator |
| $D$ | Discriminator |
| $V$ | Value function |
| $\gamma$ | Discount factor (hyperparameter) |
| $\lambda$ | Electricity price |
| $\mu$ | Reserve price |
| $C$ | Penalty costs |
| $s$ | Environment state |
| $a$ | Action |
| ${Q}^{k}$ | Q value at episode $k$ |
| $\beta$ | Exploration factor |
| ${P}_{new,t}$ | Prediction of renewable power generation at timestep $t$ |
| ${P}_{stored,t}$ | Charging or discharging power of energy storage at timestep $t$ |
| ${P}_{net,t}$ | Interactive power with the power grid at timestep $t$ |
| ${P}_{new,t}^{real}$ | Real value of renewable power at timestep $t$ |


**Figure 11.** Cumulative distribution function and autocorrelation coefficient. (**a**) Comparison between the CDF of the original data and the data generated at epoch 40; (**b**) the autocorrelation coefficient of the generated and original data.

| | ${a}_{1}$ | ${a}_{2}$ | ⋯ | ${a}_{n}$ |
---|---|---|---|---|
| ${s}_{1}$ | $Q({s}_{1},{a}_{1})$ | $Q({s}_{1},{a}_{2})$ | ⋯ | $Q({s}_{1},{a}_{n})$ |
| ${s}_{2}$ | $Q({s}_{2},{a}_{1})$ | $Q({s}_{2},{a}_{2})$ | ⋯ | $Q({s}_{2},{a}_{n})$ |
| ⋯ | ⋯ | ⋯ | ⋯ | ⋯ |
| ${s}_{n}$ | $Q({s}_{n},{a}_{1})$ | $Q({s}_{n},{a}_{2})$ | ⋯ | $Q({s}_{n},{a}_{n})$ |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhou, S.; Hu, Z.; Zhong, Z.; He, D.; Jiang, M. An Integrated Energy System Operating Scenarios Generator Based on Generative Adversarial Network. *Sustainability* **2019**, *11*, 6699.
https://doi.org/10.3390/su11236699
