1. Introduction
Microgrids (MGs) are considered a major component of the future power system to deal with the proliferation of distributed generators (DGs) in low- and medium-voltage grids. In general, an MG is a system that integrates DGs, controllable and non-controllable loads, energy storage systems (ESSs), and control and communication infrastructure. An MG can operate in islanded mode or grid-connected mode [1]. By moving generation closer to the loads, MGs help reduce power transmission losses. MGs also improve the reliability of the system through their ability to switch to islanded mode during system disturbances and faults.
MGs introduce many advantages to both utilities and consumers. However, the control and management of MGs pose significant challenges in terms of coordination and aggregation. Centralized schemes, which are common in conventional power systems, may no longer be suitable for significantly larger numbers of DG units for many reasons [2], e.g., excessive computation in the central unit due to numerous controllable loads and generators, reliability and security of the central controller, frequent changes of the grid due to the installation of new DGs and loads, unwillingness of participating actors to share data, etc. Decentralized strategies are highly scalable and robust because controllers only need local information and do not coordinate with others; however, a system controlled in a decentralized way can hardly reach network-wide optimal operation. The distributed approach is considered the best alternative for the control and management of the next generation of power systems. In this approach, the central unit is eliminated and local controllers coordinate with nearby units to reach global optima. The main advantage of the distributed approach [2,3,4] is that MGs avoid system failure caused by a single central unit controlling the whole system. Moreover, it offers enhanced cyber-security and reduced communication distances. Furthermore, with the ability to perform computations in parallel, the computational load can be shared and reduced significantly. Finally, the privacy of sensitive information of loads and DERs can be preserved in the global operation.
The multi-agent system (MAS) is an advanced technology that has recently been applied in various areas of science and engineering, including smart grids. In the power systems domain, agent-based approaches have been applied in a wide range of applications such as load shedding, secondary control or optimal power flow [5,6,7]. Agents, with their autonomous, social, reactive and proactive properties [7], are ideally suited for distributed control implementation. The focus of this paper is a distributed secondary frequency control strategy for islanded MGs that uses MAS to restore the grid to its normal state under various disturbances. Distributed control algorithms with an upper layer of agents have been presented recently in refs. [7,8]. In refs. [9,10,11,12], novel distributed algorithms were proposed and validated using pure power-system simulation tools. Other works have utilized MAS runtime environments (such as the Java Agent Development Framework (JADE)) to implement the distributed control system [13,14].
The interactions between entities (controllers, agents, devices, etc.) in MGs can lead to unexpected behaviors, so advanced testing platforms are required to evaluate the performance of MGs before real deployment. Hardware-in-the-loop (HIL) is an effective methodology to investigate MGs [15,16,17]. HIL enhances the validation of components in power systems through controller hardware-in-the-loop (CHIL) and power hardware-in-the-loop (PHIL) [18]. CHIL is performed to validate protection and control devices, whereas PHIL is used to validate the operation of power devices as well as the dynamic interactions between them. HIL implementations have been used in refs. [6,19,20] to investigate distributed control in MGs. However, the combination of CHIL and PHIL, or a realistic communication environment, is rarely reported.
In distributed control, communication plays an important role, as system performance (e.g., local optimization and global convergence time) depends heavily on the information exchange among agents [21]. To ensure seamless communications in a MAS, the system must possess and maintain a high level of inter-agent interoperability. Interoperability allows the network to seamlessly and autonomously integrate all components of power, distribution, management, and communication while minimizing human intervention. It has a direct impact on the cost of installation and integration and introduces the ability to easily connect and integrate new components and systems. It allows the substitution or improvement of a component in the network without disrupting the overall operation of the integrated system [22]. This is, however, not an easy task due to the variety of vendors and communication interfaces in the microgrid framework. Standards or regulations can be used to bridge the gap but are not necessarily sufficient to ensure interoperability; in some cases, systems implementing the same standard may fail to interoperate because of variability in practical implementations.
Interoperability can be considered within several evaluation models and at different technical and conceptual levels (e.g., semantic, syntactic, dynamic and physical) [23]. As in the Smart Grid Interoperability Maturity Model (SGIMM) [24], the ultimate goal of interoperability is the concept of “plug-and-play”: the system is able to configure and integrate a component simply by plugging it in, with an automatic process determining the nature of the connected component so that it can be properly configured and operated. Achieving plug-and-play is not easy, and in the particular context of distributed control in microgrids with MAS, several important challenges stand out:
Firstly, in MGs, the infrastructure may be supplied by different vendors and may comply with different protocols. Agents must be able to exchange data with local controllers and measurement systems through various standardized or commercial industrial protocols, while at the same time complying with the inter-agent communication protocols.
Secondly, in the distribution network of an MG, the structure of the grid and the total capacity of ESSs may change or be upgraded progressively as loads and renewable energy sources increase. Furthermore, an ESS is an element that requires regular maintenance and replacement; the corresponding agent has to be activated or deactivated according to the state of the ESS. The local (intra-agent) control algorithm needs to be flexible enough to adapt to these frequent alterations of structure and capacity without major re-configuration.
Beyond the local level, the alteration of topology is also a critical obstacle that must be overcome to achieve "plug-and-play" capability at the system level. The microgrid operation is based on consensus processes among agents, which try to find a global solution based on limited information acquired from their neighbourhood. Consensus algorithms are introduced mathematically and are often tailored to a certain network topology. Therefore, the integration or removal of an agent in the network (or an alteration of topology) requires a thorough re-configuration or adaptation of the entire network.
Last but not least, asynchronous (inter-agent) interaction under the influence of various types of uncertainty in a real communications network is much more complex and is not yet covered in the mathematical models. The performance of the real system may deviate from the theoretical one if this aspect is not considered during the design and validation process. However, in the aforementioned research, the communication network is typically ignored. In ref. [25], data transfer latency is considered as deterministic time delays, which does not accurately reflect realistic communications networks. Furthermore, the design of the agents and the interactions among the agents, as well as with controllers and devices, were ambiguous and unspecific.
The above challenges are tackled in this paper. In particular, we propose a method to implement interoperability within an MG with a plug-and-play feature at the agent layer of the distributed control scheme. The main contributions of this paper are twofold:
We develop a multi-agent system with "plug-and-play" capability for distributed secondary frequency control in islanded MGs. Firstly, a multi-layer structure is proposed to describe thoroughly the MG system operating with agents. The structure consists of three layers: the Device layer, the Control layer and the Agent layer. The agent, an autonomous program with a server/client structure, is designed to run an average consensus algorithm and send the proper signal to the inverter controller in a distributed scheme. The agent is also equipped with the ability to collect and broadcast messages via the industrial protocol IEC 61850. The "plug-and-play" capability is realized at the Agent layer: the system automatically adapts to alterations of topology (integration of a new agent or removal of an existing agent) and reacts accordingly to maintain seamless operation.
The proposed distributed secondary control is implemented in a laboratory platform based on the proposal in [18], with a controller and power hardware-in-the-loop (C/PHIL) setup incorporating a realistic communications network in which the impact of uncertainties is considered. The performance of the system under realistic conditions shows that the agents are able to withstand disturbances and to self-configure under alterations of the grid topology.
The paper is organized as follows:
Section 2 presents the layer structure of an MG and the average consensus algorithm.
Section 3 describes the design of agents with the plug-and-play feature, operating as the highest layer in the structure.
Section 4 provides a laboratory platform with a controller and power HIL setup to simulate a test-case autonomous MG. A testing procedure and experimental results are also presented to validate the operation of the agent system in a physical communication network.
Section 5 concludes the paper while also highlighting some aspects worthy of consideration in future work.
2. MAS Based Multi-Layer Architecture for Distributed Secondary Control in MG
The hierarchical structure, which comprises the primary, secondary and tertiary levels, is commonly used for control in MGs [26,27,28]. The primary control level is used to stabilize frequency and voltage when disturbances occur, using only local measurements. At this level the system responds quickly and reaches a steady state; however, deviations of frequency and voltage from their nominal values remain. The secondary control, with global information, is implemented to restore the frequency and voltage. At the top level, the power flow to the main grid and the optimized operation within the grid are managed by the tertiary control. This paper deals with the problem of secondary control in MGs. In particular, we propose a three-layer structure based on MAS for distributed secondary control in MGs.
The architecture consists of three layers: the Device layer, containing the physical components and electrical connections; the Control layer, responding to alterations in the system operation and providing control signals to the Device layer; and finally the Agent layer, which receives measurements from the corresponding devices, communicates, calculates and then returns the proper signals to the controllers. This architecture clearly shows the relationship between agents and power system components while emphasizing the communications network, which has an increasingly important impact on the modern grid.
Figure 1 illustrates the three-layer structure, where devices, controllers and agents are shown.
Depending upon the objective and requirements of the device under control, a controller in the Control layer needs information either from its agent and its device or only from its device. For instance, the MPPT controller of a PV source or the primary control of an inverter requires only local measurements from the Device layer. However, for the secondary and tertiary controls of inverters, additional signals from the Agent layer are mandatory. In this paper, the Control layer comprises the controllers of inverters operating in parallel as grid-forming sources. The parallel configuration of voltage source inverters (VSIs) allows multiple VSIs to be connected to form an MG, thus facilitating scalability and improving the redundancy and reliability of the system. The control of each inverter is responsible for sharing the total load demand according to the rated power and for counteracting instability of frequency and voltage in the MG. When a disturbance occurs in the MG, the primary control of the DGs activates immediately and the system is stabilized. Then, the agents in the Agent layer send signals to compensate any errors with respect to the nominal condition through the secondary control. The local controller of a DG in the Control layer includes the primary and secondary control levels of the hierarchical control structure. The primary control adjusts the references of frequency and voltage provided to the inner control loop of the inverter. The secondary control restores the frequency and voltage to their nominal values after any change in the system. This paper focuses only on frequency control; a proportional-integral (PI) controller is used to deal with the steady-state error.
Classical approaches employ the MG's central controller, which receives measurements from a single point of the grid. The setpoint of the secondary control unit is then distributed to the local controllers of the primary control. In the distributed control approach, however, this centralized unit is eliminated. In our proposal, each local controller corresponds to an agent. The measurement devices provide the local frequency deviation to the connected agent, which communicates with the other agents to obtain the global average deviation. An agent-based consensus algorithm is applied in the Agent layer, and the processes in all agents converge to the same consensus value after a number of iterations. This consensus value is the average frequency deviation, which is transferred to the PI controller.
In the proposed distributed frequency control system, each agent needs only local information but can return a global result by using the average consensus algorithm. The algorithm also ensures that the signals are sent to the local controllers concurrently and that those signals have identical values, as in the case of the centralized strategy.
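As a concrete illustration of this interface between the Agent and Control layers, the sketch below shows one way a discrete PI regulator could turn the consensus-averaged frequency deviation into a correction for the primary control reference. It is a minimal sketch, not the paper's controller: the class name, the gains `kp` and `ki`, and the update period `dt` are illustrative assumptions.

```python
class SecondaryFrequencyPI:
    """Discrete PI regulator that converts the consensus-averaged frequency
    deviation into a correction added to the primary-control reference.
    Gains and time step are illustrative placeholders."""

    def __init__(self, kp: float, ki: float, dt: float):
        self.kp = kp          # proportional gain (assumed value)
        self.ki = ki          # integral gain (assumed value)
        self.dt = dt          # update period of the secondary loop [s]
        self.integral = 0.0   # accumulated frequency error

    def update(self, avg_freq_deviation: float) -> float:
        """avg_freq_deviation: consensus value received from the agent [Hz].
        Returns the frequency correction sent to the primary control."""
        self.integral += avg_freq_deviation * self.dt
        return self.kp * avg_freq_deviation + self.ki * self.integral


# Example: correction after the agent reports a -0.2 Hz average deviation.
pi = SecondaryFrequencyPI(kp=0.5, ki=2.0, dt=0.1)
correction = pi.update(-0.2)   # added to the primary frequency reference
```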
A consensus algorithm is an interaction rule for a specific objective. The rule describes the information exchange between an entity and its neighbors in the communication network. It is assumed that each agent initially receives an input value. The average consensus problem is the distributed computational problem of finding the average of a set of initial values using only local and neighboring information. Consider a network with $N$ nodes, where the initial value at node $i$ is $x_i(0)$. Node $i$ only communicates with nodes $j \in \mathcal{N}_i$, the set of its neighbors in the constrained network. The goal of the algorithm is, firstly, that each node computes the average of the initial values,

$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i(0), \quad (1)$$

and, secondly, that all nodes reach consensus on this value at the same time.

Equation (2) introduces a standard algorithm that solves the average consensus problem through the iterative update

$$x_i(t+1) = W_{ii}\, x_i(t) + \sum_{j \in \mathcal{N}_i} W_{ij}\, x_j(t), \quad (2)$$

where $t = 0, 1, 2, \dots$ are the iteration steps and $W$ is the weight matrix. Each node uses only local and neighborhood information; hence, $W_{ij} = 0$ if $j \notin \mathcal{N}_i$ and $j \neq i$. To simplify the expression of the algorithm, let us define the column vector of states $x(t) = [x_1(t), x_2(t), \dots, x_N(t)]^{T}$. Then Equation (2) can be rewritten as:

$$x(t+1) = W x(t). \quad (3)$$

Assuming that the consensus state is achieved at iteration $t$, from Equation (3) we can imply that $x(t) = W^{t} x(0)$. The necessary and sufficient condition for convergence is:

$$\lim_{t \to \infty} W^{t} = \frac{\mathbf{1}\mathbf{1}^{T}}{N}, \quad (4)$$

where $\mathbf{1}$ is the vector consisting of only ones. There exist various ways to determine the weight matrix. In this work, we choose the Metropolis rule [29] because of its stability, adaptability to topology changes and near-optimal performance. The elements of the weight matrix are found as in Equation (5):

$$W_{ij} = \begin{cases} \dfrac{1}{1 + \max(d_i, d_j)}, & j \in \mathcal{N}_i, \\ 1 - \sum_{k \in \mathcal{N}_i} W_{ik}, & i = j, \\ 0, & \text{otherwise}, \end{cases} \quad (5)$$

where $d_i = |\mathcal{N}_i|$ and $d_j = |\mathcal{N}_j|$ are the numbers of neighbors (degrees) of nodes $i$ and $j$.
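To make the iteration concrete, the following self-contained Python sketch builds the Metropolis weight matrix of Equation (5) for a given undirected communication graph and runs the update of Equation (3) until the node states agree within a tolerance. The example graph, the tolerance and the helper names are illustrative and not part of the paper's implementation.

```python
import numpy as np

def metropolis_weights(adjacency: np.ndarray) -> np.ndarray:
    """Build the Metropolis weight matrix (Equation (5)) from a 0/1
    symmetric adjacency matrix of the agent communication graph."""
    n = adjacency.shape[0]
    degree = adjacency.sum(axis=1)                 # d_i = number of neighbors
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(degree[i], degree[j]))
        W[i, i] = 1.0 - W[i].sum()                 # diagonal completes each row to 1
    return W

def average_consensus(x0: np.ndarray, W: np.ndarray, tol: float = 1e-6, max_iter: int = 500):
    """Iterate x(t+1) = W x(t) (Equation (3)) until all states agree."""
    x = x0.astype(float).copy()
    for t in range(max_iter):
        x = W @ x
        if x.max() - x.min() < tol:
            break
    return x, t

# Example: 4 agents in a ring; initial states are local frequency deviations [Hz].
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
x0 = np.array([-0.15, -0.25, -0.10, -0.30])
W = metropolis_weights(A)
x_final, iters = average_consensus(x0, W)          # converges to mean(x0) = -0.20
```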
3. Design of Agent with the Plug and Play Feature
In this section, we introduce the design of agents with a plug-and-play feature for distributed secondary control in an islanded MG with multiple grid-forming inverters. The MG includes a number of ESSs with power electronic inverter interfaces operated in parallel. All of the ESSs participate in regulating frequency and voltage to keep the grid in the steady state. In this work, we focus on frequency control. Owing to the multi-master strategy, coordination between the inverters in the grid is mandatory. The operation of an ESS, which is connected to the grid through an inverter-based interface, is separated into three parts in the proposed layer structure, as described in Figure 2. The PI controllers of the inverters require setpoints from the agents to recover the frequency once disturbances occur in the MG. The agent in this work is designed to implement the average consensus algorithm presented in the previous section. The process of the algorithm is iterative. The state of an agent at the initial iteration is the agent's input, which is the frequency deviation sensed locally from the Device layer. The agent output, serving as feedback to the controller, is the average of the inputs of all agents in the system. The output is collected after a specific number of iterations.
The iterative process in an agent is described in Algorithm 1. The agent conducts consecutive consensus loops. A loop begins at Iteration 0, when the agent receives the measurement from the devices, and finishes at Iteration $t_s$. Upon reaching the consensus state at Iteration $t_s$, the agent sends its final state to the corresponding controller and immediately jumps back to Iteration 0 to start a new loop.
Algorithm 1 The average consensus process in Agent $i$.
1: $t \leftarrow 0$ ▹ begin a loop at the initial iteration
2: $\mathcal{N}_i \leftarrow$ neighbors of Agent $i$ ▹ list of neighborhood agents
3: $d_i \leftarrow |\mathcal{N}_i|$ ▹ number of neighbors
4: $x_i(0) \leftarrow \Delta f_i$ ▹ obtain the initial state from the Device layer; this state is the value of the frequency deviation measured locally at node $i$
5: distribute the initial value and the number of neighbors to all neighbors
6: collect the initial value and the number of neighbors from the neighbor agents
7: compute $W_{ii}$ and $W_{ij}$, $j \in \mathcal{N}_i$ ▹ calculate the elements of the weight matrix involving Agent $i$ and its neighbors using the Metropolis rule
8: collect the initial values of the neighbors
9: $t \leftarrow 1$ ▹ move to Iteration 1
10: while $t \leq t_s$ do ▹ $t_s$ is the number of iterations needed to reach the consensus state
11:  $x_i(t) \leftarrow W_{ii}\, x_i(t-1) + \sum_{j \in \mathcal{N}_i} W_{ij}\, x_j(t-1)$ ▹ update the state at Iteration $t$
12:  distribute the updated state at Iteration $t$ to all neighbors
13:  collect the states of all neighbors at Iteration $t$
14:  $t \leftarrow t + 1$ ▹ move to the next iteration
15: return $x_i(t_s)$ ▹ the consensus value, which is the average of the measured frequency deviations
16: send the consensus value to the Control layer (to the PI controller) ▹ finish the current loop
17: redo from step 1 ▹ start a new loop
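A speculative per-agent rendering of Algorithm 1 is sketched below as an ordinary Python loop with blocking message helpers. The primitives `send_to`, `recv_from` and `send_to_controller`, as well as the fixed consensus iteration count `t_s`, are placeholders for whatever messaging (e.g., IEC 61850-based) and stopping rule the real agents use.

```python
def consensus_loop(agent_id, neighbors, measure_local_deviation,
                   send_to, recv_from, send_to_controller, t_s=50):
    """One consensus loop of Algorithm 1 for a single agent.
    neighbors: list of neighbor agent ids; send_to/recv_from: blocking
    message helpers supplied by the communication layer (placeholders)."""
    d_i = len(neighbors)
    x_i = measure_local_deviation()                      # step 4: x_i(0) from the Device layer

    # Steps 5-8: exchange initial values and degrees, build Metropolis weights.
    for j in neighbors:
        send_to(j, {"value": x_i, "degree": d_i})
    peer = {j: recv_from(j) for j in neighbors}
    w = {j: 1.0 / (1.0 + max(d_i, peer[j]["degree"])) for j in neighbors}
    w_self = 1.0 - sum(w.values())
    x_nbr = {j: peer[j]["value"] for j in neighbors}

    # Steps 10-14: lock-step state updates up to the consensus iteration t_s.
    for t in range(1, t_s + 1):
        x_i = w_self * x_i + sum(w[j] * x_nbr[j] for j in neighbors)
        for j in neighbors:
            send_to(j, {"iter": t, "value": x_i})
        x_nbr = {j: recv_from(j)["value"] for j in neighbors}

    send_to_controller(x_i)                              # steps 15-16: feed the PI controller
```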
Intuitively, what happens in the agents is separated into three phases:
Initialization phase: each agent receives its initial state, which is its local frequency deviation. Data are transferred from the Device layer to the Agent layer.
Updating state phase: the state at the next iteration in each agent is updated using the agent's current state and the neighbors' states, following the Metropolis rule. An agent will move from Iteration t to Iteration t+1 if and only if it has collected information from all neighbors at Iteration t. Data are transferred internally within the Agent layer.
Returning value phase: at a specific iteration, all agents finish the consensus loop and send the same average value of the frequency deviation to the controllers. Data are transferred from the Agent layer to the Control layer.
The calculation at each iteration relies on information received from the neighbors. The consensus processes in the agents are therefore almost always at the same iteration (not exactly always, owing to minor differences introduced by the time taken to exchange data among the agents). One can imagine all agents standing on a line, with every element of this line marching ahead from one iteration to the next together in a "lock-step" manner. If an issue occurs with any element, the line stops moving until the issue is fixed.
In MGs, the topology and the total capacity may change subject to increases in load and fluctuations in renewable energy sources. The global consensus-based operation of the MG has to be capable of adapting to these frequent alterations of structure and capacity without major re-configuration. In our research, we design the agent system with the capability of plug-and-play operation, i.e., the network and the algorithm automatically detect and adapt to the addition and/or removal of agents.
Figure 3 describes the logic implemented within the agents when Agent i is shut down owing to its corresponding ESS i being out of service. We also consider the agents connected with Agent i; Agent j is one of the neighbors of Agent i. When obtaining the signal from the Device layer and learning that the ESS it handles has been tripped out, Agent i triggers its shutdown process. It sends signals to all neighbors to inform them of its status before stopping. As for Agent j (as well as the other neighbors of Agent i), when it receives the alert from Agent i at Iteration t, it pauses the process of updating its state and starts the reconfiguration process. Because Agent j has lost one neighbor, the neighbors of Agent j also have to recompute their weight matrix elements. The agent system is paused at Iteration t until all involved agents finish the modification and return to the updating process.
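As an illustration of this reconfiguration logic, a minimal handler that a neighbor might run on receiving such a shutdown alert is sketched below; the message fields, the agent attributes and the pause/resume flag are assumptions, not the paper's actual agent code.

```python
def on_neighbor_shutdown(agent, msg):
    """Handle a 'shutdown' alert from neighbor msg['sender'] (Figure 3 logic).
    'agent' is assumed to carry: id, neighbors (list), degrees (dict),
    weights (dict), a paused flag and a send_to helper."""
    leaving = msg["sender"]
    agent.paused = True                       # freeze the lock-step iteration
    agent.neighbors.remove(leaving)           # drop the departing agent
    agent.degrees.pop(leaving, None)

    # Recompute the Metropolis weights for the reduced neighborhood.
    d_i = len(agent.neighbors)
    agent.weights = {j: 1.0 / (1.0 + max(d_i, agent.degrees[j]))
                     for j in agent.neighbors}
    agent.weights[agent.id] = 1.0 - sum(agent.weights.values())

    # Tell the remaining neighbors that our degree changed so they can
    # update their own weight-matrix rows before the iteration resumes.
    for j in agent.neighbors:
        agent.send_to(j, {"type": "degree_update", "sender": agent.id, "degree": d_i})
    agent.paused = False                      # resume from the paused iteration
```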
Figure 4 presents the mechanism of an Agent i and its neighbors when Agent i is added to the operating multi-agent system (an ESS is installed in the MG). The unannounced integration of Agent i into the agent network may disturb the involved agents. Agent j is one neighbor of Agent i. The task of all involved agents in this case is more complicated because Agent i has no information about the current iteration of the agent system, which may break the synchronization and the accuracy of the agents' computation. We propose a way to overcome this challenge as follows. Once Agent i is notified that its corresponding ESS (ESS i) is connected to the MG, it informs its neighbors about its appearance in the agent system and requests the current Iteration t of the system in return. Simultaneously, Agent i takes parameters from its neighbors to compute the weight matrix elements. Neighbor j deals with this scenario in a similar way to the removal of Agent i; the only additional step in this case is that Agent j broadcasts the current iteration to Agent i. The multi-agent system then moves to the next iterations and operates normally.
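A possible join handshake corresponding to Figure 4 is sketched below; the message types, the acknowledgement fields and the way the newcomer seeds its state and iteration counter are illustrative assumptions rather than the implemented protocol.

```python
def on_ess_connected(agent, measure_local_deviation):
    """New Agent i announces itself to its configured neighbors (Figure 4)."""
    x0 = measure_local_deviation()                       # local frequency deviation
    for j in agent.neighbors:
        agent.send_to(j, {"type": "join", "sender": agent.id,
                          "degree": len(agent.neighbors), "value": x0})
    # Neighbors reply with the current iteration t and their degrees, so the
    # newcomer can build its Metropolis weights and enter the lock-step.
    replies = {j: agent.recv_from(j) for j in agent.neighbors}
    agent.iteration = max(r["iteration"] for r in replies.values())
    d_i = len(agent.neighbors)
    agent.weights = {j: 1.0 / (1.0 + max(d_i, replies[j]["degree"]))
                     for j in agent.neighbors}
    agent.weights[agent.id] = 1.0 - sum(agent.weights.values())
    agent.state = x0                                     # joins the next update with its own measurement


def on_join_request(agent, msg):
    """Existing neighbor j answers a join request and updates its own records."""
    newcomer = msg["sender"]
    agent.paused = True                                  # pause the iteration, as for a removal
    agent.neighbors.append(newcomer)
    agent.degrees[newcomer] = msg["degree"]
    agent.send_to(newcomer, {"type": "join_ack", "sender": agent.id,
                             "iteration": agent.iteration,
                             "degree": len(agent.neighbors)})
    # Weight recomputation and degree broadcasts follow the same steps as removal.
    agent.paused = False
```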
This proposal minimizes human intervention in network operation upon alterations of topology due to the addition or removal of agents. While the infrastructure of an MG can be supplied by various vendors and uses multiple protocols, the proposed system ensures interoperability at the Agent layer and therefore facilitates the integration and coordination of assets in the MG. To demonstrate the proposed architecture and its plug-and-play feature, a case study of distributed frequency control in an MG is presented in the following section. The case study is implemented on a laboratory platform using a controller and power HIL environment incorporating a real communications network.