Data-Driven Decentralized Algorithm for Wind Farm Control with Population-Games Assistance

Learning & Game Theory Laboratory, New York University Abu Dhabi, Saadiyat Campus, P.O. Box 129188, Abu Dhabi 129188, UAE
Automatic Control Department, Universitat Politècnica de Catalunya, Institut de Robòtica i Informàtica Industrial (CSIC-UPC), Llorens i Artigas, 4-6, 08028 Barcelona, Spain
Instituto Balseiro and CONICET, Av. Bustillo 9500, San Carlos de Bariloche 8400, Argentina
Departamento de Ingeniería Eléctrica y Electrónica, Universidad de los Andes, Carrera 1A No 18A-10, Bogotá 111711, Colombia
Authors to whom correspondence should be addressed.
Energies 2019, 12(6), 1164;
Original submission received: 5 February 2019 / Revised: 15 March 2019 / Accepted: 21 March 2019 / Published: 26 March 2019
(This article belongs to the Special Issue Control Schemes for Wind Electricity Systems)


In wind farms, closely spaced turbines interact in ways that degrade their power generation. Wakes produced by upstream turbines are mainly responsible for these interactions, and the phenomena involved are complex, especially when the number of turbines is large. To deal with these issues, there is a need to develop control strategies that maximize the energy captured by a wind farm. In this work, an algorithm is proposed that uses multiple estimated gradients based on measurements, which are classified by a simple distributed population-games-based algorithm. The update of the decision variables is computed as a superposition of the estimated gradients together with the classification of the measurements. In order to maximize the captured energy while maintaining the individual power generation within limits, several constraints are considered in the proposed algorithm. Basically, the proposed control scheme reduces the required communications, which increases the reliability of the wind farm operation. The control scheme is validated in simulation on a benchmark corresponding to the Horns Rev wind farm.

1. Introduction

Nowadays, it is rare to find wind turbines operating in isolation. Instead, they are arranged in groups known as wind farms, which inject power into the electrical grid in a manner and magnitude comparable to non-renewable energy sources.
Initially, control strategies designed for wind farms were mainly based on aggregate models, which represent those arrays as a single large equivalent wind turbine [1,2,3,4]. The drawback of this modeling approach lies in neglecting the interactions among turbines caused by the wakes. A turbine located in the path of wakes produced by upstream turbines suffers a reduction of the arriving wind speed and is likely exposed to a more turbulent air flow [5]. Consequently, using a unique control signal for all the turbines in the farm is not as effective as computing an individual signal for each turbine depending on its operating conditions. In particular, when the main objective is to maximize the energy capture, control strategies aimed at reducing the wake effects have shown promising results [6].
Control strategies considering non-aggregate models have been an active research topic in recent years (see [7] and references therein). One of their main difficulties is the complexity of the dynamics arising from all the physical phenomena involved in the turbine interactions and the wake effects. High-fidelity models can only be used to validate strategies, and simplifications must be made to obtain control-oriented models suitable for model-based control techniques; see for instance [8,9]. For this reason, some reported approaches propose the use of data-driven control techniques. Moreover, since reliability is also an important feature to take into account, control strategies should also reduce the usage of communication channels. Among others, the use of safe experimentation dynamics to design a distributed control strategy aimed at maximizing the energy capture using local information and the total power produced by the wind farm is reported in [2]. In that approach, control variables are randomly perturbed in order to optimize the total power, and the resulting values are kept only if the total generated power improves. Another viewpoint is reported in [10], where the control strategy is fed with local information only and its convergence is improved by adding information about the gradient of the objective function, even if the resulting optimum is local. In [11], a Bayesian ascent algorithm is proposed with two phases, i.e., a learning phase and an optimization phase. More recently, Ciri et al. [12] have applied extremum-seeking control to improve the power production by searching for the optimal gain of the torque controller at each individual wind turbine.
Thus, the main contribution of this paper is the implementation of a population-dynamics approach with a time-varying set of strategies in a gradient-estimation algorithm, combined into a data-driven decentralized control scheme. The scheme comprises a single controller per wind turbine. The proposed approach is inspired by a population game [13], where the strategies are aligned to the domain-space coordinates of an unknown function to be maximized. Hence, the gradient estimation is performed from one (or multiple) measurement(s), as in [14,15], corresponding to the population-game strategies. Finally, the set of strategies is updated as a function of the gradient estimates, and the entire routine is run again. Furthermore, the proposed algorithm also allows set-points to be established for the desired power generation. Moreover, unlike other works related to evolutionary dynamics, e.g., [16,17,18,19], where the evolutionary games act as the optimization algorithm, this paper proposes to take advantage of this game-theoretical approach to assist a heuristic algorithm.
One of the main features of the proposed approach is the use of local information only, i.e., each turbine only requires information about its own control signal apart from the total power produced by the whole wind farm (following the approach reported in [2]). However, unlike [2], the proposed control algorithm avoids random testing to obtain improvements in the total generated power. Instead, gradient estimation is used [10,20], with significant differences:
  • the algorithm proposed in this paper uses historical information about the system evolution to compute multiple gradient-estimation directions in a decentralized fashion and for every single wind turbine, i.e., it is not necessary to share information among turbines, and
  • the proposed approach produces global solutions due to the availability of the total generated power amount.
Moreover, unlike [2,20,21], the individual saturation of the power generation in the turbines is considered, and it is shown that the proposed data-driven algorithm can handle this situation while maximizing the total power. Preliminary results related to this work were reported in [21]. As in [11,12], the key aspect of the algorithm is the appropriate computation of an ascent direction.
This paper is structured as follows. In Section 2, the problem of controlling wind farms and the wake modeling are briefly presented. Preliminary concepts used in the proposed algorithm design are stated in Section 3, while Section 4 describes the proposed algorithm in depth. Section 5 presents and explains the data-driven decentralized control scheme for wind farms. Next, in Section 6, the main results are presented and discussed; they have been obtained with a case study of 80 wind turbines maximizing the total power generation through the axial induction factors. Finally, some conclusions are drawn in Section 7.


Calligraphic letters are used to denote sets, e.g., $\mathcal{S}$. Column vectors are denoted in bold font, e.g., $\mathbf{y}$. Every sub-index refers to elements corresponding to strategies in a population, e.g., $s_{i,k}$ refers to the vector associated to the $i$th strategy at time instant $k$. Besides, $|\cdot|$ denotes the cardinality of a set. $\mathbb{R}_{\geq 0}$ denotes the set of all non-negative real numbers and $\mathbb{R}_{>0}$ denotes the set of all strictly positive real numbers. Finally, both continuous-time and discrete-time frameworks are used throughout this paper, denoted by $t$ and $k$, respectively. Discrete time is denoted as a sub-index, e.g., $\mathcal{S}_k$, whereas continuous time is denoted as an argument, e.g., $p_i(t)$.

2. Problem Statement

The power generated by a turbine $i \in \mathcal{W}$ is given by
$$P_i(a_i, V_i) = \frac{1}{2}\, \varrho\, A\, C_P(a_i)\, V_i^3, \qquad (1)$$
where $\varrho$ is the air density, $A$ the area swept by the rotor, $V_i$ the wind speed experienced by the turbine, and $\mathcal{W} = \{1, \ldots, m\}$ a set indexing the turbines in the farm. The power coefficient $C_P$ depends on the axial induction factor $a_i$ according to
$$C_P(a_i) = 4\, a_i (1 - a_i)^2. \qquad (2)$$
In wind farm control applications, the axial induction factor $a_i$ is used as the control signal to regulate the power generated by each turbine. The maximum individual power production is achieved when $a_i = 1/3$.
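As a quick numerical sanity check of Equation (2), the following sketch evaluates the actuator-disc power coefficient over a coarse grid and confirms that it peaks near $a = 1/3$ (the function and variable names are illustrative, not from the paper):

```python
# Sketch: the power coefficient C_P(a) = 4a(1-a)^2 of Equation (2)
# peaks at the axial induction factor a = 1/3.
def power_coefficient(a):
    """Power coefficient as a function of the axial induction factor a."""
    return 4.0 * a * (1.0 - a) ** 2

# Coarse numerical check that a = 1/3 maximizes C_P on [0, 0.5].
candidates = [i / 1000.0 for i in range(501)]
a_best = max(candidates, key=power_coefficient)
print(round(a_best, 3))                         # close to 0.333
print(round(power_coefficient(1.0 / 3.0), 4))   # 16/27, the Betz limit
```

The maximum value $C_P(1/3) = 16/27 \approx 0.593$ is the classical Betz limit.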
In the case of wind turbines within a farm, the wind speed faced by each turbine is a combination of the free-stream speed and the wakes produced by nearby turbines. Both the wind farm layout and the predominant wind direction determine the interactions caused by the wake effect. An illustration of the wake produced downstream in a simple layout of $m$ turbines aligned with the free-stream wind speed $V_\infty$ can be seen in Figure 1.
The speed deficit caused by the wake effect can be modeled as proposed in [5]. When all turbines are aligned with $V_\infty$, the wind speed faced by the turbine $i \in \mathcal{W}$ is given by
$$V_i = V_\infty \left( 1 - 2 \sqrt{ \sum_{j \in \mathcal{W} :\, x_j < x_i} (a_j\, c_{ji})^2 } \right), \qquad (3)$$
$$c_{ji} = \left( \frac{D_j}{D_j + 2\beta\, (x_i - x_j)} \right)^2,$$
with $D_j$ the diameter of the wind rotor, $x_j$ the position of the turbine, and $\beta$ a coefficient indicating the expansion of the wake.
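The wake model above can be sketched in a few lines. This is a minimal implementation of Equation (3) for turbines aligned with the free stream; the function names and the two-turbine parameter values are illustrative assumptions, with rotor diameter and spacing taken from the case study in Section 6:

```python
import math

def wake_coefficient(x_i, x_j, D_j, beta):
    """Decay c_ji of turbine j's wake evaluated at downstream position x_i."""
    return (D_j / (D_j + 2.0 * beta * (x_i - x_j))) ** 2

def effective_wind_speed(i, positions, a, V_inf, D, beta):
    """Equation (3): wind speed at turbine i under superposed upstream wakes."""
    deficit_sq = sum(
        (a[j] * wake_coefficient(positions[i], positions[j], D, beta)) ** 2
        for j in range(len(positions))
        if positions[j] < positions[i]
    )
    return V_inf * (1.0 - 2.0 * math.sqrt(deficit_sq))

# Two turbines 400 m apart, both at the greedy setting a = 1/3.
positions, D, beta, V_inf = [0.0, 400.0], 80.0, 0.075, 8.0
V2 = effective_wind_speed(1, positions, [1/3, 1/3], V_inf, D, beta)
print(round(V2, 3))   # about 6.259 m/s, well below the free-stream 8 m/s
```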
The control objective considered in this paper is to maximize the total generated power [2,20], i.e.,
$$\max_{\mathbf{a}}\; P_T(\mathbf{a}) = \sum_{i \in \mathcal{W}} P_i(\mathbf{a}). \qquad (4)$$
This objective consists in finding the vector of axial induction factors $\mathbf{a} = [a_1\; a_2\; \cdots\; a_m]^\top \in \mathbb{R}^m$ that maximizes the total power $P_T$ produced by the entire wind farm. In general, a greedy strategy, in which every turbine works at $a_i = 1/3$, does not achieve the optimal value due to the wind speed deficit caused by the wake effects. In order to maximize the total energy capture, the most upstream turbines must reduce their generation. Thus, the wind speed deficit faced by downstream turbines is lower and the total power is higher. As the quantification of the wake effect is quite complex and depends on uncertain parameters, it is advisable to use control algorithms that do not rely on wake modeling. This is the purpose of the algorithm proposed in this paper.
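The suboptimality of the greedy setting can be verified with a self-contained two-turbine example. The sketch below scans the upstream induction factor and shows that derating it below $1/3$ raises the total power; the constants follow the case study in Section 6, and all names are illustrative:

```python
import math

# Two-turbine illustration: derating the upstream turbine (a1 < 1/3)
# can raise total power above the greedy setting a1 = a2 = 1/3.
rho, D, beta, V_inf = 1.225, 80.0, 0.075, 8.0
A = math.pi * (D / 2.0) ** 2
c21 = (D / (D + 2.0 * beta * 400.0)) ** 2        # wake decay over 5 diameters

def total_power(a1, a2):
    cp = lambda a: 4.0 * a * (1.0 - a) ** 2
    V2 = V_inf * (1.0 - 2.0 * a1 * c21)          # speed deficit at turbine 2
    P1 = 0.5 * rho * A * cp(a1) * V_inf ** 3
    P2 = 0.5 * rho * A * cp(a2) * V2 ** 3
    return P1 + P2

greedy = total_power(1/3, 1/3)
best_a1 = max((i / 1000.0 for i in range(334)),
              key=lambda a1: total_power(a1, 1/3))
print(best_a1 < 1/3, total_power(best_a1, 1/3) > greedy)   # True True
```

The brute-force scan stands in for the data-driven algorithm; its only purpose is to show that the optimum lies below the greedy induction factor.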

3. Preliminary Concepts

The proposed strategy uses a data set and is composed of two main elements: (i) the estimation of the gradient; and (ii) the computation of the relevance of each datum. Figure 2 presents the main steps of the proposed heuristic algorithm, which are explained in detail throughout Section 3 and Section 4.

3.1. Gradient Estimation

Consider an unknown function $f(\mathbf{y})$, where $\mathbf{y} \in \mathbb{R}^m$, i.e., $f : \mathbb{R}^m \to \mathbb{R}$. The objective is to find the vector $\mathbf{y}$ such that $f$ is maximized. To this end, a data-driven algorithm is proposed based on gradient estimation (using the distributed gradient estimation proposed in [14,15]) and evolutionary game-theoretical concepts. In particular, several gradients are estimated and a qualification of each one of them is performed in order to determine a direction for the vector $\mathbf{y}$.
Let $\mathbf{c}, \mathbf{d} \in \mathbb{R}^m$ be two arbitrary vectors in the domain of the function $f$. It is assumed that the function values $f(\mathbf{c})$ and $f(\mathbf{d})$ are available through measurements, and that this information is associated with the coordinates $\mathbf{c}$ and $\mathbf{d}$, respectively. Therefore, the estimation of an increasing rate and direction of the function $f$ between the vectors $\mathbf{c}$ and $\mathbf{d}$ is given by
$$\mathbf{g}(\mathbf{c}, \mathbf{d}) = \frac{\left( f(\mathbf{d}) - f(\mathbf{c}) \right) (\mathbf{d} - \mathbf{c})}{\left| f(\mathbf{d}) - f(\mathbf{c}) \right|}. \qquad (5)$$
Let
$$g_i\!\left( f(\mathbf{c}), f(\mathbf{d}), c_i, d_i \right) = \frac{\left( f(\mathbf{d}) - f(\mathbf{c}) \right) (d_i - c_i)}{\left| f(\mathbf{d}) - f(\mathbf{c}) \right|}$$
be the $i$th component of $\mathbf{g}(\mathbf{c}, \mathbf{d})$, i.e., $\mathbf{g} = [g_1\; \cdots\; g_m]^\top$.
The function $\mathbf{g} : \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}^m$ is a modification of the one originally presented in [14]. Basically, in this problem there is a need to capture information about the increasing directions, which is why the magnitude has been slightly modified.
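A direct transcription of Equation (5) makes the geometry explicit: the normalization by $|f(\mathbf{d}) - f(\mathbf{c})|$ keeps only the sign, so the estimate points from $\mathbf{c}$ toward $\mathbf{d}$ when $f(\mathbf{d}) > f(\mathbf{c})$ and away otherwise. The sketch below is an assumed implementation (the zero-difference guard is an added safeguard, not part of the paper's definition):

```python
# Sketch of the increasing-direction estimate g of Equation (5).
def gradient_estimate(f_c, f_d, c, d):
    """Componentwise g_i = (f(d) - f(c)) * (d_i - c_i) / |f(d) - f(c)|."""
    diff = f_d - f_c
    if diff == 0.0:                # degenerate case: no directional information
        return [0.0] * len(c)
    sign = diff / abs(diff)        # only the sign of the difference survives
    return [sign * (di - ci) for ci, di in zip(c, d)]

# f(d) > f(c): the estimate points from c toward d.
g = gradient_estimate(1.0, 3.0, [0.0, 0.0], [1.0, -2.0])
print(g)   # [1.0, -2.0]
```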
Remark 1.
In order to estimate the gradients of the unknown function $f(\mathbf{y})$, it is necessary:
  • to be able to capture measurements of the unknown function $f$, and
  • to know the correspondence of each measurement with the element in the domain of $f$, i.e., for a measurement $f(\mathbf{d})$, the element $\mathbf{d}$ in the domain of $f$ is known.

3.2. Population-Game Role

The proposed algorithm uses multiple estimated gradients based on measurements. The decision variables are updated by computing a superposition of the gradients. Thus, an appropriate increasing direction for the unknown function is identified in order to maximize it. One key step within the algorithm is to classify the multiple measurements according to their quality (those elements in the domain with higher-valued measurements are considered to be of better quality; see Section 4). Such classification is performed by using a population-games-based algorithm whose setting changes at every iteration. Next, the background on evolutionary dynamics and population games is introduced.
Consider a population with a finite and large number of rational agents. Within the population, there are $n$ available strategies at every discrete time $k \in \mathbb{Z}_{\geq 0}$ (with a sampling time given by $\tau$), each associated to a coordinate. At time instant $k$ and during a fixed period of time denoted by $\tau$, agents have the possibility to choose among the $n$ available strategies. Let $I = \{1, \ldots, n\}$ be the set of indexes corresponding to the $n$ available strategies, and $\mathcal{S}_k = \{s_k^1, \ldots, s_k^n\}$ the set of strategies, where $s_k^i \in \mathbb{R}^m$, for all $i \in I$, and $\mathcal{S}_k \subset \mathbb{R}^m$. It should be noticed that the set of strategies $\mathcal{S}_k$ varies along the discrete time with sampling time $\tau$.
At each discrete-time step $k$, a strategic interaction in continuous time occurs during a period of time equal to $\tau$ (the sampling time). In this interaction, at time instant $k$, the scalar value $p_i(t) \in \mathbb{R}_{\geq 0}$, with $0 \leq t \leq \tau$, corresponds to the proportion of agents selecting the strategy $s_k^i \in \mathcal{S}_k$.
All the proportions of agents selecting the different strategies compose a strategic distribution, or population state, denoted by $\mathbf{p}(t) \in \mathbb{R}^n$, $0 \leq t \leq \tau$. The set of all possible population states evolves in the simplex $\Delta = \{\mathbf{p}(t) \in \mathbb{R}^n_{\geq 0} : \sum_{i \in I} p_i(t) = 1\}$. Agents have incentives to select among the available strategies according to a fitness function given by $f_i(p_i(t)) = (f(s_k^i) - \pi)\, p_i(t)$, $0 \leq t \leq \tau$, for all $i \in I$, where $\pi \in \mathbb{R}_{>0}$ is an upper bound such that $\pi > f(s_k^i)$, for all $i \in I$. The fitness function for the whole population is given by $\mathbf{f}(\mathbf{p}) = [f_1(p_1)\; \cdots\; f_n(p_n)]^\top$. Notice that the constant $\pi$ affects all the fitness functions to ensure that $f_i(p_i(t))$ is decreasing with respect to $p_i(t)$. Then, it is ensured that the population game is stable according to Definition 1 (adapted from [13]).
Definition 1.
A population game $\mathbf{f} : \Delta \to \mathbb{R}^n$ is a stable game if $z^\top D\mathbf{f}(\mathbf{p})\, z \leq 0$, for all $z \in \Delta_T$ and $\mathbf{p} \in \Delta$, where $\Delta_T$ is the tangent space of the simplex, given by $\Delta_T = \{z \in \mathbb{R}^n : \sum_{i \in I} z_i = 0\}$.
The objective within the population is to achieve a Nash equilibrium, denoted by $\mathbf{p}^* \in \Delta$. Formally, the set of Nash equilibria is defined next.
Definition 2.
Let $\mathbf{p}^* \in \Delta$ be a Nash equilibrium if each used strategy entails the maximum benefit for the proportion of agents selecting it, i.e., the set $\{\mathbf{p}^* \in \Delta : p_i^* > 0 \Rightarrow f_i(\mathbf{p}^*) \geq f_j(\mathbf{p}^*)\}$, for all $i, j \in I$, corresponds to the Nash equilibria.
Additionally, suppose that the possible interaction among agents choosing different strategies is given by an undirected non-complete communication graph $\mathcal{G} = (I, \mathcal{E})$, where $I$ is the set of vertices representing the strategies and $\mathcal{E} \subseteq \{(i, j) : i, j \in I\}$ is the set of edges or links determining possible communication and information sharing among strategies. The set of neighbors of the node $i \in I$ is given by $N_i = \{j : (i, j) \in \mathcal{E}\}$. It should be clear that each strategy estimates a gradient based only on information from its neighbors.
For the proposed algorithm, the proportion of agents $p_i(\tau)$ represents a quality assigned to the strategy $s_k^i$ evaluated at time $\tau$, which is a tuning parameter of the proposed algorithm. In other words, the proportion provides information about how well the strategy $s_k^i$ maximizes the function $f$ with respect to the other available strategies $\mathcal{S}_k \setminus \{s_k^i\}$ at time instant $k \in \mathbb{Z}_{\geq 0}$. Afterwards, the set of strategies $\mathcal{S}_k$ is updated based on gradient estimations of the function $f$, and the different qualities for all the strategies are given by $\mathbf{p} \in \mathbb{R}^n$.
The Nash equilibrium of the population game $\mathbf{f}$ is obtained by maximizing the potential function using the distributed population dynamics presented in [18]. In this case, the distributed projection dynamics (DPD) are chosen, defined as
$$\dot{\mathbf{p}} = L\, \mathbf{f}(\mathbf{p}).$$
Here, $L$ is the Laplacian matrix corresponding to the connected graph $\mathcal{G}$. Clearly, other distributed population dynamics could have been used (e.g., distributed replicator dynamics or distributed Smith dynamics). If the sampling time $\tau$ is large enough, then $\mathbf{p}(\tau) = \mathbf{p}^* \in \Delta$ is the Nash equilibrium of the game. Otherwise, the quality of the strategies at time $k \in \mathbb{Z}_{\geq 0}$ corresponds to a transitory trend of the proportions of agents in the population.
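A minimal forward-Euler integration of the DPD illustrates the classification step. The star graph matches the one used later in the case study, while the strategy values, the bound $\pi$, and the step size are illustrative assumptions:

```python
# Euler integration of the distributed projection dynamics p_dot = L f(p)
# with fitness f_i(p_i) = (f(s^i) - pi_bar) * p_i, on the star graph
# E = {(1,2), (1,3)}. Better strategies end up with larger proportions.
values = [2.0, 5.0, 3.0]            # measured f(s^i) for the three strategies
pi_bar = 6.0                        # upper bound with pi_bar > max(values)
L = [[2, -1, -1],                   # graph Laplacian of the star graph
     [-1, 1, 0],
     [-1, 0, 1]]

p = [1/3, 1/3, 1/3]                 # start in the interior of the simplex
dt = 0.01
for _ in range(20000):
    fit = [(values[i] - pi_bar) * p[i] for i in range(3)]
    p = [p[i] + dt * sum(L[i][j] * fit[j] for j in range(3)) for i in range(3)]

print([round(x, 3) for x in p])     # mass concentrates on the best strategy s^2
print(round(sum(p), 6))             # the simplex is invariant: proportions sum to 1
```

At equilibrium the fitness values equalize, which gives $p_i^* \propto 1/(\pi - f(s^i))$: the closer a strategy's value is to the bound $\pi$, the larger its proportion, i.e., its assigned quality.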
Next, in Section 4 the combination of these two concepts is shown in order to derive a data-driven optimization algorithm.

4. Algorithms According to Information Availability

The proposed approach can be implemented in different manners depending on the availability of information. The appropriate variant is selected according to the information scheme of the application.

4.1. Using Multiple Measurements at Each Iteration

Consider the smallest population game, involving only two strategies $\mathcal{S}_k = \{s_k^1, s_k^2\}$, with an associated communication graph $\mathcal{G}$ as introduced in Section 3. The initial condition for the strategic set $\mathcal{S}_0$ consists of two arbitrary strategies denoted by $s_0^1 \in \mathbb{R}^m$ and $s_0^2 \in \mathbb{R}^m$. The initial population state is arbitrarily selected in the relative interior of the simplex, i.e., $\mathbf{p}(0) \in \mathrm{int}\, \Delta$. Furthermore, the fitness functions corresponding to the strategies are given by $f_1(p_1(t)) = (f(s_k^1) - \pi)\, p_1(t)$ and $f_2(p_2(t)) = (f(s_k^2) - \pi)\, p_2(t)$, where $\pi \in \mathbb{R}_{>0}$ is an upper bound for the fitness functions such that $\pi > f(s_k^i)$, for all $i \in I$.
Remark 2.
The value $f(s_k^i)$ is measurable at the corresponding node $i \in I$, for all $k \in \mathbb{Z}_{\geq 0}$.
Since both strategies $s_k^1 \in \mathcal{S}_k$ and $s_k^2 \in \mathcal{S}_k$ have information about each other, the gradients $\mathbf{g}(s_k^1, s_k^2)$ and $\mathbf{g}(s_k^2, s_k^1)$ can be computed at the nodes $1 \in I$ and $2 \in I$, respectively. Therefore, agents have the required information to make a decision within the population.
The update of the strategic set $\mathcal{S}_k$ is performed as follows. For instance, assume that $f(s_k^2) > f(s_k^1)$; then the strategy $s_k^1$ evolves towards the strategy $s_k^2$ in order to get closer to it in the domain. In contrast, $s_k^2$ is updated in the opposite direction, moving farther from $s_k^1$ in the domain. Thus, according to the strategic update procedure, those strategies with better quality (i.e., whose image under the function $f : \mathbb{R}^m \to \mathbb{R}$ is higher) change more smoothly over time, whereas those with lower values suffer bigger changes. The update rate for the strategy $s_k^i$, for all $i \in I$, is given by
$$\theta_{i,k} = (1 - p_i(\tau))\, \gamma, \qquad (6)$$
where $\gamma \in \mathbb{R}_{>0}$ is a common tunable parameter for all the strategies, which determines the update rate in (7), for all $i \in I$. For instance, consider the case involving two strategies $i, j \in I$ and, without loss of generality, let $f(s_k^i) > f(s_k^j)$. Therefore, it is expected that $p_i(\tau) > p_j(\tau)$, i.e., the strategy $s_k^i$ has a better quality than the strategy $s_k^j$. Consequently, a bigger update rate is assigned to the strategy $s_k^j$, i.e., $\theta_{i,k} < \theta_{j,k}$. In addition, an exploration term is introduced that allows the identification of potentially better strategies in the domain of the function $f$. This exploration is applied within the strategic update by means of a random value $\delta \in [-\varepsilon, \varepsilon]^m$, where $\varepsilon \in \mathbb{R}_{>0}$. Hence, the strategic update is performed as follows:
$$s_{k+1}^i = s_k^i + |N_i|^{-1}\, \theta_{i,k} \sum_{j \in N_i} \mathbf{g}(s_k^i, s_k^j) + \delta, \qquad \forall\, i \in I. \qquad (7)$$
In order to illustrate how the strategic update is performed according to a superposition of gradients, Figure 3 shows an example for a function $f$ with domain $\mathbb{R}^2$. Figure 3a shows a population case involving four strategies whose information interaction is determined by a star communication graph, i.e., $\mathcal{G} = (I, \mathcal{E})$, where $I = \{1, \ldots, 4\}$ and $\mathcal{E} = \{(1,2), (1,3), (1,4)\}$. For the example presented in Figure 3a, $f(s_k^1) > f(s_k^3)$, $f(s_k^2) > f(s_k^1)$ and $f(s_k^4) > f(s_k^1)$, where $s_k^1, s_k^2, s_k^3, s_k^4 \in \mathbb{R}^2$ and $f : \mathbb{R}^2 \to \mathbb{R}$. The superposition of the three gradients can be seen at node $1 \in I$.
Notice that the proposed approach requires the availability of $n$ measurements corresponding to $n$ different images of the function $f$, which should be captured at the same time instant $k \in \mathbb{Z}_{\geq 0}$, i.e., it is necessary to capture the measurements corresponding to the values $f(s_k^i)$, for all $i \in I$. Figure 4 presents a general scheme for the algorithm. Nevertheless, a subtle modification can be applied to the algorithm so that less information is required at each time instant, as discussed in the next section.
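One full iteration of the multi-measurement update (7) can be sketched as follows. The graph, the test function, the quality vector, and the tuning parameters are illustrative assumptions; in the actual algorithm the qualities come from the population game of Section 3.2:

```python
import random

# One iteration of Equation (7): each strategy moves along the average of
# its neighborhood gradient estimates, scaled by theta_i = (1 - p_i) * gamma,
# plus a bounded random exploration term.
def g(f_c, f_d, c, d):
    s = (f_d - f_c) / abs(f_d - f_c)
    return [s * (di - ci) for ci, di in zip(c, d)]

def update_strategies(S, f, p, neighbors, gamma=1.0, eps=0.001):
    values = [f(s) for s in S]
    S_next = []
    for i, s_i in enumerate(S):
        theta = (1.0 - p[i]) * gamma
        N = neighbors[i]
        step = [0.0] * len(s_i)          # superposition of neighborhood gradients
        for j in N:
            gij = g(values[i], values[j], s_i, S[j])
            step = [a + b for a, b in zip(step, gij)]
        delta = [random.uniform(-eps, eps) for _ in s_i]
        S_next.append([si + theta * st / len(N) + dl
                       for si, st, dl in zip(s_i, step, delta)])
    return S_next

f = lambda s: -(s[0] ** 2 + s[1] ** 2)               # unknown function to maximize
S = [[1.0, 1.0], [0.5, -0.2], [-0.8, 0.4], [0.9, -0.9]]
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # star graph, as in Figure 3a
p = [0.1, 0.5, 0.3, 0.1]                             # qualities from the population game
S = update_strategies(S, f, p, neighbors)
print(len(S), len(S[0]))
```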

4.2. Using a Single Measurement at Each Iteration

Consider the same population game introduced in Section 3, i.e., there is a set of $n$ strategies at each time instant $k \in \mathbb{Z}_{\geq 0}$, denoted by $\mathcal{S}_k = \{s_k^1, \ldots, s_k^n\}$. However, unlike the update in Section 4.1, $n-1$ strategies within the set $\mathcal{S}_k$ are preserved in the strategy update ($n-1$ strategies do not change at each time instant), and only one of them is modified. Formally, $\mathcal{S}_{k+1} \setminus \mathcal{S}_k = \{s_{k+1}^1\}$, which means that there is only one new strategy when comparing the strategic sets $\mathcal{S}_{k+1}$ and $\mathcal{S}_k$.
It is quite important to highlight that even though there is only one new datum at each iteration, the estimation of all the gradients is different.
To initialize the algorithm, it is assumed that the measurements of the function $f$ for the $n$ initial strategies are known, i.e., $f(s_0^1), \ldots, f(s_0^n)$ are initially known. Notice that this initialization condition can be satisfied within a time $n\tau$, i.e., it is possible to capture the $n$ required initial measurements in $n$ iterations with sampling time $\tau$.
Although there is only one new measurement at each time instant, all strategies can be updated. The information limitation is handled by means of the following algorithm:
$$s_{k+1}^1 = s_k^1 + |N_1|^{-1}\, \theta_{1,k} \sum_{j \in N_1} \mathbf{g}(s_k^1, s_k^j) + \delta, \qquad s_{k+1}^i = s_k^{i-1}, \quad \forall\, i \in I \setminus \{1\}. \qquad (8)$$
To illustrate the procedure in Equation (8), Figure 3b presents an example with only one available measurement at each time instant, i.e., the new measurable information is given by $f(s_k^1)$. Note that, at the next time instant $k+1$, the values of the function $f(s_{k+1}^2) = f(s_k^1)$, $f(s_{k+1}^3) = f(s_k^2)$, and $f(s_{k+1}^4) = f(s_k^3)$ are already known by the algorithm.
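The single-measurement scheme of Equation (8) can be sketched for scalar strategies: only slot 1 receives a fresh strategy computed from the stored measurements, while the older strategies shift down unchanged, acting as a shift register of past data. The star graph (node 1 linked to all others) and the numeric data are illustrative assumptions:

```python
def g(f_c, f_d, c, d):
    """Scalar version of the increasing-direction estimate of Equation (5)."""
    return (f_d - f_c) / abs(f_d - f_c) * (d - c)

def single_measurement_update(S, F, theta1, delta):
    """One step of Equation (8). S: strategies, newest first; F: their stored
    measurements f(s). Only slot 1 is recomputed; the rest shift down."""
    n = len(S)
    step = sum(g(F[0], F[j], S[0], S[j]) for j in range(1, n)) / (n - 1)
    new_s1 = S[0] + theta1 * step + delta
    return [new_s1] + S[:-1]   # the fresh measurement f(new_s1) arrives next instant

S = single_measurement_update([0.30, 0.28, 0.33], [95.0, 93.0, 96.0],
                              theta1=0.5, delta=0.0)
print([round(s, 4) for s in S])   # [0.3125, 0.3, 0.28]
```

Here the exploration term is set to zero for reproducibility; in the algorithm it is drawn from $[-\varepsilon, \varepsilon]$.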

5. Data-Driven Decentralized Control of Wind Farms

The power generated by the wind turbine $i \in \mathcal{W}$, denoted by $P_i(\mathbf{a})$ with $\mathbf{a} = [a_1\; \cdots\; a_m]^\top \in \mathbb{R}^m$, depends on the behavior of the other turbines, as shown in Equation (3). Therefore, the unknown function $f$ in Equation (5) corresponds to the total power function $P_T = \sum_{i \in \mathcal{W}} P_i$ in Equation (4). On the other hand, each strategy belonging to the domain of the unknown function is given by $s_k^i = \mathbf{a}_k^i \in \mathbb{R}^m$; more specifically, $\mathbf{a}_k^i = [a_1^i\; \cdots\; a_m^i]^\top$. Therefore, for this specific control problem, there is only one available measurement at each time instant $k \in \mathbb{Z}_{\geq 0}$, i.e., the only available information is the total generated power for the currently established axial induction factors. In consequence, the appropriate approach to deal with this data-driven control problem is the one introduced in Section 4.2. The control scheme corresponds to a decentralized architecture, since individual decisions are made at each wind turbine without requiring communication with other turbines.
Figure 5 shows the decentralized control scheme, where the information about the total generated power $P_T(\mathbf{a})$ is provided to each wind turbine $i \in \mathcal{W}$. In addition, each turbine also has information about its current axial induction factor. Then, the algorithm in Equation (8) can be written for each turbine as follows:
$$a_{j,k+1}^1 = a_{j,k}^1 + \varphi_k, \qquad a_{j,k+1}^i = a_{j,k}^{i-1}, \quad \forall\, i \in I \setminus \{1\}, \qquad (9)$$
$$\varphi_k = \frac{\theta_{1,k}}{|N_1|} \sum_{\ell \in N_1} g_j\!\left( P_T(\mathbf{a}_k^1),\, P_T(\mathbf{a}_k^\ell),\, a_{j,k}^1,\, a_{j,k}^\ell \right) + \delta_j,$$
for all $j \in \mathcal{W}$. Notice that in the algorithm in Equation (9), each turbine has the required information, since $P_T(\mathbf{a}_k^1)$ is measured and $P_T(\mathbf{a}_k^\ell)$ has been stored for all $\ell \in N_1$.
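A per-turbine controller in the spirit of Equation (9) can be sketched as a small class holding the turbine's own history of axial factors; the class and attribute names, the initial history, and the numeric values are assumptions for illustration:

```python
import random

# Decentralized controller sketch for one turbine: it stores its own past
# axial induction factors and is fed the matching total-power measurements,
# needing no communication with other turbines.
class TurbineController:
    def __init__(self, a0, n=3, gamma=1.0, eps=0.001):
        self.a_hist = [a0 + 0.01 * i for i in range(n)]   # a^1 (newest) .. a^n
        self.gamma, self.eps = gamma, eps

    def step(self, pt_hist, p1_tau):
        """pt_hist: stored total-power values matching a_hist (newest first);
        p1_tau: quality of the newest strategy from the population game."""
        theta = (1.0 - p1_tau) * self.gamma
        n = len(self.a_hist)
        grad = sum(
            ((pt_hist[j] - pt_hist[0]) / abs(pt_hist[j] - pt_hist[0]))
            * (self.a_hist[j] - self.a_hist[0])
            for j in range(1, n)
        ) / (n - 1)
        delta = random.uniform(-self.eps, self.eps)       # exploration term
        new_a = self.a_hist[0] + theta * grad + delta
        self.a_hist = [new_a] + self.a_hist[:-1]          # shift register of past data
        return new_a

ctl = TurbineController(a0=0.30)
a_next = ctl.step(pt_hist=[150.0, 148.0, 152.0], p1_tau=0.4)
print(0.0 < a_next < 0.5)
```

With these numbers, the stored data indicate that a larger axial factor coincided with higher total power, so the update nudges the factor upward (plus the small exploration noise).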
Now suppose that it is necessary to limit the individual power generation below a value $P_i^s$, i.e., the admissible power generation for each wind turbine is
$$P_i(a_i, V_i) = \min\left\{ P_i^s,\; \frac{1}{2}\, \varrho\, A\, C_P(a_i)\, V_i^3 \right\}, \qquad \forall\, i \in \mathcal{W}. \qquad (10)$$
Therefore, it is necessary to verify the admissibility of the axial induction factor $a_i$ whenever $P_i(a_i, V_i) > P_i^s$. This verification is performed by computing Equations (1) and (2) in a decentralized manner to find the new admissible value of $a_i$, i.e., an axial induction factor $a_i^s$ such that $P_i(a_i^s, V_i) = P_i^s$ is established.
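The saturated axial factor can be found locally by inverting Equations (1) and (2). One assumed implementation solves $C_P(a_i^s) = 2 P_i^s / (\varrho A V_i^3)$ by bisection on $[0, 1/3]$, where $C_P$ is increasing; the paper does not prescribe a particular root-finding method, and the numeric values are illustrative:

```python
import math

# Sketch of the decentralized saturation step: find a_s on [0, 1/3] such
# that (1/2) * rho * A * C_P(a_s) * V^3 equals the power limit P_s.
def admissible_axial_factor(P_s, V, rho=1.225, D=80.0, tol=1e-10):
    A = math.pi * (D / 2.0) ** 2
    cp_target = 2.0 * P_s / (rho * A * V ** 3)
    cp = lambda a: 4.0 * a * (1.0 - a) ** 2
    if cp_target >= cp(1.0 / 3.0):     # limit not reachable: stay at the optimum
        return 1.0 / 3.0
    lo, hi = 0.0, 1.0 / 3.0            # bisection on the increasing branch of C_P
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cp(mid) < cp_target else (lo, mid)
    return 0.5 * (lo + hi)

# A 2 MW limit at 11 m/s free-stream speed requires derating below a = 1/3.
a_s = admissible_axial_factor(P_s=2.0e6, V=11.0)
print(0.0 < a_s < 1.0 / 3.0)   # True
```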

6. Case Study and Simulation Results

The effectiveness of the proposed control algorithm was shown using the Horns Rev wind farm, which has 80 turbines of 2 MW each, with a rotor diameter of 80 m. The farm is arranged in an 8 × 10 layout with a separation between turbines of five rotor diameters (400 m) (see Figure 6). The model parameters were chosen as $\beta = 0.075$ and $\varrho = 1.225$ kg/m³.
The proposed decentralized algorithm considered three data at each iteration (one from the current measurement and two from historical data), i.e., $n = 3$, with $\tau = 0.01$ s, $\varepsilon = 0.001$, and $\gamma = 1$ as the selected tuning parameters. Besides, the communication graph considered for each algorithm was $\mathcal{G} = (I, \mathcal{E})$, where $I = \{1, 2, 3\}$ and $\mathcal{E} = \{(1, 2), (1, 3)\}$. The decentralized scheme was composed of 80 different algorithms as in Equation (9), i.e., each turbine had its individual algorithm. Two scenarios were tested for four different wind directions, i.e., 0°, 15°, 30°, and 45°:
  • Scenario 1: the free-stream wind speed was below the rated value and all wind turbines were working to maximize the energy capture.
  • Scenario 2: the free-stream wind speed was above the rated value and some turbines were working in power limitation (at 2 MW).
It is important to state that, when the wind speed faced by a turbine was above the rated value, the axial induction factor for the corresponding turbine was imposed by the turbine controller and not by the wind farm control. The control algorithm then computed the axial induction factors for the rest of the wind turbines in order to generate maximum power for the wind speed conditions.
Figure 7a shows the total power produced by the wind farm in Scenario 1 considering the four different wind directions. Dashed lines indicate the total power using a greedy control strategy ($a_i = 1/3$, $\forall i$) for the four directions, respectively. The proposed algorithm was capable of increasing the generation with respect to the greedy case in all four directions, although the improvement was more marked at 0° and 45°. This is a consequence of the wind farm layout, for which the air flow disturbance caused by upstream turbines is more pronounced at those directions. Figure 7b,c show the power and the axial induction factors for wind turbines 1–10 (in the case of a 0° wind direction). These figures show that the algorithm reduces the axial induction factors of the upstream turbines and increases those of the last turbines. This produces an increase of the wind power in the last turbines and of the total power generated by the entire wind farm.
Results corresponding to Scenario 2 are shown in Figure 8. In this scenario, the free-stream wind speed was above the rated value, and therefore the total power production should be close to the rated value (160 MW). It can be observed in Figure 8 that this is the case for the wind directions of 15°, 30° and 45°, but not for 0°. In this last case, the wake effects induce a reduction of the wind speed faced by the set of last wind turbines, causing a total power lower than 160 MW. Nevertheless, observing the powers and axial induction factors for wind turbines 1–10, it can be seen that the proposed algorithm still seeks to maximize the power generation by reducing the generation of the first turbines and increasing that of the last ones.

7. Conclusions

A data-driven control scheme for wind farms under a decentralized architecture has been presented. In the proposed algorithm, multiple gradient estimations are used in order to improve and speed up the maximization of the total generated power. The control is organized in a decentralized scheme, in which $m$ algorithms produce one control variable for each wind turbine. In spite of the limited communication among agents, the proposed control is able to converge to the global solution. The control strategy was evaluated by simulation on the well-known Horns Rev wind farm layout under low and high wind speed conditions (in the latter, the control actions saturate). In both cases, the algorithm was capable of increasing the total power compared with the greedy case and for different wind directions (see Figure 7 and Figure 8).
Even though it has been presented for this specific application, this algorithm may be useful in other engineering problems in which data-driven approaches (i.e., where only measured information is available) and the maximization of a given objective are required. Moreover, the versatility of the algorithm to consider any parameter as the decision variable within the optimization problem has been shown. Besides, dealing with the variance minimization of the power generation remains an appealing open problem. Finally, the proposed algorithm uses gradients estimated from measurements taken at known parameter values, i.e., in the maximization of an unknown function $f$, it is known that the measurement $f(\mathbf{a})$ corresponds to the variable $\mathbf{a}$. In this regard, other decision variables could easily be considered by following the same procedure.

Author Contributions

All the authors contributed equally to the research presented in this manuscript as well as to its preparation.


Funding

This work has been partially funded by the Spanish projects SCAV (Ref. DPI2017-88403-R) and DEOCS (Ref. DPI2016-76493-C3-3-R) and the Colombian project ISAGEN Solución Energética Piloto La Guajira. J. Barreiro-Gomez gratefully acknowledges support from the U.S. Air Force Office of Scientific Research under grant number FA9550-17-1-0259.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.


Figure 1. Example of wind farm layout and the corresponding wake effect.
Figure 2. General steps of the proposed heuristic approach.
Figure 3. Example of gradient estimation with four strategies S(k) = {s_k1, s_k2, s_k3, s_k4}, i.e., n = 4, and f : R² → R, i.e., m = 2. Vectors illustrate the direction of the strategy updates and the superposition of influences over the strategy with index 1. (a) Several measurements available at every iteration; (b) one measurement available at every iteration.
Figure 4. General scheme for the gradient-estimation-based algorithm with population-games assistance.
Figure 5. Typical decentralized control scheme. Each wind turbine has information about the total generated power and its own axial induction factor.
Figure 6. Horns Rev wind farm of 80 turbines facing a main wind speed with a 45° direction.
Figure 7. (a) Total power for scenario 1 (free-stream wind speed of 10 m/s) for four wind directions. (b) Power generated by wind turbines 1–10 for a wind direction of 45°. (c) Axial induction factors of wind turbines 1–10 for a wind direction of 45°.
Figure 8. (a) Total power for scenario 2 (free-stream wind speed of 12 m/s) for four wind directions. (b) Power generated by wind turbines 1–10 for a wind direction of 45°. (c) Axial induction factors of wind turbines 1–10 for a wind direction of 45°.

Share and Cite

MDPI and ACS Style

Barreiro-Gomez, J.; Ocampo-Martinez, C.; Bianchi, F.D.; Quijano, N. Data-Driven Decentralized Algorithm for Wind Farm Control with Population-Games Assistance. Energies 2019, 12, 1164.

