Multiobjective Optimal Control of Wind Turbines: A Survey on Methods and Recommendations for the Implementation

Abstract: Advanced control system design for large wind turbines is becoming increasingly complex, and high-level optimization techniques are receiving particular attention as an instrument to fulfill this demanding set of design requirements. Multiobjective optimal (MOO) control, in particular, is today a popular methodology for achieving a control system that reconciles multiple design objectives that are typically incompatible. Multiobjective optimization was a matter of theoretical study for a long time, particularly in the areas of game theory and operations research. Nevertheless, the discipline has experienced remarkable progress and multiple advances over the last two decades, so that many sophisticated optimization algorithms are now available to address current control problems in systems engineering. On the other hand, applying such methods is not straightforward and requires a long period of experimentation to find, among other aspects, start parameters, adequate objective functions, and the best optimization algorithm for the problem. Hence, the primary intention of this work is to investigate established and recent MOO methods from the application perspective for the purpose of control system design, offering practical experience, some open topics, and design hints. A very challenging problem in the systems engineering of power applications is to master the dynamic behavior of very large wind turbines. For this reason, it is used as a numerical case study to complete the presentation of the paper.


Introduction
Control engineering must evolve in step with technological progress, as the mastery of engineering systems becomes more and more difficult. One characteristic problem is that advanced systems need to fulfill optimality in several ways, where the stated objectives may simultaneously be opposing, conflicting, or complementary. Hence, Multiobjective Optimal Control (MOOC, see e.g., [1][2][3][4] and their references) arose primarily in the last two decades as a helpful instrument to handle these types of control cases. Recent survey works on the subject are, for instance, [5] and [6].
A significant expectation at that point was the idea of finding a general-purpose optimizer able to manage several objectives, with the capacity to address a wide range of control configurations and control operation problems. Nowadays, it is realized that such ideal MOO tools are very difficult to create and that MOO techniques can only undertake a minor number of problems and, in addition, only under concrete conditions. Furthermore, some algorithms that work correctly on one concrete set of optimization problems are not able to provide acceptable solutions in other cases, where other algorithms perform better [7]. Moreover, experience shows that prior knowledge about the control problem, the tuning parameters, the numerical behavior of the optimization approach, and the objective functions, supplemented by much working time, is essential before a successful application. While research in the field of MOO is focused on obtaining new algorithms, whose aim is to find more complex Pareto frontiers or more accurate solutions by utilizing specially constructed test objective functions, MOO users, for example control engineers, work with realistic cost functions. In such circumstances, the forms of the Pareto fronts are unknown in advance, and consequently, adjusting and tuning the optimization methods is not straightforward. Thus, despite MOO being an extremely effective instrument for solving very complex problems in control system design, its application is not simple.
Furthermore, current design issues appear to be fast gaining in complexity, where the inclusion of several systems with subsystem levels and additional objective functions must also be considered in the optimization procedure. Thus, Pareto methods cannot provide satisfactory results, and bilevel multiobjective optimization can facilitate the possibility of obtaining the desired outcome (see, for instance, [8]). An application of bilevel MOO to a control problem is given in [9].
Hence, the aim of this work, following the previous study [10], is to depict MOO control from a practitioner's perspective. The remainder of the work is organized as follows: Section 2 introduces the concept of multiobjective optimization for the sake of completeness and describes the most important algorithms. Typical objective functions for control are the subject of Section 3, and the corresponding evaluation procedures are analyzed in Section 4. In Section 5, aspects related to decision-making are presented, followed by the application example and the corresponding results in Sections 6 and 7, respectively. Finally, conclusions are drawn in Section 8.

Some Fundamentals on Multiobjective Optimization
Multiobjective optimization can be found in the literature under several different names, such as, for instance, multiperformance, multicriteria, or vector optimization. It can be described as the activity of obtaining a vector of parameters or decision variables as the outcome of an optimization process carried out on a vector field of objective functions with constraints that have to be satisfied during the operation. These objective functions normally correspond to mathematical descriptions of design specifications that are often in conflict with each other.

Definitions
The multiobjective optimization problem can be formally stated as finding a vector of decision variables that optimizes a vector of objective functions subject to the problem constraints, where J ∈ ℝ^nf is the vector of objective functions, u ∈ U ⊂ ℝ^l is the vector of decision variables, ρ ∈ ℝ^np is a vector of parameters, nf is the number of objective functions, ng is the number of inequality constraints, and nh is the number of equality constraints. By optimization, either minimization or maximization is meant, depending on the problem to be addressed. Contrary to single-objective optimization (SOO), where a unique global optimal solution exists, a MOO problem has many equivalent optimal solutions. Therefore, a notion of optimality has to be defined in advance. In the present work, the multiobjective Pareto optimality formulated in [11] is assumed. Definition 1: A point ρ° ∈ A ⊂ ℝ^l is said to be Pareto optimal with respect to A iff no other point ρ ∈ A satisfies the conditions J(u,ρ) ≤ J(u,ρ°) and Ji(u,ρ) < Ji(u,ρ°) for at least one i. This means that it is impossible to improve a Pareto optimal point in one objective function without worsening the value of at least one of the other objective functions.
In some cases, it is helpful to reckon with a definition of a suboptimal point that can be reached more easily by the algorithm and is, at the same time, an acceptable solution from the practical point of view. This is provided, e.g., by the definition of weak Pareto optimality. Definition 2: A point ρ° ∈ A ⊂ ℝ^l is said to be weakly Pareto optimal if no other point ρ ∈ A satisfies J(u,ρ) < J(u,ρ°).
In other words, a point is weakly Pareto optimal if no other point can improve all objective functions at the same time. Thus, Pareto optimal points are always weakly Pareto optimal, but the converse does not hold. All Pareto optimal points constitute the Pareto optimal set, a subset of U. The image of the Pareto optimal set under J is called the Pareto front and lies in the objective space.
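Definitions 1 and 2 translate directly into a dominance test. The following sketch (with hypothetical objective values for a minimization problem) filters the nondominated points out of a finite set of objective vectors:

```python
import numpy as np

def dominates(a, b):
    """True if point a Pareto-dominates point b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_filter(points):
    """Return the nondominated subset of a set of objective vectors."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            keep.append(p)
    return np.array(keep)

# Hypothetical biobjective values (J1, J2); the last two are dominated
J = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
front = pareto_filter(J)
```

Note that the filter realizes Definition 1 directly; replacing `a < b` by a strict comparison of all components would give the weak-dominance test of Definition 2.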
Definition 3: Given a vector objective function J(u,ρ) and the corresponding Pareto optimal set, the Pareto front is defined as the set of objective vectors attained on the Pareto optimal set. Three points in the objective space are very important: the first is the utopia point, also known as the ideal point; the second is the threat point (disagreement point or nadir point); and the third is the worst point.
Normally, the utopia point does not belong to the Pareto front. All definitions, including those of the threat point and the worst point, which are formulated using the max function, are stated in the sense of minimization. However, they can be modified accordingly to express the maximization case. Moreover, all definitions can be stated in terms of the vector of decision variables u instead of the parameter vector ρ. All definitions are geometrically explained in Figure 1 for a two-dimensional Pareto front. Nowadays, numerical implementations of multiobjective optimization theory have become a well-known design instrument, and a significant number of different methods is available. A priori, two groups must be distinguished: algorithms for discrete, binary, or combinatorial optimization problems and methods for continuous optimization problems. The attention in the present work is limited to the latter category.
In turn, MOO methods for continuous problems can be organized according to different viewpoints (see e.g., [12]). A simple and useful classification is proposed in [13], where three main groups can be distinguished: scalarization methods (a priori articulation of preference), nonscalarization/non-Pareto methods, and Pareto methods (i.e., a posteriori articulation of preference). On the other hand, Pareto methods can be divided into two subgroups: those based on mathematical programming and those based on metaheuristic programming. Methods for solving continuous problems are summarized in Figure 2. However, this work considers only the Pareto methods, in particular, the methods included in the gray box.

From a historical perspective, three stages can be recognized in the development of Pareto methods. Well-grounded and often used 20-year-old methods are placed in the first stage. Some of these methods are, for example, NBI (Normal Boundary Intersection, [14]), NSGA-II (Nondominated Sorting Genetic Algorithm, [15]), MOPSO (Multiobjective Particle Swarm Optimization, [16]) and SPEA 2 (Strength Pareto Evolutionary Algorithm, [17]). Modified, derived, and improved versions of the first-stage methods can be placed in a second stage. Typical methods to be included here are NBIm (modified NBI, [18]), DSD (Directed Search Domain, [19]), MOACO (Multiobjective Artificial Ant Colony Optimization, [20]), MOABC (Multiobjective Artificial Bee Colony Algorithm, [21]) and MOBA (Multiobjective Bat Algorithm, [22]). Finally, the most recent algorithms, such as DSD II (second version of DSD, [19]), NSGA-III (third generation of NSGA, [23]), MOGWO (Multiobjective Grey Wolf Optimization, [24]), MOMVO (Multiobjective Multiverse Optimization, [25]), and MOALO (Multiobjective Ant Lion Optimization, [26]), constitute the third stage.

Methods Founded on the Mathematical Programming
The NBI is one of the first algorithms to compute the Pareto front by using mathematical programming. Later, several algorithms were proposed to improve its performance and to overcome its drawbacks. In such a sense, the Normal Constraint (NC) [27], Physical Programming (PP) [28], Successive Pareto Optimization (SPO) [29], and the Directed Search Domain (DSD) [19] can be cited.
The above-mentioned methods transform the multiobjective problem into many single-objective constrained subproblems. Thus, the optimization is carried out by a single-objective solver subject to the imposed restrictions. The standard solver is the active-set algorithm, which works reasonably well if the objective functions are smooth and well scaled. These approaches produce a Pareto front with equally spaced points together with fast convergence, which are important properties for control applications.
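As an illustration of how such methods reduce the multiobjective problem to single-objective subproblems, the following sketch traces a front by sweeping a weighted-sum scalarization over a hypothetical convex biobjective problem. This is a simplified stand-in for NBI/NC, which instead use geometric constraints to enforce evenly spaced points:

```python
import numpy as np
from scipy.optimize import minimize

# Toy biobjective problem (illustrative, not from the paper):
# J1(u) = (u - 1)^2, J2(u) = (u + 1)^2, with u in [-2, 2]
def J(u):
    return np.array([(u[0] - 1.0) ** 2, (u[0] + 1.0) ** 2])

def solve_scalarized(w):
    """Minimize the weighted sum w*J1 + (1-w)*J2 with a gradient-based
    single-objective solver, one subproblem per weight."""
    res = minimize(lambda u: w * J(u)[0] + (1.0 - w) * J(u)[1],
                   x0=[0.0], bounds=[(-2.0, 2.0)], method="SLSQP")
    return res.x, J(res.x)

# Sweep the weight to trace the (convex) Pareto front point by point
front = [solve_scalarized(w)[1] for w in np.linspace(0.05, 0.95, 10)]
```

The weighted sum only reaches convex parts of the front, which is precisely the drawback that motivated NBI and its successors.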

Methods Founded on the Metaheuristic Programming
The methods based on metaheuristic programming can be grouped into evolutionary algorithms and particle swarm intelligence. Multiobjective evolutionary algorithms (MOEA) define first an initial set of solutions (initial population) and then attempt to refine the set of solutions by means of a random selection from the solution space until the optimal Pareto set is obtained.
The population is renewed by the action of several genetic operators, known as recombination (a new point is generated from other points of the population, e.g., by averaging), mutation (a recently created point is randomly chosen and perturbed via the realization of a random variable), and selection (newly created points with the best fitness are used to replace points of the old population). These three operations are implemented by many different evolutionary algorithms (for a comparative study, see [30]).
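A minimal sketch of these three operators acting on a hypothetical biobjective fitness follows; it is not a full NSGA-II implementation, which additionally uses nondominated sorting and crowding distance:

```python
import numpy as np

rng = np.random.default_rng(0)

def J(u):
    # Illustrative biobjective fitness (minimization), not from the paper
    return np.array([u[0] ** 2, (u[0] - 2.0) ** 2])

def dominates(a, b):
    return bool(np.all(a <= b) and np.any(a < b))

def evolve(pop, n_gen=50):
    """One possible realization of the three operators described above."""
    for _ in range(n_gen):
        # Recombination: average two randomly chosen parents
        i, k = rng.integers(len(pop), size=2)
        child = 0.5 * (pop[i] + pop[k])
        # Mutation: perturb the child with a random realization
        child = child + rng.normal(scale=0.1, size=child.shape)
        # Selection: replace a random member if the child dominates it
        m = rng.integers(len(pop))
        if dominates(J(child), J(pop[m])):
            pop[m] = child
    return pop

pop = [rng.uniform(-5.0, 5.0, size=1) for _ in range(20)]
pop = evolve(pop)
```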
Particle swarm optimization is another stochastic optimization method. It starts with an initial population of particles, which evolve and survive until the last generation. This characteristic distinguishes particle swarm intelligence from evolutionary algorithms, where the population changes.
Particle swarm algorithms search the space of variables by using knowledge from previous generations and moving at a specifically determined speed in the direction of the global best particle. Many other algorithms were created following this principle. The common idea is to imitate the behavior of various swarms or colonies of animals, such as bats, bees, or ants. However, these should be distinguished from algorithms like MOALO and MOGWO, which emulate the hunting activities of antlions and grey wolves and their interaction with prey; these are based on the Predator-Prey formalism [31].

Methods for Bilevel Multiobjective Optimization Problems
Bilevel multiobjective optimization consists of two multiobjective optimization algorithms running at two different levels, where one algorithm runs inside the other one. The internal algorithm solves the low-level optimization problem, while the external algorithm processes the upper-level problem. This is a nested operation, where the outer algorithm calls the inner one at every upper-level point. Hence, the computational burden of the bilevel optimization algorithm is very high and, therefore, it is practical only for applications of low complexity. It is common to find metaheuristic programming at the external level and mathematical programming at the internal one; nonetheless, both levels can be served by the same class of algorithms. This optimization concept is not considered in the current study, but it is an ongoing research topic.
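The nested structure can be sketched with a toy single-variable problem (purely illustrative, not from the case study), where the outer search calls the inner solver at every upper-level candidate:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lower_level(xu):
    """Inner problem: for a given upper-level candidate xu,
    minimize (xl - xu)^2 + 0.1 * xl^2 over xl."""
    res = minimize_scalar(lambda xl: (xl - xu) ** 2 + 0.1 * xl ** 2)
    return res.x

def upper_objective(xu):
    xl = lower_level(xu)          # nested call at every upper-level point
    return (xu - 1.0) ** 2 + xl ** 2

# Outer search: a simple sweep over upper-level candidates; the cost of the
# whole procedure is (outer evaluations) x (inner solver calls)
candidates = np.linspace(-2.0, 2.0, 41)
best_xu = min(candidates, key=upper_objective)
```

Even in this toy setting, the inner solver runs once per outer candidate, which illustrates why the computational burden grows so quickly in realistic bilevel problems.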

Selecting Methods for the Application
Optimization algorithms for MOO problems work properly only for a limited number of applications [7] and, therefore, it is difficult to suggest one; recommendations are only exceptionally formulated in the literature. As a general-purpose indication, it is pointed out in [32] that methods that guarantee necessary and sufficient conditions for Pareto optimality should be tested first. Second, methods with only guaranteed sufficient conditions may be studied, followed at the end by the remaining methods.
From a practical standpoint, having numerous algorithms may be beneficial in terms of being able to select the most appropriate one according to the application. Hence, the designer can prioritize the choice where accuracy of the solution, computational load, regular distribution of Pareto points on the front, and speed of convergence are just a few examples.

Objective Functions for MOO Control Problems
When advanced optimization techniques are used, the appropriate choice of the objective functions is crucial for an effective control system design. This is particularly relevant in the case of MOO, since several objectives must be compromised at the same time. Moreover, the objective functions must not only be a useful indicator of the operation of the control system, but they must also fulfill the mathematical properties imposed by the optimizer.

Typical Performance Indices
Performance indices are widely used as objective functions in the classic optimization problem of control systems. They are normally expressed as a function of the control error, J = ∫₀^∞ f(e(t)) dt and J = Σ_{k=0}^∞ f(e(k)), for the continuous- and discrete-time cases, respectively. Function f can be, for instance, f(e) = e² (ISE), f(e) = |e| (IAE), f(e) = t e² (ITSE), or f(e) = t |e| (ITAE). In general, several other variables and their derivatives can be added as soft constraints for control signals to obtain more complex objective functions.
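From sampled simulation data, these indices reduce to numerical quadrature. A sketch with a hypothetical exponentially decaying error signal follows; the time-averaged variants of Section 3.2 are obtained by dividing by the horizon T:

```python
import numpy as np

def trapezoid(y, t):
    """Composite trapezoidal rule for nonuniform or uniform sampling."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def indices(t, e):
    """Classic integral performance indices evaluated from sampled data."""
    return {
        "ISE":  trapezoid(e ** 2, t),         # integral of squared error
        "IAE":  trapezoid(np.abs(e), t),      # integral of absolute error
        "ITSE": trapezoid(t * e ** 2, t),     # time-weighted squared error
        "ITAE": trapezoid(t * np.abs(e), t),  # time-weighted absolute error
    }

# Hypothetical exponentially decaying control error e(t) = exp(-t)
t = np.linspace(0.0, 20.0, 20001)
J = indices(t, np.exp(-t))
# Time-averaged variant for truncated simulations (Section 3.2): J / T
J_avg = {k: v / t[-1] for k, v in J.items()}
```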

Performance Indices for Time-Limited Problems
The performance indices of the previous subsection consider an infinite time horizon. However, the corresponding integrals can be solved in closed form only in a few cases, where the Laplace transform is used to leave the time domain; this is not possible for nonlinear systems. Another procedure is the evaluation of performance indices using simulation data. In such a case, the time series are truncated and, as a consequence, the integrals must be averaged in time, for instance J = (1/T) ∫₀^T f(e(t)) dt or, in discrete time, J = (1/(N Ts)) Σ_{k=0}^{N} f(e(k)), where Ts is the sampling time. Time-averaged integrals can be used for all performance indices proposed in Section 3.1.

Performance Indices Formulated Using Fractional Order Calculus
Fractional-order analysis was developed in the 19th century as a generalization of integer-order integral-differential calculus to the real-order case. This work was undertaken by several well-known mathematicians, for example, Cauchy, Euler, Grünwald, Letnikov, Liouville, and Riemann [33].
Another application field for the fractional-order calculus is the system theory and control, where many new developments were carried out in the past 15 years (see, among others, [34]). Integral performance indices as described in 3.1 can also be formulated in the framework of the fractional calculus. Thus, it is pointed out in [35] that a control application presents a better response in the case of oscillatory signals when it is designed using a fractional order cost function. An application of MOO control using fractional order performance indices is reported in [36].
Moreover, the classic performance indices presented at the beginning of this section were generalized in [37] for fractional-order integrals. The continuous-time performance index (4) is expressed in the sense of the fractional integral, where D^(1−k) is the fractional derivative and k is the fractional order.
The fractional definite integral of Equation (7) can be solved by means of the fractional Barrow formula if an admissible fractional derivative such as the Grünwald-Letnikov or Liouville formula is used [38]. The implementation of the fractional integral can be done by using, for instance, N-Integer [39] or FOTF [40].
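A minimal sketch of the Grünwald-Letnikov approximation of a fractional integral follows, using the standard coefficient recurrence; it is an illustrative stand-in for the cited toolboxes, not a reimplementation of them:

```python
import numpy as np
from math import gamma

def gl_fractional_integral(f_samples, alpha, h):
    """Grünwald-Letnikov approximation of the fractional integral of order
    alpha > 0 at the last sample: I^alpha f(t_n) ~ h^alpha * sum_j c_j *
    f(t_{n-j}), with c_0 = 1 and c_j = c_{j-1} * (1 - (1 - alpha)/j)."""
    n = len(f_samples)
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (1.0 - alpha) / j)
    # The newest sample pairs with c_0, the oldest with c_{n-1}
    return h ** alpha * float(np.dot(c, f_samples[::-1]))

h = 1e-3
t = np.linspace(0.0, 1.0, 1001)
# Half-order integral of f(t) = 1 is t^0.5 / Gamma(1.5); check at t = 1
approx = gl_fractional_integral(np.ones_like(t), 0.5, h)
exact = 1.0 / gamma(1.5)
```

For alpha = 1, the coefficient recurrence gives c_j = 1 for all j, and the formula collapses to an ordinary rectangle-rule integral, which is a convenient sanity check.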

Objective Functions for Specific Applications
In the case of control system design, time domain specifications like maximum rise time, minimum settling time, and minimum overshoot or frequency domain specifications like bandwidth, gain margin, phase margin, and resonant peak can be used as metrics for MOO control. However, since such measures are not convex, MOO algorithms can fail during the optimization process. The construction of solid and mathematically sound objective functions based on the above-mentioned metrics for MOO control is still an open subject.
Following this idea, a convex objective function including fatigue damage was proposed in [41] for the particular cases of wind turbine control. The metric is designed to formulate a compromise between the pitch actuation and the reduction of blade fatigue produced by the individual pitch control.
Wind turbine systems are also characterized by periodic signals as a consequence of the permanent rotation. The use of such variables in objective functions is difficult because the integrals do not converge. A possible way to overcome this limitation is to evaluate the functions over a finite period or to construct a piecewise signal for the objective function that considers only some periods of the original signal and is zero for the rest. This procedure is often used to build an objective function including three 120-degree-shifted coupled moments (M1, M2, M3); the cost function is then defined over these windowed signals.
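The windowing idea can be sketched as follows: the cost is evaluated over an integer number of periods only, so that the truncation does not bias the average. The signal frequency and amplitudes below are hypothetical, not taken from the case study:

```python
import numpy as np

def periodic_objective(t, period, moments):
    """Mean-square cost evaluated over an integer number of periods only,
    discarding the trailing fraction of a period."""
    n_per = int(t[-1] // period)
    mask = t < n_per * period
    return sum(float(np.mean(m[mask] ** 2)) for m in moments)

# Hypothetical 120-degree-shifted moments at a rotor frequency of 0.2 Hz
t = np.linspace(0.0, 30.0, 30000, endpoint=False)
w = 2.0 * np.pi * 0.2
M = [np.cos(w * t + k * 2.0 * np.pi / 3.0) for k in range(3)]
cost = periodic_objective(t, 1.0 / 0.2, M)
```

For the three unit-amplitude cosines above, each mean square over whole periods is 1/2, so the cost evaluates to 3/2 regardless of the truncated tail.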

Evaluation of Objective Functions
The evaluation of the objective functions is carried out by the solver several times per iteration during the numerical optimization process. This evaluation means, for example, the calculation of the definite integrals (3)-(5). The values of the objective functions can be computed in two different ways: evaluation based on models and evaluation based on simulation data. Both are explained in the following.

Evaluation of Objective Functions Based on Dynamic Models
Objective functions are normally related to output variables of a system, whose behavior is represented by a dynamic model. If the model is linear and the objective function is simple, it is possible to find a closed formula to compute the objective function. In the case of [42], models are given in the form of transfer functions, and the infinite integrals (3)-(5) are computed using the Parseval formula (or its discrete-time counterparts) combined with the Åström-Jury-Agniel algorithms [43]. This approach is also used here in the numerical study. The study case in Section 6 demonstrates how to use this formula from a practical point of view.
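As a related model-based evaluation (a sketch, not the Åström-Jury-Agniel implementation itself), the ISE of the impulse response of a stable, strictly proper transfer function can be computed exactly from a Lyapunov equation instead of by time-domain simulation:

```python
import numpy as np
from scipy.signal import tf2ss
from scipy.linalg import solve_continuous_lyapunov

def ise_from_tf(num, den):
    """ISE of the impulse response of a stable, strictly proper
    E(s) = N(s)/D(s): convert to state space (A, B, C), solve the
    controllability-Gramian Lyapunov equation A P + P A' = -B B',
    and evaluate the integral of e(t)^2 as C P C'."""
    A, B, C, _ = tf2ss(num, den)
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return (C @ P @ C.T).item()

# e(t) = exp(-t)  <->  E(s) = 1/(s + 1), whose ISE is exactly 1/2
ise = ise_from_tf([1.0], [1.0, 1.0])
```

As in the method used by the paper, the result requires the denominator to be stable; for an unstable D(s), the integral diverges and the Lyapunov solution is meaningless.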

Evaluation Based on Simulation Data
As previously seen, the evaluation based on models is restricted to linear systems with specific objective functions. However, the approach cannot be used in the case of highly complex objective functions. Fractional order objective functions present a similar difficulty because an extension of the Parseval formula for fractional integrals and a numerical procedure to compute them are still unavailable at present. An alternative methodology is to compute the objective functions numerically as part of the simulation.
The benefit of this approach is that almost every type of objective function can be computed. The disadvantage is that simulations must be ended at a finite point in time, and consequently, steady-state values can only be obtained by approximation. In such a situation, time-averaged objective functions (see Section 3.2) should be applied.
Another weakness of the simulation-based approach is the need for long simulation times. It is remarked in [44] that the numerical evaluation of objective functions by using simulation data may take from minutes to days. Practical experience shows that a fast MOPSO algorithm requires about 45 days to generate a three-dimensional Pareto surface in a control design problem, including three objective functions and a simulation time of 60 s. The simulation-based approach is schematized in Figure 3.

Decision-Making
All points of the Pareto front are equally optimal and valid solutions to the vector optimization problem. Although any point can be selected for the final control implementation, not all points provide the same performance. Hence, the final selection is carried out by a decision maker. Two main concepts can be applied to decision-making. The first one is to introduce additional criteria, for example, particular specifications for the closed-loop control system design, and the other one is to establish a point on the Pareto front that represents a good balance between all objective functions.

Approach Using Additional Control Criteria
The idea here is to introduce a second optimization round with search space in the optimal Pareto set and a particular control system specification that has to be satisfied as an objective function (for instance, minimum overshoot, minimum settling time, maximum bandwidth, etc.). For example, the minimum structured singular value is used in [45] to select the controller with the best robustness contained within the Pareto set.
This second round only selects the best candidate with respect to the analyzed property within the finite Pareto set. Therefore, the solution is normally suboptimal with regard to the optimal solution that would be obtained if this property were optimized directly in the first round.

Approach Using a Compromise between the Criteria
This approach does not require evaluation of supplementary objective functions to select the solution. This technique can be implemented by means of cooperative negotiation [46] or bargaining games [47]. The latter is a helpful and simple mechanism that is explained in the following.
Beginning with the point of shortest distance between the utopia point and the Pareto front, which is known as the compromise solution (CS), bargaining games offer various possible solutions. The shortest distance is computed from the Euclidean norm of the difference between a point on the Pareto front and the utopia point. The Nash bargaining game provides its solution (NS) as the point on the Pareto front that maximizes the n-volume spanned with the threat point; in the case of a two-dimensional problem, this is the area of the rectangle (c, B, NS, A). A further bargaining solution becomes the intersection point between the Pareto front and a 45°-ray passing through the threat point if the problem includes only two dimensions. All cases with two objective functions are illustrated in Figure 4.
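On a finite, discretized Pareto front, both solutions reduce to simple searches. A sketch with hypothetical biobjective front points (minimization) follows; CS and NS correspond to the compromise and Nash solutions described above:

```python
import numpy as np

def compromise_solution(front, utopia):
    """Pareto point with the shortest Euclidean distance to the utopia
    point (CS)."""
    d = np.linalg.norm(front - utopia, axis=1)
    return front[np.argmin(d)]

def nash_solution(front, threat):
    """Pareto point maximizing the n-volume (product of gains) spanned
    with the threat point (NS), for minimization problems."""
    gains = np.prod(np.maximum(threat - front, 0.0), axis=1)
    return front[np.argmax(gains)]

# Hypothetical discrete biobjective front (minimization)
front = np.array([[0.1, 0.9], [0.2, 0.6], [0.45, 0.35], [0.9, 0.1]])
utopia = front.min(axis=0)   # componentwise best values
threat = front.max(axis=0)   # componentwise worst values on the front
cs = compromise_solution(front, utopia)
ns = nash_solution(front, threat)
```

On this roughly symmetric front, both criteria happen to pick the same knee point; on skewed fronts, they generally differ.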

Description of the Application and the Control Problem
A numerical example of a wind turbine control system is introduced in the following to study the behavior of the multiobjective optimization algorithms. The application is the generator speed control of a wind turbine operated above rated wind speed. The control variable is the blade pitch angle, acting through the pitch actuator. A characteristic of this control system is that the pitching activity introduces disturbances to the tower and, consequently, the fatigue increases.
Thus, the control objective is to maintain a constant rotational speed independently of variations in the above-rated wind speed and, at the same time, to increase the tower damping in order to reduce the amplitude of oscillations. The control system, as presented in [48,49], includes two control loops, namely the collective pitch control and the active tower damping control. The control system configuration is presented in Figure 5, where y1, y2 and w are the rotational speed of the generator, the fore-aft tower-top acceleration and the rated rotational speed of the generator, respectively. Both control loops have the same control variable, and therefore, the coupling between the control loops is evident. The transfer functions relating the control errors to the reference are described by

e1(s)/w(s) = N1(s)/D1(s), (14)

e2(s)/w(s) = N2(s)/D2(s), (15)

respectively. The interdependence between both control loops is also observable from (14) and (15), where both controllers appear in the transfer functions. An important problem at the beginning of the controller tuning occurs when no information is available to start the search for parameters and no reference exists for the interdependence between them. Hence, an automatic search for an adequate starting point is useful. Consequently, a combined search for the optimal parameters of both controllers is an illustrative application with which to assess Pareto optimization algorithms.

Simplified Model of the System
The model-based approach is used in this study for the evaluation of the objective functions. Hence, a dynamic model of the wind turbine is necessary. However, the wind turbine is a very complex system, and therefore a simplified model including the rotational dynamics of the powertrain and the fore-aft dynamics of the tower is considered. The state-space equations are given by (16), where x1 = ωr, x2 = ωg, x3 = θr − nx θg, and x4 = ẋt. Moreover, m, J, K, D, and nx are the masses, mass moments of inertia, stiffness coefficients, damping coefficients, and the gearbox ratio. Furthermore, θ, ω, F, T, and β are variables denoting rotation angle, rotational speed, force, torque, and pitch angle, respectively. Ta, Tg, and Ft are the inputs, and ωg and ẍt are the outputs. The subscripts dt, r, g, a, t, and x refer to the drivetrain, rotor, generator, aerodynamics, tower, and gearbox, respectively. Parameters for the model (16) are obtained from the reference wind turbine proposed in [50], which was analyzed in [51] from the control perspective. The most important parameters are summarized in Table 1. The inputs of (16) are deviations from the operating point, ΔTa = Ta − Ta0 and Δβ = β − β0, respectively. The collective pitch control loop is implemented by means of a PID controller and the active tower damping control by a P controller, whose polynomials are P1(s) = 1/s, P2 = 1, Q1(s) = q0 s² + q1 s + q2, and Q2 = K. The parameter vector ρ, which has to be found by the MOO algorithms, is consequently ρ = (q0, q1, q2, K).
It is pointed out in [52] that the ISE performance index often leads to oscillatory behavior as a consequence of the large errors occurring during the initial transient, which contribute significantly to the performance index. This disadvantage is avoided here by using the time-weighted ISE performance index (ITSE), defined by J = ∫₀^∞ t e²(t) dt. (20)

Mechanization of the Optimization Procedure
The first step is to generate objective functions that can be evaluated by the MOO algorithms. From the general Parseval formula (9) and the ITSE index (20), the functions f(t) and g(t) are defined accordingly. According to (14) and (15), the Laplace transform of g(t) can be obtained and expressed as a rational polynomial function. The required derivatives are then obtained by polynomial differentiation. If the functions F and G are rational, the complex integral can be solved using the Åström-Jury-Agniel algorithm [43] as modified by [53]. Hence, the evaluation of the objective functions is completed by defining

C(s) = N(s) [dD(s)/ds] − D(s) [dN(s)/ds], (29)

where N and D are either N1 and D1 from (14) for the first control loop or N2 and D2 from (15) for the second control loop. A Matlab implementation of the generalized algorithm to compute (27) can be found in [42].
It is important to remark that the condition for the existence of the solution is that the polynomials D1 and D2 are stable, which must be guaranteed by the controller design. Thus, closed-loop stability is checked for every choice of controller parameters in the search space during the optimization process.
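Such a stability check can be sketched as a root test on the closed-loop characteristic polynomials; candidate parameter sets leading to unstable denominators are rejected before the objective functions are evaluated. The polynomials below are illustrative, not the actual D1 and D2:

```python
import numpy as np

def is_stable(poly):
    """Continuous-time stability test: all roots of the characteristic
    polynomial lie strictly in the open left half-plane."""
    return bool(np.all(np.roots(poly).real < 0.0))

# Candidate closed-loop denominators arising during the parameter search
candidates = [[1.0, 2.0, 2.0],   # roots -1 +/- j  -> stable, kept
              [1.0, 0.0, -1.0]]  # roots +/- 1     -> unstable, rejected
feasible = [p for p in candidates if is_stable(p)]
```

For high polynomial orders, a Routh-Hurwitz test avoids the root computation entirely, but the root-based check above is the simplest to embed in an optimization loop.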

Optimization Results
To carry out a quantitative assessment of the optimization outcomes, the effective computation time for a whole Pareto front of 70 points, the number of evaluations of the objective functions during the optimization process, the inverted generational distance (IGD), the spread (SP), and the epsilon indicator (see [54,55]) are considered.

Evaluation Procedure for the MOO Algorithms
The computational burden of the algorithms is obtained by time measurement of the complete optimization run. The final numbers correspond to the average of ten runs for each algorithm. In addition, the required number of evaluations of the objective functions is included in the assessment.
Several other indicators of the quality of the obtained Pareto front are considered. The Inverted Generational Distance (IGD, [56]) measures the Euclidean distance between the computed Pareto front and the true Pareto front, which is considered as reference. The IGD is computed from the Euclidean distances di between the points of the reference front and the closest computed points according to IGD = (Σ di)/n, where n is the total number of points in the Pareto front. When the IGD is equal to zero, the computed Pareto front coincides with the real one. The spread [57], also known as distribution, is computed using the individual maximum and minimum values Ji,max and Ji,min of the ith objective function; lower values indicate better coverage. Lastly, the epsilon indicator determines whether one estimation set is worse than another. In the current work, the real Pareto front is compared with the results of all algorithms; consequently, a lower value for one algorithm implies that it provides a better estimation than the others.
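The IGD computation can be sketched as follows; the reference front is hypothetical, and the spread and epsilon indicators are omitted for brevity:

```python
import numpy as np

def igd(approx, reference):
    """Inverted generational distance: for every point of the reference
    (true) front, take the Euclidean distance to the closest computed
    point, then average over all reference points. Zero means the
    computed front covers the reference exactly."""
    d = np.linalg.norm(reference[:, None, :] - approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Hypothetical true front and a uniformly offset estimate of it
reference = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
shifted = reference + np.array([0.1, 0.0])
```

Because the average runs over the reference points, IGD penalizes both inaccurate points and gaps in coverage, which is why it is preferred over the plain generational distance.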

Assessment of Results
The optimization results are only valid for the application under consideration and cannot be assumed to be generally applicable. The results provided by the metrics are shown in Table 2. The best values are emphasized in bold and italic, and the second-best values in bold. Overall, NBI, as well as NSGA-II and MOPSO, appear to be good options for these types of control problems. It is interesting to analyze the results produced by MOBA. It belongs to the swarm intelligence-based algorithms, a group whose performance can be placed in second position; however, the numbers produced by MOBA are closer to those of the first category. Figure 6 depicts the outcomes produced by the decision maker based on bargaining games for the Pareto front created by NBI. The regularly distributed Pareto front, which is a characteristic of NBI, simplifies the solution-finding process. Finally, Figure 7 shows the Pareto fronts obtained from all studied algorithms.

Important Issues Emerging from Practical Experience
Several issues associated with MOO control came to light during the implementation and deserve attention. For example, there is currently a tendency toward using a large number of basic objective functions. Although the performance of MOO algorithms for more than two objective functions has improved significantly in recent years, optimization times in the order of hours or days for problems with three objective functions show that these improvements are still insufficient. Furthermore, decision-making in such situations becomes a difficult task.
Thus, from the practical point of view, it is still preferable to scale the problem down to two complex objective functions by clustering multiple basic objectives into two classes. This idea can be realized by using the concept of cooperative and non-cooperative team games combined with a weighted-sum objective function per team, where each summed objective function comprises a collection of non-contradictory criteria. As a result, conflicting criteria are assigned to different teams.
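A minimal sketch of this team-based clustering is given below. The basic objectives, their grouping into teams, and the weights are purely illustrative assumptions, not taken from the case study; the point is only the structure of two weighted-sum team objectives built from non-contradictory criteria.

```python
def team_objectives(x):
    """Cluster four hypothetical basic objectives into two team-wise
    weighted sums, yielding a two-objective problem for the MOO solver."""
    # Team A: tracking performance (assumed mutually non-contradictory).
    j_speed = (x[0] - 1.0) ** 2    # speed-error surrogate
    j_power = (x[1] - 2.0) ** 2    # power-error surrogate
    # Team B: actuation effort / loads (assumed mutually non-contradictory).
    j_pitch = x[0] ** 2            # pitch-activity surrogate
    j_torque = x[1] ** 2           # torque-activity surrogate
    # One weighted-sum objective per team; the weights are illustrative.
    J_A = 0.6 * j_speed + 0.4 * j_power
    J_B = 0.5 * j_pitch + 0.5 * j_torque
    return J_A, J_B  # the two (conflicting) objectives handed to the MOO solver
```

The conflict between the teams is preserved: parameters that reduce the tracking objective J_A increase the effort objective J_B, so a genuine Pareto front still arises, but in only two dimensions.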
Another open aspect is the initialization of the algorithms by setting start values. Depending on these values, convergence takes more or less time. At present, there are no optimal, automatic procedures to initialize the algorithms, so experience is necessary to minimize the time spent on trial and error. In particular, the algorithms need a search space at the start, which in many control problems cannot be chosen freely. For instance, if the stability region of the controller parameters is unknown a priori and the search ranges are chosen incorrectly, the closed-loop system may be unstable at the start, and the algorithm will take a long time to converge to a stability region. It is even possible that the entire optimization takes place outside the stabilizing parameter region, making the optimization infeasible. Thus, a preliminary step should be to determine the stabilizing parameter region and to define the start search range inside it.
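Such a stability pre-check can be sketched as follows for a linear state-feedback loop. The system matrices and candidate gains below are hypothetical and serve only to illustrate filtering the start search range down to stabilizing parameters before the optimization begins.

```python
import numpy as np

def is_stabilizing(A, B, K):
    """Continuous-time closed-loop stability check for u = -K x:
    all eigenvalues of A - B K must lie in the open left half-plane."""
    eig = np.linalg.eigvals(A - B @ K)
    return bool(np.all(eig.real < 0))

# Hypothetical second-order plant (open-loop unstable: eigenvalues 1 and -2).
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])

# Candidate gain grid spanning the intended search range.
candidates = [np.array([[k1, k2]]) for k1 in (0.0, 5.0) for k2 in (0.0, 2.0)]

# Keep only stabilizing candidates; the MOO start range is drawn from these.
feasible = [K for K in candidates if is_stabilizing(A, B, K)]
```

Running the optimizer only over the feasible set avoids spending iterations on closed-loop-unstable parameter combinations.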
Related to the previously described issue is the fact that there are often several unconnected stabilizing search spaces. Since MOO algorithms are restricted to work within a fixed search space, the global optimum may never be reached because the corresponding parameters lie in a different search space. Furthermore, the value of one parameter often extends or contracts the stability range of other parameters, so that the search spaces of the other parameters change continuously. This effect is illustrated in Figure 8, where it can be observed that the search space for (q0 q1 q2) depends on the value of parameter p1. None of these cases is handled by existing MOO algorithms. Thus, algorithms with variable, conditioned, and discontinuous search ranges are needed, but they are currently unavailable.
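A conditioned search range of this kind can be emulated on top of a fixed-range algorithm, for example by making the bounds of one parameter a function of another before each sample is drawn. The dependence below is a hypothetical stand-in for the one shown in Figure 8, not the actual relation of the case study.

```python
import random

def q_bounds(p1):
    """Hypothetical conditioned search range: the admissible interval for q
    shrinks linearly as p1 grows and vanishes beyond p1 = 4."""
    half_width = max(0.0, 2.0 - 0.5 * p1)
    return (-half_width, half_width)

def sample_q(p1, rng):
    """Draw q directly from the p1-dependent range (no rejection needed)."""
    lo, hi = q_bounds(p1)
    return lo + (hi - lo) * rng.random()

rng = random.Random(0)
q = sample_q(1.0, rng)  # lies inside the conditioned range (-1.5, 1.5)
```

This is only a workaround: the underlying algorithm still assumes a fixed box, so native support for variable and discontinuous search ranges remains an open need.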
Finally, current MOO algorithms are not deterministic in the computer-science sense. Therefore, there is no guarantee that MOO control approaches can work in a real-time environment, where the optimization must finish within a sampling period to meet the deadlines.
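One pragmatic mitigation is an anytime pattern, in which optimization steps run only while the sampling-period budget allows and the best solution found so far is always returned. The sketch below is a generic illustration of that pattern with a toy step function; it bounds the time spent, but is not a hard real-time guarantee.

```python
import time

def anytime_optimize(step, budget_s):
    """Run optimization steps only while the time budget allows and
    return the best (solution, cost) pair found so far."""
    best_x, best_j = None, float("inf")
    t0 = time.monotonic()
    while time.monotonic() - t0 < budget_s:
        x, j = step()
        if j < best_j:
            best_x, best_j = x, j
    return best_x, best_j

# Toy step: each call proposes a slightly better candidate.
calls = []
def toy_step():
    calls.append(1)
    return len(calls), 1.0 / len(calls)

best_x, best_j = anytime_optimize(toy_step, 0.02)  # 20 ms budget
```

Since the budget is checked only between steps, the worst-case overrun is one step duration; a true real-time deployment would additionally require a bound on the step time itself.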

Conclusions
In this paper, multiobjective optimal (MOO) control is investigated from the user perspective. Multiobjective optimization is briefly introduced. In particular, aspects related to the control application, such as the selection and evaluation of objective functions and the decision-making process, are examined. Several old and relatively new MOO algorithms are studied from the control viewpoint by using an example from wind energy control systems. The performances of the algorithms are compared quantitatively by using standard indicators for MOO algorithms.
Results show that well-established algorithms like NBI, NSGA-II, and MOPSO are solid and still define the state of the art, at least for control applications. Among the newer algorithms, MOBA stands out, but in general, all of them need to be improved for real-life control applications. In addition, several aspects arising from user experience were reported, and limitations regarding the stability of the closed-loop system and the search spaces were highlighted.
In general, multiobjective optimization is a sophisticated tool that greatly aids the effort to master the control system design of very complex applications. On the other hand, the current state of the art of MOO algorithms allows only limited use.
Finally, this work focused on the comparison of MOO methods for solving multiple control loops with multiple controllers and the associated problems. Since all methods solve the same objective functions with the same parameters, they are expected to provide the same results for the same decision-making. Thus, a study of the parametric sensitivity and robustness of the results is necessary before concluding whether the methods can be trusted for application with real wind turbines in real-time operation. Such aspects are currently being analyzed and will be reported in future work.
Funding: This work is financed by the Federal Ministry of Economic Affairs and Energy (BMWi).
Institutional Review Board Statement: Not applicable.