Hierarchical Population Game Models of Coevolution in Multi-Criteria Optimization Problems under Uncertainty

Abstract: The article develops hierarchical population game models of co-evolutionary algorithms for solving the problem of multi-criteria optimization under uncertainty. The principles of vector minimax and vector minimax risk are used as the basic optimality principles for the multi-criteria optimization problem under uncertainty. The concept of equilibrium of a hierarchical population game with the right of the first move is defined. Necessary conditions are formulated under which the equilibrium solution of a hierarchical population game is a discrete approximation of the set of optimal solutions to the multi-criteria optimization problem under uncertainty.


Introduction
One of the main problems that arise in the development of modern control systems is the problem of ensuring the required quality of their functioning in a wide range of changes in operating conditions. The effectiveness of control systems design methods is determined by the possibilities of taking into account uncertain factors, such as the multicriteria nature of control goals and the uncertainty of environmental conditions. Thus, formally, the problem of control systems design is a problem of multi-criteria optimization under uncertainty (MOU).
A review of the extensive literature shows that the approaches generalizing Germeyer's guaranteed result principle [1] to the class of MOU problems are currently being actively developed and are the most promising. In [2,3], the principles of vector minimax and vector minimax risk, which are multi-criteria generalizations of the well-known Wald and Savage principles, respectively, are developed. Generalizations of the principles of vector minimax and vector minimax risk to models of binary preference relations in the form of convex dominance cones are also considered there. Mathematical methods for solving dynamic MOU problems based on the vector minimax principle and its generalizations are developed in [4]. In [5][6][7], a more general concept of operator minimax is introduced, and the necessary and sufficient conditions for its existence in functional spaces are investigated. In [3,8,9], the vector minimax principle is interpreted from the standpoint of game theory, and the relationship between the concepts of vector minimax and the saddle point is investigated.
However, the application of these approaches to applied multi-criteria problems of control optimization under uncertainty faces a number of difficulties. The frequent need to implement control algorithms in real time requires representing control actions, in the general case, in the form of parameterized program-corrected control laws. Such cases are characterized by a high dimension of the criterion space and of the space of control parameters, by non-linearity, non-convexity, and the presence of discontinuities in the components of the vector performance indicators. These features of the problem statements, combined with the need for global optimization, make it difficult or impossible to use known methods and algorithms for solving MOU problems.
Thus, there is a need to develop a new, more efficient computing technology that combines the advantages of global and local multi-criteria search and allows the implementation of control algorithms in real time. The developed computing technology should be compatible with promising architectures of distributed computing systems [10,11], models and methods of distributed computing [12][13][14].
Currently, a qualitatively new approach to solving optimization problems with high computational and structural complexity is being intensively developed, based on the development of co-evolutionary algorithms. In [13], models, forms of coevolution, types of interaction of populations, and models of the distribution of computing resources between subpopulations are discussed. Depending on the nature of the interaction of coevolving populations, two forms of coevolution are studied: cooperative coevolution and competitive coevolution.
Cooperative coevolution involves the decomposition of the set of parameters and/or the objective function of the optimization problem being solved. The following types of cooperative coevolution are the most widespread. Soft Grouping Cooperative Coevolution (SGCC) implements a "soft" distribution of variables across several groups, controlling the degree to which variables belong to groups by means of a probability distribution function [15]. Differential Grouping Cooperative Coevolution (DGCC) implements a decomposition strategy that minimizes the interdependence between groups of variables [16]. Multi-Level Cooperative Coevolution (MLCC) [17] treats the size of a group of parameters as an optimized parameter itself. The Hierarchical Coevolution Model, a model of the coevolution of symbiotic species [18], takes into account homogeneous and heterogeneous aspects of coevolution in order to maintain diversity, accelerate convergence, and prevent premature convergence of the search for optimal solutions.
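The decomposition idea behind cooperative coevolution can be illustrated with a minimal sketch, assuming a fixed grouping of variables, a shared context vector, and a simple accept-if-better random local search within each group; the function name and the inner search loop are illustrative assumptions, not taken from the cited algorithms:

```python
import numpy as np

rng = np.random.default_rng(1)

def cooperative_coevolution(f, dim, groups, iters=300):
    """Minimal cooperative-coevolution sketch: the variables are split
    into groups; each group is optimized in turn (random local search
    here) while the remaining groups are held at the current values of
    a shared context vector."""
    x = rng.normal(size=dim)                  # shared context vector
    for _ in range(iters):
        for g in groups:                      # one subpopulation per group
            trial = x.copy()
            trial[g] += rng.normal(0.0, 0.2, size=len(g))
            if f(trial) < f(x):               # keep improving moves only
                x = trial
    return x
```

In SGCC/DGCC the grouping itself would be adaptive (probabilistic membership or interdependence-driven, respectively); here the groups are fixed by hand purely for illustration.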
Competitive coevolution uses the following types of interactions of subpopulations: interaction according to the "host-parasite" scheme; interaction of subpopulations with different search areas; and interaction of subpopulations that differ in search strategies (algorithms or algorithm settings). The latter type of coevolution is used to adapt the parameter settings of search algorithms that ensure the dominance of algorithms with the best settings. In particular, [19] considers a co-evolutionary "cultural" particle swarm algorithm that implements the concept of improving population algorithms based on taking into account the experience gained during solving the problem. Examples of using co-evolutionary particle swarm algorithms for solving optimal design problems with constraints and minimax problems are considered in [20,21]. In addition, competitive evolution can be used as a tool for effective dynamic distribution of computing resources between subpopulations [22,23] in the process of solving the problem.
In [24][25][26], co-evolutionary technologies for solving multi-criteria problems are developed using cooperative, competitive and combined coevolution schemes. It is shown that the combined coevolution schemes are preferable, since they make it possible to solve quite complex multi-criteria optimization problems while providing a representative approximation of the Pareto set and adaptive configuration of the algorithm for a specific task. A co-evolutionary algorithm for solving the multi-criteria optimization problem under uncertainty is considered in [27]. However, the assumption of the probabilistic nature of the uncertainty limits the applicability of this algorithm.
Thus, co-evolutionary optimization technologies make it possible, in general, to solve the problem of finding a set of globally optimal solutions under multimodality and multicriteriality quite effectively.
At the same time, the complexity of the MOU problem lies in the fact that solving it fundamentally requires taking into account the presence and the conflicting interaction of two types of uncertain factors: the uncertainty of the goal (interpreted as multi-criteriality) and the uncertainty of the environment. It is assumed that the only available information about the uncertainty is that it belongs to a certain region; no statistical characteristics are known. Therefore, extending co-evolutionary technology to MOU problems requires the development of new models of coevolution that take into account the conflict nature of the problem being solved and, as a result, the conflict nature of the interaction of subpopulations.
The purpose of this article is to develop hierarchical game models of coevolution that take into account the conflict nature of the MOU problem, as well as the structure of the optimality principles used to find a set of optimal solutions.

A Problem Statement
Consider the problem of multi-criteria optimization under uncertainty (MOU) in the form (1). In problem (1), U ⊂ E^r is the set of feasible solutions, u ∈ U; Z ⊂ E^k is the set of possible values of the uncertain factor, z ∈ Z; J(u, z) ∈ E^m is the vector efficiency indicator defined on the Cartesian product U × Z; and Ω ⊂ E^m is a convex dominance cone that defines a binary strict preference relation on the set of achievable vector estimates. It is necessary to determine the set of optimal solutions to problem (1) under the uncertainty Z. It should be noted that the guaranteeing properties of the optimal solutions depend on the optimality principle used to solve problem (1).
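For the simplest dominance cone, the nonnegative orthant Ω = E^m_+, the strict preference relation on vector estimates reduces to componentwise Pareto dominance, and the set of optimal estimates of a finite sample is its nondominated subset. A minimal sketch of this check (the function names are illustrative, not the article's notation):

```python
import numpy as np

def omega_dominates(a, b):
    """True if vector estimate `a` strictly Omega-dominates `b` for
    minimization with the orthant cone: b - a lies in E^m_+ \\ {0}."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(b - a >= 0) and np.any(b - a > 0))

def omega_nondominated(points):
    """Indices of the Omega-nondominated points in a finite set of
    vector estimates (a discrete approximation of the optimal set)."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        if not any(omega_dominates(q, p) for j, q in enumerate(pts) if j != i):
            keep.append(i)
    return keep
```

For a general convex cone Ω the membership test `b - a in Omega \ {0}` would replace the componentwise comparison; the orthant case is shown only because it needs no cone description.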
The most well-known optimality principles used to solve problem (1) are the principles of vector minimax, vector minimax risk, and their generalizations for models of binary relations in the form of convex dominance cones [2,3].

The Ω-Minimax Principle
Definition 1 ([2]). A vector estimate V_Ω(G) ∈ E^m is called the point of extreme pessimism with respect to the dominance cone Ω (extreme Ω-pessimism) on a set G if it has the following properties:
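For a finite set G and the orthant cone Ω = E^m_+, the point of extreme Ω-pessimism is simply the componentwise maximum of G. On discrete grids, an Ω-minimax solution can then be read as one whose extreme-pessimism point over Z is Pareto-minimal among all u. The following sketch assumes exactly this orthant-cone reading (the function names are illustrative):

```python
import numpy as np

def extreme_pessimism(G):
    """Extreme Omega-pessimism point of a finite set G of vector
    estimates for the orthant cone: the componentwise maximum."""
    return np.max(np.asarray(G, float), axis=0)

def omega_minimax(J, U, Z):
    """Discrete Omega-minimax sketch: for each u compute the
    extreme-pessimism point of J(u, .) over Z, then keep the u's whose
    pessimism points are Pareto-minimal (orthant cone, minimization)."""
    worst = {u: extreme_pessimism([J(u, z) for z in Z]) for u in U}

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    return [u for u in U
            if not any(dominates(worst[v], worst[u]) for v in U if v != u)]
```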

The Ω-Minimax Risk Principle
Definition 3 ([2]). The vector estimate P_Ω(G) ∈ E^m is called the "ideal" point (the "utopia" point) with respect to the dominance cone Ω (Ω-ideal point) on the set G if it has the following properties: for any P̂ ≠ P_Ω(G) such that G ⊂ P̂ − Ω, there is an inclusion. Definition 4 ([3]). A vector function R(u, z) of the form (7), defined on U × Z, is called the vector risk function, and the value R(u, z) for given {u, z} is called the vector risk when choosing the alternative u ∈ U under the realization of the uncertainty z ∈ Z.
We formulate an auxiliary MOU problem (8) with the vector criterion R(u, z) and the cone Ω, where U and Z have the same meaning as in problem (1), R(u, z) is a vector risk function of the form (7), and Ω ⊂ E^m is a convex dominance cone that defines a binary relation of strict preference on the set of achievable vector estimates R(U, Z) = ∪_{u∈U, z∈Z} R(u, z). Definition 5 ([3]). The Ω-minimax solution u* ∈ U of the auxiliary MOU problem (8) is called the solution that guarantees the Ω-minimax risk (R_Ω-minimax) in the MOU problem of the form (1).
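On discrete grids and for the orthant cone, the vector risk function can be read as a componentwise Savage-type regret, R(u, z) = J(u, z) − min_{u′∈U} J(u′, z), where the minimum is the ideal point of {J(u′, z) : u′ ∈ U} for the fixed z. This regret form is an assumption consistent with the Savage principle mentioned in the introduction, not a quotation of formula (7):

```python
import numpy as np

def vector_risk(J, U, z):
    """Componentwise Savage-type vector regret at fixed z (assumed
    reading of the vector risk function for the orthant cone):
    R(u, z) = J(u, z) - ideal point of {J(u', z) : u' in U}."""
    ideal = np.min([np.asarray(J(u, z), float) for u in U], axis=0)
    return {u: np.asarray(J(u, z), float) - ideal for u in U}
```

The R_Ω-minimax solution of Definition 5 would then be obtained by applying the Ω-minimax construction to these regret vectors instead of J itself.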

Hierarchical Population Game Model for Finding a Set of Optimal Solutions to the MOU Problem
Based on the statement of problem (1), we form a hierarchical population game with the right of the first move (9). It is assumed that two players take part in the game (9). For the hierarchical population game (9), the well-known multi-stage mechanism for forming an equilibrium solution can be implemented on the basis of hierarchical coevolution algorithms. In this case, the equilibrium solution can be interpreted as the Ω-minimax or the R_Ω-minimax of the MOU problem (1), depending on the type of the players' objective functions.

Algorithm of Hierarchical Coevolution Search for the Set of Ω-Minimax Solutions to the MOU Problem
The proposed algorithm includes the following main steps.
Step 1. In the hierarchical game (9), the first move is made by the coordinating center: it tells the lower-level players its population strategy Ũ ⊂ U.
Step 2. With a fixed population strategy Ũ ⊂ U, the lower-level player solves problem (11).
Step 3. The coordinating center evaluates the effectiveness of the population strategy Ũ ⊂ U by calculating the value of the function F_0(Ũ). To do this, a function of the form (12) is calculated for each ũ_i ∈ Ũ, where ψ is a free parameter that determines the selection rules in the evolutionary algorithm, and b_i is the number of points ũ_j ∈ Ũ, j ≠ i, for which the condition is met. After that, the function F_0(Ũ) is calculated in the form (16).
Step 4. The coordinating center solves problem (17). The optimal solution Ũ_max to problem (17) is called the Ω-guaranteeing population strategy of the coordinating center. (3) Ũ_max is the optimal solution of problem (17), where the objective function F_0(Ũ) is calculated in accordance with rules (14)-(16).
Then the population strategy Ũ_max ⊂ U_Ω, where U_Ω is the set of Ω-minimax solutions of the MOU problem of the form (1).
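Since formulas (11)-(17) are not reproduced above, Steps 1-4 can only be sketched structurally: the lower level supplies a worst-case (extreme-pessimism) estimate for each ũ_i on the current Z-population, the center counts for each ũ_i the number b_i of population members whose worst-case estimates dominate it, and selection with the free parameter ψ forms the next upper-level population. All concrete rules below (the toy indicator, the fitness formula, the mutation step) are illustrative assumptions, not the article's formulas:

```python
import numpy as np

rng = np.random.default_rng(0)

def J(u, z):
    """Toy two-criteria efficiency indicator (illustrative only)."""
    return np.array([(u - z) ** 2, abs(u) + 0.1 * z * z])

def worst_case(u, Z_pop):
    """Lower level: extreme-pessimism point of J(u, .) on the current
    Z-population (componentwise max, orthant cone)."""
    return np.max([J(u, z) for z in Z_pop], axis=0)

def step(U_pop, Z_pop, psi=1.0):
    """One coordination round: score each u by the number b_i of members
    whose worst-case estimates dominate it, keep the least-dominated
    half (selection with parameter psi), and refill by mutation."""
    W = [worst_case(u, Z_pop) for u in U_pop]

    def dom(a, b):
        return np.all(a <= b) and np.any(a < b)

    b = [sum(dom(W[j], W[i]) for j in range(len(U_pop)) if j != i)
         for i in range(len(U_pop))]
    fitness = [1.0 / (psi + bi) for bi in b]          # illustrative rule
    order = np.argsort(fitness)[::-1]                 # best first
    survivors = [U_pop[i] for i in order[: len(U_pop) // 2]]
    children = [u + rng.normal(0.0, 0.1) for u in survivors]
    return survivors + children
```

Iterating `step` drives the upper-level population toward the nondominated worst-case estimates, which is the intended role of the Ω-guaranteeing strategy in Steps 3-4.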

Algorithm of Hierarchical Coevolutionary Search for the Set of R Ω -Optimal Solutions to the MOU Problem
Step 1. Formulate an auxiliary MOU problem of the form (8).
Step 2. Form a hierarchical population game (10). The first move is made by the coordinating center C_0: it tells the lower-level player its population strategy Ũ ⊂ U.
Step 3. With a fixed population strategy Ũ ⊂ U, the lower-level player solves problem (10). In (18), the vector criterion of player C is given in the form (19). The optimal solution to problem (18) is a population strategy Z̃^i_Λ ⊂ Z that maximizes the components of the vector criterion (19).
Step 4. The coordinating center evaluates the effectiveness of the population strategy Ũ ⊂ U by calculating the value of the function F_0(Ũ) in accordance with rules (14)-(16).
Step 5. The coordinating center solves problem (17). An optimal solution Ũ_max to problem (17) is called the R_Ω-guaranteeing population strategy of the coordinating center. (3) Ũ_max is the optimal solution to problem (17), where the objective function F_0(Ũ) is calculated in accordance with rules (14)-(16).
Then the population strategy Ũ_max ⊂ U_RΩ, where U_RΩ is the set of R_Ω-minimax solutions to the MOU problem of the form (1).
The peculiarity of Algorithm 2 is that the vector risk function R(u, z) is used to solve problem (18). Calculating the function R(u, z) is a separate problem, for whose solution the following coevolution algorithm is proposed.

Coevolution Algorithm for Calculating the Vector Risk Function
Calculations are performed at the level of player C with a fixed population strategy Ũ ⊂ U. For a fixed ũ_i ∈ Ũ, the problem of calculating the vector risk function R(ũ_i, z) is solved.
The algorithm includes the following basic steps.
Step 3. For each z̃_j ∈ Z̃^i, problem (21) of constructing an ideal point is solved. The optimal solution to problem (21) is a population strategy Û.

Discussion
The formalization of a control system design problem in the form of a MOU problem is relevant because it reflects the conflicting nature of the design task, which manifests itself in the need to take into account several types of uncertain factors: the uncertainty of the goal and the uncertainty of the environment. The application of the principles of vector Ω-minimax and vector Ω-minimax risk allows us to find solutions to the MOU problem that have guaranteeing properties.
The developed hierarchical population game models of co-evolutionary algorithms represent a new type of mathematical model of co-evolutionary algorithms, one that takes into account the conflicting nature of the interaction of populations as well as the structure of the optimality principles of the MOU problem.
Hierarchical population game models of co-evolutionary algorithms are universal in character and make it possible to implement practically all the optimality principles used for solving MOU problems on a single methodological basis. The resulting hierarchical algorithmic structures are an effective tool for building parallel architectures of co-evolutionary algorithms for solving high-dimensional MOU problems. The main functional blocks of the developed co-evolutionary algorithms are implemented on the basis of libraries of evolutionary algorithms for multi-criteria optimization under conflict and uncertainty [28,29].
In the near future, an article will be published in which the applied problem of the multi-criteria synthesis of the parameters of an unmanned aerial vehicle neuro-stabilization system under extreme environmental changes is considered. In that article, the problem of training an artificial neural network is formalized as an MOU problem, for which a parallel version of the hierarchical co-evolutionary algorithm for finding the equilibrium of a hierarchical population game with the right of the first move is used.

Conclusions
The MOU problem statement was formalized in the form of a hierarchical population game with the right of the first move. The concept of a population strategy was defined, and methods for evaluating the effectiveness of population strategies using functions of the form (12) and (16) were proposed. The definitions of Ω-equilibrium and R-equilibrium of a hierarchical population game were formulated.
A hierarchical co-evolutionary algorithm for solving a hierarchical population game with the right of the first move on the basis of Ω-equilibrium was developed.
The necessary conditions were formulated under which the Ω-equilibrium of a hierarchical population game with the right of the first move is a discrete approximation of the set of Ω-minimax solutions of the original MOU problem.
A hierarchical co-evolutionary algorithm for solving a hierarchical population game with the right of the first move based on R-equilibrium and a co-evolutionary algorithm for calculating the vector risk function was developed.
The necessary conditions were formulated under which the R-equilibrium of a hierarchical population game with the right of the first move is a discrete approximation of the set of R Ω -minimax solutions of the original MOU problem.