Using Shapley Values and Genetic Algorithms to Solve Multiobjective Optimization Problems

Abstract: This paper proposes a new methodology for solving multiobjective optimization problems by invoking genetic algorithms and the concept of the Shapley value from cooperative game theory. It is well known that the Pareto-optimal solutions of a multiobjective optimization problem can be obtained by solving the corresponding weighting problems, which are formulated by assigning suitable weights to the objective functions. In this paper, we formulate a cooperative game from the original multiobjective optimization problem by regarding the objective functions as the corresponding players. The payoff function of this formulated cooperative game involves the symmetric concept, which means that the payoff function depends only on the number of players in a coalition and is independent of the roles of the players in that coalition. In this case, we can reasonably set up the weights as the corresponding Shapley values of this formulated cooperative game. Under these settings, we obtain the so-called Shapley–Pareto-optimal solutions. In order to choose the best Shapley–Pareto-optimal solution, we use genetic algorithms with a reasonable fitness function.


Introduction
The purpose of an optimization problem is to search for a minimum or a maximum of a real-valued function, which is also called an objective function. When the objective function is a vector-valued function instead of a real-valued function, the optimization problem turns into the so-called multiobjective optimization problem. The variables of the objective functions are also called decision variables, which are usually assumed to be nonnegative; that is, the values of the variables are assumed to be nonnegative real numbers. When the decision variables are assumed to lie in a predefined search space, we have a particular kind of optimization problem called the constrained optimization problem.
As we mentioned above, multiobjective optimization problems consider vector-valued objective functions, which can also be regarded as considering several conflicting objectives. The solution concepts of multiobjective optimization problems are usually based on partial orderings, in which the Pareto-optimal solution is usually taken into account. In this case, the set of all Pareto-optimal solutions is frequently large in the sense that it is usually an uncountable set. Solving multiobjective optimization problems usually requires the decision-makers to provide some preference relations among the set of all Pareto-optimal solutions. When several decision-makers participate, the aspects of negotiation and consensus striving among the decision-makers should be considered.
The pioneering work by von Neumann and Morgenstern [17] initiated game theory in economics, which mainly deals with the behavior of players whose decisions affect each other. Cooperative games are the class of games in which coalitions are considered. Cooperation means that players have complete freedom of communication and comprehensive information to form the different coalitions. Nash [18] studied a general two-person cooperative game. On the other hand, the monotonicity of cooperative games means that, when a game is changed such that the contributions of some players to all coalitions increase or stay the same, the allocations of those players should not decrease. Young [19] studied the monotonic solutions of cooperative games. The well-known Shapley value, which is the unique symmetric solution, is also monotonic. We may also refer to the monographs of Barron [20], Branzei et al. [21], Curiel [22], González-Díaz et al. [23], and Owen [24] for more details on the topic of game theory.
The fusion of multiobjective optimization problems and game theory has been studied by many researchers. These approaches formulated the original multiobjective optimization problems as a noncooperative or cooperative game in which the objectives were treated as the corresponding players, and the solutions of this formulated game were taken to be the solutions of the original multiobjective optimization problems. In this paper, we propose a different approach by transforming the original multiobjective optimization problem into a weighting problem in which the weights are taken to be the Shapley value of the formulated cooperative game, which, to our knowledge, is the first such attempt for solving multiobjective optimization problems.
Jing et al. [25] considered a bi-objective optimization problem in which the multi-benefit allocation constraints were modeled and inspired by cooperative game theory. The constraint approach was used to convert the bi-objective optimization problem into a single-objective optimization problem. Lokeshgupta and Sivasubramani [26] also considered a bi-objective optimization problem in which the two objective functions were treated as two players by incorporating the cooperative game. In order to generate the best compromise solution of the proposed bi-objective problem, the form of the so-called super-criterion was considered, and mixed-integer nonlinear programming was applied to maximize the super-criterion.
Lee [27] considered a bi-objective optimization problem in which the two objectives were treated as two players and suggested a noncooperative game corresponding to this bi-objective optimization problem. The Nash equilibrium was obtained from this two-player noncooperative game without using heuristic algorithms. Li et al. [28] considered a three-objective optimization problem that was formulated as a three-player game. The best solution was obtained by using the genetic algorithm and Tabu search among the Nash equilibrium solutions. Chai et al. [29] considered a four-objective optimization problem, and Cao et al. [30] considered a bi-objective optimization problem; both were solved by using the genetic algorithm. Since the selection process in the genetic algorithm is usually comparative and competitive, they adopted noncooperative game theory to design the selection process and directly obtained the Pareto-optimal solutions without converting the four-objective optimization problem into a single-objective optimization problem.
Yu et al. [31] and Zhang et al. [32] considered a three-objective optimization problem in which the three objectives were treated as three players and the Nash equilibrium among these three players was taken into account. Zhang et al. [32] considered the subgame perfect Nash equilibrium to be the solution of the model, and Yu et al. [31] incorporated the genetic algorithm to obtain the solutions without converting the three-objective optimization problem into a single-objective optimization problem. Meng and Xie [33] considered a bi-objective optimization problem in which a competitive-cooperative game method was proposed to obtain the optimal preference solutions.
The approach proposed in this paper is completely different from the above approaches: the original multiobjective optimization problem is converted into a single-objective optimization problem using the weighting approach, where the corresponding weights are inspired by the Shapley value of cooperative games. It is well known that the Pareto-optimal solutions of multiobjective optimization problems can be obtained by solving the corresponding weighting problems that are formulated by assigning suitable weights to the objective functions. There is no standard way to determine the weights for establishing the weighting problems. In this paper, we formulate a cooperative game from a multiobjective optimization problem in which the ith objective is treated as player i. The payoff function of this formulated cooperative game involves the symmetric concept, which means that the payoff function depends only on the number of players in a coalition and is independent of the roles of the players in this coalition. According to the Shapley values of this formulated cooperative game, we can reasonably set up the weights for the corresponding weighting problems. Under these settings, we can obtain the so-called Shapley-Pareto-optimal solutions. Usually, the family of all Shapley-Pareto-optimal solutions is large. In order to choose the best Shapley-Pareto-optimal solution, we use genetic algorithms with a reasonable fitness function.
Using genetic algorithms to solve multiobjective optimization problems has been studied for a long time; we refer the reader to the monographs of Deb [34], Osyczka [35], Sakawa [36], and Tan et al. [37]. However, this paper does not intend to invoke genetic algorithms to directly solve the multiobjective optimization problems. Instead, we use genetic algorithms to obtain the best Shapley-Pareto-optimal solution from the family of all Shapley-Pareto-optimal solutions, where the so-called best Shapley-Pareto-optimal solution is based on a reasonable fitness function.
In Section 2, the concept of Shapley values and the basic properties of multiobjective optimization problems are presented. In Section 3, a multiobjective optimization is formulated as a cooperative game by considering objective functions as the corresponding players in which the payoff function involving the symmetric concept is taken into account. In Section 4, the Shapley values of the formulated cooperative game are taken to define the so-called Shapley-Pareto-optimal solution. In Section 5, a genetic algorithm is designed to find the best Shapley-Pareto-optimal solution. A numerical example is also provided to demonstrate the usefulness of the proposed methodology.

Formulation of the Cooperative Game
Consider the following multiobjective optimization problem:

(MOP)   max f(x) = (f_1(x), f_2(x), · · · , f_n(x)) subject to x ∈ F,

where F is a feasible region of problem (MOP) and each f_i is a real-valued function defined on R^p for i = 1, · · · , n. A decision vector x* ∈ F is called a Pareto-optimal solution of the problem (MOP) when there does not exist another decision vector x ∈ F satisfying

f_i(x) ≥ f_i(x*) for all i = 1, · · · , n and f_j(x) > f_j(x*) for at least one index j.

The weighting method is usually used to obtain the Pareto-optimal solutions. The weighting problem for a multiobjective optimization problem assigns weights to each objective, where the weights represent the importance of the objective functions. Therefore, we can consider the following weighting problem:

(WP)   max Σ_{i=1}^{n} w_i · f_i(x) subject to x ∈ F,

where w_i ≥ 0 for all i = 1, · · · , n and Σ_{i=1}^{n} w_i = 1. Then, we have the following well-known results.

• If w_i > 0 for all i = 1, · · · , n, then the optimal solution of the (WP) is a Pareto-optimal solution of the (MOP);
• The unique optimal solution of the (WP) is a Pareto-optimal solution of the (MOP);
• Suppose that the problem (MOP) is convex. If x* is a Pareto-optimal solution, then there exist nonnegative weights w_i ≥ 0 for i = 1, · · · , n such that x* is an optimal solution of the (WP).
In order to obtain the Pareto-optimal solution, it suffices to solve the corresponding weighting problem. Therefore, we have many Pareto-optimal solutions according to different weighting problems.
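The bulleted results above can be illustrated with a small sketch. The feasible set, objective functions, and weights below are hypothetical choices made only for this example; a finite feasible region stands in for a continuous one so that the weighting problem (WP) can be solved by exhaustive search.

```python
# Toy illustration of the weighting method: maximizing the weighted sum
# of objectives over F picks out a Pareto-optimal point of the (MOP).
# F, f, and the weights are hypothetical values for the sketch.

# A small discrete feasible region F (stand-in for a continuous one).
F = [(0, 0), (1, 0), (0, 1), (2, 1), (1, 2)]

def f(x):
    """Two objective functions f1, f2 to be maximized simultaneously."""
    x1, x2 = x
    return (3 * x1 + x2, x1 + 2 * x2)

def weighted_sum(x, w):
    """Objective of the weighting problem (WP): sum_i w_i * f_i(x)."""
    return sum(wi * fi for wi, fi in zip(w, f(x)))

def solve_wp(w):
    """Solve the (WP) by exhaustive search over the finite feasible set."""
    return max(F, key=lambda x: weighted_sum(x, w))

def is_pareto_optimal(x_star):
    """No feasible x dominates x_star (>= in every objective, > in one)."""
    fs = f(x_star)
    for x in F:
        fx = f(x)
        if all(a >= b for a, b in zip(fx, fs)) and any(a > b for a, b in zip(fx, fs)):
            return False
    return True

w = (0.6, 0.4)          # strictly positive weights summing to 1
x_star = solve_wp(w)    # optimal solution of the (WP)
assert is_pareto_optimal(x_star)
```

Changing the weight vector w selects different Pareto-optimal points, which is exactly why a principled rule for choosing w (here, the Shapley value) is needed.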
The determination of the weights for establishing the weighting problem depends on the viewpoint of the decision-makers. This means that there is no usual way to set up the weighting problems. This paper used the Shapley value to determine the weights. The main reason is that the ith objective function f i can be regarded as the payoff of player i. In this case, we can formulate a cooperative game such that its Shapley values are taken to be the weights for creating the corresponding weighting problem.
Consider the (MOP) with n objective functions f_i for i = 1, · · · , n and feasible set F. We first maximize each objective function f_i on the feasible set F and let

z*_i = max_{x ∈ F} f_i(x) for i = 1, · · · , n.

We may call the vector z* = (z*_1, z*_2, · · · , z*_n) the ideal objective value of the (MOP). We associate the vector-valued objective function f with a cooperative game, where the ith objective value f_i is regarded as the payoff of player i. The payoff function v can be predetermined by the decision-makers. Since z*_i is the ideal payoff of player i, it follows that the payoff of player i must satisfy v({i}) ≤ z*_i.

Let N = {1, · · · , n} be the set of all players, and let S be a subset of N, which is regarded as a coalition. Under this coalition S = {i_1, i_2, · · · , i_s} with s = |S|, where |S| denotes the number of players of coalition S, the payoff of this coalition S will not be less than the total payoffs of the players in S. In other words, we must have

v(S) ≥ Σ_{m=1}^{s} v({i_m}).

On the other hand, the payoff of coalition S cannot be greater than the total ideal payoffs on S. More precisely, we have

v(S) ≤ Σ_{m=1}^{s} z*_{i_m}.   (1)

We define a payoff function on the family of all subsets of N (i.e., on all coalitions) via the symmetric concept. Given any coalition S with |S| ≥ 2, we define

v(S) = Σ_{m=1}^{s} v({i_m}) + b_s,

where b_s is an extra payoff that benefits from the coalition via the symmetric concept, which means that it depends only on the number of players in a coalition without taking into account the roles of the players in the coalition. For example, the extra benefit can be taken as

b_s = κ_s · Σ_{m=1}^{s} v({i_m}),

which is a discount of the original payoff Σ_{m=1}^{s} v({i_m}) by a discount factor κ_s, where the discount factor κ_s can be regarded as a symmetric constant that is independent of the players in the coalition S with |S| = s and depends only on the number s of players in the coalition S. In other words, the symmetric constant κ_s will be the same for all coalitions with |S| = s regardless of the players in the coalition S. In this paper, the symmetric constant κ_s is taken as κ_s = c_s / s.
Therefore, the payoff of coalition S is given by

v(S) = (1 + c_s / s) · Σ_{m=1}^{s} v({i_m}),   (2)

where c_s is a nonnegative constant that is independent of the players in the coalition S with |S| = s. This says that, under the coalition S, the extra payoff is obtained by multiplying the average of the individual payoffs by c_s.
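The characteristic function in (2) can be sketched directly; the individual payoffs v({i}) and the size-dependent constants c_s below are hypothetical values chosen only for the example.

```python
# Sketch of the symmetric characteristic function in (2):
# v(S) = (1 + c_s/s) * sum_{i in S} v({i}), where s = |S| and c_s depends
# only on the coalition size. The numbers below are hypothetical.

v_single = {1: 2.0, 2: 3.0, 3: 1.0}   # assumed individual payoffs v({i})
c = {1: 0.0, 2: 0.4, 3: 0.9}          # size-dependent constants, c_1 = 0

def v(S):
    """Payoff of coalition S under the symmetric construction (2)."""
    S = frozenset(S)
    if not S:
        return 0.0                     # v(empty set) = 0
    s = len(S)
    base = sum(v_single[i] for i in S)
    return (1 + c[s] / s) * base       # extra payoff b_s = (c_s/s) * base
```

For instance, v({1, 2}) = (1 + 0.4/2) · (2 + 3) = 6, which exceeds the sum of the individual payoffs, as required by superadditivity.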
Since the upper bound of v(S) is given in (1), the constant c_s should satisfy

(1 + c_s / s) · Σ_{m=1}^{s} v({i_m}) ≤ Σ_{m=1}^{s} z*_{i_m} for every coalition S with |S| = s,

which says that

c_s ≤ s · ( Σ_{m=1}^{s} z*_{i_m} − Σ_{m=1}^{s} v({i_m}) ) / Σ_{m=1}^{s} v({i_m}) for every coalition S with |S| = s.

Then, we have

0 ≤ c_s ≤ U_s, where U_s ≡ s · min_{S : |S| = s} ( Σ_{i ∈ S} z*_i − Σ_{i ∈ S} v({i}) ) / Σ_{i ∈ S} v({i}),   (3), (4)

for s = 2, · · · , n. For convenience, we also define c_1 = 0.

Shapley Values
Let N = {1, · · · , n} be the set of all players. Any nonempty subset S ⊆ N is called a coalition. A cooperative game is an ordered pair (N, v), where the characteristic function v is a function from the family of all subsets of N into R satisfying v(∅) = 0. The number v(S) can be regarded as the worth of coalition S in the game (N, v).
A carrier of the cooperative game (N, v) is a coalition T satisfying v(S) = v(S ∩ T) for any coalition S ⊆ N. This definition states that any player i ∉ T is a dummy player; that is to say, such a player contributes nothing to any coalition.
Let π be a permutation of N, i.e., a one-to-one function π : N → N. Given a cooperative game (N, v) and a permutation π, the game (N, πv) is defined by (πv)(S) = v(π^{-1}(S)) for each coalition S. A correspondence φ assigns to each cooperative game (N, v) a vector φ(v) = (φ_1(v), · · · , φ_n(v)), where the ith component φ_i(v) is interpreted as the payoff received by player i under an agreement. This correspondence φ is taken to satisfy the following Shapley axioms:

(S1) (symmetry) φ_{π(i)}(πv) = φ_i(v) for every permutation π of N and every i ∈ N;
(S2) (efficiency) Σ_{i ∈ T} φ_i(v) = v(T) for every carrier T of (N, v);
(S3) (additivity) φ_i(v + u) = φ_i(v) + φ_i(u) for any two games (N, v) and (N, u) and every i ∈ N.

The function φ from the family of all cooperative games into the n-dimensional Euclidean space R^n defines a vector φ(v), which is called the Shapley value of the cooperative game (N, v).
It is well known that there exists a unique function φ defined on the family of all cooperative games and satisfying Axioms (S1), (S2), and (S3). More precisely, for i ∈ N, we have

φ_i(v) = Σ_{S ⊆ N : i ∈ S} [ (s − 1)! (n − s)! / n! ] · ( v(S) − v(S \ {i}) ),   (5)

where s = |S|. Let (N, v) be the cooperative game formulated from the (MOP) with the characteristic function given in (2). In order to use the weighting method to solve the problem (MOP), the weights are determined by the vector w(v) = (w_1(v), · · · , w_n(v)), in which the ith component w_i(v) is interpreted as the fair payoff received by player i under an agreement. We see that the weights w depend on the cooperative game (N, v), where the weights w satisfy the agreement set by the Shapley axioms. The function w given by (N, v) → w(v) defines a vector w(v), which is the Shapley value of the cooperative game (N, v). Therefore, given the corresponding cooperative game (N, v), we can solve the (WP) by using the Shapley value w(v) = (w_1(v), · · · , w_n(v)) as the weights. More precisely, the weights are given by the following formula:

w_i(v) = Σ_{S ⊆ N : i ∈ S} [ (s − 1)! (n − s)! / n! ] · ( v(S) − v(S \ {i}) ).   (6)

From (2), we see that, for |S| ≥ 2,

v(S) − v(S \ {i}) = (1 + c_s / s) · v({i}) + ( c_s / s − c_{s−1} / (s − 1) ) · Σ_{m ∈ S \ {i}} v({m}),   (7)

where s = |S|, while v({i}) − v(∅) = v({i}) for S = {i}.
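The formula (5) can be evaluated directly by enumerating all coalitions containing player i. The sketch below does this for an arbitrary characteristic function and checks it on a hypothetical three-player game built with the symmetric construction (2); efficiency (the Shapley values sum to v(N)) serves as a sanity check.

```python
from itertools import combinations
from math import factorial

# Direct evaluation of the Shapley value formula (5):
# phi_i(v) = sum over S containing i of (s-1)!(n-s)!/n! * (v(S) - v(S\{i})).

def shapley(n_players, v):
    """Return {i: phi_i(v)} for players 1..n_players; v maps frozensets to R."""
    N = range(1, n_players + 1)
    n = n_players
    phi = {}
    for i in N:
        total = 0.0
        for s in range(1, n + 1):
            for S in combinations(N, s):
                if i not in S:
                    continue
                S = frozenset(S)
                weight = factorial(s - 1) * factorial(n - s) / factorial(n)
                total += weight * (v(S) - v(S - {i}))
        phi[i] = total
    return phi

# Hypothetical three-player game using the symmetric construction (2).
v1 = {1: 2.0, 2: 3.0, 3: 1.0}   # assumed individual payoffs v({i})
cs = {1: 0.0, 2: 0.4, 3: 0.9}   # assumed size constants c_s

def v(S):
    if not S:
        return 0.0
    s = len(S)
    return (1 + cs[s] / s) * sum(v1[i] for i in S)

phi = shapley(3, v)
# Efficiency (Axiom S2): the Shapley values sum to v(N).
assert abs(sum(phi.values()) - v(frozenset({1, 2, 3}))) < 1e-9
```

Note that the player with the largest individual payoff receives the largest Shapley value here, which is what makes these values a reasonable notion of objective importance.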
For the subsequent discussion, we also assume that the following conditions are satisfied:

• For all i = 1, · · · , n, we assume z*_i > 0 and 0 ≤ v({i}) < z*_i. In particular, we can define v({i}) = k_i · z*_i for some constant 0 < k_i < 1. Under this assumption, from (3) and (4), we see that U_s > 0 for all s = 2, · · · , n, which also says that v(S) ≥ 0 for all S ⊆ N;
• Recall that c_1 = 0 for convenience. For s = 2, · · · , n, we assume

c_s / s ≥ c_{s−1} / (s − 1).   (8)

Under this assumption, it is clear that w_i(v) ≥ 0 for all i = 1, · · · , n by referring to (6) and (7).

Now, we consider the normalized weights

w̄_i(v) = w_i(v) / Σ_{j=1}^{n} w_j(v) for i = 1, · · · , n,   (9)

which says that 0 < w̄_i(v) < 1 for all i = 1, · · · , n. The weighting problem for a multiobjective programming problem assigns weights to each objective, where the weights represent the importance of the objective functions. Since we treat the multiobjective programming problem as a cooperative game, the payoffs of the players correspond to the importance of the objective functions. In other words, the reasonable importance can be taken as the fair payoffs of the players. Here, we take the Shapley value w(v) to represent the importance.
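The normalization in (9) is a one-liner; the raw Shapley values below are hypothetical numbers used only to show the shape of the computation.

```python
# Normalizing Shapley values into weights as in (9):
# w_bar_i = w_i / sum_j w_j, so the weights lie in (0, 1) and sum to 1.

def normalize(w):
    """w: dict {player: raw Shapley value, all positive}."""
    total = sum(w.values())
    return {i: wi / total for i, wi in w.items()}

w = {1: 2.6, 2: 3.7, 3: 1.5}    # hypothetical raw Shapley values
w_bar = normalize(w)
assert abs(sum(w_bar.values()) - 1.0) < 1e-12
```

The normalized vector w̄ can then be plugged directly into the weighting problem (WP) as the weight vector.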
Since the optimal solutions of the (WP) are Pareto-optimal solutions of the problem (MOP), when the weights are taken to be the normalized Shapley values given in (9), the optimal solution of this weighting problem is called a Shapley-Pareto-optimal solution.
From (2), we see that the cooperative game (N, v) depends on the nonnegative constants c_i for i = 1, · · · , n; that is, the payoff function v depends on the vector c = (c_1, · · · , c_n). From (6) and (7), we also see that the weights and normalized weights depend on c. Therefore, we write w_i(v) ≡ w_i(c) and w̄_i(v) ≡ w̄_i(c) for i = 1, · · · , n. The purpose is to obtain the Shapley-Pareto-optimal solution by solving the following weighting problem:

max Σ_{i=1}^{n} w̄_i(c) · f_i(x) subject to x ∈ F.

The Shapley-Pareto-optimal solution is denoted by x*(c), which depends on the vector c. Let X be the set of all Shapley-Pareto-optimal solutions, i.e.,

X = { x*(c) : c_1 = 0, 0 ≤ c_s ≤ U_s and c_s / s ≥ c_{s−1} / (s − 1) for s = 2, · · · , n },

where U_s refers to (3). In the sequel, we use the genetic algorithm to obtain the best Shapley-Pareto-optimal solution in X by evolving the vectors c.

Designing the Genetic Algorithm
The main purpose of this paper is to find the best Shapley-Pareto-optimal solution from X by maximizing the following objective function:

η(c) = Σ_{i=1}^{n} w̄_i(c) · f_i( x*(c) ).   (10)

The Shapley-Pareto-optimal solution x*(c) depends on the vector c. We say that x*(c*) is the best Shapley-Pareto-optimal solution when η(c*) ≥ η(c) for all feasible vectors c. Obtaining the best Shapley-Pareto-optimal solution is a hard problem. Therefore, we use the genetic algorithm to obtain an approximated best Shapley-Pareto-optimal solution. The chromosome is taken to be the vector c of real codes in R^n. The purpose is to find the best chromosome according to the fitness function given in (10).
In the sequel, we present a recursive procedure to generate the nonnegative constants c_s for s = 2, · · · , n such that the inequalities (4) and (8) are satisfied; that is, the nonnegative constants c_s must satisfy

c_1 = 0, 0 ≤ c_s ≤ U_s and c_s / s ≥ c_{s−1} / (s − 1)   (11)

for s = 2, · · · , n. In other words, the chromosome c has the form c = (0, c_2, · · · , c_n), where c_s satisfies the inequalities (11) for s = 2, · · · , n.

Proposition 1.
Suppose that c_n is initially generated as a random number in the closed interval [0, U_n]; that is, c_n follows a uniform distribution over [0, U_n], where U_n is given in (3). Let

V_s = min{ U_s, (s / (s + 1)) · c_{s+1} }

for s = 2, · · · , n − 1. For s = n − 1, n − 2, · · · , 2, we generate each c_s as a random number in the closed interval [0, V_s]; that is, c_s follows a uniform distribution over [0, V_s] for s = 2, · · · , n − 1. Then, we have

0 ≤ c_s ≤ U_s and c_s / s ≥ c_{s−1} / (s − 1) for s = 2, · · · , n.
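The recursive sampling of Proposition 1 can be sketched as follows; the upper bounds U_s from (3) are taken as given hypothetical inputs.

```python
import random

# Sketch of the recursive sampling in Proposition 1: generate constants
# from s = n down to s = 2 so that 0 <= c_s <= U_s and
# c_s/s >= c_{s-1}/(s-1) hold automatically. U is a hypothetical input.

def generate_c(U, rng=random):
    """U: dict {s: U_s} for s = 2..n. Returns c as a dict with c[1] = 0."""
    n = max(U)
    c = {1: 0.0, n: rng.uniform(0, U[n])}
    for s in range(n - 1, 1, -1):
        # V_s = min{U_s, (s/(s+1)) c_{s+1}} keeps c_s/s <= c_{s+1}/(s+1).
        V_s = min(U[s], (s / (s + 1)) * c[s + 1])
        c[s] = rng.uniform(0, V_s)
    return c

def feasible(c, U):
    """Check the inequalities (11)."""
    n = max(U)
    ok_bounds = all(0 <= c[s] <= U[s] for s in range(2, n + 1))
    ok_ratio = all(c[s] / s >= c[s - 1] / (s - 1) for s in range(2, n + 1))
    return c[1] == 0.0 and ok_bounds and ok_ratio

U = {2: 1.0, 3: 2.5, 4: 3.0}   # hypothetical bounds U_s from (3)
c = generate_c(U)
assert feasible(c, U)
```

Sampling top-down (from c_n to c_2) is what makes feasibility automatic: each new component only needs to respect a bound computed from the component already drawn.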

Remark 1.
According to Proposition 1, the nonnegative constants c_s can be randomly and sequentially generated as follows for s = 2, · · · , n. We first generate c_n as a random number in the closed interval [0, U_n]. Then, we generate c_{n−1} as a random number in the closed interval

[0, V_{n−1}], where V_{n−1} = min{ U_{n−1}, ((n − 1)/n) · c_n }.

In this case, we have

0 ≤ c_{n−1} ≤ U_{n−1} and c_n / n ≥ c_{n−1} / (n − 1).

Similarly, we generate c_{n−2} as a random number in the closed interval

[0, V_{n−2}], where V_{n−2} = min{ U_{n−2}, ((n − 2)/(n − 1)) · c_{n−1} }.

In this case, we have

0 ≤ c_{n−2} ≤ U_{n−2} and c_{n−1} / (n − 1) ≥ c_{n−2} / (n − 2).
Recursively, we can generate the nonnegative constants c_s satisfying

0 ≤ c_s ≤ U_s and c_s / s ≥ c_{s−1} / (s − 1) for s = 2, · · · , n.

The next result is used for the crossover operation.

Proposition 2. Let c = (0, c_2, · · · , c_n) and ĉ = (0, ĉ_2, · · · , ĉ_n) be two vectors satisfying the inequalities (11), and let λ ∈ (0, 1). Then, the convex combination

λ · c + (1 − λ) · ĉ   (13)

also satisfies the inequalities (11).
Proof. Let c and ĉ satisfy the inequalities (11), and let λ ∈ (0, 1). It is clear that 0 ≤ λ c_s + (1 − λ) ĉ_s ≤ U_s for s = 2, · · · , n. Now, for s = 2, · · · , n, we have

( λ c_s + (1 − λ) ĉ_s ) / s = λ (c_s / s) + (1 − λ)(ĉ_s / s) ≥ λ (c_{s−1} / (s − 1)) + (1 − λ)(ĉ_{s−1} / (s − 1)) = ( λ c_{s−1} + (1 − λ) ĉ_{s−1} ) / (s − 1).

This completes the proof.
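The crossover argument can be checked numerically: both the bound constraints and the ratio constraints in (11) are linear in c, so a convex combination of two feasible chromosomes is feasible. The bounds U_s and the two parent chromosomes below are hypothetical.

```python
# Crossover of two chromosomes satisfying the constraints in (11):
# a convex combination preserves feasibility because the constraints
# are linear in c. U and the parents are hypothetical example values.

U = {2: 1.0, 3: 2.5}

def crossover(c1, c2, lam):
    """Convex combination lam*c1 + (1-lam)*c2, componentwise."""
    return {s: lam * c1[s] + (1 - lam) * c2[s] for s in c1}

def feasible(c):
    """Check the inequalities (11) against the bounds U."""
    n = max(c)
    return (c[1] == 0.0
            and all(0 <= c[s] <= U[s] for s in range(2, n + 1))
            and all(c[s] / s >= c[s - 1] / (s - 1) for s in range(2, n + 1)))

c_a = {1: 0.0, 2: 0.5, 3: 1.2}   # feasible parent
c_b = {1: 0.0, 2: 0.2, 3: 0.9}   # feasible parent
child = crossover(c_a, c_b, 0.3)
assert feasible(c_a) and feasible(c_b) and feasible(child)
```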
Given a vector c̃ = (0, c̃_2, · · · , c̃_n) satisfying the inequalities (11), the mutation c of c̃ is obtained as follows:

• Generate a random Gaussian number with mean zero and standard deviation σ_n, and assign

c_n = c̃_n + N(0, σ_n²) = c̃_n + σ_n · N(0, 1).

In this paper, the standard deviation σ_n is taken to be of the following form:

σ_n = β · η(c̃) + z_n,

where β is a constant of proportionality to scale η(c̃) and z_n represents an offset. The new mutated individual c_n is defined by

c_n ← min{ max{ c_n, 0 }, U_n },

where U_n is given in (3). Then, c_n ∈ [0, U_n];

• Let

V_{n−1} = min{ U_{n−1}, ((n − 1)/n) · c_n }.

Generate a random Gaussian number with mean zero and standard deviation σ_{n−1}, and assign

c_{n−1} = c̃_{n−1} + N(0, σ_{n−1}²) = c̃_{n−1} + σ_{n−1} · N(0, 1),

where the standard deviation σ_{n−1} is taken to be of the following form:

σ_{n−1} = β · η(c̃) + z_{n−1}.

The new mutated individual c_{n−1} is defined by

c_{n−1} ← min{ max{ c_{n−1}, 0 }, V_{n−1} }.

Then, c_{n−1} ∈ [0, V_{n−1}]. Similarly, let V_{n−2} = min{ U_{n−2}, ((n − 2)/(n − 1)) · c_{n−1} }. Generate a random Gaussian number with mean zero and standard deviation σ_{n−2}, and assign

c_{n−2} = c̃_{n−2} + N(0, σ_{n−2}²) = c̃_{n−2} + σ_{n−2} · N(0, 1),

where σ_{n−2} = β · η(c̃) + z_{n−2}. The new mutated individual c_{n−2} is defined by

c_{n−2} ← min{ max{ c_{n−2}, 0 }, V_{n−2} };

• Recursively, for s = n − 3, n − 4, · · · , 2, we can define

V_s = min{ U_s, (s/(s + 1)) · c_{s+1} }.

Generate a random Gaussian number with mean zero and standard deviation σ_s, and assign

c_s = c̃_s + N(0, σ_s²) = c̃_s + σ_s · N(0, 1),

where the standard deviation σ_s is taken to be of the following form

σ_s = β · η(c̃) + z_s.

The new mutated individual c_s is defined by

c_s ← min{ max{ c_s, 0 }, V_s }.   (14)

Then, c_s ∈ [0, V_s].

Proposition 3. Given a vector c̃ = (0, c̃_2, · · · , c̃_n) satisfying the inequalities (11), consider the mutation c of c̃ obtained in the way of (14). Then, c satisfies the inequalities (11).
Proof. Since c_n ∈ [0, U_n] and c_s ∈ [0, V_s] for s = 2, · · · , n − 1, the argument in the proof of Proposition 1 is still valid. This completes the proof.

Proposition 3 says that, if the chromosome c̃ = (0, c̃_2, · · · , c̃_n) satisfies the inequalities (11), then its mutation c = (0, c_2, · · · , c_n) satisfies the inequalities (11), which keeps the feasibility, where c_s is given by

c_s = min{ max{ c̃_s + σ_s · N(0, 1), 0 }, V_s }   (15)

for s = 2, · · · , n, with V_n = U_n. Therefore, the mutation c of c̃ can be obtained by randomly generating a standard normal distribution N(0, 1).
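The mutation step can be sketched as follows. The standard-deviation schedule is simplified to a single constant sigma here, a hypothetical stand-in for the σ_s of (14); the clipping back into [0, V_s] is what preserves feasibility, exactly as Proposition 3 states.

```python
import random

# Mutation in the spirit of (14)/(15): perturb each component with Gaussian
# noise and clip back into its feasible interval, working from s = n down
# to s = 2 so the mutated chromosome still satisfies (11).
# U is a hypothetical set of bounds; sigma is a simplified constant scale.

def mutate(c, U, sigma, rng=random):
    n = max(U)
    new = {1: 0.0}
    x = c[n] + rng.gauss(0, sigma)
    new[n] = min(max(x, 0.0), U[n])          # clip into [0, U_n]
    for s in range(n - 1, 1, -1):
        V_s = min(U[s], (s / (s + 1)) * new[s + 1])
        x = c[s] + rng.gauss(0, sigma)
        new[s] = min(max(x, 0.0), V_s)       # clip into [0, V_s]
    return new

U = {2: 1.0, 3: 2.5}                         # hypothetical bounds
c = {1: 0.0, 2: 0.4, 3: 1.5}                 # feasible chromosome
m = mutate(c, U, sigma=0.2, rng=random.Random(0))
assert 0 <= m[3] <= U[3]
assert 0 <= m[2] <= min(U[2], (2 / 3) * m[3])
```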

Computational Procedure
Initially, the population size is assumed to be m. Therefore, we have m chromosomes c^(j) for j = 1, · · · , m. Each chromosome c^(j) is generated according to Proposition 1 and Remark 1. More precisely, each chromosome c^(j) has the form c^(j) = (0, c_2^(j), · · · , c_n^(j)), where c_s^(j) satisfies the inequalities (11) given by

c_1^(j) = 0, 0 ≤ c_s^(j) ≤ U_s and c_s^(j) / s ≥ c_{s−1}^(j) / (s − 1)

for s = 2, · · · , n. Each chromosome c^(j) is assigned a fitness value η(c^(j)) for j = 1, · · · , m. The m chromosomes c^(j) for j = 1, · · · , m are ranked in descending order of their corresponding fitness values η(c^(j)), where the first one is saved as the (initial) best fitness value, named η̄_0.
The mutation is based on Proposition 3. Each chromosome c^(j) = (0, c_2^(j), · · · , c_n^(j)) is mutated and assigned to c^(j+m) = (0, c_2^(j+m), · · · , c_n^(j+m)) in the way of (15) for j = 1, · · · , m. After this mutation step, we obtain 2m chromosomes. Now, we generate a standard normal distribution N(0, 1), and, from (14) or (15), the mutated chromosome is given by

c_s^(j+m) = min{ max{ c_s^(j) + σ_s · N(0, 1), 0 }, V_s }

for s = 2, · · · , n and j = 1, · · · , m. After the mutation step, we have 2m chromosomes c^(j) for j = 1, · · · , 2m. Then, the crossover operation is based on Proposition 2. Randomly select two chromosomes c^(j) and c^(k) for j, k ∈ {1, · · · , 2m} with j ≠ k. In order to calculate their convex combination according to (13), we generate a random number λ ∈ (0, 1); the new chromosome is given by c^(2m+1) = λ c^(j) + (1 − λ) c^(k). More precisely, we have

c_s^(2m+1) = λ c_s^(j) + (1 − λ) c_s^(k)

for s = 2, · · · , n. After this crossover step, we have 2m + 1 chromosomes c^(j) for j = 1, · · · , 2m + 1. Now, we have m old chromosomes c^(j) for j = 1, · · · , m and m + 1 new chromosomes c^(m+j) for j = 1, · · · , m + 1 that were generated from the mutation and crossover steps. Therefore, we can calculate the m + 1 new fitness values η(c^(j+m)) for j = 1, · · · , m + 1. In this case, we have in total 2m + 1 fitness values that can be used to select the m best chromosomes for the next generation. The 2m + 1 chromosomes are ranked in descending order of their corresponding fitness values, and the first m chromosomes are treated as the m best chromosomes and saved as the next generation. The computational procedure is presented below:

• Step 1 (initialization). The population size is assumed to be m. The initial population is determined by setting c^(j) = (0, c_2^(j), · · · , c_n^(j)), where c_n^(j) is a random number in [0, U_n] for all j = 1, · · · , m, and c_s^(j) are random numbers in [0, V_s] for all s = 2, · · · , n − 1 and j = 1, · · · , m, where V_s is given in Remark 1. Then, c^(j) satisfies the inequalities (11) for j = 1, · · · , m.
Given each c^(j), calculate the normalized Shapley values w̄_i(c^(j)) according to (6) and (9) for i = 1, · · · , n and j = 1, · · · , m. Each c^(j) is assigned a fitness value given by

η(c^(j)) = Σ_{i=1}^{n} w̄_i(c^(j)) · f_i( x*(c^(j)) )

for j = 1, · · · , m. The m chromosomes c^(j) for j = 1, · · · , m are ranked in descending order of their corresponding fitness values η(c^(j)), where the first one is saved as the (initial) best fitness value, named η̄_0. Save each c^(j) as an old elite c^(*j) given by c_s^(*j) = c_s^(j) for s = 1, · · · , n and j = 1, · · · , m. Regarding the stopping criterion, set the tolerance ε, and set the maximum number of iterations for satisfying the tolerance ε as m*. Set k = 0, which means the initial generation, and k* = 1, which means the first time for satisfying the tolerance ε;
• Step 2 (mutation). Set k ← k + 1, which means the kth generation. Each c^(j) is mutated and assigned to c^(j+m) in the way of (14). Generate a random Gaussian number with mean zero and standard deviation σ_s. In this paper, the standard deviation σ_s is taken to be of the following form:

σ_s = β · η(c^(j)) + z_s,

where β is a constant of proportionality to scale η(c^(j)) and z_s represents an offset. Therefore, we obtain the mutated chromosome with c_s^(j+m) ∈ [0, V_s] for s = 2, · · · , n. After this step, we have 2m chromosomes c^(j) for j = 1, · · · , 2m;
• Step 3 (crossover). Randomly select c^(j) and c^(k) for j, k ∈ {1, · · · , 2m} with j ≠ k. Generate a random number λ ∈ (0, 1); the new chromosome is given by c^(2m+1) = λ c^(j) + (1 − λ) c^(k) with c_s^(2m+1) ∈ [0, U_s] for s = 1, · · · , n. After this step, we have 2m + 1 chromosomes c^(j) for j = 1, · · · , 2m + 1. Proposition 2 says that c^(2m+1) satisfies the inequalities (11);
• Step 4 (calculate new fitness). For each c^(j+m), calculate the normalized Shapley values w̄_i(c^(j+m)) according to (6) and (9) for i = 1, · · · , n and j = 1, · · · , m + 1. Each c^(j+m) is assigned a fitness value given by

η(c^(j+m)) = Σ_{i=1}^{n} w̄_i(c^(j+m)) · f_i( x*(c^(j+m)) )

for j = 1, · · · , m + 1;
• Step 5 (selection).
The m + 1 new chromosomes c^(j+m) for j = 1, · · · , m + 1 obtained from Steps 2 and 3 and the m old elites c^(*j) for j = 1, · · · , m are ranked in descending order of their corresponding fitness values η(c^(j+m)) and η(c^(*j)). The first m chromosomes are saved as the new elites c^(*j) for j = 1, · · · , m, and the first one is saved as the best fitness value, named η̄_k for the kth generation;
• Step 6 (stopping criterion). It may happen that η̄_k = η̄_{k−1}. In order not to be trapped in a local optimum, we proceed with m* more iterations even though η̄_k − η̄_{k−1} < ε. If 0 ≤ η̄_k − η̄_{k−1} < ε and the number of iterations reaches m*, then the algorithm is halted and returns the Shapley-Pareto-optimal solution. Otherwise, the new elites c^(*j) for j = 1, · · · , m are copied to be the next generation c^(j) for j = 1, · · · , m. Set k* ← k* + 1, and the algorithm proceeds to Step 2.
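Steps 1 through 6 can be sketched as a generic elitist loop. The sketch below is a simplified stand-in, not the paper's exact procedure: the one-dimensional toy fitness replaces η(c) of (10), the chromosome is a single real number, and the stopping rule is a fixed generation count rather than the tolerance test of Step 6.

```python
import random

# Compact sketch of the evolutionary loop in Steps 1-6: sample an initial
# population, mutate every member, add one crossover child, then keep the
# m fittest (elitist truncation selection). All problem-specific pieces
# (fitness, sample, mutate, crossover) are passed in as functions.

def run_ga(fitness, sample, mutate, crossover, m=10, generations=50, seed=42):
    rng = random.Random(seed)
    pop = [sample(rng) for _ in range(m)]                 # Step 1
    for _ in range(generations):
        mutants = [mutate(c, rng) for c in pop]           # Step 2
        pool = pop + mutants
        a, b = rng.sample(pool, 2)                        # Step 3
        pool.append(crossover(a, b, rng.random()))
        pool.sort(key=fitness, reverse=True)              # Steps 4-5
        pop = pool[:m]                                    # elites survive
    return pop[0]

# Toy instantiation: maximize 1 - (c - 0.7)^2 over c in [0, 1].
best = run_ga(
    fitness=lambda c: 1 - (c - 0.7) ** 2,
    sample=lambda rng: rng.uniform(0, 1),
    mutate=lambda c, rng: min(max(c + rng.gauss(0, 0.1), 0.0), 1.0),
    crossover=lambda a, b, lam: lam * a + (1 - lam) * b,
)
assert abs(best - 0.7) < 0.2
```

In the paper's setting, `sample`, `mutate`, and `crossover` would be the feasibility-preserving operations of Propositions 1-3 on the vector c, and `fitness` would solve the weighting problem to evaluate η(c).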
The genetic algorithm is a randomized algorithm, so the final results of different runs may differ. However, in the long run, the final results will converge to a desired result. On the other hand, the genetic algorithm can be trapped in a local optimum. In order to avoid this difficulty, we may try many runs to make sure the results are almost the same. Step 6 above also provides a criterion to avoid being trapped in a local optimum.
The computer code was implemented using Microsoft Excel VBA, in which a built-in optimization tool can be used. Since this is a randomized algorithm, we tried many runs in order to obtain convergent results. After many runs, the best fitness value was around 10.16666411, and the best Shapley-Pareto-optimal solution was: (x_1, x_2, x_3, x_4, x_5, x_6, x_7) = (0, 0, 0, 0, 0, 1, 0).
We also remark that this simple numerical example can also be solved by using other heuristic algorithms such as ant colony optimization, artificial immune systems, particle swarm optimization, simulated annealing, Tabu search, etc. The purpose of this paper was not to provide a new genetic algorithm or to compare the efficiency of different heuristic algorithms. The genetic algorithm adopted in this paper is the standard one, so its efficiency can be gauged from the existing literature. The main purpose of this paper was to propose a new methodology for solving multiobjective optimization problems by incorporating the Shapley value of a formulated cooperative game and solving the corresponding single-objective weighting problem, using the well-known numerical techniques of single-objective optimization, to collect a family of so-called Shapley-Pareto-optimal solutions. The intention of the genetic algorithm adopted in this paper was to obtain the best Shapley-Pareto-optimal solution from the large set of Shapley-Pareto-optimal solutions. In other words, other kinds of heuristic algorithms can also be used to obtain the best Shapley-Pareto-optimal solution.

Conclusions
A new methodology applying the Shapley values of cooperative games was proposed to solve multiobjective optimization problems. Many approaches have been proposed in the literature to solve multiobjective optimization problems. One efficient way is to convert a multiobjective optimization problem into a single-objective optimization problem by summing all the objective functions into a single objective with some suitable weights. Usually, those weights are determined by the decision-makers by intuition. In this paper, we adopted the Shapley value of a formulated cooperative game as the weights of this weighting problem, which can avoid the biased assignment of weights directly determined by the decision-makers.
We can solve the single-objective weighting problem to obtain the so-called Shapley-Pareto-optimal solution by using the well-known numerical techniques of single-objective optimization. For example, in the linear case, this single-objective weighting problem will be a linear programming problem that can be solved by using the simplex method. Since the payoff function of this formulated cooperative game depends on the symmetric constants κ_s = c_s / s for s = 1, · · · , n, as shown in (2), different symmetric constants will determine different payoff functions, which in turn yield different Shapley values. In other words, the Shapley-Pareto-optimal solution will depend on the vector c = (c_1, · · · , c_n) of nonnegative constants, which also means that we may obtain a large set of Shapley-Pareto-optimal solutions. In order to obtain the best Shapley-Pareto-optimal solution from this set, a genetic algorithm was adopted in this paper by evolving the nonnegative vector c of constants.
It is well known that genetic algorithms can converge to the desired solution with probability one. In other words, we can obtain the approximated optimal solutions in the long run. The numerical example presented in this paper considered a three-objective linear programming problem with one constraint and seven decision variables. Although only one constraint was considered, this does not mean that this numerical example is too simple to demonstrate the methodology proposed in this paper. The reason is that seven decision variables were taken into account in this numerical example, and the difficulty of optimization problems often depends more on the number of decision variables than on the number of constraints.
Although the genetic algorithm was adopted in this paper to obtain the best Shapley-Pareto-optimal solution from the set of Shapley-Pareto-optimal solutions, other heuristic algorithms such as ant colony optimization, artificial immune systems, particle swarm optimization, simulated annealing, Tabu search, etc., can also be used to obtain the best Shapley-Pareto-optimal solutions. The purpose of this paper was not to provide a new genetic algorithm or to compare the efficiency of different heuristic algorithms. The main purpose of this paper was to propose a new methodology for solving multiobjective optimization problems by incorporating the Shapley value of a formulated cooperative game and obtaining the best Shapley-Pareto-optimal solution using the genetic algorithm. The main issue of the proposed genetic algorithm in this paper was to provide a recursive procedure to generate the vector c = (c_1, · · · , c_n) of nonnegative constants, as shown in Proposition 1, such that they are feasible for the proposed genetic algorithm. The genetic algorithm adopted in this paper is the standard one, and its efficiency compared with the existing heuristic algorithms can be gauged from the literature. In future research, we may design a new genetic algorithm and compare its efficiency with the existing heuristic algorithms using statistical analysis.
This paper considered the Shapley values of cooperative games. There are many other solution concepts of cooperative games that can also be used to set up the weights of the single-objective weighting problem, which can be a topic of future research. On the other hand, the theory of noncooperative games is another branch of game theory, which can also be used to set up the weighting problems in future research.