Article

Solving Fuzzy Optimization Problems Using Shapley Values and Evolutionary Algorithms

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan
Mathematics 2023, 11(24), 4871; https://doi.org/10.3390/math11244871
Submission received: 30 October 2023 / Revised: 24 November 2023 / Accepted: 1 December 2023 / Published: 5 December 2023
(This article belongs to the Section D2: Operations Research and Fuzzy Decision Making)

Abstract: The fusion of evolutionary algorithms and the solution concepts of cooperative game theory is proposed in this paper to solve fuzzy optimization problems. The original fuzzy optimization problem is transformed into a scalar optimization problem by assigning suitable coefficients. The assignment of those coefficients is frequently determined by the decision-makers via their subjectivity, which may introduce bias. In order to avoid such subjective bias, a cooperative game is formulated by considering the α-level functions of the fuzzy objective function. Using the Shapley values of this formulated cooperative game, the suitable coefficients can be set up in a reasonable way. Under these settings, the transformed scalar optimization problem is solved to obtain a nondominated solution, which depends on the coefficients. In other words, we obtain a family of nondominated solutions indexed by the coefficients. Finally, evolutionary algorithms are invoked to find the best nondominated solution by evolving the coefficients.

1. Introduction

The research topic of fuzzy optimization was initiated by Bellman and Zadeh [1]. Inspired and motivated by this work, many articles dealing with fuzzy optimization problems have appeared. The early works by Buckley [2], Herrera et al. [3], Julien [4], Inuiguchi [5], Luhandjula et al. [6], Verdegay [7], Tanaka et al. [8] and Zimmermann [9,10] mainly considered fuzzified constraints and objective functions. This kind of approach fuzzifies the crisp (conventional) optimization problem. For example, the following (crisp) constraints
$$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n \leq b_i \quad \text{for } i = 1, \ldots, m, \tag{1}$$
where $a_{ij}$ and $b_i$ are real numbers, are fuzzified to be
$$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n \,\widetilde{\leq}\, b_i \quad \text{for } i = 1, \ldots, m,$$
which are formulated via aspiration levels, using membership functions to describe the degree of violation of the original crisp constraints given in (1). Also, some of the real numbers $a_{ij}$ and $b_i$ can be fuzzified by imposing possibility distributions.
There is another approach to studying fuzzy optimization problems that considers fuzzy coefficients. In other words, the coefficients in the optimization problem are assumed to be fuzzy numbers. For example, we can consider the following fuzzy linear programming problem with a fuzzy objective function and real constraint functions
$$(\mathrm{FLP})\quad \begin{aligned} \max\ & \tilde{a}_1 \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{a}_2 \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{a}_n \otimes \tilde{1}_{\{x_n\}} \\ \text{subject to}\ & b_{i1} x_1 + b_{i2} x_2 + \cdots + b_{in} x_n \leq c_i \quad \text{for } i = 1, \ldots, m; \\ & x_i \geq 0 \quad \text{for } i = 1, \ldots, n, \end{aligned}$$
where $\tilde{a}_i$ are fuzzy numbers for $i = 1, \ldots, n$. Chalco-Cano et al. [11] and Wu [12] studied the Karush–Kuhn–Tucker optimality conditions for optimization problems with fuzzy coefficients. Li et al. [13] and Wu [14] also studied different types of optimality conditions. The duality theorems for optimization problems with fuzzy coefficients were studied by Wu [15,16] using the so-called Hukuhara derivative. On the other hand, Chalco-Cano et al. [17] and Pirzada and Pathak [18] proposed Newton methods to solve optimization problems with fuzzy coefficients. In general, solving optimization problems with fuzzy decision variables is a difficult task; a long paper by Wu [19] provides an efficient way to solve fuzzy linear programming problems with fuzzy decision variables.
The so-called fully fuzzified linear programming problem was solved by Buckley and Feuring [20] using an evolutionary algorithm. The particle swarm optimization method was used by Baykasoglu and Gocken [21] to solve fuzzy optimization problems in which only triangular fuzzy numbers were considered. The fully fuzzy linear programming problem was studied by Ezzati et al. [22], in which the original fuzzy problem was converted into a multiobjective linear programming problem. The fully fuzzy linear programming problem was also studied by Ahmad et al. [23], Jayalakshmi and Pandian [24], Khan et al. [25], Kumar et al. [26], Lotfi et al. [27], Najafi et al. [28] and Nasseri et al. [29], in which triangular fuzzy numbers were used. On the other hand, fully fuzzy linear programming problems were studied by Kaur and Kumar [30], in which trapezoidal fuzzy numbers were considered. Considering triangular or trapezoidal fuzzy numbers can simplify the formulation. However, methods based on triangular or trapezoidal fuzzy numbers become invalid when the fuzzy quantities are taken to be general bell-shaped fuzzy numbers. Regarding engineering problems, the fuzzy transportation problem was studied by Chakraborty et al. [31], Jaikumar [32] and Baykasoglu and Subulan [33], in which the fuzzy quantities are also taken to be triangular fuzzy numbers. The fuzzy transportation problem was also solved by Ebrahimnejad [34] and Kaur and Kumar [35], in which so-called generalized trapezoidal fuzzy numbers were considered. The unbalanced fully fuzzy minimal cost flow problem was studied by Kaur and Kumar [30], in which the fuzzy quantities are taken to be LR fuzzy numbers.
von Neumann and Morgenstern [36] initiated the study of game theory in economics, which mainly concerns the behavior of players whose decisions affect one another. This is also called noncooperative game theory. On the other hand, a cooperative game is a game in coalitional form in which cooperation among different players is of concern. Nash [37] studied the concept of a general two-player cooperative game and provided a solution concept for such cooperative games. Cooperation means that the players have complete freedom of communication and comprehensive information on the structure of the game. After this inspiration, many solution concepts of cooperative games were proposed. The monotonic solution of cooperative games was studied by Young [38]. The idea of monotonicity says that, if a game changes such that some player's contribution to all coalitions increases or stays the same, then that player's allocation should not decrease. The well-known Shapley value of a cooperative game is the unique symmetric and efficient solution concept, and it is also a monotonic solution. This paper adopts the Shapley value to study fuzzy optimization problems. Moreover, the monographs by Barron [39], Branzei et al. [40], Curiel [41], González-Díaz et al. [42] and Owen [43] address more details on the topic of game theory.
In this paper, we study optimization problems with fuzzy coefficients. We first transform the original fuzzy optimization problem into a scalar optimization problem. An ordering on the family of all fuzzy numbers is proposed, which is used to define the so-called nondominated solutions of the original fuzzy optimization problem. Under these settings, we establish a relationship showing that each optimal solution of the transformed scalar optimization problem is also a nondominated solution of the original fuzzy optimization problem. In this situation, it suffices to solve the transformed scalar optimization problem.
In order to formulate this scalar optimization problem, we need to assign different weights to the objective functions. Therefore, in this paper, we introduce a cooperative game that is formulated from the objective functions. In this case, the weights of the objective functions are assigned to be the corresponding Shapley values of this formulated cooperative game. After the weights have been assigned, we can solve this scalar optimization problem to obtain a nondominated solution of the original fuzzy optimization problem. Since different weights give rise to different scalar optimization problems, the set of all nondominated solutions is frequently large, in the sense that it is usually an uncountable set. In this paper, we apply evolutionary algorithms to find the best nondominated solution.
In Section 2, we formulate an optimization problem with fuzzy coefficients and define its solution concept, called a nondominated solution. We also transform this fuzzy optimization problem into a scalar (crisp) optimization problem and show that the optimal solutions of this scalar optimization problem are also nondominated solutions of the original fuzzy optimization problem. In Section 3, we introduce the concept of the Shapley value in cooperative games. In Section 4, in order to solve the scalar optimization problem, we formulate its objective functions as a cooperative game. In Section 5, we use the Shapley value of the formulated cooperative game to set up the corresponding scalar optimization problem. From the different scalar optimization problems, we can generate a family of different nondominated solutions. Therefore, in Section 6, we use evolutionary algorithms to find the best nondominated solution. A concise numerical example is presented in Section 7.

2. Formulation

The fuzzy subset $\tilde{A}$ of $\mathbb{R}$ is defined by a membership function $\xi_{\tilde{A}} : \mathbb{R} \to [0,1]$. The $\alpha$-level set of $\tilde{A}$, denoted by $\tilde{A}_\alpha$, is defined by
$$\tilde{A}_\alpha = \{x \in \mathbb{R} : \xi_{\tilde{A}}(x) \geq \alpha\}$$
for all $\alpha \in (0,1]$. The 0-level set $\tilde{A}_0$ is defined as the closure of the set $\{x \in \mathbb{R} : \xi_{\tilde{A}}(x) > 0\}$. It is clear that $\tilde{A}_\alpha \subseteq \tilde{A}_\beta$ for $\alpha > \beta$.
Any subset $A$ of $\mathbb{R}$ can also be treated as a fuzzy set $\tilde{1}_A$ in $\mathbb{R}$ by taking the membership function
$$\xi_{\tilde{1}_A}(x) = \begin{cases} 1 & \text{if } x \in A \\ 0 & \text{if } x \notin A. \end{cases}$$
When $A$ is a singleton $\{a\}$, we also write $\tilde{1}_{\{a\}}$, which means that each real number $a \in \mathbb{R}$ is treated as $\tilde{1}_{\{a\}}$.
Let $\tilde{A}$ and $\tilde{B}$ be two fuzzy subsets of $\mathbb{R}$. According to the extension principle, the addition and multiplication of $\tilde{A}$ and $\tilde{B}$ are defined by
$$\xi_{\tilde{A} \oplus \tilde{B}}(z) = \sup_{\{(x,y)\,:\,z = x + y\}} \min\big\{\xi_{\tilde{A}}(x), \xi_{\tilde{B}}(y)\big\}$$
and
$$\xi_{\tilde{A} \otimes \tilde{B}}(z) = \sup_{\{(x,y)\,:\,z = x y\}} \min\big\{\xi_{\tilde{A}}(x), \xi_{\tilde{B}}(y)\big\}.$$
Let $\tilde{a}$ be a fuzzy subset of $\mathbb{R}$. We say that $\tilde{a}$ is a fuzzy interval when the following conditions are satisfied:
  • $\tilde{a}$ is normal, i.e., $\xi_{\tilde{a}}(x) = 1$ for some $x \in \mathbb{R}$;
  • $\tilde{a}$ is convex, i.e., the membership function $\xi_{\tilde{a}}$ is quasi-concave;
  • the membership function $\xi_{\tilde{a}}$ is upper semicontinuous;
  • the 0-level set $\tilde{a}_0$ is a closed and bounded subset of $\mathbb{R}$.
It is well known that each $\alpha$-level set $\tilde{a}_\alpha$ of a fuzzy interval $\tilde{a}$ is a bounded closed interval in $\mathbb{R}$, which is denoted by
$$\tilde{a}_\alpha = \big[\tilde{a}_\alpha^L, \tilde{a}_\alpha^U\big].$$
We denote by $\mathcal{F}_{cc}$ the family of all fuzzy intervals, and consider a fuzzy-valued function $\tilde{f} : X \to \mathcal{F}_{cc}$ defined on a nonempty set $X$. In this case, we can generate the real-valued functions $\tilde{f}_\alpha^L$ and $\tilde{f}_\alpha^U$ for $\alpha \in [0,1]$ defined by
$$\tilde{f}_\alpha^L(x) = \big(\tilde{f}(x)\big)_\alpha^L \quad \text{and} \quad \tilde{f}_\alpha^U(x) = \big(\tilde{f}(x)\big)_\alpha^U.$$
We consider the following fuzzy optimization problem (FOP) with fuzzy coefficients $\tilde{a}_1, \ldots, \tilde{a}_n$ and real decision variables $x_1, \ldots, x_n$ in $\mathbb{R}$:
$$(\mathrm{FOP})\quad \max\ \tilde{f}(\tilde{a}, x) \quad \text{subject to}\ x \in X,$$
where $X$ is a feasible region in $\mathbb{R}^n$ and $\tilde{f}(\tilde{a}, x)$ denotes the fuzzy objective function of (FOP). For example, we can consider the following fuzzy linear programming problem with a fuzzy objective function and real constraint functions:
$$(\mathrm{FLP})\quad \begin{aligned} \max\ & \tilde{a}_1 \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{a}_2 \otimes \tilde{1}_{\{x_2\}} \oplus \cdots \oplus \tilde{a}_n \otimes \tilde{1}_{\{x_n\}} \\ \text{subject to}\ & b_{i1} x_1 + b_{i2} x_2 + \cdots + b_{in} x_n \leq c_i \quad \text{for } i = 1, \ldots, m; \\ & x_i \geq 0 \quad \text{for } i = 1, \ldots, n. \end{aligned}$$
In this paper, the fuzzy coefficients $\tilde{a}_1, \ldots, \tilde{a}_n$ are taken to be fuzzy intervals. In order to interpret the meaning of an optimal solution of problem (FOP), we need to introduce an ordering on the set of all fuzzy intervals.
Definition 1. 
Let $\tilde{a}$ and $\tilde{b}$ be two fuzzy intervals. We define an ordering "$\prec$" between $\tilde{a}$ and $\tilde{b}$ as follows. We write $\tilde{a} \prec \tilde{b}$ when the following conditions are satisfied:
  • $\tilde{a}_\alpha^L \leq \tilde{b}_\alpha^L$ and $\tilde{a}_\alpha^U \leq \tilde{b}_\alpha^U$ for all $\alpha \in [0,1]$;
  • there exists $\alpha^* \in [0,1]$ satisfying $\tilde{a}_{\alpha^*}^L < \tilde{b}_\alpha^L$ for all $0 \leq \alpha \leq \alpha^*$ or $\tilde{a}_{\alpha^*}^U < \tilde{b}_\alpha^U$ for all $\alpha^* \leq \alpha \leq 1$.
Transitivity is an important property of an ordering. It is clear that the ordering proposed in Definition 1 is indeed transitive.
Definition 2. 
We say that $x^*$ is a nondominated optimal solution of the fuzzy optimization problem (FOP) when there does not exist another feasible solution $x \in X$ satisfying
$$\tilde{f}(\tilde{a}, x^*) \prec \tilde{f}(\tilde{a}, x).$$
Let $\Lambda = \{0 = \alpha_1, \alpha_2, \ldots, \alpha_m = 1\}$ be a partition of $[0,1]$. We consider the following scalar optimization problem
$$(\mathrm{SOP})\quad \max\ w_1 \tilde{f}_{\alpha_1}^L(\tilde{a}, x) + \cdots + w_m \tilde{f}_{\alpha_m}^L(\tilde{a}, x) + w_{m+1} \tilde{f}_{\alpha_1}^U(\tilde{a}, x) + \cdots + w_{2m} \tilde{f}_{\alpha_m}^U(\tilde{a}, x) \quad \text{subject to}\ x \in X,$$
where $w_i \geq 0$ for all $i = 1, \ldots, 2m$, satisfying
$$w_1 + w_2 + \cdots + w_{2m} = 1.$$
The following theorem will be useful to find the nondominated solution of the problem (FOP).
Theorem 1. 
Suppose that $w_i > 0$ for all $i = 1, \ldots, 2m$. If $x^*$ is an optimal solution of the scalar optimization problem (SOP), then it is also a nondominated solution of the fuzzy optimization problem (FOP).
Proof. 
Suppose that $x^*$ is not a nondominated optimal solution of problem (FOP). Then, there exists $x$ satisfying $\tilde{f}(\tilde{a}, x^*) \prec \tilde{f}(\tilde{a}, x)$. Therefore, the following conditions are satisfied.
  • We have $\tilde{f}_\alpha^L(\tilde{a}, x^*) \leq \tilde{f}_\alpha^L(\tilde{a}, x)$ and $\tilde{f}_\alpha^U(\tilde{a}, x^*) \leq \tilde{f}_\alpha^U(\tilde{a}, x)$ for all $\alpha \in [0,1]$.
  • There exists $\alpha^* \in [0,1]$ satisfying $\tilde{f}_{\alpha^*}^L(\tilde{a}, x^*) < \tilde{f}_\alpha^L(\tilde{a}, x)$ for all $0 \leq \alpha \leq \alpha^*$ or $\tilde{f}_{\alpha^*}^U(\tilde{a}, x^*) < \tilde{f}_\alpha^U(\tilde{a}, x)$ for all $\alpha^* \leq \alpha \leq 1$.
We claim that $x$ satisfies the following conditions:
  • $\tilde{f}_{\alpha_i}^L(\tilde{a}, x^*) \leq \tilde{f}_{\alpha_i}^L(\tilde{a}, x)$ and $\tilde{f}_{\alpha_i}^U(\tilde{a}, x^*) \leq \tilde{f}_{\alpha_i}^U(\tilde{a}, x)$ for all $i = 1, \ldots, m$;
  • there exists $\alpha_r \in \Lambda$ such that $\tilde{f}_{\alpha_r}^L(\tilde{a}, x^*) < \tilde{f}_{\alpha_r}^L(\tilde{a}, x)$ or $\tilde{f}_{\alpha_r}^U(\tilde{a}, x^*) < \tilde{f}_{\alpha_r}^U(\tilde{a}, x)$.
It is obvious that $\tilde{f}_{\alpha_i}^L(\tilde{a}, x^*) \leq \tilde{f}_{\alpha_i}^L(\tilde{a}, x)$ and $\tilde{f}_{\alpha_i}^U(\tilde{a}, x^*) \leq \tilde{f}_{\alpha_i}^U(\tilde{a}, x)$ for all $i = 1, \ldots, m$. Now, we consider the following cases.
  • Suppose that $\tilde{f}_{\alpha^*}^L(\tilde{a}, x^*) < \tilde{f}_\alpha^L(\tilde{a}, x)$ for all $0 \leq \alpha \leq \alpha^*$. There exists $\alpha_r \in \Lambda$ such that $\alpha_r \leq \alpha^*$. In this case, we have
    $$\tilde{f}_{\alpha_r}^L(\tilde{a}, x^*) \leq \tilde{f}_{\alpha^*}^L(\tilde{a}, x^*) < \tilde{f}_{\alpha_r}^L(\tilde{a}, x).$$
  • Suppose that $\tilde{f}_{\alpha^*}^U(\tilde{a}, x^*) < \tilde{f}_\alpha^U(\tilde{a}, x)$ for all $\alpha^* \leq \alpha \leq 1$. There exists $\alpha_r \in \Lambda$ such that $\alpha_r \geq \alpha^*$. In this case, we have
    $$\tilde{f}_{\alpha_r}^U(\tilde{a}, x^*) \leq \tilde{f}_{\alpha^*}^U(\tilde{a}, x^*) < \tilde{f}_{\alpha_r}^U(\tilde{a}, x).$$
The above two cases show that there exists $\alpha_r \in \Lambda$ satisfying $\tilde{f}_{\alpha_r}^L(\tilde{a}, x^*) < \tilde{f}_{\alpha_r}^L(\tilde{a}, x)$ or $\tilde{f}_{\alpha_r}^U(\tilde{a}, x^*) < \tilde{f}_{\alpha_r}^U(\tilde{a}, x)$. Since each $w_i > 0$ for $i = 1, \ldots, 2m$, it follows that
$$\sum_{i=1}^{m} w_i \tilde{f}_{\alpha_i}^L(\tilde{a}, x^*) + \sum_{i=1}^{m} w_{m+i} \tilde{f}_{\alpha_i}^U(\tilde{a}, x^*) < \sum_{i=1}^{m} w_i \tilde{f}_{\alpha_i}^L(\tilde{a}, x) + \sum_{i=1}^{m} w_{m+i} \tilde{f}_{\alpha_i}^U(\tilde{a}, x),$$
which contradicts the fact that $x^*$ is an optimal solution of the scalar optimization problem (SOP). This completes the proof. □
The determination of the coefficients $w_i$ for $i = 1, \ldots, 2m$ depends on the viewpoint of the decision-makers. This means that there is no canonical way to set up the scalar optimization problem. This paper follows the solution concepts of game theory to determine the coefficients $w_i$ for $i = 1, \ldots, 2m$. The main reason is that the objective functions $\tilde{f}_{\alpha_i}^L$ and $\tilde{f}_{\alpha_i}^U$ can be regarded as the payoffs of players $i$ and $m+i$, respectively, for $i = 1, \ldots, m$. In this case, we can formulate a cooperative game. The Shapley value is a standard solution concept of a cooperative game, and its components will be taken to be the coefficients $w_i$ for $i = 1, \ldots, 2m$ when creating the scalar optimization problem.

3. Shapley Values

Given a set $N = \{1, \ldots, n\}$ of players, any nonempty subset $S \subseteq N$ is called a coalition. Let $\mathcal{P}(N)$ denote the family of all subsets of $N$. Equivalently, $\mathcal{P}(N)$ is the family of all coalitions. We consider a function $v : \mathcal{P}(N) \to \mathbb{R}$ defined on $\mathcal{P}(N)$ satisfying $v(\emptyset) = 0$. Then, the ordered pair $(N, v)$ is called a cooperative game. Given any $S \in \mathcal{P}(N)$, the number $v(S)$ is treated as the worth of coalition $S$ in the game $(N, v)$.
Let $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ be a payoff vector or an allocation, where each $x_i$ represents the share of the value $v(N)$ received by player $i$ for $i = 1, \ldots, n$. Then, we have the following concepts.
  • The vector $x \in \mathbb{R}^n$ is called a pre-imputation when it satisfies the following equality
    $$v(N) = \sum_{i \in N} x_i. \tag{2}$$
    This says that group rationality is satisfied.
  • The vector $x \in \mathbb{R}^n$ is called an imputation when it is a pre-imputation and satisfies the following individual rationality
    $$x_i \geq v(\{i\}) \quad \text{for } i \in N. \tag{3}$$
The individual rationality condition (3) says that each member of a coalition receives at least the amount that the player can obtain by acting alone, without any support from other players. Group rationality means that any increase in reward to one player must be matched by a decrease in reward for one or more other players. The main objective of cooperative game theory is to determine the imputation that results in a fair allocation of the total rewards, which depends on the definition of fairness. In this paper, we consider the Shapley values of cooperative games, which can be treated as fair allocations of the total rewards.
A carrier of the cooperative game $(N, v)$ is a coalition $T$ satisfying
$$v(S) = v(S \cap T) \quad \text{for any coalition } S \subseteq N.$$
This definition states that each player $i \notin T$ is a dummy player; that is to say, player $i$ has nothing to contribute to any coalition.
Let $\pi : N \to N$ be a one-to-one function. Given a coalition $S \subseteq N$ with $|S| = s$, we can write $S = \{i_1, i_2, \ldots, i_s\}$. Then, we have $\pi(S) = \{\pi(i_1), \pi(i_2), \ldots, \pi(i_s)\}$. In this case, we can define a new cooperative game $(N, v_\pi)$ by $v_\pi(S) = v(\pi^{-1}(S))$ for any $S \in \mathcal{P}(N)$.
Given a cooperative game $(N, v)$, we consider a corresponding vector
$$\phi(v) = \big(\phi_1(v), \ldots, \phi_n(v)\big),$$
where the $i$th component $\phi_i(v)$ is interpreted as the payoff received by player $i$ under an agreement. This function $\phi$ is taken to satisfy the following Shapley axioms.
  • (S1) If $S$ is any carrier of the game $(N, v)$, then we have $\sum_{i \in S} \phi_i(v) = v(S)$.
  • (S2) For any one-to-one function $\pi : N \to N$ and any $i \in N$, we have $\phi_{\pi(i)}(v_\pi) = \phi_i(v)$.
  • (S3) If $(N, v_1)$ and $(N, v_2)$ are any cooperative games, then we have $\phi_i(v_1 + v_2) = \phi_i(v_1) + \phi_i(v_2)$ for all $i \in N$.
The function $\phi$ from the family of all cooperative games into the $n$-dimensional Euclidean space $\mathbb{R}^n$ defines a vector $\phi(v)$ that is called the Shapley value of the cooperative game $(N, v)$.
The well-known result is given as follows. There exists a unique function $\phi$ defined on the family of all cooperative games that satisfies axioms (S1), (S2) and (S3). Moreover, we have
$$\phi_i(v) = \sum_{\{S \,:\, i \in S \subseteq N\}} \frac{(|S| - 1)!\,(|N| - |S|)!}{|N|!} \cdot \big(v(S) - v(S \setminus \{i\})\big). \tag{4}$$
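The formula (4) can be evaluated directly by enumerating all coalitions that contain player $i$. The following is a minimal Python sketch of this enumeration; the three-player game at the bottom is a hypothetical illustration, not taken from this paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Return {i: phi_i(v)} by direct enumeration of formula (4)."""
    n = len(players)
    phi = {}
    for i in players:
        total = 0.0
        others = [j for j in players if j != i]
        for r in range(n):  # r = |S| - 1, the size of S without player i
            for rest in combinations(others, r):
                S = frozenset(rest) | {i}
                weight = factorial(len(S) - 1) * factorial(n - len(S)) / factorial(n)
                total += weight * (v(S) - v(S - {i}))
        phi[i] = total
    return phi

# Hypothetical symmetric 3-player game: singletons are worth 2, pairs 6,
# and the grand coalition 10; symmetry and efficiency give each player 10/3.
worth = lambda S: {0: 0.0, 1: 2.0, 2: 6.0, 3: 10.0}[len(S)]
print(shapley_values([1, 2, 3], worth))
```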
In this paper, we are going to use the Shapley values to transform the optimization problem with fuzzy objective functions into a scalar optimization problem by setting up a reasonable weight.

4. Formulation of Corresponding Cooperative Game

Based on the objective function of the scalar optimization problem (SOP), we formulate a cooperative game. First of all, we assume that there are $2m$ players with $N = \{1, \ldots, 2m\}$. The objective functions $\tilde{f}_{\alpha_i}^L$ are regarded as the payoffs of players $i$ for $i = 1, \ldots, m$, and the objective functions $\tilde{f}_{\alpha_i}^U$ are regarded as the payoffs of players $m+i$ for $i = 1, \ldots, m$.
Each player's ideal payoff is obtained by maximizing its corresponding payoff function. Let
$$\zeta_i^* = \sup_{x \in X} \tilde{f}_{\alpha_i}^L(x) \quad \text{and} \quad \zeta_{m+i}^* = \sup_{x \in X} \tilde{f}_{\alpha_i}^U(x) \quad \text{for } i = 1, \ldots, m.$$
In this case, $\zeta_i^*$ is the ideal payoff of player $i$ for $i = 1, \ldots, 2m$. Since $v(\{i\})$ is treated as the payoff of player $i$ in a cooperative game, it is reasonable to assume
$$v(\{i\}) \leq \zeta_i^* \quad \text{for } i = 1, \ldots, 2m,$$
which means that the payoff $v(\{i\})$ cannot exceed the maximum payoff. In a perfect cooperative game, the payoff $v(\{i\})$ of player $i$ may reach its maximum payoff $\zeta_i^*$.
Let $S$ be a subset of $N$, regarded as a coalition. Under the coalition $S = \{i_1, i_2, \ldots, i_s\}$ with $s = |S|$, the payoff of the coalition $S$ should be no less than the sum of the individual payoffs of the players in $S$. In other words, we must have
$$v(S) \geq v(\{i_1\}) + v(\{i_2\}) + \cdots + v(\{i_s\}),$$
which shows the effect of cooperation. Also, the payoff of coalition $S$ cannot be greater than the sum of the ideal payoffs on $S$. More precisely, we have the following inequalities
$$v(\{i_1\}) + v(\{i_2\}) + \cdots + v(\{i_s\}) \leq v(S) \leq \zeta_{i_1}^* + \zeta_{i_2}^* + \cdots + \zeta_{i_s}^*. \tag{5}$$
Now, we define the payoff $v(S)$ of any coalition $S$ with $|S| \geq 2$. We define
$$v(S) = \sum_{l=1}^{s} v(\{i_l\}) + \frac{\kappa_s}{s} \sum_{l=1}^{s} v(\{i_l\}), \tag{6}$$
where $\kappa_s$ is a nonnegative constant that does not depend on any particular coalition $S$ with $|S| = s$. The second term in (6) says that, under the coalition $S$, an extra payoff can be obtained, equal to $\kappa_s$ times the average of the individual payoffs.
Since the upper bound of $v(S)$ is given in (5), the constant $\kappa_s$ must satisfy the following inequality
$$\sum_{l=1}^{s} v(\{i_l\}) + \frac{\kappa_s}{s} \sum_{l=1}^{s} v(\{i_l\}) \leq \sum_{l=1}^{s} \zeta_{i_l}^*.$$
After some algebraic calculations, we obtain
$$0 \leq \kappa_s \leq \frac{s \cdot \sum_{l=1}^{s} \big(\zeta_{i_l}^* - v(\{i_l\})\big)}{\sum_{l=1}^{s} v(\{i_l\})} = \frac{s \cdot \sum_{l=1}^{s} \zeta_{i_l}^*}{\sum_{l=1}^{s} v(\{i_l\})} - s \quad \text{for } s = 2, \ldots, 2m. \tag{7}$$
Let
$$\Omega_s(S) = \frac{s \cdot \sum_{l=1}^{s} \zeta_{i_l}^*}{\sum_{l=1}^{s} v(\{i_l\})} - s.$$
Now, we define
$$\Omega_s = \min\big\{\Omega_s(S) : S \subseteq N \ \text{with}\ |S| = s\big\}, \tag{8}$$
which does not depend on any particular coalition $S$ with $|S| = s$. Then, we require
$$0 \leq \kappa_s \leq \Omega_s \quad \text{for } s = 2, \ldots, 2m.$$
For convenience, we also define $\kappa_1 = 0$.
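As an illustration, the bound (8) can be computed by brute force over all coalitions of a given size. The sketch below is a direct transcription of the definitions of $\Omega_s(S)$ and $\Omega_s$; the data at the bottom anticipate the numerical example of Section 7.

```python
from itertools import combinations

def omega(s, zeta, v1):
    """Omega_s = min over |S| = s of  s * sum(zeta_i*) / sum(v({i})) - s."""
    return min(
        s * sum(zeta[i] for i in S) / sum(v1[i] for i in S) - s
        for S in combinations(sorted(zeta), s)
    )

# Data anticipating Section 7: ideal payoffs zeta_i* and v({i}) = c_i * zeta_i*.
zeta = {1: 75.0, 2: 84.0, 3: 93.0, 4: 103.5, 5: 98.25}
v1 = {1: 0.5 * 75.0, 2: 0.6 * 84.0, 3: 0.7 * 93.0, 4: 0.5 * 103.5, 5: 0.7 * 98.25}
print([round(omega(s, zeta, v1), 5) for s in (2, 3, 4, 5)])
# -> [0.85714, 1.48107, 2.31721, 3.29449], matching the values computed there
```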

5. Formulation of the Corresponding Scalar Optimization Problem

By looking at the scalar optimization problem (SOP), we see that the coefficients are determined by the vector
$$w(v) = \big(w_1(v), \ldots, w_{2m}(v)\big).$$
We can treat the $i$th component $w_i(v)$ as the fair payoff received by player $i$ under an agreement. In this case, the coefficients $w$ can be regarded as depending on a cooperative game $(N, v)$. Now, we assume that the coefficients $w$ satisfy the following agreement.
  • If $S$ is any carrier of the game $(N, v)$, then $\sum_{i \in S} w_i(v) = v(S)$.
  • For any one-to-one function $\pi : N \to N$ and any $i \in N$, we have $w_{\pi(i)}(v_\pi) = w_i(v)$.
  • If $(N, v_1)$ and $(N, v_2)$ are any cooperative games, then we have $w_i(v_1 + v_2) = w_i(v_1) + w_i(v_2)$ for all $i \in N$.
Let $\mathcal{G}$ be the family of all cooperative games with the same player set $N$. Then, the function $w : \mathcal{G} \to \mathbb{R}^{2m}$ given by $(N, v) \mapsto w(v)$ defines a vector $w(v)$ that is the Shapley value of the cooperative game $(N, v)$.
Now, we can solve the scalar optimization problem (SOP) by taking the Shapley value $w(v)$ as the coefficients, which avoids the possibly biased determination of the weights $w$ by the decision-makers using intuition. More precisely, the coefficients are given by the following formula
$$w_i(v) = \sum_{\{S \,:\, i \in S \subseteq N\}} \frac{(|S| - 1)!\,(|N| - |S|)!}{|N|!} \cdot \big(v(S) - v(S \setminus \{i\})\big). \tag{9}$$
Using (6), the term $v(S) - v(S \setminus \{i\})$ can be calculated as follows
$$v(S) - v(S \setminus \{i\}) = v(\{i\}) + \left(\frac{\kappa_s}{s} - \frac{\kappa_{s-1}}{s-1}\right) \cdot \sum_{j \in S \setminus \{i\}} v(\{j\}) + \frac{\kappa_s}{s} \cdot v(\{i\}), \tag{10}$$
where $s = |S|$.
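A short sketch of the payoff function (6) may help: once the individual payoffs $v(\{i\})$ and the vector $\kappa$ are fixed, $v(S)$ is simply $(1 + \kappa_s/s)$ times the sum of the individual payoffs over $S$, and feeding this $v$ into a Shapley-value routine such as the sketch in Section 3 produces the weights $w_i(v)$ in (9). The numbers below are hypothetical.

```python
def make_game(v1, kappa):
    """Worth function (6): v(S) = (1 + kappa_s / s) * sum of v({i}) over S."""
    def v(S):
        if not S:               # v(emptyset) = 0
            return 0.0
        s = len(S)
        return (1.0 + kappa[s] / s) * sum(v1[i] for i in S)
    return v

# Hypothetical inputs: three players with kappa = (0, 0.4, 0.9), which
# satisfies kappa_s / s >= kappa_{s-1} / (s - 1) as required by (15) below.
v = make_game({1: 37.5, 2: 50.4, 3: 65.1}, {1: 0.0, 2: 0.4, 3: 0.9})
print(v({1, 2}), v({1, 2, 3}))   # 105.48 and 198.9
```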
In order to guarantee the nonnegativity of $v(S)$ and $w_i(v)$, we assume that the following conditions are satisfied.
  • We assume
    $$\zeta_i^* > 0 \quad \text{and} \quad 0 \leq v(\{i\}) < \zeta_i^* \quad \text{for all } i = 1, \ldots, 2m.$$
    Under this assumption, using (7), a constant $\kappa_s > 0$ can be taken for all $s = 2, \ldots, 2m$, which implies $v(S) \geq 0$ for all $S \subseteq N$.
  • Recall that $\kappa_1 = 0$ for convenience. For $s = 2, \ldots, 2m$, we assume
    $$\frac{\kappa_s}{s} \geq \frac{\kappa_{s-1}}{s-1}. \tag{11}$$
    Under this assumption, using (9) and (10), it follows that $w_i(v) \geq 0$ for $i = 1, \ldots, 2m$.
We still need to normalize the coefficients as follows
$$\bar{w}_i(v) = \frac{w_i(v)}{\sum_{k=1}^{2m} w_k(v)} \quad \text{for } i = 1, \ldots, 2m. \tag{12}$$
It is clear that $0 < \bar{w}_i(v) < 1$ for all $i = 1, \ldots, 2m$. When the coefficients are taken to be the normalized Shapley values given in (12), Theorem 1 says that the optimal solutions of the scalar optimization problem (SOP) are nondominated solutions of problem (FOP). In this case, these nondominated solutions are also called Shapley-nondominated solutions of problem (FOP).
By referring to (6), the cooperative game $(N, v)$ depends on the nonnegative constants $\kappa_s$ for $s = 1, \ldots, 2m$. In other words, the payoff function $v$ depends on the vector $\kappa = (\kappa_1, \ldots, \kappa_{2m})$. By referring to (9) and (10), we see that the coefficients and their normalized coefficients also depend on $\kappa$. In this case, we can write
$$w_i(v) \equiv w_i(\kappa) \quad \text{and} \quad \bar{w}_i(v) \equiv \bar{w}_i(\kappa) \quad \text{for } i = 1, \ldots, 2m.$$
Our purpose is to obtain the Shapley-nondominated solution by solving the following scalar optimization problem
$$(\mathrm{SOP})\quad \max\ \sum_{i=1}^{m} \bar{w}_i(\kappa) \cdot \tilde{f}_{\alpha_i}^L(\tilde{a}, x) + \sum_{i=1}^{m} \bar{w}_{m+i}(\kappa) \cdot \tilde{f}_{\alpha_i}^U(\tilde{a}, x) \quad \text{subject to}\ x \in X.$$
By referring to Theorem 1, the optimal solution $x^*(\kappa)$ of problem (SOP) is a Shapley-nondominated solution, where $x^*(\kappa)$ depends on the vector $\kappa$. Let $\mathcal{X}$ be the set of all Shapley-nondominated solutions, i.e.,
$$\mathcal{X} = \big\{x^*(\kappa) : \kappa_1 = 0 \ \text{and}\ 0 \leq \kappa_s \leq \Omega_s \ \text{for } s = 2, \ldots, 2m\big\}, \tag{13}$$
where $\Omega_s$ refers to (8). We are going to use evolutionary algorithms to find the best Shapley-nondominated solution in $\mathcal{X}$.

6. Evolutionary Algorithms

In what follows, we design an evolutionary algorithm to find the best Shapley-nondominated solution from the set $\mathcal{X}$ given in (13) by maximizing the following fitness function
$$\tau(\kappa) = \sum_{i=1}^{m} \bar{w}_i(\kappa) \cdot \tilde{f}_{\alpha_i}^L\big(\tilde{a}, x^*(\kappa)\big) + \sum_{i=1}^{m} \bar{w}_{m+i}(\kappa) \cdot \tilde{f}_{\alpha_i}^U\big(\tilde{a}, x^*(\kappa)\big). \tag{14}$$
We can see that obtaining the best Shapley-nondominated solution is indeed a hard problem. However, we can obtain an approximated best Shapley-nondominated solution by using evolutionary algorithms, which are designed in two phases.
The scalar optimization problem (SOP) depends on the partition $\Lambda = \{0 = \alpha_1, \alpha_2, \ldots, \alpha_m = 1\}$ of $[0,1]$. Phase I obtains the approximated best Shapley-nondominated solution when the partition $\Lambda$ is fixed. Phase II performs phase I for finer partitions $\Lambda$ of $[0,1]$ until the approximated best Shapley-nondominated solution cannot be improved, at which point the final result is returned.

6.1. Phase I

The partition $\Lambda = \{0 = \alpha_1, \alpha_2, \ldots, \alpha_m = 1\}$ of $[0,1]$ is fixed. In order to generate the nonnegative constants $\kappa_s$ for $s = 2, \ldots, 2m$ such that the inequalities (7) and (11) are satisfied, we design a recursive procedure. In other words, the nonnegative constants $\kappa_s$ generated by the recursive procedure must satisfy $\kappa_1 = 0$,
$$0 \leq \kappa_s \leq \Omega_s \quad \text{and} \quad \frac{\kappa_s}{s} \geq \frac{\kappa_{s-1}}{s-1} \quad \text{for } s = 2, \ldots, 2m. \tag{15}$$
Now, we first generate $\kappa_{2m}$ as a random number in the closed interval $[0, \Omega_{2m}]$, where $\Omega_{2m}$ is given in (8). Then, we have $0 \leq \kappa_{2m} \leq \Omega_{2m}$. Let
$$\Psi_{2m-1} = \min\left\{\Omega_{2m-1}, \left(1 - \frac{1}{2m}\right) \cdot \kappa_{2m}\right\},$$
where $\Omega_{2m-1}$ is given in (8). Then, we generate $\kappa_{2m-1}$ as a random number in the closed interval $[0, \Psi_{2m-1}]$. In this case, we have $0 \leq \kappa_{2m-1} \leq \Psi_{2m-1}$, which implies
$$0 \leq \kappa_{2m-1} \leq \Omega_{2m-1} \quad \text{and} \quad \kappa_{2m-1} \leq \left(1 - \frac{1}{2m}\right) \cdot \kappa_{2m}.$$
Therefore, we obtain
$$0 \leq \kappa_{2m-1} \leq \Omega_{2m-1} \quad \text{and} \quad \frac{\kappa_{2m}}{2m} \geq \frac{\kappa_{2m-1}}{2m-1},$$
which satisfies (15).
For $\Omega_{2m-2}$ given in (8), let
$$\Psi_{2m-2} = \min\left\{\Omega_{2m-2}, \left(1 - \frac{1}{2m-1}\right) \cdot \kappa_{2m-1}\right\}.$$
We similarly generate $\kappa_{2m-2}$ as a random number in the closed interval $[0, \Psi_{2m-2}]$. In this case, we have $0 \leq \kappa_{2m-2} \leq \Psi_{2m-2}$, which implies
$$0 \leq \kappa_{2m-2} \leq \Omega_{2m-2} \quad \text{and} \quad \kappa_{2m-2} \leq \left(1 - \frac{1}{2m-1}\right) \cdot \kappa_{2m-1}.$$
Therefore, we obtain
$$0 \leq \kappa_{2m-2} \leq \Omega_{2m-2} \quad \text{and} \quad \frac{\kappa_{2m-1}}{2m-1} \geq \frac{\kappa_{2m-2}}{2m-2},$$
which satisfies (15).
Recursively, let
$$\Psi_s = \min\left\{\Omega_s, \left(1 - \frac{1}{s+1}\right) \cdot \kappa_{s+1}\right\} \quad \text{for } s = 2, \ldots, 2m-1, \tag{16}$$
where $\Omega_s$ is given in (8). Then, for each $s = 2m-1, 2m-2, \ldots, 2$, we generate $\kappa_s$ as a random number in the closed interval $[0, \Psi_s]$. Then, we obtain
$$0 \leq \kappa_s \leq \Omega_s \quad \text{and} \quad \frac{\kappa_s}{s} \geq \frac{\kappa_{s-1}}{s-1} \quad \text{for } s = 2, \ldots, 2m,$$
which satisfies (15). Therefore, the recursive formula (16) can be used to generate the nonnegative constants $\kappa_s$ satisfying (15) for $s = 2, \ldots, 2m$, where $\kappa_1 = 0$ for convenience.
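The recursive procedure just described can be summarized in a few lines of Python; this is a sketch of the sampler, assuming the bounds $\Omega_s$ have already been computed.

```python
import random

def generate_kappa(omega):
    """Sample kappa satisfying (15); omega = {s: Omega_s} for s = 2..2m."""
    two_m = max(omega)
    kappa = {1: 0.0, two_m: random.uniform(0.0, omega[two_m])}
    for s in range(two_m - 1, 1, -1):   # s = 2m-1, ..., 2
        psi = min(omega[s], (1.0 - 1.0 / (s + 1)) * kappa[s + 1])
        kappa[s] = random.uniform(0.0, psi)
    return kappa

# Usage with the bounds computed in the numerical example of Section 7.
omega = {2: 0.85714, 3: 1.48107, 4: 2.31721, 5: 3.29449}
print(generate_kappa(omega))
```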

6.1.1. Crossover Operation

Suppose that
$$\hat{\kappa} = (0, \hat{\kappa}_2, \ldots, \hat{\kappa}_{2m}) \quad \text{and} \quad \bar{\kappa} = (0, \bar{\kappa}_2, \ldots, \bar{\kappa}_{2m})$$
are two vectors satisfying the inequalities (15). Given any $\lambda \in (0,1)$, we consider the crossover operation
$$\kappa_1 = 0 \quad \text{and} \quad \kappa_s = \lambda \hat{\kappa}_s + (1 - \lambda) \bar{\kappa}_s \quad \text{for } s = 2, \ldots, 2m. \tag{17}$$
Since
$$0 \leq \hat{\kappa}_s \leq \Omega_s \quad \text{and} \quad 0 \leq \bar{\kappa}_s \leq \Omega_s \quad \text{for } s = 2, \ldots, 2m,$$
it follows that $0 \leq \kappa_s \leq \Omega_s$ for $s = 2, \ldots, 2m$. Since
$$\frac{\hat{\kappa}_s}{s} \geq \frac{\hat{\kappa}_{s-1}}{s-1} \quad \text{and} \quad \frac{\bar{\kappa}_s}{s} \geq \frac{\bar{\kappa}_{s-1}}{s-1} \quad \text{for } s = 2, \ldots, 2m,$$
we have
$$\frac{\kappa_s}{s} = \frac{1}{s}\big(\lambda \hat{\kappa}_s + (1 - \lambda) \bar{\kappa}_s\big) = \lambda \cdot \frac{\hat{\kappa}_s}{s} + (1 - \lambda) \cdot \frac{\bar{\kappa}_s}{s} \geq \lambda \cdot \frac{\hat{\kappa}_{s-1}}{s-1} + (1 - \lambda) \cdot \frac{\bar{\kappa}_{s-1}}{s-1} = \frac{1}{s-1}\big(\lambda \hat{\kappa}_{s-1} + (1 - \lambda) \bar{\kappa}_{s-1}\big) = \frac{\kappa_{s-1}}{s-1}.$$
This shows that the crossover operation
$$\kappa = \lambda \hat{\kappa} + (1 - \lambda) \bar{\kappa} \tag{18}$$
given by (17) satisfies (15).
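In code, the crossover is a componentwise convex combination; the argument above guarantees that the child satisfies (15) whenever both parents do. A minimal sketch:

```python
import random

def crossover(kappa_a, kappa_b, lam=None):
    """Convex combination (17)-(18) of two kappa vectors; kappa_1 stays 0."""
    if lam is None:
        lam = random.uniform(0.0, 1.0)
    return {s: lam * kappa_a[s] + (1.0 - lam) * kappa_b[s] for s in kappa_a}
```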

6.1.2. Mutation Operation

We present a mutation operation based on the normal distribution. Suppose that $\bar{\kappa} = (0, \bar{\kappa}_2, \ldots, \bar{\kappa}_{2m})$ is a vector satisfying (15). We consider the mutation $\kappa$ of $\bar{\kappa}$ with components given by $\kappa_1 = 0$ and $\kappa_s$ for $s = 2, \ldots, 2m$, which are proposed below.
We first generate $N(0, \sigma_{2m}^2)$ and assign
$$\hat{\kappa}_{2m} = \bar{\kappa}_{2m} + N(0, \sigma_{2m}^2) = \bar{\kappa}_{2m} + \sigma_{2m} \cdot N(0, 1).$$
The new mutation $\kappa_{2m}$ is defined by
$$\kappa_{2m} = \begin{cases} \hat{\kappa}_{2m} & \text{if } \hat{\kappa}_{2m} \in [0, \Omega_{2m}] \\ \Omega_{2m} & \text{if } \hat{\kappa}_{2m} > \Omega_{2m} \\ 0 & \text{if } \hat{\kappa}_{2m} < 0, \end{cases} \tag{19}$$
where $\Omega_{2m}$ is given in (8). It is clear that $\kappa_{2m} \in [0, \Omega_{2m}]$.
Let
$$\Psi_{2m-1} = \min\left\{\Omega_{2m-1}, \left(1 - \frac{1}{2m}\right) \cdot \kappa_{2m}\right\}.$$
We generate $N(0, \sigma_{2m-1}^2)$ and assign
$$\hat{\kappa}_{2m-1} = \bar{\kappa}_{2m-1} + N(0, \sigma_{2m-1}^2) = \bar{\kappa}_{2m-1} + \sigma_{2m-1} \cdot N(0, 1).$$
The new mutation $\kappa_{2m-1}$ is defined by
$$\kappa_{2m-1} = \begin{cases} \hat{\kappa}_{2m-1} & \text{if } \hat{\kappa}_{2m-1} \in [0, \Psi_{2m-1}] \\ \Psi_{2m-1} & \text{if } \hat{\kappa}_{2m-1} > \Psi_{2m-1} \\ 0 & \text{if } \hat{\kappa}_{2m-1} < 0. \end{cases}$$
It is clear that $\kappa_{2m-1} \in [0, \Psi_{2m-1}]$.
Let
$$\Psi_{2m-2} = \min\left\{\Omega_{2m-2}, \left(1 - \frac{1}{2m-1}\right) \cdot \kappa_{2m-1}\right\}.$$
We generate $N(0, \sigma_{2m-2}^2)$ and assign
$$\hat{\kappa}_{2m-2} = \bar{\kappa}_{2m-2} + N(0, \sigma_{2m-2}^2) = \bar{\kappa}_{2m-2} + \sigma_{2m-2} \cdot N(0, 1).$$
The new mutation $\kappa_{2m-2}$ is defined by
$$\kappa_{2m-2} = \begin{cases} \hat{\kappa}_{2m-2} & \text{if } \hat{\kappa}_{2m-2} \in [0, \Psi_{2m-2}] \\ \Psi_{2m-2} & \text{if } \hat{\kappa}_{2m-2} > \Psi_{2m-2} \\ 0 & \text{if } \hat{\kappa}_{2m-2} < 0. \end{cases}$$
It is clear that $\kappa_{2m-2} \in [0, \Psi_{2m-2}]$.
Recursively, for $s = 2m-3, 2m-4, \ldots, 2$, we define
$$\Psi_s = \min\left\{\Omega_s, \left(1 - \frac{1}{s+1}\right) \cdot \kappa_{s+1}\right\}.$$
We generate $N(0, \sigma_s^2)$ and assign
$$\hat{\kappa}_s = \bar{\kappa}_s + N(0, \sigma_s^2) = \bar{\kappa}_s + \sigma_s \cdot N(0, 1).$$
The new mutation $\kappa_s$ is defined by
$$\kappa_s = \begin{cases} \hat{\kappa}_s & \text{if } \hat{\kappa}_s \in [0, \Psi_s] \\ \Psi_s & \text{if } \hat{\kappa}_s > \Psi_s \\ 0 & \text{if } \hat{\kappa}_s < 0. \end{cases}$$
It is clear that $\kappa_s \in [0, \Psi_s]$.
The new mutated vector $\kappa$ of $\bar{\kappa}$ obtained from the above procedure satisfies $\kappa_{2m} \in [0, \Omega_{2m}]$ and $\kappa_s \in [0, \Psi_s]$ for $s = 2, \ldots, 2m-1$. By referring to (16), we can similarly show that the mutated vector $\kappa$ also satisfies (15).
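The mutation operation can be sketched as follows: each component receives a normal perturbation and is then clipped back into its admissible interval, processed from $s = 2m$ downward so that each $\Psi_s$ is computed from the already-mutated $\kappa_{s+1}$.

```python
import random

def mutate(kappa, omega, sigma):
    """Normal perturbation clipped into [0, Psi_s], following the rule (19)."""
    two_m = max(omega)
    out = {1: 0.0}
    hat = kappa[two_m] + random.gauss(0.0, sigma[two_m])
    out[two_m] = min(max(hat, 0.0), omega[two_m])
    for s in range(two_m - 1, 1, -1):
        psi = min(omega[s], (1.0 - 1.0 / (s + 1)) * out[s + 1])
        hat = kappa[s] + random.gauss(0.0, sigma[s])
        out[s] = min(max(hat, 0.0), psi)
    return out
```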

6.2. Phase II

Now, phase II performs the procedure proposed in phase I for finer partitions $\Lambda$ of $[0,1]$. Suppose that the partition $\Lambda = \{0 = \alpha_1, \alpha_2, \ldots, \alpha_m = 1\}$ of $[0,1]$ was considered in phase I. Then, we perform the procedure proposed in phase I by considering a partition $\bar{\Lambda} = \{0 = \alpha_1, \alpha_2, \ldots, \alpha_n = 1\}$ of $[0,1]$ satisfying $\Lambda \subseteq \bar{\Lambda}$. Phase II is performed continuously for different finer partitions until the approximated best Shapley-nondominated solution cannot be improved. In this paper, we suggest two ways to determine the finer partition $\bar{\Lambda}$.
The simple way is to take a partition $\bar{\Lambda}$ of $[0,1]$ such that each subinterval of $[0,1]$ has equal length and $\Lambda \subseteq \bar{\Lambda}$ holds. In other words, the unit interval $[0,1]$ is equally subdivided by the partition $\bar{\Lambda}$ satisfying $\Lambda \subseteq \bar{\Lambda}$.
The second way is to evolve the old partition $\Lambda$ using evolutionary algorithms to obtain a new finer partition $\bar{\Lambda}$. We take a population $\mathcal{P} = \{\alpha_1, \alpha_2, \ldots, \alpha_m\}$ consisting of the points of the old partition $\Lambda$. We perform the crossover and mutation operations on the population $\mathcal{P}$ to obtain new points $\beta_1, \ldots, \beta_r$. Then, we generate a new finer partition $\bar{\Lambda}$ given by
$$\bar{\Lambda} = \Lambda \cup \{\beta_1, \ldots, \beta_r\}.$$
For example, we can perform the operations as follows; a code sketch of both refinement strategies is given after this list.
  • Crossover operation. Given any two points $\alpha_s$ and $\alpha_t$ in $\mathcal{P}$, we take the convex combination $\lambda \alpha_s + (1 - \lambda) \alpha_t$ for different $\lambda \in (0,1)$ to generate different new points.
  • Mutation operation. Given any $\alpha_s \in \mathcal{P}$, we consider the mutation $\alpha_s + \delta$, where $\delta$ is a random number in $[0,1]$. If $\alpha_s + \delta$ is in $[0,1]$, then $\alpha_s + \delta$ is taken as the new generated point. If $\alpha_s + \delta > 1$, then $\alpha_s + \delta - 1$ is taken as the new generated point.
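The following sketch implements both refinement strategies; the number $r$ of new points in the evolutionary variant is a tunable choice, not prescribed by the text.

```python
import random

def refine_uniform(partition):
    """Insert the midpoint of every subinterval; the old points are kept."""
    mids = [(a + b) / 2.0 for a, b in zip(partition, partition[1:])]
    return sorted(set(partition) | set(mids))

def refine_evolutionary(partition, r=3):
    """Add roughly r new points via convex combinations and wrapped shifts."""
    new_points = set()
    while len(new_points) < r:
        a, b = random.sample(partition, 2)
        lam = random.uniform(0.0, 1.0)
        new_points.add(lam * a + (1.0 - lam) * b)                     # crossover
        shifted = random.choice(partition) + random.uniform(0.0, 1.0)
        new_points.add(shifted if shifted <= 1.0 else shifted - 1.0)  # mutation
    return sorted(set(partition) | new_points)

print(refine_uniform([0.0, 0.5, 1.0]))   # [0.0, 0.25, 0.5, 0.75, 1.0]
```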
After a new finer partition $\bar{\Lambda}$ is generated, we continue to perform phase I using this new partition $\bar{\Lambda}$ to obtain the approximated best Shapley-nondominated solution. After this step, the partition $\bar{\Lambda}$ is treated as the old partition. Therefore, we generate a new finer partition $\hat{\Lambda}$ of $[0,1]$ satisfying $\bar{\Lambda} \subseteq \hat{\Lambda}$, and then perform phase I again using this new finer partition $\hat{\Lambda}$. Phase II is continued for different finer partitions until the approximated best Shapley-nondominated solution cannot be improved.

6.3. Computational Procedure

The detailed computational procedure of the evolutionary algorithm for phase I is given below. Throughout, the partition $\Lambda = \{0 = \alpha_1, \alpha_2, \ldots, \alpha_m = 1\}$ of $[0,1]$ is fixed.
  • Step 1 (initialization). The size of the population in this evolutionary algorithm is assumed to be $p$. The individuals playing the role of evolution are the vectors $\kappa$. The initial population is given by $\kappa^{(j)} = (\kappa_1^{(j)}, \ldots, \kappa_{2m}^{(j)})$ such that $\kappa_1^{(j)} = 0$ and $\kappa_{2m}^{(j)}$ is a random number in $[0, \Omega_{2m}]$, where $\Omega_{2m}$ is given in (8), for all $j = 1, \ldots, p$, and $\kappa_s^{(j)}$ are random numbers in $[0, \Psi_s]$, where $\Psi_s$ is given in (16), for all $s = 2, \ldots, 2m-1$ and $j = 1, \ldots, p$. Then, $\kappa^{(j)}$ satisfies the inequalities (15) for $j = 1, \ldots, p$.
  • Step 2 (fitness function). Given each individual $\kappa^{(j)}$, for $i = 1, \ldots, 2m$ and $j = 1, \ldots, p$, we calculate the normalized Shapley value $\bar{w}_i(\kappa^{(j)})$ using (9) and (12). For each $\kappa^{(j)}$, we solve the scalar optimization problem (SOP) to obtain $x^*(\kappa^{(j)})$ for $j = 1, \ldots, p$. By referring to (14), each $\kappa^{(j)}$ is assigned a fitness value given by the following fitness function
    $$\tau(\kappa^{(j)}) = \sum_{i=1}^{m} \bar{w}_i(\kappa^{(j)}) \cdot \tilde{f}_{\alpha_i}^L\big(\tilde{a}, x^*(\kappa^{(j)})\big) + \sum_{i=1}^{m} \bar{w}_{m+i}(\kappa^{(j)}) \cdot \tilde{f}_{\alpha_i}^U\big(\tilde{a}, x^*(\kappa^{(j)})\big)$$
    for $j = 1, \ldots, p$. According to the fitness values $\tau(\kappa^{(j)})$ for $j = 1, \ldots, p$, the $p$ individuals $\kappa^{(j)}$ are ranked in descending order. The fitness value of the first one is saved as the (initial) best value, named $\bar{\tau}_0$. We also save each $\kappa^{(j)}$ as an old elite $\kappa^{(*j)}$ by setting $\kappa_s^{(*j)} \leftarrow \kappa_s^{(j)}$ for $s = 1, \ldots, 2m$ and $j = 1, \ldots, p$.
  • Step 3 (tolerance). We set the tolerance $\epsilon$ and the maximum number $m^*$ of consecutive iterations satisfying the tolerance $\epsilon$. Set $l = 0$, which denotes the initial generation, and $k^* = 1$, which denotes the first time the tolerance $\epsilon$ is satisfied. This step may be clearer by referring to the stopping criterion in step 8.
  • Step 4 (mutation). We set $l \leftarrow l + 1$, which denotes the $l$th generation. In this algorithm, each individual must be mutated. Each individual $\kappa^{(j)}$ is mutated in the way of (19) and assigned to $\kappa^{(j+p)}$ for $j = 1, \ldots, p$. We need to generate $N(0, \sigma_s^2)$. In this paper, the standard deviation $\sigma_s$ is taken to be
    $$\sigma_s = \beta_s \cdot \tau(\kappa^{(j)}) + \zeta_s,$$
    where $\beta_s$ is a constant of proportionality to scale $\tau(\kappa^{(j)})$ and $\zeta_s$ represents an offset. According to (19), we obtain the mutated individual with $\kappa_1^{(j+p)} = 0$ and $\kappa_s^{(j+p)} \in [0, \Psi_s]$ for $s = 2, \ldots, 2m$. Since each individual $\kappa^{(j)}$ is mutated to $\kappa^{(j+p)}$ for $j = 1, \ldots, p$, after this step we have $2p$ individuals $\kappa^{(j)}$ for $j = 1, \ldots, 2p$.
  • Step 5 (crossover). We perform the crossover operation (18) by randomly selecting $\kappa^{(i)}$ and $\kappa^{(j)}$ for $i, j \in \{1, \ldots, 2p\}$ with $i \neq j$. We first generate a random number $\lambda \in (0,1)$. The new individual is given by
    $$\kappa^{(2p+1)} = \lambda \kappa^{(i)} + (1 - \lambda) \kappa^{(j)},$$
    where the components are given by
    $$\kappa_s^{(2p+1)} = \lambda \kappa_s^{(i)} + (1 - \lambda) \kappa_s^{(j)} \in [0, \Omega_s] \quad \text{for } s = 1, \ldots, 2m.$$
    After this step, we have $2p + 1$ individuals $\kappa^{(j)}$ for $j = 1, \ldots, 2p + 1$. We also see from the crossover operation (18) that $\kappa^{(2p+1)}$ satisfies the inequalities (15).
  • Step 6 (calculate new fitness). Now, we have $p + 1$ new individuals $\kappa^{(p+1)}, \ldots, \kappa^{(2p+1)}$. For each new individual $\kappa^{(j+p)}$ for $j = 1, \ldots, p + 1$, we calculate the normalized Shapley value $\bar{w}_i(\kappa^{(j+p)})$ using (9) and (12) for $i = 1, \ldots, 2m$. For each $\kappa^{(j+p)}$, we solve the scalar optimization problem (SOP) to obtain $x^*(\kappa^{(j+p)})$ for $j = 1, \ldots, p + 1$. By referring to (14), each $\kappa^{(j+p)}$ is assigned a fitness value given by
    $$\tau(\kappa^{(j+p)}) = \sum_{i=1}^{m} \bar{w}_i(\kappa^{(j+p)}) \cdot \tilde{f}_{\alpha_i}^L\big(\tilde{a}, x^*(\kappa^{(j+p)})\big) + \sum_{i=1}^{m} \bar{w}_{m+i}(\kappa^{(j+p)}) \cdot \tilde{f}_{\alpha_i}^U\big(\tilde{a}, x^*(\kappa^{(j+p)})\big)$$
    for $j = 1, \ldots, p + 1$.
  • Step 7 (selection). The $p + 1$ new individuals $\kappa^{(j+p)}$ for $j = 1, \ldots, p + 1$ obtained from steps 4 to 6 and the $p$ old elites $\kappa^{(*j)}$ from step 2 for $j = 1, \ldots, p$ are ranked in descending order of their corresponding fitness values $\tau(\kappa^{(j+p)})$ and $\tau(\kappa^{(*j)})$. The first $p$ (best) individuals are saved as the new elites $\kappa^{(*j)}$ for $j = 1, \ldots, p$, and the best fitness value is saved as $\bar{\tau}_l$ for the $l$th generation.
  • Step 8 (stopping criterion). After step 7, it may happen that $\bar{\tau}_l = \bar{\tau}_{l-1}$. In order not to be trapped in a local optimum, we proceed with more iterations, up to $m^*$ times (cf. step 3), even though $\bar{\tau}_l - \bar{\tau}_{l-1} < \epsilon$. If $\bar{\tau}_l - \bar{\tau}_{l-1} < \epsilon$ and the number of such iterations reaches $m^*$, then the algorithm halts and returns the solution of phase I. Otherwise, the new elites $\kappa^{(*j)}$ for $j = 1, \ldots, p$ are copied to the next generation $\kappa^{(j)}$ for $j = 1, \ldots, p$. We set $k^* \leftarrow k^* + 1$, where $k^*$ counts the number of iterations satisfying the tolerance $\bar{\tau}_l - \bar{\tau}_{l-1} < \epsilon$, and the algorithm proceeds to step 4.
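A compact sketch tying steps 1-8 together is given below. It assumes the helper sketches generate_kappa, mutate and crossover from Sections 6.1, 6.1.1 and 6.1.2, a user-supplied fitness oracle that computes the normalized Shapley weights, solves (SOP) and returns $\tau(\kappa)$ as in (14), and, for simplicity, a fixed $\sigma_s$ (the paper scales $\sigma_s$ by the parent's fitness).

```python
import random

def phase_one(fitness, omega, sigma, p=20, eps=1e-6, m_star=20):
    """Steps 1-8 of phase I for a fixed partition; returns (kappa, tau)."""
    elites = sorted((generate_kappa(omega) for _ in range(p)),
                    key=fitness, reverse=True)
    best, stall = fitness(elites[0]), 0
    while stall < m_star:
        # Step 4: every elite is mutated; step 5: one crossover child.
        pool = elites + [mutate(k, omega, sigma) for k in elites]
        pool.append(crossover(*random.sample(pool, 2)))
        # Steps 6-7: rank old elites and new individuals, keep the best p.
        elites = sorted(pool, key=fitness, reverse=True)[:p]
        # Step 8: halt after m_star consecutive improvements below eps.
        new_best = fitness(elites[0])
        stall = stall + 1 if new_best - best < eps else 0
        best = new_best
    return elites[0], best
```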
After step 8, we obtain an approximated best Shapley-nondominated solution for the fixed partition $\Lambda = \{0 = \alpha_1, \alpha_2, \ldots, \alpha_m = 1\}$ of $[0,1]$. This solution is denoted by $x^*(\Lambda)$ to emphasize that it depends on the partition $\Lambda$. Now, we proceed to phase II by considering finer partitions of $[0,1]$.
  • Step 1. By referring to Section 6.2, we generate a new finer partition $\bar{\Lambda}$ satisfying $\Lambda \subseteq \bar{\Lambda}$.
  • Step 2. Based on the new finer partition $\bar{\Lambda}$, we obtain a new approximated best Shapley-nondominated solution $x^*(\bar{\Lambda})$ using the evolutionary algorithm of phase I.
  • Step 3. If $\|x^*(\Lambda) - x^*(\bar{\Lambda})\| < \epsilon$ for a pre-determined tolerance $\epsilon$, then the algorithm halts and returns the final solution $x^*(\bar{\Lambda})$. Otherwise, we set $\bar{\Lambda}$ to be the old partition $\Lambda$ and proceed to step 1 to generate a new finer partition.
Finally, after step 3, we obtain the approximated best Shapley-nondominated solution, which is treated as an approximated nondominated solution of the original fuzzy optimization problem (FOP) by referring to Theorem 1.

7. Numerical Example

We consider the triangular fuzzy interval $\tilde{a} = (a^L, a, a^U)$ with the membership function defined by
$$\xi_{\tilde{a}}(r) = \begin{cases} \dfrac{r - a^L}{a - a^L} & \text{if } a^L \leq r \leq a \\[4pt] \dfrac{a^U - r}{a^U - a} & \text{if } a < r \leq a^U \\[4pt] 0 & \text{otherwise.} \end{cases}$$
Then, the $\alpha$-level set is given by
$$\tilde{a}_\alpha = \big[(1 - \alpha) a^L + \alpha a, \ (1 - \alpha) a^U + \alpha a\big];$$
that is,
$$\tilde{a}_\alpha^L = (1 - \alpha) a^L + \alpha a \quad \text{and} \quad \tilde{a}_\alpha^U = (1 - \alpha) a^U + \alpha a.$$
We want to solve the following fuzzy linear programming problem
$$(\mathrm{FLP})\quad \begin{aligned} \max\ & \tilde{4} \otimes \tilde{1}_{\{x_1\}} \oplus \tilde{5} \otimes \tilde{1}_{\{x_2\}} \oplus \tilde{6} \otimes \tilde{1}_{\{x_3\}} \\ \text{subject to}\ & x_1 - x_2 + x_3 \leq 20 \\ & 3x_1 + 2x_2 + 4x_3 \leq 42 \\ & 3x_1 + 2x_2 \leq 30 \\ & x_1, x_2, x_3 \geq 0, \end{aligned}$$
where $\tilde{4} = (3.5, 4, 4.5)$, $\tilde{5} = (4, 5, 5.5)$ and $\tilde{6} = (5, 6, 7)$ are triangular fuzzy intervals. The feasible set $X$ is given by
$$X = \big\{(x_1, x_2, x_3) \in \mathbb{R}_+^3 : x_1 - x_2 + x_3 \leq 20, \ 3x_1 + 2x_2 + 4x_3 \leq 42 \ \text{and} \ 3x_1 + 2x_2 \leq 30\big\}.$$
Now, we have
$$\tilde{4}_\alpha = \big[\tilde{4}_\alpha^L, \tilde{4}_\alpha^U\big] = \big[(1 - \alpha) \cdot 3.5 + 4\alpha, \ (1 - \alpha) \cdot 4.5 + 4\alpha\big] = \big[3.5 + 0.5\alpha, \ 4.5 - 0.5\alpha\big]$$
$$\tilde{5}_\alpha = \big[\tilde{5}_\alpha^L, \tilde{5}_\alpha^U\big] = \big[(1 - \alpha) \cdot 4 + 5\alpha, \ (1 - \alpha) \cdot 5.5 + 5\alpha\big] = \big[4 + \alpha, \ 5.5 - 0.5\alpha\big]$$
$$\tilde{6}_\alpha = \big[\tilde{6}_\alpha^L, \tilde{6}_\alpha^U\big] = \big[(1 - \alpha) \cdot 5 + 6\alpha, \ (1 - \alpha) \cdot 7 + 6\alpha\big] = \big[5 + \alpha, \ 7 - \alpha\big].$$
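These $\alpha$-level intervals follow directly from the closed form of $\tilde{a}_\alpha$ above; a few lines of Python reproduce them:

```python
def alpha_cut(tri, alpha):
    """Alpha-level interval of a triangular fuzzy interval tri = (aL, a, aU)."""
    aL, a, aU = tri
    return ((1 - alpha) * aL + alpha * a, (1 - alpha) * aU + alpha * a)

# The three fuzzy coefficients of (FLP) on the partition {0, 0.5, 1}.
for name, tri in [("4~", (3.5, 4, 4.5)), ("5~", (4, 5, 5.5)), ("6~", (5, 6, 7))]:
    print(name, [alpha_cut(tri, a) for a in (0.0, 0.5, 1.0)])
```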
In phase I, the partition of $[0,1]$ is taken to be
$$\Lambda = \{\alpha_1 = 0, \ \alpha_2 = 0.5, \ \alpha_3 = 1\}.$$
Then, we have
$$\tilde{f}_{\alpha_1}^L(\tilde{a}, x) = \tilde{f}_0^L(\tilde{a}, x) = 3.5x_1 + 4x_2 + 5x_3$$
$$\tilde{f}_{\alpha_2}^L(\tilde{a}, x) = \tilde{f}_{0.5}^L(\tilde{a}, x) = 3.75x_1 + 4.5x_2 + 5.5x_3$$
$$\tilde{f}_{\alpha_3}^L(\tilde{a}, x) = \tilde{f}_1^L(\tilde{a}, x) = 4x_1 + 5x_2 + 6x_3$$
and
$$\tilde{f}_{\alpha_1}^U(\tilde{a}, x) = \tilde{f}_0^U(\tilde{a}, x) = 4.5x_1 + 5.5x_2 + 7x_3$$
$$\tilde{f}_{\alpha_2}^U(\tilde{a}, x) = \tilde{f}_{0.5}^U(\tilde{a}, x) = 4.25x_1 + 5.25x_2 + 6.5x_3$$
$$\tilde{f}_{\alpha_3}^U(\tilde{a}, x) = \tilde{f}_1^U(\tilde{a}, x) = 4x_1 + 5x_2 + 6x_3.$$
Since $\tilde{f}_{\alpha_3}^L(\tilde{a}, x) = \tilde{f}_{\alpha_3}^U(\tilde{a}, x)$, the corresponding scalar optimization problem (SOP) is given by
$$\max\ w_1 \tilde{f}_{\alpha_1}^L(\tilde{a}, x) + w_2 \tilde{f}_{\alpha_2}^L(\tilde{a}, x) + w_3 \tilde{f}_{\alpha_3}^L(\tilde{a}, x) + w_4 \tilde{f}_{\alpha_1}^U(\tilde{a}, x) + w_5 \tilde{f}_{\alpha_2}^U(\tilde{a}, x) \quad \text{subject to}\ (x_1, x_2, x_3) \in X.$$
We first obtain the ideal objective values $(\zeta_1^*, \ldots, \zeta_5^*)$ given by
$$\zeta_1^* = \sup_{x \in X} \tilde{f}_{\alpha_1}^L(\tilde{a}, x) = 75, \quad \zeta_2^* = \sup_{x \in X} \tilde{f}_{\alpha_2}^L(\tilde{a}, x) = 84 \quad \text{and} \quad \zeta_3^* = \sup_{x \in X} \tilde{f}_{\alpha_3}^L(\tilde{a}, x) = 93$$
and
$$\zeta_4^* = \sup_{x \in X} \tilde{f}_{\alpha_1}^U(\tilde{a}, x) = 103.5 \quad \text{and} \quad \zeta_5^* = \sup_{x \in X} \tilde{f}_{\alpha_2}^U(\tilde{a}, x) = 98.25.$$
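Each ideal value $\zeta_i^*$ is the optimal value of an ordinary linear program over $X$, so any LP solver can reproduce them. The sketch below uses scipy.optimize.linprog (an assumption about available tooling, since the paper does not specify a solver) and negates the objective because linprog minimizes.

```python
from scipy.optimize import linprog

A_ub = [[1, -1, 1], [3, 2, 4], [3, 2, 0]]   # the three constraints defining X
b_ub = [20, 42, 30]
objectives = [
    ("zeta_1*", [3.50, 4.00, 5.00]),  # f_{alpha_1}^L, alpha_1 = 0
    ("zeta_2*", [3.75, 4.50, 5.50]),  # f_{alpha_2}^L, alpha_2 = 0.5
    ("zeta_3*", [4.00, 5.00, 6.00]),  # f_{alpha_3}^L = f_{alpha_3}^U
    ("zeta_4*", [4.50, 5.50, 7.00]),  # f_{alpha_1}^U
    ("zeta_5*", [4.25, 5.25, 6.50]),  # f_{alpha_2}^U
]
for name, c in objectives:
    res = linprog([-cj for cj in c], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 3)
    print(name, "=", -res.fun)        # 75, 84, 93, 103.5, 98.25
```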
According to the above settings, we have $N = \{1, 2, 3, 4, 5\}$ representing five players. We take
$$v(\{1\}) = 0.5 \cdot \zeta_1^*, \quad v(\{2\}) = 0.6 \cdot \zeta_2^* \quad \text{and} \quad v(\{3\}) = 0.7 \cdot \zeta_3^*$$
and
$$v(\{4\}) = 0.5 \cdot \zeta_4^* \quad \text{and} \quad v(\{5\}) = 0.7 \cdot \zeta_5^*.$$
By referring to (8), for $|S| = 2$, we have
$$\Omega_2(\{1,2\}) = \frac{2(\zeta_1^* + \zeta_2^*)}{v(\{1\}) + v(\{2\})} - 2 = \frac{2(\zeta_1^* + \zeta_2^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^*} - 2 = 1.61774$$
$$\Omega_2(\{1,3\}) = \frac{2(\zeta_1^* + \zeta_3^*)}{v(\{1\}) + v(\{3\})} - 2 = \frac{2(\zeta_1^* + \zeta_3^*)}{0.5\,\zeta_1^* + 0.7\,\zeta_3^*} - 2 = 1.27485$$
$$\Omega_2(\{1,4\}) = \frac{2(\zeta_1^* + \zeta_4^*)}{v(\{1\}) + v(\{4\})} - 2 = \frac{2(\zeta_1^* + \zeta_4^*)}{0.5\,\zeta_1^* + 0.5\,\zeta_4^*} - 2 = 2$$
$$\Omega_2(\{1,5\}) = \frac{2(\zeta_1^* + \zeta_5^*)}{v(\{1\}) + v(\{5\})} - 2 = \frac{2(\zeta_1^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.7\,\zeta_5^*} - 2 = 1.26041$$
$$\Omega_2(\{2,3\}) = \frac{2(\zeta_2^* + \zeta_3^*)}{v(\{2\}) + v(\{3\})} - 2 = \frac{2(\zeta_2^* + \zeta_3^*)}{0.6\,\zeta_2^* + 0.7\,\zeta_3^*} - 2 = 1.06493$$
$$\Omega_2(\{2,4\}) = \frac{2(\zeta_2^* + \zeta_4^*)}{v(\{2\}) + v(\{4\})} - 2 = \frac{2(\zeta_2^* + \zeta_4^*)}{0.6\,\zeta_2^* + 0.5\,\zeta_4^*} - 2 = 1.67107$$
$$\Omega_2(\{2,5\}) = \frac{2(\zeta_2^* + \zeta_5^*)}{v(\{2\}) + v(\{5\})} - 2 = \frac{2(\zeta_2^* + \zeta_5^*)}{0.6\,\zeta_2^* + 0.7\,\zeta_5^*} - 2 = 1.05853$$
$$\Omega_2(\{3,4\}) = \frac{2(\zeta_3^* + \zeta_4^*)}{v(\{3\}) + v(\{4\})} - 2 = \frac{2(\zeta_3^* + \zeta_4^*)}{0.7\,\zeta_3^* + 0.5\,\zeta_4^*} - 2 = 1.36329$$
$$\Omega_2(\{3,5\}) = \frac{2(\zeta_3^* + \zeta_5^*)}{v(\{3\}) + v(\{5\})} - 2 = \frac{2(\zeta_3^* + \zeta_5^*)}{0.7\,\zeta_3^* + 0.7\,\zeta_5^*} - 2 = 0.85714$$
$$\Omega_2(\{4,5\}) = \frac{2(\zeta_4^* + \zeta_5^*)}{v(\{4\}) + v(\{5\})} - 2 = \frac{2(\zeta_4^* + \zeta_5^*)}{0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 2 = 1.34785.$$
Therefore, we obtain
$$\Omega_2 = \min\big\{\Omega_2(\{1,2\}), \Omega_2(\{1,3\}), \Omega_2(\{1,4\}), \Omega_2(\{1,5\}), \Omega_2(\{2,3\}), \Omega_2(\{2,4\}), \Omega_2(\{2,5\}), \Omega_2(\{3,4\}), \Omega_2(\{3,5\}), \Omega_2(\{4,5\})\big\} = 0.85714.$$
For $|S| = 3$, we have
$$\Omega_3(\{1,2,3\}) = \frac{3(\zeta_1^* + \zeta_2^* + \zeta_3^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^*} - 3 = 1.94117$$
$$\Omega_3(\{1,2,4\}) = \frac{3(\zeta_1^* + \zeta_2^* + \zeta_4^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.5\,\zeta_4^*} - 3 = 2.63909$$
$$\Omega_3(\{1,2,5\}) = \frac{3(\zeta_1^* + \zeta_2^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_5^*} - 3 = 1.92580$$
$$\Omega_3(\{1,3,4\}) = \frac{3(\zeta_1^* + \zeta_3^* + \zeta_4^*)}{0.5\,\zeta_1^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^*} - 3 = 2.27697$$
$$\Omega_3(\{1,3,5\}) = \frac{3(\zeta_1^* + \zeta_3^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.7\,\zeta_3^* + 0.7\,\zeta_5^*} - 3 = 1.66083$$
$$\Omega_3(\{1,4,5\}) = \frac{3(\zeta_1^* + \zeta_4^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 3 = 2.25391$$
$$\Omega_3(\{2,3,4\}) = \frac{3(\zeta_2^* + \zeta_3^* + \zeta_4^*)}{0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^*} - 3 = 2.03139$$
$$\Omega_3(\{2,3,5\}) = \frac{3(\zeta_2^* + \zeta_3^* + \zeta_5^*)}{0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.7\,\zeta_5^*} - 3 = 1.48107$$
$$\Omega_3(\{2,4,5\}) = \frac{3(\zeta_2^* + \zeta_4^* + \zeta_5^*)}{0.6\,\zeta_2^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 3 = 2.01536$$
$$\Omega_3(\{3,4,5\}) = \frac{3(\zeta_3^* + \zeta_4^* + \zeta_5^*)}{0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 3 = 1.76363.$$
Therefore, we obtain
$$\Omega_3 = \min\big\{\Omega_3(\{1,2,3\}), \Omega_3(\{1,2,4\}), \Omega_3(\{1,2,5\}), \Omega_3(\{1,3,4\}), \Omega_3(\{1,3,5\}), \Omega_3(\{1,4,5\}), \Omega_3(\{2,3,4\}), \Omega_3(\{2,3,5\}), \Omega_3(\{2,4,5\}), \Omega_3(\{3,4,5\})\big\} = 1.48107.$$
For $|S| = 4$, we have
$$\Omega_4(\{1,2,3,4\}) = \frac{4(\zeta_1^* + \zeta_2^* + \zeta_3^* + \zeta_4^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^*} - 4 = 2.94505$$
$$\Omega_4(\{1,2,3,5\}) = \frac{4(\zeta_1^* + \zeta_2^* + \zeta_3^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.7\,\zeta_5^*} - 4 = 2.31721$$
$$\Omega_4(\{1,2,4,5\}) = \frac{4(\zeta_1^* + \zeta_2^* + \zeta_4^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 4 = 2.92335$$
$$\Omega_4(\{1,3,4,5\}) = \frac{4(\zeta_1^* + \zeta_3^* + \zeta_4^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 4 = 2.62857$$
$$\Omega_4(\{2,3,4,5\}) = \frac{4(\zeta_2^* + \zeta_3^* + \zeta_4^* + \zeta_5^*)}{0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 4 = 2.41881.$$
Therefore, we obtain
$$\Omega_4 = \min\big\{\Omega_4(\{1,2,3,4\}), \Omega_4(\{1,2,3,5\}), \Omega_4(\{1,2,4,5\}), \Omega_4(\{1,3,4,5\}), \Omega_4(\{2,3,4,5\})\big\} = 2.31721.$$
Finally, we obtain
$$\Omega_5 = \Omega_5(\{1,2,3,4,5\}) = \frac{5(\zeta_1^* + \zeta_2^* + \zeta_3^* + \zeta_4^* + \zeta_5^*)}{0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*} - 5 = 3.29449.$$
For $s = |S| \geq 2$, according to (6), we have
$$v(S) = \sum_{l=1}^{s} v(\{i_l\}) + \frac{\kappa_s}{s} \sum_{l=1}^{s} v(\{i_l\}) = \left(1 + \frac{\kappa_s}{s}\right) \sum_{l=1}^{s} v(\{i_l\}).$$
More precisely, for $|S| = 2$, we have
$$v(\{1,2\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{1\}) + v(\{2\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^*\big)$$
$$v(\{1,3\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{1\}) + v(\{3\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.5\,\zeta_1^* + 0.7\,\zeta_3^*\big)$$
$$v(\{1,4\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{1\}) + v(\{4\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.5\,\zeta_1^* + 0.5\,\zeta_4^*\big)$$
$$v(\{1,5\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{1\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.5\,\zeta_1^* + 0.7\,\zeta_5^*\big)$$
$$v(\{2,3\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{2\}) + v(\{3\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.6\,\zeta_2^* + 0.7\,\zeta_3^*\big)$$
$$v(\{2,4\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{2\}) + v(\{4\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.6\,\zeta_2^* + 0.5\,\zeta_4^*\big)$$
$$v(\{2,5\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{2\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.6\,\zeta_2^* + 0.7\,\zeta_5^*\big)$$
$$v(\{3,4\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{3\}) + v(\{4\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.7\,\zeta_3^* + 0.5\,\zeta_4^*\big)$$
$$v(\{3,5\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{3\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.7\,\zeta_3^* + 0.7\,\zeta_5^*\big)$$
$$v(\{4,5\}) = \left(1 + \frac{\kappa_2}{2}\right)\big(v(\{4\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_2}{2}\right)\big(0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big).$$
For $|S| = 3$, we have
$$v(\{1,2,3\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{1\}) + v(\{2\}) + v(\{3\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^*\big)$$
$$v(\{1,2,4\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{1\}) + v(\{2\}) + v(\{4\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.5\,\zeta_4^*\big)$$
$$v(\{1,2,5\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{1\}) + v(\{2\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_5^*\big)$$
$$v(\{1,3,4\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{1\}) + v(\{3\}) + v(\{4\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.5\,\zeta_1^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^*\big)$$
$$v(\{1,3,5\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{1\}) + v(\{3\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.5\,\zeta_1^* + 0.7\,\zeta_3^* + 0.7\,\zeta_5^*\big)$$
$$v(\{1,4,5\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{1\}) + v(\{4\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.5\,\zeta_1^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big)$$
$$v(\{2,3,4\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{2\}) + v(\{3\}) + v(\{4\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^*\big)$$
$$v(\{2,3,5\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{2\}) + v(\{3\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.7\,\zeta_5^*\big)$$
$$v(\{2,4,5\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{2\}) + v(\{4\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.6\,\zeta_2^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big)$$
$$v(\{3,4,5\}) = \left(1 + \frac{\kappa_3}{3}\right)\big(v(\{3\}) + v(\{4\}) + v(\{5\})\big) = \left(1 + \frac{\kappa_3}{3}\right)\big(0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big).$$
For $|S| = 4$, we have
$$v(\{1,2,3,4\}) = \left(1 + \frac{\kappa_4}{4}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^*\big)$$
$$v(\{1,2,3,5\}) = \left(1 + \frac{\kappa_4}{4}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.7\,\zeta_5^*\big)$$
$$v(\{1,2,4,5\}) = \left(1 + \frac{\kappa_4}{4}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big)$$
$$v(\{1,3,4,5\}) = \left(1 + \frac{\kappa_4}{4}\right)\big(0.5\,\zeta_1^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big)$$
$$v(\{2,3,4,5\}) = \left(1 + \frac{\kappa_4}{4}\right)\big(0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big).$$
Finally, for $|S| = 5$, we have
$$v(\{1,2,3,4,5\}) = \left(1 + \frac{\kappa_5}{5}\right)\big(0.5\,\zeta_1^* + 0.6\,\zeta_2^* + 0.7\,\zeta_3^* + 0.5\,\zeta_4^* + 0.7\,\zeta_5^*\big).$$
The detailed computational procedure for phase I is presented below.
  • Step 1 (initialization). The population size is taken to be $p = 20$. The initial population is determined by setting $\kappa^{(j)} = (\kappa_1^{(j)}, \kappa_2^{(j)}, \kappa_3^{(j)}, \kappa_4^{(j)}, \kappa_5^{(j)})$ such that $\kappa_1^{(j)} = 0$ and $\kappa_5^{(j)}$ is a random number in $[0, \Omega_5] = [0, 3.29449]$ for all $j = 1, \ldots, 20$, and $\kappa_s^{(j)}$ are random numbers in $[0, \Psi_s]$ for all $s = 2, 3, 4$ and $j = 1, \ldots, 20$, where $\Psi_s$ refers to (16); that is, we have
    $$\Psi_4 = \min\left\{\Omega_4, \left(1 - \frac{1}{5}\right) \cdot \kappa_5^{(j)}\right\} = \min\left\{2.31721, \frac{4 \kappa_5^{(j)}}{5}\right\}$$
    $$\Psi_3 = \min\left\{\Omega_3, \left(1 - \frac{1}{4}\right) \cdot \kappa_4^{(j)}\right\} = \min\left\{1.48107, \frac{3 \kappa_4^{(j)}}{4}\right\}$$
    $$\Psi_2 = \min\left\{\Omega_2, \left(1 - \frac{1}{3}\right) \cdot \kappa_3^{(j)}\right\} = \min\left\{0.85714, \frac{2 \kappa_3^{(j)}}{3}\right\}.$$
    Then, $\kappa^{(j)}$ satisfies the inequalities (15) for $j = 1, \ldots, p$.
  • Step 2 (fitness function). Given each $\kappa^{(j)}$, according to (9), we calculate
    $$w_i(\kappa^{(j)}) \equiv w_i(v) = \sum_{\{S \,:\, i \in S \subseteq N\}} \frac{(|S| - 1)!\,(|N| - |S|)!}{|N|!} \cdot \big(v(S) - v(S \setminus \{i\})\big).$$
    Let
    $$\mathcal{S}_1 = \{S : \{1\} \subseteq S \subseteq N\} = \big\{\{1\}, \{1,2\}, \{1,3\}, \{1,4\}, \{1,5\}, \{1,2,3\}, \{1,2,4\}, \{1,2,5\}, \{1,3,4\}, \{1,3,5\}, \{1,4,5\}, \{1,2,3,4\}, \{1,2,3,5\}, \{1,2,4,5\}, \{1,3,4,5\}, \{1,2,3,4,5\}\big\}$$
    $$\mathcal{S}_2 = \{S : \{2\} \subseteq S \subseteq N\} = \big\{\{2\}, \{1,2\}, \{2,3\}, \{2,4\}, \{2,5\}, \{1,2,3\}, \{1,2,4\}, \{1,2,5\}, \{2,3,4\}, \{2,3,5\}, \{2,4,5\}, \{1,2,3,4\}, \{1,2,3,5\}, \{1,2,4,5\}, \{2,3,4,5\}, \{1,2,3,4,5\}\big\}$$
    $$\mathcal{S}_3 = \{S : \{3\} \subseteq S \subseteq N\} = \big\{\{3\}, \{1,3\}, \{2,3\}, \{3,4\}, \{3,5\}, \{1,2,3\}, \{1,3,4\}, \{1,3,5\}, \{2,3,4\}, \{2,3,5\}, \{3,4,5\}, \{1,2,3,4\}, \{1,2,3,5\}, \{1,3,4,5\}, \{2,3,4,5\}, \{1,2,3,4,5\}\big\}$$
    $$\mathcal{S}_4 = \{S : \{4\} \subseteq S \subseteq N\} = \big\{\{4\}, \{1,4\}, \{2,4\}, \{3,4\}, \{4,5\}, \{1,2,4\}, \{1,3,4\}, \{1,4,5\}, \{2,3,4\}, \{2,4,5\}, \{3,4,5\}, \{1,2,3,4\}, \{1,2,4,5\}, \{1,3,4,5\}, \{2,3,4,5\}, \{1,2,3,4,5\}\big\}$$
    $$\mathcal{S}_5 = \{S : \{5\} \subseteq S \subseteq N\} = \big\{\{5\}, \{1,5\}, \{2,5\}, \{3,5\}, \{4,5\}, \{1,2,5\}, \{1,3,5\}, \{1,4,5\}, \{2,3,5\}, \{2,4,5\}, \{3,4,5\}, \{1,2,3,5\}, \{1,2,4,5\}, \{1,3,4,5\}, \{2,3,4,5\}, \{1,2,3,4,5\}\big\}$$
    More precisely, we have
    $$\begin{aligned} w_1(\kappa^{(j)}) = {} & \frac{0!\,4!}{5!}\big(v(\{1\}) - v(\emptyset)\big) + \frac{1!\,3!}{5!}\Big[\big(v(\{1,2\}) - v(\{2\})\big) + \big(v(\{1,3\}) - v(\{3\})\big) + \big(v(\{1,4\}) - v(\{4\})\big) + \big(v(\{1,5\}) - v(\{5\})\big)\Big] \\ & + \frac{2!\,2!}{5!}\Big[\big(v(\{1,2,3\}) - v(\{2,3\})\big) + \big(v(\{1,2,4\}) - v(\{2,4\})\big) + \big(v(\{1,2,5\}) - v(\{2,5\})\big) + \big(v(\{1,3,4\}) - v(\{3,4\})\big) + \big(v(\{1,3,5\}) - v(\{3,5\})\big) + \big(v(\{1,4,5\}) - v(\{4,5\})\big)\Big] \\ & + \frac{3!\,1!}{5!}\Big[\big(v(\{1,2,3,4\}) - v(\{2,3,4\})\big) + \big(v(\{1,2,3,5\}) - v(\{2,3,5\})\big) + \big(v(\{1,2,4,5\}) - v(\{2,4,5\})\big) + \big(v(\{1,3,4,5\}) - v(\{3,4,5\})\big)\Big] \\ & + \frac{4!\,0!}{5!}\big(v(\{1,2,3,4,5\}) - v(\{2,3,4,5\})\big), \end{aligned}$$
    $$\begin{aligned} w_2(\kappa^{(j)}) = {} & \frac{0!\,4!}{5!}\big(v(\{2\}) - v(\emptyset)\big) + \frac{1!\,3!}{5!}\Big[\big(v(\{1,2\}) - v(\{1\})\big) + \big(v(\{2,3\}) - v(\{3\})\big) + \big(v(\{2,4\}) - v(\{4\})\big) + \big(v(\{2,5\}) - v(\{5\})\big)\Big] \\ & + \frac{2!\,2!}{5!}\Big[\big(v(\{1,2,3\}) - v(\{1,3\})\big) + \big(v(\{1,2,4\}) - v(\{1,4\})\big) + \big(v(\{1,2,5\}) - v(\{1,5\})\big) + \big(v(\{2,3,4\}) - v(\{3,4\})\big) + \big(v(\{2,3,5\}) - v(\{3,5\})\big) + \big(v(\{2,4,5\}) - v(\{4,5\})\big)\Big] \\ & + \frac{3!\,1!}{5!}\Big[\big(v(\{1,2,3,4\}) - v(\{1,3,4\})\big) + \big(v(\{1,2,3,5\}) - v(\{1,3,5\})\big) + \big(v(\{1,2,4,5\}) - v(\{1,4,5\})\big) + \big(v(\{2,3,4,5\}) - v(\{3,4,5\})\big)\Big] \\ & + \frac{4!\,0!}{5!}\big(v(\{1,2,3,4,5\}) - v(\{1,3,4,5\})\big), \end{aligned}$$
    $$\begin{aligned} w_3(\kappa^{(j)}) = {} & \frac{0!\,4!}{5!}\big(v(\{3\}) - v(\emptyset)\big) + \frac{1!\,3!}{5!}\Big[\big(v(\{1,3\}) - v(\{1\})\big) + \big(v(\{2,3\}) - v(\{2\})\big) + \big(v(\{3,4\}) - v(\{4\})\big) + \big(v(\{3,5\}) - v(\{5\})\big)\Big] \\ & + \frac{2!\,2!}{5!}\Big[\big(v(\{1,2,3\}) - v(\{1,2\})\big) + \big(v(\{1,3,4\}) - v(\{1,4\})\big) + \big(v(\{1,3,5\}) - v(\{1,5\})\big) + \big(v(\{2,3,4\}) - v(\{2,4\})\big) + \big(v(\{2,3,5\}) - v(\{2,5\})\big) + \big(v(\{3,4,5\}) - v(\{4,5\})\big)\Big] \\ & + \frac{3!\,1!}{5!}\Big[\big(v(\{1,2,3,4\}) - v(\{1,2,4\})\big) + \big(v(\{1,2,3,5\}) - v(\{1,2,5\})\big) + \big(v(\{1,3,4,5\}) - v(\{1,4,5\})\big) + \big(v(\{2,3,4,5\}) - v(\{2,4,5\})\big)\Big] \\ & + \frac{4!\,0!}{5!}\big(v(\{1,2,3,4,5\}) - v(\{1,2,4,5\})\big), \end{aligned}$$
    $$\begin{aligned} w_4(\kappa^{(j)}) = {} & \frac{0!\,4!}{5!}\big(v(\{4\}) - v(\emptyset)\big) + \frac{1!\,3!}{5!}\Big[\big(v(\{1,4\}) - v(\{1\})\big) + \big(v(\{2,4\}) - v(\{2\})\big) + \big(v(\{3,4\}) - v(\{3\})\big) + \big(v(\{4,5\}) - v(\{5\})\big)\Big] \\ & + \frac{2!\,2!}{5!}\Big[\big(v(\{1,2,4\}) - v(\{1,2\})\big) + \big(v(\{1,3,4\}) - v(\{1,3\})\big) + \big(v(\{1,4,5\}) - v(\{1,5\})\big) + \big(v(\{2,3,4\}) - v(\{2,3\})\big) + \big(v(\{2,4,5\}) - v(\{2,5\})\big) + \big(v(\{3,4,5\}) - v(\{3,5\})\big)\Big] \\ & + \frac{3!\,1!}{5!}\Big[\big(v(\{1,2,3,4\}) - v(\{1,2,3\})\big) + \big(v(\{1,2,4,5\}) - v(\{1,2,5\})\big) + \big(v(\{1,3,4,5\}) - v(\{1,3,5\})\big) + \big(v(\{2,3,4,5\}) - v(\{2,3,5\})\big)\Big] \\ & + \frac{4!\,0!}{5!}\big(v(\{1,2,3,4,5\}) - v(\{1,2,3,5\})\big) \end{aligned}$$
    and
    $$\begin{aligned} w_5(\kappa^{(j)}) = {} & \frac{0!\,4!}{5!}\big(v(\{5\}) - v(\emptyset)\big) + \frac{1!\,3!}{5!}\Big[\big(v(\{1,5\}) - v(\{1\})\big) + \big(v(\{2,5\}) - v(\{2\})\big) + \big(v(\{3,5\}) - v(\{3\})\big) + \big(v(\{4,5\}) - v(\{4\})\big)\Big] \\ & + \frac{2!\,2!}{5!}\Big[\big(v(\{1,2,5\}) - v(\{1,2\})\big) + \big(v(\{1,3,5\}) - v(\{1,3\})\big) + \big(v(\{1,4,5\}) - v(\{1,4\})\big) + \big(v(\{2,3,5\}) - v(\{2,3\})\big) + \big(v(\{2,4,5\}) - v(\{2,4\})\big) + \big(v(\{3,4,5\}) - v(\{3,4\})\big)\Big] \\ & + \frac{3!\,1!}{5!}\Big[\big(v(\{1,2,3,5\}) - v(\{1,2,3\})\big) + \big(v(\{1,2,4,5\}) - v(\{1,2,4\})\big) + \big(v(\{1,3,4,5\}) - v(\{1,3,4\})\big) + \big(v(\{2,3,4,5\}) - v(\{2,3,4\})\big)\Big] \\ & + \frac{4!\,0!}{5!}\big(v(\{1,2,3,4,5\}) - v(\{1,2,3,4\})\big). \end{aligned}$$
    Then, according to (12), we also calculate the normalized Shapley value
    w ¯ i ( κ ( j ) ) = w i ( κ ( j ) ) w 1 ( κ ( j ) ) + w 2 ( κ ( j ) ) + w 3 ( κ ( j ) ) + w 4 ( κ ( j ) ) + w 5 ( κ ( j ) )
    for i = 1 , 2 , 3 , 4 , 5 and j = 1 , , p . Each κ ( j ) is assigned a fitness value given by
    τ ( κ ( j ) ) = w ¯ 1 ( κ ( j ) ) · f ˜ α 1 L x * ( κ ( j ) ) + w ¯ 2 ( κ ( j ) ) · f ˜ α 2 L x * ( κ ( j ) ) + w ¯ 3 ( κ ( j ) ) · f ˜ α 3 L x * ( κ ( j ) ) + w ¯ 4 ( κ ( j ) ) · f ˜ α 1 U x * ( κ ( j ) ) + w ¯ 5 ( κ ( j ) ) · f ˜ α 2 U x * ( κ ( j ) )
for $j = 1, \ldots, p$. The $p$ individuals $\kappa^{(j)}$, $j = 1, \ldots, p$, are ranked in descending order of their fitness values $\tau(\kappa^{(j)})$, and the fitness of the first (best) one is saved as the initial best value $\bar{\tau}_0$. Each $\kappa^{(j)}$ is also saved as an elite $\kappa^{(*j)}$, that is, $\kappa_s^{(*j)} \leftarrow \kappa_s^{(j)}$ for $s = 1, \ldots, 5$ and $j = 1, \ldots, p$. (A Python sketch of this Shapley-value and fitness computation is given after this list.)
• Step 3 (tolerance). We set $l = 0$, $m^* = 20$, $k^* = 1$ and the tolerance $\epsilon = 10^{-6}$.
• Step 4 (mutation). We set $l \leftarrow l + 1$, which indicates the $l$th generation. Each
$$\kappa^{(j)} = \bigl(0, \kappa_2^{(j)}, \kappa_3^{(j)}, \kappa_4^{(j)}, \kappa_5^{(j)}\bigr)$$
is mutated and assigned to
$$\kappa^{(j+p)} = \bigl(0, \kappa_2^{(j+p)}, \kappa_3^{(j+p)}, \kappa_4^{(j+p)}, \kappa_5^{(j+p)}\bigr)$$
in the way of (19) for $j = 1, \ldots, p$. Generate a random number from $N(0,1)$ and assign
$$\hat{\kappa}_5^{(j+p)} = \kappa_5^{(j+p)} + N\bigl(0, \sigma_5^2\bigr) = \kappa_5^{(j+p)} + \sigma_5 \cdot N(0,1).$$
The new mutated component $\kappa_5^{(j+p)}$ is defined by
$$\kappa_5^{(j+p)} = \begin{cases} \hat{\kappa}_5^{(j+p)} & \text{if } \hat{\kappa}_5^{(j+p)} \in [0, \Omega_5] = [0, 2]\\[2pt] \Omega_5 = 2 & \text{if } \hat{\kappa}_5^{(j+p)} > \Omega_5 = 2\\[2pt] 0 & \text{if } \hat{\kappa}_5^{(j+p)} < 0. \end{cases}$$
Then, $\kappa_5^{(j+p)} \in [0, \Omega_5] = [0, 2]$. Let
$$\Psi_4 = \min\Bigl\{\Omega_4,\ 1 - \tfrac{1}{5}\cdot\kappa_5^{(j+p)}\Bigr\}.$$
Generate a random number from $N(0,1)$ and assign
$$\hat{\kappa}_4^{(j+p)} = \kappa_4^{(j+p)} + N\bigl(0, \sigma_4^2\bigr) = \kappa_4^{(j+p)} + \sigma_4 \cdot N(0,1).$$
The new mutated component $\kappa_4^{(j+p)}$ is defined by
$$\kappa_4^{(j+p)} = \begin{cases} \hat{\kappa}_4^{(j+p)} & \text{if } \hat{\kappa}_4^{(j+p)} \in [0, \Psi_4]\\[2pt] \Psi_4 & \text{if } \hat{\kappa}_4^{(j+p)} > \Psi_4\\[2pt] 0 & \text{if } \hat{\kappa}_4^{(j+p)} < 0. \end{cases}$$
Then, $\kappa_4^{(j+p)} \in [0, \Psi_4]$. We can similarly obtain $\kappa_3^{(j+p)} \in [0, \Psi_3]$ and $\kappa_2^{(j+p)} \in [0, \Psi_2]$. For $s = 2, 3, 4, 5$, the standard deviation $\sigma_s$ takes the following form
$$\sigma_s = \beta_s \cdot \tau\bigl(\kappa^{(j)}\bigr) + \zeta_s,$$
where $\beta_s$ is a constant of proportionality that scales $\tau(\kappa^{(j)})$ and $\zeta_s$ represents an offset. In this example, we take
$$\beta_2 = 0.02,\quad \beta_3 = 0.01,\quad \beta_4 = 0.02,\quad \beta_5 = 0.01$$
and $\zeta_s = 0$ for $s = 2, 3, 4, 5$. After this step, we have $2p$ individuals $\kappa^{(j)}$ for $j = 1, \ldots, 2p$. (A sketch of this mutation, together with the crossover of Step 5, is also given after this list.)
• Step 5 (crossover). We randomly select
$$\kappa^{(i)} = \bigl(0, \kappa_2^{(i)}, \kappa_3^{(i)}, \kappa_4^{(i)}, \kappa_5^{(i)}\bigr) \quad\text{and}\quad \kappa^{(j)} = \bigl(0, \kappa_2^{(j)}, \kappa_3^{(j)}, \kappa_4^{(j)}, \kappa_5^{(j)}\bigr)$$
for $i, j \in \{1, \ldots, 2p\}$ with $i \neq j$. We then generate a random number $\lambda \in (0, 1)$; the new individual is given by $\kappa^{(2p+1)} = \lambda \kappa^{(i)} + (1-\lambda)\kappa^{(j)}$ with components
$$\kappa_s^{(2p+1)} = \lambda \kappa_s^{(i)} + (1-\lambda)\kappa_s^{(j)} \quad\text{for } s = 2, 3, 4, 5.$$
After this step, we have $2p + 1$ individuals $\kappa^{(j)}$ for $j = 1, \ldots, 2p+1$.
• Step 6 (calculate new fitness). For each
$$\kappa^{(j+p)} = \bigl(0, \kappa_2^{(j+p)}, \kappa_3^{(j+p)}, \kappa_4^{(j+p)}, \kappa_5^{(j+p)}\bigr),$$
using (9) and (12), we calculate the normalized Shapley values $\bar{w}_i(\kappa^{(j+p)})$ for $i = 1, \ldots, 5$ and $j = 1, \ldots, p+1$. Each $\kappa^{(j+p)}$ is assigned a fitness value given by
$$\tau\bigl(\kappa^{(j+p)}\bigr) = \bar{w}_1\bigl(\kappa^{(j+p)}\bigr)\cdot \tilde{f}_{\alpha_1}^{\,L}\bigl(x^*(\kappa^{(j+p)})\bigr) + \bar{w}_2\bigl(\kappa^{(j+p)}\bigr)\cdot \tilde{f}_{\alpha_2}^{\,L}\bigl(x^*(\kappa^{(j+p)})\bigr) + \bar{w}_3\bigl(\kappa^{(j+p)}\bigr)\cdot \tilde{f}_{\alpha_3}^{\,L}\bigl(x^*(\kappa^{(j+p)})\bigr) + \bar{w}_4\bigl(\kappa^{(j+p)}\bigr)\cdot \tilde{f}_{\alpha_1}^{\,U}\bigl(x^*(\kappa^{(j+p)})\bigr) + \bar{w}_5\bigl(\kappa^{(j+p)}\bigr)\cdot \tilde{f}_{\alpha_2}^{\,U}\bigl(x^*(\kappa^{(j+p)})\bigr)$$
for $j = 1, \ldots, p+1$.
• Step 7 (selection). The $p + 1$ new individuals
$$\kappa^{(j+p)} = \bigl(0, \kappa_2^{(j+p)}, \kappa_3^{(j+p)}, \kappa_4^{(j+p)}, \kappa_5^{(j+p)}\bigr) \quad\text{for } j = 1, \ldots, p+1,$$
obtained from Steps 4 to 6, and the $p$ old elites
$$\kappa^{(*j)} = \bigl(0, \kappa_2^{(*j)}, \kappa_3^{(*j)}, \kappa_4^{(*j)}, \kappa_5^{(*j)}\bigr) \quad\text{for } j = 1, \ldots, p$$
are ranked in descending order of their corresponding fitness values $\tau(\kappa^{(j+p)})$ for $j = 1, \ldots, p+1$ and $\tau(\kappa^{(*j)})$ for $j = 1, \ldots, p$. The first $p$ (best) individuals are saved as the new elites
$$\kappa^{(*j)} = \bigl(0, \kappa_2^{(*j)}, \kappa_3^{(*j)}, \kappa_4^{(*j)}, \kappa_5^{(*j)}\bigr) \quad\text{for } j = 1, \ldots, p,$$
and the fitness value of the first one is saved as the best value $\bar{\tau}_l$ for the $l$th generation.
• Step 8 (stopping criterion). If $\bar{\tau}_{k^*} - \bar{\tau}_{k^*-1} < \epsilon$ and the number of generations has reached $m^*$, then the algorithm halts and returns the best individual. Otherwise, the new elites
$$\kappa^{(*j)} = \bigl(0, \kappa_2^{(*j)}, \kappa_3^{(*j)}, \kappa_4^{(*j)}, \kappa_5^{(*j)}\bigr) \quad\text{for } j = 1, \ldots, p$$
are copied to be the next generation
$$\kappa^{(j)} = \bigl(0, \kappa_2^{(j)}, \kappa_3^{(j)}, \kappa_4^{(j)}, \kappa_5^{(j)}\bigr) \quad\text{for } j = 1, \ldots, p.$$
We set $k^* \leftarrow k^* + 1$, and the algorithm proceeds to Step 4. (A schematic of this generation loop is also given after the list.)
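To make Steps 2 and 6 concrete, the following minimal Python sketch shows one way to compute the raw Shapley values of (9) by enumerating all coalitions of the five players, to normalize them as in (12), and to form the fitness value $\tau(\kappa)$. This is an illustration only, not the author's implementation; in particular, the callables `make_game` (which builds the characteristic function $v$ of the cooperative game formulated for a given $\kappa$) and `level_values` (which returns the five $\alpha$-level objective values at $x^*(\kappa)$) are hypothetical names standing in for procedures defined earlier in the paper.

```python
from itertools import combinations
from math import factorial

PLAYERS = (1, 2, 3, 4, 5)  # the five alpha-level functions act as players

def shapley_values(v, players=PLAYERS):
    """Raw Shapley values: w_i is the sum over coalitions S containing i of
    (|S|-1)!(|N|-|S|)!/|N|! * (v(S) - v(S - {i})), as expanded in Step 2.
    `v` maps a frozenset of players to a real number, with v(frozenset()) = 0."""
    n = len(players)
    w = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):                        # r = |S| - 1
            for rest in combinations(others, r):
                S = frozenset(rest) | {i}
                coeff = factorial(len(S) - 1) * factorial(n - len(S)) / factorial(n)
                total += coeff * (v(S) - v(S - {i}))
        w[i] = total
    return w

def fitness(kappa, make_game, level_values):
    """Fitness tau(kappa): the Shapley-weighted sum of the five alpha-level
    objective values evaluated at the nondominated solution x*(kappa).
    `make_game(kappa)` returns the characteristic function v of the game
    formulated for kappa; `level_values(kappa)` returns the five numbers
    (three lower-level and two upper-level values) keyed by player."""
    w = shapley_values(make_game(kappa))
    total_w = sum(w.values())
    w_bar = {i: w[i] / total_w for i in PLAYERS}  # normalization as in (12)
    f = level_values(kappa)
    return sum(w_bar[i] * f[i] for i in PLAYERS)
```

Each player contributes $2^4 = 16$ marginal terms, matching the sixteen summands in each of the expansions displayed in Step 2.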
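The truncated Gaussian mutation of Step 4 and the convex-combination crossover of Step 5 can be sketched in the same spirit. Only $\Psi_5 = \Omega_5 = 2$ and $\Psi_4 = \min\{\Omega_4,\ 1 - \kappa_5^{(j+p)}/5\}$ are spelled out in this example, so the upper bounds are abstracted into a supplied function `psi` (a hypothetical name standing for the paper's earlier definitions of $\Psi_s$).

```python
import random

BETA = {2: 0.02, 3: 0.01, 4: 0.02, 5: 0.01}  # beta_s as chosen in this example
ZETA = {2: 0.0, 3: 0.0, 4: 0.0, 5: 0.0}      # zeta_s = 0 in this example

def mutate(kappa, tau_value, psi):
    """Mutate one parent kappa = {1: 0.0, 2: k2, ..., 5: k5} as in Step 4.

    Components are mutated from s = 5 down to s = 2 because the upper bound
    Psi_s may depend on the components mutated before it; psi(s, child)
    returns that bound (e.g. psi(5, child) == 2)."""
    child = dict(kappa)                           # kappa_1 stays fixed at 0
    for s in (5, 4, 3, 2):
        sigma = BETA[s] * tau_value + ZETA[s]     # sigma_s = beta_s*tau + zeta_s
        hat = child[s] + sigma * random.gauss(0.0, 1.0)
        child[s] = min(max(hat, 0.0), psi(s, child))  # truncate to [0, Psi_s]
    return child

def crossover(population):
    """Step 5: blend two distinct parents with a random lambda in (0, 1)."""
    parent_i, parent_j = random.sample(population, 2)
    lam = random.random()
    return {s: lam * parent_i[s] + (1.0 - lam) * parent_j[s] for s in parent_i}
```

For instance, under the bound stated above, `psi(4, child)` would return `min(OMEGA[4], 1.0 - child[5] / 5.0)` for a given dictionary `OMEGA` of the bounds $\Omega_s$.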
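Finally, Steps 4–8 fit together in a generation loop with elitism. The sketch below compresses the bookkeeping with $l$ and $k^*$ into a single generation counter and assumes that `fitness` has been specialized to a one-argument function (for example, by binding `make_game` and `level_values` with `functools.partial`); it is a schematic of the flow, not a reproduction of the author's VBA code.

```python
def evolve(population, fitness, mutate, crossover, psi,
           m_star=20, eps=1e-6, max_gen=1000):
    """Generation loop of Steps 4-8 with elitism (schematic)."""
    p = len(population)
    elites = sorted(population, key=fitness, reverse=True)   # Step 2 ranking
    best_prev = fitness(elites[0])                           # tau_bar_0
    for gen in range(1, max_gen + 1):                        # l <- l + 1
        # Step 4: mutate the p current individuals.
        children = [mutate(k, fitness(k), psi) for k in elites]
        # Step 5: one crossover child drawn from the 2p individuals.
        children.append(crossover(elites + children))
        # Steps 6-7: rank the p+1 new individuals and the p old elites,
        # and keep the best p as the new elites.
        elites = sorted(elites + children, key=fitness, reverse=True)[:p]
        best = fitness(elites[0])                            # tau_bar_l
        # Step 8: halt once the improvement stalls after m_star generations.
        if gen >= m_star and best - best_prev < eps:
            return elites[0]
        best_prev = best
    return elites[0]
```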
The computer code is implemented using Microsoft Excel VBA. After Step 8, we obtain the best fitness value $73.3694027$, and the approximated Shapley-nondominated solution is $x^*(\Lambda) = (0, 15, 3)$.
Now, we proceed to phase II by considering finer partitions of $[0, 1]$.
  • Step 1. By referring to Section 6.2, we generate a new finer partition
$$\bar{\Lambda} = \bigl\{\alpha_1 = 0,\ \alpha_2 = 0.25,\ \alpha_3 = 0.5,\ \alpha_4 = 0.75,\ \alpha_5 = 1\bigr\}$$
satisfying $\Lambda \subset \bar{\Lambda}$.
• Step 2. Based on the new finer partition $\bar{\Lambda}$, we obtain a new approximated best Shapley-nondominated solution $x^*(\bar{\Lambda}) = (0, 15, 3)$ using the evolutionary algorithm in phase I.
  • Step 3. Since we obtain
$$x^*(\Lambda) = x^*(\bar{\Lambda}) = (0, 15, 3),$$
it follows that the final best Shapley-nondominated solution is $(x_1, x_2, x_3) = (0, 15, 3)$.
Finally, after Step 3 and by referring to Theorem 1, the approximated nondominated solution of the original fuzzy linear programming problem (FLP) is $(x_1, x_2, x_3) = (0, 15, 3)$.
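The two-phase scheme can be summarized by a short driver loop: run the phase-I evolutionary algorithm on a partition, refine the partition as in Section 6.2, and accept the solution once two successive partitions return the same Shapley-nondominated solution. The helper names `run_phase_one` and `refine` below are placeholders for the procedures described in this section; the loop is a schematic reading of Steps 1–3 above.

```python
def best_shapley_solution(run_phase_one, refine, partition):
    """Schematic two-phase procedure: stop as soon as a finer partition
    reproduces the solution found on the previous, coarser partition."""
    x_old = run_phase_one(partition)        # e.g. Lambda = {0, 0.5, 1}
    while True:
        partition = refine(partition)       # finer partition containing the old one
        x_new = run_phase_one(partition)    # e.g. {0, 0.25, 0.5, 0.75, 1}
        if x_new == x_old:                  # solutions agree: accept
            return x_new
        x_old = x_new
```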

8. Conclusions

The essential first step in solving a fuzzy optimization problem is to transform the original problem into a scalar optimization problem. Under suitable settings, the nondominated solutions of the original fuzzy optimization problem can be obtained by solving this transformed scalar problem. Formulating the scalar optimization problem requires assigning suitable weights, and these weights are usually determined by the decision-makers according to their intuition. In order to avoid this kind of biased assignment, we take the Shapley values of a formulated cooperative game as the weights of the scalar optimization problem. Once the weights are assigned, the transformed scalar optimization problem can be solved to obtain nondominated solutions of the original fuzzy optimization problem. Different weights lead to different nondominated solutions; in other words, the set of all nondominated solutions is frequently large. For this reason, an efficient evolutionary algorithm is designed to find the best Shapley-nondominated solution.
This paper adopts the genetic algorithm to obtain the best Shapley-nondominated solution. Other heuristic algorithms, such as ant colony optimization, artificial immune systems, particle swarm optimization, simulated annealing and tabu search, could also be used for this purpose. The purpose of this paper is not to provide a new genetic algorithm and compare its efficiency with other heuristic algorithms. Rather, the main purpose is to propose a new methodology for solving fuzzy optimization problems: incorporating the Shapley value of a formulated cooperative game and obtaining the best Shapley-nondominated solution using the genetic algorithm. The genetic algorithm adopted in this paper is a conventional one, and the efficiency of conventional genetic algorithms relative to the above heuristic algorithms has been examined in many articles. Therefore, in future research, it is possible to design a new genetic algorithm and use statistical analysis to compare its efficiency with the above heuristic algorithms.
Although this paper considers the Shapley values of a cooperative game formulated from the fuzzy objective functions, other solution concepts of cooperative games could also be used to set up the weights in future research. On the other hand, using the solution concepts of non-cooperative games to set up the weights may also be a direction of future research for solving fuzzy optimization problems with fuzzy decision variables.

Funding

This research was funded by Taiwan NSTC with grant number 110-2221-E-017-008-MY2.

Conflicts of Interest

The author declares no conflict of interest.
