Abstract
The fusion of evolutionary algorithms and the solution concepts of cooperative game theory is proposed in this paper to solve fuzzy optimization problems. The original fuzzy optimization problem is transformed into a scalar optimization problem by assigning suitable coefficients. These coefficients are frequently assigned subjectively by the decision-makers, which may introduce bias. In order to avoid such subjective bias, a cooperative game is formulated by considering the α-level functions of the fuzzy objective function. Using the Shapley values of this formulated cooperative game, the suitable coefficients can be set up in a reasonable way. Under these settings, the transformed scalar optimization problem is solved to obtain a nondominated solution, which depends on the coefficients. In other words, we obtain a family of nondominated solutions indexed by the coefficients. Finally, evolutionary algorithms are invoked to find the best nondominated solution by evolving the coefficients.
Keywords:
cooperative games; evolutionary algorithms; fuzzy optimization; scalar optimization; Shapley value
MSC:
90C70; 68W50
1. Introduction
The research topic of fuzzy optimization was initiated by Bellman and Zadeh [1]. Inspired and motivated by this work, many articles dealing with fuzzy optimization problems have appeared. The early works by Buckley [2], Herrera et al. [3], Julien [4], Inuiguchi [5], Luhandjula et al. [6], Verdegay [7], Tanaka et al. [8] and Zimmermann [9,10] mainly considered fuzzified constraints and objective functions. This kind of approach fuzzifies the crisp (conventional) optimization problems. For example, the following (crisp) constraints
where and are real numbers, are fuzzified to be
which are formulated via aspiration levels using membership functions that describe the degree of violation of the original crisp constraints given in (1). Also, some of the real coefficients can be fuzzified by imposing possibility distributions.
There is another approach to studying fuzzy optimization problems by considering the fuzzy coefficients. In other words, the coefficients in optimization problems are assumed to be fuzzy numbers. For example, we can consider the following fuzzy linear programming problem with fuzzy objective functions and real constraint functions
where are fuzzy numbers for . Chalco-Cano et al. [11] and Wu [12] studied the Karush–Kuhn–Tucker optimality conditions for optimization problems with fuzzy coefficients. Li et al. [13] and Wu [14] also studied different types of optimality conditions. The duality theorems for optimization problems with fuzzy coefficients were studied by Wu [15,16] using the so-called Hukuhara derivative. On the other hand, Chalco-Cano et al. [17] and Pirzada and Pathak [18] proposed the Newton method to solve optimization problems with fuzzy coefficients. In general, solving optimization problems with fuzzy decision variables is a difficult task; the detailed work of Wu [19] provides an efficient way to solve fuzzy linear programming problems with fuzzy decision variables.
The so-called fully fuzzified linear programming problem was solved by Buckley and Feuring [20], in which an evolutionary algorithm was used. The particle swarm optimization method was used by Baykasoglu and Gocken [21] to solve fuzzy optimization problems in which only triangular fuzzy numbers were considered. The fully fuzzy linear programming problem was studied by Ezzati et al. [22], in which the original fuzzy problem was converted into a multiobjective linear programming problem. The fully fuzzy linear programming problem was also studied by Ahmad et al. [23], Jayalakshmi and Pandian [24], Khan et al. [25], Kumar et al. [26], Lotfi et al. [27], Najafi et al. [28] and Nasseri et al. [29], in which triangular fuzzy numbers were used. On the other hand, fully fuzzy linear programming problems were studied by Kaur and Kumar [30], in which trapezoidal fuzzy numbers were considered. Considering triangular or trapezoidal fuzzy numbers can simplify the formulation. However, the proposed methods based on triangular or trapezoidal fuzzy numbers become invalid when the fuzzy quantities are taken to be general bell-shaped fuzzy numbers. Regarding engineering problems, the fuzzy transportation problem was studied by Chakraborty et al. [31], Jaikumar [32] and Baykasoglu and Subulan [33], in which the fuzzy quantities are also taken to be triangular fuzzy numbers. The fuzzy transportation problem was also solved by Ebrahimnejad [34] and Kaur and Kumar [35], in which the so-called generalized trapezoidal fuzzy numbers were considered. The unbalanced fully fuzzy minimal cost flow problem was studied by Kaur and Kumar [30], in which the fuzzy quantities are taken to be LR fuzzy numbers.
von Neumann and Morgenstern [36] initiated the study of game theory in economics, which is mainly concerned with the behavior of players whose decisions affect each other. This is also called noncooperative game theory. On the other hand, a cooperative game is regarded as a game in coalitional form in which the cooperation among the different players is of concern. Nash [37] studied the concept of a general two-player cooperative game and provided a solution concept for such a cooperative game. Cooperation means that the players have complete freedom of communication and comprehensive information on the structure of the game. After this inspiration, many solution concepts of cooperative games were proposed. The monotonic solution of cooperative games was studied by Young [38]. The idea of monotonicity says that if a game changes such that some player’s contribution to all coalitions increases or stays the same, then the player’s allocation should not decrease. The well-known Shapley value of a cooperative game is a unique symmetric and efficient solution concept, which is also a monotonic solution. This paper adopts the Shapley value to study fuzzy optimization problems. Moreover, the monographs by Barron [39], Branzei et al. [40], Curiel [41], González-Díaz et al. [42] and Owen [43] address more details on the topic of game theory.
In this paper, we study the optimization problems with fuzzy coefficients. We first transform the original fuzzy optimization problem into a scalar optimization problem. An ordering on the family of all fuzzy numbers is proposed. Then, we can use this ordering to define the so-called nondominated solution of the original fuzzy optimization problem. Under these settings, we establish a relationship showing that each optimal solution of the transformed scalar optimization problem is also a nondominated solution of the original fuzzy optimization problem. In this situation, we can just solve the transformed scalar optimization problem.
In order to formulate this scalar optimization problem, we need to assign different weights to the objective functions. Therefore, in this paper, we introduce a cooperative game that is formulated from the objective functions. In this case, the weights of the objective functions are assigned to be the corresponding Shapley values of this formulated cooperative game. After the weights have been assigned, we can solve this scalar optimization problem to obtain nondominated solutions of the original fuzzy optimization problem. Since we can have many different scalar optimization problems according to the different weights, the set of all nondominated solutions is frequently large in the sense that it is typically an uncountable set. In this paper, we apply evolutionary algorithms to find the best nondominated solution.
In Section 2, we formulate an optimization problem with fuzzy coefficients and define its solution concepts, called nondominated solutions. We also transform this fuzzy optimization problem into a scalar (crisp) optimization problem. We show that the optimal solutions of this scalar optimization problem are also nondominated solutions of the original fuzzy optimization problem. In Section 3, we introduce the concept of the Shapley value in cooperative games. In Section 4, in order to solve the scalar optimization problem, we formulate its objective functions as a cooperative game. In Section 5, we use the Shapley values of the formulated cooperative game to set up the corresponding scalar optimization problem. From the different scalar optimization problems, we can generate a family of different nondominated solutions. Therefore, in Section 6, we use evolutionary algorithms to find the best nondominated solution. A concise numerical example is presented in Section 7.
2. Formulation
A fuzzy subset $\tilde{A}$ of $\mathbb{R}$ is defined by a membership function $\xi_{\tilde{A}} : \mathbb{R} \to [0,1]$. The $\alpha$-level set of $\tilde{A}$, denoted by $\tilde{A}_{\alpha}$, is defined by
$$\tilde{A}_{\alpha} = \left\{ x \in \mathbb{R} : \xi_{\tilde{A}}(x) \geq \alpha \right\}$$
for all $\alpha \in (0,1]$. The 0-level set $\tilde{A}_{0}$ is defined as the closure of the set $\{x \in \mathbb{R} : \xi_{\tilde{A}}(x) > 0\}$. It is clear to see that $\tilde{A}_{\beta} \subseteq \tilde{A}_{\alpha}$ for $\alpha < \beta$.
Any subset $A$ of $\mathbb{R}$ can also be treated as a fuzzy set in $\mathbb{R}$ by taking the membership function to be the characteristic function $\chi_{A}$ of $A$ as follows
$$\chi_{A}(x) = \begin{cases} 1, & \text{if } x \in A, \\ 0, & \text{otherwise.} \end{cases}$$
When $A$ is a singleton $\{a\}$, we also write $\chi_{\{a\}} = \tilde{1}_{\{a\}}$, which also means that each real number $a$ is treated as the fuzzy set $\tilde{1}_{\{a\}}$.
Let $\tilde{A}$ and $\tilde{B}$ be two fuzzy subsets of $\mathbb{R}$. According to the extension principle, the addition and multiplication between $\tilde{A}$ and $\tilde{B}$ are defined by
$$\xi_{\tilde{A} \oplus \tilde{B}}(z) = \sup_{x + y = z} \min \left\{ \xi_{\tilde{A}}(x), \xi_{\tilde{B}}(y) \right\}$$
and
$$\xi_{\tilde{A} \otimes \tilde{B}}(z) = \sup_{x y = z} \min \left\{ \xi_{\tilde{A}}(x), \xi_{\tilde{B}}(y) \right\}.$$
Let $\tilde{A}$ be a fuzzy subset of $\mathbb{R}$. We say that $\tilde{A}$ is a fuzzy interval when the following conditions are satisfied:
- $\tilde{A}$ is normal, i.e., $\xi_{\tilde{A}}(x^{*}) = 1$ for some $x^{*} \in \mathbb{R}$;
- $\tilde{A}$ is convex, i.e., the membership function $\xi_{\tilde{A}}$ is quasi-concave;
- The membership function $\xi_{\tilde{A}}$ is upper semicontinuous;
- The 0-level set $\tilde{A}_{0}$ is a closed and bounded subset of $\mathbb{R}$.
It is well known that each $\alpha$-level set $\tilde{A}_{\alpha}$ of a fuzzy interval $\tilde{A}$ is a bounded closed interval in $\mathbb{R}$, which is also denoted by
$$\tilde{A}_{\alpha} = \left[ \tilde{A}_{\alpha}^{L}, \tilde{A}_{\alpha}^{U} \right].$$
We denote by $\mathcal{F}$ the family of all fuzzy intervals, and consider a fuzzy-valued function $\tilde{f} : X \to \mathcal{F}$ defined on a nonempty set $X$. In this case, we can generate the real-valued functions $\tilde{f}_{\alpha}^{L}$ and $\tilde{f}_{\alpha}^{U}$ for $\alpha \in [0,1]$ defined by
$$\tilde{f}_{\alpha}^{L}(x) = \left( \tilde{f}(x) \right)_{\alpha}^{L} \quad \text{and} \quad \tilde{f}_{\alpha}^{U}(x) = \left( \tilde{f}(x) \right)_{\alpha}^{U}.$$
We consider the following fuzzy optimization problem (FOP) with fuzzy coefficients and real decision variables in :
where X is a feasible region in and denotes the fuzzy objective function of (FOP). For example, we can consider the following fuzzy linear programming problem with fuzzy objective function and real constraint functions:
In this paper, the fuzzy coefficients are taken to be fuzzy intervals. In order to interpret the meaning of the optimal solution of the problem (FOP), we need to introduce an ordering among the set of all fuzzy intervals.
Definition 1.
Let $\tilde{A}$ and $\tilde{B}$ be two fuzzy intervals. We define an ordering “≺” between $\tilde{A}$ and $\tilde{B}$ as follows. We write $\tilde{A} \prec \tilde{B}$ when the following conditions are satisfied:
- $\tilde{A}_{\alpha}^{L} \leq \tilde{B}_{\alpha}^{L}$ and $\tilde{A}_{\alpha}^{U} \leq \tilde{B}_{\alpha}^{U}$ for all $\alpha \in [0,1]$;
- There exists satisfying for all or for all .
Transitivity is an important property of an ordering. It is easy to see that the ordering proposed in Definition 1 is indeed transitive.
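In computation, the ordering of Definition 1 can only be checked on finitely many α-levels. The following Python sketch is offered as an illustration rather than as part of the proposed method; the functions A_level and B_level, the grid alphas, and the tolerance eps are names introduced here for the example.

```python
def dominates(A_level, B_level, alphas, eps=1e-12):
    """Finite-grid check of the ordering in Definition 1: A precedes B when
    every lower and upper alpha-level endpoint of A is no larger than the
    corresponding endpoint of B, and at least one inequality is strict.
    A_level(alpha) and B_level(alpha) return (lower, upper) endpoints."""
    weak = all(A_level(a)[0] <= B_level(a)[0] + eps and
               A_level(a)[1] <= B_level(a)[1] + eps for a in alphas)
    strict = any(A_level(a)[0] < B_level(a)[0] - eps or
                 A_level(a)[1] < B_level(a)[1] - eps for a in alphas)
    return weak and strict
```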
Definition 2.
We say that a feasible solution $x^{*} \in X$ is a nondominated optimal solution of the fuzzy optimization problem (FOP) when there does not exist another feasible solution $\bar{x} \in X$ satisfying $\tilde{f}(x^{*}) \prec \tilde{f}(\bar{x})$.
Let be a partition of . We consider the following scalar optimization problem
where for all satisfying
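For illustration, the scalarization can be read as a weighted sum of the lower and upper α-level functions of the fuzzy objective over the chosen partition. The sketch below is a minimal Python rendering of that reading; since the displayed form of (SOP) is not reproduced here, the exact combination of terms is an assumption, and f_L, f_U and the weight dictionaries are illustrative names.

```python
def scalar_objective(weights_L, weights_U, f_L, f_U, alphas):
    """Weighted-sum reading of the objective in (SOP): the weights are
    nonnegative and sum to one over the partition alphas, and
    f_L(x, alpha), f_U(x, alpha) return the endpoints of the alpha-level
    of the fuzzy objective value at x."""
    def objective(x):
        return sum(weights_L[a] * f_L(x, a) + weights_U[a] * f_U(x, a)
                   for a in alphas)
    return objective
```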
The following theorem will be useful to find the nondominated solution of the problem (FOP).
Theorem 1.
Suppose that for all . If is an optimal solution of scalar optimization problem (SOP), then it is also a nondominated solution of fuzzy optimization problem (FOP).
Proof.
Suppose that is not a nondominated optimal solution of the problem (FOP). Then, there exists satisfying . Therefore, the following conditions are satisfied.
- We have and for all .
- There exists , satisfying for all or for all .
We are going to claim that satisfies the following conditions:
- and for all ;
- There exists such that or .
It is obvious that and for all . Now, we consider the following cases.
- Suppose that for all . There exists such that . In this case, we have
- Suppose that for all . There exists such that . In this case, we have
The above two cases say that there exists satisfying or . Since each for , it follows that
which contradicts the fact that is an optimal solution of the scalar optimization problem (SOP). This completes the proof. □
The determination of the coefficients depends on the viewpoint of the decision-makers. This means that there is no canonical way to set up the scalar optimization problem. This paper follows a solution concept of game theory to determine these coefficients. The main reason is that the objective functions can be regarded as the payoffs of the corresponding players. In this case, we can formulate a cooperative game. The Shapley value is a standard solution concept of a cooperative game, and its components will be taken to be the coefficients for creating the scalar optimization problem.
3. Shapley Values
Given a set $N = \{1, \ldots, n\}$ of players, any nonempty subset $S \subseteq N$ is called a coalition. Let $2^{N}$ denote the family of all subsets of $N$. Equivalently, $2^{N}$ is the family of all coalitions. We consider a function $v$ defined on $2^{N}$ such that it satisfies $v(\emptyset) = 0$. Then, the ordered pair $(N, v)$ is called a cooperative game. Given any $S \in 2^{N}$, the number $v(S)$ is treated as the worth of coalition $S$ in the game $(N, v)$.
Let be a payoff vector or an allocation, where each represents the share of the value of received by player i for . Then, we have the following concepts.
- The vector is called a pre-imputation when the sum of its components equals $v(N)$. This also says that group rationality is satisfied.
- The vector is called an imputation when it is a pre-imputation and satisfies the individual rationality condition (3), which requires that each component be at least the worth of the corresponding singleton coalition.
The individual rationality condition (3) says that each member of a coalition receives at least the amount that the player can obtain by acting alone without any support from other players. Group rationality means that any increase in reward to a player must be matched by a decrease in reward for one or more other players. The main objective of cooperative game theory is to determine the imputation that results in a fair allocation of the total rewards, which will depend on the definition of fairness. In this paper, we consider the Shapley value of a cooperative game, which can be treated as a fair allocation of the total rewards.
A carrier of the cooperative game $(N, v)$ is a coalition $T$ satisfying
$$v(S \cap T) = v(S) \quad \text{for every coalition } S \subseteq N.$$
This definition states that any player $i \notin T$ is a dummy player; that is to say, such a player $i$ has nothing to contribute to any coalition.
Let $\pi : N \to N$ be a one-to-one function (a permutation of $N$). Given a coalition $S \subseteq N$ with $S = \{i_{1}, \ldots, i_{s}\}$, we can write $\pi(S) = \{\pi(i_{1}), \ldots, \pi(i_{s})\}$. Then, we have $\pi(S) \in 2^{N}$. In this case, we can define a new cooperative game $(N, \pi v)$ by $(\pi v)(\pi(S)) = v(S)$ for any $S \in 2^{N}$.
Given a cooperative game $(N, v)$, we consider a corresponding vector
$$\phi(v) = \left( \phi_{1}(v), \ldots, \phi_{n}(v) \right),$$
where the $i$th component $\phi_{i}(v)$ is interpreted as the payoff received by player $i$ under an agreement. This function is taken to satisfy the following Shapley axioms.
- (S1) If $S$ is any carrier of the game $(N, v)$, then we have $\sum_{i \in S} \phi_{i}(v) = v(S)$;
- (S2) For any one-to-one function $\pi : N \to N$ and any $i \in N$, we have $\phi_{\pi(i)}(\pi v) = \phi_{i}(v)$;
- (S3) If $(N, v)$ and $(N, w)$ are any cooperative games, then we have $\phi_{i}(v + w) = \phi_{i}(v) + \phi_{i}(w)$ for all $i \in N$.
The function $\phi$ from the family of all cooperative games into the $n$-dimensional Euclidean space $\mathbb{R}^{n}$ defines a vector $\phi(v)$ that is called the Shapley value of the cooperative game $(N, v)$.
The well-known result is given as follows. There exists a unique function $\phi$ defined on the family of all cooperative games that satisfies axioms (S1), (S2) and (S3). Moreover, we have
$$\phi_{i}(v) = \sum_{\substack{S \subseteq N \\ i \in S}} \frac{(|S| - 1)! \, (n - |S|)!}{n!} \left[ v(S) - v(S \setminus \{i\}) \right].$$
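The formula above can be evaluated by direct enumeration of coalitions. The following Python sketch is offered purely as an illustration, not as part of the proposed algorithm; the player labels and the toy game at the bottom are assumptions made for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value of the cooperative game (N, v) by enumerating, for each
    player i, all coalitions not containing i and weighting the marginal
    contribution v(S u {i}) - v(S) as in the closed-form formula.
    v maps a frozenset coalition to its worth, with v(frozenset()) == 0."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for s in range(n):  # s is the size of the coalition S without i
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            for rest in combinations(others, s):
                S = frozenset(rest)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy usage: a symmetric three-player game whose worth is the squared
# coalition size; each player receives v(N)/3 = 3.
print(shapley_values([1, 2, 3], lambda S: len(S) ** 2))
```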
In this paper, we are going to use the Shapley values to transform the optimization problem with fuzzy objective functions into a scalar optimization problem by setting up reasonable weights.
4. Formulation of Corresponding Cooperative Game
Based on the objective function in the scalar optimization problem (SOP), we are going to formulate a cooperative game. First of all, we assume to have players with . The objective functions are regarded as the payoff of players i for , and the objective functions are regarded as the payoff of players for .
The ideal payoff of each player is to maximize its corresponding payoff function. Let
In this case, are the ideal payoffs of players , respectively. Since is treated as the payoff of player i in a cooperative game, it is reasonable to assume
which means that the payoff must be less than the maximum payoff. In a perfect cooperative game, the payoff of player i may possibly reach its maximum payoff .
Let S be a subset of N, which is regarded as a coalition. Under this coalition with , the payoff of the coalition S will be greater than the sum of the individual payoffs of the players in S. In other words, we must have
which shows the effect of cooperation. Also, the payoff of coalition S cannot be greater than the total ideal payoffs on S. More precisely, we have the following inequalities
Now, we are going to define the payoff of any coalition S with . We define
where is a nonnegative constant that does not depend on the particular coalition S with . The second term in (6) says that, under the coalition S, an extra payoff can be obtained by taking this constant multiplied by the average of the individual payoffs.
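A minimal sketch of one plausible reading of (6) is given below: the worth of a coalition is the sum of the individual payoffs plus a size-dependent constant multiplied by their average. Since the displayed equation is not reproduced here, this form, together with the names payoff and t, should be treated as an assumption for illustration only.

```python
def coalition_worth(S, payoff, t):
    """Assumed reading of (6): v(S) equals the sum of the individual payoffs
    of the players in S plus t[|S|] times the average of those payoffs,
    where t[s] is a nonnegative constant attached to coalition size s and
    t[1] = 0 so that singletons receive no cooperation bonus."""
    S = list(S)
    base = sum(payoff[i] for i in S)
    if len(S) <= 1:
        return base
    return base + t[len(S)] * (base / len(S))

# Hypothetical individual payoffs and size constants.
payoff = {1: 2.0, 2: 3.0, 3: 1.0}
t = {1: 0.0, 2: 0.4, 3: 0.7}
print(coalition_worth({1, 2}, payoff, t))  # 5.0 + 0.4 * 2.5 = 6.0
```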
Since the upper bound of is given in (5), the constant must satisfy the following inequality
After some algebraic calculations, we obtain
Let
Then, we have
For convenience, we also define . Now, we define
which is not related with any coalition S with .
5. Formulation of the Corresponding Scalar Optimization Problem
By looking at the scalar optimization problem (SOP), we see that the coefficients are determined by the vector
We can treat the ith component as the fair payoff received by player i under an agreement. In this case, the coefficients can be regarded as depending on a cooperative game . Now, we assume that the coefficient satisfies the following agreement.
- If S is any carrier of the game , then .
- For any one-to-one function and any , we have ;
- If and are any cooperative games, then we have for all .
Let be a family of all cooperative games with the same player set N. Then, the function given by defines a vector that is the Shapley value of the cooperative game .
Now, we can solve the scalar optimization problem (SOP) by taking the Shapley values as the coefficients, which avoids the possibly biased determination of weights by the decision-makers’ intuition. More precisely, the coefficients are given by the following formula
Using (6), the term can be calculated as follows
where .
In order to guarantee the nonnegativity of and , we assume that the following conditions are satisfied.
- We assume the following condition. Under this assumption, using (7), it follows that for all , which implies for all .
- Recall that for convenience. For , we assume
We still need to normalize the coefficients as follows
It is clear to see for all . When the coefficients are taken to be the normalized Shapley values given in (12), Theorem 1 says that the optimal solutions of scalar optimization problem (SOP) are the nondominated solutions of the problem (FOP). In this case, these nondominated solutions are also called the Shapley-nondominated solutions of the problem (FOP).
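Combining the two sketches above (the Shapley computation of Section 3 and the assumed coalition worth of Section 4), the coefficients of (SOP) can be produced as follows. The plain sum-to-one division stands in for the elided normalization (12), so this block is again a sketch under stated assumptions rather than the paper’s exact procedure.

```python
def normalized_shapley_weights(players, payoff, t):
    """Shapley-based weights for the scalarized objective, reusing the
    shapley_values and coalition_worth sketches given earlier; the final
    division normalizes the weights so that they sum to one."""
    v = lambda S: coalition_worth(S, payoff, t) if S else 0.0
    phi = shapley_values(list(players), v)
    total = sum(phi.values())
    return {i: phi[i] / total for i in players}

weights = normalized_shapley_weights([1, 2, 3], payoff, t)
print(weights, sum(weights.values()))  # the printed weights sum to 1
```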
By referring to (6), the cooperative game depends on the nonnegative constants for . In other words, the payoff function v depends on the vector . By referring to (9) and (10), we see that the coefficients and its normalized coefficients also depend on . In this case, we can write
The purpose is to obtain the Shapley-nondominated solution by solving the following scalar optimization problem
The optimal solution of the problem (SOP) is a Shapley-nondominated solution by referring to Theorem 1, which depends on the vector . Let be the set of all Shapley-nondominated solutions, i.e.,
where can refer to (8). We are going to use evolutionary algorithms to find the best Shapley-nondominated solution from .
6. Evolutionary Algorithms
In what follows, we are going to design an evolutionary algorithm to find the best Shapley-nondominated solution from the set , which is given in (13), by maximizing the following fitness function
We can see that obtaining the best Shapley-nondominated solution is indeed a hard problem. However, we can obtain an approximate best Shapley-nondominated solution by using evolutionary algorithms, which are designed in two phases.
The scalar optimization problem (SOP) depends on the partition of . Phase I is to obtain the approximated best Shapley-nondominated solution when the partition is fixed. Phase II is to perform phase I for finer partitions of until the approximated best Shapley-nondominated solution cannot be improved. In this case, we can return the final result.
6.1. Phase I
The partition of is fixed. In order to generate the nonnegative constants for such that the inequalities (7) and (11) are satisfied, we are going to design a recursive procedure. In other words, the nonnegative constants generated by the recursive procedure must satisfy ,
Now, we first generate as a random number in the closed interval , where is given in (8). Then, we have . Let
where is given in (8). Then, we generate as a random number in the closed interval . In this case, we have , which implies
Therefore, we obtain
which satisfy (15).
For given in (8), let
We similarly generate as a random number in the closed interval . In this case, we have , which implies
Therefore, we obtain
which satisfy (15).
Recursively, let
where is given in (8). Then, for each , we generate as a random number in the closed interval . Then, we can obtain
which satisfy (15). Therefore, the recursive Formula (16) can be used to generate the nonnegative constants satisfying (15) for , where for convenience.
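The recursive generation described above can be organized as a single loop in which each constant is drawn uniformly from an interval whose upper endpoint depends on the constants already generated. In the sketch below the function bound stands in for the elided quantities (8) and (16), and the particular bound used in the usage line is purely hypothetical.

```python
import random

def generate_constants(n, bound):
    """Recursively generate nonnegative constants t_2, ..., t_n with t_1 = 0.
    bound(i, t) returns the upper endpoint of the interval from which t_i is
    drawn and must encode whatever dependence on the previously generated
    constants the inequalities (15) require."""
    t = {1: 0.0}
    for i in range(2, n + 1):
        t[i] = random.uniform(0.0, bound(i, t))
    return t

# Hypothetical bound: the i-th constant is drawn from [0, 1/i].
print(generate_constants(5, lambda i, t: 1.0 / i))
```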
6.1.1. Crossover Operation
Suppose that
are two vectors satisfying the inequalities (15). Given any , we consider the crossover operation
Since
it follows for . Since
we have
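The crossover operation can be sketched as a componentwise convex combination of the two parent vectors, which is the reading suggested by (18); if the feasible region defined by (15) is convex, the offspring then satisfies the same inequalities. The function below is an illustrative sketch under that assumption.

```python
def crossover(t1, t2, lam):
    """Componentwise convex combination of two constant vectors; lam is
    assumed to lie in [0, 1], so each offspring component stays between
    the corresponding parent components."""
    return {i: lam * t1[i] + (1.0 - lam) * t2[i] for i in t1}

child = crossover({1: 0.0, 2: 0.4, 3: 0.2}, {1: 0.0, 2: 0.2, 3: 0.3}, 0.5)
```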
6.1.2. Mutation Operation
We are going to present the mutation operation based on the normal distribution. Suppose that is a vector satisfying (15). We consider the mutation of with the components given by and for , which are proposed below.
We first generate , and assign
The new mutation is defined by
where is given in (8). It is clear to see .
Let
We generate , and assign
The new mutation is defined by
It is clear to see .
Let
We generate , and assign
The new mutation is defined by
It is clear to see .
Recursively, for , we can define
We generate , and assign
The new mutation is defined by
It is clear to see .
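A compact way to realize this mutation is to perturb each component with a normally distributed increment and then map the result back into its admissible interval; the clipping used below is an assumption standing in for the assignment step of (19), and sigma and bound are illustrative parameters.

```python
import random

def mutate(t, sigma, bound):
    """Gaussian mutation of a constant vector, processed in increasing index
    order so that each mutated component can be clipped into the interval
    [0, bound(i, t_new)] determined by the components already produced;
    sigma(i) gives the standard deviation used for component i."""
    t_new = {1: 0.0}
    for i in sorted(k for k in t if k != 1):
        candidate = t[i] + random.gauss(0.0, sigma(i))
        upper = bound(i, t_new)
        t_new[i] = min(max(candidate, 0.0), upper)  # keep (15) satisfied
    return t_new
```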
6.2. Phase II
Now, phase II performs the procedure proposed in phase I for finer partitions of . Suppose that the partition of is considered in phase I. Then, we perform the procedure proposed in phase I by considering a partition of satisfying . Phase II is performed repeatedly for different finer partitions until the approximated best Shapley-nondominated solution cannot be improved. In this paper, we suggest two ways to determine the finer partition .
The simple way is to take the partition of such that each subinterval of has equal length satisfying . In other words, the unit interval is equally subdivided by the partition satisfying .
The second way is to evolve the old partition using the evolutionary algorithms to obtain a new finer partition . We can take a population that consists of the old partition . We can perform the crossover operation and mutation operation in the population to obtain new points . Then, we can generate a new finer partition given by
For example, we can perform the operations as follows.
- Crossover operation. Given any two and in , we can take the convex combination for different to generate different new points.
- Mutation operation. Given any , we consider the mutation , where is a random number in . If is in , then it is taken to be the newly generated point. If not, then is taken to be the newly generated point instead. (A code sketch of these two operations appears after this list.)
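The sketch below illustrates the two operations on partition points under the reading described above: crossover takes a convex combination of two existing points, and a mutant that leaves the unit interval is replaced by the perturbation in the opposite direction. The step size 0.1 and the acceptance rule are illustrative assumptions.

```python
import random

def refine_partition(points, n_new):
    """Evolve an old partition of [0, 1] into a finer one by repeatedly
    applying convex-combination crossover and reflected mutation until
    n_new additional points have been produced."""
    new_points = set(points)
    while len(new_points) < len(points) + n_new:
        if random.random() < 0.5:
            a, b = random.sample(sorted(new_points), 2)   # crossover
            lam = random.random()
            new_points.add(lam * a + (1.0 - lam) * b)
        else:
            a = random.choice(sorted(new_points))         # mutation
            z = random.uniform(-0.1, 0.1)
            candidate = a + z if 0.0 <= a + z <= 1.0 else a - z
            new_points.add(min(max(candidate, 0.0), 1.0))
    return sorted(new_points)

print(refine_partition([0.0, 0.5, 1.0], 3))
```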
After a new finer partition is generated, we continue to perform phase I using this new partition to obtain the approximated best Shapley-nondominated solution. After this step, the partition is now treated as the old partition. Therefore, we are going to generate a new finer partition of satisfying , and then to perform phase I again using this new finer partition . Phase II is continued for different finer partitions until the approximated best Shapley-nondominated solution cannot be improved.
6.3. Computational Procedure
The detailed computational procedure of the evolutionary algorithm for phase I is given below; a compact code sketch of the whole loop follows the list of steps. Throughout this procedure, the partition of is fixed.
- Step 1 (initialization). The size of the population in this evolutionary algorithm is assumed to be p. The individuals playing the role of evolution are vectors . Therefore, the initial population is given by such that and is a random number in , where the upper endpoint is given in (8), for all , and are random numbers in , where the upper endpoint is given in (16) for all and . Then, satisfies the inequalities (15) for .
- Step 2 (fitness function). Given each individual , for and , we calculate the normalized Shapley value using (9) and (12). For each , we solve the scalar optimization problem (SOP) to obtain for . By referring to (14), each is assigned a fitness value given by the following fitness function for . According to the fitness values for , the p individuals for are ranked in descending order. The first one is saved to be the (initial) best individual named as . We also save as old elites by setting for and .
- Step 3 (tolerance). We set the tolerance and set the maximum times of iterations for satisfying the tolerance . Set , which means the initial generation, and , which means the first time for satisfying the tolerance . This step may be more clear by referring to step 8 for stopping criterion.
- Step 4 (mutation). We set , which means the lth generation. In this algorithm, each individual must be mutated. Each individual is mutated in the way of (19) and is assigned to for . We want to generate . In this paper, the standard deviation is taken in the following form, where is a constant of proportionality to scale and represents an offset. According to (19), we obtain the mutated individual and for . Since each individual must be mutated to be for , after this step we shall have individuals for .
- Step 5 (crossover). We perform the crossover operation (18) by randomly selecting and for with . We first generate a random number . The new individual is given by the expression below, where the components are given by
- Step 6 (calculate new fitness). Now, we have new individuals . For each new individual for , we calculate the normalized Shapley value using (9) and (12) for and . For each , we solve the scalar optimization problem (SOP) to obtain for . By referring to (14), each is assigned a fitness value given by the expression below for .
- Step 7 (selection). The new individuals for obtained from Steps 4 to 6, and p old elites in step 2 for are ranked in descending order of their corresponding fitness values and for . The first p (best) individuals are saved to be the new elites for , and the first one is saved to be the best individual named as for the lth generation.
- Step 8 (stopping criterion). After step 7, it may happen that the tolerance is satisfied. In order not to be trapped in a local optimum, we proceed with more iterations, up to times (ref. step 3), even though the tolerance is satisfied. If the tolerance is satisfied and the iterations reach times, then the algorithm is halted and returns the solution for phase I. Otherwise, the new elites for are copied to be the next generation for . We set and the algorithm proceeds to step 4, where counts the times for satisfying the tolerance .
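The following skeleton collects Steps 1–8 into one loop, reusing the generate_constants, mutate, and crossover sketches from Section 6.1. The routine solve_sop (which solves (SOP) for a given vector of constants), the fitness function, and the stall-based stopping rule are placeholders for the elided details, so this is a sketch of the control flow rather than the paper’s exact implementation.

```python
import random

def phase_one(pop_size, n_constants, bound, sigma, solve_sop, fitness,
              max_stall=20):
    """Skeleton of the phase I evolutionary loop for a fixed partition."""
    # Step 1: initial population of constant vectors.
    population = [generate_constants(n_constants, bound) for _ in range(pop_size)]
    score = lambda t: fitness(solve_sop(t))          # Steps 2 and 6: fitness
    elites = sorted(population, key=score, reverse=True)
    best, stall = elites[0], 0
    while stall < max_stall:                         # Steps 3 and 8: stopping
        offspring = [mutate(t, sigma, bound) for t in elites]        # Step 4
        p1, p2 = random.sample(elites, 2)                            # Step 5
        offspring.append(crossover(p1, p2, random.random()))
        pool = sorted(elites + offspring, key=score, reverse=True)   # Step 7
        stall = stall + 1 if score(pool[0]) <= score(best) else 0
        elites, best = pool[:pop_size], pool[0]
    return best
```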
After step 8, we can obtain an approximated best Shapley-nondominated solution for a fixed partition of . This solution is also denoted by to emphasize that it depends on the partition . Now, we proceed to phase II by considering finer partitions of .
- Step 1. By referring to Section 6.2, we generate a new finer partition satisfying .
- Step 2. Based on the new finer partition , we obtain a new approximated best Shapley-nondominated solution using the evolutionary algorithm in phase I.
- Step 3. If for a pre-determined tolerance , then the algorithm is halted, and returns the final solution . Otherwise, we set to be the old partition , and proceed to step 1 to generate a new finer partition.
Finally, after step 3, we obtain the approximated best Shapley-nondominated solution, which is treated as an approximated nondominated solution of the original fuzzy optimization problem (FOP) by referring to Theorem 1.
7. Numerical Example
We consider the triangular fuzzy interval with the membership function defined by
Then, the α-level set is given by
that is,
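For a triangular fuzzy interval, the α-level endpoints have the familiar closed form used below; since the displayed membership function and its level sets are not reproduced here, the support and core values in the usage line are hypothetical and serve only to illustrate the computation.

```python
def triangular_alpha_level(a, b, c, alpha):
    """Alpha-level interval of a triangular fuzzy interval with core b and
    support [a, c]: the endpoints move linearly from the support (alpha = 0)
    to the core (alpha = 1)."""
    return a + alpha * (b - a), c - alpha * (c - b)

print(triangular_alpha_level(1.0, 2.0, 4.0, 0.5))  # (1.5, 3.0)
```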
We want to solve the following fuzzy linear programming problem
where , and are triangular fuzzy intervals. The feasible set X is given by
Now, we have
In phase I, the partition of is taken by
Then, we have
and
Since , the corresponding scalar optimization problem (SOP) is given by
We first obtain the ideal objective values given by
and
According to the above settings, we have representing five players. We take
and
Therefore, we obtain
For , we have
Therefore, we obtain
For , we have
Therefore, we obtain
Finally, we obtain
For , according to (6), we have
More precisely, for , we have
For , we have
For , we have
Finally, for , we have
The detailed computational procedure for phase I is presented below.
- Step 1 (initialization). The population size is assumed to be . The initial population is determined by setting such that and is a random number in for all , and are random numbers in for all and , where refers to (16); that is, we have the following. Then, satisfies the inequalities (15) for .
- Step 2 (fitness function). Given each , according to (9), we calculate the Shapley values. Let . More precisely, we have and . Then, according to (12), we also calculate the normalized Shapley value for and . Each is assigned a fitness value given by for . The p individuals for are ranked in descending order of their corresponding fitness values for , where the first one is saved to be the (initial) best individual named as . We save as an elite given by for and .
- Step 3 (tolerance). We set , , and the tolerance .
- Step 4 (mutation). We set , which means the lth generation. Each is mutated and assigned to in the way of (19) for . Generate , and assign the following. The new mutated individual is defined by . Then, . Let . Generate , and assign the following. The new mutated individual is defined by . Then, . We can similarly obtain and . For , the standard deviation takes the following form, where is a constant of proportionality to scale and represents an offset. In this example, we take and for . After this step, we can have individuals for .
- Step 5 (crossover). We randomly select for with . Generate a random number ; the new individual is given by with the components below. After this step, we can have individuals for .
- Step 7 (selection). The new individuals obtained from Steps 4 to 6, and the p old elites , are ranked in descending order of their corresponding fitness values and for . The first p (best) individuals are saved to be the new elites , and the first one is saved to be the best individual named as for the lth generation.
- Step 8 (stopping criterion). If the tolerance is satisfied and the iterations reach times, then the algorithm is halted and returns the solution. Otherwise, the new elites are copied to be the next generation . We set and the algorithm proceeds to step 4.
The computer code is implemented using Microsoft Excel VBA. After step 8, the best fitness value is and the approximated Shapley-nondominated solution is .
Now, we proceed to phase II by considering finer partitions of .
- Step 1. By referring to Section 6.2, we generate a new finer partition satisfying .
- Step 2. Based on the new finer partition , we obtain a new approximated best Shapley-nondominated solution using the evolutionary algorithm in phase I.
- Step 3. Since we obtain , it follows that the final best Shapley-nondominated solution is .
Finally, after step 3, the approximated nondominated solution of the original fuzzy linear programming problem (FLP) is by referring to Theorem 1.
8. Conclusions
The essential step in solving the fuzzy optimization problem is first to transform the original problem into a scalar optimization problem. Under suitable settings, the nondominated solutions of the original fuzzy optimization problem can be obtained by solving the transformed scalar optimization problem. Formulating the scalar optimization problem requires assigning suitable weights. Usually, these weights are determined by the decision-makers according to their intuition. In order to avoid this kind of biased assignment, we take the Shapley values of a cooperative game to be the weights of the scalar optimization problem. After assigning the weights, the transformed scalar optimization problem can be solved to obtain nondominated solutions of the original fuzzy optimization problem. Different weights lead to different nondominated solutions. In other words, the set of all nondominated solutions is frequently large. In this case, an efficient evolutionary algorithm is designed to find the best Shapley-nondominated solution.
This paper adopts the genetic algorithm to obtain the best Shapley-nondominated solution. Other heuristic algorithms, for example, ant colony optimization, artificial immune systems, particle swarm optimization, simulated annealing and tabu search, can also be used to obtain the best Shapley-nondominated solution. The purpose of this paper is not to provide a new genetic algorithm and compare its efficiency with other heuristic algorithms. The main purpose is to propose a new methodology for solving fuzzy optimization problems by incorporating the Shapley value of a formulated cooperative game and obtaining the best Shapley-nondominated solution using the genetic algorithm. The genetic algorithm adopted in this paper is the conventional one. The efficiency of conventional genetic algorithms compared with the above heuristic algorithms has been reported in many articles. Therefore, in future research, it is possible to design a new genetic algorithm and use statistical analysis to compare its efficiency with the above heuristic algorithms.
Although this paper considers the Shapley values of a cooperative game that is formulated from the fuzzy objective functions, in future research it is also possible to use other solution concepts of cooperative games to set up the weights. On the other hand, using the solution concepts of non-cooperative games to set up the weights may also be a direction of future research for solving fuzzy optimization problems with fuzzy decision variables.
Funding
This research was funded by Taiwan NSTC with grant number 110-2221-E-017-008-MY2.
Conflicts of Interest
The author declares no conflict of interest.
References
- Bellman, R.E.; Zadeh, L.A. Decision-Making in a Fuzzy Environment. Manag. Sci. 1970, 17, 141–164. [Google Scholar] [CrossRef]
- Buckley, J.J. Joint Solution to Fuzzy Programming Problems. Fuzzy Sets Syst. 1995, 72, 215–220. [Google Scholar] [CrossRef]
- Herrera, F.; Kovács, M.; Verdegay, J.L. Optimality for fuzzified mathematical programming problems: A parametric approach. Fuzzy Sets Syst. 1993, 54, 279–285. [Google Scholar] [CrossRef]
- Julien, B. An extension to possibilistic linear programming. Fuzzy Sets Syst. 1994, 64, 195–206. [Google Scholar] [CrossRef]
- Inuiguchi, M. Necessity Measure Optimization in Linear Programming Problems with Fuzzy Polytopes. Fuzzy Sets Syst. 2007, 158, 1882–1891. [Google Scholar] [CrossRef]
- Luhandjula, M.K.; Ichihashi, H.; Inuiguchi, M. Fuzzy and semi-infinite mathematical programming. Inf. Sci. 1992, 61, 233–250. [Google Scholar] [CrossRef]
- Verdegay, J.L. A Dual Approach to Solve the Fuzzy Linear Programming Problems. Fuzzy Sets Syst. 1984, 14, 131–141. [Google Scholar] [CrossRef]
- Tanaka, H.; Okuda, T.; Asai, K. On Fuzzy-Mathematical Programming. J. Cybern. 1974, 3, 37–46. [Google Scholar] [CrossRef]
- Zimmermann, H.-J. Description and Optimization of Fuzzy Systems. Int. J. Gen. Syst. 1976, 2, 209–215. [Google Scholar] [CrossRef]
- Zimmermann, H.-J. Fuzzy Programming and Linear Programming with Several Objective Functions. Fuzzy Sets Syst. 1978, 1, 45–55. [Google Scholar] [CrossRef]
- Chalco-Cano, Y.; Lodwick, W.A.; Osuna-Gómez, R.; Rufian-Lizana, A. The Karush-Kuhn-Tucker Optimality Conditions for Fuzzy Optimization Problems. Fuzzy Optim. Decis. Mak. 2016, 15, 57–73. [Google Scholar] [CrossRef]
- Wu, H.-C. The Karush-Kuhn-Tucker Optimality Conditions for Multi-objective Programming Problems with Fuzzy-Valued Objective Functions. Fuzzy Optim. Decis. Mak. 2009, 8, 1–28. [Google Scholar] [CrossRef]
- Li, L.; Liu, S.; Zhang, J. On fuzzy generalized convex mappings and optimality conditions for fuzzy weakly univex mappings. Fuzzy Sets Syst. 2015, 280, 107–132. [Google Scholar] [CrossRef]
- Wu, H.-C. The Optimality Conditions for Optimization Problems with Fuzzy-Valued Objective Functions. Optimization 2008, 57, 473–489. [Google Scholar] [CrossRef]
- Wu, H.-C. Duality Theory in Fuzzy Optimization Problems. Fuzzy Optim. Decis. Mak. 2004, 3, 345–365. [Google Scholar] [CrossRef]
- Wu, H.-C. Duality Theorems and Saddle Point Optimality Conditions in Fuzzy Nonlinear Programming Problems Based on Different Solution Concepts. Fuzzy Sets Syst. 2007, 158, 1588–1607. [Google Scholar] [CrossRef]
- Chalco-Cano, Y.; Silva, G.N.; Rufian-Lizana, A. On the Newton method for solving fuzzy optimization problems. Fuzzy Sets Syst. 2015, 272, 60–69. [Google Scholar] [CrossRef]
- Pirzada, U.M.; Pathak, V.D. Newton method for solving the multi-variable fuzzy optimization problem. J. Optim. Theory Appl. 2013, 156, 867–881. [Google Scholar] [CrossRef]
- Wu, H.-C. Solving Fuzzy Linear Programming Problems with Fuzzy Decision Variables. Mathematics 2019, 7, 569. [Google Scholar] [CrossRef]
- Buckley, J.J.; Feuring, T. Evolutionary Algorithm Solution to Fuzzy Problems: Fuzzy Linear Programming. Fuzzy Sets Syst. 2000, 109, 35–53. [Google Scholar] [CrossRef]
- Baykasoglu, A.; Gocken, T. A Direct Solution Approach to Fuzzy Mathematical Programs with Fuzzy Decision Variables. Expert Syst. Appl. 2012, 39, 1972–1978. [Google Scholar] [CrossRef]
- Ezzati, R.; Khorram, E.; Enayati, R. A New Algorithm to Solve Fully Fuzzy Linear Programming Problems Using the MOLP Problem. Appl. Math. Model. 2015, 39, 3183–3193. [Google Scholar] [CrossRef]
- Ahmad, T.; Khan, M.; Khan, I.U.; Maan, N. Fully Fuzzy Linear Programming (FFLP) with a Special Ranking Function for Selection of Substitute Activities in Project Management. Int. J. Appl. Sci. Technol. 2011, 1, 234–246. [Google Scholar]
- Jayalakshmi, M.; Pandian, P. A New Method for Finding an Optimal Fuzzy Solution for Fully Fuzzy Linear Programming Problems. Int. J. Eng. Res. Appl. 2012, 2, 247–254. [Google Scholar]
- Khan, I.U.; Ahmad, T.; Maan, N. A Simplified Novel Technique for Solving Fully Fuzzy Linear Programming Problems. J. Optim. Theory Appl. 2013, 159, 536–546. [Google Scholar] [CrossRef]
- Kumar, A.; Kaur, J.; Singh, P. A New Method for Solving Fully Fuzzy Linear Programming Problems. Appl. Math. Model. 2011, 35, 817–823. [Google Scholar] [CrossRef]
- Lotfi, F.H.; Allahviranloo, T.; Jondabeh, M.A.; Alizadeh, L. Solving a Full Fuzzy Linear Programming Using Lexicography Method and Fuzzy Approximate Solution. Appl. Math. Model. 2009, 33, 3151–3156. [Google Scholar] [CrossRef]
- Najafi, H.S.; Edalatpanah, S.A.; Dutta, H. A Nonlinear Model for Fully Fuzzy Linear Programming with Fully Unrestricted Variables and Parameters. Alex. Eng. J. 2016, 55, 2589–2595. [Google Scholar] [CrossRef]
- Nasseri, S.H.; Behmanesh, E.; Taleshian, F.; Abdolalipoor, M.; Taghi-Nezhad, N.A. Fully Fuzzy Linear Programming with Inequality Constraints. Int. J. Ind. Math. 2013, 5, 309–316. [Google Scholar]
- Kaur, A.; Kumar, A. Exact Fuzzy Optimal Solution of Fully Fuzzy Linear Programming Problems with Unrestricted Fuzzy Variables. Appl. Intell. 2012, 37, 145–154. [Google Scholar] [CrossRef]
- Chakraborty, D.; Jana, D.K.; Roy, T.K. A New Approach to Solve Fully Fuzzy Transportation Problem Using Triangular Fuzzy Number. Int. J. Oper. Res. 2016, 26, 153–179. [Google Scholar] [CrossRef]
- Jaikumar, K. New Approach to Solve Fully Fuzzy Transportation Problem. Int. J. Math. Its Appl. 2016, 4, 155–162. [Google Scholar]
- Baykasoglu, A.; Subulan, K. Constrained Fuzzy Arithmetic Approach to Fuzzy Transportation Problems with Fuzzy Decision Variables. Expert Syst. Appl. 2017, 81, 193–222. [Google Scholar] [CrossRef]
- Ebrahimnejad, A. A Simplified New Approach for Solving Fuzzy Transportation Problems with Generalized Trapezoidal Fuzzy Numbers. Appl. Soft Comput. 2014, 19, 171–176. [Google Scholar] [CrossRef]
- Kaur, A.; Kumar, A. A New Approach for Solving Fuzzy Transportation Problems Using Generalized Trapezoidal Fuzzy Numbers. Appl. Soft Comput. 2012, 12, 1201–1213. [Google Scholar] [CrossRef]
- von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1944. [Google Scholar]
- Nash, J.F. Two-Person Cooperative Games. Econometrica 1953, 21, 128–140. [Google Scholar] [CrossRef]
- Young, H.P. Monotonic Solutions of Cooperative Games. Int. J. Game Theory 1985, 14, 65–72. [Google Scholar] [CrossRef]
- Barron, E.N. Game Theory: An Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
- Branzei, R.; Dimitrov, D.; Tijs, S. Models in Cooperative Game Theory; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
- Curiel, I. Cooperative Game Theory and Applications: Cooperative Games Arising from Combinatorial Optimization Problems; Kluwer Academic Publishers: New York, NY, USA, 1997. [Google Scholar]
- González-Díaz, J.; García-Jurado, I.; Fiestras-Janeiro, M.G. An Introductory Course on Mathematical Game Theory; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
- Owen, G. Game Theory, 3rd ed.; Academic Press: New York, NY, USA, 1995. [Google Scholar]