1. Introduction
Bellman and Zadeh [1] initiated the research topic of fuzzy optimization. The main idea of their approach is to combine the fuzzy goals and the fuzzy decision space using aggregation operators. Tanaka et al. [2] and Zimmermann [3,4] proposed the concept of aspiration level to study linear programming problems and linear multiobjective programming problems. Herrera et al. [5] also used the concept of aspiration level and the triangular norm (t-norm) to aggregate the fuzzy goals and fuzzy constraints. On the other hand, by mimicking the probability distribution in stochastic optimization, Buckley [6], Julien [7] and Luhandjula et al. [8] considered the concept of possibility distribution to study fuzzy optimization problems. Inuiguchi [9] also used the so-called possibility and necessity measures to study modality constrained optimization problems. These approaches mainly fuzzify the crisp constraints and crisp objective functions. More precisely, given the following crisp constraints
where
and
are real numbers, we can fuzzify the crisp constraints as follows
where the membership functions are assigned using the aspiration level to describe the degree of violation for the original crisp constraints. Another method is to fuzzify the real numbers
and
using the possibility distributions.
There is another interesting approach that does not involve fuzzification. This kind of approach mainly concerns the coefficients of optimization problems. Usually, those coefficients are assumed to be fuzzy intervals (fuzzy numbers). For instance, the fuzzy linear programming problem (FLP) is formulated as follows
where the addition ⊕ and multiplication ⊗ of fuzzy intervals are involved and appear in the coefficients. Owing to unexpected fluctuation and turbulence in an uncertain environment, we sometimes cannot precisely measure the desired data. In this case, the corresponding optimization problems cannot be precisely formulated, since the data appear to be uncertain. Therefore, a reasonable way is to take fuzzy intervals or fuzzy numbers as the coefficients of these optimization problems. In other words, fuzzy optimization problems can be formulated such that the coefficients are assumed to be fuzzy intervals or fuzzy numbers. This kind of approach has become a mainstream topic in fuzzy optimization.
Regarding the fuzzy coefficients and using the Hukuhara derivative, Wu [10,11,12,13] studied the duality theorems and optimality conditions for fuzzy optimization problems. The so-called generalized Hukuhara derivative was adopted by Chalco-Cano et al. [14] to extend the Karush–Kuhn–Tucker optimality conditions to fuzzy optimization problems with fuzzy coefficients. The concept of generalized convexity was also considered by Li et al. [15] to study the optimality conditions of fuzzy optimization problems. On the other hand, regarding numerical methods, Chalco-Cano et al. [16] and Pirzada and Pathak [17] proposed Newton methods to solve fuzzy optimization problems.
Multiobjective programming problems with fuzzy objective functions were studied by Luhandjula [18], in which the approach of defuzzification was adopted. An interactive method was proposed by Yano [19] to solve multiobjective linear programming problems in which fuzzy coefficients appear in the objective functions. Regarding the applications of fuzzy multiobjective optimization problems, Ebenuwa et al. [20] proposed a multi-objective design optimization approach for the optimal analysis of buried pipes. Charles et al. [21] studied a probabilistic fuzzy goal multi-objective supply chain network problem. Roy et al. [22] studied a multiobjective multi-product solid transportation problem in which the system parameters are assumed to be rough fuzzy variables.
In this paper, we study the fuzzy multiobjective linear programming problem (FMLP) as follows
where the objective functions are fuzzy linear functions given by
In order to introduce the concept of nondominated solutions, we need to propose an ordering relation on the set of all fuzzy intervals or fuzzy numbers. Using this ordering relation, the concept of nondominated solutions of fuzzy multiobjective optimization problems can be defined. One of the main approaches is to transform the original fuzzy multiobjective optimization problem into a scalar optimization problem by choosing suitable weights; the result is a conventional optimization problem whose coefficients are real numbers. Under these settings, the important issue is to show that the optimal solution of the transformed scalar optimization problem is also a nondominated solution of the original fuzzy multiobjective optimization problem. This says that it is sufficient to solve only the transformed scalar optimization problem. As we shall see later, the set of all nondominated solutions can be a large set, which depends on the weights determined in the transformation step. In order to find the best nondominated solution, we design a genetic algorithm with a suitable fitness function.
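The weighted-sum transformation described above can be illustrated on a crisp miniature (all data and names below are hypothetical; the paper's version carries fuzzy coefficients and weights obtained from core values). The sketch scalarizes two conflicting objectives and checks that the scalar optimum is nondominated:

```python
# Weighted-sum scalarization on a crisp miniature problem (illustrative only).
def dominates(fa, fb):
    """fa dominates fb: componentwise >= with strict > in at least one component."""
    return all(a >= b for a, b in zip(fa, fb)) and any(a > b for a, b in zip(fa, fb))

# Hypothetical feasible set: 11 grid points in [0, 1].
X = [i / 10 for i in range(11)]
objectives = lambda x: (x, 1 - x)      # two conflicting linear objectives
weights = (0.3, 0.7)                   # assumed weights (here the core values would enter)

# Solve the scalarized problem: maximize the weighted sum of the objectives.
best = max(X, key=lambda x: sum(w * f for w, f in zip(weights, objectives(x))))

# The scalar optimum is nondominated: no feasible point dominates it.
assert not any(dominates(objectives(x), objectives(best)) for x in X)
```

With these weights the scalarized objective is 0.7 - 0.4x, so the optimum sits at x = 0, and the dominance check confirms no feasible point improves both objectives simultaneously.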
There are many ways to formulate the scalar optimization problem. The issue is to assign suitable weights to the fuzzy objective functions. Usually, the weights are determined subjectively by the decision maker. In order to avoid the possible bias caused by this subjectivity, this paper considers a cooperative game formulated from the fuzzy objective functions. The weights are then mechanically determined according to the core values of this cooperative game. This mechanical assignment rules out the bias introduced when decision-makers determine the weights by intuition.
Game theory mainly concerns the behavior of players, cooperative or non-cooperative, whose decisions may affect one another. The pioneering work was initiated by von Neumann and Morgenstern [23]. Nash [24] studied the two-person cooperative game and proposed the concept of Nash equilibrium. Another solution concept, called monotonic solutions, was proposed by Young [25]. The monographs by Barron [26], Branzei et al. [27], Curiel [28], González-Díaz et al. [29] and Owen [30] also provide detailed accounts of game theory. On the other hand, Yu and Zhang [31] used the generalized triangular fuzzy number to study a cooperative game with fuzzy payoffs, where three solution concepts called fuzzy cores were defined using the fuzzy max order.
Jing et al. [32] studied a bi-objective optimization problem in which multi-benefit allocation constraints are modeled. The approach of Lokeshgupta and Sivasubramani [33] treated the two objective functions as a cooperative game with two players. Alternatively, the approach of Lee [34] treated the two objective functions as a non-cooperative game with two players and sought the Nash equilibrium. Meng and Xie [35] formulated a competitive–cooperative game method to obtain the optimal preference solutions. A three-objective optimization problem was studied by Li et al. [36], in which a three-player game was formulated. The approaches of Yu et al. [37] and Zhang et al. [38] also formulated three-player games. Zhang et al. [38] considered the sub-game perfect Nash equilibrium, and Yu et al. [37] incorporated a genetic algorithm to obtain the solutions. A four-objective optimization problem by Chai et al. [39] and a bi-objective optimization problem by Cao et al. [40] were solved using genetic algorithms in which non-cooperative game theory was adopted.
Solving multiobjective optimization problems via genetic algorithms has attracted attention for a long time. We refer to the monographs by Deb [41], Osyczka [42], and Tan et al. [43] for more details on this topic. For the topic of fuzzy multiobjective optimization problems, we refer to the monograph by Sakawa [44]. On the other hand, Tiwari et al. [45] studied a nonlinear optimization problem in which a genetic algorithm was proposed to solve the problem.
In Section 2, a multiobjective optimization problem with fuzzy coefficients is introduced, and its nondominated solutions are defined. In Section 3, the concept of core values in cooperative games is introduced. In Section 4, the multiple objective functions are formulated as a cooperative game. In Section 5, the suitable coefficients of the scalar optimization problem are determined using the core values of the formulated cooperative games. Different settings of coefficients can generate different core-nondominated solutions. Therefore, in Section 6, a genetic algorithm is designed to find the best core-nondominated solution by providing a suitable fitness function. Finally, a practical numerical example is provided in Section 7 to illustrate the possible usage of the methodology proposed in this paper.
2. Formulation
The fuzzy set
in
is defined by a membership function
. The
-level set of
is denoted and defined by
for all
. According to the usual topology of
, the 0-level set
is defined to be the closure of the support
In other words, the 0-level set
is defined by
Then, we have
for
with
.
Given a subset
A of
, we can treat it as a fuzzy set
in
with membership function defined by
In particular, if
A is a singleton
, then we write
. In other words, each real number
can be identified with the membership function
.
We say that is a fuzzy interval when it is a fuzzy set in satisfying the following conditions.
is normal; that is, for some .
The membership function is quasi-concave and upper semicontinuous.
The 0-level set is a closed and bounded subset of .
Since
is normal, it says that the
-level sets
are nonempty for all
. We also see that the
-level sets
of fuzzy interval
are bounded closed intervals given by
Let
denote the family of all fuzzy intervals in
. The fuzzy optimization problem considers the fuzzy-valued functions
, which is defined on a nonempty subset
X of
. It means that, for each
, the function value
is a fuzzy interval. Now, given any fixed
, we can generate two real-valued functions
which are defined by
Given any two fuzzy intervals
and
in
, according to the extension principle, the addition and multiplication between
and
are defined by
and
In this paper, the following fuzzy multiobjective optimization problem (FMOP)
is considered, where
X is a feasible region and is a subset of
,
are vectors of fuzzy intervals and
are the fuzzy objective functions of (FMOP) for
. For example, we can take
where the coefficients
for
and
are taken to be the fuzzy intervals. In particular, the fuzzy multiobjective linear programming problem (FMLP) is formulated below
where the objective functions are fuzzy-valued functions and the constraint functions are real-valued functions. The meaning of nondominated solution of problem (FMOP) should be defined based on an ordering relation among the set of all fuzzy intervals, which is shown below.
Definition 1. Let and be two fuzzy intervals in .
We see that implies . The ordering relation “≺” is transitive on in the sense that and imply .
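Although the precise relation is fixed by Definition 1 above, one common level-set ordering for fuzzy intervals compares the endpoints of every α-level. The following sketch implements such an ordering for triangular fuzzy intervals (the representation (a, b, c) and the sampled α-grid are illustrative assumptions, not the paper's notation):

```python
# A level-set ordering for triangular fuzzy intervals (a, b, c) with a <= b <= c.
def level_set(tri, alpha):
    a, b, c = tri
    # Closed-interval endpoints of the alpha-level set.
    return (a + alpha * (b - a), c - alpha * (c - b))

def preceq(A, B, alphas=None):
    """A precedes B when both endpoints of every sampled alpha-level of A
    lie below the corresponding endpoints of B."""
    alphas = alphas or [i / 10 for i in range(11)]
    return all(level_set(A, t)[0] <= level_set(B, t)[0]
               and level_set(A, t)[1] <= level_set(B, t)[1] for t in alphas)

A, B, C = (0, 1, 2), (1, 2, 3), (2, 3, 4)
assert preceq(A, B) and preceq(B, C) and preceq(A, C)   # transitive on this chain
assert not preceq(B, A)
```

The transitivity checked here mirrors the property stated for “≺” above.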
Definition 2. Given a feasible solution , when there does not exist another feasible solution satisfying and this feasible solution is said to be a nondominated solution of the fuzzy multiobjective optimization problem (FMOP). For convenience, we write
for
and
. Let
be a partition of
. For
, we define the following functions
where
for all
and
satisfying
The following scalar optimization problem
is considered. The nondominated solutions of problem (FMOP) can be obtained by following the theorem presented below.
Theorem 1. For for all and , if is an optimal solution of problem (SOP), then it is also a nondominated solution of problem (FMOP).
Proof. Assume that
is not a nondominated solution of problem (FMOP); we shall derive a contradiction. In this case, there exists
satisfying
and
For
and
, by referring to (
1), we have
which imply
Since
, the following conditions are satisfied.
For
, we have
There exists
satisfying
or
We want to show that the following conditions are satisfied.
For
, we have
There exists
satisfying
Using (
3), it follows
for all
. Since
is a partition of
, we consider the following cases.
Using (
4), there exists
satisfying
and
Using (
5), there exists
satisfying
and
Therefore, we conclude that there exists
satisfying
Since each
, we have
which also says
Using (
2), we obtain
Since
is an optimal solution of problem (SOP), this leads to a contradiction. This completes the proof. □
The assignment of the coefficients for and is frequently determined by the decision makers via intuition. We may argue that this kind of determination can be subject to bias caused by the decision makers. Therefore, a mechanical way is suggested in this paper, following the solution concepts of game theory, to determine the coefficients . The main idea is that the objective functions and are treated as the payoffs of players.
3. Cores of Cooperative Games
Given a set of players, any nonempty subset S of N is called a coalition. We consider the function satisfying . A cooperative game is defined to be an ordered pair . Given any coalition S, the function value is interpreted as the worth of coalition S in the game .
Given a payoff vector (or an allocation) , each represents the value received by player i for . Many concepts are defined below.
We say that the vector
is a pre-imputation when the following group rationality is satisfied
We say that the vector
is an imputation when it is a pre-imputation and satisfies the following individual rationality
The set of all imputations is denoted by , and the set of all pre-imputations is denoted by .
Given a coalition
S and a payoff vector
, the excess of
S with respect to
is defined by
Now, the core of a game
is defined by
It is also clear to see
We can see that the core of a game
is the set of all imputations for which every excess is nonpositive.
4. Formulation of Cooperative Game
Given any fixed
j, the function
is treated as the payoff of player
for
, and the function
is treated as the payoff of player
for
. In this case, the player is taken to be an ordered pair
. Therefore, the set of all players is given by
We have a total of
players corresponding to
functions. Let
where
are regarded as the ideal payoffs for
and
. Therefore, we must assume
which means that the true payoff
of player
may not reach the ideal payoff
.
Given
with
, this coalition is written as
. By intuition, the payoff of coalition
S must be larger than the sum of the individual payoffs of the players in
S such that the cooperation is meaningful. In other words, the following inequality
must be satisfied. Also, the payoff of coalition
S cannot be larger than the total ideal payoffs on
S, which means that the payoff of coalition may not reach the total ideal payoffs. That is to say, the following inequalities must be satisfied:
Now, the payoff of coalition
S with
is defined by
where
is a non-negative constant. The second term
can be treated as the extra payoff subject to cooperation by forming a coalition
S with
. We assume that this extra payoff is obtained by taking a non-negative constant
that multiplies the average of individual payoffs. In this situation, the non-negative constant
should be independent of any coalition
S with
.
According to the upper bound of
given in (
6), the constant
must satisfy
Equivalently, we obtain
for
. Now, we define
Then, we have
for
. We also define
for convenience.
5. Solving the Problem Using Core Values
Recall that the core of cooperative game
is given by
Using (
7), it follows that
if and only if
satisfies
and
Since the payoffs
are non-negative for
, the positive core
is defined by
We normalize the values of
to be
given by
In this case, we also write
to denote the set of all normalized values
obtained from
.
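The normalization just described simply rescales the positive core values so they sum to one and can serve as scalarization weights. A minimal sketch (the input vector is illustrative):

```python
# Normalize positive core values so that they sum to 1 and can be used as
# the weights of the scalar optimization problem (SOP).
def normalize(core_values):
    total = sum(core_values)
    return [x / total for x in core_values]

w = normalize([2.0, 1.0, 1.0])
assert abs(sum(w) - 1.0) < 1e-12
assert w == [0.5, 0.25, 0.25]
```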
Now, the following linear programming problem
is considered. The following property is useful for further study.
Proposition 1. Let be an optimal solution of problem (LP). If for all , then .
Proof. Since
is a feasible solution of problem (LP), we have
We take
and
for
. Since
for all
, it follows that
is a feasible solution of problem (LP) with objective value
, which implies
By taking
in (
12), we also have
Therefore, we obtain
. This completes the proof. □
From the payoff defined in (
7), the formulation of cooperative game
is determined by the non-negative constants
for
, which also says that the payoff function
v must be determined by the vector
. We also write
Proposition 1 shows that the optimal solution
is determined by the payoff functions
v. In this case, we can write
in Proposition 1. Then, the normalized values
are given by
Now, we assign the normalized core values given in (
13) to the coefficients of scalar optimization problem (SOP). In this case, according to Theorem 1, the optimal solution of problem (SOP) is called the core-nondominated solution of problem (FMOP), which means that the solution concept of core is involved. Therefore, we can solve the following scalar optimization problem
to obtain the core-nondominated solution. Since the core-nondominated solution depends on the vector
of non-negative constants, we also write
for convenience.
Let
be the set of all core-nondominated solutions of problem (FMOP). From (
10), we have
Since
is a large set, we intend to find a best core-nondominated solution from
by using the genetic algorithm. In this case, we plan to maximize the following fitness function
where
for
.
6. Genetic Algorithms
The purpose is to design a genetic algorithm such that a best core-nondominated solution can be obtained from the set
of all core-nondominated solutions of problem (FMOP). Therefore, we are going to maximize the following fitness function
We shall evolve the non-negative vector
by performing crossover and mutation to find a best chromosome according to the fitness function given in (
14).
From (
10), the non-negative constants
must satisfy
In this algorithm, each non-negative constant
will be a random number from the closed interval
for
. The chromosome in this algorithm is defined to be a vector
in
satisfying
and
for
.
Two phases will be performed in this algorithm. Since the scalar optimization problem (SOP) depends on the partition of , phase I obtains the approximated best core-nondominated solution when the partition is fixed. In phase II, we use finer partitions of to repeat the algorithm of phase I until the approximated best core-nondominated solution can no longer be improved.
6.1. Phase I
In phase I, given a fixed partition of , we are going to obtain the approximated best core-nondominated solution by solving the scalar optimization problem (SOP) through performing crossover and mutation operations.
Proposition 2. (Crossover)
Let and be two chromosomes satisfying Given any , we consider the following crossover operation where the components are given by Then, the new chromosome γ also satisfies for . Proof. Since and , it is clear to see that the convex combination also satisfies . □
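Proposition 2 says the convex-combination crossover keeps the offspring inside the same box as its parents. A sketch of this operation (the bound vector u is a hypothetical stand-in for the upper bounds derived in (9)):

```python
import random

# Convex-combination crossover of Proposition 2: each offspring component is a
# convex mix of the parents' components, so it stays inside the box [0, u_j].
def crossover(alpha1, alpha2, lam):
    return [lam * a + (1 - lam) * b for a, b in zip(alpha1, alpha2)]

u = [2.0, 3.0, 1.5]                                # illustrative upper bounds
p1 = [random.uniform(0, uj) for uj in u]
p2 = [random.uniform(0, uj) for uj in u]
child = crossover(p1, p2, random.random())

# Feasibility is preserved without any repair step.
assert all(0 <= g <= uj for g, uj in zip(child, u))
```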
Given a vector
, we shall perform the mutation to obtain
from
. Given a fixed
, we first generate a random Gaussian number with mean zero and standard deviation
. Then, we assign
The new mutated chromosome
is defined by
where
is given in (
9) for
.
Proposition 3. (Mutation)
Suppose that is a chromosome. We consider the mutation in the way of (15). Then, the new mutated chromosome γ satisfies Proof. It is clear to see from (
15). □
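The mutation of Proposition 3 perturbs each gene with a Gaussian step and pushes it back into the feasible box, as (15) prescribes. A sketch with illustrative bounds and standard deviation:

```python
import random

# Gaussian mutation in the spirit of (15): perturb each gene with a
# zero-mean Gaussian step, then clip back into [0, u_j] so the mutated
# chromosome remains a valid chromosome.
def mutate(alpha, u, sigma):
    return [min(max(a + random.gauss(0.0, sigma), 0.0), uj)
            for a, uj in zip(alpha, u)]

u = [2.0, 3.0, 1.5]                           # illustrative upper bounds
child = mutate([1.0, 2.5, 0.2], u, sigma=0.5)
assert all(0 <= g <= uj for g, uj in zip(child, u))
```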
6.2. Phase II
In phase II, we use a finer partition of to perform the algorithm proposed in phase I. Assume that the partition of was considered in phase I. Now, given a new partition of that is finer than in the sense of , we perform the algorithm proposed in phase I using this new partition . In other words, we repeatedly execute phase II with finer and finer partitions until the approximated best core-nondominated solution can no longer be improved. Two ways are suggested in this paper to obtain the finer partition, as follows.
For the first way, we can simply take the partition
of
such that
is equally divided and satisfies
. The second way is to design another genetic algorithm to generate a new finer partition
by evolving the old partition
. More precisely, a population
is taken from the old partition
. After performing the crossover and mutation operations in the old population
, we can generate new points
such that a new finer partition
is obtained as follows
For example, the crossover and mutation operations can be designed as follows.
(Crossover operation). Take two and from the old partition . We perform the convex combination for different , which can generate the different new points.
(Mutation operation). Take from . We perform the mutation , which is a random number in . When is in , the new generated point is taken to be . When , the new generated point is taken to be .
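The refinement step above can be sketched as follows: convex combinations of old partition points play the role of crossover, clipped random shifts play the role of mutation, and the union of old and new points yields a finer partition (the number of new points, shift range, and seed are illustrative assumptions):

```python
import random

# Refine a partition of [0, 1]: convex combinations of old points (crossover)
# plus clipped random shifts (mutation), then a sorted union so that the new
# partition contains the old one and is therefore finer.
def refine(partition, n_new=4, seed=0):
    rng = random.Random(seed)
    pts = set(partition)
    for _ in range(n_new):
        a, b = rng.sample(partition, 2)
        lam = rng.random()
        pts.add(lam * a + (1 - lam) * b)          # crossover: stays in [0, 1]
        c = rng.choice(partition)
        shifted = c + rng.uniform(-0.1, 0.1)
        pts.add(min(max(shifted, 0.0), 1.0))      # mutation: clip into [0, 1]
    return sorted(pts)

old = [0.0, 0.25, 0.5, 0.75, 1.0]
new = refine(old)
assert set(old) <= set(new)                       # refinement contains the old partition
assert new[0] == 0.0 and new[-1] == 1.0
```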
When a new finer partition is generated, the algorithm in phase I is again performed using this new partition . In this case, we can obtain a new approximated best core-nondominated solution. Also, this partition is now regarded as the old partition. Afterward, a new finer partition of is also generated to satisfy . Now, we again perform the algorithms in phase I using this new finer partition .
6.3. Computational Procedure
Given a fixed partition of , we present the detailed computational procedure of the genetic algorithm for phase I as follows.
Step 1 (Initialization). The population size of this algorithm is assumed to be
p. The chromosomes in the initial population are determined by setting
, where
and
are taken to be the random numbers in
for all
and
, where
is given in (
9). For each
, we solve the linear programming problem in (
11) and use Proposition 1 to obtain the positive core
for
. Using (
13), we also calculate the normalized positive core
for
and
. For each chromosome
, we assign its corresponding fitness value by calculating the following expression
for
. Then, the
p chromosomes
for
are ranked in descending order according to their fitness values
for
. In this case, the top one is saved to be the (initial) best chromosome and is named as
. We also save
to be the old elites
given by
for
and
.
Step 2 (Tolerance). We set the tolerance . We also set the maximum number of iterations for which the tolerance is satisfied. We set to denote the initial generation. We set to denote the first time that the tolerance is satisfied. This step is related to the stopping criterion, which avoids being trapped in a local optimum; it becomes clearer in step 7.
Step 3 (Mutation). We set
to mean the
t-th generation. By referring to Proposition 3, each chromosome
is mutated, and is assigned to
using (
15) for
. For each
, the random Gaussian numbers with mean zero and standard deviation
are generated in which
is taken by
The constant
is the proportionality to scale
and the constant
represents the offset. Then, we assign
In this case, the mutated chromosome
is obtained in which the components are given by
and
for
and
. After this mutation step, we shall have
chromosomes
for
.
Step 4 (Crossover). By referring to Proposition 2, randomly select
and
for
with
. A random number
is generated. In this case, the new chromosome is taken by
where the components are given by
After this step, we shall have
chromosomes
for
.
Step 5 (Calculate New Fitness). Using Proposition 1 and (
13), for each new chromosome
, we calculate the normalized positive core
by solving the linear programming problem in (
11) for
and
. For each
, we assign its corresponding fitness value by calculating the following expression
for
.
Step 6 (Selection). The p old elites for and the new chromosomes for obtained from Steps 3 and 4 are ranked in descending order according to their fitness values and , respectively. In this case, the top p chromosomes are saved as the new elites for . Also, the top one is saved as the best chromosome that is named as for the t-th generation.
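The elitist selection of Step 6 can be sketched as follows: pool the old elites with the new chromosomes, rank by fitness, and keep the top p as the next generation's elites (the toy fitness function below is illustrative, not the paper's fitness in (14)):

```python
# Elitist selection: pool old elites and offspring, rank in descending order
# of fitness, keep the top p as new elites, and record the current best.
def select(elites, offspring, fitness, p):
    pool = elites + offspring
    pool.sort(key=fitness, reverse=True)
    return pool[:p], pool[0]                      # (new elites, best chromosome)

fitness = lambda chrom: -sum((g - 1.0) ** 2 for g in chrom)   # toy fitness
elites = [[0.0, 0.0], [1.0, 2.0]]
offspring = [[1.0, 1.0], [2.0, 2.0]]
new_elites, best = select(elites, offspring, fitness, p=2)

assert best == [1.0, 1.0]
assert len(new_elites) == 2
```

Keeping the elites in the pool guarantees that the best fitness value never decreases from one generation to the next.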
Step 7 (Stopping Criterion). After step 6, we may obtain , which may be trapped in a local optimum. In order to escape this trap, we proceed with more iterations, times, by referring to step 2 even though the criterion is satisfied. When the criterion is satisfied and the iterations reach times, we stop the algorithm and return the solution for phase I. Otherwise, the new elites for are copied as the next generation for . Then, we set and proceed to step 3. We also remark that the number counts the times that the tolerance is satisfied.
After step 7, given a fixed partition of , an approximated best core-nondominated solution can be obtained, which is denoted by . It also means that this solution is determined by the partition . Then, the algorithm proceeds to phase II by considering finer partitions of as follows.
Step 1. A new finer partition is generated to satisfy .
Step 2. By using this new finer partition to perform the genetic algorithm in phase I, we can obtain a new approximated best core-nondominated solution .
Step 3. Given a pre-determined tolerance , once the criterion is reached, the algorithm halts and returns the final solution . Otherwise, is set as the old partition , and we proceed to step 1 to generate a new finer partition.
Finally, after step 3, we obtain the approximated best core-nondominated solution. In other words, by referring to Theorem 1, an approximated nondominated solution of problem (FMOP) is obtained.
7. Numerical Example
The membership function of triangular fuzzy interval
is defined by
Then, its
-level set
is given by
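The α-level sets of a triangular fuzzy interval take the standard closed-interval form, which can be computed directly (the triple (a, b, c) denotes the left endpoint, peak, and right endpoint; this is a sketch of the standard formula rather than the paper's specific data):

```python
# Alpha-level sets of a triangular fuzzy interval (a, b, c): the standard
# closed interval [a + alpha*(b - a), c - alpha*(c - b)].
def tri_level(a, b, c, alpha):
    return (a + alpha * (b - a), c - alpha * (c - b))

assert tri_level(1, 2, 4, 0.0) == (1.0, 4.0)   # 0-level: closure of the support
assert tri_level(1, 2, 4, 1.0) == (2.0, 2.0)   # 1-level: the peak (normality)
assert tri_level(1, 2, 4, 0.5) == (1.5, 3.0)   # intermediate level
```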
In particular, we consider the triangular fuzzy intervals as follows
Then, their
-level sets are given by
In this case, the following fuzzy linear programming problem
will be solved. According to the above formulation, we equally divide the unit closed interval
by taking
,
and
. Let
. Using the above notations, we obtain
and
We see that
. Next, we are going to solve the following scalar optimization problem
where the feasible set
X is given by
In order to formulate the corresponding cooperative game, we must obtain the ideal objective values
given by
Therefore, we consider five players
such that the cooperative game
is defined by
and
By referring to (
8), for
, we have
Using (
9), we obtain
For
, we have
Therefore, we obtain
For
, we have
Therefore, we obtain
Finally, we obtain
For
, according to (
7), we have
More precisely, for
, we have
For
, we have
For
, we have
Finally, for
, we have
Under the above settings, the computational procedure is presented below.
Step 1 (Initialization). The population size is taken to be
. The initial population is given by
such that
and
are random numbers in
for all
and
. Given each chromosome
, we solve the linear programming problem in (
11) given by
More precisely, we are going to solve the following linear programming problem
where the detailed constraints are given below
Proposition 1 says that the optimal solution
of problem (LP) is a positive core for
. According to (
13), we also calculate the normalized positive core as follows
for
and
. Each chromosome
is assigned a fitness value given by
for
. We rank the 20 chromosomes
for
in descending order of their corresponding fitness values
for
. The top one is saved to be the (initial) best chromosome named as
. Save
as an elite
given by
for
and
.
Step 2 (Tolerance). Set , , and the tolerance .
Step 3 (Mutation). We set
to mean the
t-th generation. For
, each chromosome
is mutated, and is assigned to
by using (
15). Generate the random Gaussian numbers with mean zero and standard deviation
, where
is taken to be the following form
The constant
is the proportionality to scale
and the constant
represents the offset. In this example, we take
and
for
. Then, we assign
Therefore, we obtain the mutated chromosome
with components given by
and
for
and
. After this step, we shall have 40 chromosomes
for
.
Step 4 (Crossover). We randomly select
for
with
. We generate a random number
, the new chromosome is given by
with components
After this step, we shall have 41 chromosomes for .
Step 5 (Calculate New Fitness). For the new generated chromosomes
for
, using Proposition 1 and (
13), we calculate the normalized positive value
by solving the linear programming problem in (
11) for
and
. Each chromosome
is assigned a fitness value given by
for
.
Step 6 (Selection). We rank the 20 old elites
and the 41 new chromosomes
obtained from Steps 3 and 4 in descending order of their corresponding fitness values
and
. The top 20 chromosomes are saved to be the new elites
Also, the top one is saved to be the best chromosome, named as , for the t-th generation.
Step 7 (Stopping Criterion). After step 6, we may obtain
, which seems to be trapped in the local optimum. In order to escape this trap, we proceed more iterations for
times even though the criterion
is satisfied. When the criterion
is satisfied and the iterations reach 20 times, we stop the algorithm and return the solution for phase I. Otherwise, the new elites
must be copied to be the next generation
We set and the algorithm proceeds to step 3. Note that the number counts the times for satisfying the tolerance .
The computer code is implemented using Microsoft Excel VBA. The best fitness value is and the approximated core-nondominated solution is .
Now, we consider finer partitions of according to the suggestion of phase II.
Step 1. From
Section 6.2, we consider a new finer partition by equally dividing the unit interval
. Therefore, we take
It is clear to see .
Step 2. Using this new finer partition from Step 1 and the genetic algorithm in phase I, a new approximated best core-nondominated solution can be obtained.
Step 3. Step 2 says . Therefore, the final best core-nondominated solution is given by .
Finally, using Theorem 1, we obtain the approximated nondominated solution of the original fuzzy linear programming problem.
8. Conclusions
This paper proposes a new methodology by incorporating the core values of cooperative games and the genetic algorithms to solve the fuzzy multiobjective optimization problem, which is a new attempt for solving this kind of problem. Usually, the fuzzy multiobjective optimization problem can be transformed into a conventional single-objective optimization problem such that the suitable weights are determined by the decision makers. In order to avoid the possible biased assignment of weights, a mechanical procedure is proposed in this paper by assigning the core values of cooperative game as the weights of this conventional single-objective optimization problem.
The purpose is to use popular numerical optimization methods to solve this conventional single-objective optimization problem. For example, a fuzzy multiobjective linear programming problem can be transformed into a conventional single-objective linear programming problem. In this case, the simplex method can be used to solve the desired problem. Frequently, the core-nondominated solutions form a large set. Therefore, a genetic algorithm is adopted to obtain the best core-nondominated solution from this large set of core-nondominated solutions. This paper does not intend to use the genetic algorithm to directly solve fuzzy multiobjective optimization problems; the genetic algorithm used in this paper only obtains the best core-nondominated solution from a large set of core-nondominated solutions. The monograph by Sakawa [
44] provides methods for using genetic algorithms to directly solve fuzzy multiobjective optimization problems. Although the genetic algorithm is adopted in this paper to obtain the best core-nondominated solution, other heuristic algorithms such as Particle Swarm Optimization, Scatter Search, Tabu Search, Ant Colony Optimization, Artificial Immune Systems, and Simulated Annealing can also be used to obtain the best core-nondominated solutions.
Although the core values of cooperative games are considered in this paper, many other solution concepts of cooperative games can also be adopted to set up the conventional single-objective optimization problem. The theory of non-cooperative games may be another way to set up this problem. Both directions can be future research.