1. Introduction
Solving the fuzzy linear programming problem has received considerable attention for a long time. Bellman and Zadeh [1] first studied fuzzy optimization problems by using aggregation operators to combine the fuzzy goals and the fuzzy decision space. Tanaka et al. [2], Zimmermann [3,4] and Herrera et al. [5] considered the aspiration level approach to study fuzzy linear programming problems. Verdegay [6] studied the fuzzy primal and dual problems. Buckley [7], Julien [8] and Luhandjula et al. [9] solved mathematical programming problems based on possibility. Inuiguchi [10] studied the so-called modality constrained programming problems based on the possibility and necessity measures.
Considering fuzzy coefficients in optimization problems is also an important issue; in that setting the decision variables are still assumed to be nonnegative real numbers. Wu [11,12,13,14] used the so-called Hukuhara derivative to study duality theorems and optimality conditions for fuzzy optimization problems. Li et al. [15] used generalized convexity to study optimality conditions. A Newton method was proposed by Chalco-Cano et al. [16] and by Pirzada and Pathak [17] to solve fuzzy optimization problems. In this paper, we solve fuzzy linear programming problems in which the decision variables are assumed to be nonnegative fuzzy numbers, which complicates the problem considerably. The existing articles on fuzzy linear programming problems with fuzzy decision variables always assume the fuzzy decision variables to be triangular or trapezoidal fuzzy numbers because of their simple forms. In this paper, the fuzzy decision variables are assumed to be general bell-shaped fuzzy numbers. This is the first attempt to solve this kind of difficult problem.
Buckley and Feuring [7] used an evolutionary algorithm to solve the so-called fully fuzzified linear programming problems in which the coefficients and decision variables were assumed to be triangular fuzzy numbers. Ezzati et al. [18] studied fully fuzzy linear programming problems in which the coefficients and decision variables were also assumed to be triangular fuzzy numbers, by converting the original fuzzy problem into a multiobjective linear programming problem. Ahmad et al. [19], Jayalakshmi and Pandian [20], Khan et al. [21], Kumar et al. [22], Lotfi et al. [23], Najafi et al. [24] and Nasseri et al. [25] also studied fully fuzzy linear programming problems based on triangular fuzzy numbers. Kaur and Kumar [26] studied fully fuzzy linear programming problems based on trapezoidal fuzzy numbers. Baykasoglu and Gocken [27] used the particle swarm optimization method to solve fuzzy mathematical programs with fuzzy decision variables in which only triangular fuzzy numbers were taken into account. The main reason for considering triangular and trapezoidal fuzzy numbers is that they can be represented and manipulated parametrically. In other words, the above methods become invalid when the fuzzy quantities are not triangular or trapezoidal fuzzy numbers. This drawback is overcome in this paper.
Fuzzy transportation problems are frequently formulated as fuzzy linear programming problems with fuzzy decision variables. However, the fuzzy quantities are still assumed to be triangular or trapezoidal fuzzy numbers to avoid complication. Chakraborty et al. [28] and Jaikumar [29] solved fully fuzzy transportation problems using triangular fuzzy numbers. Baykasoglu and Subulan [30] studied fuzzy transportation problems with fuzzy decision variables in which the fuzzy quantities are also taken to be triangular fuzzy numbers. They used the so-called constrained fuzzy arithmetic (ref. Klir and Pan [31]) to transform the fuzzy transportation problems with fuzzy decision variables into α-level problems for each α, where the α-level problems are crisp mathematical programming problems. Ebrahimnejad [32] and Kaur and Kumar [33] solved fuzzy transportation problems using so-called generalized trapezoidal fuzzy numbers. Kaur and Kumar [26] also studied unbalanced fully fuzzy minimal cost flow problems in which the fuzzy quantities are taken to be LR fuzzy numbers. The above existing methods did not consider the error between the optimal solution and the numerical optimal solution. One of the contributions of this paper is to obtain an error bound between the optimal solution and the numerical optimal solution.
The fuzzy linear programming problem with fuzzy decision variables is difficult because the decision variables are taken to be membership functions. In this paper, we use the basic properties of fuzzy numbers to transform the original fuzzy linear programming problem into a scalar optimization problem in which the decision variables are monotonic functions defined on the unit interval [0,1]. In order to solve this transformed scalar optimization problem, we consider its discretization, obtained by equally dividing the unit interval into subintervals, which can be formulated as a linear programming problem. One of the purposes of this paper is to derive an analytic formula for the error estimation of the approximate optimal solution. To do so, we need to study the dual problem of the discretized linear programming problem. The existence of optimal solutions is also studied in this paper; some mild sufficient conditions are needed to guarantee it.
This paper is organized as follows. In Section 2, the fuzzy linear programming problems with fuzzy decision variables are formulated. In order to solve the original problem, it is transformed into a scalar optimization problem, and the solution concepts and some related properties are established. In Section 3, we introduce a discretized version of the transformed scalar optimization problem by dividing the unit interval [0,1] into subintervals of equal length. The dual problem of this discretized problem is also formulated in order to design the computational procedure. In Section 4, we derive an analytic formula for the error bound. The concept of asymptotic no duality gap and the related results are established. In Section 5, we study the existence of optimal solutions by providing some mild sufficient conditions. In Section 6, we derive a tighter error bound for the nonnegative case. In Sections 7 and 8, the computational procedures are proposed, and two numerical examples are provided to demonstrate the usefulness of the practical algorithm.
2. Formulation
The fuzzy subset Ã of ℝ is defined by a membership function Ã : ℝ → [0,1] (we identify the fuzzy set with its membership function). The α-level set of Ã, denoted by Ã_α, is defined by Ã_α = {x ∈ ℝ : Ã(x) ≥ α} for all α ∈ (0,1]. The 0-level set Ã_0 is defined as the closure of the set {x ∈ ℝ : Ã(x) > 0}. It is clear to see that Ã_β ⊆ Ã_α for 0 < α ≤ β ≤ 1.
Let Ã and B̃ be two fuzzy subsets of ℝ. According to the extension principle, the addition and multiplication between Ã and B̃ are defined by
(Ã ⊕ B̃)(z) = sup_{x+y=z} min{Ã(x), B̃(y)}
and
(Ã ⊗ B̃)(z) = sup_{x·y=z} min{Ã(x), B̃(y)}.
Let Ã be a fuzzy subset of ℝ. We say that Ã is a fuzzy number if and only if the following conditions are satisfied:
Ã is normal, i.e., Ã(x) = 1 for some x ∈ ℝ;
Ã is convex, i.e., the membership function is quasi-concave;
the membership function is upper semicontinuous;
the 0-level set Ã_0 is a closed and bounded subset of ℝ.
It is well-known that each α-level set Ã_α of a fuzzy number Ã is a bounded closed interval in ℝ, which is also denoted by Ã_α = [Ã^L_α, Ã^U_α].
Remark 1. Let Ã be a fuzzy number. Then we have the following properties:
Ã^L_α ≤ Ã^U_α for all α ∈ [0,1];
Ã^L_α is increasing with respect to α on [0,1];
Ã^U_α is decreasing with respect to α on [0,1].
We say that Ã is a nonnegative fuzzy number if and only if Ã^L_α ≥ 0 for all α ∈ [0,1]; that is, if x < 0 then Ã(x) = 0. Similarly, we say that Ã is a nonpositive fuzzy number if and only if Ã^U_α ≤ 0 for all α ∈ [0,1]; that is, if x > 0 then Ã(x) = 0.
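For intuition only, the α-level machinery above can be sketched numerically. The snippet below is an illustration, not part of the paper's development: it computes α-level sets of a triangular fuzzy number, whose α-cuts have the well-known closed form [a + α(b − a), c − α(c − b)], and checks the monotonicity properties stated in Remark 1 (the function name is ours).

```python
def alpha_cut_triangular(a, b, c, alpha):
    """alpha-level set (A_L(alpha), A_U(alpha)) of the triangular
    fuzzy number with support [a, c] and peak at b."""
    return (a + alpha * (b - a), c - alpha * (c - b))

alphas = [k / 10 for k in range(11)]
cuts = [alpha_cut_triangular(1.0, 2.0, 4.0, al) for al in alphas]
lowers = [lo for lo, _ in cuts]
uppers = [up for _, up in cuts]
# Remark 1: lower endpoints increase and upper endpoints decrease in alpha
assert all(l1 <= l2 for l1, l2 in zip(lowers, lowers[1:]))
assert all(u1 >= u2 for u1, u2 in zip(uppers, uppers[1:]))
```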
Let Ã and B̃ be two fuzzy numbers. Then
(Ã ⊕ B̃)_α = [Ã^L_α + B̃^L_α, Ã^U_α + B̃^U_α]
and
(Ã ⊗ B̃)_α = [min S_α, max S_α], where S_α = {Ã^L_α B̃^L_α, Ã^L_α B̃^U_α, Ã^U_α B̃^L_α, Ã^U_α B̃^U_α}.
In particular, if Ã is a nonnegative fuzzy number, then
(Ã ⊗ B̃)_α = [min{Ã^L_α B̃^L_α, Ã^U_α B̃^L_α}, max{Ã^L_α B̃^U_α, Ã^U_α B̃^U_α}].
Therefore we can also consider the following two cases.
Suppose that Ã and B̃ are nonnegative fuzzy numbers. Then
(Ã ⊗ B̃)_α = [Ã^L_α B̃^L_α, Ã^U_α B̃^U_α].
Suppose that Ã is a nonpositive fuzzy number and B̃ is a nonnegative fuzzy number. Then
(Ã ⊗ B̃)_α = [Ã^L_α B̃^U_α, Ã^U_α B̃^L_α].
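These α-level arithmetic rules can be exercised directly on intervals. The sketch below uses our own helper names (not from the paper): endpoint addition for the sum, the min/max-over-products rule for the product, and a check that the general rule collapses to the two special cases just described.

```python
def add_cut(A, B):
    """alpha-level of the sum: endpoints add."""
    return (A[0] + B[0], A[1] + B[1])

def mul_cut(A, B):
    """alpha-level of the product: min/max over the four endpoint products."""
    p = [A[0] * B[0], A[0] * B[1], A[1] * B[0], A[1] * B[1]]
    return (min(p), max(p))

# Both nonnegative: the product reduces to (A_L * B_L, A_U * B_U).
A, B = (1.0, 3.0), (2.0, 5.0)
assert mul_cut(A, B) == (A[0] * B[0], A[1] * B[1])
# A nonpositive, B nonnegative: it reduces to (A_L * B_U, A_U * B_L).
A = (-3.0, -1.0)
assert mul_cut(A, B) == (A[0] * B[1], A[1] * B[0])
```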
Let
,
and
be fuzzy numbers for
and
. For convenience, we write the α-level sets of them as
Now we consider the following fuzzy linear programming problem with fuzzy decision variables
:
where the partial ordering ≼ appearing in the constraints will be interpreted below by considering their α-level sets. Let
and
Then, by considering the lower and upper bounds of the α-level sets of the constraints, the fuzzy linear programming (FLP) problem is interpreted as follows:
We assume that each
is a nonnegative or nonpositive fuzzy number. For each
, we define the sets of indices as follows:
and
It is clear to see that for .
Using (1)–(3), the problem FLP can be written as follows:
Since the decision variables
for
are assumed to be nonnegative fuzzy numbers, according to Remark 1, the problem FLP can also be written as follows:
Given any two vectors
and
in
, we define
and
It is clear to see that
if and only if
Given a fuzzy number
, we can consider the following vector
Now, given any two fuzzy numbers
and
, we define
Equivalently, it means that
It is clear to see that
if and only if one of the following conditions is satisfied:
or
or
Definition 1. We say that is a nondominated optimal solution of the fuzzy optimization problem FLP if and only if there does not exist another feasible solution such that where denotes the fuzzy objective function of FLP. From (7), we see that is a nondominated optimal solution of problem FLP if and only if there does not exist another feasible solution such that
We define
and consider the following biobjective optimization problem
Recall that
is a Pareto optimal solution of the biobjective optimization problem BOP if and only if there does not exist another feasible solution
such that
It is clear to see that the expressions (8) and (9) are equivalent. Therefore we have the following useful result.
Proposition 1. is a nondominated optimal solution of FLP problem if and only if is a Pareto optimal solution of BOP.
Now we consider the following scalar optimization problem corresponding to the BOP:
Then we have the following interesting result.
Proposition 2. If is an optimal solution of scalar optimization problem (SOP), then is also a Pareto optimal solution of BOP.
Proof. Suppose that
is not a Pareto optimal solution of the BOP problem. Then there exists
such that
Using (6), we see that
which contradicts the optimality of
for the SOP problem. This completes the proof. □
Theorem 1. If is an optimal solution of SOP, then is also a nondominated optimal solution of problem FLP.
Proof. The result follows immediately from Propositions 1 and 2. □
From Theorem 1, in order to obtain a nondominated optimal solution of the original problem FLP, it suffices to find an optimal solution of SOP. In the sequel, we numerically solve SOP and present the corresponding error estimation.
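The mechanism behind Proposition 2 — an optimum of the summed (scalarized) objective is automatically Pareto optimal for the biobjective problem — can be seen on a toy finite feasible set. The data below are hypothetical, and both objectives are maximized, matching the supremum-type primal problem here.

```python
# Toy finite feasible set with biobjective values (f1, f2); hypothetical data.
points = {"x1": (1.0, 4.0), "x2": (3.0, 3.0), "x3": (2.0, 2.0)}

def is_pareto(name):
    """No other point weakly improves both objectives while differing."""
    f1, f2 = points[name]
    return not any(g1 >= f1 and g2 >= f2 and (g1, g2) != (f1, f2)
                   for g1, g2 in points.values())

# Scalarized optimum (the SOP pattern): maximize f1 + f2.
best = max(points, key=lambda k: sum(points[k]))
assert is_pareto(best)  # Proposition 2 in miniature
```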
We assume that each
is a nonnegative or nonpositive fuzzy number for
, and define the sets of indices as follows:
and
It is clear to see that
. Let us recall that we also have the sets of indices
and
defined in (4) and (5), respectively, for
.
Using (1)–(3), we have
and
Then the SOP can be rewritten as follows:
For
, we define the real-valued functions
,
,
and
on [0,1] as follows:
and
For
, we define the real-valued functions
and
on [0,1] as follows:
For
and
, we define the real-valued functions
and
on [0,1] as follows:
Then the SOP can be written as follows:
We note that the “decision variables” in problem SOP are monotonic real-valued functions defined on [0,1], which can be regarded as the decision functions of problem SOP.
In order to design the computational procedure, we assume that the following two conditions are satisfied:
Since
is increasing and
is decreasing on [0,1], if
and
are also continuous on [0,1], then
and
Given any fixed
with
, we define the following auxiliary problem of the SOP by considering the monotonic properties of decision functions
and
for
:
Remark 2. It is clear to see that each feasible solution of problem SOP is also a feasible solution of problem .
In order to investigate the error estimation, we need to consider the dual problem of problem
, which is defined as follows:
The primal and dual pair of problems and are really helpful for further discussion. The reason for naming them as the primal and dual pair of problems is that we can establish the weak duality theorem shown in Theorem 2 and the asymptotic no duality gap shown in Theorem 3 below.
Given an optimization problem (P), if problem (P) is a maximization problem, then denotes the supremum of its objective function, and if problem (P) is a minimization problem, then denotes the infimum of its objective function. We have to mention that the supremum or infimum is attained when the optimal solution of problem (P) exists. In other words, denotes the optimal objective value of problem (P) when there exists a feasible solution of problem (P) such that the supremum or infimum is equal to the objective value of .
We first prove the weak duality theorem
. The asymptotic strong duality theorem can also be obtained in the subsequent discussion by showing that
where
is taken to be
for
. The weak duality theorem is also useful for investigating the error estimation.
Theorem 2. (Weak Duality Theorem) Considering the primal and dual pair of problems and , respectively, for any feasible solution of primal problem SOP and any feasible solution of dual problem , we have Proof. We first note that
is also a feasible solution of problem
by Remark 2. Since all the decision functions of problems
and
are nonnegative, we have
and
and
and
Therefore, using (32)–(35), we obtain
which also says that
. From Remark 2, we see that
. This completes the proof. □
For convenient discussion, we adopt the following notations:
and
It is clear to see that the primal problem is feasible, since the zero functions form a feasible solution. The following result presents the feasibility of dual problem .
Proposition 3. For , we define the constant functions on as follows For , we define Then is a feasible solution of problem . Moreover we have for any δ with . Proof. It is clear to see that
is nonnegative. Let
By substituting
into the constraints of
, we have, for
and
,
and
This shows that
is a feasible solution of problem
with objective value
The inequality (38) is obvious, since the dual problem
is a minimization problem. This completes the proof. □
3. Discretization
In order to solve problem SOP, we consider its discretization, obtained by equally dividing the unit interval [0,1] into n subintervals. The discretized problem then becomes a conventional linear programming problem.
We assume that the closed interval [0,1] is equally divided into n closed subintervals such that the length of each subinterval is 1/n. Let
be a partition of [0,1], where
and
for
. The
n closed subintervals are denoted by
We assume that the real-valued functions
,
,
,
,
and
are all continuous on [0,1]
for
and
. Now we define
Since the functions
,
and
are increasing on
and the functions
,
and
are decreasing on
for
and
, we have
Now we are in a position to discretize the problem SOP as follows. For each
and
, we define the following linear programming problem:
where
and
are nonnegative decision variables for
and
. We write
to denote the feasible solution of problem
.
In order to formulate the dual problem of
, we need to write it in a compact form in terms of matrices. For
, we consider the following vector
For , we define the following vectors:
is a vector for such that its j-th entry is for and for .
is a vector for such that its j-th entry is for and for .
We define the following matrices:
is a matrix for such that its -th entry is for and 0 for . If , then for .
is a matrix for such that its -th entry is for and 0 for . If , then for .
is a matrix for such that its -th entry is for and 0 for . If , then for .
is a matrix for such that its -th entry is for and 0 for . If , then for .
We also define the following
matrices
for
. Let
denote a
identity matrix, i.e., the entries in the diagonal are all 1’s and the remaining entries are all 0’s. We also define a
matrix
A and a
matrix
C by
Then the linear programming problem
can be rewritten as follows:
We also write it as the following standard form:
where
and
Then the dual problem of
is given by
where
with
and
with
The dual problem
is written by
By multiplying both sides of the constraints by n, the dual problem
can be equivalently written by
More precisely, we have the following form
where
and
The feasible solution of problem is denoted by .
Since and are nonnegative for and , it is clear to see that the zero vector is a feasible solution of problem , i.e., . The following result presents the feasibility and the bound of optimal objective value for dual problem , which will be used to design the computational procedure.
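The weak-duality pattern of Theorem 2 can be checked mechanically on any small linear program of the same shape. The following sketch uses illustrative data only (nothing here comes from the paper): a primal-feasible point and a dual-feasible point are exhibited, and the primal objective value is verified to stay below the dual one.

```python
def matvec(M, v):
    """Matrix-vector product for plain Python lists."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

# A small LP of the same shape: maximize c^T x s.t. A x <= b, x >= 0.
A = [[1.0, 2.0], [3.0, 1.0]]
b = [4.0, 6.0]
c = [3.0, 2.0]
AT = [list(col) for col in zip(*A)]

x = [1.0, 1.0]   # primal feasible: A x = [3, 4] <= b, x >= 0
y = [0.5, 1.0]   # dual feasible: A^T y = [3.5, 2] >= c, y >= 0
assert all(lhs <= rhs for lhs, rhs in zip(matvec(A, x), b)) and min(x) >= 0
assert all(lhs >= rhs for lhs, rhs in zip(matvec(AT, y), c)) and min(y) >= 0
# Weak duality (the pattern of Theorem 2): c^T x <= b^T y
assert dot(c, x) <= dot(b, y)
```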
Proposition 4. For , we adopt the following notations: For and , we define For and , we define For and , we define Then with and is a feasible solution of problem . In other words, the dual problem is feasible for each . Moreover we have for and . We also have Proof. From (15), (16) and (41), we see that
and
for
. It means that
is well-defined and nonnegative. For
, we define
By substituting
into the constraints of
and using (57)–(59), we have, for
,
and
This shows that
is indeed a feasible solution of problem
. From (57), (36) and (39), we also see that
Using (37) and (40), the objective value of
is given by
The inequality (60) is obvious, since the dual problem
is a minimization problem. This completes the proof. □
The following result presents the uniform boundedness of optimal solutions of the primal and dual problems and , which can be used to discuss the existence of optimal solutions of problem SOP.
Proposition 5. The following statements hold true.
- (i)
Let with for be an optimal solution of primal problem . Suppose that and are nonnegative for and . Then and are uniformly bounded; that is to say, there exists a constant that is independent of n such that and for all and .
- (ii)
Let with , , , , and be an optimal solution of dual problem . Then and are uniformly bounded; that is to say, there exists a constant that is independent of n such that and for all and .
Proof. According to the strong duality theorem for linear programming problems, we first have
. Since
is a feasible solution of primal problem
, i.e.,
, using (60), we obtain
Since the objective values of
and
are
and
, respectively, using (61) and the nonnegativity, it is clear to see that
and
must be bounded by some constant
, and that
and
must be bounded by some constant
, where the constants
and
are independent of
n. This completes the proof. □
Let
with
for
be a feasible solution of problem
. For
, we define the following real-valued functions:
and
Then the feasibility is shown below.
Proposition 6. The vector-valued function with component functions defined in (62) and (63) is a feasible solution of problem SOP. Proof. Since
is a feasible solution of problem
, it follows that
We consider the following cases.
Suppose that
for
. Then we have
and, using (65), we also have
Suppose that
. Then
and
Suppose that
for
. Then, using (66), we have
Suppose that
. Then we also have
This shows that
for
and
.
Suppose that
and
for
, i.e.,
. Using (67), we have
Suppose that
and
. Then
Suppose that
for
with
, or that
and
. Then it is clear that
This shows that
is an increasing function on [0,1].
Suppose that
and
for
, i.e.,
. Using (68), we have
Suppose that
and
. Then
Suppose that
for
with
, or that
and
. Then it is clear that
This shows that
is a decreasing function on [0,1].
Therefore we conclude that is indeed a feasible solution of problem SOP. This completes the proof. □
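The construction in (62) and (63) builds monotone step functions on [0,1] from the discrete solution values. A minimal sketch of that idea, under our own simplified indexing (not the paper's exact formula), is:

```python
import math

def step_function(values):
    """Piecewise-constant function on [0, 1] whose value on the k-th
    subinterval ((k-1)/n, k/n] is the k-th grid value; a sketch of the
    construction in (62) and (63), not the paper's exact formula."""
    n = len(values)
    def f(alpha):
        k = max(min(math.ceil(alpha * n), n), 1)  # 1-based subinterval index
        return values[k - 1]
    return f

f = step_function([0.0, 1.0, 2.0, 4.0])  # nondecreasing grid values
assert f(0.0) == 0.0 and f(1.0) == 4.0
assert f(0.3) <= f(0.6)  # monotonicity of the grid values is preserved
```

As in Proposition 6, monotone grid values yield a monotone function on all of [0,1], which is the property the feasibility argument relies on.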
4. Error Estimation
In the sequel, we consider
in problems
and
. In this case, we also write
and
. By rewriting problem
, we first note that
,
Then the dual problem
has the following form:
Let
be an optimal solution of problem
. Then, according to Proposition 6, we see that
constructed from
is a feasible solution of problem SOP. Therefore we obtain
Using Theorem 2, it follows that
We first define some notations. Let
be an optimal solution of problem
. For
, we define the real-valued functions
and
on
for
and
and
on
by
and
It is clear to see that the real-valued functions and are nonnegative for and .
For
, we define
It is also clear to see that
and
for
. We want to prove
We first provide some useful lemmas.
Lemma 1. For , and , we have Proof. We see that
is continuous on
. Let
be a decreasing sequence such that it converges to zero and
for all
m, where
is taken to satisfy
. Therefore we can define the compact interval
Since
, it follows that
is continuous on each compact interval
, which also means that
is uniformly continuous on each compact interval
. Therefore, given any
, there exists
such that
implies
Since the length of each
is
, we can consider a sufficiently large
such that
. In this case, each length of
for
is less than
for
. In other words, if
, then (78) is satisfied for any
. We consider the following cases.
Suppose that the infimum
is attained at
, i.e.,
. From (77), there exists
such that
. Now, given any
, we see that
for some
. Let
. From (77), it follows that
. Then we have
since the length of
is less than
, where
is independent of
t because of the uniform continuity.
Suppose that the infimum
is not attained at any point in
. It means that
is increasing or decreasing on
. Since
is continuous on
, it follows that the infimum
is either the right-hand limit or the left-hand limit given by
Therefore, for sufficiently large
n, i.e., the interval
is sufficiently small such that its length
is less than
, using (79), we have
for all
.
From the above two cases, since
for all
, we conclude that
The other cases can be similarly obtained. This completes the proof. □
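Lemma 1 says that the infimum of a continuous function over the shrinking subinterval containing a fixed point converges to the function value at that point. A quick numerical illustration, with g = exp as an assumed stand-in for the functions in the lemma:

```python
import math

g = math.exp  # continuous on [0, 1], hence uniformly continuous there

def inf_on_subinterval(alpha, n):
    """Approximate infimum of g over the length-1/n subinterval that
    contains alpha, using a fine inner grid; illustrative only."""
    k = min(int(alpha * n), n - 1)
    lo, hi = k / n, (k + 1) / n
    return min(g(lo + j * (hi - lo) / 100) for j in range(101))

alpha = 0.375
errs = [abs(inf_on_subinterval(alpha, n) - g(alpha)) for n in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]  # the gap closes as n grows, as in Lemma 1
```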
Lemma 2. For each , we have Proof. Since the sequences and are uniformly bounded by part (ii) of Proposition 5, using Lemma 1, we obtain as . We can similarly show that as . This completes the proof. □
For
, we define
and
From (15) and (16), we see that
Since
is increasing on
and
is decreasing on
, it follows that, for
,
and
which says that
We also define the real-valued functions
and
on
by
and
From (80), it is clear to see that
Now we define the real numbers
and
where we assume
and
We also define the real-valued functions
and
on
by
and
Using part (ii) of Proposition 5 and Lemma 1, we see that the sequences
and
of functions are uniformly bounded, which also say that the sequences
,
,
and
of real numbers are bounded. Therefore there exist two constants
and
such that
and
for all
and
. Then
We also define the real-valued functions
and
on
by
It is clear to see that the real-valued functions and are nonnegative. The following lemma will be used for further discussion.
Lemma 3. For , and , we have and For and , we also have and The sequences and of real-valued functions are uniformly bounded.
Proof. It suffices to consider the function
. For
,
and
, using (76), we have
From (83) and (86), it is obvious that the sequences
and
of real-valued functions are uniformly bounded. This completes the proof. □
Let
be an optimal solution of problem
. For
and
, we define the following real-valued functions:
Then we have the following feasibility.
Proposition 7. The vector-valued functionwith component functions defined in (88)–(92) is a feasible solution of problem . Proof. We are going to show that
satisfies all the constraints of problem
. For
, we define the real-valued functions
on
for
and
on
by
Since is the length of each interval , for and , we see that . We consider the following cases.
For
and
, i.e.,
, we obtain
For
and
, we obtain
For
and
, i.e.,
, we obtain
For
and
, it is clear to see that
. We obtain
On the other hand, for
, we define the real-valued functions
on
for
and
on
by
We consider the following cases.
For
and
, i.e.,
, we obtain
For
and
, we obtain
For
and
, i.e.,
, we obtain
For
and
, it is clear to see that
. We obtain
This completes the proof. □
For
and
, we define the real-valued functions:
Lemma 4. For and , we have and Proof. It is clear to see that the following functions
are continuous a.e. on [0,1], i.e., they are Riemann-integrable on [0,1]. In other words, their Riemann integral and Lebesgue integral are identical. From Lemma 1, for
,
and
, we see that
and
Since the sequences
and
are uniformly bounded by part (ii) of Proposition 5, using (104), (103) and the Lebesgue bounded convergence theorem for integrals by referring to Royden [34], we can obtain (101) and (102). This completes the proof. □
Theorem 3. Given an optimal solution of primal problem and an optimal solution of dual problem , we define the vector-valued functions according to (62), (63) and (88)–(92), respectively. Then the following statements hold true. - (i)
We have and where satisfying as . Moreover, there exist two convergent subsequences and of and , respectively, such that - (ii)
(Asymptotic No Duality Gap). We have and and
Proof. To prove part (i), we have
which implies
Now we have
and
by using (98), (100) and (102).
By applying Lemma 2 to (84) and (85), it follows that
and
as
a.e. on
. By applying (83) to (87), we see that
and
as
a.e. on
Using the Lebesgue bounded convergence theorem for integrals (ref. Royden [34]), we obtain
and
Therefore we conclude that
as
. From (109), we obtain
which proves (105). From the inequality (38), we see that
is a bounded sequence, which says that there exists a convergent subsequence
of
. From the inequality (60), we also see that
is a bounded sequence, which also says that there exists a convergent subsequence
of
. In other words, the two subsequences
and
are convergent. Using (
109) again, we obtain
which is equivalent to (107).
Moreover the error
can be rewritten as
which implies the expression (106). This proves part (i).
To prove part (ii), since
for each
, using part (i) and inequality (69), we obtain
and
Using
, we also have
This completes the proof. □
The following result presents the error estimation by simply solving the conventional linear programming problem .
Theorem 4. Given an optimal solution of problem , let be defined in (62) and (63). Then the error between and the objective value of is less than or equal to defined in (106). In other words, we have where as . Proof. Proposition 6 says that
is a feasible solution of problem SOP with the objective value given by
This completes the proof. □
Recall that denotes the supremum of the objective function of problem SOP. However, the supremum may not be attained. In other words, there may not exist a feasible solution such that its objective value is equal to the supremum. In the next section, we shall provide sufficient conditions to guarantee the existence of optimal solutions. From the computational viewpoint, it suffices to consider the concept of the so-called ϵ-optimal solution that is formally defined below.
Definition 2. Given any sufficiently small ϵ > 0, we say that the feasible solution of problem SOP is an ϵ-optimal solution if and only if the difference between and the objective value of is less than ϵ. More precisely, the feasible solution is an ϵ-optimal solution if and only if Now we have the following interesting result.
Theorem 5. Given any sufficiently small ϵ > 0, an ϵ-optimal solution of problem SOP always exists in the following sense: there exists such that we can take that is obtained from Theorem 4 satisfying .
Proof. Theorem 4 says that as . Therefore, given any sufficiently small ϵ > 0, there exists a sufficiently large such that . Then the result follows immediately. □
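Theorem 5 suggests a simple computational recipe: increase n until the error bound of Theorem 4 falls below the prescribed ϵ. A sketch of that loop, where the bound C/n is a hypothetical stand-in for the problem-specific quantity in (106):

```python
def epsilon_optimal_n(eps, error_bound, n0=1):
    """Double n until the error bound drops below eps; the scheme behind
    Theorem 5. `error_bound` plays the role of the quantity in (106)
    and is problem-specific; it is supplied by the caller."""
    n = n0
    while error_bound(n) >= eps:
        n *= 2
    return n

# Hypothetical bound eps_n = C / n with C = 10, standing in for (106).
n_star = epsilon_optimal_n(1e-3, lambda n: 10.0 / n)
assert 10.0 / n_star < 1e-3  # the resulting solution is eps-optimal
```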
5. Existence of Optimal Solution
We shall present the convergent properties of the sequences
and
that are constructed from the optimal solutions
of primal problem
and the optimal solution
of dual problem
according to (62), (63) and (88)–(92), respectively. In this case, we shall provide sufficient conditions to guarantee the existence of optimal solutions of problem SOP.
Theorem 6. Given an optimal solution of primal problem and an optimal solution of dual problem , we define according to (62), (63) and (88)–(92), respectively. Considering the component functions and of and , respectively, for , we define the real-valued functions and on by which can form the vector-valued functions on . Suppose that the objective value of with respect to the primal problem is greater than or equal to the objective value of with respect to the dual problem ; that is, Then is an optimal solution of problem SOP.
Proof. Since
is a feasible solution of problem SOP by Proposition 6, for
,
and
, we have
For
and
, we obtain
Using (115), for
and
, we can similarly obtain
Using (116), for
and
, we also have
For
, using (117) and (118), we have
which imply
and
Therefore is an increasing function on [0,1] and is a decreasing function on [0,1] for . This shows that is indeed a feasible solution of problem SOP.
Given any feasible solution
of problem SOP, we have
which says that
is an optimal solution of problem SOP. This completes the proof. □
In the sequel, we shall consider the uniform boundedness of the sequences
and
obtained from (62) and (63). Recall that part (i) of Proposition 5 has provided sufficient conditions to guarantee the uniform boundedness of
and
. In order to obtain the desired results, we also provide the different sufficient conditions below.
Proposition 8. Let with for be a feasible solution of primal problem . Suppose that and are nonnegative for , and such that the following conditions are satisfied:
for each and , the following strict inequalities are satisfied: there exist constants and such that, for each , and , implies and implies .
Then for all and . In other words, the feasible set of the primal problem is uniformly bounded, in the sense that each feasible solution is bounded by some constant that is independent of n. Proof. The assumptions say that, for each
j and
l, there exists
such that
. Then we have
which implies
This completes the proof. □
For
and
, we define the real-valued functions:
and
Lemma 5. Suppose that the sequences and obtained from (62) and (63) are uniformly bounded. For and , we have Proof. It suffices to prove the case of (124). It is clear to see that the following function
is continuous a.e. on [0,1], i.e., it is Riemann-integrable on [0,1]. In other words, its Riemann integral and Lebesgue integral are identical. From Lemma 1, for
,
and
, we see that
Therefore we obtain
Since the sequence
is uniformly bounded, using (128) and the Lebesgue bounded convergence theorem for integrals by referring to Royden [34], we can obtain (124). This completes the proof. □
We remark that the uniform boundedness of the sequences and in Lemma 5 can be guaranteed by applying Proposition 8 and part (i) of Proposition 5.
Lemma 6. (Fatou’s Lemma) (Royden [34]) Let be a sequence of Lebesgue measurable functions defined on the same Lebesgue measurable set E, and let ν be a Lebesgue measure on E. - (i)
If there exists a Lebesgue integrable function ϕ on E such that a.e. on E for all k, then - (ii)
If there exists a Lebesgue integrable function ϕ on E such that a.e. on E for all k, then
Theorem 7. Given an optimal solution of primal problem and an optimal solution of dual problem , we define according to (62), (63) and (88)–(92), respectively. Considering the component functions and of and , respectively, for , we assume that the sequences and are uniformly bounded and satisfy the following inequalities and for . Now we define for and . Then with the component functions and of and , respectively, for , is an optimal solution of problem SOP. Proof. For
and
, we have
For
and
, using (115) and (130), we can similarly obtain
Applying the limit superior to the argument in the proof of Theorem 6, we can also show that is a feasible solution of problem SOP.
It remains to prove the optimality. We first have
where
and similarly have
where
By adding (
132)–(
135) together, we also obtain
On the other hand, we have
and
where
and
By adding (
141) and (
142) together, we obtain
Since
, from (
140) and (
148), we have
Given any feasible solution
of problem SOP, using (
31) and (
149), we have
Using Fatou's lemma (Lemma 6), we have
and
Since can be any feasible solution of problem , this shows that is an optimal solution of problem . This completes the proof. □
In the sequel, we present the existence of optimal solutions by considering subsequences. We first provide some useful lemmas.
Lemma 7. (Riesz and Sz.–Nagy ([35] p. 64)) Let be a sequence in . If the sequence is uniformly bounded with respect to , then there exists a subsequence which weakly converges to . In other words, for any , we have We remark that if the sequences and are uniformly bounded, then they are also uniformly bounded with respect to .
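Assuming, as elsewhere in this section, that the underlying space is $L^2[0,1]$ (the functions in question are defined on the unit interval of $\alpha$-levels), the weak convergence asserted in Lemma 7 reads:

```latex
% weak convergence f_{n_k} \rightharpoonup f in L^2[0,1]:
\lim_{k\to\infty} \int_0^1 f_{n_k}(t)\, g(t)\, dt \;=\; \int_0^1 f(t)\, g(t)\, dt
\quad \text{for every } g \in L^2[0,1].
```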
Lemma 8. (Levinson [36]) If the sequence is uniformly bounded on with respect to and weakly converges to , then Lemma 9. Let and be two sequences in that weakly converge to and in , respectively.
- (i)
If the function η defined on is bounded, then the sequence weakly converges to .
- (ii)
The sequence weakly converges to .
Proof. To prove part (i), for any
, we see that
. Therefore the weak convergence says that
which says that the sequence
weakly converges to
.
To prove part (ii), for any
, we have
This completes the proof. □
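The computation for part (ii) is simply the linearity of the integral; writing the two weakly convergent sequences generically as $f_k$ and $h_k$ with weak limits $f$ and $h$, for any test function $g \in L^2[0,1]$ we have:

```latex
\int_0^1 \bigl(f_k(t)+h_k(t)\bigr)\, g(t)\, dt
  = \int_0^1 f_k(t)\, g(t)\, dt + \int_0^1 h_k(t)\, g(t)\, dt
  \;\longrightarrow\; \int_0^1 \bigl(f(t)+h(t)\bigr)\, g(t)\, dt .
```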
Theorem 8. Given an optimal solution of primal problem and an optimal solution of dual problem , we defineaccording to (62) and (63) and (88)–(92), respectively. Considering the component functions and of and , respectively, for , we assume that the sequences and are uniformly bounded with respect to and satisfy the following inequalities:for . Then there is a subsequence of such that is weakly convergent to an optimal solution of problem SOP. Proof. Since the sequence
is uniformly bounded with respect to
, using Lemma 7, there exists a subsequence
of
that weakly converges to some
. Using Lemma 7 again, there exists a subsequence
of
that weakly converges to some
. By induction, there exists a subsequence
of
that weakly converges to some
for
. The same argument applies to the sequences
for
. Therefore we can construct two subsequences
and
that weakly converge to
and
, respectively. Since
and
are feasible solutions of problem SOP by Proposition 6, for
,
and
, we have
Using Lemma 9, we see that the sequences
and
weakly converge to
and
respectively, which imply, by using (
158), (
159) and Lemma 8,
and
Using (
160), (
155) and Lemma 8, we have
From Lemma 8, we also have
and
Let
be a subset of
such that the inequalities (
163)–(
167) are violated. Then
has measure zero. In other words, if
, then all the inequalities (
163)–(
167) are satisfied. Now, for
, we define the real-valued functions
and
on
as follows
and
Since the set
has measure zero, it is clear that
We are going to show that
formed from (
168) is a feasible solution of problem SOP.
For
, from (
163)–(
165), we see that
satisfies the inequalities (
10)–(
12). For
, by referring to (
119)–(
121), we also see that
satisfies the inequalities (
10)–(
12). It remains to show that
is an increasing function on
and
is a decreasing function on
for
. For
, from (
161) and (
162), we have
Now we consider the following cases.
Suppose that
. Then we have
We also have
Suppose that
and
. Then we have
We also have
Suppose that
and
. Then we have
We also have
Suppose that
. Then we have
We also have
Therefore we conclude that
satisfies (
13) and (
14). This shows that
is indeed a feasible solution of problem SOP.
Next we prove optimality. Given any feasible solution
of problem SOP, by referring to (
150), we can also obtain
where
and
by referring to (
136)–(
139) and (
147), respectively. Since the subsequences
and
are weakly convergent to
and
, respectively, for
, we have
By taking limit on both sides of (
170) and using (
171)–(
176), we obtain
Since can be any feasible solution of problem , this shows that is an optimal solution of problem . This completes the proof. □
6. Tighter Bound of Error Estimation for Nonnegative Constraints
Although the error estimation
presented in (
106) can be used to solve the fuzzy linear programming problem FLP, in this section we derive a tighter error bound for the case in which all of the fuzzy numbers in the constraints are nonnegative. The key issue is that the real-valued functions
,
,
,
and
for
and
do not need to be defined in the forms of (
88)–(
92). More simple forms can be used for the nonnegative problem. We need to remark that these simple forms will not be the special case of the forms presented in (
88)–(
92). Therefore we need to separately study the nonnegative problem FLP in this section.
We first note that, for the nonnegative case, the index sets
for all
. Therefore, all of the expressions involving the index sets
for
can be re-written in simpler forms. Since the real-valued functions presented in (
88)–(
92) do not involve the index sets
, those real-valued functions cannot be re-written in the simpler forms. Therefore we present different forms for the problem FLP with nonnegative constraints which, as mentioned above, must be studied separately.
Now the problem FLP with nonnegative constraints, denoted by (NFLP), can be re-written from FLP by taking
for all
, which is shown below:
Also its corresponding scalar optimization problem NSOP can be written as follows:
In this case, given any fixed
with
, the auxiliary problem is given by
For
, the dual problem of
is also given by
Regarding the discretization problem, the primal linear programming problem is given below
and the dual linear programming problem is given below
For further discussion, we first define some notations. Let
be an optimal solution of problem
. For
, we define the real-valued functions
and
on
for
and
and
on
by
For
, we define
From Lemma 2, it follows that
For
, we define
Since
is increasing on
and
is decreasing on
, it follows that, for
,
According to (
15) and (
16), we define
We also define the real-valued functions
and
on
by
and
Now we define the real numbers
and
where we assume
and
Then we define the real-valued functions
and
on
by
and
We also define the real-valued functions
and
on
by
According to Lemma 3, we can similarly obtain the following lemma.
Lemma 10. For , and , we haveand For and , we also haveand The sequences and of real-valued functions are uniformly bounded.
Instead of defining the real-valued functions
,
,
,
and
in (
88)–(
92), we define them as follows:
The feasibility is presented below.
Proposition 9. The vector-valued functionwith component functions defined in (191)–(195) is a feasible solution of problem . Proof. We are going to show that
satisfies all the constraints of problem
. For
, we define the real-valued functions
on
for
and
on
by
Since is the length of each interval , for and , we see that . We consider the following cases.
For
,
and
, i.e.,
, we obtain
For
,
and
, we obtain
For
,
and
, i.e.,
, we obtain
For
,
and
, it is clear that
. We obtain
On the other hand, for
and
, we define the real-valued functions
on
and
on
by
We consider the following cases.
For
,
and
, i.e.,
, we obtain
For
,
and
, we obtain
For
,
and
, i.e.,
, we obtain
For
,
and
, it is clear that
. We obtain
This completes the proof. □
Theorem 9. Given an optimal solution of problem and an optimal solution of problem , we defineaccording to (62) and (63) and (191)–(195), respectively. Then the following statements hold true. - (i)
We haveandwheresatisfying as . Moreover, there exist two convergent subsequences and of and , respectively, such that - (ii)
(Asymptotic No Duality Gap). We haveandand
Proof. To prove part (i), we have
which implies
Therefore we conclude that
as
. Moreover the error
can be rewritten as
which implies the expression (
202). The rest of the proof follows by an argument similar to that of Theorem 3. This completes the proof. □
For the nonnegative problem NSOP, we can also use (
106) to obtain the following error estimation
where
,
,
and
by assuming
for
. From (
202) and (
208), it is clear that
, which means that we obtain a tighter error bound for the nonnegative problem NSOP. In other words, when all of the fuzzy numbers in the constraints are nonnegative, we calculate the tighter bound
presented in (
202) instead of calculating the error bound
presented in (
208).
The remaining results for the nonnegative problem NSOP, including the existence of optimal solutions, can be obtained similarly. We omit the details here. We can also see that the real-valued functions
,
,
,
and
for
and
defined in (
191)–(
195) have simpler forms that are clearly not a special case of the forms presented in (
88)–(
92). These simpler forms also save computational resources.
7. Computational Procedure
In the sequel, we present the computational procedure for obtaining the error estimation and the approximate solutions of problem SOP. Clearly, the approximate solutions will be step functions. According to Theorem 4, it is possible to obtain appropriate step functions so that the corresponding objective function value is close enough to the supremum when n is taken to be sufficiently large. In other words, the computational procedure obtains the -optimal solution of problem SOP by referring to Theorem 5.
In order to calculate the error
between the approximate objective value and the optimal objective value as shown in Theorem 3, we need to obtain
and
. By referring to (
72)–(
75), we need to solve
and
For
and
, we define
and
For
and
for
and
for
, we define
and
Then, from (
70) and (
71), we see that the real-valued functions
and
can be rewritten as
Since the functions
,
,
and
are continuous on
for all
and
, it follows that the functions
and
are also continuous on the compact interval
. In other words, the supremum in (
209) and (
210) can be obtained as follows. For
, we have
and
In order to further design the computational procedure, we need to assume that the real-valued functions
,
,
and
are twice-differentiable on
for the purpose of applying Newton's method. For
, we define the real-valued functions
For
, we define the real-valued functions
From (
214), we just need to solve the following simple type of optimization problem
We can see that the optimal solution is
According to (
213), it follows that the optimal solution of problem (
220) is
Let
denote the set of all zeros of the real-valued function
for
, where
is assumed to be a finite set. Then
Let
denote the set of all zeros of the real-valued function
for
. Then we can similarly obtain
Let
denote the set of all zeros of the real-valued function
for
. Then we can similarly obtain
Let
denote the set of all zeros of the real-valued function
for
. Then we can similarly obtain
Therefore, using (
214), (
215), (
221) and (
222), we can obtain the desired supremum (
209), and using (
216), (
217), (
223) and (
224), we can obtain the desired supremum (
210).
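Since each supremum in (209) and (210) is attained either at an endpoint of the compact interval or at a critical point inside it, the combination of the zero sets (221)–(224) with the endpoints can be sketched as follows; the function name and arguments are illustrative assumptions, not notation from the paper:

```python
def sup_from_zero_sets(f, a, b, zero_sets):
    """Supremum of a continuous f on [a, b], given precomputed sets of
    critical points (zeros of the relevant derivatives): the supremum is
    attained at an endpoint or at a critical point inside the interval."""
    candidates = {a, b}
    for zs in zero_sets:
        # keep only the critical points that actually lie in [a, b]
        candidates.update(z for z in zs if a <= z <= b)
    return max(f(t) for t in candidates)
```

For example, for f(t) = t(1 − t) on [0, 1] with the single interior critical point 1/2, the routine compares f(0), f(1) and f(1/2) and returns 1/4.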
Now we consider the following cases.
Suppose that
,
,
and
are linear functions of
t on
. This situation will happen when the fuzzy numbers are taken to be the triangular fuzzy numbers. We also remark that the triangular fuzzy numbers are frequently used in the real problems. Now the linear functions are assumed to be the following forms:
Using (
214)–(
217), we obtain
Suppose that
,
,
and
are not linear functions of
t on
. In order to obtain the zero
of
, we can apply Newton's method to generate a sequence
such that
as
. The iteration is given by
for
. The initial guess is
. Since the real-valued function
may have more than one zero, we need to apply Newton's method with as many different initial guesses as possible. The zeros of
can be similarly obtained.
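A minimal sketch of this multi-start Newton search is given below; the function name, tolerances and duplicate filter are illustrative assumptions, not prescriptions from the paper:

```python
def newton_zeros(g, gprime, starts, a, b, tol=1e-12, max_iter=100):
    """Collect the zeros of g on [a, b] by running the Newton iteration
    t_{m+1} = t_m - g(t_m)/g'(t_m) from several initial guesses."""
    zeros = []
    for t in starts:
        for _ in range(max_iter):
            d = gprime(t)
            if abs(d) < 1e-15:   # derivative too flat: abandon this start
                break
            step = g(t) / d
            t -= step
            if abs(step) < tol:
                break
        # keep t only if it lies in [a, b], is a genuine zero, and is new
        if a <= t <= b and abs(g(t)) < 1e-8 and all(abs(t - z) > 1e-8 for z in zeros):
            zeros.append(t)
    return sorted(zeros)
```

For instance, g(t) = (t − 1/4)(t − 3/4) on [0, 1] has two zeros, and starts near each basin of attraction recover both while duplicates are filtered out.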
The computational procedure is given below.
Step 1. Set the error tolerance and the initial value of natural number .
Step 2. Find the optimal objective value and optimal solution of dual problem .
Step 3. Find the sets
,
,
and
of all zeros of the real-valued functions
,
,
and
, respectively, by applying Newton's method given in (
229).
Step 4. Evaluate the supremum (
209) and (
210) according to (
214)–(
217) and (
221)–(
224), respectively.
Step 5. Obtain the real numbers
and
according to (
76) by using the supremum obtained in step 4.
Step 6. Evaluate the error estimation
according to (
106). If
, then go to step 7; otherwise, consider one more subdivision by setting
and go to step 2.
Step 7. Find the optimal solution of primal problem .
Step 8. Set the step functions
defined in (
62) and (
63), which will be the approximate solution of problem SOP. The actual error between the optimal objective value
and the objective value at
is less than
by Theorem 4, where the error tolerance
is reached for this partition
. This approximate solution
is an
-optimal solution by referring to Theorem 5.
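Steps 1–8 above can be sketched as the following refinement loop; the solver and error functions here are illustrative stubs standing in for the dual LP solve (Steps 2–5), the error estimation (106) of Step 6, and the primal LP solve of Step 7:

```python
def solve_sop(solve_dual_lp, error_estimate, solve_primal_lp,
              eps=1e-3, n=1, n_max=10_000):
    """Refine the partition (n -> n + 1) until the error estimate drops
    below the tolerance eps, then recover the approximate step-function
    solution from the primal problem at that partition."""
    while n <= n_max:
        dual_value, dual_solution = solve_dual_lp(n)    # Steps 2-5
        if error_estimate(n, dual_solution) <= eps:     # Step 6
            primal_solution = solve_primal_lp(n)        # Step 7
            return n, primal_solution                   # Step 8
        n += 1
    raise RuntimeError("tolerance not reached within n_max subdivisions")
```

With a toy error estimate that decays like 1/n, the loop stops at the first n for which 1/n falls below the tolerance.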
The above computational procedure obtains the real-valued functions
and
on
that can form the
-level closed intervals of a fuzzy number
for
. More precisely, the
-level sets of
are given by
for
. According to the decomposition theorem in fuzzy sets theory, the membership function
of
is given by
where
denotes the indicator function of set
A given by
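By the decomposition theorem, the membership value of a point x can be recovered from the α-level intervals as the largest α whose interval contains x. A minimal numerical sketch of this reconstruction (the level functions used in the example describe a triangular fuzzy number and are an illustrative assumption):

```python
def membership(x, lower, upper, grid_size=1000):
    """Membership value sup { alpha : lower(alpha) <= x <= upper(alpha) },
    approximated by searching a uniform grid of alpha in [0, 1]."""
    best = 0.0
    for k in range(grid_size + 1):
        alpha = k / grid_size
        if lower(alpha) <= x <= upper(alpha):
            best = max(best, alpha)
    return best

# Triangular fuzzy number (0, 1, 2): alpha-level intervals [alpha, 2 - alpha]
mu = membership(0.5, lambda a: a, lambda a: 2.0 - a)  # mu = 0.5
```

For step-function level data, as produced by the computational procedure, the same search applies with the grid aligned to the partition points.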
According to Theorem 1, if
is an
-optimal solution of problem SOP, then
is called an
-nondominated optimal solution of problem FOP. Since
obtained in the above computational procedure is an
-optimal solution of problem SOP, i.e., the vector
of fuzzy numbers obtained from (
230) is an
-optimal solution of problem SOP, it follows that
is an
-nondominated optimal solution of problem FOP. The numerical examples will be provided below to demonstrate the
-nondominated optimal solution.