Since most multi-objective optimization problems have more than one conflicting objective, there is no single optimal solution that optimizes all the objective functions simultaneously [35], and decision makers usually look for the “most preferred” solution. For this purpose, the $\varepsilon$-constraint method with a priori articulation of the decision maker’s preference information is widely used to obtain Pareto-optimal solutions of a multi-objective optimization problem. In this study, the $\varepsilon$-constraint method is also applied to solve the MOMILP model first. Then, in order to select the “best” compromise solution among the Pareto-optimal solutions, two interactive fuzzy methods are further discussed. In the following, more details about the proposed multi-objective methods, as well as their relations, are presented.
5.2. Interactive Fuzzy Methods
Fuzzy solution methods have been commonly applied to multi-objective optimization problems in recent years because of their capability of measuring the satisfaction degree of each objective directly. The first one was introduced by Zimmermann [37], called the max-min method, which converts the bi-objective model into a single-objective model. It allows the decision maker to make a trade-off between the multiple objectives and gives the achieved level of each objective under different preferences. However, there is a well-known deficiency: the solution yielded by the max-min operator may be neither unique nor efficient [38]. Therefore, several methods were further developed to improve this method. Of particular note, Werners [39], Torabi and Hassini [40] (the TH method), and Selim and Ozkarahan [32] (the SO method) effectively remedied the original defect by adding a coefficient of compensation $\gamma$ into the model. In this study, we apply the TH method and the SO method simultaneously to explore the optimal solution more effectively. Following these two methods, the procedure to solve the MOMILP model is summarized below.
Step 1: Determine the positive ideal solution (PIS) and the negative ideal solution (NIS) for each objective. The former is the optimum value of the objective function when it is optimized alone while the other objectives are ignored, and the latter is the worst value it takes under the scenarios in which the other objectives achieve their optimum values. In terms of Model (29), let $x_h^{PIS}$/$x_h^{NIS}$ and $z_h^{PIS}$/$z_h^{NIS}$ ($h = 1, 2$) denote the decision vector associated with the PIS/NIS of the $h$th objective and the corresponding value of the objective function, respectively. Accordingly, the positive ideal solutions for the two objectives of (29) can be denoted as $(x_1^{PIS}, z_1^{PIS})$ and $(x_2^{PIS}, z_2^{PIS})$. Then, the related NIS can be obtained as follows:

$$z_1^{NIS} = z_1\left(x_2^{PIS}\right), \qquad z_2^{NIS} = z_2\left(x_1^{PIS}\right).$$
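The payoff-table logic of Step 1 can be sketched on a toy bi-objective problem with both objectives minimized; the feasible set and objective functions below are illustrative assumptions, not the paper’s MOMILP model:

```python
# Sketch of Step 1 on a hypothetical bi-objective problem (both minimized).

def z1(x):  # hypothetical first objective (e.g., cost)
    return 3 * x[0] + 2 * x[1]

def z2(x):  # hypothetical second objective (e.g., risk)
    return (4 - x[0]) ** 2 + (3 - x[1]) ** 2

# Small discrete feasible region so the optima can be enumerated exactly.
feasible = [(a, b) for a in range(5) for b in range(4)]

# PIS: optimize each objective on its own, ignoring the other.
x1_pis = min(feasible, key=z1)
x2_pis = min(feasible, key=z2)
z1_pis, z2_pis = z1(x1_pis), z2(x2_pis)

# NIS: value of each objective at the other objective's optimizer.
z1_nis = z1(x2_pis)
z2_nis = z2(x1_pis)

print("PIS:", z1_pis, z2_pis)  # best attainable value of each objective
print("NIS:", z1_nis, z2_nis)  # worst value while the other is at its best
```

In a real MOMILP, each `min` call would be replaced by solving the single-objective MILP with an exact solver.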
Step 2: Specify a linear membership function for each objective as follows:

$$\mu_h(x) = \begin{cases} 1, & z_h \le z_h^{PIS}, \\ \dfrac{z_h^{NIS} - z_h}{z_h^{NIS} - z_h^{PIS}}, & z_h^{PIS} < z_h < z_h^{NIS}, \\ 0, & z_h \ge z_h^{NIS}, \end{cases}$$

where $\mu_h(x)$ represents the satisfaction degree of the $h$th objective, $h = 1, 2$.
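A minimal sketch of this linear membership function, assuming a minimized objective with hypothetical PIS and NIS values:

```python
def membership(z, z_pis, z_nis):
    """Linear satisfaction degree mu_h for a minimized objective:
    1 at the positive ideal value, 0 at the negative ideal value,
    and linear in between (a sketch of the Step 2 definition)."""
    if z <= z_pis:
        return 1.0
    if z >= z_nis:
        return 0.0
    return (z_nis - z) / (z_nis - z_pis)

print(membership(10, 10, 30))  # at the PIS -> 1.0
print(membership(30, 10, 30))  # at the NIS -> 0.0
print(membership(20, 10, 30))  # halfway   -> 0.5
```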
Step 3: Construct the aggregation function on the basis of the membership functions. This procedure converts the MOMILP into a single-objective MILP model by using the TH method and the SO method. Note that both of these methods guarantee efficient solutions.
The TH aggregation function is given as follows:

$$\begin{aligned} \max \ & \lambda(x) = \gamma \lambda_0 + (1 - \gamma) \sum_h \theta_h \mu_h(x) \\ \text{s.t.} \ & \lambda_0 \le \mu_h(x), \quad h = 1, 2, \\ & x \in F(x), \quad \lambda_0, \gamma \in [0, 1], \end{aligned} \qquad (34)$$

where $F(x)$ denotes the feasible region of Model (29), $\gamma$ indicates the coefficient of compensation, and $\theta_h$ denotes the importance of the $h$th objective such that $\sum_h \theta_h = 1$.
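For illustration, the TH objective can be evaluated for fixed candidate solutions once their satisfaction degrees are known; the candidates and weights below are hypothetical, and the sketch scores solutions rather than solving the MILP:

```python
def th_value(mu, gamma, theta):
    """TH aggregation for a candidate with satisfaction degrees mu:
    gamma * min_h mu_h + (1 - gamma) * sum_h theta_h * mu_h."""
    lam0 = min(mu)  # minimum satisfaction degree over the objectives
    return gamma * lam0 + (1 - gamma) * sum(t * m for t, m in zip(theta, mu))

# Two hypothetical candidates: a balanced one and an unbalanced one.
balanced, unbalanced = (0.6, 0.6), (0.9, 0.3)
theta = (0.5, 0.5)

print(th_value(balanced, 1.0, theta))    # gamma = 1: pure min operator -> 0.6
print(th_value(unbalanced, 1.0, theta))  # -> 0.3
print(th_value(balanced, 0.0, theta))    # gamma = 0: weighted sum -> 0.6
print(th_value(unbalanced, 0.0, theta))  # -> 0.6
```

The extremes already show the role of $\gamma$: with $\gamma = 1$ only the worst satisfaction degree counts, while with $\gamma = 0$ the two candidates tie on the weighted sum.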
Furthermore, the SO aggregation function is given as follows:

$$\begin{aligned} \max \ & \lambda(x) = \gamma \lambda_0 + (1 - \gamma) \sum_h \theta_h \lambda_h \\ \text{s.t.} \ & \lambda_0 + \lambda_h \le \mu_h(x), \quad h = 1, 2, \\ & x \in F(x), \quad \lambda_0, \lambda_h, \gamma \in [0, 1], \end{aligned} \qquad (35)$$

where $\gamma$ and $\theta_h$ are defined as in (34).
Step 4: Specify the values of the coefficient of compensation $\gamma$ and the relative importance $\theta_h$ of each objective, and solve the respective single-objective MILP model. If the decision maker is satisfied with the current solution, stop; otherwise, provide another compromise solution by changing the values of $\gamma$ and $\theta_h$, and go to Step 3.
5.3. Comparison of the Proposed Methods
To discuss the relations between the proposed methods, which may provide more insight to the decision maker when determining the “most preferred” solution, some theoretical analyses are presented here first. They are then further illustrated by the numerical experiments given in the next section.
As for the TH method, it is easy to see that $\lambda_0 = \min_h \mu_h(x)$ must hold if the objective function of Model (34) is optimized. Therefore, the decision variable $\lambda_0$ in (34) indicates the minimum satisfaction degree of the two objectives. Furthermore, the TH Model (34) can be rewritten in an equivalent form as:

$$\max_{x \in F(x)} \ \gamma \min_h \mu_h(x) + (1 - \gamma) \sum_h \theta_h \mu_h(x). \qquad (36)$$
As for the SO method, it can be verified that $\lambda_0 + \lambda_h = \mu_h(x)$ must hold for both $h = 1, 2$ if the objective function of Model (35) is optimized. Therefore, the constraint $\lambda_0 + \lambda_h \le \mu_h(x)$ in (35) can be replaced by $\lambda_h = \mu_h(x) - \lambda_0$ and $\lambda_0 \le \min_h \mu_h(x)$. Then, since $\sum_h \theta_h = 1$, the SO Model (35) can be reformulated as:

$$\max_{x \in F(x),\; 0 \le \lambda_0 \le \min_h \mu_h(x)} \ (2\gamma - 1)\, \lambda_0 + (1 - \gamma) \sum_h \theta_h \mu_h(x). \qquad (37)$$
If $\gamma < 1/2$ in the SO Model (37), the coefficient $(2\gamma - 1)$ before $\lambda_0$ in the objective function becomes negative. Then, with the objective function being optimized, $\lambda_0 = 0$ can be deduced. In such a case, the SO model actually seeks to maximize $\sum_h \theta_h \mu_h(x)$ (the positive constant coefficient $(1 - \gamma)$ before it can be ignored), i.e., the weighted sum of the satisfaction degrees of the two objectives. Furthermore, if $\gamma = 1/2$, Model (37) maximizes $\sum_h \theta_h \mu_h(x)$ as well. Therefore, if $\gamma \le 1/2$, the SO Model (37) is equivalent to the TH Model (34) with the compensation coefficient set to zero.
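This claim can be checked numerically. The sketch below evaluates the reformulated SO objective for fixed candidates with hypothetical satisfaction degrees, choosing $\lambda_0$ optimally; with $\gamma < 1/2$ it ranks candidates exactly as the weighted sum does:

```python
def so_value(mu, gamma, theta):
    """SO aggregation at a candidate, via the reformulation
    (2*gamma - 1) * lam0 + (1 - gamma) * sum_h theta_h * mu_h,
    with lam0 chosen optimally in [0, min_h mu_h]."""
    coeff = 2 * gamma - 1
    lam0 = min(mu) if coeff > 0 else 0.0  # push lam0 up only when it pays
    return coeff * lam0 + (1 - gamma) * sum(t * m for t, m in zip(theta, mu))

theta = (0.5, 0.5)
a, b = (0.9, 0.3), (0.7, 0.45)  # two hypothetical candidates

# With gamma < 1/2, the SO value is a positive multiple of the weighted sum,
# so SO ranks candidates exactly as TH with gamma = 0 does.
ws = lambda mu: sum(t * m for t, m in zip(theta, mu))
print(so_value(a, 0.3, theta) > so_value(b, 0.3, theta))  # -> True
print(ws(a) > ws(b))                                      # -> True
```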
If $\gamma > 1/2$ in the SO Model (37), the coefficient $(2\gamma - 1)$ before $\lambda_0$ in the objective function becomes positive. Correspondingly, $\lambda_0$ in Models (35) and (37) can be interpreted as the minimum satisfaction degree of the two objectives, whereas $\lambda_h$ in Model (35) is the difference between the satisfaction degree of the $h$th objective and the minimum satisfaction degree $\lambda_0$. Similarly to the conversion of (34) into (36), the SO Model (35) can be further rewritten in an equivalent form as:

$$\max_{x \in F(x)} \ (2\gamma - 1) \min_h \mu_h(x) + (1 - \gamma) \sum_h \theta_h \mu_h(x). \qquad (38)$$
Comparing Models (30) and (34)–(38), formulated following the three different methods, the $\varepsilon$-constraint, TH, and SO, some conclusions regarding the relations of these methods are presented below.
(1) For the proposed multi-objective methodology, the $\varepsilon$-constraint method fails to measure the satisfaction level of each objective function when generating multiple optimal solutions. Nevertheless, the TH and SO methods can make up for this demerit through the transformation of membership functions, which ensures an assessment of the optimization of each objective function. Besides, when the parameter $\gamma = 0$ and $\theta_1 = 1$ (so $\theta_2 = 0$), both the TH and SO methods lead to a single-objective problem optimizing only the first objective function. In such a case, the same solution can be obtained by utilizing the $\varepsilon$-constraint method with a very large value of $\varepsilon$.
(2) In terms of the TH and SO methods, there are some similarities and connections between them. First, as discussed above, the SO model with a compensation coefficient $\gamma \le 1/2$ equals the TH model with a compensation coefficient of zero. Second, when the compensation coefficient takes a value of 1 in both methods, the two methods also yield equivalent models (see Models (36) and (38) with $\gamma = 1$). Furthermore, following from (36) and (38), it is easy to prove that for each SO model with a compensation coefficient $\gamma_{SO} > 1/2$, an equivalent counterpart in the TH method with a compensation coefficient $\gamma_{TH}$ can be found by setting:

$$\gamma_{TH} = \frac{2\gamma_{SO} - 1}{\gamma_{SO}} = 2 - \frac{1}{\gamma_{SO}}.$$
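Assuming the mapping $\gamma_{TH} = 2 - 1/\gamma_{SO}$, the equivalence can be checked numerically on hypothetical satisfaction degrees: the SO objective (38) is the TH objective (36) scaled by the positive constant $\gamma_{SO}$, so both rank every candidate identically:

```python
# Numeric check that an SO model with gamma_so > 1/2 matches a TH model
# with gamma_th = 2 - 1/gamma_so, up to the positive factor gamma_so.
# The satisfaction degrees and weights below are hypothetical.

def th_value(mu, gamma, theta):
    return gamma * min(mu) + (1 - gamma) * sum(t * m for t, m in zip(theta, mu))

def so_value(mu, gamma, theta):  # reformulated SO objective (38)
    return (2 * gamma - 1) * min(mu) + (1 - gamma) * sum(t * m for t, m in zip(theta, mu))

theta = (0.4, 0.6)
gamma_so = 0.8
gamma_th = 2 - 1 / gamma_so  # = 0.75

candidates = [(0.9, 0.3), (0.6, 0.6), (0.2, 0.95)]
ratios = [so_value(mu, gamma_so, theta) / th_value(mu, gamma_th, theta)
          for mu in candidates]
print([round(r, 6) for r in ratios])  # -> [0.8, 0.8, 0.8]
```

Since the ratio is the constant $\gamma_{SO} = 0.8$ for every candidate, the two objectives induce the same optimizer.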
To display the connections between these two methods explicitly, Table 1 lists the objective functions to be optimized for the two methods when the compensation coefficient takes some critical values. For instance, the objective functions with $\gamma = 1/2$ in the TH method and $\gamma = 2/3$ in the SO method are $\frac{1}{2} \min_h \mu_h(x) + \frac{1}{2} \sum_h \theta_h \mu_h(x)$ and $\frac{1}{3} \min_h \mu_h(x) + \frac{1}{3} \sum_h \theta_h \mu_h(x)$, respectively. The latter is a positive multiple of the former, so, obviously, both methods would gain the same optimal solution.
(3) According to the connections between the TH and SO methods, it can be seen that these two methods are both capable of yielding a compromise solution between the min operator and the weighted sum operator depending on the value of the compensation coefficient $\gamma$, but with different weights on the terms $\min_h \mu_h(x)$ and $\sum_h \theta_h \mu_h(x)$. In other words, both methods can generate balanced and unbalanced compromise solutions by manipulating the values of the parameters $\gamma$ and $\theta_h$ based on the decision maker’s preferences. More specifically, a higher value of $\gamma$ means more attention is paid to obtaining a higher lower bound on the satisfaction degrees of all objectives, thus yielding a more balanced solution for the decision makers. When $\gamma = 1$, both methods only seek to maximize $\lambda_0$, i.e., the lower bound of the satisfaction levels with respect to all objectives.
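As a closing illustration, a sweep of $\gamma$ in the TH aggregation on two hypothetical candidates shows the shift from the weighted-sum preference to the balanced preference:

```python
# Effect of the compensation coefficient gamma on which candidate wins.
# Candidates and weights are hypothetical satisfaction-degree vectors.

def th_value(mu, gamma, theta=(0.5, 0.5)):
    return gamma * min(mu) + (1 - gamma) * sum(t * m for t, m in zip(theta, mu))

balanced, unbalanced = (0.55, 0.55), (0.95, 0.25)

for gamma in (0.0, 0.5, 1.0):
    pick = "balanced" if th_value(balanced, gamma) > th_value(unbalanced, gamma) \
           else "unbalanced"
    print(gamma, pick)
```

At $\gamma = 0$ the unbalanced candidate wins on the weighted sum (0.60 vs. 0.55), while any sufficiently large $\gamma$ favors the candidate with the higher minimum satisfaction degree.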