New Min-Max Approach to Optimal Choice of the Weights in Multi-Criteria Group Decision-Making Problems

Abstract: In multi-criteria group decision-making (MCGDM), one of the most important problems is to determine the weights of criteria and experts. This paper presents two Min-Max models to optimize the point estimates of the weights. Since each expert generally possesses a uniform viewpoint on the importance (weighted value) of each criterion when he/she needs to rank the alternatives, the objective function in the first model is to minimize the maximum variation between the actual score vector and the ideal one over all the alternatives, such that the optimal weights of criteria are consistent in ranking all the alternatives for the same expert. The second model is designed to optimize the weights of experts, such that the obtained overall evaluation for each alternative reflects the perspectives of as many experts as possible. Thus, the objective function in the second model is to minimize the maximum variation between the actual vector of evaluations and the ideal one over all the experts, such that the optimal weights can reduce the difference among the experts in evaluating the same alternative. For the constructed Min-Max models, another focus in this paper is the development of an efficient algorithm for the optimal weights. Some applications are employed to show the significance of the models and algorithm. From the numerical results, it is clear that the developed Min-Max models and the accompanying algorithm solve MCGDM problems effectively.


Introduction
Multi-criteria group decision-making (MCGDM) is a familiar decision activity that arises in investment decision-making, medical diagnosis, personnel examination and military system efficiency evaluation (see [1]). In MCGDM, different experts usually give different judgments on the alternatives over a set of evaluation criteria, which are used to rank the alternatives.
Denote by A_i and C_j, i = 1, 2, ..., n, j = 1, 2, ..., m, the i-th alternative and the j-th criterion, respectively. An MCGDM problem with n alternatives, p experts and m criteria can be described mathematically as follows. Let x^k_{ij} be the k-th expert's evaluation of the i-th alternative by the j-th criterion, and let u^k = (u^k_1, u^k_2, ..., u^k_m) be a given vector, where u^k_j is referred to as the weight of the j-th criterion by the k-th expert. Let w = (w_1, w_2, ..., w_p) be a given weight vector of the experts, whose k-th component reflects the importance of the evaluation of the k-th expert. Then, an overall evaluation of alternative i is obtained by

y_i = \sum_{k=1}^{p} w_k y^k_i,   (1)

where

y^k_i = \sum_{j=1}^{m} u^k_j x^k_{ij}.   (2)

It is clear that the weight vectors u^k and w play a fundamental role in ranking the alternatives on the basis of the given score matrices X^k = (x^k_{ij}), k = 1, 2, ..., p. However, the determination of the weight vectors has been regarded as one of the main difficulties in solving an MCGDM problem.
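As a quick illustration of the aggregation in (1) and (2), the following sketch computes each expert's overall scores and then the final scores; all score matrices and weights here are hypothetical.

```python
import numpy as np

# Hypothetical data: p = 2 experts, n = 3 alternatives, m = 2 criteria.
# X[k] is expert k's score matrix (rows: alternatives, columns: criteria).
X = [
    np.array([[0.8, 0.6],
              [0.5, 0.9],
              [0.7, 0.7]]),
    np.array([[0.6, 0.7],
              [0.9, 0.4],
              [0.8, 0.5]]),
]
# u[k] holds expert k's criteria weights; w holds the expert weights.
u = [np.array([0.5, 0.5]), np.array([0.4, 0.6])]
w = np.array([0.6, 0.4])

# Overall score of alternative i by expert k, Eq. (2): y_k[i] = sum_j u_k[j] * X_k[i, j]
y = np.array([Xk @ uk for Xk, uk in zip(X, u)])   # shape (p, n)

# Final score of alternative i, Eq. (1): z[i] = sum_k w[k] * y[k, i]
z = w @ y                                          # shape (n,)
print(z)
```

The alternatives are then ranked by sorting the final scores z in decreasing order.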
In the existing results, the methods to specify the weight vectors can be classified as subjective or objective ones. For example, Ramanathan and Ganesh in [2] presented a simple method, which uses the decision-makers' own subjective opinions to calculate the experts' weights. In [3], Parreiras et al. proposed a flexible consensus scheme to establish the order. Bodily in [4] set up another decision-making group to give weights to the initial decision-making members and worked out the weights by measuring the additional preference value deviations. Xu in [5] improved Bodily's method and proposed a more direct method to calculate the weights. Since the above methods are associated with the experience of the decision-makers, they basically belong to the subjective class. However, although experience is an important reference, decision bias should be reduced and the objectiveness of decision-making improved.
In contrast, the obvious feature of an objective method is to choose suitable weights on the basis of computational models, which can mine useful information from the score matrices. For example, in [6], the variation coefficient method was presented to choose the weights of criteria by calculating the standard deviation. The entropy-based method (EBM) was proposed in [7] to calculate the weights of criteria based on the concept of entropy. In [8], a distance-based group decision-making methodology was proposed to solve unconventional multi-person multi-criteria emergency decision-making problems. The above three methods are designed to obtain the weights such that the degree of discrimination and deviation for each alternative is improved. Since the weights in these methods are determined directly from the data themselves, they are relatively objective and robust against possible bias of the decision-maker. However, the existing objective methods seem to neglect the fact that each expert holds a uniform viewpoint on the importance (weights) of each criterion when he/she needs to rank all the alternatives.
For an MCGDM problem with incomplete information on attribute weights, linear programming models were established to find the compromise weights in [9,10]. By virtue of the notion of the significance degree, a zero-one mixed integer linear programming model was constructed in [11] to identify the weights. In [12], a linear programming model was also constructed to obtain the weights for the MCGDM problem with imprecise information.
In some existing results available in the literature, subjective judgment and objective models are also often combined to choose the weight vectors. For example, Herrera et al. in [13] proposed a linguistic ordered weighted averaging operator to calculate the weights. In [14], four different operators were presented. Honert in [15] presented the so-called REMBRANDT method (ratio estimation in magnitudes of decibels to rate alternatives which are non-dominated) to determine the weights. Actually, the REMBRANDT method is a combination of AHP (Analytic Hierarchy Process) and the simple multi-attribute rating technique (SMART) to quantify the decision-makers' experiences. The linguistic probabilistic weighted average (LPWA) was presented in [16].
In addition, the above methods have also been extended to solve uncertain group decision-making problems. For example, in [17], a model was constructed by maximizing the comprehensive membership coefficient to determine the weights of decision-makers when the experts' score matrices involve intuitionistic interval fuzzy information. The basic idea is to determine the weights by different definitions of the degree of discrimination and deviation, as done in a certain environment (see [18,19]).
In summary, compared to the objective method, subjective judgment is often deemed a possible threat to the fairness of ranking in practice.However, as for an objective method, it is still a challenging task to construct a more reasonable computational model to mine the information from the score matrices.
Different from all the methods mentioned above, it is noted that in [20,21], a so-called robust portfolio modeling (RPM) method was presented to solve multi-criterion project portfolio problems without the determination of weights in advance. However, compared to the first type of methods, RPM needs an efficient algorithm to find all the non-dominated portfolios to compute the core index of each portfolio. As pointed out in [20], the search for all the non-dominated portfolios is far more difficult than the solution of a knapsack problem. Thus, no polynomial-time algorithm exists in general to find the optimal project portfolio.
Owing to the advantage of the methods with predetermined weights in MCGDM, instead of the RPM approach, this paper intends to present two Min-Max optimization models to determine the weights of criteria and experts, respectively. In addition, based on the existing smooth optimization techniques, efficient and convergent algorithms will also be developed to solve the Min-Max problems in this paper. It is clear that with the optimal point estimates of weights, the complexity in solving the problem of ranking the alternatives is greatly reduced. Specifically, to choose the weights of criteria, we will take into consideration the uniform viewpoint possessed by the same expert on the importance (weighted value) of each criterion when he/she needs to rank all the alternatives. Examples will be constructed to show the difference between our method and the ones available in the literature, which are based on the degree of discrimination and deviation for each alternative. On the other hand, the weights of experts will be optimized such that the overall score of each alternative, calculated with the obtained weights, reflects the perspectives of as many experts as possible. In this case, the objective function is to minimize the maximal variation between the actual and ideal score vectors of the p experts, such that the optimal weights can reduce the difference among the experts in evaluating the same alternative. Finally, applications of the models will be employed to show the significance of the models and algorithm.
The rest of the paper is organized as follows. In the next section, two Min-Max models are constructed to optimize the weight vectors. Section 3 is devoted to the solution method of the Min-Max models. In Section 4, the significance of the proposed approach is shown by its applications. Specifically, some comparisons will be made with the other methods available in the literature. Final remarks are given in the last section.

Min-Max Models for Determination of Weights
In this section, two Min-Max models are constructed to optimize the choice of weight vectors.

Min-Max Model for the Weights of Criteria
Since any expert should possess a uniform viewpoint on the importance (weight) of each criterion when he/she ranks all the alternatives, the objective function in the first model is to minimize the maximum variation between the actual score vector and the ideal one for all the alternatives such that the optimal weights of criteria are consistent in ranking all the alternatives for the same expert.For example, if an expert is asked to evaluate the academic output of some professors by two criteria: (1) the results in scientific research, such as the number of journal articles, and (2) the completed assignments in teaching, then in evaluating each professor, the expert uses the same weights in principle for the above two evaluation criteria.
Mathematically, for the given expert k, the above idea can be specified by constructing the following nonlinear optimization model:

min_{u^k} max_{1≤i≤n} f_i(u^k)
s.t. \sum_{j=1}^{m} u^k_j = 1,  0 ≤ u^k_j ≤ 1,  j = 1, 2, ..., m,   (3)

where f_i(u^k) measures the variation between the actual score vector and the ideal one for the i-th alternative. It is noted that there have been three main methods to determine the weights of criteria in the literature: the variation coefficient method (VCM) in [6], the entropy-based method (EBM) in [7] and the distance-based method (DBM) in [8]. In order to facilitate a comparison between Model (3) and these three methods, we summarize them in detail in the Appendix and show that the weights determined by any one of them are not, in general, a solution of Model (3). In other words, these methods cannot guarantee the uniformity of the importance (weights) of a criterion for the same expert.
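To illustrate the structure of such a Min-Max model, the following sketch solves a toy instance in which the deviation of each alternative is assumed to be a linear absolute deviation |a_i · u − b_i| (the paper's actual f_i may be nonlinear); under this assumption, the epigraph reformulation "min t subject to f_i(u) ≤ t" becomes a linear program. The matrix A, the ideal scores b and all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# One expert's score matrix (hypothetical: 3 alternatives, 2 criteria) and
# an assumed "ideal" score b[i] for each alternative.
A = np.array([[0.8, 0.6],
              [0.5, 0.9],
              [0.7, 0.7]])
b = np.array([0.7, 0.7, 0.7])

n, m = A.shape
# Variables: (u_1, ..., u_m, t).  Objective: minimize t.
c = np.zeros(m + 1)
c[-1] = 1.0

# |A[i] . u - b[i]| <= t, written as two one-sided inequalities.
A_ub = np.vstack([np.hstack([A, -np.ones((n, 1))]),
                  np.hstack([-A, -np.ones((n, 1))])])
b_ub = np.concatenate([b, -b])

# Simplex constraints on the weights: sum_j u_j = 1, 0 <= u_j <= 1.
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * m + [(0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
u_opt, t_opt = res.x[:m], res.x[m]
print(u_opt, t_opt)
```

For this toy instance, u = (0.5, 0.5) makes every alternative's deviation zero, so the optimal maximum deviation t is zero; in general the solver balances the worst-case deviation across all alternatives.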
Actually, in the case study given in Section 4, we first obtain the weights of the criteria by the different methods (see Table 3 in Section 4). Then, we compute the values of the objective function in (3) corresponding to these weights for all four methods, respectively.
In Table 1, DM i, i = 1, 2, 3, denotes the i-th decision-maker. From the results in Table 1, it is clear that the uniformity degree of the objective function in Model (3) is different for the four methods. The last row in Table 1 indicates that the uniformity obtained by our method (denoted by Min-Max in Table 1) is the most satisfactory compared to the other three methods.

Min-Max Model for the Weights of Experts
We are now in a position to design a model to optimize the weights of experts. Since the obtained overall evaluation for each alternative should reflect the judgments of as many experts as possible, the objective function is to minimize the maximum variation between the actual vector of evaluations and the ideal one over the p experts, such that the optimal weights can reduce the difference among the experts in evaluating the same alternative.
Mathematically, the optimization model for the choice of the weights of experts reads:

min_{w} max_{1≤k≤p} h_k(w)
s.t. \sum_{k=1}^{p} w_k = 1,  0 ≤ w_k ≤ 1,  k = 1, 2, ..., p,   (4)

where h_k(w) measures the variation between the actual vector of evaluations and the ideal one for the k-th expert. It is noted that in [8], Yu and Lai recently proposed a distance-based optimization approach for the determination of the weights of experts. The objective function is to minimize the sum of the squared distances from one decision result to another, such that a maximum agreement is achieved. Specifically, for the i-th alternative, a squared distance d_i(s, t) between experts s and t is defined, i = 1, 2, ..., n. Using this squared distance, an optimization model, referred to as Model (6), is constructed in [8] to determine the weights.

Remark 1. We can show that Model (4) is not equivalent to Model (6). Actually, in Section 4.2, we will show that the weights of experts obtained by the two models are different for the same score matrix of the alternatives from each expert.
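The following sketch illustrates a distance-based disagreement measure of the kind used in [8]. The exact form of the squared distance d_i(s, t) is given by the paper's Equation (5), so the form used here, d_i(s, t) = (w_s y^s_i − w_t y^t_i)^2, and all the numbers are assumptions for illustration only.

```python
import numpy as np

# y[k, i]: expert k's overall score for alternative i (hypothetical values).
y = np.array([[0.70, 0.66, 0.62],
              [0.68, 0.60, 0.65],
              [0.72, 0.64, 0.60]])
w = np.array([1/3, 1/3, 1/3])   # candidate expert weights

p, n = y.shape
# Assumed form of the squared distance between experts s and t on
# alternative i: d_i(s, t) = (w[s] * y[s, i] - w[t] * y[t, i])**2.
total = 0.0
for i in range(n):
    for s in range(p):
        for t in range(s + 1, p):
            total += (w[s] * y[s, i] - w[t] * y[t, i]) ** 2
print(total)
```

A distance-based model of this kind chooses w to minimize the total (summed) disagreement, whereas Model (4) minimizes the worst-case deviation of any single expert, which is why the two models can produce different weights.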

Min-Max Approach to MCGDM Problems
In this section, we will first develop an efficient algorithm to solve Models (3) and (4) on the basis of the properties of the models. Then, we will present the computer procedure to solve the MCGDM problem.

Efficient Algorithm for the Min-Max Models
In general, it is not easy to find a solution for a constrained Min-Max problem.We now develop an efficient algorithm to solve Models (3) and (4).
Define a function F : R^m → R by

F(u) = max_{1≤i≤n} f_i(u),

where f_i : R^m → R, i = 1, 2, ..., n, are the component (deviation) functions in Model (3). Clearly, we can write Model (3) in the compact form

min F(u)  s.t.  l(u) ≤ 0,   (9)

where l : R^m → R^{2m+2} collects the 2m bound constraints 0 ≤ u_j ≤ 1, j = 1, 2, ..., m, and the equality constraint \sum_{j=1}^{m} u_j = 1 written as two inequalities. For Model (9), we define an entropy-type smoothing function F_t of the objective and the constraints, where t > 0 is a given perturbation parameter. As t → 0, the solution of the following unconstrained optimization problem tends to that of Model (9):

min_{u ∈ R^m} F_t(u).   (11)

Similarly, we define a function H : R^p → R by

H(ω) = max_{1≤k≤p} h_k(ω),

where h_k : R^p → R, k = 1, 2, ..., p, are the component functions in Model (4). We can then write Model (4) in the compact form

min H(ω)  s.t.  Φ(ω) ≤ 0,   (13)

where Φ : R^p → R^{2p+2} is specified analogously to l. For Model (13), we define the corresponding smoothed function H_t. As t → 0, we can obtain an approximate solution of Model (13) by solving the following unconstrained optimization problem:

min_{ω ∈ R^p} H_t(ω).   (15)

With the above preparation, we are now in a position to state the following framework of the algorithm to solve the optimization Models (9) and (13).
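A common choice for smoothing a max-type function is the entropy (log-sum-exp) function F_t(u) = t ln Σ_i e^{f_i(u)/t}, which over-approximates max_i f_i(u) by at most t ln n and converges to it as t → 0. The following sketch (not the paper's exact smoothing function, which also handles the constraints) illustrates this behavior:

```python
import numpy as np

def smooth_max(f, t):
    """Entropy (log-sum-exp) smoothing of max(f): t * log(sum(exp(f_i / t))).

    Satisfies max(f) <= smooth_max(f, t) <= max(f) + t * log(len(f)),
    so it tends to max(f) as t -> 0.  Shifting by max(f) avoids overflow.
    """
    f = np.asarray(f, dtype=float)
    fmax = f.max()
    return fmax + t * np.log(np.sum(np.exp((f - fmax) / t)))

f = [0.3, 0.7, 0.5]
for t in (1.0, 0.1, 0.01):
    print(t, smooth_max(f, t))
```

Because the smoothed function is differentiable, the non-smooth Min-Max problem becomes amenable to gradient-based methods such as conjugate gradient algorithms.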

Algorithm 1.
Step 0. Given an initial guess of the solution x_0, choose an initial perturbation parameter t > 0.
Step 1. Solve Problem (11) (or Problem (15)) by the modified conjugate gradient algorithms in [22-24]. Its optimal solution is referred to as u^k_* (or ω_*).
Step 2. If u^k_* and ω_* are the solutions of Models (9) and (13), then the algorithm stops. Otherwise, go to Step 3.
Step 3. Reduce the perturbation parameter t and return to Step 1.
Remark 2. In the practical implementation of Algorithm 1, if the optimal solution of Problem (11) at the ν-th iteration, referred to as u^k_*(ν), approximately satisfies the constraints in Model (9), and the difference of the optimal solutions corresponding to ν and ν − 1 satisfies ‖u^k_*(ν) − u^k_*(ν − 1)‖ ≤ 0.5 × 10^{-4}, then Algorithm 1 stops.
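The outer loop of Algorithm 1 can be sketched as follows. The component functions, the smoothing, and the use of SciPy's built-in CG routine in place of the modified conjugate gradient algorithms of [22-24] are all stand-ins for illustration; the stopping rule follows Remark 2.

```python
import numpy as np
from scipy.optimize import minimize

# Component functions of an illustrative min-max problem:
# minimize F(x) = max_i f_i(x).  (The paper's f_i come from Model (9);
# these quadratics are hypothetical stand-ins.)
def fs(x):
    return np.array([(x[0] - 1.0) ** 2 + x[1] ** 2,
                     (x[0] + 1.0) ** 2 + x[1] ** 2])

def smoothed(x, t):
    # Entropy (log-sum-exp) smoothing of max_i f_i(x).
    f = fs(x)
    fmax = f.max()
    return fmax + t * np.log(np.sum(np.exp((f - fmax) / t)))

x = np.array([0.5, 0.5])
t = 1.0
for _ in range(50):
    x_prev = x.copy()
    # Inner solve; 'CG' stands in for the modified conjugate gradient
    # algorithms of [22-24].
    x = minimize(lambda z: smoothed(z, t), x, method="CG").x
    t *= 0.5                                    # shrink the perturbation parameter
    if np.linalg.norm(x - x_prev) <= 0.5e-4:    # stopping rule of Remark 2
        break
print(x)
```

For this toy problem the true min-max solution is x = (0, 0) with objective value 1, and the loop converges to it as t shrinks.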

Min-Max Approach to MCGDM
With Algorithm 1, we are in a position to state a computer procedure to solve the MCGDM problems.

Algorithm 2. (Min-Max algorithm for MCGDM)
Step 0 (Initialization). Input the given score matrices X^k, k = 1, 2, ..., p.
Step 1 (Weights of criteria). Solve Model (3) by implementing Algorithm 1 to obtain the weights of the criteria for each expert. The optimal solution is referred to as u^k for Expert k.
Step 2 (Overall scores of alternatives from each expert). By (2), compute the overall scores of Alternative i given by Expert k, i = 1, 2, ..., n, k = 1, 2, ..., p.
Step 3 (Weights of experts). Solve Model (4) by implementing Algorithm 1 to obtain the weights of the experts. The optimal solution is referred to as w.
Step 4 (Final scores of alternatives). By (1), compute the final scores of Alternatives i, i = 1, 2, ..., n, such that the ranking value of each alternative is obtained.
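The overall flow of Algorithm 2 can be sketched as a pipeline. Here solve_weights is a placeholder (uniform weights) for the Min-Max solves of Models (3) and (4) via Algorithm 1, so only the data flow, not the optimization, is illustrated.

```python
import numpy as np

# Placeholder for the Min-Max solves of Models (3) and (4); a real
# implementation would run Algorithm 1 here.
def solve_weights(dim):
    return np.full(dim, 1.0 / dim)

def min_max_mcgdm(X):
    """X: list of p score matrices, each of shape (n alternatives, m criteria)."""
    p, (n, m) = len(X), X[0].shape
    # Step 1: criteria weights u^k for each expert (Model (3)).
    u = [solve_weights(m) for _ in range(p)]
    # Step 2: overall score of each alternative by each expert, Eq. (2).
    y = np.array([X[k] @ u[k] for k in range(p)])   # shape (p, n)
    # Step 3: expert weights w (Model (4)).
    w = solve_weights(p)
    # Step 4: final scores and ranking, Eq. (1).
    z = w @ y
    ranking = np.argsort(-z)                        # best alternative first
    return z, ranking

rng = np.random.default_rng(0)
X = [rng.random((4, 3)) for _ in range(2)]          # hypothetical score matrices
scores, ranking = min_max_mcgdm(X)
print(scores, ranking)
```

Swapping the placeholder for actual Min-Max solvers leaves the pipeline unchanged; only Steps 1 and 3 are affected.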
Remark 3. It is easy to see that the methods to determine the weight vectors of criteria and experts in Algorithm 2 are different from any others available in the literature. In the next section, we shall further show their advantages, as well as some new ideas incorporated into the construction of the models.

Numerical Experiments and Applications
In this section, we will apply the models and algorithm in some practical problems, especially in comparison with the existing approaches.
For all the algorithms, the computer procedures are written in MATLAB and implemented on a Lenovo PC with a 2.9-GHz CPU (made in Beijing, China), 4 GB of RAM and the Windows 7 operating system (Microsoft Corporation, Redmond, WA, USA).

Weights of the Criteria by Model (3)
We first present a simple example of score matrices, which is directly from [8] (see Table 2).With the same score matrices, we intend to study the difference between our method and the existing methods when they are applied to determine the weights of criteria.
Table 2. Score matrices. By solving Model (3) and implementing the computing procedures of the other three methods (VCM, EBM and DBM) in the Appendix, we can obtain the weights of the different criteria for each expert. Numerical results are reported in Table 3. From the results in Table 3, it is easy to compute the range of the weights of criteria given by all experts. For the four methods (VCM, EBM, DBM and Min-Max), these ranges are 0.44, 0.6348, 0.1999 and 0.1765, respectively. Clearly, the weights of criteria given by our model are the most uniform, which is also helpful in ranking the alternatives. In other words, in order to rank the alternatives, the other methods have to allocate a relatively large weight to one criterion and a small one to another.
For the obtained weights of criteria in Table 3, Table 1 in Section 2 has reported the values of the objective function in Model (3) corresponding to these weights, respectively. Since the values of the objective function reflect the evaluation uniformity of each expert on the same criteria, the last row in Table 1 demonstrates that our method (Min-Max) outperforms the other three methods from the viewpoint of uniformity. In addition, DBM seems to be better than VCM and EBM.
By virtue of the criterion weights obtained from Model (3), we can calculate the overall score matrix of all the alternatives by each expert (see Table 4). Next, we intend to study how our method differs from the existing methods in determining the weights of experts.
For the same overall score matrix of the alternatives from each expert, we will compute the weights of experts by Model (4) in this paper and Model (6) in [8], respectively. In Table 5, we first fix an overall score matrix, which is the same as in [8], to make a comparison between our method and that in [8]. Using Algorithm 1 to solve Model (4), we obtain the weights of experts: w = (0.4823, 0.3325, 0.3333)^T.
Corresponding to the expert weights obtained by the two methods, we compute, for each expert, the error between the actual vector of evaluations and the ideal one. The error vector of the three experts by our Model (4) is: (0.5933, 0.5929, 0.5957) × 10^{-3}.
The ranges of the two error vectors are 2.8 × 10^{-6} and 4.577 × 10^{-4}, respectively. This indicates that by our method, the obtained overall scores for each alternative reflect the evaluations of the experts as fully as possible.
However, the different weights of experts obtained by the above two methods do not seriously affect the final scores of the alternatives for the fixed initial score matrix in Table 5.Actually, Table 6 shows that the ranking is the same by the two methods.

Application in a Machine Selection Problem
We now apply our method to study a machine selection problem for a manufacturing company, which is engaged in manufacturing precision machined components required for automotive and general engineering industries, such as automobile industries and textile machine manufacturers.
To meet the customer demand, it is important to enhance the manufacturing capability of the company.The machine selection problem is to select the best machine from the set of some feasible alternative proposals.To make the best selection, three criteria, flexibility, quality and productivity, will be taken into consideration to evaluate the various alternative machines.In addition, four experts from inside and outside the company are involved in the decision-making process.We suppose that the initial score matrices by all the experts are given in Table 7 (also see [7]).
We implement Algorithm 2 to solve the above machine selection problem.From Steps 1 and 2 of Algorithm 2, we obtain the overall score matrix of the four experts in Table 8, where DM i , i = 1, 2, 3, 4, denote the i-th decision-maker.
From Steps 3 and 4 of Algorithm 2, the final rank of the alternatives is obtained (see Table 9), where IDNN represents the improved decision neural network method in [7].

Application in Chemical Spill Emergency Management
At the end of this section, we will verify the effectiveness of Algorithm 2 by solving a practical problem of chemical spill emergency decision-making.
All of the relevant data are from the "Community Contact" emergency exercise organized by the Brandon Emergency Support Team (BSET), held on Wednesday, 21 June 2006, in Brandon, Manitoba (also see [25]).In this exercise, there are four key experts, including the Brandon Police Service (DM 1 ), the Brandon Fire Division (DM 2 ), the Western Manitoba Hazardous Materials Technical Team (DM 3 ) and the Brandon School Division (DM 4 ), as a GDM framework.They are required to evaluate six emergency response alternatives A i (i = 1, 2, • • • , 6) under three criteria C j (j = 1, 2, 3).C 1 represents physiological discomfort; C 2 represents emergency cost; and C 3 represents the safety criterion (in terms of the expected number of lives saved).During the release of hazardous airborne material, the "shelter-in-place alternative" (A 1 ) is the practice of staying inside (or going indoors as quickly as possible) and moving to an area of maximum safety.On the other hand, "evacuation" involves transporting the victims to a nearby destination (A 2 ) or the more distant Brandon Keystone Center (A 3 ).A 1 , followed by A 2 , gives rise to the fourth alternative of sheltering in place followed by an evacuation to a nearby location, which is referred to as A 4 .If A 1 is followed by A 3 , an alternative of sheltering-in-place followed by an evacuation to the Keystone Center, it is named A 5 .Finally, A 6 is the alternative of "do-nothing".
Table 10 shows the initial score matrices. For the defective score matrices of the alternatives from the experts, we modify Model (4) accordingly; the resulting model is referred to as Model (16). In Step 3 of Algorithm 2, we solve Model (16) to obtain the weights of the experts. Then, the final rank of the alternatives is obtained as in Table 12. In Table 12, we denote by GANP the group analytic network process approach in [25]. The results in Table 12 show that the rankings of the alternatives differ across the methods. From the final alternative scores in Table 12, we can compute the standard deviations of these scores, which reflect the data dispersion. The standard deviations are 0.1161, 0.0343 and 0.0962, respectively. This result indicates that our method has the highest degree of discrimination among the alternatives.
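One simple way to handle an incomplete score matrix, not necessarily the modification used in Model (16), is to renormalize each expert's criteria weights over the observed entries of each alternative; the following sketch, with hypothetical data, illustrates the idea.

```python
import numpy as np

# An incomplete score matrix: NaN marks a missing evaluation.
X = np.array([[0.8, np.nan, 0.6],
              [0.5, 0.9,    np.nan],
              [0.7, 0.7,    0.7]])
u = np.array([0.2, 0.3, 0.5])   # criteria weights (hypothetical)

mask = ~np.isnan(X)             # observed entries
Xf = np.where(mask, X, 0.0)     # zero out the missing scores
# Per-alternative effective weight mass over observed criteria,
# used to renormalize the weighted sums.
wmass = mask @ u                # shape (n,)
y = (Xf @ u) / wmass
print(y)
```

Each alternative's overall score is thus a weighted average over only the criteria that were actually evaluated, so missing entries neither inflate nor deflate the score.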
Although we have reported that, for the same initial score matrix, the different weights of experts obtained by Model (4) in this paper and Model (6) in [8] seem not to seriously affect the final ranking of the alternatives, different methods to determine the weights of criteria may result in a different ranking of the alternatives. Actually, by virtue of the optimization Model (3), Table 12 shows that the ranking result is different from those of the other methods. Furthermore, the difference between Min-Max and DBM is less than that between Min-Max and the method in [25]. In other words, the construction of optimization models to determine the weights is more promising for offering a believable priority of the alternatives.
However, if we partition the ranking results into three groups according to the final scores in Table 12, then there is some similarity between Min-Max and DBM. The group ranked first is {A 2, A 3}; the group ranked second is {A 1, A 4, A 5}; and the third group is {A 6}. Table 12 demonstrates that the method (Min-Max) in this paper and DBM in [8] produce the same partition.

Final Remarks
In this paper, two Min-Max models have been constructed to optimize the weights of criteria and experts for multi-criteria group decision-making support. The obtained optimal weights of the criteria minimize the maximal variation between the actual vector of evaluations and the ideal one over the alternatives. The optimal weights of experts reflect the perspectives of as many experts as possible, such that the difference among the experts in evaluating the same alternative is reduced. To overcome the difficulty in solving the constrained Min-Max problems, an efficient algorithm has been developed to determine the optimal weights.
The numerical results indicate that, compared to the existing methods, the proposed Min-Max models solve MCGDM problems more effectively, even in the case of incomplete score matrices. Actually, by our method, the evaluation uniformity of each expert on the same criteria can be guaranteed, and the final evaluation for each alternative reflects the evaluations of all the experts as fully as possible. It has also been shown that our method has the highest degree of discrimination among the alternatives.

Table 1 .
Uniformity of the criteria's importance.

Table 3 .
Weights of the criteria.

Table 4 .
Score matrix of alternatives by each expert.

Table 5 .
Overall score matrix of experts.

Table 6 .
Rank of alternatives.

Table 8 .
Overall score matrix with the Min-Max model. Columns: DM 1, DM 2, DM 3, DM 4.

Table 9 .
Ranking in the machine selection problem.

Table 10 .
Initial score matrices in emergency management. Different from ordinary score matrices, the information of the score matrices in Table 10 is incomplete. Since Model (3) does not depend on the completeness of the initial information, we can obtain the overall score matrix of the four experts by Steps 1 and 2 of Algorithm 2 (see Table 11).

Table 11 .
Score matrix of the alternatives for each expert. Columns: DM 1, DM 2, DM 3, DM 4.