Article

New Min-Max Approach to Optimal Choice of the Weights in Multi-Criteria Group Decision-Making Problems

School of Mathematics and Statistics, Central South University, Changsha 410083, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2015, 5(4), 998-1015; https://doi.org/10.3390/app5040998
Submission received: 26 August 2015 / Revised: 21 October 2015 / Accepted: 27 October 2015 / Published: 3 November 2015

Abstract

In multi-criteria group decision-making (MCGDM), one of the most important problems is to determine the weights of the criteria and of the experts. This paper presents two Min-Max models to optimize the point estimates of these weights. Since an expert generally holds a uniform view of the importance (weight) of each criterion when he/she ranks the alternatives, the objective function of the first model minimizes the maximum variation between the actual score vector and the ideal one over all the alternatives, so that the optimal criterion weights are consistent in ranking all the alternatives for the same expert. The second model is designed to optimize the weights of the experts so that the overall evaluation of each alternative reflects the perspectives of as many experts as possible. Its objective function therefore minimizes the maximum variation between the actual vector of evaluations and the ideal one over all the experts, so that the optimal weights reduce the disagreement among the experts in evaluating the same alternative. For the constructed Min-Max models, another focus of this paper is the development of an efficient algorithm to compute the optimal weights. Several applications are employed to show the significance of the models and the algorithm. The numerical results make clear that, compared with the methods available in the literature, the developed Min-Max models solve MCGDM problems more effectively, including problems with incomplete score matrices. Specifically, by the proposed method, (1) the evaluation uniformity of each expert on the same criteria is guaranteed; (2) the overall evaluation of each alternative reflects the judgments of as many experts as possible; and (3) the highest degree of discrimination among the alternatives is obtained.

1. Introduction

Multi-criteria group decision-making (MCGDM) is a familiar decision activity, such as investment decision-making, medical diagnosis, personnel examination and military system efficiency evaluation (see [1]). In MCGDM, different experts usually give different judgments on some alternatives over a set of evaluation criteria, which are used to rank the alternatives.
Denote by $A_i$ and $C_j$, $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$, the alternative $i$ and the criterion $j$, respectively. An MCGDM problem with $n$ alternatives, $p$ experts and $m$ criteria can be described mathematically as follows. Let $x_{ij}^k$ be the $k$-th expert's evaluation of the $i$-th alternative under the $j$-th criterion, where $k = 1, 2, \ldots, p$. We call $X^k = (x_{ij}^k)_{n \times m}$, $k = 1, 2, \ldots, p$, the score matrices. $u^k = (u_1^k, u_2^k, \ldots, u_m^k)$ is a given vector, where $u_j^k$ is referred to as the weight of the $j$-th criterion by the $k$-th expert. Let $w = (w_1, w_2, \ldots, w_p)$ be a given weight vector of the experts, whose $k$-th component reflects the importance of the evaluation of the $k$-th expert. Then, an overall evaluation of alternative $i$ is obtained by:
$$S_i = \sum_{k=1}^{p} a_{ik} w_k, \tag{1}$$
where:
$$a_{ik} = \sum_{j=1}^{m} x_{ij}^k u_j^k. \tag{2}$$
It is clear that the weight vectors $u^k$ and $w$ play a fundamental role in ranking the alternatives on the basis of the given matrices $X^k$, $k = 1, 2, \ldots, p$. However, the determination of these weight vectors has been regarded as one of the main difficulties in solving the MCGDM problem.
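As a concrete illustration of Equations (1) and (2), the overall evaluations can be assembled with a few lines of linear algebra. This is a minimal sketch with made-up score data and weights, not data from this paper:

```python
import numpy as np

# p experts, each with an n x m score matrix X[k] (made-up data):
# rows = alternatives, columns = criteria
X = np.array([
    [[0.2, 0.3, 0.5],   # expert 1
     [0.4, 0.4, 0.2]],
    [[0.1, 0.6, 0.3],   # expert 2
     [0.5, 0.2, 0.3]],
])                       # shape (p, n, m) = (2, 2, 3)

U = np.array([[0.3, 0.3, 0.4],   # u^k: criterion weights of expert k
              [0.2, 0.5, 0.3]])  # shape (p, m), each row sums to 1

w = np.array([0.6, 0.4])         # expert weights, sum to 1

# Equation (2): a_{ik} = sum_j x_{ij}^k u_j^k  -> shape (n, p)
A = np.einsum('kij,kj->ik', X, U)

# Equation (1): S_i = sum_k a_{ik} w_k
S = A @ w
print(S)
```

With these numbers, $a_{11} = 0.2 \cdot 0.3 + 0.3 \cdot 0.3 + 0.5 \cdot 0.4 = 0.35$, and the final scores blend the two experts' overall evaluations.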
In the existing results, the methods to specify the weight vectors can be classified as subjective or objective. For example, Ramanathan and Ganesh in [2] presented a simple method that uses the decision-makers' own subjective opinions to calculate the experts' weights. In [3], Parreiras et al. proposed a flexible consensus scheme to establish the order. Bodily in [4] set up another decision-making team to assign weights to the initial decision-making members and computed the weights by measuring the additional preference value deviations. Xu in [5] improved Bodily's method and proposed a more direct way to calculate the weights. Since the above methods depend on the experience of the decision-makers, they basically belong to the subjective methods. However, although experience is an important reference, decision bias should be reduced and the objectiveness of decision-making improved.
In contrast, the distinguishing feature of an objective method is that it chooses suitable weights on the basis of computational models, which mine useful information from the score matrices. For example, in [6], the variation coefficient method was presented to choose the weights of criteria by calculating standard deviations. The entropy-based method (EBM) was proposed in [7] to calculate the weights of criteria based on the concept of entropy. In [8], a distance-based group decision-making methodology was proposed to solve unconventional multi-person multi-criteria emergency decision-making problems. These three methods are designed to obtain weights that improve the degree of discrimination and deviation among the alternatives. Since the weights in these methods are determined directly from the data themselves, they are relatively objective and guard against possible bias of the decision-maker. However, the existing objective methods seem to neglect the fact that each expert holds a uniform view of the importance (weight) of each criterion when he/she ranks all the alternatives.
For an MCGDM problem with incomplete information on attribute weights, linear programming models were established to find the compromise weights in [9,10]. By virtue of the notion of the significance degree, a zero-one mixed integer linear programming model was constructed in [11] to identify the weights. In [12], a linear programming model was also constructed to obtain the weights for the MCGDM problem with imprecise information.
In some existing results available in the literature, it is also common to combine subjective judgment with an objective model to choose the weight vectors. For example, Herrera et al. in [13] proposed a linguistic ordered weighted averaging operator to calculate the weights. In [14], four different operators were presented. Honert in [15] presented the so-called REMBRANDT method (ratio estimation in magnitudes of decibels to rate alternatives which are non-dominated) to determine the weights. Actually, the REMBRANDT method is a combination of AHP (Analytic Hierarchy Process) and the simple multi-attribute rating technique (SMART) to quantify the decision-makers' experiences. The linguistic probabilistic weighted average (LPWA) was presented in [16].
In addition, the above methods have also been extended to solve uncertain group decision-making problems. For example, in [17], a model was constructed by maximizing the comprehensive membership coefficient to determine the weights of decision-makers when the experts' score matrices involve intuitionistic interval fuzzy information. The basic idea is to determine the weights by different definitions of the degree of discrimination and deviation, as done in a certain environment (see [18,19]).
In summary, compared to the objective method, subjective judgment is often deemed a possible threat to the fairness of ranking in practice. However, as for an objective method, it is still a challenging task to construct a more reasonable computational model to mine the information from the score matrices.
Different from all the methods mentioned above, it is noted that in [20,21], a so-called robust portfolio modeling (RPM) method was presented to solve the multi-criterion project portfolio problems without the determination of weights in advance. However, compared to the first type of methods, RPM needs an efficient algorithm to find all the non-dominated portfolios to compute the core index of each portfolio. As pointed out in [20], the search for all the non-dominated portfolios is far more difficult than the solution of a knapsack problem. Thus, no polynomial-time algorithm exists in general to find the optimal project portfolio.
Owing to the advantage of the methods with predetermined weights in MCGDM, instead of the RPM approach, this paper intends to present two Min-Max optimization models to determine the weights of criteria and experts, respectively. In addition, based on the existing smooth optimization techniques, efficient and convergent algorithms will also be developed to solve the Min-Max problems in this paper. It is clear that with the optimal point estimates of weights, the complexity in solving the problem of ranking the alternatives is greatly reduced.
Specifically, to choose the weights of criteria, we take into consideration the uniform view held by the same expert on the importance (weight) of each criterion when he/she ranks all the alternatives. Examples will be constructed to show the difference between our method and the ones available in the literature, which are based on the degree of discrimination and deviation among the alternatives. On the other hand, the weights of experts will be optimized such that the overall score of each alternative, calculated by the obtained weights, reflects the perspectives of as many experts as possible. In this case, the objective function minimizes the maximal variation between the actual and ideal score vectors of the $p$ experts, so that the optimal weights reduce the disagreement among the experts in evaluating the same alternative. Finally, applications of the models will be employed to show the significance of the models and the algorithm.
The rest of the paper is organized as follows. In the next section, two Min-Max models are constructed to optimize the weight vectors. Section 3 is devoted to the solution method of the Min-Max models. In Section 4, the significance of the proposed approach is shown by its applications. Specifically, some comparisons will be made with the other methods available in the literature. Final remarks are given in the last section.

2. Min-Max Models for Determination of Weights

In this section, two Min-Max models are constructed to optimize the choice of weight vectors.

2.1. Min-Max Model for the Weights of Criteria

Since any expert should possess a uniform viewpoint on the importance (weight) of each criterion when he/she ranks all the alternatives, the objective function in the first model is to minimize the maximum variation between the actual score vector and the ideal one for all the alternatives such that the optimal weights of criteria are consistent in ranking all the alternatives for the same expert. For example, if an expert is asked to evaluate the academic output of some professors by two criteria: (1) the results in scientific research, such as the number of journal articles, and (2) the completed assignments in teaching, then in evaluating each professor, the expert uses the same weights in principle for the above two evaluation criteria.
Mathematically, for the given expert k, the above idea can be specified by constructing the following nonlinear optimization model:
$$\min_{u^k} \max_{1 \le i \le n} \sum_{j=1}^{m} \left( x_{ij}^k - u_j^k \sum_{l=1}^{m} x_{il}^k \right)^2 \quad \text{s.t.} \quad \sum_{j=1}^{m} u_j^k = 1, \quad 0 \le u_j^k \le 1, \; j = 1, 2, \ldots, m. \tag{3}$$
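Model (3) can be solved numerically in several ways. One simple route, sketched below under the assumption that SciPy's SLSQP solver is available (the paper itself uses a smoothing algorithm, described in Section 3), is the standard epigraph reformulation: minimize an auxiliary variable $t$ subject to $f_i(u) \le t$ and the simplex constraints. The function name `criterion_weights` is ours, not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def criterion_weights(Xk):
    """Min-Max criterion weights for one expert (Model (3)) via the
    epigraph trick: min t  s.t.  f_i(u) <= t, sum_j u_j = 1, 0 <= u_j <= 1."""
    n, m = Xk.shape
    row_sums = Xk.sum(axis=1)                      # sum_l x_{il}^k per alternative

    def f(u):                                      # f_i(u) for i = 1..n
        return ((Xk - np.outer(row_sums, u)) ** 2).sum(axis=1)

    cons = [{'type': 'eq',   'fun': lambda z: z[:-1].sum() - 1.0},
            {'type': 'ineq', 'fun': lambda z: z[-1] - f(z[:-1])}]  # t - f_i(u) >= 0
    z0 = np.append(np.full(m, 1.0 / m), 1.0)       # start from uniform weights
    res = minimize(lambda z: z[-1], z0, method='SLSQP',
                   bounds=[(0.0, 1.0)] * m + [(0.0, None)], constraints=cons)
    return res.x[:-1]
```

Applied to the first expert's score matrix in Table 2, this sketch should return weights comparable to the Min-Max row of Table 3; exact values depend on solver tolerances.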
It is noted that there have been three main methods in the literature to determine the weights of criteria: the variation coefficient method (VCM) in [6], the entropy-based method (EBM) in [7] and the distance-based method (DBM) in [8]. In order to facilitate a comparison between Model (3) and these three methods, we summarize them in detail in the Appendix and show that the weights determined by any of the three methods are in general not a solution of Model (3). In other words, these methods cannot guarantee the uniformity of the importance (weights) of a criterion for the same expert.
Actually, in the case study given in Section 4, we first obtain the weights of the criteria by different methods (see Table 3 in Section 4). Then, we compute the values of the objective function in (3) corresponding to these weights for all four methods, respectively.
In Table 1, $DM_i$, $i = 1, 2, 3$, denotes the $i$-th decision-maker. From the results in Table 1, it is clear that the uniformity degree, measured by the objective function of Model (3), differs among the four methods. The last row of Table 1 indicates that the uniformity obtained by our method (denoted Min-Max in Table 1) is the most satisfactory compared to the other three methods.
Table 1. Uniformity of the criteria's importance.

Methods      DM1       DM2       DM3
VCM          0.1096    0.2091    0.2168
EBM          0.2656    0.3655    0.3057
DBM          0.0631    0.0489    0.1099
Min-Max      0.0124    0.0222    0.0423

2.2. Min-Max Model for the Weights of Experts

We are now in a position to design a model to optimize the weights of experts.
Since the overall evaluation of each alternative should reflect the judgments of as many experts as possible, the objective function minimizes the maximum variation between the actual vector of evaluations and the ideal one over the $p$ experts, so that the optimal weights reduce the disagreement among the experts in evaluating the same alternative.
Mathematically, the optimization model for the choice of the weights of experts reads:
$$\min_{w} \max_{1 \le k \le p} \sum_{i=1}^{n} \left( a_{ik} - \sum_{l=1}^{p} w_l a_{il} \right)^2 \quad \text{s.t.} \quad \sum_{k=1}^{p} w_k = 1, \quad 0 \le w_k \le 1, \; k = 1, 2, \ldots, p. \tag{4}$$
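The same epigraph reformulation applies to Model (4). Below is a sketch (again assuming SciPy's SLSQP is available; the name `expert_weights` is ours) that takes the $n \times p$ matrix of overall scores $a_{ik}$ and returns the expert weights:

```python
import numpy as np
from scipy.optimize import minimize

def expert_weights(A):
    """Min-Max expert weights (Model (4)): A is the n x p matrix of overall
    scores a_{ik}. Minimize the largest squared deviation of any expert's
    score vector from the w-weighted group score, over the simplex."""
    n, p = A.shape

    def h(w):                                     # h_k(w) for k = 1..p
        group = A @ w                             # sum_l w_l a_{il} per alternative
        return ((A - group[:, None]) ** 2).sum(axis=0)

    cons = [{'type': 'eq',   'fun': lambda z: z[:-1].sum() - 1.0},
            {'type': 'ineq', 'fun': lambda z: z[-1] - h(z[:-1])}]
    z0 = np.append(np.full(p, 1.0 / p), 1.0)      # start from uniform weights
    res = minimize(lambda z: z[-1], z0, method='SLSQP',
                   bounds=[(0.0, 1.0)] * p + [(0.0, None)], constraints=cons)
    return res.x[:-1]
```

On the overall score matrix of Table 5, the returned weights should make the experts' errors nearly equal, which is exactly the balancing effect discussed in Section 4.2.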
It is noted that in [8], Yu and Lai recently proposed a distance-based optimization approach for the determination of the weights of experts. The objective function is to minimize the sum of the squared distance from one decision result to another, such that a maximum agreement is achieved. Specifically, for the i-th alternative, the squared distance between experts s and t is defined by:
$$d_{st}^2 = \sum_{i=1}^{n} \left( a_{is} w_s - a_{it} w_t \right)^2, \quad s, t = 1, 2, \ldots, p. \tag{5}$$
Using this squared distance, the following optimization model is constructed in [8] to determine the weights:
$$\min_{w} D \quad \text{s.t.} \quad \sum_{k=1}^{p} w_k = 1, \quad 0 \le w_k \le 1, \tag{6}$$
where:
$$D = \sum_{u=1, u \ne v}^{p} \sum_{v=1}^{p} d_{uv}^2 = \sum_{u=1, u \ne v}^{p} \sum_{v=1}^{p} \sum_{i=1}^{n} \left( a_{iu} w_u - a_{iv} w_v \right)^2.$$
Remark 1. We can show that Model (4) is not equivalent to Model (6). Actually, in Section 4.2, we will show that the two models yield different weights of experts for the same score matrix of the alternatives from each expert.

3. Min-Max Approach to MCGDM Problems

In this section, we will first develop an efficient algorithm to solve Models (3) and (4) on the basis of the properties of the models. Then, we will present the computer procedure to solve the MCGDM problem.

3.1. Efficient Algorithm for the Min-Max Models

In general, it is not easy to find a solution for a constrained Min-Max problem. We now develop an efficient algorithm to solve Models (3) and (4).
Define a function $F: \mathbb{R}^m \to \mathbb{R}$. For any $u^k \in \mathbb{R}^m$,
$$F(u^k) = \max_{1 \le i \le n} \{ f_i(u^k) \},$$
where $f_i: \mathbb{R}^m \to \mathbb{R}$ is given by:
$$f_i(u^k) = \sum_{j=1}^{m} \left( x_{ij}^k - u_j^k \sum_{l=1}^{m} x_{il}^k \right)^2.$$
It is clear that $f_i$, $i = 1, 2, \ldots, n$, are smooth quadratic functions of $u^k \in \mathbb{R}^m$.
Clearly, we can write Model (3) in the compact form:
$$\min F(u^k) \quad \text{s.t.} \quad l(u^k) \le 0, \quad u^k \in \mathbb{R}^m, \tag{9}$$
where $l: \mathbb{R}^m \to \mathbb{R}^{2m+2}$ is specified by:
$$l(u^k) = \left( \sum_{j=1}^{m} u_j^k - 1, \; 1 - \sum_{j=1}^{m} u_j^k, \; -u_1^k, \ldots, -u_m^k, \; u_1^k - 1, \ldots, u_m^k - 1 \right)^T.$$
For Model (9), we define:
$$F(u^k, t) = \theta_1 t \ln \sum_{i=1}^{n} e^{f_i(u^k)/(\theta_1 t)}, \qquad G(u^k, t) = \theta_2 t \ln \sum_{\kappa=1}^{2m+2} e^{l_\kappa(u^k)/(\theta_2 t)},$$
where $t > 0$ is a given perturbation parameter. As $t \to 0$, the solution of the following unconstrained optimization problem tends to that of Model (9):
$$\min L(u^k, t) = F(u^k, t) - t \ln(-G(u^k, t)). \tag{11}$$
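The aggregate function above replaces the nonsmooth max by a log-sum-exp smoothing, which overestimates $\max_i f_i$ by at most $\theta_1 t \ln n$ and converges to it as $t \to 0$. A small, numerically stable sketch of this building block (the helper name is ours):

```python
import numpy as np

def smooth_max(vals, t, theta=1.0):
    """Log-sum-exp smoothing of max_i vals_i, as in F(u^k, t):
    theta * t * ln( sum_i exp(vals_i / (theta * t)) )."""
    s = theta * t
    m = vals.max()                      # shift by the max for numerical stability
    return m + s * np.log(np.exp((vals - m) / s).sum())

f_vals = np.array([0.3, 0.7, 0.5])
for t in (1.0, 0.1, 0.01):
    print(t, smooth_max(f_vals, t))    # approaches max(f_vals) = 0.7 as t shrinks
```

The two-sided bound $\max_i f_i \le F(u^k, t) \le \max_i f_i + \theta_1 t \ln n$ is what justifies driving $t$ to zero in the algorithm below to recover the original Min-Max objective.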
Similarly, we define a function $H: \mathbb{R}^p \to \mathbb{R}$, specified by:
$$H(\omega) = \max_{1 \le k \le p} \{ h_k(\omega) \},$$
where $h_k: \mathbb{R}^p \to \mathbb{R}$, and for any $\omega \in \mathbb{R}^p$,
$$h_k(\omega) = \sum_{i=1}^{n} \left( a_{ik} - \sum_{l=1}^{p} \omega_l a_{il} \right)^2.$$
It is clear that $h_k$, $k = 1, 2, \ldots, p$, are smooth quadratic functions of $\omega \in \mathbb{R}^p$.
We write Model (4) in the compact form:
$$\min H(\omega) \quad \text{s.t.} \quad \Phi(\omega) \le 0, \quad \omega \in \mathbb{R}^p, \tag{13}$$
where $\Phi: \mathbb{R}^p \to \mathbb{R}^{2p+2}$ is specified by:
$$\Phi(\omega) = \left( \sum_{k=1}^{p} \omega_k - 1, \; 1 - \sum_{k=1}^{p} \omega_k, \; -\omega_1, \ldots, -\omega_p, \; \omega_1 - 1, \ldots, \omega_p - 1 \right)^T.$$
For Model (13), we define:
$$F(\omega, t) = \theta_1 t \ln \sum_{k=1}^{p} e^{h_k(\omega)/(\theta_1 t)}, \qquad G(\omega, t) = \theta_2 t \ln \sum_{\kappa=1}^{2p+2} e^{\Phi_\kappa(\omega)/(\theta_2 t)}.$$
As $t \to 0$, we can obtain an approximate solution of Model (13) by solving the following unconstrained optimization problem:
$$\min L(\omega, t) = F(\omega, t) - t \ln(-G(\omega, t)). \tag{15}$$
With the above preparation, we are now in a position to state the following framework of the algorithm to solve the optimization Models (9) and (13).
Algorithm 1.
Step 0. Given an initial guess $x^0$, choose $0 < t < 1$, $0 < \theta_1 \le 1$ and $0 < \theta_2 \le 1$. Set $\nu := 0$.
Step 1. Solve Problem (11) (or Problem (15)) by the modified conjugate gradient algorithms in [22,23,24]. Denote its optimal solution by $u^{k*}$ (or $\omega^*$), $k = 1, 2, \ldots, p$.
Step 2. If $u^{k*}$ and $\omega^*$ are solutions of Models (9) and (13), stop. Otherwise, go to Step 3.
Step 3. Set $\nu := \nu + 1$, $t := t_\nu$. Go to Step 1.
Remark 2. In the practical implementation of Algorithm 1, if the optimal solution of Problem (11) at the $\nu$-th iteration, denoted $u^{k*}(\nu)$, approximately satisfies the constraints in Model (9), and the difference of the optimal solutions at iterations $\nu$ and $\nu - 1$ satisfies $\| u^{k*}(\nu) - u^{k*}(\nu - 1) \| \le 0.5 \times 10^{-4}$, then Algorithm 1 stops.

3.2. Min-Max Approach to MCGDM

With Algorithm 1, we are in a position to state a computer procedure to solve the MCGDM problems.
Algorithm 2. (Min-Max algorithm for MCGDM)
Step 0 (Initialization). Input the given score matrices $X^k = (x_{ij}^k) \in \mathbb{R}^{n \times m}$, $k = 1, 2, \ldots, p$.
Step 1 (Weights of criteria). Solve Model (3) by Algorithm 1 to obtain the criterion weights of each expert. Denote the optimal solution for Expert $k$ by $u^k$.
Step 2 (Overall scores of alternatives from each expert). By (2), compute the overall score $a_{ik}$ of Alternative $i$ given by Expert $k$, $i = 1, 2, \ldots, n$, $k = 1, 2, \ldots, p$.
Step 3 (Weights of experts). Solve Model (4) by Algorithm 1 to obtain the weights of experts. Denote the optimal solution by $\omega_k$, $k = 1, 2, \ldots, p$.
Step 4 (Final scores of alternatives). By (1), compute the final score of Alternative $i$, $i = 1, 2, \ldots, n$, such that the ranking value of each alternative is obtained.
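The four steps above can be sketched end to end. This sketch assumes Models (3) and (4) are solved with a generic NLP solver (SciPy's SLSQP via the epigraph reformulation) rather than the paper's smoothing algorithm, and all function names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def minmax_on_simplex(g, d):
    """min over the probability simplex of max_k g_k(w), via the epigraph
    trick: min t  s.t.  g_k(w) <= t, sum(w) = 1, 0 <= w <= 1."""
    cons = [{'type': 'eq',   'fun': lambda z: z[:-1].sum() - 1.0},
            {'type': 'ineq', 'fun': lambda z: z[-1] - g(z[:-1])}]
    z0 = np.append(np.full(d, 1.0 / d), 1.0)
    res = minimize(lambda z: z[-1], z0, method='SLSQP',
                   bounds=[(0.0, 1.0)] * d + [(0.0, None)], constraints=cons)
    return res.x[:-1]

def rank_alternatives(X):
    """Algorithm 2 sketch. X has shape (p, n, m): experts x alternatives x criteria."""
    p, n, m = X.shape
    # Step 1: criterion weights u^k of each expert (Model (3))
    U = np.array([minmax_on_simplex(
            lambda u, Xk=X[k]: ((Xk - np.outer(Xk.sum(axis=1), u)) ** 2).sum(axis=1), m)
        for k in range(p)])
    # Step 2: overall scores a_{ik} by Equation (2)
    A = np.einsum('kij,kj->ik', X, U)
    # Step 3: expert weights (Model (4))
    w = minmax_on_simplex(lambda v: ((A - (A @ v)[:, None]) ** 2).sum(axis=0), p)
    # Step 4: final scores by Equation (1); higher score = better rank
    S = A @ w
    return S, np.argsort(-S)
```

Applied to the score matrices of Table 2, Step 2 of this sketch should approximately reproduce the overall scores of Table 4, up to solver tolerances.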
Remark 3. It is easy to see that the methods to determine the weight vectors of criteria and experts in Algorithm 2 differ from the others available in the literature. In the next section, we shall further show their advantages, as well as some new ideas incorporated into the construction of the models.

4. Numerical Experiments and Applications

In this section, we will apply the models and algorithm in some practical problems, especially in comparison with the existing approaches.
For all the algorithms, the computer procedures are coded in MATLAB and run on a Lenovo PC (Beijing, China) with a 2.9-GHz CPU, 4 GB of RAM and the Windows 7 operating system (Microsoft Corporation, Redmond, WA, USA).

4.1. Weights of the Criteria by Model (3)

We first present a simple example of score matrices, which is directly from [8] (see Table 2). With the same score matrices, we intend to study the difference between our method and the existing methods when they are applied to determine the weights of criteria.
Table 2. Score matrices.

Alternatives    DM1 (c1, c2, c3)      DM2 (c1, c2, c3)      DM3 (c1, c2, c3)
A1              0.24, 0.33, 0.43      0.40, 0.20, 0.40      0.15, 0.24, 0.61
A2              0.30, 0.35, 0.35      0.45, 0.18, 0.37      0.28, 0.16, 0.56
A3              0.28, 0.33, 0.39      0.35, 0.25, 0.40      0.23, 0.44, 0.33
A4              0.42, 0.26, 0.32      0.25, 0.40, 0.35      0.35, 0.20, 0.45
A5              0.25, 0.32, 0.43      0.30, 0.30, 0.40      0.44, 0.18, 0.38
By solving Model (3) and implementing the computing procedures of the other three methods (VCM, EBM and DBM) in the Appendix, we can obtain the weights of the different criteria for each expert. Numerical results are reported in Table 3.
Table 3. Weights of the criteria.

Method     u^1 (DM1)                   u^2 (DM2)                   u^3 (DM3)
VCM        (0.5082, 0.2255, 0.2663)    (0.3658, 0.5371, 0.0971)    (0.3479, 0.4222, 0.2299)
EBM        (0.6599, 0.1442, 0.1959)    (0.3200, 0.6574, 0.0226)    (0.3702, 0.4712, 0.1586)
DBM        (0.3719, 0.3806, 0.2223)    (0.4306, 0.3103, 0.2591)    (0.4013, 0.2014, 0.3973)
Min-Max    (0.3300, 0.2948, 0.3752)    (0.3500, 0.2900, 0.3600)    (0.2728, 0.2779, 0.4493)
From the results in Table 3, it is easy to compute the range (maximum minus minimum) of the criterion weights given by all experts. For the four methods (VCM, EBM, DBM and Min-Max), these ranges are 0.44, 0.6348, 0.1999 and 0.1765, respectively. Clearly, the weights of criteria given by our model are the most uniform, which is also helpful in ranking the alternatives. In other words, in order to rank the alternatives, the other methods have to allocate a relatively large weight to one criterion and a small one to another.
For the obtained weights of criteria in Table 3, Table 1 in Section 2 has reported the values of the objective function in Model (3) corresponding to these weights, respectively. Since the values of the objective function reflect the evaluation uniformity of each expert on the same criteria, the last row in Table 1 demonstrates that our method (Min-Max) outperforms all of the other three methods from the viewpoint of uniformity. In addition, DBM seems to be better than VCM and EBM.
By virtue of the criterion weights obtained from Model (3), we can calculate the overall score matrix of all the alternatives by each expert (see Table 4).
Table 4. Score matrix of alternatives by each expert.

Alternatives   DM1      DM2      DM3
A1             0.3378   0.3420   0.3817
A2             0.3335   0.3429   0.3725
A3             0.3360   0.3390   0.3333
A4             0.3353   0.3295   0.3532
A5             0.3382   0.3360   0.3408

4.2. Weights of the Experts by Model (4)

Next, we study how our method differs from the existing methods in determining the weights of experts.
For the same overall score matrix of the alternatives from each expert, we will compute the weights of experts by Model (4) in this paper and Model (6) in [8], respectively. In Table 5, we first fix an overall score matrix, which is the same as in [8], to make a comparison between our method and that in [8].
Table 5. Overall score matrix of experts.

Alternatives   DM1      DM2      DM3
A1             0.3058   0.2983   0.3009
A2             0.3239   0.3030   0.3066
A3             0.3176   0.3052   0.3338
A4             0.3573   0.3385   0.3196
A5             0.3085   0.3122   0.3277
Using Algorithm 1 to solve Model (4), we obtain the weights of experts:
$$w = (0.4823, 0.3325, 0.3333)^T.$$
With the function “fmincon” in MATLAB to solve Model (6), the weight vector of experts is:
$$\bar{w} = (0.3342, 0.0300, 0.4877)^T.$$
Corresponding to $w$ and $\bar{w}$, we compute, for each expert, the error between the actual vector of evaluations and the ideal one. The error vector of the three experts by our Model (4) is:
$$(0.5933, 0.5929, 0.5957) \times 10^{-3}.$$
The error vector by Model (6) is:
$$(0.6005, 0.2751, 0.7328) \times 10^{-3}.$$
The ranges of the two error vectors are $2.8 \times 10^{-6}$ and $4.577 \times 10^{-4}$, respectively. This indicates that, with our method, the obtained overall scores of each alternative reflect the evaluations of as many experts as possible.
However, the different weights of experts obtained by the above two methods do not seriously affect the final scores of the alternatives for the fixed initial score matrix in Table 5. Actually, Table 6 shows that the ranking is the same by the two methods.
Table 6. Rank of alternatives.

               DBM [8]           Min-Max
Alternatives   Result    Rank    Result    Rank
A1             0.3016    5       0.3032    5
A2             0.3110    4       0.3148    4
A3             0.3188    2       0.3251    2
A4             0.3384    1       0.3384    1
A5             0.3161    3       0.3180    3

4.3. Application in a Machine Selection Problem

We now apply our method to study a machine selection problem for a manufacturing company, which is engaged in manufacturing precision machined components required for automotive and general engineering industries, such as automobile industries and textile machine manufacturers.
To meet the customer demand, it is important to enhance the manufacturing capability of the company. The machine selection problem is to select the best machine from the set of some feasible alternative proposals. To make the best selection, three criteria, flexibility, quality and productivity, will be taken into consideration to evaluate the various alternative machines. In addition, four experts from inside and outside the company are involved in the decision-making process. We suppose that the initial score matrices by all the experts are given in Table 7 (also see [7]).
We implement Algorithm 2 to solve the above machine selection problem. From Steps 1 and 2 of Algorithm 2, we obtain the overall score matrix of the four experts in Table 8, where $DM_i$, $i = 1, 2, 3, 4$, denotes the $i$-th decision-maker.
From Steps 3 and 4 of Algorithm 2, the final rank of the alternatives is obtained (see Table 9), where IDNN represents the improved decision neural network method in [7].
Table 7. Initial score matrices.

Alternatives    DM1 (c1, c2, c3)      DM2 (c1, c2, c3)      DM3 (c1, c2, c3)      DM4 (c1, c2, c3)
A1              0.31, 0.20, 0.49      0.20, 0.40, 0.40      0.80, 0.10, 0.10      0.30, 0.50, 0.20
A2              0.35, 0.14, 0.51      0.30, 0.20, 0.50      0.40, 0.10, 0.50      0.20, 0.40, 0.40
A3              0.32, 0.16, 0.52      0.40, 0.10, 0.50      0.30, 0.20, 0.50      0.80, 0.10, 0.10
A4              0.28, 0.28, 0.44      0.30, 0.50, 0.20      0.25, 0.45, 0.30      0.40, 0.40, 0.20
A5              0.40, 0.20, 0.40      0.10, 0.70, 0.20      0.20, 0.60, 0.20      0.30, 0.05, 0.65
A6              0.45, 0.15, 0.40      0.15, 0.80, 0.05      0.20, 0.70, 0.10      0.15, 0.80, 0.05
A7              0.25, 0.25, 0.50      0.80, 0.10, 0.10      0.30, 0.30, 0.40      0.10, 0.70, 0.20
A8              0.30, 0.29, 0.41      0.30, 0.05, 0.65      0.30, 0.05, 0.65      0.40, 0.10, 0.50
A9              0.28, 0.18, 0.54      0.40, 0.40, 0.20      0.20, 0.40, 0.40      0.30, 0.20, 0.50
Table 8. Overall score matrix with the Min-Max model.

Alternatives   DM1      DM2      DM3      DM4
A1             0.3690   0.3241   0.3895   0.3500
A2             0.3800   0.3121   0.3332   0.3241
A3             0.3780   0.3126   0.3232   0.3656
A4             0.3520   0.3500   0.3264   0.3506
A5             0.3600   0.3488   0.3255   0.2931
A6             0.3675   0.3681   0.3295   0.3681
A7             0.3625   0.3656   0.3273   0.3488
A8             0.3475   0.2931   0.3170   0.3126
A9             0.3770   0.3506   0.3173   0.3121
Table 9. Ranking in the machine selection problem.

               Min-Max           IDNN [7]
Alternatives   Score    Rank     Score     Rank
A1             0.3597   1        0.29524   5
A2             0.3439   7        0.24729   8
A3             0.3462   5        0.24971   9
A4             0.3452   6        0.34454   3
A5             0.3403   8        0.36659   2
A6             0.3587   2        0.38320   1
A7             0.3532   3        0.29408   6
A8             0.3213   9        0.25101   7
A9             0.3475   4        0.30524   4

4.4. Application in Chemical Spill Emergency Management

At the end of this section, we will verify the effectiveness of Algorithm 2 by solving a practical problem of chemical spill emergency decision-making.
All of the relevant data are from the "Community Contact" emergency exercise organized by the Brandon Emergency Support Team (BSET), held on Wednesday, 21 June 2006, in Brandon, Manitoba (also see [25]). In this exercise, four key experts, the Brandon Police Service ($DM_1$), the Brandon Fire Division ($DM_2$), the Western Manitoba Hazardous Materials Technical Team ($DM_3$) and the Brandon School Division ($DM_4$), form a GDM framework. They are required to evaluate six emergency response alternatives $A_i$ ($i = 1, 2, \ldots, 6$) under three criteria $C_j$ ($j = 1, 2, 3$): $C_1$ represents physiological discomfort; $C_2$ represents emergency cost; and $C_3$ represents the safety criterion (in terms of the expected number of lives saved). During the release of hazardous airborne material, the "shelter-in-place" alternative ($A_1$) is the practice of staying inside (or going indoors as quickly as possible) and moving to an area of maximum safety. On the other hand, "evacuation" involves transporting the victims to a nearby destination ($A_2$) or to the more distant Brandon Keystone Center ($A_3$). $A_1$ followed by $A_2$ gives rise to the fourth alternative, sheltering in place followed by an evacuation to a nearby location, which is referred to as $A_4$. If $A_1$ is followed by $A_3$, sheltering in place followed by an evacuation to the Keystone Center, it is named $A_5$. Finally, $A_6$ is the "do-nothing" alternative.
Table 10 shows the initial score matrices.
Table 10. Initial score matrices in emergency management.

Alternatives    DM1 (c1, c2, c3)      DM2 (c1, c2, c3)      DM3 (c1, c2, c3)      DM4 (c1, c2, c3)
A1              —                     0.15, 0.25, 0.15      —                     0.20, 0.67, 0.25
A2              0.55, 0.75, 0.20      0.20, 0.20, 0.05      0.50, 0.22, 0.25      0.45, 0.44, 0.20
A3              0.45, 0.25, 0.80      0.20, 0.15, 0.05      0.30, 0.11, 0.50      0.25, 0.33, 0.20
A4              0.10, 0.10, 0.30      —                     —                     0.20, 0.22, 0.40
A5              0.10, 0.05, 0.40      —                     —                     0.10, 0.11, 0.20
A6              0.25, 0.25, 0.05      —                     —                     —
Different from ordinary score matrices, the information of the score matrices in Table 10 is incomplete. Since Model (3) does not depend on the completeness of the initial information, we can obtain the overall score matrix of the four experts by Steps 1 and 2 of Algorithm 2 (see Table 11).
Table 11. Score matrix of the alternatives for each expert.

Alternatives   DM1      DM2      DM3      DM4
A1             —        0.1773   —        0.3966
A2             0.5      0.1386   0.3046   0.3680
A3             0.5      0.1250   0.2988   0.2619
A4             0.1818   —        —        0.2697
A5             0.2091   —        —        0.1349
A6             0.1682   —        —        —
For this incomplete score matrix of the alternatives from the experts, we modify Model (4) as follows:
$$\min_{w} \max_{1 \le k \le p} \sum_{i=1}^{n} b_{ik} \left( a_{ik} - \frac{\sum_{l=1}^{p} w_l a_{il}}{\sum_{l=1}^{p} w_l b_{il}} \right)^2 \quad \text{s.t.} \quad \sum_{k=1}^{p} w_k = 1, \quad 0 \le w_k \le 1, \; k = 1, 2, \ldots, p, \tag{16}$$
where:
$$b_{ik} = \begin{cases} 0, & \text{if } a_{ik} \text{ is missing}; \\ 1, & \text{if } a_{ik} \text{ is given}. \end{cases}$$
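The incomplete-information objective of Model (16) is easy to vectorize: the indicator matrix $b$ masks the missing entries both in the per-expert error terms and in the weighted group score, which is averaged only over the experts who actually scored the alternative. A sketch (numpy only; the function name is ours, and missing scores are stored as zeros):

```python
import numpy as np

def h_incomplete(w, A, B):
    """Per-expert terms of Model (16). A holds the scores a_{ik} (0 where
    missing), B the indicators b_{ik}; the group score of alternative i is
    averaged only over the experts who actually scored it."""
    num = (A * B) @ w                       # sum_l w_l a_{il} over given entries
    den = B @ w                             # sum_l w_l b_{il}
    group = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return (B * (A - group[:, None]) ** 2).sum(axis=0)

# Toy example: expert 2 did not score alternative 2
A = np.array([[0.2, 0.4],
              [0.3, 0.0]])
B = np.array([[1.0, 1.0],
              [1.0, 0.0]])
print(h_incomplete(np.array([0.5, 0.5]), A, B))
```

Minimizing the maximum of these terms over the simplex then proceeds exactly as for Model (4).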
In Step 3 of Algorithm 2, we solve Model (16) to obtain the weights of experts. Then, the final rank of the alternatives is obtained as in Table 12.
Table 12. Final optimal decisions.

               Min-Max          GANP [25]        DBM [8]
Alternatives   Score    Rank    Score    Rank    Score    Rank
A1             0.1131   3       0.1455   4       0.1350   4
A2             0.3190   1       0.1406   6       0.2670   2
A3             0.3123   2       0.1446   5       0.3037   1
A4             0.0831   5       0.2263   1       0.1381   3
A5             0.0956   4       0.1898   2       0.1070   5
A6             0.0769   6       0.1535   3       0.0570   6
In Table 12, GANP denotes the group analytic network process approach in [25]. The results in Table 12 show that the ranking of the alternatives differs among the methods. From the final scores in Table 12, we can compute the standard deviations of these scores, which reflect the dispersion of the data. The standard deviations are 0.1161, 0.0343 and 0.0962, respectively. This indicates that our method has the highest degree of discrimination among the alternatives.
Although we have reported that, for the same initial score matrix, the different weights of experts obtained by Model (4) in this paper and Model (6) in [8] do not seriously affect the final ranking of the alternatives, different methods to determine the weights of criteria may result in different rankings. Actually, by virtue of the optimization Model (3), Table 12 shows that our ranking differs from those of the other methods. Furthermore, the difference between Min-Max and DBM is smaller than that between Min-Max and the method in [25]. In other words, constructing optimization models to determine the weights is more promising for providing a believable priority of the alternatives.
However, if we partition the alternatives into three groups according to the final scores in Table 12, then Min-Max and DBM show some similarity. The group ranked first is { A2, A3 }; the group ranked second is { A1, A4, A5 }; and the third group is { A6 }. Table 12 demonstrates that the method (Min-Max) in this paper and DBM in [8] yield the same partition.

5. Final Remarks

In this paper, two Min-Max models have been constructed to optimize the weights of the criteria and of the experts for multi-criteria group decision-making support. The obtained optimal weights of the criteria minimize the maximal variation between the actual vector of evaluations and the ideal one over all the alternatives. The optimal weights of the experts collect as many perspectives of the experts as possible, such that the difference among the experts in evaluating the same alternative is reduced. To overcome the difficulty in solving the constrained Min-Max problems, an efficient algorithm has been developed to determine the optimal weights.
The numerical results indicate that the proposed Min-Max models solve MCGDM problems more effectively than the existing methods, even in the case of incomplete score matrices. By our method, the evaluation uniformity of each expert on the same criteria is guaranteed, and the final evaluation of each alternative reflects the judgements of all the experts as fully as possible. It has also been shown that our method achieves the highest degree of discrimination among the alternatives.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 71210003) and the Hunan Provincial Innovation Foundation for Postgraduates (CX2015B038).
The authors would like to express their thanks to the anonymous referees for their constructive comments on this paper, which have greatly improved its presentation.

Author Contributions

Ming Chen contributed to the construction of the models and the numerical computation. Zhong Wan contributed to the research plan, the development of the algorithm and the manuscript preparation. Xiaohong Chen contributed to the result analysis and the improvement of the presentation.

Conflicts of Interest

All of the authors have declared that there is no conflict of interest.

Appendix

Summary on Some Main Methods for the Determination of Weights

In [6], the variation coefficient method was presented to compute the weight of the criteria as follows.
Step 1 (Normalization). Each component of the evaluation matrix is normalized to a value in the interval [ 0 , 1 ] :
$$\hat{x}_{ij}^{k}=\frac{x_{ij}^{k}}{\sum_{i=1}^{n} x_{ij}^{k}}.\qquad \text{(A1)}$$
Step 2 (Averaging). For the j-th criterion, calculate the average value by:
$$\overline{x_{j}^{k}}=\frac{1}{n}\sum_{i=1}^{n}\hat{x}_{ij}^{k}.$$
Step 3 (Deviation). For the j-th criterion, calculate the standard deviation by:
$$\Theta(x_{j}^{k})=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{x}_{ij}^{k}-\overline{x_{j}^{k}}\right)^{2}}.$$
Step 4 (Weights of criteria). For the j-th criterion, calculate the weights of criteria by:
$$u_{j}^{k}=\delta_{j}^{k}\Big/\sum_{j=1}^{m}\delta_{j}^{k},$$
where:
$$\delta_{j}^{k}=\Theta(x_{j}^{k})\big/\overline{x_{j}^{k}}.$$
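Steps 1–4 of the variation coefficient method can be sketched as follows, for one expert's n × m score matrix (the matrix below is made-up illustrative data):

```python
# Variation-coefficient criteria weights (Steps 1-4 above) for one expert.
def variation_coefficient_weights(X):
    n, m = len(X), len(X[0])
    # Step 1: normalize each column so its entries sum to 1
    col_sums = [sum(X[i][j] for i in range(n)) for j in range(m)]
    Xn = [[X[i][j] / col_sums[j] for j in range(m)] for i in range(n)]
    # Step 2: column means
    means = [sum(Xn[i][j] for i in range(n)) / n for j in range(m)]
    # Step 3: column standard deviations
    stds = [(sum((Xn[i][j] - means[j]) ** 2 for i in range(n)) / n) ** 0.5
            for j in range(m)]
    # Step 4: variation coefficients, normalized into weights
    delta = [stds[j] / means[j] for j in range(m)]
    return [d / sum(delta) for d in delta]

X = [[7, 5, 9], [3, 6, 4], [8, 5, 6]]  # 3 alternatives, 3 criteria (toy data)
u = variation_coefficient_weights(X)   # weights are nonnegative and sum to 1
```

By construction, a criterion whose normalized scores vary little across the alternatives (here the second column) receives a small weight.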
In [7], the entropy-based method was presented to choose the weights of criteria as follows.
Step 1 (Normalization). Each component of the evaluation matrix is normalized by Equation (A1).
Step 2 (Computation of entropy). For the j-th criterion, calculate the entropy by:
$$En_{j}^{k}=-\lambda\sum_{i=1}^{n}\hat{x}_{ij}^{k}\log\!\left(\hat{x}_{ij}^{k}\right),$$
where:
$$\lambda=1/\log(n).$$
Step 3 (Weights of criteria). For the j-th criterion, calculate the weights of criteria by:
$$u_{j}^{k}=\left(1-En_{j}^{k}\right)\Big/\sum_{j=1}^{m}\left(1-En_{j}^{k}\right).$$
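A minimal sketch of the entropy-based weights, with λ = 1/log(n) so that each column's entropy lies in [0, 1] (the score matrix is made-up illustrative data):

```python
# Entropy-based criteria weights for one expert's n x m score matrix.
from math import log

def entropy_weights(X):
    n, m = len(X), len(X[0])
    col_sums = [sum(X[i][j] for i in range(n)) for j in range(m)]
    P = [[X[i][j] / col_sums[j] for j in range(m)] for i in range(n)]
    lam = 1 / log(n)  # bounds each column's entropy by 1
    En = [-lam * sum(P[i][j] * log(P[i][j]) for i in range(n) if P[i][j] > 0)
          for j in range(m)]
    total = sum(1 - e for e in En)
    return [(1 - En[j]) / total for j in range(m)]

# A criterion on which all alternatives score equally carries no
# information: its entropy is 1, so its weight is (numerically) 0.
u = entropy_weights([[1, 9], [1, 1], [1, 2]])
```

The guard `if P[i][j] > 0` follows the usual convention 0·log 0 = 0 for zero scores.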
In [8], a distance-based method was presented to determine the criteria weights as follows.
Step 1 (Normalization). Each component of the evaluation matrix is normalized by Equation (A1).
Step 2 (Classification of criteria). Denote by $J_{1}$ the set of positive criteria (e.g., profit) and by $J_{2}$ the set of negative criteria (e.g., cost). Compute the optimistic and pessimistic values $u^{+}=(u_{1}^{+},u_{2}^{+},\ldots,u_{m}^{+})$ and $u^{-}=(u_{1}^{-},u_{2}^{-},\ldots,u_{m}^{-})$, where, for the j-th criterion:
$$u_{j}^{+k}=\begin{cases}\max_{1\le i\le n}\hat{x}_{ij}^{k}, & j\in J_{1};\\ \min_{1\le i\le n}\hat{x}_{ij}^{k}, & j\in J_{2},\end{cases}\qquad u_{j}^{-k}=\begin{cases}\min_{1\le i\le n}\hat{x}_{ij}^{k}, & j\in J_{1};\\ \max_{1\le i\le n}\hat{x}_{ij}^{k}, & j\in J_{2}.\end{cases}$$
Step 3 (Computation of deviation). For the j-th criterion, calculate the deviation by:
$$d_{j}^{+k}=\sum_{i=1}^{n}\left|\hat{x}_{ij}^{k}-u_{j}^{+k}\right|,\qquad d_{j}^{-k}=\sum_{i=1}^{n}\left|\hat{x}_{ij}^{k}-u_{j}^{-k}\right|.$$
Step 4 (Weights of criteria). For the j-th criterion, calculate the weights of criteria by:
$$u_{j}^{k}=\xi_{j}^{k}\Big/\sum_{j=1}^{m}\xi_{j}^{k},$$
where:
$$\xi_{j}^{k}=d_{j}^{+k}\big/\left(d_{j}^{+k}+d_{j}^{-k}\right).$$
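The distance-based steps can be sketched as below. This is a reading of the method under two assumptions: the deviations are taken in absolute value so the distances are nonnegative, and the score matrix and criterion types are made-up illustrative data. A constant column would make $d_{j}^{+k}+d_{j}^{-k}$ zero and is not handled here.

```python
# Distance-based criteria weights; positive[j] is True for profit-type
# criteria (J1) and False for cost-type criteria (J2).
def distance_based_weights(X, positive):
    n, m = len(X), len(X[0])
    col_sums = [sum(X[i][j] for i in range(n)) for j in range(m)]
    Xn = [[X[i][j] / col_sums[j] for j in range(m)] for i in range(n)]
    cols = [[Xn[i][j] for i in range(n)] for j in range(m)]
    # Step 2: optimistic and pessimistic values per criterion
    u_plus = [max(c) if positive[j] else min(c) for j, c in enumerate(cols)]
    u_minus = [min(c) if positive[j] else max(c) for j, c in enumerate(cols)]
    # Step 3: absolute deviations from the optimistic/pessimistic values
    d_plus = [sum(abs(x - u_plus[j]) for x in cols[j]) for j in range(m)]
    d_minus = [sum(abs(x - u_minus[j]) for x in cols[j]) for j in range(m)]
    # Step 4: normalize into weights
    xi = [d_plus[j] / (d_plus[j] + d_minus[j]) for j in range(m)]
    return [x / sum(xi) for x in xi]

u = distance_based_weights([[7, 5, 9], [3, 6, 4], [8, 5, 6]],
                           positive=[True, True, False])
```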

References

  1. Costa, C.A.B.; Oliveira, M.D. A multicriteria decision analysis model for faculty evaluation. Omega 2012, 40, 424–436. [Google Scholar] [CrossRef]
  2. Ramanathan, R.; Ganesh, L.S. Group preference aggregation methods employed in AHP: An evaluation and an intrinsic process for deriving members’ weightages. Eur. J. Oper. Res. 1994, 79, 249–265. [Google Scholar] [CrossRef]
  3. Parreiras, R.O.; Ekel, P.Y.; Martini, J.S. A flexible consensus scheme for multicriteria group decision making under linguistic assessments. Inf. Sci. 2010, 180, 1075–1089. [Google Scholar] [CrossRef]
  4. Bodily, S.E. A delegation process for combining individual utility functions. Manag. Sci. 1979, 25, 1035–1041. [Google Scholar] [CrossRef]
  5. Xu, Z.S. A method for multiple attribute decision making with incomplete weight information in linguistic setting. Knowl.-Based Syst. 2007, 20, 719–725. [Google Scholar] [CrossRef]
  6. Pomerol, J.C.; Romero, S.B. Multicriteria Decision in Management: Principle and Practice; Kluwer Academic Publishers: Dordrecht, Netherlands, 2000. [Google Scholar]
  7. Singh, R.K.; Choudhury, A.K.; Tiwari, M.K.; Shankar, R. Improved decision neural network (IDNN) based consensus method to solve a multi-objective group decision making problem. Adv. Eng. Inform. 2007, 21, 335–348. [Google Scholar] [CrossRef]
  8. Yu, L.; Lai, K.K. A distance-based group decision-making methodology for multi-person multi-criteria emergency decision support. Decis. Support Syst. 2011, 51, 307–315. [Google Scholar] [CrossRef]
  9. Malakooti, B. Ranking and screening multiple criteria alternatives with partial information and use of ordinal and cardinal strength of preferences. IEEE Trans. Syst. Man Cybern.-Part A: Syst. Hum. 2000, 30, 355–368. [Google Scholar] [CrossRef]
  10. Ahn, B.S. Extending Malakooti’s model for ranking multi-criteria alternatives with preference strength and partial information. IEEE Trans. Syst. Man Cybern.-Part A: Syst. Hum. 2003, 33, 281–287. [Google Scholar]
  11. Park, K.S. Mathematical programming models for characterizing dominance and potential optimality when multi-criteria alternative values and weights are simultaneously incomplete. IEEE Trans. Syst. Man Cybern.-Part A: Syst. Hum. 2004, 34, 601–614. [Google Scholar] [CrossRef]
  12. Yang, L. Procedure for group multiple attribute decision making with incomplete information. Syst. Eng. Theory Pract. 2007, 27, 172–176. [Google Scholar]
  13. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. Direct approach processed in group decision making using linguistic OWA operators. Fuzzy Sets Syst. 1996, 79, 175–190. [Google Scholar] [CrossRef]
  14. Xu, Z.S. Deviation measures of linguistic preference relations in group decision making. Omega 2005, 33, 249–254. [Google Scholar] [CrossRef]
  15. Honert, R.C.V.D. Decisional power in group decision making: A note on the allocation of group members’ weights in the multiplicative AHP and SMART. Group Decis. Negot. 2001, 10, 275–286. [Google Scholar] [CrossRef]
  16. Merigó, J.; Marqués, D.P.; Zeng, S.Z. Subjective and objective information in linguistic multi-criteria group decision making. Eur. J. Oper. Res. 2016, 248, 522–531. [Google Scholar]
  17. Wang, J.Q.; Han, Z.; Zhang, H.Y. Multi-criteria group decision-making method based on intuitionistic interval fuzzy information. Group Decis. Negot. 2014, 23, 715–733. [Google Scholar] [CrossRef]
  18. Wang, J.Q.; Peng, J.J.; Zhang, H.Y.; Liu, T.; Chen, X.H. An uncertain linguistic multi-criteria group decision-making method based on a cloud model. Group Decis. Negot. 2015, 24, 171–192. [Google Scholar] [CrossRef]
  19. Wang, J.Q.; Peng, L.; Zhang, H.Y.; Chen, X.H. Method of multi-criteria group decision-making based on cloud aggregation operators with linguistic information. Inform. Sci. 2014, 274, 177–191. [Google Scholar] [CrossRef]
  20. Liesiö, J.; Mild, P.; Salo, A. Preference programming for robust portfolio modeling and project selection. Eur. Oper. Res. 2007, 181, 1488–1505. [Google Scholar]
  21. Mild, P.; Liesiö, J.; Salo, A. Selecting infrastructure maintenance projects with Robust Portfolio Modeling. Decis. Support Syst. 2015, 77, 21–30. [Google Scholar] [CrossRef]
  22. Deng, S.H.; Wan, Z.; Chen, X.H. An improved spectral conjugate gradient algorithm for nonconvex unconstrained optimization problems. J. Optim. Theory Appl. 2013, 157, 820–842. [Google Scholar] [CrossRef]
  23. Huang, S.; Wan, Z.; Deng, S.H. A modified projected conjugate gradient method for unconstrained optimization problems. ANZIAM J. 2013, 54, 143–152. [Google Scholar] [CrossRef]
  24. Deng, S.H.; Wan, Z. A three-term conjugate gradient algorithm for large-scale unconstrained optimization problems. Appl. Numer. Math. 2015, 92, 70–81. [Google Scholar] [CrossRef]
  25. Levy, J.K.; Taji, K. Group decision support for hazards planning and emergency management: A group analytic network process (GANP) approach. Math. Comput. Model 2007, 46, 906–917. [Google Scholar]
