Article

BW-MaxEnt: A Novel MCDM Method for Limited Knowledge

Xiao-Kang Wang, Wen-Hui Hou, Chao Song, Min-Hui Deng, Yong-Yi Li and Jian-Qiang Wang
1 School of Business, Central South University, Changsha 410083, China
2 School of Business, Guilin University of Technology, Guilin 541004, China
* Authors to whom correspondence should be addressed.
Mathematics 2021, 9(14), 1587; https://doi.org/10.3390/math9141587
Submission received: 24 May 2021 / Revised: 23 June 2021 / Accepted: 1 July 2021 / Published: 6 July 2021

Abstract:
With the development of the social economy and an enlarged volume of information, the application of multiple-criteria decision making (MCDM) has become increasingly wide and deep. As a brilliant MCDM technique, the best–worst method (BWM) has attracted many scholars’ attention because it can determine the weights of criteria with less comparison time and higher consistency between judgments than the analytic hierarchy process (AHP). However, the effectiveness of the BWM is based on complete comparison information among criteria. In practice, decision makers with limited time and energy to study all criteria cannot construct a complete comparison system. In this paper, we propose a novel MCDM method named BW-MaxEnt that combines BWM and the maximum entropy method (MaxEnt) to identify the weights of unfamiliar criteria with incomplete decision information. The model can be translated into a convex optimization problem that can be solved effectively and admits a globally optimal solution. Finally, a practical application concerning the procurement of GPU workstations illustrates the feasibility of the proposed BW-MaxEnt method.

1. Introduction

In past years, many popular and topical MCDM (multiple-criteria decision making) methods have been proposed by researchers, such as AHP (Analytic Hierarchy Process) [1], TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) [2], ELECTRE (Elimination and Choice Expressing Reality) [3], VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) [4], TODIM (Tomada de Decisão Iterativa Multicritério) [5], PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluations) [6], GDM (Grey Decision Making) [7], and SWARA (Step-wise Weight Assessment Ratio Analysis) [8].
In general, the steps taken in MCDM to solve a practical problem can be divided into three parts [9].
The first step is to obtain decision information, including a criterion weight vector $w = (w_1, w_2, \ldots, w_n)$ and a matrix of scores for the alternatives, as follows:
$$\begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{pmatrix}, \qquad (1)$$
where $p_{ij}$ is the normalized value of alternative $i$ with respect to criterion $j$.
The second step is to aggregate the information into an overall value $V_i$ for each alternative; most MCDM methods use the following weighted sum:
$$V_i = \sum_{j=1}^{n} w_j p_{ij}. \qquad (2)$$
The last step is to sort the alternatives by their $V_i$ values and select the best alternative.
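For concreteness, the three steps amount to a weighted sum followed by a ranking. The following is a minimal sketch in Python; the weight vector and score matrix are made-up illustrative values, not data from this paper.

```python
import numpy as np

# Illustrative data: 3 alternatives scored against 4 criteria.
weights = np.array([0.4, 0.3, 0.2, 0.1])      # criterion weight vector w, sums to 1
scores = np.array([[0.7, 0.2, 0.9, 0.4],      # p_ij: rows are alternatives,
                   [0.5, 0.8, 0.3, 0.6],      #       columns are criteria
                   [0.9, 0.1, 0.5, 0.8]])

V = scores @ weights                          # V_i = sum_j w_j * p_ij
ranking = np.argsort(-V)                      # alternatives ordered from best to worst
print(V, ranking)
```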
In the above process, how to determine the weights of criteria is a crucial problem [10]. Generally, limited knowledge makes it difficult for decision makers (DMs) to directly specify the weights. At present, some weighting methods based on DMs’ preferences have been proposed. Among them, the AHP method, generating the subjective weight of each item by comparing all criteria in pairs, is the most extensively used one [11]. However, if the number of criteria is large, massive pairwise comparisons will increase the complexity of problems and the inconsistency of expert judgments. To address the challenges of AHP, Rezaei [12] proposed a new technique based on structured pairwise comparisons, named the Best Worst Method (BWM). In the BWM, only reference comparisons are necessary. Specifically, DMs need to first select the best and worst criteria as benchmarks or references, and then express their preference for the best one over the remaining items and the remaining items over the worst one using a number between 1 and 9, finally obtaining the optimal weights of criteria by solving a min–max nonlinear mathematical model. Compared to AHP methods, the BWM requires fewer pairwise comparisons and produces more reliable results with higher consistency [13]. However, when the pairwise comparisons are not fully consistent, the min–max nonlinear model will produce multiple optimal solutions. To overcome this disadvantage, Rezaei [14] transformed this model into a linear one to provide a unique solution. Given the advantages of the BWM, it has been combined with other MCDM methods without a weight derivation process, such as TODIM [15], VIKOR [16], TOPSIS [17], etc., and has been widely applied in many areas.
In some situations, the decision environment in the real world is complex and fast-changing. Limited knowledge and experience mean that DMs cannot give crisp pairwise comparison values for some criteria [18]. To address this issue, researchers extended the BWM to the fuzzy environment by describing DMs’ preferences with various fuzzy information types, such as fuzzy sets [19], intuitionistic fuzzy sets [20], interval type-2 fuzzy sets [21], probabilistic hesitant fuzzy sets [22], Z-numbers [23], rough fuzzy sets [24], etc. Although the fuzzy extensions of the BWM can handle the ambiguity and uncertainty of expert judgement, the collection of DMs’ preferences in these methods is still based on the assumption that DMs are familiar with all criteria [25]. However, sometimes, the DMs do not have enough time or energy to study all criteria before making a decision, and they cannot express the preference relation between some criteria, even with fuzzy information. For example, in the case of selecting a suitable car, there are many criteria that consumers need to take into consideration. Some criteria are probably known to consumers, like price, style, brand, fuel consumption, warranty, and so on. Some criteria may not be common for consumers but still important, such as ABS (anti-lock braking system), wheelbase (the distance between the front and back wheels of a car or other vehicle; the longer the wheelbase, the better the driving stability of the vehicle), and maximum torque (this determines the acceleration and climbing performance of the car). In this situation, as a first-time buyer, after looking up and studying the related information about cars, he/she may give high marks for a wheelbase length greater than 3000 mm and low marks for a wheelbase length lower than 2400 mm. However, the buyer may still not know how much weight the wheelbase should carry in the overall evaluation of the car. If some unfamiliar but actually important criteria are abandoned, it may cause a certain degree of deviation in decision making. Thus, the DM must make a decision quickly with all criteria (both familiar and unfamiliar) taken into consideration.
In these cases, the weights of unfamiliar criteria cannot be determined by the aforementioned BWM method. Therefore, the main question in this study is, when the DMs have no knowledge of certain criteria, how do we determine the weights of these unfamiliar criteria with minimum risk?
Entropy theory is an efficient tool to deal with decision-making problems in uncertain environments. Entropy is a measure of information uncertainty [26]. The maximum entropy principle is to maintain maximum uncertainty to ensure minimum risk. Applications of the maximum entropy principle are everywhere [27]. For example, it is often said, ‘Do not put all your eggs in one basket’. This choice keeps to the principle of maximum entropy, so the risks of decision making are minimized. According to this principle, when predicting the probability distribution of a random event, the prediction result should satisfy all known conditions, and any subjective assumptions about unknown information should be avoided. In this case, the probability distribution is the most uniform and the prediction risk is the smallest. Therefore, for calculating the weights of unfamiliar criteria, the principle of maximum entropy can be adopted to ensure objectivity of the results.
Based on the above analysis, this paper introduces the maximum entropy principle into the BWM method to assign weights for the unfamiliar criteria. The main contributions of this paper are as follows. Firstly, we propose a novel MCDM method named BW-MaxEnt, and we demonstrate the steps of this method to resolve the problem of criterion weights. The BW-MaxEnt method can determine the weights of unfamiliar criteria with minimum risk to ensure the reliability of decision making. Secondly, we prove that the model based on BW-MaxEnt can be converted into a convex optimization problem with an overall optimal solution, which shows the rationality of the proposed model. Finally, several numerical examples and a real-world application are executed to illustrate the effectiveness and superiority of the proposed BW-MaxEnt method.
The rest of this study is organized as follows. In Section 2, we provide an overview of the BWM and the principle of maximum entropy. In Section 3, BW-MaxEnt is proposed. In Section 4, to make BW-MaxEnt more comprehensible, we apply BW-MaxEnt to a real-world problem: the procurement of GPU workstations. Conclusions and suggestions for future research directions are presented in Section 5.

2. Preliminaries

In this section, the BWM and the maximum entropy principle are introduced.

2.1. BWM

BWM is a simple, effective, and robust MCDM method proposed by Rezaei [12]. It is a pairwise-comparison-based method that has been applied in many areas, such as facility location selection [28], green supplier selection [29], and water security sustainability evaluation [30]. To determine the weights of the criteria, the DMs need to first select the best criterion (the most desirable, the most preferred, or the most important) and the worst criterion (the least desirable, the least preferred, or the least important). Then, they compare the best criterion (alternative) to the other criteria (alternatives) and all the other criteria (alternatives) to the worst criterion (alternative). The result of the comparisons can be represented as two vectors. Through solving an optimization model constructed based on the two comparison vectors, the optimal weights of the criteria can be obtained. The process of BWM can be divided into five steps:
Step 1. 
The criteria $C_1, C_2, \ldots, C_n$ needed to make the decision are identified.
Step 2. 
DMs are asked to select the best (most important) criterion ($C_B$) and the worst (least important) criterion ($C_W$) for the decision environment.
Step 3. 
DMs need to determine the preference of the best criterion over all the other criteria using a number ranging from 1 to 9. The comparison results of Best-to-Others can be represented with a vector:
$$A_{BO} = (a_{B1}, a_{B2}, \ldots, a_{Bn}),$$
where $a_{Bj}$ indicates the preference for the best criterion over criterion $j$.
Step 4. 
Similarly, the Others-to-Worst comparison vector would be
$$A_{OW} = (a_{1W}, a_{2W}, \ldots, a_{nW}),$$
where $a_{jW}$ indicates the preference for criterion $j$ over the worst criterion.
Step 5. 
To determine the optimal weights of the criteria, the maximum of the absolute differences $\left| \frac{w_B}{w_j} - a_{Bj} \right|$ and $\left| \frac{w_j}{w_W} - a_{jW} \right|$ over all $j$ should be minimized. Considering the non-negative and sum-to-one constraints on the weights, the model can be formulated as follows:
$$\min_{w} \max_{j} \left\{ \left| \frac{w_B}{w_j} - a_{Bj} \right|, \left| \frac{w_j}{w_W} - a_{jW} \right| \right\} \quad \text{s.t.} \quad \sum_{j=1}^{n} w_j = 1, \quad w_j \ge 0 \ \text{for all } j. \qquad (3)$$
Suppose $\xi = \max_{j} \left\{ \left| \frac{w_B}{w_j} - a_{Bj} \right|, \left| \frac{w_j}{w_W} - a_{jW} \right| \right\}$; then, Problem (3) can be transformed into the following problem:
$$\min \ \xi \quad \text{s.t.} \quad \left| \frac{w_B}{w_j} - a_{Bj} \right| \le \xi, \quad \left| \frac{w_j}{w_W} - a_{jW} \right| \le \xi, \quad \sum_{j=1}^{n} w_j = 1, \quad w_j \ge 0 \ \text{for all } j. \qquad (4)$$
Because Problem (4) could result in multiple optimal solutions, Rezaei [14] proposed a linear model for BWM that has a unique solution:
$$\min \ \xi^{L} \quad \text{s.t.} \quad \left| w_B - a_{Bj} w_j \right| \le \xi^{L} \ \text{for all } j, \quad \left| w_j - a_{jW} w_W \right| \le \xi^{L} \ \text{for all } j, \quad \sum_{j=1}^{n} w_j = 1, \quad w_j \ge 0 \ \text{for all } j. \qquad (5)$$
By solving Problem (5), the optimal weights $w = (w_1, w_2, \ldots, w_n)$ and $\xi^{L}$ can be obtained. In this model, $\xi^{L}$ is considered the consistency ratio of the comparison system. The closer the value of $\xi^{L}$ is to zero, the higher the consistency level of the comparison system.
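Problem (5) is a small linear program in the variables $(w_1, \ldots, w_n, \xi^L)$. The following is a minimal sketch of how it can be posed and solved with SciPy; the helper name and the comparison vectors in the call are illustrative assumptions, not data from this paper.

```python
# Linear BWM (Problem (5)) as an LP: decision vector is [w_1, ..., w_n, xi_L].
import numpy as np
from scipy.optimize import linprog

def solve_linear_bwm(a_bo, a_ow, best, worst):
    """Return (weights, xi_L) for the linear BWM model.

    a_bo[j]: preference of the best criterion over criterion j (Best-to-Others).
    a_ow[j]: preference of criterion j over the worst criterion (Others-to-Worst).
    best, worst: 0-based indices of the best and worst criteria.
    """
    n = len(a_bo)
    c = np.zeros(n + 1)
    c[-1] = 1.0                                    # objective: minimize xi_L

    A_ub, b_ub = [], []
    for j in range(n):
        for sign in (1.0, -1.0):
            # sign * (w_B - a_Bj * w_j) - xi_L <= 0  (both signs encode |.| <= xi_L)
            row = np.zeros(n + 1)
            row[best] += sign
            row[j] -= sign * a_bo[j]
            row[-1] = -1.0
            A_ub.append(row); b_ub.append(0.0)
            # sign * (w_j - a_jW * w_W) - xi_L <= 0
            row = np.zeros(n + 1)
            row[j] += sign
            row[worst] -= sign * a_ow[j]
            row[-1] = -1.0
            A_ub.append(row); b_ub.append(0.0)

    A_eq = [np.append(np.ones(n), 0.0)]            # sum_j w_j = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[-1]

# Illustrative call: four criteria, best = criterion 0, worst = criterion 3.
weights, xi_l = solve_linear_bwm(a_bo=[1, 2, 4, 9], a_ow=[9, 5, 3, 1], best=0, worst=3)
print(weights, xi_l)
```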

2.2. Entropy Theory and the Principle of Maximum Entropy

Information entropy is a measurement of the amount of information in a system; it originated in thermodynamics and was introduced into information theory by Shannon [31]. The formula for calculating information entropy is as follows:
$$H(p) = -\sum_{i=1}^{n} p_i \log p_i, \qquad (6)$$
where $p_i$ is the probability of event $i$, and $H(p)$ is the entropy. The larger the entropy, the larger the quantity of information in the system. One of the basic functions of information is to eliminate uncertainty in the system [31]. The more information the system contains, the lower its uncertainty. Thus, we can conclude that the higher the entropy, the greater the reliability of the system.
The principle of maximum entropy was first provided by Jaynes [32,33]. It states that, subject to precisely stated prior data, the probability distribution that best represents the current state of knowledge is the one with the largest entropy. The principle of maximum entropy is commonly applied in two ways: obtaining prior probability distributions for Bayesian inference, and maximum entropy models.
The maximum entropy model (MaxEnt) is a general-purpose method for making predictions or inferences from incomplete information [34]. It can be regarded as a criterion in probabilistic model learning. When inferring the probability distribution of a random variable, the distribution that conforms to the known observations and has the maximum entropy is considered the best. Because the maximum entropy distribution is the probability distribution with the smallest subjective bias and the greatest uncertainty, it is highly robust even though it may deviate slightly from the actual distribution.
In MCDM problems, the weights of criteria can be considered as random variables. The weights are subject to non-negative and sum-to-one constraints. Thus, a simple model for calculating criterion weights based on MaxEnt is proposed, as follows:
$$\min_{w} \ \sum_{j=1}^{n} w_j \ln w_j \quad \text{s.t.} \quad \sum_{j=1}^{n} w_j = 1, \quad w_j \ge 0 \ \text{for all } j, \qquad (7)$$
which is equivalent to maximizing the entropy $H(w) = -\sum_{j=1}^{n} w_j \ln w_j$.
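With only the normalization constraint active, this model has a closed-form solution; a short Lagrangian argument (added here for completeness) shows that the maximum entropy weights are uniform:
$$\mathcal{L}(w, \lambda) = \sum_{j=1}^{n} w_j \ln w_j + \lambda \Big( \sum_{j=1}^{n} w_j - 1 \Big), \qquad \frac{\partial \mathcal{L}}{\partial w_j} = \ln w_j + 1 + \lambda = 0 \ \Rightarrow \ w_j = e^{-1-\lambda}.$$
Every $w_j$ therefore takes the same value, and the constraint $\sum_{j} w_j = 1$ gives $w_j = 1/n$ for all $j$, with maximum entropy $\ln n$; this is also the constant used to normalize the entropy term in the BW-MaxEnt model of Section 3.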

3. BW-MaxEnt

With the increasingly rapid pace of modern life, it is more important for people to make decisions quickly according to their experience and limited knowledge. It becomes uneconomical to put so much energy and time into gathering information and doing research about criteria for a small decision. DMs need to make decisions quickly after scanning the relevant information. In this situation, the BWM is not suitable. For instance, in the case of buying a camera, the decision criteria are $C_1$: price, $C_2$: weight, and $C_3$: pixel count. Suppose the customer is not familiar with pixel count and his preference information is $C_1 : C_2 = 2 : 1$. The model for BWM is then the following:
$$\min \ \xi^{L} \quad \text{s.t.} \quad \left| w_1 - 2 w_2 \right| \le \xi^{L}, \quad w_1 + w_2 + w_3 = 1, \quad w_1, w_2, w_3 \ge 0. \qquad (8)$$
It is easy to prove that model (8) has infinitely many optimal solutions. For example, $w_1 = 0$, $w_2 = 0$, $w_3 = 1$ is an optimal solution. However, this is an absurd solution, as it assigns a weight of 1 to criterion $C_3$, which is never compared.
MaxEnt is a method that focuses on the available objective information. However, some information is difficult to obtain, and DMs need to make judgments based on their own experience and preferences. Sometimes, results generated by MaxEnt alone are unrealistic and unexplainable. Therefore, we propose an integrative method based on the BWM and MaxEnt.

3.1. The Steps of BW-MaxEnt

Here, we describe the BW-MaxEnt procedure, which comprises seven steps. Figure 1 illustrates the process of BW-MaxEnt.
Step 1. 
Determine the criteria of the decision.
Summon all DMs and clarify the issues to them. All DMs are free to come up with all possible criteria that have an influence on the evaluation results. Suppose that altogether $n$ criteria are proposed; they can be represented as a set $N = \{C_1, C_2, \ldots, C_n\}$.
Step 2. 
Select the familiar criteria.
The DM will be asked to choose the criteria that they are familiar with. Suppose that $m$ criteria are selected; they can be represented as a set $M = \{C_1, C_2, \ldots, C_m\}$, where $M \subseteq N$.
Step 3. 
Choose the best and worst criteria.
The DM is asked to select the best (most important) criterion ($C_B$) and the worst (least important) criterion ($C_W$) from $M$. It is important to note that only the criteria are considered and not the values of the criteria.
Step 4. 
Make comparisons between the best criterion and all the other criteria.
The DM needs to determine their preference for the best criterion over all the other criteria using a number ranging from 1 to 9. The comparison results of Best-to-Others (BO) can be represented with a vector:
$$A_{BO} = (a_{B1}, a_{B2}, \ldots, a_{Bm}),$$
where $a_{Bj}$ indicates the preference for the best criterion over criterion $C_j$, $C_j \in M$.
Step 5. 
Make comparisons between each of the other criteria and the worst criterion.
Similarly, the resulting Others-to-Worst (OW) vector would be
$$A_{OW} = (a_{1W}, a_{2W}, \ldots, a_{mW}),$$
where $a_{jW}$ indicates the preference of criterion $C_j$, $C_j \in M$, over the worst criterion.
Step 6. 
Determine the confidence score.
The DM is asked to give a confidence score μ for the result of the comparison. μ is within the range from 0 to 1.
Step 7. 
Establish the mathematical model.
Minimizing the maximum of the absolute differences $\left| \frac{w_B}{w_j} - a_{Bj} \right|$ and $\left| \frac{w_j}{w_W} - a_{jW} \right|$ and maximizing the weight entropy $-\sum_{j=1}^{n} w_j \ln w_j$ are taken as the objective functions. Considering the non-negative and sum-to-one constraints on the weights, the mathematical model is as follows:
$$\begin{aligned} \min \ & \mu \xi + (1-\mu) \frac{\sum_{j=1}^{n} w_j \ln w_j}{\ln n} \\ \text{s.t.} \ & \left| \frac{w_B}{w_i} - a_{Bi} \right| \le \xi, \ \text{for all } C_i \in M \\ & \left| \frac{w_i}{w_W} - a_{iW} \right| \le \xi, \ \text{for all } C_i \in M \\ & \sum_{j=1}^{n} w_j = 1 \\ & w_j \ge 0, \ \text{for all } C_j \in N \end{aligned} \qquad (9)$$
where $\mu \in [0, 1]$; when $\mu = 1$, Problem (9) degenerates to the conventional BWM model.
Compared with BWM, the model for BW-MaxEnt is a multiobjective programming problem. On the one hand, the absolute difference between the decision preference and ratio of weights is considered, which means that the results become more in line with the expectations of DMs [12]. On the other hand, by employing the objective function for MaxEnt, we solve the problem of assigning weights to unknown criteria. This also improves the robustness and stability of decision making [35].

3.2. BW-MaxEnt and Convex Optimization Problems

Because Problem (9) is difficult to solve and could result in multiple optimal solutions, instead of minimizing the maximum of the absolute ratio differences $\left| \frac{w_B}{w_j} - a_{Bj} \right|$ and $\left| \frac{w_j}{w_W} - a_{jW} \right|$, we minimize the maximum of the absolute differences $\left| w_B - a_{Bj} w_j \right|$ and $\left| w_j - a_{jW} w_W \right|$ and replace each absolute-value constraint with two linear inequality constraints. Then, the problem can be formulated as follows:
$$\begin{aligned} \min \ & \mu \xi^{C} + (1-\mu) \frac{\sum_{j=1}^{n} w_j \ln w_j}{\ln n} \\ \text{s.t.} \ & w_B - a_{Bi} w_i - \xi^{C} \le 0, \ \text{for all } C_i \in M \\ & a_{Bi} w_i - w_B - \xi^{C} \le 0, \ \text{for all } C_i \in M \\ & w_i - a_{iW} w_W - \xi^{C} \le 0, \ \text{for all } C_i \in M \\ & a_{iW} w_W - w_i - \xi^{C} \le 0, \ \text{for all } C_i \in M \\ & \sum_{j=1}^{n} w_j = 1 \\ & w_j \ge 0, \ \text{for all } C_j \in N \end{aligned} \qquad (10)$$
Next, we prove that Problem (10) is a convex optimization problem. For a convex optimization problem, any locally optimal point is also globally optimal [36]. There are several sophisticated methods to solve convex optimization problems effectively, like the Interior-point method [37], Subgradient method, and Bundle method [38].
Theorem 1. 
If the optimization problem satisfies the following conditions, it is a convex optimization problem [39]:
(1)
The feasible region is convex;
(2)
The objective function f is convex.
According to Theorem 1, if we prove that the feasible region and objective function of Problem (10) are convex, then Problem (10) is a convex optimization problem.
Theorem 2. 
If a set is the intersection of a finite number of linear inequalities and equalities, the set is a polyhedron. A polyhedron is a convex set [36] and can be represented as follows:
$$P = \left\{ x \mid a_j^{T} x \le b_j, \ j = 1, \ldots, m; \ c_j^{T} x = d_j, \ j = 1, \ldots, p \right\}$$
Firstly, according to Theorem 2, the feasible region of Problem (10) is a polyhedron because its equality constraint is affine and its inequality constraints are linear.
Then, for any $w_j \in (0, 1)$, $j \le n$, we have $1/w_j > 0$; the objective function of Problem (10) is twice differentiable, and its Hessian matrix $HM(w)$ exists at each point of the feasible region:
$$HM(w) = \begin{pmatrix} \dfrac{1-\mu}{\ln n} \dfrac{1}{w_1} & & \\ & \ddots & \\ & & \dfrac{1-\mu}{\ln n} \dfrac{1}{w_n} \end{pmatrix}.$$
Let $p_i$ denote the $i$th leading principal submatrix of $HM(w)$,
$$p_i = \begin{pmatrix} \dfrac{1-\mu}{\ln n} \dfrac{1}{w_1} & & \\ & \ddots & \\ & & \dfrac{1-\mu}{\ln n} \dfrac{1}{w_i} \end{pmatrix},$$
and let $D(p_i)$ be its determinant:
$$D(p_i) = \left( \frac{1-\mu}{\ln n} \right)^{i} \prod_{j=1}^{i} \frac{1}{w_j}.$$
For a problem with more than one criterion ($n \ge 2$), $\ln n > 0$; for $\mu \in [0, 1)$, $1 - \mu > 0$, so $D(p_i) > 0$. It follows that all leading principal minors of $HM(w)$ are positive, so $HM(w)$ is positive definite and the objective function of Problem (10) is convex. Finally, according to Theorem 1, Problem (10) is a convex optimization problem.
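Because Problem (10) is convex, it can be handed directly to a disciplined-convex-programming solver. The following is a minimal sketch assuming the CVXPY library; the function name, the data layout, and the example data in the call (which mirror Example 3 below) are illustrative, not part of the original method description.

```python
import numpy as np
import cvxpy as cp

def solve_bw_maxent(n, known, a_bo, a_ow, best, worst, mu):
    """Solve Problem (10) for given comparison data.

    n      : total number of criteria (familiar and unfamiliar).
    known  : indices of the familiar criteria (the set M).
    a_bo   : dict, index -> preference of the best criterion over that criterion.
    a_ow   : dict, index -> preference of that criterion over the worst criterion.
    best, worst : indices of the best and worst criteria (both in `known`).
    mu     : confidence score in [0, 1].
    """
    w = cp.Variable(n, nonneg=True)
    xi = cp.Variable(nonneg=True)

    # Objective: mu * xi + (1 - mu) * sum_j w_j ln w_j / ln n.
    # cp.entr(w) is -w ln w elementwise, so -sum(entr(w)) equals sum_j w_j ln w_j.
    objective = cp.Minimize(mu * xi - (1 - mu) * cp.sum(cp.entr(w)) / np.log(n))

    constraints = [cp.sum(w) == 1]
    for i in known:
        constraints += [w[best] - a_bo[i] * w[i] <= xi,
                        a_bo[i] * w[i] - w[best] <= xi,
                        w[i] - a_ow[i] * w[worst] <= xi,
                        a_ow[i] * w[worst] - w[i] <= xi]

    cp.Problem(objective, constraints).solve()
    return w.value, xi.value

# Illustrative call with four criteria, one of which (index 3) is unfamiliar.
w, xi = solve_bw_maxent(n=4, known=[0, 1, 2],
                        a_bo={0: 1, 1: 3, 2: 8}, a_ow={0: 8, 1: 3, 2: 1},
                        best=0, worst=2, mu=0.5)
print(w, xi)
```

Solving this sketch with $\mu = 0.5$ and the comparison data of Example 3 should give weights comparable to the corresponding row of Table 5.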

3.3. Consistency and Robustness Analysis

By solving Problem (10), the optimal weights $w = (w_1, w_2, \ldots, w_n)$, the absolute difference $\xi^{C}$, and the entropy $H(w)$ are obtained. Although $\xi^{C}$ can be regarded as an indicator of the consistency of the comparisons, it does not provide a standard across different comparison scales. Rezaei [12] proposed a method that applies $\xi^{*}$ to calculate the consistency ratio (CR), where $\xi^{*}$ is calculated as follows:
$$\xi^{*} = \max_{j} \left\{ \left| \frac{w_B}{w_j} - a_{Bj} \right|, \left| \frac{w_j}{w_W} - a_{jW} \right| \right\}$$
Then, the maximum comparison value $a_{BW}$ is used to determine the consistency index (CI), as shown in Table 1.
Finally, the CR can be calculated as follows:
$$\text{CR} = \frac{\xi^{*}}{\text{consistency index}}$$
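As a small sketch (the helper name is made up; the CI values come from Table 1), the CR computation reduces to a table lookup and a division:

```python
# Consistency-index values from Table 1, keyed by a_BW.
CI = {1: 0.00, 2: 0.44, 3: 1.00, 4: 1.63, 5: 2.30, 6: 3.00, 7: 3.73, 8: 4.47, 9: 5.23}

def consistency_ratio(xi_star, a_bw):
    """CR = xi* / CI(a_BW); values below 0.1 are read as acceptable consistency."""
    return xi_star / CI[a_bw]

# Example 1 below reports xi* = 0.1994 with a_BW = 8, giving CR = 0.1994 / 4.47 ≈ 0.04.
print(round(consistency_ratio(0.1994, 8), 2))
```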

3.4. Numerical Examples

Example 1.
This simple example involves a customer who wants to buy a cell phone for himself but is not familiar with all the parameters of the mobile phone. The main decision criteria can be $C_1$: price, $C_2$: style, $C_3$: camera, and $C_4$: processor. Table 2 shows the vectors of comparison results for the customer. The second line is the BO vector, and the third line is the OW vector. $C_1$ is the best criterion, and $C_3$ is the worst criterion.
According to the BWM method proposed by Rezaei [14], the following model can be established:
$$\begin{aligned} \min \ & \xi^{C} \\ \text{s.t.} \ & w_1 - 3 w_2 - \xi^{C} \le 0, \quad 3 w_2 - w_1 - \xi^{C} \le 0 \\ & w_1 - 8 w_3 - \xi^{C} \le 0, \quad 8 w_3 - w_1 - \xi^{C} \le 0 \\ & w_1 - 2 w_4 - \xi^{C} \le 0, \quad 2 w_4 - w_1 - \xi^{C} \le 0 \\ & w_2 - 3 w_3 - \xi^{C} \le 0, \quad 3 w_3 - w_2 - \xi^{C} \le 0 \\ & w_4 - 4 w_3 - \xi^{C} \le 0, \quad 4 w_3 - w_4 - \xi^{C} \le 0 \\ & w_1 + w_2 + w_3 + w_4 = 1 \\ & w_1, w_2, w_3, w_4 \ge 0 \end{aligned}$$
By solving this model, we can get $w_1^{*} = 0.5062$, $w_2^{*} = 0.1728$, $w_3^{*} = 0.0617$, $w_4^{*} = 0.2593$, and $\xi^{*} = 0.1994$. As $a_{BW} = a_{13} = 8$, the CI is identified as 4.47, so CR = 0.04. In this situation, DMs have sufficient knowledge of all criteria, and the reference comparison values are fully given. A reasonable optimal solution can be obtained using the conventional BWM method.
Example 2. 
If the customer only has partial knowledge of a certain criterion, they may not be able to give all comparison values. As shown in Table 3, the comparison value between $C_4$ and the worst criterion is unknown, which is marked with *.
In this situation, the two constraints $w_4 - 4 w_3 - \xi^{C} \le 0$ and $4 w_3 - w_4 - \xi^{C} \le 0$ are deleted from the model, but the other two constraints on $w_4$ still remain. Solving this model using the BWM method results in $w_1^{*} = 0.5062$, $w_2^{*} = 0.1728$, $w_3^{*} = 0.0617$, $w_4^{*} = 0.2593$, $\xi^{*} = 0.1994$, and CR = 0.04. It can be seen that when DMs can only give partial information for certain criteria, the optimal solution generated by the conventional BWM method is still reasonable.
Example 3.
Sometimes, due to the complexity of MCDM issues, DMs are not familiar with some important criteria at all, and they cannot provide any information for these criteria. As shown in Table 4, the customer uses * to mark the criterion $C_4$ that they are not familiar with. In this situation, the model we built does not contain any inequality constraints on $w_4$. If the model is solved by the conventional BWM method, the optimal solution obtained is $w_1^{*} = 0$, $w_2^{*} = 0$, $w_3^{*} = 0$, $w_4^{*} = 1$, and $\xi^{*} = 0$, which is unreasonable.
Using the proposed BW-MaxEnt method, we can establish the model for this problem based on the vectors from Table 4, as follows:
$$\begin{aligned} \min \ & \mu \xi^{C} + \frac{1-\mu}{\ln n} \sum_{j=1}^{n} w_j \ln w_j \\ \text{s.t.} \ & w_1 - 3 w_2 - \xi^{C} \le 0, \quad 3 w_2 - w_1 - \xi^{C} \le 0 \\ & w_1 - 8 w_3 - \xi^{C} \le 0, \quad 8 w_3 - w_1 - \xi^{C} \le 0 \\ & w_2 - 3 w_3 - \xi^{C} \le 0, \quad 3 w_3 - w_2 - \xi^{C} \le 0 \\ & w_1 + w_2 + w_3 + w_4 = 1 \\ & w_1, w_2, w_3, w_4 \ge 0 \end{aligned}$$
To test the sensitivity of the BW-MaxEnt method, we considered different values of $\mu \in \{0, 0.1, \ldots, 1\}$ and observed the resulting changes in the weights of the criteria and in the CR. As shown in Table 5, changing $\mu$ produced corresponding changes in the weights of the criteria.
Figure 2 illustrates the weights obtained by BW-MaxEnt over $\mu$. When $\mu = 0$, $w_1 = w_2 = w_3 = w_4 = 0.25$, which means the DM has no confidence in the results of the comparison. When the DM knows nothing about the criteria, assigning an equal weight to each criterion is in line with the maximum entropy principle. When $\mu = 1$, BW-MaxEnt is equivalent to the conventional BWM. It can also be noticed that when $\mu \in [0.2, 0.9]$, the weights changed little, which illustrates that the proposed method is not sensitive to the value of $\mu$ and can generate stable results.
Figure 3 shows the CR over $\mu$; as $\mu$ increased, the CR decreased, and the veracity of the comparisons increased. When $\mu \ge 0.3$, all the calculated results satisfied the consistency requirement, with a CR of less than 0.1, which shows good consistency.
It can be concluded that the conventional BWM model is no longer applicable when DMs cannot provide any comparative information for unfamiliar criteria. The proposed BW-MaxEnt method uses the advantages of entropy theory to reduce the risk caused by information uncertainty, thereby assigning reasonable weights to unfamiliar criteria.

4. An Illustrative Application

To make the proposed methodology more comprehensible, in this section, we discuss a real-world application: the project of GPU workstation purchase for a Business School.
With the rapid development of information techniques, artificial intelligence has become one of the hottest topics in recent years. Many scholars in the Business School of Central South University (CSU) are pursuing relevant research, like recommender systems, knowledge mapping analysis, social network analysis, and so on. These studies usually require a large amount of computing resources, especially if they are based on deep learning. It is difficult to meet the demand for computing resources with a PC. Therefore, the CSU Business School decided to purchase a batch of GPU workstations and build a cloud computing platform.

4.1. Data Collection

After understanding the demands of the business school, the computer supplier, who has established a long-term partnership with the CSU Business School, offered two types of GPU workstations: the Leadtek W2030 and the Stend IW 4213. The detailed configuration information of the GPU workstations is shown in Table 6.
Central Processing Unit (CPU): The CPU is also named the main processor and is as necessary to a computer as a brain is to a human. It is one of the most important parts of the workstation because it controls the operation of the computers.
Graphics Processing Unit (GPU): This can greatly speed up the training of a deep learning model.
Price: The price for the SD is almost double the price for the LT.
Hard Drive (HD): This is used for storing and retrieving data.
Solid State Drive (SSD): This is also used for storing and retrieving data, but it is far faster than a regular hard drive.
Memory: The amount of space in a computer for storing information. Algorithms for the recommended system will consume a lot of memory.
Warranty: If a GPU workstation fails within a certain time, the computer supplier will repair it or replace it free of charge.
Operating System (OS): The SD provides a selection of multiple operating systems.
Power Supply (PS): Redundant power will make sure that the GPU workstation has a normal run time.
There are some factors that need to be considered when procuring GPU workstations. First of all, the performance of the GPU station must meet the needs of the scientific research work. Second, the expense of the GPU workstations should not exceed the budget. Lastly, the GPU workstations should be easy to install, maintain, and use. A total of three individuals contributed to the decision, those being the director of the Big Data and Intelligent Decision Research Center of the Business School (Respondent 1), the Senior Purchaser who is in charge of purchasing for the Business School (Respondent 2), and a professor who has been studying recommendation systems (Respondent 3).
After learning about the configuration information of the GPU workstations, we introduced the comparison process of BW-MaxEnt to the respondents in detail. Then, they were asked to fill in a questionnaire designed based on BW-MaxEnt. The questionnaire can be divided into two parts: One part is the comparison between criteria, where we provided a full description of the 1–9 scale for the comparison. The other part is the evaluation of the GPU workstation configurations, where respondents were asked to provide an evaluation of each configuration by giving an integer number ranging from 1 to 10.

4.2. Calculation Process and Results

The comparison results from the respondents are shown in Table 7. For each respondent, the first column is the comparison result of the BO vector, and the second column is the comparison result of the OW vector. Respondent 2, as a purchaser, is not as skillful with computers as the other respondents. He was able to use * to mark the criteria that he knew less about, and he did not make comparisons of these criteria to other criteria. Respondent 3 marked “Power Supply” with *. The evaluation scores for the workstation configurations are shown in Table 8. For each respondent, the first column is the scores for the LT, and the second column is the scores for the SD. The respective confidence scores μ given by the respondents were 1, 0.5, and 0.8.
In this situation, the conventional BWM method cannot appropriately model the problems for Respondents 2 and 3. Thus, the weights were determined using the BW-MaxEnt model for the three different respondents. Table 9 shows the optimal weights of each criterion for each respondent. As the main purpose of this purchase, GPU was the most important criterion for Respondents 1 and 3 and the second most important for Respondent 2. Price was the most important criterion for Respondent 2, because his main task is to control the purchasing budget.
As can be seen from the results, for Respondents 1 and 3, GPU was the most important criterion for GPU workstation selection, followed by CPU and Memory, because the respondents are familiar with the demands of the CSU Business School for computing resources. For Respondent 2, price was the most important criterion because one of his responsibilities was to make sure the purchase was under budget. These results align well with the factors identified earlier as needing consideration. For all respondents, HD and SSD did not receive a large weight, because the demand for computing power is far more urgent than the demand for storage.
The CR is the consistency indicator for the comparisons. As shown in Table 10, the comparisons show a high consistency as the value of the CR is less than 0.1 [40].
Figure 4 shows the GPU workstation scores for each respondent. It can be seen that the Stend IW 4213 had higher evaluation scores (6.1165, 5.6180, 7.0653) than the Leadtek W2030 (4.7126, 5.4846, 5.8224). Thus, the Stend IW 4213 should be selected as the first choice. Finally, the CSU Business School purchased four Stend IW 4213 GPU workstations and one Leadtek W2030 for the platform.
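As a quick check on these numbers, the following is a minimal sketch assuming the weighted-sum aggregation $V_i = \sum_j w_j p_{ij}$ from the Introduction, applied to Respondent 1's weights (Table 9) and raw 1–10 scores (Table 8):

```python
import numpy as np

# Respondent 1: criterion weights (Table 9) and evaluation scores (Table 8),
# ordered CPU, GPU, Price, HD, SSD, Memory, Warranty, OS, PS.
weights = np.array([0.1417, 0.2733, 0.1316, 0.0945, 0.0945,
                    0.1316, 0.0304, 0.0315, 0.0708])
lt = np.array([6, 4, 3, 5, 4, 5, 7, 5, 7])   # Leadtek W2030
sd = np.array([7, 8, 5, 5, 6, 6, 5, 5, 2])   # Stend IW 4213

print(weights @ lt, weights @ sd)            # ≈ 4.7126 and 6.1165, matching Figure 4
```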

4.3. Comparison Analysis and Discussion

In this paper, we proposed a new MCDM method called BW-MaxEnt to determine the weights of criteria. The case study shown above illustrated the feasibility of the BW-MaxEnt method. The advantages of the proposed method can be further discussed through the following comparison analysis with existing research.
(1)
Comparison with the conventional BWM method: For Respondent 1, all comparison values were given, so using the conventional BWM method to calculate the criterion weights will obtain the same results as the BW-MaxEnt method with μ = 1 . If μ 1 , it means that the DM is not fully confident in his judgment, and the criterion weights obtained by the BW-MaxEnt method will tend to obey a uniform distribution to a certain extent according to the value of μ . From this perspective, it can be said that the BWM method is a special case of the proposed BW-MaxEnt method when μ = 1 , and DMs can express their confidence through the value of μ . For Respondent 2, the information for the three criteria “Memory”, “OS”, and “PS” is unknown, so solving the problem by the conventional BWM method will get unreasonable results: the weights of “Memory”, “OS”, or “PS” will equal 1, and the weights of the remaining criteria will all equal 0. For Respondent 3, he is unfamiliar with the criterion “PS”, which will be assigned a weight of 1 and the other criteria will have a weight of 0 if the conventional BWM method is applied. Obviously, this result is also unreasonable. Therefore, in a situation where the DMs (like Respondents 2 and 3) are unfamiliar with certain criteria, the proposed BW-MaxEnt method is more effective and efficient for MCDM problem modeling than the conventional BWM method.
(2)
Comparison with the extended fuzzy BWM method: Instead of crisp values, fuzzy numbers are always adopted to express DMs’ preferences in the fuzzy BWM method. However, like the BWM method, the solution process of the fuzzy BWM method also requires DMs to be familiar with all criteria; that is, the preference values in the comparison vectors should be complete. Thus, for Respondents 2 and 3, using fuzzy BWM methods to model the MCDM problem cannot generate reasonable results. In addition, we should note that although fuzzy information can express the uncertainty of DMs’ judgement, it increases the inconsistency of pairwise comparisons. In the proposed method, if DMs are doubtful about their judgment, they can show their confidence by changing the value of $\mu$, which maintains the consistency of the model. Therefore, we can conclude that the BW-MaxEnt method can not only deal with the uncertainty of information, but also shows better performance than the fuzzy BWM method in maintaining consistency between judgments, which shows the superiority of the proposed method from another perspective.

5. Conclusions and Future Research

With the development of the social economy and information technology, people are facing increasingly varied decision-making tasks in daily life. In MCDM problems, the determination of the criterion weights is a crucial issue. The BWM is a brilliant technique for solving this task because it requires fewer comparisons and obtains more consistent results than AHP. Although the way of making comparisons is consistent with rational human thinking, the BWM requires DMs to be familiar with all the criteria to give specific preference degrees. However, with limited time and energy, DMs will not have full knowledge of decision-making criteria. They may be unfamiliar with some important criteria and sometimes have to make decisions in the short term based on their limited knowledge and experience. Therefore, the BWM may not be suitable for such fast decision making.
To solve this problem, considering the effectiveness of the maximum entropy method in handling uncertain information, a novel MCDM method named BW-MaxEnt combining the BWM and entropy theory is proposed in this study. BW-MaxEnt can be used to solve problems with limited knowledge and it has wider application prospects than BWM. When μ = 1, our model can be converted to the BWM model. In other words, BWM can be regarded as a special case where the decision makers are familiar with all the criteria and they are confident in their decisions. We proved that the model of BW-MaxEnt can be translated into a convex optimization problem that can be resolved effectively. Lastly, we applied BW-MaxEnt in a real-world task to make it more comprehensible.
How to determine the value of $\mu$ in a more reasonable way will be the subject of our subsequent research work; this is also a shortcoming of our current research. In a multiobjective programming problem, the combination coefficient influences the reliability, robustness, and accuracy of the proposed model. The value of $\mu$ should be determined by the number $n$ of all criteria, the number $m$ of criteria that the DMs are familiar with, and the preference value of the best criterion over the worst criterion, $a_{BW}$. However, in this study, $\mu$ was provided by the DMs.
In future research, we also suggest applying BW-MaxEnt in other real-world scenarios where decisions need to be made quickly, like online recommendations. According to a BW-MaxEnt-based questionnaire filled out by customers, a background server could calculate customers’ preference information quickly and accurately; then businesses could provide customers with suitable products and solutions. This application of BW-MaxEnt would save a lot of time, improve decision-making efficiency, and improve satisfaction for customers.

Author Contributions

Formal analysis, X.-K.W.; Investigation, W.-H.H., C.S., M.-H.D. and Y.-Y.L.; Methodology, X.-K.W., W.-H.H., C.S., Y.-Y.L. and J.-Q.W.; Resources, Y.-Y.L. and J.-Q.W.; Supervision, J.-Q.W.; Writing—original draft, X.-K.W., C.S. and M.-H.D.; Writing—review and editing, W.-H.H. and Y.-Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 71871228).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data available in a publicly accessible repository.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Saaty, T.L. A scaling method for priorities in hierarchical structures. J. Math. Psychol. 1977, 15, 234–281. [Google Scholar] [CrossRef]
  2. Behzadian, M.; Khanmohammadi Otaghsara, S.; Yazdani, M.; Ignatius, J. A state-of the-art survey of TOPSIS applications. Expert Syst. Appl. 2012, 39, 13051–13069. [Google Scholar] [CrossRef]
  3. Roy, B. Classement et choix en présence de points de vue multiples. Revue Française Inform. Rech. Opérationnelle 1968, 2, 57–75. [Google Scholar] [CrossRef]
  4. Opricovic, S.; Tzeng, G.-H. Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. Eur. J. Op. Res. 2004, 156, 445–455. [Google Scholar] [CrossRef]
  5. Leoneti, A.B.; Gomes, L.F.A.M. A novel version of the TODIM method based on the exponential model of prospect theory: The ExpTODIM method. Eur. J. Op. Res. 2021. [Google Scholar] [CrossRef]
  6. Brans, J.P.; Vincke, P.; Mareschal, B. How to select and how to rank projects: The PROMETHEE method. Eur. J. Op. Res. 1986, 24, 228–238. [Google Scholar] [CrossRef]
  7. Julong, D. Introduction to grey system theory. J. Grey Syst. 1989, 1, 1–24. [Google Scholar]
  8. Keršuliene, V.; Zavadskas, E.K.; Turskis, Z. Selection of rational dispute resolution method by applying new step-wise weight assessment ratio analysis (Swara). J. Bus. Econ. Manag. 2010, 11, 243–258. [Google Scholar] [CrossRef]
  9. Wang, X.-K.; Wang, S.-H.; Zhang, H.-Y.; Wang, J.-Q.; Li, L. The recommendation method for hotel selection under traveller preference characteristics: A cloud-based multi-criteria group decision support model. Group Decis. Negot. 2021. [Google Scholar] [CrossRef]
  10. Hou, W.-H.; Wang, X.-K.; Zhang, H.-Y.; Wang, J.-Q.; Li, L. Safety risk assessment of metro construction under epistemic uncertainty: An integrated framework using credal networks and the EDAS method. Appl. Soft Comput. 2021, 108, 107436. [Google Scholar] [CrossRef]
  11. Mi, X.; Tang, M.; Liao, H.; Shen, W.; Lev, B. The state-of-the-art survey on integrations and applications of the best worst method in decision making: Why, what, what for and what’s next? Omega 2019, 87, 205–225. [Google Scholar] [CrossRef]
  12. Rezaei, J. Best-worst multi-criteria decision-making method. Omega 2015, 53, 49–57. [Google Scholar] [CrossRef]
  13. Badri Ahmadi, H.; Kusi-Sarpong, S.; Rezaei, J. Assessing the social sustainability of supply chains using Best Worst Method. Resour. Conserv. Recycl. 2017, 126, 99–106. [Google Scholar] [CrossRef]
  14. Rezaei, J. Best-worst multi-criteria decision-making method: Some properties and a linear model. Omega 2016, 64, 126–130. [Google Scholar] [CrossRef]
  15. Wang, X.K.; Zhang, H.Y.; Wang, J.Q.; Li, J.B.; Li, L. Extended TODIM-PROMETHEE II method with hesitant probabilistic information for solving potential risk evaluation problems of water resource carrying capacity. Expert Syst. 2021, e12681. [Google Scholar] [CrossRef]
  16. Tian, Z.-P.; Wang, J.-Q.; Zhang, H.-Y. An integrated approach for failure mode and effects analysis based on fuzzy best-worst, relative entropy, and VIKOR methods. Appl. Soft Comput. 2018, 72, 636–646. [Google Scholar] [CrossRef]
  17. Omrani, H.; Alizadeh, A.; Emrouznejad, A. Finding the optimal combination of power plants alternatives: A multi response Taguchi-neural network using TOPSIS and fuzzy best-worst method. J. Clean. Prod. 2018, 203, 210–223. [Google Scholar] [CrossRef] [Green Version]
  18. Xu, Y.; Zhu, X.; Wen, X.; Herrera-Viedma, E. Fuzzy best-worst method and its application in initial water rights allocation. Appl. Soft Comput. 2021, 101, 107007. [Google Scholar] [CrossRef]
  19. Dong, J.; Wan, S.; Chen, S.-M. Fuzzy best-worst method based on triangular fuzzy numbers for multi-criteria decision-making. Inf. Sci. 2021, 547, 1080–1104. [Google Scholar] [CrossRef]
  20. Mou, Q.; Xu, Z.; Liao, H. An intuitionistic fuzzy multiplicative best-worst method for multi-criteria group decision making. Inf. Sci. 2016, 374, 224–239. [Google Scholar] [CrossRef]
  21. Wu, Q.; Zhou, L.; Chen, Y.; Chen, H. An integrated approach to green supplier selection based on the interval type-2 fuzzy best-worst and extended VIKOR methods. Inf. Sci. 2019, 502, 394–417. [Google Scholar] [CrossRef]
  22. Mi, X.; Liao, H. An integrated approach to multiple criteria decision making based on the average solution and normalized weights of criteria deduced by the hesitant fuzzy best worst method. Comput. Ind. Eng. 2019, 133, 83–94. [Google Scholar] [CrossRef]
  23. Aboutorab, H.; Saberi, M.; Asadabadi, M.R.; Hussain, O.; Chang, E. ZBWM: The Z-number extension of Best Worst Method and its application for supplier development. Expert Syst. Appl. 2018, 107, 115–125. [Google Scholar] [CrossRef]
  24. Chen, Z.; Ming, X.; Zhou, T.; Chang, Y.; Sun, Z. A hybrid framework integrating rough-fuzzy best-worst method to identify and evaluate user activity-oriented service requirement for smart product service system. J. Clean. Prod. 2020, 253, 119954. [Google Scholar] [CrossRef]
  25. Pamučar, D.; Ecer, F.; Cirovic, G.; Arlasheedi, M.A. Application of improved Best Worst Method (BWM) in real-world problems. Mathematics 2020, 8, 1342. [Google Scholar] [CrossRef]
  26. Xiao, F. EFMCDM: Evidential fuzzy multicriteria decision making based on belief entropy. IEEE Trans. Fuzzy Syst. 2020, 28, 1477–1491. [Google Scholar] [CrossRef]
  27. Kabluchko, Z.; Prochno, J. The maximum entropy principle and volumetric properties of Orlicz balls. J. Math. Anal. Appl. 2021, 495, 124687. [Google Scholar] [CrossRef]
  28. Kheybari, S.; Kazemi, M.; Rezaei, J. Bioethanol facility location selection using best-worst method. Appl. Energy 2019, 242, 612–623. [Google Scholar] [CrossRef]
  29. Tian, Z.-P.; Zhang, H.-Y.; Wang, J.-Q.; Wang, T.-L. Green supplier selection using improved TOPSIS and best-worst method under intuitionistic fuzzy environment. Informatica 2018, 29, 773–800. [Google Scholar] [CrossRef] [Green Version]
  30. Nie, R.-X.; Tian, Z.-P.; Wang, J.-Q.; Zhang, H.-Y.; Wang, T.-L. Water security sustainability evaluation: Applying a multistage decision support framework in industrial region. J. Clean. Prod. 2018, 196, 1681–1704. [Google Scholar] [CrossRef]
  31. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  32. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620. [Google Scholar] [CrossRef]
  33. Jaynes, E.T. Information theory and statistical mechanics. II. Phys. Rev. 1957, 108, 171. [Google Scholar] [CrossRef]
  34. Elith, J.; Yates, C.J. A statistical explanation of MaxEnt for ecologists. Divers. Distrib. 2015, 17, 43–57. [Google Scholar] [CrossRef]
  35. Zhang, K.; Yao, L.; Meng, J.; Tao, J. Maxent modeling for predicting the potential geographical distribution of two peony species under climate change. Sci. Total Environ. 2018, 634, 1326–1334. [Google Scholar] [CrossRef] [PubMed]
  36. Boyd, S.; Vandenberghe, L. Convex optimization. IEEE Trans. Autom. Control 2006, 51, 1859. [Google Scholar]
  37. Nesterov, Y.E.; Todd, M.J. Self-scaled barriers and interior-point methods for convex programming. Math. Op. Res. 1997, 22, 1–45. [Google Scholar] [CrossRef] [Green Version]
  38. Kiwiel, K.C. Proximal level bundle methods for convex nondifferentiable optimization, saddle-point problems and variational inequalities. Math. Program. 1995, 69, 89–109. [Google Scholar] [CrossRef]
  39. Ben-Tal, A.; Nemirovski, A. Robust convex optimization. Math. Op. Res. 1998, 23, 769–805. [Google Scholar] [CrossRef] [Green Version]
  40. Saaty, T.L.; Vargas, L.G. Models, methods, concepts & applications of the analytic hierarchy process. International 2017, 7, 159–172. [Google Scholar]
Figure 1. The process of BW-MaxEnt.
Figure 2. Weights according to BW-MaxEnt over μ.
Figure 3. CR over μ.
Figure 4. Evaluation scores for the GPU workstations.
Table 1. Consistency index table.

a_BW               1      2      3      4      5      6      7      8      9
Consistency index  0.00   0.44   1.00   1.63   2.30   3.00   3.73   4.47   5.23
Table 2. The vectors of comparison results.

BO/OW   Best/Worst   C1   C2   C3   C4
BO      C1           1    3    8    2
OW      C3           8    3    1    4
Table 3. The vectors of comparison results.

BO/OW   Best/Worst   C1   C2   C3   C4
BO      C1           1    3    8    2
OW      C3           8    3    1    *

where * represents unknown information.
Table 4. The vectors of comparison results.

BO/OW   Best/Worst   C1   C2   C3   C4
BO      C1           1    3    8    *
OW      C3           8    3    1    *

where * represents unknown information.
Table 5. Different values of w1*, w2*, w3*, w4*, ξ*, and CR over μ.

μ      w1*      w2*      w3*      w4*      ξ*       CR
0.0    0.2500   0.2500   0.2500   0.2500   7.0000   1.57
0.1    0.3380   0.2706   0.1014   0.2898   4.6684   1.04
0.2    0.4341   0.1883   0.0706   0.3069   1.8523   0.41
0.3    0.4669   0.1624   0.0609   0.3096   0.3333   0.07
0.4    0.4655   0.1601   0.0607   0.3117   0.3622   0.08
0.5    0.4690   0.1605   0.0572   0.3135   0.2000   0.04
0.6    0.4673   0.1595   0.0570   0.3160   0.2000   0.04
0.7    0.4645   0.1586   0.0567   0.3201   0.2000   0.04
0.8    0.4587   0.1566   0.0559   0.3286   0.2000   0.04
0.9    0.4410   0.1505   0.0538   0.3545   0.2000   0.04
1.0    0        0        0        1        0        0
Table 6. Configuration information of the GPU workstations.

                   Leadtek W2030 (LT)                       Stend IW 4213 (SD)
CPU                2*Intel Xeon Silver 4114                 2*Intel Xeon Gold 6136
GPU                2*NVIDIA RTX 2080TI                      4*NVIDIA RTX 2080TI
Price              54,050¥                                  99,400¥
Hard Drive         Seagate 4 TB                             Seagate 4 TB
SSD                480 G                                    1.92 T
Memory             4*Samsung 32 GB 2666 MHz                 6*Samsung 32 GB 2666 MHz
Warranty           3 years                                  2 years
Operating System   Ubuntu 16.04                             Windows/Ubuntu/CentOS
Power Supply       2200 W with a redundant power supply     2000 W

where * is the sign of multiplication.
Table 7. BO vectors and OW vectors for the three respondents.

           Respondent 1     Respondent 2     Respondent 3
           BO     OW        BO     OW        BO     OW
CPU        2      5         3      3         2      4
GPU        1      9         2      4         1      9
Price      2      4         1      8         4      2
HD         3      3         4      2         3      3
SSD        3      3         8      1         3      3
Memory     2      4         *      *         2      4
Warranty   9      1         3      3         9      1
OS         9      1         *      *         9      1
PS         4      2         *      *         *      *

where * represents unknown information.
Table 8. Evaluation scores for GPU workstation configurations.

           Respondent 1     Respondent 2     Respondent 3
           LT     SD        LT     SD        LT     SD
CPU        6      7         5      8         7      8
GPU        4      8         5      10        6      10
Price      3      5         6      2         5      4
HD         5      5         6      6         6      6
SSD        4      6         3      7         2      6
Memory     5      6         4      6         5      7
Warranty   7      5         8      3         7      6
OS         5      5         4      8         5      6
PS         7      2         7      5         8      4
Table 9. Optimal weights of the nine criteria.

           Respondent 1   Respondent 2   Respondent 3
CPU        0.1417         0.0827         0.1305
GPU        0.2733         0.1240         0.2710
Price      0.1316         0.2378         0.0703
HD         0.0945         0.0620         0.0936
SSD        0.0945         0.0310         0.0936
Memory     0.1316         0.1265         0.1405
Warranty   0.0304         0.0827         0.0312
OS         0.0315         0.1265         0.0301
PS         0.0708         0.1265         0.1348
Table 10. Indicators to evaluate the outputs of the model.

Respondent      ξ*       CR
Respondent 1    0.3388   0.0648
Respondent 2    0.3323   0.0743
Respondent 3    0.3355   0.0642