Article

A MAGDM Algorithm with Multi-Granular Probabilistic Linguistic Information

1 School of Management, Hefei University of Technology, Hefei 230009, China
2 School of Mathematics and Physics, Anhui Jianzhu University, Hefei 230601, China
Symmetry 2019, 11(2), 127; https://doi.org/10.3390/sym11020127
Submission received: 5 December 2018 / Revised: 15 January 2019 / Accepted: 21 January 2019 / Published: 22 January 2019
(This article belongs to the Special Issue Multi-Criteria Decision Aid methods in fuzzy decision problems)

Abstract

The traditional multi-attribute group decision making (MAGDM) method needs to be improved to integrate assessment information under multi-granular probabilistic linguistic environments. Some novel distance measures between two multi-granular probabilistic linguistic term sets (PLTSs) are proposed, and the distance measures are proved to be reasonable. To calculate the weights of the alternative attributes, an extended cross-entropy method for multi-granular probabilistic linguistic term sets is proposed. Then, a novel extended MAGDM algorithm based on prospect theory (PT) is proposed. Two case studies of decision making (DM) on purchasing a car are provided to illustrate the application of the extended MAGDM algorithm. The case analyses illustrate the novelty, feasibility, and applicability of the proposed MAGDM algorithm by comparing it with three other algorithms based on TOPSIS, VIKOR, and Pang Qi et al.'s method. The analysis results demonstrate that the proposed algorithm based on PT is superior.

1. Introduction

MAGDM, which aims at finding the optimal alternative under multiple attributes, is an active research topic [1]. MAGDM is widely applied in the real world, such as in enterprise strategy planning [2], in choosing appropriate hospitals [3], in quality assessment [4], in the selection of investment strategies [5], and so on. Integrating decision makers' (DMs') preference information on different attributes is an important prerequisite [6]. Since Zadeh first proposed the fuzzy set as the basic fuzzy decision-making model in 1965 [7], more and more authors have focused on fuzzy MAGDM. In practical DM problems, the DMs express their preferences on the considered alternatives by linguistic terms, such as "bad", "medium", or "good", and then make the optimal decision by some appropriate DM method.
In practice, the same object may be assessed on different platforms, or described on the same platform, with fuzzy linguistic information of different granularity. The traditional method of group decision making (GDM), which works only with linguistic information of a single granularity, cannot integrate such hybrid assessment information. Therefore, multi-granular linguistic term sets need to be described efficiently. In this paper, basic distance measures for probabilistic linguistic term sets (PLTSs) with multi-granular probabilistic linguistic information are proposed first. Based on these distance measures, a novel MAGDM algorithm based on prospect theory (PT) is given. Then, two practical case studies on purchasing a car are employed to illustrate the application of the extended algorithm based on PT, and three other algorithms, based on TOPSIS, VIKOR, and Pang Qi et al.'s method, are used for comparison.
Until now, there has been a lot of research on linguistic decision making (LDM) [8]. For example, Herrera and Verdegay [9] proposed linguistic assessments in GDM in 1993. Then, Herrera et al. [10,11] developed GDM models in linguistic settings. Later, Xu proposed a goal programming model for multi-attribute decision making (MADM) under a linguistic environment [12]. Ben-Arieh and Chen [13] studied the aggregation of opinions and consensus measures in linguistic GDM. Xu [14] gave a method based on the uncertain linguistic ordered weighted geometric (LOWG) and induced uncertain LOWG operators for GDM with uncertain multiplicative linguistic preference relations. Liu et al. [15] gave a MAGDM approach based on a prioritized aggregation operator under hesitant intuitionistic fuzzy linguistic environments. Liao et al. [16] proposed another MAGDM approach based on two intuitionistic multiplicative distance measures.
However, DMs express their preferences inaccurately in a linguistic environment because of the fuzziness and uncertainty of human thinking, and they may hesitate between several possible linguistic terms. Rodriguez et al. [17] therefore introduced hesitant fuzzy linguistic term sets (HFLTSs), based on hesitant fuzzy sets (HFSs) [18] and linguistic term sets (LTSs) [19], which allow a DM to give several possible values for a linguistic variable. In most of the current research on HFLTSs, the DMs give all possible values with equal importance or weight. Obviously, this differs from the real world. In both individual DM and GDM problems, the DMs can prefer some of the possible linguistic terms, leading to a set of possible values with different importance degrees. The assessment information then includes both several possible linguistic terms and the associated probabilistic information. This information can be described as a probabilistic distribution [20,21,22], importance degree [23], belief degree [24,25], and so on. Ignoring this information may lead to erroneous decision results, whereas probabilistic linguistic term sets (PLTSs) capture the DMs' preference information accurately. Wang and Hao [26] proposed a proportional 2-tuple linguistic representation model. Zhang et al. [22] and Wu and Xu [21] improved the model under a general form of probabilistic distributions. Moreover, Yang and Xu [25] handled partial ignorance within the framework of evidential reasoning. Therefore, many research results on PLTSs in MAGDM have been proposed [27,28]. Xu and Zhou [29,30] studied consensus building with a group of DMs under the hesitant probabilistic fuzzy environment. Kobina et al. [31] developed probabilistic linguistic power aggregation operators for multi-criteria group decision making (MCGDM).
Some methods have been proposed for GDM with multi-granular linguistic information [32,33]. However, there is little research on MAGDM with multi-granular probabilistic linguistic information. Then, how do we measure the distance between two PLTSs? Although there has been much research on distance measures, such as for fuzzy sets [6], interval-valued fuzzy sets [34], intuitionistic fuzzy sets [35], interval-valued intuitionistic fuzzy sets [36], hesitant fuzzy sets [18], interval-valued hesitant fuzzy sets [37], hesitant fuzzy linguistic term sets [17], and so on, there is still little research on distance measures for PLTSs with multi-granular linguistic information. In order to solve these problems, distance measures of PLTSs with multi-granular linguistic information are proposed. Based on the feasibility of prospect theory, many scholars use prospect theory (PT) to solve practical problems. For example, Wang et al. [38] proposed a GDM method based on PT for emergency situations. Yao et al. [39] solved the GDM problem for the green supply chain. In this paper, a novel algorithm for MAGDM based on PT is proposed.

2. Preliminaries

DMs can use LTSs to express their preferences on the considered objects. The additive LTS is used most widely, which is defined as follows [40]:
$$S = \{ S_\alpha \mid \alpha = 0, 1, \ldots, g-1 \},$$
where $S$ is a $g$-granular fuzzy linguistic set, $S_\alpha$ is a linguistic variable with $S_0$ and $S_{g-1}$ denoting the lower and upper limits of the linguistic terms, and $g$ is a positive integer. The linguistic terms $S_\alpha$ have the following characteristics: the set is ordered, i.e., if $\alpha > \beta$, then $S_\alpha > S_\beta$; and the negation operator is defined by $\mathrm{neg}(S_\alpha) = S_\beta$, where $\alpha + \beta = g - 1$.
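As a quick illustration (a minimal sketch, not code from the paper), the negation operator on a 5-granular additive LTS can be written as follows, with each term represented by its subscript:

```python
g = 5  # granularity: the terms are S_0 .. S_{g-1}

def neg(alpha: int) -> int:
    # Negation operator: neg(S_alpha) = S_beta with alpha + beta = g - 1.
    return g - 1 - alpha

print(neg(0), neg(2), neg(4))  # 4 2 0: S_0 <-> S_4, and S_2 is self-negating
```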
Because the DMs may hesitate among several possible values in DM, Rodriguez et al. [17] proposed the following definition of HFLTSs.
Definition 1.
Let $S = \{S_0, S_1, \ldots, S_{g-1}\}$ be an LTS; then, an HFLTS $b_S$ is an ordered finite subset of the consecutive linguistic terms of $S$ [41].
Definition 2.
Let $S = \{S_0, S_1, \ldots, S_{g-1}\}$ be an LTS. A PLTS can be defined as
$$L(P) = \left\{ L^{(k)}(P^{(k)}) \,\middle|\, L^{(k)} \in S,\ P^{(k)} \ge 0,\ k = 1, 2, \ldots, \#L(P),\ \sum_{k=1}^{\#L(P)} P^{(k)} \le 1 \right\}, \quad (1)$$
where $L^{(k)}(P^{(k)})$ is the linguistic term $L^{(k)}$ associated with the probability $P^{(k)}$, and $\#L(P)$ is the number of all the different linguistic terms in $L(P)$ [27].
If $\sum_{k=1}^{\#L(P)} P^{(k)} = 1$, then we have complete information on the probabilistic distribution of all the possible linguistic terms; if $\sum_{k=1}^{\#L(P)} P^{(k)} < 1$, then partial ignorance exists because the current assessment information is insufficient. In particular, $\sum_{k=1}^{\#L(P)} P^{(k)} = 0$ means complete ignorance. Therefore, handling the ignorance in $L(P)$ is crucial for the application of PLTSs.
Definition 3.
Given a PLTS $L(P)$ with $\sum_{k=1}^{\#L(P)} P^{(k)} < 1$, the associated PLTS $\dot L(P)$ is defined by
$$\dot L(P) = \{ L^{(k)}(\dot P^{(k)}) \mid k = 1, 2, \ldots, \#L(P) \}, \quad (2)$$
where $\dot P^{(k)} = P^{(k)} / \sum_{k=1}^{\#L(P)} P^{(k)}$ for all $k = 1, 2, \ldots, \#L(P)$ [27].
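For illustration, a minimal sketch of Definition 3 follows; representing a PLTS as a list of (subscript, probability) pairs is an assumption made here, not the paper's notation:

```python
def renormalize(plts):
    # Definition 3: rescale the probabilities of a PLTS whose probabilities
    # sum to less than 1 (partial ignorance).
    total = sum(p for _, p in plts)
    if total == 0:
        raise ValueError("complete ignorance: probabilities sum to 0")
    return [(r, p / total) for r, p in plts]

# {S_2(0.3), S_4(0.5)} with 0.2 of the probability mass missing:
print(renormalize([(2, 0.3), (4, 0.5)]))  # [(2, 0.375), (4, 0.625)]
```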
The numbers of linguistic terms in different PLTSs are usually unequal for a DM. Therefore, linguistic terms need to be added to the PLTSs with relatively few terms, so that all the PLTSs contain the same number of linguistic terms.

3. Main Results

3.1. Definitions of Multi-Granular Probabilistic Linguistic Term Sets

Inspired by Reference [27], some definitions of multi-granular probabilistic linguistic term sets are proposed as follows.
Definition 4.
Let $S = \{S_0, S_1, \ldots, S_{g-1}\}$ be a $g$-granular LTS and $S' = \{S'_0, S'_1, \ldots, S'_{g'-1}\}$ be a $g'$-granular LTS. Let $L_1(P)$ and $L_2(P)$ be two PLTSs of different granularity on the attribute set $X = \{x_1, x_2, \ldots, x_n\}$; multi-granular PLTSs can be defined as
$$L_1(P) = \left\{ L_1^{(k_1)}(P_1^{(k_1)}) \,\middle|\, L_1^{(k_1)} \in S,\ P_1^{(k_1)} \ge 0,\ k_1 = 1, 2, \ldots, \#L_1(P),\ \sum_{k_1=1}^{\#L_1(P)} P_1^{(k_1)} \le 1 \right\}, \quad (3)$$
$$L_2(P) = \left\{ L_2^{(k_2)}(P_2^{(k_2)}) \,\middle|\, L_2^{(k_2)} \in S',\ P_2^{(k_2)} \ge 0,\ k_2 = 1, 2, \ldots, \#L_2(P),\ \sum_{k_2=1}^{\#L_2(P)} P_2^{(k_2)} \le 1 \right\}, \quad (4)$$
where $L_1^{(k_1)}(P_1^{(k_1)})$ is the linguistic term $L_1^{(k_1)}$ associated with the probability $P_1^{(k_1)}$, and $L_2^{(k_2)}(P_2^{(k_2)})$ is the linguistic term $L_2^{(k_2)}$ associated with the probability $P_2^{(k_2)}$.
The numbers of elements of $L_1(P)$ and $L_2(P)$ are denoted by $\#L_1(P)$ and $\#L_2(P)$, respectively. If $\#L_1(P) > \#L_2(P)$, then $\#L_1(P) - \#L_2(P)$ linguistic terms are added to $L_2(P)$, so that the numbers of elements of $L_1(P)$ and $L_2(P)$ become equal. The added linguistic terms are the smallest ones in $L_2(P)$, and the probabilities of all the added linguistic terms are zero.
Definition 5.
Let $L_1(P)$ and $L_2(P)$ be two multi-granular PLTSs; then, the normalization process is as follows:
(1) 
If $\sum_{k_i=1}^{\#L_i(P)} P_i^{(k_i)} < 1$, then we calculate $\dot L_i(P)$, $i = 1, 2$, by Equation (2).
(2) 
If $\#L_1(P) \ne \#L_2(P)$, then by Definition 4, we add some elements to the set with the smaller number of elements.
The PLTSs obtained by Definition 5 are called the normalized PLTSs. For convenience, the normalized PLTSs are still denoted by $L_1(P)$ and $L_2(P)$.
Because the positions of elements in a PLTS are arbitrary, we first need to obtain the ordered PLTSs, so that operational results on PLTSs are determined uniquely.
Definition 6.
Let $S = \{S_0, S_1, \ldots, S_{g-1}\}$ be a $g$-granular LTS. Given a PLTS $L(P) = \{ L^{(k)}(P^{(k)}) \mid L^{(k)} \in S,\ \sum_{k=1}^{\#L(P)} P^{(k)} \le 1,\ k = 1, 2, \ldots, \#L(P) \}$, let $r^{(k)}(L^{(k)})$ be the subscript of the linguistic term $L^{(k)}$. $L(P)$ is called an ordered multi-granular PLTS if the linguistic terms $L^{(k)}(P^{(k)})$ $(k = 1, 2, \ldots, \#L(P))$ are arranged in descending order of the values $\frac{r^{(k)}(L^{(k)})}{g} \times P^{(k)}$ $(k = 1, 2, \ldots, \#L(P))$.
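To make Definitions 4–6 concrete, here is a minimal sketch (the (subscript, probability) representation and the helper names are assumptions): pad the shorter PLTS with its smallest term at zero probability, then order each PLTS by the score $(r/g) \times P$ in descending order.

```python
def pad(pl_a, pl_b):
    # Definition 4: equalize lengths by appending the smallest term of the
    # shorter PLTS with probability zero.
    short, long_ = (pl_a, pl_b) if len(pl_a) < len(pl_b) else (pl_b, pl_a)
    smallest = min(r for r, _ in short)
    short = short + [(smallest, 0.0)] * (len(long_) - len(short))
    return (short, long_) if len(pl_a) < len(pl_b) else (long_, short)

def order(plts, g):
    # Definition 6: sort by (subscript / granularity) * probability, descending.
    return sorted(plts, key=lambda rp: (rp[0] / g) * rp[1], reverse=True)

l1, l2 = pad([(4, 0.5), (3, 0.5)], [(6, 1.0)])  # 5-granular vs. 7-granular
print(order(l1, 5), order(l2, 7))
```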

3.2. Distance Measures between Multi-Granular PLTSs

Based on the normalized Hamming and Euclidean distance measures, the distance measures are extended, and generalized distance measures between two multi-granular PLTSs in discrete cases are proposed as follows.
Definition 7.
Let $L_1(P)$ and $L_2(P)$ be two PLTSs; then, the distance measure between them, denoted $d(L_1(P), L_2(P))$ [27], satisfies the following three conditions:
(1) 
$0 \le d(L_1(P), L_2(P)) \le 1$;
(2) 
$d(L_1(P), L_2(P)) = 0$ if and only if $L_1(P) = L_2(P)$; and
(3) 
$d(L_1(P), L_2(P)) = d(L_2(P), L_1(P))$.
If $L_1(P)$ and $L_2(P)$ are normalized ordered PLTSs as in Definition 5 and Definition 6, then the distance measures between two multi-granular PLTSs are defined as follows.
Definition 8.
Let $L_1^{(k)}(P_1^{(k)}) \in L_1(P)$ and $L_2^{(k)}(P_2^{(k)}) \in L_2(P)$ be two probabilistic linguistic term elements (PLTEs) as in Definition 4; then, the distance measure between them is defined as
$$d\left(L_1^{(k)}(P_1^{(k)}), L_2^{(k)}(P_2^{(k)})\right) = \left| \frac{r(L_1^{(k)})}{g} \times P_1^{(k)} - \frac{r(L_2^{(k)})}{g'} \times P_2^{(k)} \right|, \quad (5)$$
where $r(L_1^{(k)})$ is the subscript of the linguistic term $L_1^{(k)}$ and $r(L_2^{(k)})$ is the subscript of the linguistic term $L_2^{(k)}$.
Obviously, Definition 8 satisfies the three conditions of the distance definition as in Definition 7.
Definition 9.
Let $L_1^{(k)}(P_1^{(k)}) \in L_1(P)$ and $L_2^{(k)}(P_2^{(k)}) \in L_2(P)$ be two PLTEs on the attribute set $X = \{x_1, x_2, \ldots, x_n\}$, where $x_j$ is the $j$th attribute of the alternatives, $j = 1, 2, \ldots, n$; then, the generalized Hamming distance measure between $L_1(P)$ and $L_2(P)$ is defined as
$$d_{hd}(L_1(P), L_2(P)) = \frac{1}{L} \sum_{k=1}^{L} d\left(L_1^{(k)}(P_1^{(k)}), L_2^{(k)}(P_2^{(k)})\right), \quad (6)$$
where $\#L_1(P) = \#L_2(P) = L$.
The generalized Euclidean distance between $L_1(P)$ and $L_2(P)$ can be given as
$$d_{ed}(L_1(P), L_2(P)) = \left[ \frac{1}{L} \sum_{k=1}^{L} \left( d\left(L_1^{(k)}(P_1^{(k)}), L_2^{(k)}(P_2^{(k)})\right) \right)^2 \right]^{1/2}. \quad (7)$$
Inspired by the generalized idea proposed by Yager [42], this paper gives the generalized distance as
$$d_{gd}(L_1(P), L_2(P)) = \left[ \frac{1}{L} \sum_{k=1}^{L} \left( d\left(L_1^{(k)}(P_1^{(k)}), L_2^{(k)}(P_2^{(k)})\right) \right)^\lambda \right]^{1/\lambda}, \quad (8)$$
where $\lambda > 0$.
In particular, if $\lambda = 1$, the generalized distance reduces to the generalized Hamming distance; if $\lambda = 2$, it reduces to the generalized Euclidean distance. Thus, Definition 9 extends the normalized Hamming and Euclidean distances.
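As a minimal sketch of Definitions 8 and 9 (the representation and helper names are assumptions; each subscript is divided by its own granularity, as in Definition 8), the element distance and the generalized distance with parameter $\lambda$ can be computed as follows:

```python
def elem_dist(e1, e2, g1, g2):
    # Definition 8: |r1/g1 * p1 - r2/g2 * p2| between two PLTEs.
    (r1, p1), (r2, p2) = e1, e2
    return abs((r1 / g1) * p1 - (r2 / g2) * p2)

def gen_dist(l1, l2, g1, g2, lam=1.0):
    # Definition 9: lam = 1 gives the Hamming form, lam = 2 the Euclidean form.
    assert len(l1) == len(l2), "normalize the PLTSs first (Definition 5)"
    n = len(l1)
    s = sum(elem_dist(a, b, g1, g2) ** lam for a, b in zip(l1, l2))
    return (s / n) ** (1.0 / lam)

l1 = [(4, 0.6), (3, 0.4)]  # on a 5-granular LTS
l2 = [(6, 0.7), (5, 0.3)]  # on a 7-granular LTS
print(gen_dist(l1, l2, 5, 7, lam=1), gen_dist(l1, l2, 5, 7, lam=2))
```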

3.3. A MAGDM Algorithm Based on PT

A MAGDM problem with multi-granular probabilistic linguistic information is described as follows.
There are a set of $m$ alternatives, $A = \{A_1, A_2, \ldots, A_m\}$, and a weight vector $w = (w_1, w_2, \ldots, w_n)^T$ of $n$ attributes $X = (x_1, x_2, \ldots, x_n)^T$, where $x_j$ is the $j$th attribute of the alternatives, $0 \le w_j \le 1$, and $\sum_{j=1}^{n} w_j = 1$. The DMs assess the $m$ alternatives on the $n$ attributes by utilizing linguistic term sets to obtain a set of linguistic decision matrices.
Then, the assessment linguistic information is used to make up a multi-granular probabilistic linguistic decision matrix as follows:
$$R = [L_{ij}(P)]_{m \times n} = \begin{bmatrix} L_{11}(P) & L_{12}(P) & \cdots & L_{1n}(P) \\ L_{21}(P) & L_{22}(P) & \cdots & L_{2n}(P) \\ \vdots & \vdots & \ddots & \vdots \\ L_{m1}(P) & L_{m2}(P) & \cdots & L_{mn}(P) \end{bmatrix}, \quad (9)$$
where $L_{ij}(P) = \{ L_{ij}^{(k_{ij})}(P_{ij}^{(k_{ij})}) \mid L_{ij}^{(k_{ij})} \in S_i,\ P_{ij}^{(k_{ij})} \ge 0,\ \sum_{k_{ij}=1}^{\#L_{ij}(P)} P_{ij}^{(k_{ij})} \le 1,\ k_{ij} = 1, 2, \ldots, \#L_{ij}(P) \}$ is a multi-granular PLTS denoting the degree of the alternative $A_i$ on the attribute $x_j$, $S_i = \{S_0, S_1, \ldots, S_{g_i-1}\}$ is a $g_i$-granular fuzzy linguistic set, and $r_{ij}^{(k_{ij})}$ is the subscript of the linguistic term $L_{ij}^{(k_{ij})}(P_{ij}^{(k_{ij})})$, which is associated with the probability $P_{ij}^{(k_{ij})}$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$.
Since the numbers of probabilistic linguistic elements of the PLTSs in $R$ usually differ, the PLTSs should be normalized by Definition 5. In what follows, suppose each $L_{ij}(P)$ is an ordered PLTS as in Definition 6.
In MAGDM problems, the attributes can be classified into two types: benefits and costs. The higher a benefit attribute value is, the better the situation is, while a cost attribute is the reverse [43]. In this paper, we suppose all attributes are benefit attributes.
Definition 10.
Prospect theory [44]: $\Delta x$ denotes the gain ($\Delta x \ge 0$) or the loss ($\Delta x < 0$) of the outcome relative to the reference point (RP). The prospect value function $V(\Delta x)$ is given by
$$V(\Delta x) = \begin{cases} (\Delta x)^\alpha, & \Delta x \ge 0; \\ -\theta (-\Delta x)^\beta, & \Delta x < 0, \end{cases} \quad (10)$$
where $\alpha$ is a parameter that represents the decision maker's sensitivity degree to gains, $\beta$ is a parameter that represents the decision maker's sensitivity degree to losses, $0 \le \alpha, \beta \le 1$, and $\theta$ ($\theta > 1$) is a parameter that represents the decision maker's degree of loss aversion.
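A minimal sketch of Equation (10), using the parameter values adopted later in the paper ($\theta = 2.25$, $\alpha = \beta = 0.88$):

```python
def prospect_value(dx, alpha=0.88, beta=0.88, theta=2.25):
    if dx >= 0:                       # gain relative to the reference point
        return dx ** alpha
    return -theta * (-dx) ** beta     # loss: steeper, scaled by theta

print(prospect_value(0.1), prospect_value(-0.1))  # gain vs. loss asymmetry
```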
Inspired by the definition of RP, the theory of TOPSIS is extended as follows.
Definition 11.
The generalized prospect value function $V(L_1)$ based on TOPSIS is given by
$$V(L_1) = \begin{cases} \left( d(L_1(P), L_2(P)) \right)^\alpha, & L_1(P) \ge L_2(P); \\ -\theta \left( d(L_1(P), L_2(P)) \right)^\beta, & L_1(P) < L_2(P), \end{cases} \quad (11)$$
where L 1 ( P ) and L 2   ( P ) are normalized ordered multi-granular PLTSs.
Under Definition 11, a novel MAGDM algorithm based on PT is given as follows; its diagram is shown in Figure 1.
The specific steps of the algorithm are as follows.
Step 1. Information gathering process: individual reference points (RPs) over the alternatives on different attributes provided by the experts are gathered as $R = [L_{ij}(P)]_{m \times n}$ by Equation (9).
Inspired by Reference [45], the extended cross-entropy method is proposed to calculate the attributes' weights.
Step 2. Compute the weight vector $w = (w_1, w_2, \ldots, w_n)^T$ of the $n$ attributes $X = (x_1, x_2, \ldots, x_n)^T$ by Equation (12):
$$E_j = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2 L_j T} \sum_{k_{ij}=1}^{L_j} \left( \frac{\left(1 + q r_{ij}^{(k_{ij})}\right) \ln\left(1 + q r_{ij}^{(k_{ij})}\right) + \left(1 + q\left(1 - r_{(L_j-i+1)j}^{(k_{ij})}\right)\right) \ln\left(1 + q\left(1 - r_{(L_j-i+1)j}^{(k_{ij})}\right)\right)}{2} - \frac{2 + q r_{ij}^{(k_{ij})} + q\left(1 - r_{(L_j-i+1)j}^{(k_{ij})}\right)}{2} \ln \frac{2 + q r_{ij}^{(k_{ij})} + q\left(1 - r_{(L_j-i+1)j}^{(k_{ij})}\right)}{2} \right),$$
$$w_j = \frac{1 - E_j}{n - \sum_{j=1}^{n} E_j}, \quad i = 1, 2, \ldots, m, \; j = 1, 2, \ldots, n, \; L_j = \#L_{ij}(P), \quad (12)$$
where $T = (1+q)\ln(1+q) - (2+q)(\ln(2+q) - \ln 2)$, $q > 0$, $x_j$ is the $j$th attribute of the alternatives, $0 \le w_j \le 1$, and $\sum_{j=1}^{n} w_j = 1$. In this paper, let $q = 2$ [46].
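The following is a simplified, hedged sketch of this weighting step, not the paper's exact implementation: Equation (12) pairs each assessment with a reversed counterpart $r_{(L_j-i+1)j}$, whereas this sketch pairs each normalized score $x$ with its complement $1 - x$ to stay self-contained; the entropy-to-weight step $w_j = (1 - E_j)/(n - \sum_j E_j)$ follows the formula above.

```python
import math

def cross_entropy_term(x, y, q=2.0):
    # Symmetric cross-entropy between two values in [0, 1], following the
    # form of Equation (12); T normalizes the term.
    T = (1 + q) * math.log(1 + q) - (2 + q) * (math.log(2 + q) - math.log(2))
    a = ((1 + q * x) * math.log(1 + q * x)
         + (1 + q * y) * math.log(1 + q * y)) / 2
    b = ((2 + q * x + q * y) / 2) * math.log((2 + q * x + q * y) / 2)
    return (a - b) / T

def weights(score, q=2.0):
    # score[i][j]: normalized assessment of alternative i on attribute j.
    # Pairing x with 1 - x is an assumption of this sketch.
    m, n = len(score), len(score[0])
    E = [sum(cross_entropy_term(score[i][j], 1 - score[i][j], q)
             for i in range(m)) / m for j in range(n)]
    return [(1 - E[j]) / (n - sum(E)) for j in range(n)]

scores = [[0.9, 0.4], [0.7, 0.5], [0.8, 0.3]]  # 3 alternatives, 2 attributes
print(weights(scores))  # the two weights sum to 1
```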
Step 3. Aggregation process: obtain the weighted DM matrix $R^*$ by Equation (13):
$$R^* = [L^*_{ij}(P)]_{m \times n} = \begin{bmatrix} L^*_{11}(P) & L^*_{12}(P) & \cdots & L^*_{1n}(P) \\ L^*_{21}(P) & L^*_{22}(P) & \cdots & L^*_{2n}(P) \\ \vdots & \vdots & \ddots & \vdots \\ L^*_{m1}(P) & L^*_{m2}(P) & \cdots & L^*_{mn}(P) \end{bmatrix}, \quad (13)$$
where $L^*_{ij}(P) = \frac{r_{ij}^{(k_{ij})}(L_{ij}^{(k_{ij})})}{g_i} \times P_{ij}^{(k_{ij})} \times w_j$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$.
Step 4. Calculate the positive ideal solution and the negative ideal solution respectively.
Then, the probabilistic linguistic positive ideal solution (PLPIS) $L^+$ and the probabilistic linguistic negative ideal solution (PLNIS) $L^-$ are defined as follows.
The PLPIS of the alternatives is
$$L^+ = (L_1(P)^+, L_2(P)^+, \ldots, L_n(P)^+), \quad (14)$$
where $L_j(P)^+ = L_\Delta$, $\Delta = \max_{i,j,k} \left\{ \frac{r_{ij}^{(k_{ij})}(L_{ij}^{(k_{ij})})}{g_i} \times P_{ij}^{(k_{ij})} \right\}$.
The PLNIS of the alternatives is
$$L^- = (L_1(P)^-, L_2(P)^-, \ldots, L_n(P)^-), \quad (15)$$
where $L_j(P)^- = L_\nabla$, $\nabla = \min_{i,j,k} \left\{ \frac{r_{ij}^{(k_{ij})}(L_{ij}^{(k_{ij})})}{g_i} \times P_{ij}^{(k_{ij})} \right\}$.
In order to select a preferred alternative or to rank all the alternatives, we should compute the distance between A i and L + and the distance between A i and L . Certainly, a better A i should be closer to L + and also farther from L .
Step 5. Calculate the gains value and the losses value: gains and losses are calculated with respect to the group reference points of the different alternatives.
Then, the prospect values are
$$V^-(d(L_{ij}, L_j^+)) = -\theta \left( d(L_{ij}, L_j^+) \right)^\beta, \quad (16)$$
$$V^+(d(L_{ij}, L_j^-)) = \left( d(L_{ij}, L_j^-) \right)^\alpha, \quad (17)$$
where $\theta = 2.25$, $\alpha = 0.88$, and $\beta = 0.88$ [47].
Step 6. Calculate the ratio between the gains value and the losses value of each alternative:
$$C_i = \frac{\left| \sum_{j=1}^{n} V^+(d(L_{ij}, L_j^-)) \right|}{\left| \sum_{j=1}^{n} V^-(d(L_{ij}, L_j^+)) \right|}, \quad i = 1, 2, \ldots, m, \; j = 1, 2, \ldots, n. \quad (18)$$
Rank the alternatives by the values $C_i$ of $A_i$. Certainly, the bigger the ratio $C_i$ is, the better the alternative is.
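As an end-to-end illustration of Steps 4–6, here is a minimal sketch under two simplifying assumptions: each weighted cell is reduced to a single scalar score (Tables 7–9 actually carry one value per linguistic element), and the ideal solutions are taken element-wise over the alternatives, consistent with Tables 8 and 9.

```python
def rank_by_prospect(score, alpha=0.88, beta=0.88, theta=2.25):
    # score[i][j]: scalar score of alternative i on attribute j.
    m, n = len(score), len(score[0])
    pos = [max(score[i][j] for i in range(m)) for j in range(n)]  # PLPIS
    neg = [min(score[i][j] for i in range(m)) for j in range(n)]  # PLNIS
    C = []
    for i in range(m):
        gains = sum((score[i][j] - neg[j]) ** alpha for j in range(n))
        losses = sum(theta * (pos[j] - score[i][j]) ** beta for j in range(n))
        C.append(gains / losses)  # ratio of gains to losses, Equation (18)
    return sorted(range(m), key=lambda i: C[i], reverse=True), C

order, C = rank_by_prospect([[0.05, 0.03], [0.02, 0.06], [0.04, 0.05]])
print(order, C)  # alternative indices, best first
```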

4. Case Studies

With the popularity of the Internet, e-commerce has become an indispensable part of daily life. For example, if people want to purchase a car, they look up all kinds of information about cars on the Internet, such as scoring data, word-of-mouth data, forum reviews, and so on, and take it into consideration to decide which car to buy. Suppose there are seven new energy cars to choose from. People collect assessment information on these cars from users through the three channels above (scoring data, word-of-mouth data, and forum reviews) on eight attributes on the "Auto Home" website: space, power, handling, power consumption, comfort, appearance, interior decoration, and cost performance, denoted by $x_1, x_2, x_3, x_4, x_5, x_6, x_7$, and $x_8$, respectively. The seven cars are ZhiXuan ($A_1$), WeiChiFS ($A_2$), LiWei ($A_3$), FeiDu ($A_4$), RuiNa RV ($A_5$), KiaK2 ($A_6$), and JinRui ($A_7$). Since the online scoring uses a 5-point system, the scoring data are mapped to 5-granular linguistic term sets. The word-of-mouth data of the overall assessment of the cars can be mapped to 7-granular linguistic term sets. Due to the complexity of the community review information, this information is mapped to 9-granular linguistic term sets.

4.1. The Applications of the Algorithm

The applications of the algorithm based on PT are shown as follows.
Step 1. Collect the users' assessment information on the "Auto Home" website up to May 20, 2018. See Table 1, Table 2 and Table 3.
Here, the scoring data (see Table 1) are the final average values of the seven cars on eight attributes from the scoring data. The assessment information (see Table 2) is the general impression from the word-of-mouth data. The assessment information (see Table 3) is from forum reviews data. These data are obtained on the “Auto Home” website.
Then, we get the users’ overall assessment probabilistic linguistic term sets. See Table 4.
Then, we get the normalized DM matrix by Definition 5. See Table 5.
Here, $S_{\alpha_i}$ is defined by $\alpha_i = \frac{r_{ij}^{(k_{ij})}(L_{ij}^{(k_{ij})})}{g_i} \times P_{ij}^{(k_{ij})}$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$. For example, $\alpha_i = \frac{4.71}{5} \times \frac{1}{3} = \frac{4.71}{15}$.
Step 2. Calculate the weight vector w = ( w 1 , w 2 , w 3 , w 4 , w 5 , w 6 , w 7 , w 8 ) T on the eight attributes X = ( x 1 , x 2 , x 3 , x 4 , x 5 , x 6 , x 7 , x 8 ) T by Equation (12). See Table 6.
Step 3. Get a weighted DM matrix. See Table 7.
Here, for convenience of calculation, we write only $\alpha_i$ instead of $S_{\alpha_i}$.
Step 4. Calculate the PLPIS and PLNIS respectively.
The normalized PLPIS is as follows (see Table 8):
The normalized PLNIS is as follows (see Table 9):
Step 5. The results of $V(A_i, L^-)$ and $V(A_i, L^+)$ $(i = 1, 2, \ldots, 7)$ are shown in Table 10 and Table 11.
Step 6. Calculate the closeness coefficient C i of A i   ( i = 1 , 2 , , 7 ) by Equation (18). The results are as follows. See Table 12 and Figure 2.
Rank the alternatives by the values C i of A i   ( i = 1 , 2 , , 7 ) . See Table 13.

4.2. Sensitivity Analysis

In order to analyze the sensitivity of the parameters, we take different parameter values [38] and rank $A_i$ $(i = 1, 2, \ldots, 7)$ by the algorithm based on PT as in Section 4.1. The results are as follows.
Calculate the closeness coefficient C i of each alternative A i ( i = 1 , 2 , , 7 ) by Equation (18). See Table 14.
Rank the alternatives by the values C i of A i   ( i = 1 , 2 , , 7 ) . See Table 15.
From Table 14 and Table 15, we can see that the rankings of the alternatives $A_i$ $(i = 1, 2, \ldots, 7)$ generally differ; only for $\alpha = 0.85$, $\beta = 0.85$, $\theta = 4.1$ $(\lambda = 2)$ and $\alpha = 0.725$, $\beta = 0.717$, $\theta = 2.04$ $(\lambda = 2)$ do the results of the algorithm based on PT coincide. The ranking results change as the parameters ($\theta$, $\alpha$, and $\beta$) are changed, which is consistent with the meaning of these parameters. Here, $\alpha$ and $\beta$ are power parameters related to gains and losses, respectively. $\theta$ is the risk-aversion parameter, which makes the value function steeper for losses than for gains when $\theta > 1$: the larger its value, the greater the degree of risk aversion. The ranking of $A_i$ then differs accordingly. Therefore, the algorithm based on PT proposed in Section 4.1 is scientific.

4.3. Comparative Analysis

In order to illustrate the feasibility and efficiency of the algorithm based on PT, we compute the results of three other algorithms, based on TOPSIS, VIKOR, and Pang Qi et al.'s method, respectively.
The results of the algorithm based on TOPSIS are as follows:
The closeness coefficient of $A_i$ $(i = 1, 2, \ldots, 7)$ is
$$CD_i = \frac{(1 - \delta)\, d(A_i, L^-)}{\delta\, d(A_i, L^+) + (1 - \delta)\, d(A_i, L^-)}. \quad (19)$$
The parameter $\delta \in [0, 1]$ represents the risk preference of the decision maker: $\delta > 0.5$ means that the DMs are optimistic, and $\delta < 0.5$ means they are pessimistic. The value of $\delta$ should be given by the DMs beforehand. Here, let $\delta = 0.5$. The higher $CD_i$ is, the better the alternative is.
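A minimal sketch of Equation (19) (the function name is an assumption):

```python
def closeness(d_pos, d_neg, delta=0.5):
    # TOPSIS-style closeness with risk-attitude parameter delta.
    return (1 - delta) * d_neg / (delta * d_pos + (1 - delta) * d_neg)

# d(A_i, L+) = 0.3 and d(A_i, L-) = 0.7 for some alternative:
print(closeness(0.3, 0.7))  # 0.7 -> closer to the positive ideal
```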
Calculate the closeness coefficient C D i by Equation (19), and the results are as follows. See Table 16 and Figure 3.
Rank the alternatives by the values C D i of A i ( i = 1 , 2 , , 7 ). See Table 17.
The results of the algorithm based on VIKOR are as follows:
The compromise index:
$$MC_i = v \frac{MU_i - MU^+}{MU^- - MU^+} + (1 - v) \frac{MR_i - MR^+}{MR^- - MR^+}, \quad (20)$$
The whole benefit index:
$$MU_i = \sum_{j=1}^{n} w_j \frac{d(L_{ij}(P), L_j(P)^+)}{d(L_j(P)^+, L_j(P)^-)}, \quad (21)$$
The individual regret index:
$$MR_i = \max_j \left[ w_j \frac{d(L_{ij}(P), L_j(P)^+)}{d(L_j(P)^+, L_j(P)^-)} \right], \quad (22)$$
where $MU^+ = \max_i \{MU_i\}$, $MU^- = \min_i \{MU_i\}$, $MR^+ = \max_i \{MR_i\}$, $MR^- = \min_i \{MR_i\}$, $i = 1, 2, \ldots, 7$, $j = 1, 2, \ldots, 8$. The parameter $v$ denotes the weight of the strategy of the maximum whole benefits, and $1 - v$ is the weight of the individual regret strategy. Here, let $v = 0.5$. The higher $MC_i$ is, the better the alternative is.
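A minimal sketch of Equations (20)–(22) under the convention stated above ($MU^+ = \max$, $MU^- = \min$, so a higher index is better); the input arrays and function name are assumptions:

```python
def vikor(dist_pos, span, w, v=0.5):
    # dist_pos[i][j] = d(L_ij(P), L_j(P)+); span[j] = d(L_j(P)+, L_j(P)-).
    m, n = len(dist_pos), len(w)
    MU = [sum(w[j] * dist_pos[i][j] / span[j] for j in range(n)) for i in range(m)]
    MR = [max(w[j] * dist_pos[i][j] / span[j] for j in range(n)) for i in range(m)]
    def scale(xs):  # (max - x) / (max - min): higher is better
        lo, hi = min(xs), max(xs)
        return [(hi - x) / (hi - lo) for x in xs]
    sMU, sMR = scale(MU), scale(MR)
    return [v * sMU[i] + (1 - v) * sMR[i] for i in range(m)]

print(vikor([[0.1, 0.2], [0.3, 0.1], [0.2, 0.2]], [0.5, 0.4], [0.6, 0.4]))
```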
We calculate the compromise index M C i of A i ( i = 1 , 2 , , 7 ) by Equations (20)–(22) and obtain the results. See Table 18 and Figure 4.
Rank the alternatives by the compromise index M C i of A i ( i = 1 , 2 , , 7 ). See Table 19.
The results of the algorithm based on Pang Qi et al.’s method [27] are as follows:
The closeness coefficient of $A_i$ $(i = 1, 2, \ldots, 7)$ is
$$CI_i = \frac{d(A_i, L^-)}{d_{\max}(A_i, L^-)} - \frac{d(A_i, L^+)}{d_{\min}(A_i, L^+)}. \quad (23)$$
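A minimal sketch of Equation (23) (the function name is an assumption); by construction $CI_i \le 0$, with 0 the best attainable value, matching the pattern in Tables 20 and 27:

```python
def pang_closeness(d_pos, d_neg):
    # d_pos[i] = d(A_i, L+); d_neg[i] = d(A_i, L-).
    dmax_neg, dmin_pos = max(d_neg), min(d_pos)
    return [d_neg[i] / dmax_neg - d_pos[i] / dmin_pos for i in range(len(d_pos))]

print(pang_closeness([0.2, 0.3, 0.25], [0.5, 0.4, 0.45]))  # [0.0, -0.7, -0.35]
```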
Calculate the closeness coefficient C I i by Equation (23), and obtain the results. See Table 20 and Figure 5.
Rank the alternatives by the closeness coefficient C I i of A i ( i = 1 , 2 , , 7 ). See Table 21.
From the comparative analysis, we can see that when $\lambda = 1$, the ranking of $A_i$ $(i = 1, 2, \ldots, 7)$ is $A_5 \succ A_2 \succ A_7 \succ A_3 \succ A_6 \succ A_4 \succ A_1$, and when $\lambda = 2$, the ranking is $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$, which shows that the ranking results of the algorithm based on PT are consistent with those of the algorithm based on TOPSIS. See Table 13 and Table 17. Even when the parameters ($\theta$, $\alpha$, and $\beta$) are changed, the rankings of $A_i$ remain $A_5 \succ A_2 \succ A_7 \succ A_3 \succ A_6 \succ A_4 \succ A_1$ $(\lambda = 1)$ and $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$ $(\lambda = 2)$, respectively. See Table 15 and Table 17. However, the rankings of $A_i$ differ for the algorithms based on VIKOR and Pang Qi et al.'s method. See Table 19 and Table 21. The comparative analysis results demonstrate that the algorithm based on PT is superior to the other traditional algorithms.

4.4. The Second Case Study

In order to better illustrate the feasibility and validity of the algorithm based on PT, a second case study is given. There are seven cars to choose from on the "Auto Home" website: ATENZA ($A_1$), CAMRY ($A_2$), ACCORD ($A_3$), LAMANDO ($A_4$), SAGITAR ($A_5$), LAVIDA ($A_6$), and BORA ($A_7$). The computational and analytical processes are the same as in Section 4.1.
Collect the users’ assessment information on the “Auto Home” website until January 5 in 2019. See Table 22, Table 23 and Table 24.
By the same method as in Section 4.1, the weight vector $w = (w_1, w_2, w_3, w_4, w_5, w_6, w_7, w_8)^T$ on the eight attributes $X = (x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8)^T$ is computed by Equation (12). See Table 25.
Get the weighted DM matrix. See Table 26.
Since the calculation process of this example is the same as that of Section 4.1, only the final calculation results are given. See Table 27.
The ranking of A i is as follows. See Table 28.
5. Conclusions

This paper proposes generalized distance measures between two PLTSs with multi-granular linguistic information, which help to deal with multi-granular MAGDM problems. These distance measures improve the accuracy of multi-granular linguistic information in MAGDM problems, even when some assessment information is missing. In particular, the parameter $\lambda$ of the extended distance measure is a variable, which can be used to obtain different distance formulas according to practical needs. Based on these distance measures, the extended MAGDM algorithm based on PT is proposed. From the sensitivity analyses of the parameters ($\theta$, $\alpha$, and $\beta$), we find that the rankings of the alternatives $A_i$ $(i = 1, 2, \ldots, 7)$ generally differ; only for $\alpha = 0.85$, $\beta = 0.85$, $\theta = 4.1$ and $\alpha = 0.725$, $\beta = 0.717$, $\theta = 2.04$ with $\lambda = 2$ do the results of the algorithm based on PT coincide. The ranking results change as the parameters ($\theta$, $\alpha$, and $\beta$) are changed, which is consistent with the meaning of these parameters. Therefore, the algorithm based on PT proposed in Section 4.1 is shown to be scientific. Two case studies of purchasing a car are given to demonstrate that the algorithm based on PT is valid and applicable by comparison with the extended TOPSIS, VIKOR, and Pang Qi et al.'s algorithms. Here, the parameters $\delta$ and $v$ can be selected according to the needs of the actual problem. From the comparative analyses, we find that when $\lambda = 1$, the ranking of $A_i$ $(i = 1, 2, \ldots, 7)$ is $A_5 \succ A_2 \succ A_7 \succ A_3 \succ A_6 \succ A_4 \succ A_1$, and when $\lambda = 2$, it is $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$, which shows that the ranking results of the algorithm based on PT are consistent with those of the algorithm based on TOPSIS. See Table 13 and Table 17. Even when the parameters ($\theta$, $\alpha$, and $\beta$) are changed, the rankings of $A_i$ remain $A_5 \succ A_2 \succ A_7 \succ A_3 \succ A_6 \succ A_4 \succ A_1$ $(\lambda = 1)$ and $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$ $(\lambda = 2)$, respectively. However, the rankings of $A_i$ differ for the algorithms based on VIKOR and Pang Qi et al.'s method. Therefore, the comparative analyses demonstrate the novelty, feasibility, and validity of the proposed MAGDM method based on PT. The novel MAGDM method in this paper can be used to deal with practical MAGDM problems under multi-granular probabilistic linguistic environments.
The MAGDM algorithm based on PT proposed in this paper also has some limitations: if there are too many attributes or alternatives, the computation in Section 4.1 might become quite large. However, software tools such as crawler technology, MATLAB, and Python can be used, so this is not a serious obstacle to the use of the method. Whether there are more appropriate ways to measure the distances between two PLTSs with multi-granular linguistic information is a question worth studying. There are some directions for further investigation: firstly, how to select an appropriate parameter $\lambda$ to calculate the distance between two PLTSs is a valuable problem; secondly, the applications of these distance measures are interesting to research in other fields, such as cluster analysis, MCGDM problems, and so on; and finally, the linguistic information could also be expressed by hesitant fuzzy numbers, interval fuzzy numbers, etc., to improve the fitting accuracy. These issues will be the focus of future work.

Funding

This study was funded by the Major Program of the National Natural Science Foundation of China (71490725), the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (71521001), the National Natural Science Foundation of China (71722010,91546114,91746302,71501057), the National Key Research and Development Program of China (2017YF80803303), the National Natural Science Foundation of China (71571002), and the Key Projects of Natural Science Research in Anhui Colleges and Universities (KJ2016A151).

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper. This research does not involve any human or animal participation.

References

  1. Zhou, L.G.; Chen, H.Y. Continuous ordered linguistic distance measure and its application to multiple attribute group decision making. Group Decis. Negot. 2013, 22, 739–758. [Google Scholar] [CrossRef]
  2. Parreiras, R.O.; Ekel, P.Y.; Martins, J.S.C.; Palhares, R.M. A flexible consensus scheme for multicriteria group decision making under linguistic assessments. Inf. Sci. 2010, 180, 1075–1089. [Google Scholar] [CrossRef]
  3. Grossglauser, M.; Saner, H. Data–driven healthcare: From patterns to actions. Eur. J. Prev. Cardiol. 2014, 21, 14–17. [Google Scholar] [CrossRef]
  4. Celotto, A.; Loia, V.; Senatore, S. Fuzzy linguistic approach to quality assessment model for electricity network infrastructure. Inf. Sci. 2015, 304, 1–15. [Google Scholar] [CrossRef]
  5. Tao, Z.F.; Chen, H.Y.; Zhou, L.G.; Liu, J.P. 2-Tuple linguistic soft set and its application to group decision making. Soft Comput. 2015, 19, 1201–1213. [Google Scholar] [CrossRef]
  6. Sengupta, A.T.; Pal, K.; Zhou, L.G.; Chen, H.Y. Fuzzy Preference Ordering of Interval Numbers in Decision Problems; Springer: Heidelberg, Germany, 2009; Volume 238, pp. 140–143. [Google Scholar]
  7. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  8. Xu, Z.S. Linguistic Decision Making: Theory and Methods; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  9. Herrera, F.; Verdegay, J.L. Linguistic assessments in group decision. In Proceedings of the First European Congress on Fuzzy and Intelligent Technologies, Aachen, Germany, 7–10 September 1993; pp. 941–948. [Google Scholar]
  10. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. A model of consensus in group decision making under linguistic assessments. Fuzzy Sets Syst. 1996, 78, 73–87. [Google Scholar] [CrossRef]
  11. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. A rational consensus model in group decision making using linguistic assessments. Fuzzy Sets Syst. 1997, 88, 31–49. [Google Scholar] [CrossRef]
  12. Xu, Z.S. A method for multiple attribute decision making with incomplete weight information in linguistic setting. Knowl.-Based Syst. 2007, 20, 719–725. [Google Scholar] [CrossRef]
  13. Ben-Arieh, D.; Chen, Z. Linguistic group decision-making: Opinion aggregation and measures of consensus. Fuzzy Opt. Decis. Mak. 2006, 5, 371–386. [Google Scholar] [CrossRef]
  14. Xu, Z.S. An approach based on the uncertain LOWG and induced uncertain LOWG operators to group decision making with uncertain multiplicative linguistic preference relations. Decis. Support Syst. 2006, 41, 488–499. [Google Scholar] [CrossRef]
  15. Liu, P.D.; Mahmood, T.; Khan, Q. Multi-attribute decision-making based on prioritized aggregation operator under hesitant intuitionistic fuzzy linguistic environment. Symmetry 2017, 9, 11. [Google Scholar] [CrossRef]
  16. Liao, H.C.; Zhang, C.; Luo, L. A multiple attribute group decision making method based on two novel intuitionistic multiplicative distance measures. Inf. Sci. 2018, 467, 766–783. [Google Scholar] [CrossRef]
  17. Rodríguez, R.M.; Martínez, L.; Herrera, F. Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119. [Google Scholar] [CrossRef]
  18. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539. [Google Scholar] [CrossRef]
  19. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning—I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  20. Dong, Y.C.; Wu, Y.Z.; Zhang, H.J.; Zhang, G.Q. Multi-granular unbalanced linguistic distribution assessments with interval symbolic proportions. Knowl.-Based Syst. 2015, 82, 139–151. [Google Scholar] [CrossRef]
  21. Wu, Z.B.; Xu, J.P. Possibility distribution-based approach for MAGDM with hesitant fuzzy linguistic information. IEEE Trans. Cybern. 2016, 46, 694–705. [Google Scholar] [CrossRef]
  22. Zhang, G.Q.; Dong, Y.C.; Xu, Y.F. Consistency and consensus measures for linguistic preference relations based on distribution assessments. Inf. Fusion 2014, 17, 46–55. [Google Scholar] [CrossRef]
  23. Liu, H.B.; Rodriguez, R.M. A fuzzy envelope for hesitant fuzzy linguistic term set and its application to multi-criteria decision making. Inf. Sci. 2014, 258, 220–238. [Google Scholar] [CrossRef]
  24. Yang, J.B. Rule and utility based evidential reasoning approach for multi-attribute decision analysis under uncertainties. Eur. J. Oper. Res. 2001, 131, 31–61. [Google Scholar] [CrossRef]
  25. Yang, J.B.; Xu, D.L. On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2002, 32, 289–304. [Google Scholar] [CrossRef]
  26. Wang, J.H.; Hao, J.Y. A new version of 2-tuple fuzzy linguistic representation model for computing with words. IEEE Trans. Fuzzy Syst. 2006, 14, 435–445. [Google Scholar] [CrossRef]
  27. Pang, Q.; Wang, H.; Xu, Z.S. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143. [Google Scholar] [CrossRef]
  28. Lin, M.W.; Xu, Z.S.; Zhai, Y.L.; Yao, Z.Q. Multi-attribute group decision-making under probabilistic uncertain linguistic environment. J. Oper. Res. Soc. 2017, 1–15. [Google Scholar] [CrossRef]
  29. Xu, Z.S.; Zhou, W. Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment. Fuzzy Opt. Decis. Mak. 2017, 16, 481–503. [Google Scholar] [CrossRef]
  30. Lin, M.W.; Xu, Z.S. Probabilistic Linguistic Distance Measures and Their Applications in Multi-criteria Group Decision Making. In Soft Computing Applications for Group Decision-Making and Consensus Modeling; Springer: Cham, Switzerland, 2018; Volume 357, pp. 411–440. [Google Scholar]
  31. Kobina, A.; Liang, D.C. Probabilistic linguistic power aggregation operators for multi-criteria group decision making. Symmetry 2017, 9, 12. [Google Scholar] [CrossRef]
  32. Xu, Z.S.; Wang, H. Managing multi-granularity linguistic information in qualitative group decision making: An overview. Granul. Comput. 2016, 1, 21–35. [Google Scholar] [CrossRef]
  33. Wang, H.; Xu, Z.S.; Zeng, X.J. Hesitant fuzzy linguistic term sets for linguistic decision making: Current developments, issues and challenges. Inf. Fusion 2018, 43, 1–12. [Google Scholar] [CrossRef]
  34. Gorzalczany, M.B. An interval-valued fuzzy inference method-some basic properties. Fuzzy Sets Syst. 1989, 31, 243–251. [Google Scholar] [CrossRef]
  35. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  36. Atanassov, K.T.; Gargov, G. Interval–valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349. [Google Scholar] [CrossRef]
  37. Chen, N.; Xu, Z.S.; Xia, M.M. Interval-valued hesitant preference relations and their applications to group decision making. Knowl-Based Syst. 2013, 37, 528–540. [Google Scholar] [CrossRef]
  38. Wang, L.; Wang, Y.M.; Martínez, L. A group decision method based on prospect theory for emergency situations. Inf. Sci. 2017, 418, 119–135. [Google Scholar]
  39. Yao, S.; Yu, D.; Song, Y.; Yao, H.; Hu, Y.; Guo, B. Dry bulk carrier investment selection through a dual group decision fusing mechanism in the green supply chain. Sustainability 2018, 10, 4528. [Google Scholar] [CrossRef]
  40. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. A sequential selection process in group decision making with a linguistic assessment approach. Inf. Sci. 1995, 85, 223–239. [Google Scholar] [CrossRef]
  41. Rodríguez, R.M.; Martínez, L.; Herrera, F. A group decision making model dealing with comparative linguistic expressions based on hesitant fuzzy linguistic term sets. Inf. Sci. 2013, 241, 28–42. [Google Scholar] [CrossRef]
  42. Yager, R.R. Generalized OWA aggregation operators. Fuzzy Opt. Decis. Mak. 2004, 3, 93–107. [Google Scholar] [CrossRef]
  43. Ma, J.; Fan, Z.P.; Huang, L.H. A subjective and objective integrated approach to determine attribute weights. Eur. J. Oper. Res. 1999, 112, 397–404. [Google Scholar] [CrossRef]
  44. Kahneman, D.; Tversky, A. Prospect theory: An analysis of decision under risk. Econometrica 1979, 47, 263–292. [Google Scholar] [CrossRef]
  45. Xu, Z.S.; Xia, M.M. Hesitant Fuzzy entropy and cross-entropy and their use in multi-attribute decision-making. Int. J. Intell. Syst. 2012, 27, 799–822. [Google Scholar] [CrossRef]
  46. Wang, L.Z.; Zhang, X.; Wang, Y.M. A prospect theory-based interval dynamic reference point method for emergency decision making. Expert Syst. Appl. 2015, 42, 9379–9388. [Google Scholar] [CrossRef]
  47. Zhang, X.L.; Xu, Z.S. The TODIM analysis approach based on novel measured functions under hesitant fuzzy environment. Knowl.-Based Syst. 2014, 61, 48–58. [Google Scholar] [CrossRef]
Figure 1. The algorithm diagram.
Figure 2. The ratio $C_i$ of $A_i$.
Figure 3. The closeness coefficient $CD_i$.
Figure 4. The compromise index $MC_i$.
Figure 5. The closeness coefficient $CI_i$.
Table 1. The assessment information from scoring data by S 5 .
Alternative $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $S_5^5$ $S_5^5$ $S_5^5$ $S_4^5$ $S_5^5$ $S_5^5$ $S_4^5$ $S_5^5$
$A_2$ $S_{4.79}^5$ $S_{4.76}^5$ $S_{4.78}^5$ $S_{4.93}^5$ $S_{4.45}^5$ $S_{4.87}^5$ $S_{4.15}^5$ $S_{4.88}^5$
$A_3$ $S_{4.59}^5$ $S_{4.33}^5$ $S_{4.39}^5$ $S_{4.90}^5$ $S_{4.04}^5$ $S_{4.34}^5$ $S_{3.68}^5$ $S_{4.35}^5$
$A_4$ $S_{4.83}^5$ $S_{4.89}^5$ $S_{4.13}^5$ $S_{4.79}^5$ $S_{3.41}^5$ $S_{4.64}^5$ $S_{3.56}^5$ $S_{4.42}^5$
$A_5$ $S_{4.60}^5$ $S_{4.04}^5$ $S_{4.06}^5$ $S_{4.15}^5$ $S_{3.68}^5$ $S_{4.85}^5$ $S_{3.79}^5$ $S_{4.49}^5$
$A_6$ $S_{4.25}^5$ $S_{3.80}^5$ $S_{4.38}^5$ $S_{4.37}^5$ $S_{3.63}^5$ $S_{4.63}^5$ $S_{4.05}^5$ $S_{4.44}^5$
$A_7$ $S_{3.55}^5$ $S_{3.60}^5$ $S_{4.38}^5$ $S_{4.28}^5$ $S_{3.53}^5$ $S_{4.62}^5$ $S_{3.34}^5$ $S_{4.40}^5$
Table 2. The assessment information from word-of-mouth data by S 7 .
Alternative $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $S_6^7$ $S_3^7$ $S_5^7$ $S_2^7$ $S_7^7$ $S_2^7$ $S_4^7$
$A_2$ $S_6^7$ $S_3^7$ $S_7^7$ $S_5^7$ $S_4^7$ $S_7^7$ $S_2^7$ $S_5^7$
$A_3$ $S_5^7$ $S_3^7$ $S_5^7$ $S_4^7$ $S_6^7$ $S_4^7$ $S_6^7$
$A_4$ $S_7^7$ $S_3^7$ $S_5^7$ $S_5^7$ $S_2^7$ $S_7^7$ $S_5^7$
$A_5$ $S_5^7$ $S_4^7$ $S_6^7$ $S_5^7$ $S_3^7$ $S_7^7$ $S_2^7$ $S_6^7$
$A_6$ $S_6^7$ $S_3^7$ $S_7^7$ $S_6^7$ $S_5^7$ $S_6^7$ $S_3^7$ $S_4^7$
$A_7$ $S_5^7$ $S_5^7$ $S_4^7$ $S_3^7$ $S_5^7$ $S_3^7$ $S_5^7$
Table 3. The assessment information from forum reviews data by S 9 .
Alternative $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $S_8^9$ $S_6^9$ $S_7^9$ $S_8^9$ $S_4^9$ $S_8^9$ $S_7^9$ $S_7^9$
$A_2$ $S_7^9$ $S_7^9$ $S_7^9$ $S_8^9$ $S_6^9$ $S_7^9$ $S_6^9$ $S_9^9$
$A_3$ $S_9^9$ $S_6^9$ $S_9^9$ $S_8^9$ $S_4^9$ $S_4^9$ $S_1^9$ $S_7^9$
$A_4$ $S_9^9$ $S_4^9$ $S_4^9$ $S_9^9$ $S_3^9$ $S_6^9$ $S_1^9$ $S_6^9$
$A_5$ $S_8^9$ $S_5^9$ $S_3^9$ $S_3^9$ $S_4^9$ $S_7^9$ $S_4^9$ $S_7^9$
$A_6$ $S_4^9$ $S_6^9$ $S_9^9$ $S_9^9$ $S_2^9$ $S_8^9$ $S_4^9$ $S_8^9$
$A_7$ $S_6^9$ $S_5^9$ $S_8^9$ $S_5^9$ $S_6^9$ $S_8^9$ $S_8^9$ $S_9^9$
Table 4. The assessment probabilistic linguistic assessment matrix.
Alternative $x_1$ $x_2$ $x_3$ $x_4$
$A_1$ $\{S_5^5(\frac13), S_6^7(\frac13), S_8^9(\frac13)\}$ $\{S_1^5(\frac13), S_3^7(\frac13), S_6^9(\frac13)\}$ $\{S_5^5(\frac12), S_7^9(\frac12)\}$ $\{S_4^5(\frac13), S_5^7(\frac13), S_8^9(\frac13)\}$
$A_2$ $\{S_{4.79}^5(\frac13), S_6^7(\frac13), S_7^9(\frac13)\}$ $\{S_{4.76}^5(\frac13), S_3^7(\frac13), S_7^9(\frac13)\}$ $\{S_{4.78}^5(\frac13), S_7^7(\frac13), S_7^9(\frac13)\}$ $\{S_{4.93}^5(\frac13), S_5^7(\frac13), S_8^9(\frac13)\}$
$A_3$ $\{S_{4.59}^5(\frac13), S_5^7(\frac13), S_9^9(\frac13)\}$ $\{S_{4.33}^5(\frac13), S_3^7(\frac13), S_6^9(\frac13)\}$ $\{S_{4.39}^5(\frac12), S_9^9(\frac12)\}$ $\{S_{4.90}^5(\frac13), S_5^7(\frac13), S_8^9(\frac13)\}$
$A_4$ $\{S_{4.83}^5(\frac13), S_7^7(\frac13), S_9^9(\frac13)\}$ $\{S_{4.89}^5(\frac13), S_3^7(\frac13), S_4^9(\frac13)\}$ $\{S_{4.13}^5(\frac13), S_5^7(\frac13), S_4^9(\frac13)\}$ $\{S_{4.79}^5(\frac13), S_5^7(\frac13), S_9^9(\frac13)\}$
$A_5$ $\{S_{4.60}^5(\frac13), S_5^7(\frac13), S_8^9(\frac13)\}$ $\{S_{4.04}^5(\frac13), S_4^7(\frac13), S_5^9(\frac13)\}$ $\{S_{4.06}^5(\frac13), S_6^7(\frac13), S_3^9(\frac13)\}$ $\{S_{4.15}^5(\frac13), S_5^7(\frac13), S_3^9(\frac13)\}$
$A_6$ $\{S_{4.25}^5(\frac13), S_6^7(\frac13), S_4^9(\frac13)\}$ $\{S_{3.80}^5(\frac13), S_3^7(\frac13), S_6^9(\frac13)\}$ $\{S_{4.38}^5(\frac13), S_7^7(\frac13), S_9^9(\frac13)\}$ $\{S_{4.37}^5(\frac13), S_6^7(\frac13), S_9^9(\frac13)\}$
$A_7$ $\{S_{3.55}^5(\frac13), S_5^7(\frac13), S_6^9(\frac13)\}$ $\{S_{3.60}^5(\frac13), S_5^7(\frac13), S_5^9(\frac13)\}$ $\{S_{4.38}^5(\frac13), S_4^7(\frac13), S_8^9(\frac13)\}$ $\{S_{4.28}^5(\frac12), S_5^9(\frac12)\}$
Alternative $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $\{S_5^5(\frac13), S_2^7(\frac13), S_4^9(\frac13)\}$ $\{S_5^5(\frac13), S_7^7(\frac13), S_8^9(\frac13)\}$ $\{S_4^5(\frac13), S_2^7(\frac13), S_7^9(\frac13)\}$ $\{S_5^5(\frac13), S_4^7(\frac13), S_7^9(\frac13)\}$
$A_2$ $\{S_{4.45}^5(\frac13), S_4^7(\frac13), S_6^9(\frac13)\}$ $\{S_{4.87}^5(\frac13), S_7^7(\frac13), S_7^9(\frac13)\}$ $\{S_{4.15}^5(\frac13), S_2^7(\frac13), S_6^9(\frac13)\}$ $\{S_{4.88}^5(\frac13), S_5^7(\frac13), S_9^9(\frac13)\}$
$A_3$ $\{S_{4.04}^5(\frac13), S_4^7(\frac13), S_4^9(\frac13)\}$ $\{S_{4.34}^5(\frac13), S_6^7(\frac13), S_4^9(\frac13)\}$ $\{S_{3.68}^5(\frac13), S_4^7(\frac13), S_1^9(\frac13)\}$ $\{S_{4.35}^5(\frac13), S_6^7(\frac13), S_7^9(\frac13)\}$
$A_4$ $\{S_{3.41}^5(\frac13), S_2^7(\frac13), S_3^9(\frac13)\}$ $\{S_{4.64}^5(\frac13), S_7^7(\frac13), S_6^9(\frac13)\}$ $\{S_{3.56}^5(\frac12), S_1^9(\frac12)\}$ $\{S_{4.42}^5(\frac13), S_5^7(\frac13), S_6^9(\frac13)\}$
$A_5$ $\{S_{3.68}^5(\frac13), S_3^7(\frac13), S_4^9(\frac13)\}$ $\{S_{4.85}^5(\frac13), S_7^7(\frac13), S_7^9(\frac13)\}$ $\{S_{3.79}^5(\frac13), S_2^7(\frac13), S_4^9(\frac13)\}$ $\{S_{4.49}^5(\frac13), S_6^7(\frac13), S_7^9(\frac13)\}$
$A_6$ $\{S_{3.63}^5(\frac13), S_5^7(\frac13), S_2^9(\frac13)\}$ $\{S_{4.63}^5(\frac13), S_6^7(\frac13), S_8^9(\frac13)\}$ $\{S_{4.05}^5(\frac13), S_3^7(\frac13), S_4^9(\frac13)\}$ $\{S_{4.44}^5(\frac13), S_4^7(\frac13), S_8^9(\frac13)\}$
$A_7$ $\{S_{3.53}^5(\frac13), S_3^7(\frac13), S_6^9(\frac13)\}$ $\{S_{4.62}^5(\frac13), S_5^7(\frac13), S_8^9(\frac13)\}$ $\{S_{3.34}^5(\frac13), S_3^7(\frac13), S_8^9(\frac13)\}$ $\{S_{4.40}^5(\frac13), S_5^7(\frac13), S_9^9(\frac13)\}$
Table 5. The normalized decision-making (DM) matrix.
Alternative $x_1$ $x_2$ $x_3$ $x_4$
$A_1$ $\{S_{1/3}, S_{8/27}, S_{2/7}\}$ $\{S_{1/3}, S_{2/9}, S_{1/7}\}$ $\{S_{1/2}, S_{7/18}, S_0\}$ $\{S_{8/27}, S_{4/15}, S_{5/21}\}$
$A_2$ $\{S_{4.79/15}, S_{2/7}, S_{7/27}\}$ $\{S_{4.76/15}, S_{7/27}, S_{1/7}\}$ $\{S_{1/3}, S_{4.78/15}, S_{7/27}\}$ $\{S_{4.93/15}, S_{8/27}, S_{5/21}\}$
$A_3$ $\{S_{1/3}, S_{4.59/15}, S_{5/21}\}$ $\{S_{4.33/15}, S_{2/9}, S_{1/7}\}$ $\{S_{1/2}, S_{4.39/10}, S_0\}$ $\{S_{4.90/15}, S_{8/27}, S_{5/21}\}$
$A_4$ $\{S_{2/3}, S_{4.83/15}, S_0\}$ $\{S_{4.89/15}, S_{4/27}, S_{1/7}\}$ $\{S_{4.13/15}, S_{5/21}, S_{4/27}\}$ $\{S_{1/3}, S_{4.79/15}, S_{5/21}\}$
$A_5$ $\{S_{4.60/15}, S_{8/27}, S_{5/21}\}$ $\{S_{4.04/15}, S_{4/21}, S_{5/27}\}$ $\{S_{2/7}, S_{4.06/15}, S_{1/9}\}$ $\{S_{4.15/15}, S_{5/21}, S_{1/9}\}$
$A_6$ $\{S_{2/7}, S_{4.25/15}, S_{4/27}\}$ $\{S_{3.80/15}, S_{2/9}, S_{1/7}\}$ $\{S_{2/3}, S_{4.38/15}, S_0\}$ $\{S_{1/3}, S_{4.37/15}, S_{2/7}\}$
$A_7$ $\{S_{5/21}, S_{3.55/15}, S_{2/9}\}$ $\{S_{3.60/15}, S_{5/21}, S_{5/27}\}$ $\{S_{8/27}, S_{4.38/15}, S_{4/21}\}$ $\{S_{4.28/10}, S_{5/18}, S_0\}$
Alternative $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $\{S_{1/3}, S_{4/27}, S_{2/21}\}$ $\{S_{2/3}, S_{8/27}, S_0\}$ $\{S_{4/15}, S_{7/27}, S_{2/21}\}$ $\{S_{1/3}, S_{7/27}, S_{4/21}\}$
$A_2$ $\{S_{4.45/15}, S_{2/9}, S_{4/21}\}$ $\{S_{1/3}, S_{4.87/15}, S_{7/27}\}$ $\{S_{4.15/15}, S_{2/9}, S_{2/21}\}$ $\{S_{1/3}, S_{4.88/15}, S_{5/21}\}$
$A_3$ $\{S_{4.04/15}, S_{4/21}, S_{4/27}\}$ $\{S_{4.34/15}, S_{2/7}, S_{4/27}\}$ $\{S_{3.68/15}, S_{4/21}, S_{1/27}\}$ $\{S_{4.35/15}, S_{2/7}, S_{7/27}\}$
$A_4$ $\{S_{3.41/15}, S_{1/9}, S_{2/21}\}$ $\{S_{1/3}, S_{4.64/15}, S_{2/9}\}$ $\{S_{3.56/10}, S_{1/18}, S_0\}$ $\{S_{4.42/15}, S_{5/21}, S_{2/9}\}$
$A_5$ $\{S_{3.68/15}, S_{4/27}, S_{1/7}\}$ $\{S_{1/3}, S_{4.85/15}, S_{7/27}\}$ $\{S_{3.79/15}, S_{4/27}, S_{2/21}\}$ $\{S_{4.49/15}, S_{2/7}, S_{7/27}\}$
$A_6$ $\{S_{3.63/15}, S_{5/21}, S_{2/27}\}$ $\{S_{4.63/15}, S_{8/27}, S_{2/7}\}$ $\{S_{4.05/15}, S_{4/27}, S_{1/7}\}$ $\{S_{8/27}, S_{4.44/15}, S_{4/21}\}$
$A_7$ $\{S_{3.53/15}, S_{2/9}, S_{1/7}\}$ $\{S_{4.62/15}, S_{8/27}, S_{5/21}\}$ $\{S_{8/27}, S_{3.34/15}, S_{1/7}\}$ $\{S_{1/3}, S_{4.40/15}, S_{5/21}\}$
Table 6. The weights of the attributes.
Attribute $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
Weight 0.0922 0.1481 0.1022 0.1023 0.1815 0.0832 0.1948 0.0955
Table 7. The weighted normalized DM matrix.
Alternative x 1 x 2 x 3 x 4
A 1 (0.0307,0.0273,0.0263)(0.0494,0.0329,0.0212)(0.0511,0.0397,0.0000)(0.0303,0.0273,0.0244)
A 2 (0.0294,0.0263,0.0239)(0.0470,0.0384,0.0212)(0.0341,0.0326,0.0265)(0.0336,0.0303,0.0244)
A 3 (0.0307,0.0282,0.0220)(0.0428,0.0329,0.0212)(0.0511,0.0449,0.0000)(0.0334,0.0303,0.0244)
A 4 (0.0615,0.0297,0.0000)(0.0483,0.0219,0.0212)(0.0281,0.0243,0.0151)(0.0341,0.0327,0.0244)
A 5 (0.0283,0.0273,0.0220)(0.0399,0.0282,0.0274)(0.0292,0.0277,0.0114)(0.0283,0.0244,0.0114)
A 6 (0.0263,0.0261,0.0137)(0.0375,0.0329,0.0212)(0.0681,0.0298,0.0000)(0.0341,0.0298,0.0292)
A 7 (0.0220,0.0218,0.0205)(0.0355,0.0353,0.0274)(0.0303,0.0298,0.0195)(0.0438,0.0284,0.0000)
Alternative x 5 x 6 x 7 x 8
A 1 (0.0605,0.0269,0.0173)(0.0555,0.0247,0.0000)(0.0519,0.0505,0.0186)(0.0318,0.0248,0.0182)
A 2 (0.0538,0.0403,0.0346)(0.0277,0.0270,0.0216)(0.0539,0.0433,0.0186)(0.0318,0.0311,0.0227)
A 3 (0.0489,0.0346,0.0269)(0.0241,0.0238,0.0123)(0.0478,0.0371,0.0072)(0.0277,0.0273,0.0248)
A 4 (0.0413,0.0202,0.0173)(0.0277,0.0257,0.0185)(0.0693,0.0108,0.0000)(0.0281,0.0227,0.0212)
A 5 (0.0445,0.0269,0.0259)(0.0277,0.0269,0.0216)(0.0492,0.0289,0.0186)(0.0286,0.0273,0.0248)
A 6 (0.0439,0.0432,0.0134)(0.0257,0.0247,0.0238)(0.0526,0.0289,0.0278)(0.0283,0.0283,0.0182)
A 7 (0.0427,0.0403,0.0259)(0.0256,0.0247,0.0198)(0.0577,0.0434,0.0278)(0.0318,0.0280,0.0227)
Table 8. The positive ideal solution.
Attribute $x_1$ $x_2$ $x_3$ $x_4$
PLPIS (0.0615,0.0297,0.0263) (0.0494,0.0384,0.0274) (0.0681,0.0449,0.0265) (0.0438,0.0327,0.0292)
Attribute $x_5$ $x_6$ $x_7$ $x_8$
PLPIS (0.0605,0.0432,0.0346) (0.0555,0.0270,0.0238) (0.0693,0.0505,0.0278) (0.0318,0.0311,0.0248)
Table 9. The negative ideal solution.
Attribute $x_1$ $x_2$ $x_3$ $x_4$
PLNIS (0.0220,0.0218,0.0000) (0.0355,0.0219,0.0212) (0.0281,0.0243,0.0000) (0.0283,0.0244,0.0000)
Attribute $x_5$ $x_6$ $x_7$ $x_8$
PLNIS (0.0413,0.0202,0.0134) (0.0241,0.0238,0.0000) (0.0478,0.0108,0.0000) (0.0277,0.0227,0.0182)
Table 10. The results of V ( A i , L ) .
Distance Parameter $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 2.2775 2.3581 2.1791 2.0516 2.0394 2.1872 2.1285
λ = 2 2.4984 2.3883 2.2884 2.2535 2.0766 2.3228 2.2020
Table 11. The results of V ( A i , L + ) .
Distance Parameter $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 4.8225 4.9277 4.5605 4.3157 4.2010 4.5976 4.4524
λ = 2 5.2726 5.0031 4.7956 4.7448 4.3088 4.8694 4.5975
Table 12. The ratio $C_i$ of $A_i$.
Distance Parameter $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 0.4723 0.4785 0.4778 0.4754 0.4854 0.4776 0.4781
λ = 2 0.4739 0.4774 0.4772 0.4779 0.4820 0.4770 0.4789
Table 13. The ranking of A i .
Distance Parameter Rank
λ = 1 $A_5 \succ A_2 \succ A_7 \succ A_3 \succ A_6 \succ A_4 \succ A_1$
λ = 2 $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$
Table 14. The ratio $C_i$ of $A_i$.
Distance Parameter Parameters $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 α = 0.85, β = 0.85, θ = 4.1 0.2587 0.2620 0.2616 0.2604 0.2657 0.2615 0.2617
α = 0.725, β = 0.717, θ = 2.04 0.5097 0.5152 0.5146 0.5123 0.5210 0.5143 0.5142
α = 0.89, β = 0.92, θ = 2.25 0.4939 0.5003 0.5005 0.4985 0.5101 0.5003 0.5016
λ = 2 α = 0.85, β = 0.85, θ = 4.1 0.2595 0.2614 0.2613 0.2601 0.2638 0.2612 0.2622
α = 0.725, β = 0.717, θ = 2.04 0.5114 0.5141 0.5140 0.5121 0.5178 0.5139 0.5152
α = 0.89, β = 0.92, θ = 2.25 0.4939 0.4988 0.4991 0.4965 0.5060 0.4985 0.5019
Table 15. The ranking of A i .
Distance Parameter Parameters Rank
λ = 1 α = 0.85, β = 0.85, θ = 4.1 $A_5 \succ A_2 \succ A_7 \succ A_3 \succ A_6 \succ A_4 \succ A_1$
α = 0.725, β = 0.717, θ = 2.04 $A_5 \succ A_2 \succ A_3 \succ A_6 \succ A_7 \succ A_4 \succ A_1$
α = 0.89, β = 0.92, θ = 2.25 $A_5 \succ A_7 \succ A_3 \succ A_2 = A_6 \succ A_4 \succ A_1$
λ = 2 α = 0.85, β = 0.85, θ = 4.1 $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$
α = 0.725, β = 0.717, θ = 2.04 $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$
α = 0.89, β = 0.92, θ = 2.25 $A_5 \succ A_7 \succ A_3 \succ A_2 \succ A_6 \succ A_4 \succ A_1$
Table 16. The closeness coefficient C D i .
Distance Parameter $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 0.5170 0.5209 0.5202 0.5187 0.5247 0.5201 0.5206
λ = 2 0.5181 0.5202 0.5200 0.5186 0.5228 0.5199 0.5211
Table 17. The ranking of A i .
Distance Parameter Rank
λ = 1 $A_5 \succ A_2 \succ A_7 \succ A_3 \succ A_6 \succ A_4 \succ A_1$
λ = 2 $A_5 \succ A_7 \succ A_2 \succ A_3 \succ A_6 \succ A_4 \succ A_1$
Table 18. The compromise index $MC_i$ of $A_i$.
Distance Parameter $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 0.6045 0.0000 0.5221 1.0000 0.6663 0.7571 0.4424
λ = 2 0.3547 0.0088 0.4947 0.8607 0.7075 0.6459 0.4109
Table 19. The ranking of A i .
Distance Parameter Rank
λ = 1 $A_4 \succ A_6 \succ A_5 \succ A_1 \succ A_3 \succ A_7 \succ A_2$
λ = 2 $A_4 \succ A_5 \succ A_6 \succ A_3 \succ A_7 \succ A_1 \succ A_2$
Table 20. The closeness coefficient C I i .
Distance Parameter $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 −0.2062 −0.1958 −0.1821 −0.1767 −0.1506 −0.1832 −0.1754
λ = 2 −0.2585 −0.2349 −0.2251 −0.2280 −0.1903 −0.2297 −0.2103
Table 21. The ranking of A i .
Distance Parameter Rank
λ = 1 $A_5 \succ A_7 \succ A_4 \succ A_3 \succ A_6 \succ A_2 \succ A_1$
λ = 2 $A_5 \succ A_7 \succ A_3 \succ A_4 \succ A_6 \succ A_2 \succ A_1$
Table 22. The assessment information from scoring data by S 5 .
Alternative $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $S_{4.45}^5$ $S_{4.82}^5$ $S_{4.93}^5$ $S_{4.72}^5$ $S_{4.13}^5$ $S_{4.92}^5$ $S_{4.52}^5$ $S_{4.78}^5$
$A_2$ $S_{4.65}^5$ $S_{4.53}^5$ $S_{4.70}^5$ $S_{4.75}^5$ $S_{4.61}^5$ $S_{4.88}^5$ $S_{4.57}^5$ $S_{4.56}^5$
$A_3$ $S_{4.88}^5$ $S_{4.71}^5$ $S_{4.61}^5$ $S_{4.60}^5$ $S_{4.26}^5$ $S_{4.84}^5$ $S_{4.4}^5$ $S_{4.54}^5$
$A_4$ $S_{3.96}^5$ $S_{4.77}^5$ $S_{4.77}^5$ $S_{4.63}^5$ $S_{4.34}^5$ $S_5^5$ $S_{4.03}^5$ $S_{4.27}^5$
$A_5$ $S_{4.53}^5$ $S_{4.41}^5$ $S_{4.65}^5$ $S_{4.42}^5$ $S_{4.26}^5$ $S_{4.65}^5$ $S_{4.38}^5$ $S_{4.43}^5$
$A_6$ $S_{4.7}^5$ $S_{3.96}^5$ $S_{4.36}^5$ $S_{4.22}^5$ $S_{4.1}^5$ $S_{4.86}^5$ $S_{3.79}^5$ $S_{4.1}^5$
$A_7$ $S_{4.52}^5$ $S_{3.64}^5$ $S_{4.40}^5$ $S_{4.25}^5$ $S_{4.12}^5$ $S_{4.75}^5$ $S_{3.82}^5$ $S_{3.99}^5$
Table 23. The assessment information from word-of-mouth data by S 7 .
Alternative $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $S_5^7$ $S_7^7$ $S_6^7$ $S_7^7$ $S_5^7$ $S_2^7$ $S_4^7$
$A_2$ $S_7^7$ $S_2^7$ $S_3^7$ $S_6^7$ $S_6^7$ $S_6^7$ $S_7^7$ $S_5^7$
$A_3$ $S_7^7$ $S_7^7$ $S_4^7$ $S_5^7$ $S_6^7$ $S_4^7$ $S_5^7$ $S_6^7$
$A_4$ $S_3^7$ $S_7^7$ $S_6^7$ $S_4^7$ $S_7^7$ $S_5^7$ $S_6^7$
$A_5$ $S_5^7$ $S_2^7$ $S_5^7$ $S_6^7$ $S_5^7$ $S_5^7$ $S_7^7$ $S_7^7$
$A_6$ $S_4^7$ $S_1^7$ $S_7^7$ $S_6^7$ $S_6^7$ $S_7^7$ $S_1^7$ $S_5^7$
$A_7$ $S_7^7$ $S_1^7$ $S_5^7$ $S_6^7$ $S_4^7$
Table 24. The assessment information from forum reviews data by S 9 .
Alternative $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
$A_1$ $S_7^9$ $S_7^9$ $S_8^9$ $S_4^9$ $S_4^9$ $S_9^9$ $S_3^9$ $S_4^9$
$A_2$ $S_7^9$ $S_4^9$ $S_7^9$ $S_6^9$ $S_9^9$ $S_7^9$ $S_7^9$ $S_7^9$
$A_3$ $S_8^9$ $S_7^9$ $S_6^9$ $S_3^9$ $S_7^9$ $S_8^9$ $S_6^9$ $S_7^9$
$A_4$ $S_2^9$ $S_3^9$ $S_8^9$ $S_8^9$ $S_9^9$ $S_6^9$ $S_8^9$ $S_5^9$
$A_5$ $S_8^9$ $S_4^9$ $S_8^9$ $S_3^9$ $S_2^9$ $S_9^9$ $S_9^9$ $S_5^9$
$A_6$ $S_7^9$ $S_2^9$ $S_9^9$ $S_7^9$ $S_9^9$ $S_3^9$ $S_5^9$ $S_4^9$
$A_7$ $S_1^9$ $S_5^9$ $S_8^9$ $S_7^9$ $S_7^9$ $S_7^9$ $S_7^9$ $S_7^9$
Table 25. The weights of the attributes.
Attribute $x_1$ $x_2$ $x_3$ $x_4$ $x_5$ $x_6$ $x_7$ $x_8$
Weight 0.1258 0.1774 0.1025 0.1222 0.1137 0.0953 0.1409 0.1222
Table 26. The weighted normalized DM matrix.
Alternative x 1 x 2 x 3 x 4
A 1 (0.0373,0.0326,0.0300)(0.0855,0.0690,0.0000)(0.0342,0.0337,0.0304)(0.0385,0.0349,0.0181)
A 2 (0.0419,0.0390,0.0326)(0.0536,0.0263,0.0169)(0.0321,0.0266,0.0146)(0.0387,0.0349,0.0272)
A 3 (0.0419,0.0409,0.0373)(0.0591,0.0591,0.0557)(0.0315,0.0228,0.0195)(0.0375,0.0291,0.0136)
A 4 (0.0332,0.0180,0.0093)(0.0591,0.0564,0.0197)(0.0489,0.0456,0.0000)(0.0377,0.0362,0.0349)
A 5 (0.0380,0.0373,0.0300)(0.0522,0.0263,0.0169)(0.0318,0.0304,0.0244)(0.0360,0.0349,0.0136)
A 6 (0.0394,0.0326,0.0240)(0.0468,0.0131,0.0084)(0.0342,0.0342,0.0298)(0.0349,0.0344,0.0317)
A 7 (0.0419,0.0379,0.0047)(0.0430,0.0329,0.0084)(0.0456,0.0451,0.0000)(0.0519,0.0475,0.0000)
Alternative x 5 x 6 x 7 x 8
A 1 (0.0379,0.0313,0.0168)(0.0318,0.0313,0.0227)(0.0425,0.0157,0.0134)(0.0389,0.0233,0.0181)
A 2 (0.0379,0.0349,0.0325)(0.0310,0.0272,0.0247)(0.0470,0.0429,0.0365)(0.0371,0.0317,0.0291)
A 3 (0.0325,0.0323,0.0295)(0.0308,0.0282,0.0182)(0.0413,0.0335,0.0313)(0.0370,0.0349,0.0317)
A 4 (0.0379,0.0329,0.0217)(0.0318,0.0318,0.0212)(0.0417,0.0379,0.0335)(0.0349,0.0348,0.0226)
A 5 (0.0323,0.0271,0.0084)(0.0318,0.0295,0.0227)(0.0470,0.0470,0.0411)(0.0407,0.0361,0.0226)
A 6 (0.0379,0.0325,0.0311)(0.0318,0.0309,0.0106)(0.0356,0.0261,0.0067)(0.0334,0.0291,0.0181)
A 7 (0.0312,0.0295,0.0271)(0.0302,0.0272,0.0247)(0.0548,0.0538,0.0000)(0.0475,0.0325,0.0233)
Table 27. The relative values of A i .
Distance Parameter Algorithm $A_1$ $A_2$ $A_3$ $A_4$ $A_5$ $A_6$ $A_7$
λ = 1 PT ($C_i$) 0.4370 0.5545 0.6555 0.4802 0.4167 0.2644 0.3808
TOPSIS ($CD_i$) 0.4966 0.5492 0.6070 0.5217 0.4790 0.3521 0.4482
VIKOR ($MC_i$) 0.4458 0.9999 0.8332 0.3952 0.2124 0.1100 0.5794
Pang Qi et al.'s method ($CI_i$) −0.4627 −0.2415 −0.0000 −0.3566 −0.5358 −1.0676 −0.6659
λ = 2 PT ($C_i$) 0.4387 0.6077 0.6942 0.4768 0.4441 0.3054 0.3709
TOPSIS ($CD_i$) 0.4967 0.5755 0.6213 0.5205 0.4974 0.3916 0.4427
VIKOR ($MC_i$) 0.3811 0.9999 0.9048 0.3483 0.2452 0.0824 0.3601
Pang Qi et al.'s method ($CI_i$) −0.5607 −0.1859 −0.0000 −0.4418 −0.5030 −0.9665 −0.8629
Table 28. The ranking of A i .
Distance Parameter Algorithm Ranking
λ = 1 PT ($C_i$) $A_3 \succ A_2 \succ A_4 \succ A_1 \succ A_5 \succ A_7 \succ A_6$
TOPSIS ($CD_i$) $A_3 \succ A_2 \succ A_4 \succ A_1 \succ A_5 \succ A_7 \succ A_6$
VIKOR ($MC_i$) $A_2 \succ A_3 \succ A_7 \succ A_1 \succ A_4 \succ A_5 \succ A_6$
Pang Qi et al.'s method ($CI_i$) $A_3 \succ A_2 \succ A_4 \succ A_1 \succ A_5 \succ A_7 \succ A_6$
λ = 2 PT ($C_i$) $A_3 \succ A_2 \succ A_4 \succ A_5 \succ A_1 \succ A_7 \succ A_6$
TOPSIS ($CD_i$) $A_3 \succ A_2 \succ A_4 \succ A_5 \succ A_1 \succ A_7 \succ A_6$
VIKOR ($MC_i$) $A_2 \succ A_3 \succ A_1 \succ A_7 \succ A_4 \succ A_5 \succ A_6$
Pang Qi et al.'s method ($CI_i$) $A_3 \succ A_2 \succ A_4 \succ A_5 \succ A_1 \succ A_7 \succ A_6$
