A MAGDM Algorithm with Multi-Granular Probabilistic Linguistic Information

The traditional multi-attribute group decision making (MAGDM) method needs to be improved to integrate assessment information under multi-granular probabilistic linguistic environments. Some novel distance measures between two multi-granular probabilistic linguistic term sets (PLTSs) are proposed, and the distance measures are proved to be reasonable. To calculate the weights of the alternative attributes, an extended cross-entropy method for multi-granular PLTSs is proposed. Then, a novel extended MAGDM algorithm based on prospect theory (PT) is proposed. Two case studies of decision making (DM) on purchasing a car are provided to illustrate the application of the extended MAGDM algorithm. The case analyses illustrate the novelty, feasibility, and applicability of the proposed MAGDM algorithm by comparing it with three other algorithms based on TOPSIS, VIKOR, and Pang Qi et al.'s method. The analysis results demonstrate that the proposed algorithm based on PT is superior.


Introduction
MAGDM is a hot issue, which aims at finding the optimal alternative among alternatives with multiple attributes [1]. MAGDM is widely applied in the real world, such as in enterprise strategy planning [2], the choice of appropriate hospitals [3], quality assessments [4], the selection of investment strategies [5], and so on. Integrating decision makers' (DMs') preference information on different attributes is an important prerequisite [6]. Since Zadeh first proposed the fuzzy set as the basic fuzzy decision-making model in 1965 [7], more and more authors have focused on fuzzy MAGDM. In practical DM problems, the DMs express their preferences on the considered alternatives by linguistic terms, such as "bad", "medium", or "good", and then make the optimal decision by some appropriate DM method.
In practice, the same object may be assessed on different platforms, or the same object may be described on the same platform by fuzzy linguistic information of different granularities. The traditional method of group decision making (GDM), which works only with linguistic information of a single granularity, cannot be used to integrate such hybrid assessment information. Therefore, multi-granular linguistic term sets need to be described efficiently. To this end, basic distance measures of probabilistic linguistic term sets (PLTSs) with multi-granular probabilistic linguistic information are proposed first. Based on these distance measures, a novel MAGDM algorithm based on prospect theory (PT) is given in this paper. Then, two practical case studies on purchasing a car are employed to illustrate the application of the extended algorithm based on PT. Three other algorithms, based on TOPSIS, VIKOR, and Pang Qi et al.'s method, are given for comparison.
Until now, there has been a great deal of research on linguistic decision making (LDM) [8]. For example, Herrera and Verdegay [9] proposed linguistic assessments in GDM in 1993. Then, Herrera et al. [10,11] built several GDM models with linguistic settings. Later, Xu proposed a goal programming model for multi-attribute decision making (MADM) under a linguistic environment [12]. Ben-Arieh and Chen [13] studied the aggregation of opinions and consensus measures in linguistic GDM. Xu [14] gave a method based on the uncertain linguistic ordered weighted geometric (LOWG) operator and the induced uncertain LOWG operator for GDM with uncertain multiplicative linguistic preference relations. Liu et al. [15] gave a MAGDM approach based on a prioritized aggregation operator under hesitant intuitionistic fuzzy linguistic environments. Liao et al. [16] proposed another MAGDM approach based on two intuitionistic multiplicative distance measures.
However, DMs express their preferences inaccurately under a linguistic environment because of the fuzziness and uncertainty of human thinking. DMs may hesitate among several possible linguistic terms. Therefore, Rodriguez et al. [17] introduced hesitant fuzzy linguistic term sets (HFLTSs), based on hesitant fuzzy sets (HFSs) [18] and linguistic term sets (LTSs) [19], which allow a DM to give several possible values for a linguistic variable. In most of the current research on HFLTSs, the DMs give all possible values equal importance or weight. Obviously, this differs from the real world. In both individual DM and GDM problems, the DMs can prefer some of the possible linguistic terms, leading to a set of possible values with different importance degrees. Then, the assessment information includes both several possible linguistic terms and the associated probabilistic information. This information can be described as a probabilistic distribution [20-22], importance degrees [23], belief degrees [24,25], and so on. Ignoring this information may lead to erroneous decision results, whereas accurate preference information of the DMs can be obtained with probabilistic linguistic term sets (PLTSs). Along this line, Wang and Hao [26] proposed proportional linguistic terms. Under a general form of probabilistic distributions, Zhang et al. [22] and Wu and Xu [21] improved the model. Moreover, Yang and Xu [25] handled partial ignorance within the framework of evidential reasoning. Therefore, many research results on PLTSs in MAGDM have been reported [27,28]. Xu and Zhou [29,30] studied groups of DMs under the hesitant probabilistic fuzzy environment. Kobina et al. [31] developed probabilistic linguistic power aggregation operators for multi-criteria group decision making (MCGDM).
Some methods have been proposed for GDM with multi-granular linguistic information [32,33]. However, there is little research on MAGDM with multi-granular probabilistic linguistic information. Then, how do we measure the distance between two PLTSs? Although there has been much research on distance measures, such as for fuzzy sets [6], interval-valued fuzzy sets [34], intuitionistic fuzzy sets [35], interval-valued intuitionistic fuzzy sets [36], hesitant fuzzy sets [18], interval-valued hesitant fuzzy sets [37], hesitant fuzzy linguistic term sets [17], and so on, there is still little research on distance measures for PLTSs with multi-granular linguistic information. In order to solve these problems, distance measures of PLTSs with multi-granular linguistic information are proposed. Owing to the practicality of prospect theory (PT), many scholars use PT to solve practical problems. For example, Wang et al. [38] proposed a GDM method based on PT for emergency situations, and Yao et al. [39] solved the GDM problem for the green supply chain. In this paper, a novel algorithm for MAGDM based on PT is proposed.

Preliminaries
DMs can use LTSs to express their preferences on the considered objects. The additive LTS is the most widely used and is defined as follows [40]: $S = \{s_i \mid i = 0, 1, \ldots, g-1\}$ is a $g$-granular fuzzy linguistic set, where $s_i$ is a linguistic variable, $s_0$ and $s_{g-1}$ denote the lower and upper limits of the linguistic terms, and $g$ is a positive integer. The linguistic terms have the following characteristics: the set is ordered, i.e., $s_i > s_j$ if $i > j$; the negation operator is defined by $\mathrm{neg}(s_i) = s_j$, where $i + j = g - 1$.
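The ordering and negation rules of the additive LTS can be sketched in code. This is a minimal illustration under the assumption (stated above) that a $g$-granular set has terms with subscripts $0$ through $g-1$; the class and its names are illustrative, not from the paper.

```python
# Sketch of an additive linguistic term set (LTS). Assumption: a g-granular
# set has terms s_0 .. s_{g-1}, matching the ordering and negation rules above.

class AdditiveLTS:
    def __init__(self, g):
        self.g = g  # granularity: the number of linguistic terms

    def neg(self, i):
        # Negation operator: neg(s_i) = s_j with i + j = g - 1
        return (self.g - 1) - i

    def greater(self, i, j):
        # Ordering: s_i > s_j iff i > j
        return i > j

S7 = AdditiveLTS(7)          # a 7-granular set: s_0 ("very bad") .. s_6 ("very good")
print(S7.neg(2))             # neg(s_2) = s_4
print(S7.greater(5, 3))      # s_5 > s_3 -> True
```

For instance, in a 7-granular set the negation of "slightly good" ($s_4$) is "slightly bad" ($s_2$), since $4 + 2 = 6 = g - 1$.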
Because the DMs may hesitate among several possible values in DM, Rodriguez et al. [23] proposed the definition of HFLTSs as follows.
Definition 1. Let $S = \{s_0, s_1, \ldots, s_{g-1}\}$ be an LTS; then, an HFLTS is an ordered finite subset of the consecutive linguistic terms of $S$ [41].
Let $S = \{s_0, s_1, \ldots, s_{g-1}\}$ be an LTS. A PLTS can be defined as $L(p) = \{L^{(k)}(p^{(k)}) \mid L^{(k)} \in S,\ p^{(k)} \ge 0,\ k = 1, 2, \ldots, \#L(p),\ \sum_{k=1}^{\#L(p)} p^{(k)} \le 1\}$, where $L^{(k)}(p^{(k)})$ is the linguistic term $L^{(k)}$ associated with the probability $p^{(k)}$ and $\#L(p)$ is the number of all the different linguistic terms in $L(p)$ [27].
If $\sum_{k=1}^{\#L(p)} p^{(k)} = 1$, then we have complete information on the probabilistic distribution over all the possible linguistic terms; if $\sum_{k=1}^{\#L(p)} p^{(k)} < 1$, then partial ignorance exists because the current assessment information is insufficient. In particular, $\sum_{k=1}^{\#L(p)} p^{(k)} = 0$ means complete ignorance. Therefore, handling the ignorance in $L(p)$ is a crucial research issue for the application of PLTSs.
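One common way of handling partial ignorance (used, e.g., in Pang et al.'s work on PLTSs, though the paper's exact treatment is not reproduced here) is to redistribute the known probabilities proportionally so that they sum to 1. A minimal sketch, modelling a PLTS as a list of (term index, probability) pairs:

```python
# Proportional normalization of a PLTS with partial ignorance (an assumption:
# one common convention, not necessarily this paper's exact formula).

def normalize_probs(plts):
    total = sum(p for _, p in plts)
    if total == 0:
        raise ValueError("complete ignorance: no probabilistic information")
    return [(i, p / total) for i, p in plts]

# s_3 with probability 0.4 and s_4 with 0.2 sum to 0.6 (partial ignorance):
print(normalize_probs([(3, 0.4), (4, 0.2)]))   # probabilities become 2/3 and 1/3
```

Complete ignorance (all probabilities zero) cannot be repaired this way and is flagged as an error, consistent with the remark above that it needs special handling.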
The numbers of linguistic terms in the PLTSs given by a DM are usually different. Therefore, linguistic terms need to be added to the PLTSs with relatively few terms, so that all PLTSs contain the same number of linguistic terms.

Definitions of Multi-Granular Probabilistic Linguistic Term Sets
Inspired by Reference [27], some definitions of multi-granular probabilistic linguistic term sets are proposed as follows.
(2) If $\#L_1(p) \neq \#L_2(p)$, then, by Definition 4, we add some elements to the PLTS with the smaller number of elements.
The PLTSs obtained by Definition 5 are called normalized PLTSs. For convenience, the normalized PLTSs are also denoted by $L_1(p)$ and $L_2(p)$.
Because the positions of the elements in a PLTS are arbitrary, we first need to obtain the ordered PLTSs, so that the operational results on PLTSs are determined directly. A PLTS is ordered by arranging its elements by the values of $p^{(k)} \times r^{(k)}$ ($k = 1, 2, \ldots, \#L(p)$) in descending order, where $r^{(k)}$ is the subscript of the linguistic term $L^{(k)}$.
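The two normalization steps above can be sketched as follows. This is a hedged illustration under common PLTS conventions (assumptions: padding uses the smallest linguistic term with probability zero, and the subscript is normalized by $g-1$ before ordering so that terms from sets of different granularities are comparable); the paper's exact definitions may differ in detail.

```python
# Step 1: pad the shorter PLTS so both have the same number of elements.
def pad(plts, target_len):
    plts = list(plts)
    smallest = min(i for i, _ in plts)
    while len(plts) < target_len:
        plts.append((smallest, 0.0))  # added terms carry zero probability
    return plts

# Step 2: order a PLTS by p * r descending, with r = i/(g-1) the
# granularity-normalized subscript (an assumption for the multi-granular case).
def order(plts, g):
    return sorted(plts, key=lambda t: t[1] * t[0] / (g - 1), reverse=True)

a = [(4, 0.6), (2, 0.4)]          # a PLTS on a 5-granular set
b = [(5, 1.0)]                    # a PLTS on a 7-granular set
b = pad(b, len(a))                # -> [(5, 1.0), (5, 0.0)]
print(order(a, 5), order(b, 7))
```

After these two steps, the PLTSs being compared have equal length and a determinate element order, which is what the distance measures in the next section require.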

Distance Measures between Multi-Granular PLTSs
The normalized distance measures are extended, and generalized distance measures between two multi-granular PLTSs in discrete cases are proposed as follows.
If $L_1(p)$ and $L_2(p)$ are normalized ordered PLTSs as in Definitions 5 and 6, then the distance measures between two multi-granular PLTSs are defined as follows.
Obviously, Definition 8 satisfies the three conditions of the distance definition as in Definition 7.
The generalized Euclidean distance between $L_1(p)$ and $L_2(p)$ can be given analogously. Inspired by the generalized idea proposed by Yager [42], this paper gives the generalized distance as
$$d(L_1(p), L_2(p)) = \left( \frac{1}{\#L(p)} \sum_{k=1}^{\#L(p)} \left| p_1^{(k)} \frac{r_1^{(k)}}{g_1 - 1} - p_2^{(k)} \frac{r_2^{(k)}}{g_2 - 1} \right|^{\lambda} \right)^{1/\lambda},$$
where $\lambda > 0$, $r_1^{(k)}$ and $r_2^{(k)}$ are the subscripts of the $k$th linguistic terms, and $g_1$ and $g_2$ are the granularities of the two PLTSs.
In particular, if $\lambda = 1$, the generalized distance reduces to the generalized Hamming distance; if $\lambda = 2$, it reduces to the generalized Euclidean distance. Thus, Definition 9 extends the normalized Hamming and Euclidean distances.
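As a hedged sketch of the generalized distance (the paper's exact multi-granular normalization may differ), each subscript can be mapped to $[0,1]$ by dividing by $g-1$ before comparing probability-weighted terms across granularities:

```python
# Generalized distance between two normalized ordered multi-granular PLTSs.
# Assumption: subscripts run 0..g-1 and are normalized by g-1; this is one
# reasonable reading of the definition, not necessarily the paper's formula.

def generalized_distance(p1, g1, p2, g2, lam=2.0):
    assert len(p1) == len(p2), "normalize (pad) the PLTSs first"
    n = len(p1)
    s = sum(abs(pa * ia / (g1 - 1) - pb * ib / (g2 - 1)) ** lam
            for (ia, pa), (ib, pb) in zip(p1, p2))
    return (s / n) ** (1.0 / lam)

a = [(4, 0.6), (2, 0.4)]   # PLTS on a 5-granular set
b = [(6, 0.7), (3, 0.3)]   # PLTS on a 7-granular set
print(generalized_distance(a, 5, b, 7, lam=1))  # generalized Hamming, ~0.075
print(generalized_distance(a, 5, b, 7, lam=2))  # generalized Euclidean
```

Varying `lam` recovers the Hamming ($\lambda = 1$) and Euclidean ($\lambda = 2$) special cases described above.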

A MAGDM Algorithm Based on PT
A MAGDM problem with multi-granular probabilistic linguistic information is described as follows.
There are a set of $m$ alternatives and the weight vector $w = (w_1, w_2, \ldots, w_n)$ of the attributes $X = (x_1, x_2, \ldots, x_n)$, where $x_j$ is the $j$th attribute of the alternatives, $0 \le w_j \le 1$, and $\sum_{j=1}^{n} w_j = 1$. The DMs assess the alternatives on the attributes by utilizing linguistic term sets to obtain a set of linguistic decision matrices.
Then, the assessment linguistic information is used to build a multi-granular probabilistic linguistic decision matrix. In MAGDM problems, the attributes can be classified into two types: benefits and costs. The higher a benefit attribute is, the better the situation is, while a cost attribute is the reverse [43]. In this paper, we suppose all attributes are benefit attributes.

Definition 10. Prospect Theory [44]: $\Delta x$ denotes the gain ($\Delta x \ge 0$) or the loss ($\Delta x < 0$) of the outcome relative to the reference point (RP). The prospect value function $V(\Delta x)$ is given by
$$V(\Delta x) = \begin{cases} (\Delta x)^{\alpha}, & \Delta x \ge 0, \\ -\theta(-\Delta x)^{\beta}, & \Delta x < 0, \end{cases}$$
where $\alpha$ is a parameter that represents the decision maker's sensitivity to gains, $\beta$ is a parameter that represents the decision maker's sensitivity to losses, $0 \le \alpha, \beta \le 1$, and $\theta$ ($\theta > 1$) is a parameter that represents the decision maker's degree of loss aversion.
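The prospect value function of Definition 10 is straightforward to implement. The default parameter values below are Kahneman and Tversky's classical estimates ($\alpha = \beta = 0.88$, $\theta = 2.25$), used here only for illustration:

```python
# Prospect value function: gains are evaluated with exponent alpha, losses
# with exponent beta, and losses are scaled by loss aversion theta > 1.

def prospect_value(dx, alpha=0.88, beta=0.88, theta=2.25):
    if dx >= 0:
        return dx ** alpha                  # gain relative to the reference point
    return -theta * (-dx) ** beta           # loss, weighted more steeply

print(prospect_value(0.2))    # a modest gain
print(prospect_value(-0.2))   # a same-sized loss hurts ~theta times more
```

Because $\theta > 1$, the function is steeper for losses than for gains, which is exactly the loss-aversion behavior exploited by the sensitivity analysis later in the paper.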
Inspired by the definition of RP, the theory of TOPSIS is extended as follows.

Definition 11. Generalized prospect value function ( ) based on TOPSIS is given by
where $L_1(p)$ and $L_2(p)$ are normalized ordered multi-granular PLTSs.
Under Definition 11, a novel MAGDM algorithm based on PT is proposed; its diagram is shown in Figure 1. The specific steps of the algorithm are as follows.
Step 1. Information gathering process: the individual reference points (RPs) over the alternatives on the different attributes provided by the experts are gathered as $R = [L_{ij}(p)]_{m \times n}$ by Equation (9).
Inspired by Reference [45], an extended cross-entropy method is proposed to calculate the attributes' weights.
Step 2. Compute the weight vector $w = (w_1, w_2, \ldots, w_n)$ of the $n$ attributes $X = (x_1, x_2, \ldots, x_n)$ by Equation (12), where $x_j$ is the $j$th attribute of the alternatives, $0 \le w_j \le 1$, and $\sum_{j=1}^{n} w_j = 1$. In this paper, let $\lambda = 2$ [46].

Step 3. Aggregation process: obtain the weighted DM matrix $R^*$ by Equation (13).

Step 4. Calculate the positive ideal solution and the negative ideal solution, respectively. The probabilistic linguistic positive ideal solution (PLPIS) and the probabilistic linguistic negative ideal solution (PLNIS) of the alternatives are then defined, respectively.
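The paper's extended cross-entropy weighting formula (Equation (12)) is not reproduced in this text, so as an illustrative stand-in only, the classical Shannon entropy-weight scheme is sketched below: attributes whose assessments vary more across alternatives receive larger weights. This is a named substitute for, not the paper's own, weighting method.

```python
# Classical entropy-weight scheme (an assumption: a stand-in for the paper's
# extended cross-entropy method, whose exact formula is not given here).

import math

def entropy_weights(matrix):
    # matrix[i][j]: a crisp score of alternative i on attribute j, in (0, 1]
    m, n = len(matrix), len(matrix[0])
    weights = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        probs = [v / total for v in col]
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        weights.append(1 - e)               # divergence degree of attribute j
    s = sum(weights)
    return [w / s for w in weights]         # normalize so the weights sum to 1

scores = [[0.9, 0.4], [0.8, 0.9], [0.7, 0.2]]   # made-up scores for illustration
print(entropy_weights(scores))
```

In this toy example the second attribute's scores spread out more, so it receives the larger weight, matching the intuition behind entropy-based weighting.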
In order to select the preferred alternative or to rank all the alternatives, we should compute the distance between each alternative and the PLPIS and the distance between each alternative and the PLNIS. Certainly, a better alternative should be closer to the PLPIS and farther from the PLNIS.
Step 5. Calculate the gain value and the loss value: gains and losses are calculated with respect to the group reference points of the different alternatives.
Step 6. Calculate the ratio between the gain value and the loss value of each alternative.
Rank the alternatives by the values of this ratio. Certainly, the bigger the closeness degree is, the better the alternative is.
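Steps 5 and 6 can be sketched with scalar distances (an assumption: the `d_pis` and `d_nis` inputs would come from the multi-granular distance measures of the earlier section). The distance to the PLNIS is read as a gain and the distance to the PLPIS as a loss, both passed through the prospect value function, and the alternatives are ranked by the resulting ratio:

```python
# Gains/losses ratio ranking (a sketch; parameter values are Kahneman and
# Tversky's classical estimates, used only for illustration).

def prospect_value(dx, alpha=0.88, beta=0.88, theta=2.25):
    return dx ** alpha if dx >= 0 else -theta * (-dx) ** beta

def rank_by_ratio(d_pis, d_nis):
    # d_pis[i]: distance of alternative i to the PLPIS (treated as a loss)
    # d_nis[i]: distance of alternative i to the PLNIS (treated as a gain)
    ratios = [prospect_value(g) / -prospect_value(-l)
              for g, l in zip(d_nis, d_pis)]
    # indices of the alternatives, best (largest ratio) first
    return sorted(range(len(ratios)), key=lambda i: ratios[i], reverse=True)

print(rank_by_ratio(d_pis=[0.1, 0.3, 0.2], d_nis=[0.3, 0.1, 0.2]))
```

The first alternative in the toy data is nearest to the PLPIS and farthest from the PLNIS, so it comes out on top, as Step 6 intends.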

Case Studies
With the popularity of the Internet, e-commerce has become an indispensable part of daily life. For example, people who want to purchase a car will look at all kinds of information about cars on the Internet, such as scoring data, word-of-mouth data, forum reviews, and so on, and take it into consideration to decide which car to buy. Suppose there are seven new energy cars to choose from. People collect the assessment information for these cars from users through three channels (scoring data, word-of-mouth data, and forum reviews) on eight attributes, namely space, power, handling, power consumption, comfort, appearance, interior decoration, and cost performance, on the "Auto Home" website. The seven cars are ZhiXuan, WeiChiFS, LiWei, FeiDu, RuiNa RV, KiaK2, and JinRui. Since the online scoring data use a 5-point system, the scoring data are mapped to 5-granular linguistic term sets. The word-of-mouth data of the overall assessment for cars can be mapped to 7-granular linguistic term sets. Due to the complexity of the community review information, this information is mapped to 9-granular linguistic term sets.
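One natural way to turn such raw review data into a PLTS (a sketch; the term indices and counts below are made up for illustration, and the paper may aggregate differently) is to let the share of reviewers choosing each linguistic term become that term's probability:

```python
# From raw per-review linguistic choices to a PLTS: each term's probability
# is its relative frequency among the collected reviews.

from collections import Counter

def reviews_to_plts(term_indices):
    counts = Counter(term_indices)
    n = len(term_indices)
    return sorted((i, c / n) for i, c in counts.items())

# 10 hypothetical word-of-mouth reviews on a 7-granular set:
print(reviews_to_plts([5, 5, 5, 5, 4, 4, 4, 6, 6, 3]))
# -> [(3, 0.1), (4, 0.3), (5, 0.4), (6, 0.2)]
```

A car assessed this way on a 5-point scoring system, a 7-granular word-of-mouth scale, and a 9-granular forum scale yields exactly the multi-granular PLTSs that the distance measures of this paper are designed to compare.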

The Applications of the Algorithm
The applications of the algorithm based on PT are shown as follows.
Step 1. Collect the users' assessment information on the "Auto Home" website up to May 20, 2018. See Tables 1-3.

Here, the scoring data (see Table 1) are the final average values of the seven cars on the eight attributes. The assessment information in Table 2 is the general impression from the word-of-mouth data, and the assessment information in Table 3 is from the forum review data. These data were obtained from the "Auto Home" website.
Then, we get the users' overall assessment probabilistic linguistic term sets.See Table 4.
Table 13. The ranking of the alternatives.

Sensitivity Analysis
In order to analyze the sensitivity of the parameters, take different parameter values [38] and rank the alternatives ($i = 1, 2, \ldots, 7$) by the algorithm based on PT as in Section 4.1. The results are as follows.
Table 15. The ranking of the alternatives.

From Tables 14 and 15, we can find that the rankings of the alternatives ($i = 1, 2, \ldots, 7$) differ, except when $\alpha = 0.85$, $\beta = 0.85$, and $\theta = 4.1$ ($\lambda = 2$) and when $\alpha = 0.725$, $\beta = 0.717$, and $\theta = 2.04$ ($\lambda = 2$), for which the results of the algorithm based on PT coincide. The ranking results change as the parameters ($\theta$, $\alpha$, and $\beta$) are changed. This is consistent with the meaning of these parameters: $\alpha$ and $\beta$ are power parameters related to gains and losses, respectively, and $\theta$ is the risk-aversion parameter, which makes the value function steeper for losses than for gains when $\theta > 1$; the larger $\theta$ is, the greater the degree of risk aversion. The rankings of the alternatives therefore differ. This shows that the algorithm based on PT proposed in Section 4.1 is scientific.

Comparative Analysis
In order to illustrate the feasibility and efficiency of the algorithm based on PT, we calculate the results of three other algorithms, based on TOPSIS, VIKOR, and Pang Qi et al.'s method, respectively.
The results of the algorithm based on TOPSIS are as follows: the closeness coefficients of the alternatives are computed.
The parameter in $[0, 1]$ represents the risk preference of the decision maker: a value greater than 0.5 means the DMs are optimistic, while a value less than 0.5 means they are pessimistic. Its value should be given by the DMs beforehand; here, it is set to 0.5. The higher the closeness coefficient is, the better the alternative is.
Calculate the closeness coefficients by Equation (19); the results are shown in Table 16 and Figure 3. Rank the alternatives by these values ($i = 1, 2, \ldots, 7$). See Table 17.
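The TOPSIS ranking step can be sketched with scalar distances (an assumption: the inputs would come from the PLTS distance measures; the classical closeness coefficient is used here, without the paper's risk parameter):

```python
# Classical TOPSIS closeness coefficient:
#   CC_i = d(z_i, NIS) / (d(z_i, PIS) + d(z_i, NIS)),
# ranking higher-CC alternatives first.

def closeness(d_pis, d_nis):
    return [dn / (dp + dn) for dp, dn in zip(d_pis, d_nis)]

cc = closeness(d_pis=[0.1, 0.3, 0.2], d_nis=[0.3, 0.1, 0.2])
print(cc)   # approximately [0.75, 0.25, 0.5]
```

An alternative that is close to the positive ideal and far from the negative ideal gets a coefficient near 1, so sorting by `cc` in descending order reproduces the ranking used in Table 17.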
Table 17. The ranking of the alternatives.

The results of the algorithm based on VIKOR are as follows: the whole benefit index, the individual regret index, and the compromise index are calculated. The strategy parameter denotes the weight of the strategy of maximum whole benefits, and its complement (one minus the parameter) is the weight of the individual regret strategy; here, the parameter is set to 0.5. The alternatives are then ranked by the compromise index.
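The three VIKOR indices can be sketched in their classical crisp form (an assumption: the paper's probabilistic-linguistic variant builds these from PLTS distances instead of crisp scores):

```python
# Classical VIKOR: S_i is the whole-benefit index, R_i the individual-regret
# index, and Q_i the compromise index with strategy weight v. Assumes each
# attribute's scores vary across alternatives (so the ranges are nonzero).

def vikor(matrix, weights, v=0.5):
    # matrix[i][j]: benefit score of alternative i on attribute j
    m, n = len(matrix), len(matrix[0])
    best = [max(matrix[i][j] for i in range(m)) for j in range(n)]
    worst = [min(matrix[i][j] for i in range(m)) for j in range(n)]
    S, R = [], []
    for i in range(m):
        terms = [weights[j] * (best[j] - matrix[i][j]) / (best[j] - worst[j])
                 for j in range(n)]
        S.append(sum(terms))   # whole-benefit (group utility) index
        R.append(max(terms))   # individual-regret index
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    Q = [v * (S[i] - s_min) / (s_max - s_min)
         + (1 - v) * (R[i] - r_min) / (r_max - r_min) for i in range(m)]
    return S, R, Q   # in classical VIKOR, a smaller Q means a better compromise

S, R, Q = vikor([[0.9, 0.4], [0.6, 0.9], [0.3, 0.6]], weights=[0.5, 0.5])
print(Q)
```

With `v = 0.5`, as in the text, group utility and individual regret are weighted equally when forming the compromise index.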
Table 19. The ranking of the alternatives.

The results of the algorithm based on Pang Qi et al.'s method [27] are as follows: calculate the closeness coefficients by Equation (23) and obtain the results. See Table 20 and Figure 5. Rank the alternatives by their closeness coefficients ($i = 1, 2, \ldots, 7$). See Table 21.
Table 21. The ranking of the alternatives.

From the comparative analysis, we can find that both for $\lambda = 1$ and for $\lambda = 2$, the ranking results of the algorithm based on PT are consistent with those of the algorithm based on TOPSIS (see Tables 13 and 17). Although the parameters ($\theta$, $\alpha$, and $\beta$) are changed, the rankings for $\lambda = 1$ and $\lambda = 2$ remain unchanged, respectively (see Tables 15 and 17). However, the rankings of the alternatives differ under the other three algorithms (see Tables 19 and 21). The comparative analysis results demonstrate that the algorithm based on PT is superior to the other three traditional algorithms.

The Second Case Study
In order to better illustrate the feasibility and validity of the algorithm based on PT, a second case study is given. There are seven cars to choose from on the "Auto Home" website: ATENZA, CAMRY, ACCORD, LAMANDO, SAGITAR, LAVIDA, and BORA. The computational and analytical processes are the same as in Section 4.1.
Collect the users' assessment information on the "Auto Home" website up to January 5, 2019. See Tables 22-24.

By the same method as in Section 4.1, the weight vector $w = (w_1, w_2, \ldots, w_8)$ on the eight attributes $X = (x_1, x_2, \ldots, x_8)$ is computed by Equation (12). See Table 25. Since the calculation process of this example is the same as that of Section 4.1, only the final calculation results are given. See Table 27. The ranking of the alternatives is shown in Table 28.
Table 28. The ranking of the alternatives.

Conclusions
This paper proposes a generalized distance measure method between two PLTSs with multi-granular linguistic information, which is helpful for dealing with multi-granular MAGDM problems. These distance measures improve the accuracy of multi-granular linguistic information in MAGDM problems, even when some assessment information is null. In particular, the parameter $\lambda$ of the extended distance measure method is a variable, which can be used to obtain different distance measure formulas according to people's needs. Under these distance measures, the extended MAGDM algorithm based on PT is proposed. From the sensitivity analyses of the parameters ($\theta$, $\alpha$, and $\beta$), we find that the rankings of the alternatives ($i = 1, 2, \ldots, 7$) differ, except when $\alpha = 0.85$, $\beta = 0.85$, and $\theta = 4.1$ for $\lambda = 2$ and when $\alpha = 0.725$, $\beta = 0.717$, and $\theta = 2.04$ for $\lambda = 2$, for which the results of the algorithm based on PT coincide. The ranking results change when the parameters ($\theta$, $\alpha$, and $\beta$) are changed, which is consistent with the meaning of these parameters. Therefore, the algorithm based on PT proposed in Section 4.1 is shown to be scientific. Two case studies of purchasing a car are given to demonstrate that the algorithm based on PT is valid and applicable, by comparison with the extended TOPSIS, VIKOR, and Pang Qi et al.'s algorithms. Here, the parameters can be selected according to what is needed in actual problems. From the comparative analyses, we find that both for $\lambda = 1$ and for $\lambda = 2$, the ranking results of the algorithm based on PT are consistent with those of the algorithm based on TOPSIS (see Tables 13 and 17), and these rankings remain unchanged although the parameters ($\theta$, $\alpha$, and $\beta$) are changed; however, the rankings of the alternatives differ under the other three algorithms. Therefore, the comparative analyses demonstrate the novelty, feasibility, and validity of the proposed MAGDM method based on PT. The novel MAGDM method in this paper can be used to deal with practical MAGDM problems under multi-granular probabilistic linguistic environments.
The MAGDM algorithm based on PT proposed in this paper also has some limitations. If there are too many attributes or alternatives, the computational size of the procedure in Section 4.1 might be quite large. However, software tools such as crawler technology, MATLAB, or Python can be used, so this is not a major obstacle in the use of this method. Whether there are more appropriate ways to measure the distances between two PLTSs with multi-granular linguistic information is also a worthwhile question. There are several directions for further investigation. Firstly, how to select an appropriate parameter $\lambda$ for calculating the distance between two PLTSs is a valuable problem. Secondly, the applications of these distance measures in other fields, such as cluster analysis and MCGDM problems, are interesting to research. Finally, the linguistic information can also be expressed by hesitant fuzzy numbers, interval fuzzy numbers, etc., to improve the fitting accuracy. These issues should be the focus of future work.

Table 1. The assessment information from scoring data.

Table 2. The assessment information from word-of-mouth data.

Table 3. The assessment information from forum review data.

Table 6. The weights of the attributes.

Table 7. The weighted normalized DM matrix.

Table 8. The positive ideal solution.

Table 9. The negative ideal solution.

Table 14. The gain/loss ratios of the alternatives.

Table 18. The compromise indices of the alternatives.

Table 22. The assessment information from scoring data.

Table 23. The assessment information from word-of-mouth data.

Table 24. The assessment information from forum review data.

Table 25. The weights of the attributes.

Table 26. The weighted normalized DM matrix.

Table 27. The relative values of the alternatives.