Article

TOPSIS Method for Probabilistic Linguistic MAGDM with Entropy Weight and Its Application to Supplier Selection of New Agricultural Machinery Products

1 School of Business, Sichuan Normal University, Chengdu 610101, China
2 School of Statistics, Southwestern University of Finance and Economics, Chengdu 611130, China
3 Communications Systems and Networks (CSN) Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Authors to whom correspondence should be addressed.
Entropy 2019, 21(10), 953; https://doi.org/10.3390/e21100953
Submission received: 14 August 2019 / Revised: 19 September 2019 / Accepted: 23 September 2019 / Published: 29 September 2019

Abstract
In multiple attribute group decision making (MAGDM) problems, uncertain decision information is well represented by linguistic term sets (LTSs), which are easily converted into probabilistic linguistic term sets (PLTSs). In this paper, a TOPSIS method is proposed for probabilistic linguistic MAGDM in which the attribute weights are completely unknown and the decision information takes the form of probabilistic linguistic numbers (PLNs). First, the scoring function is used to define a probabilistic linguistic entropy, which is then employed to derive the attribute weights objectively. Second, the optimal alternatives are determined by calculating the shortest distance from the probabilistic linguistic positive ideal solution (PLPIS) and the farthest distance from the probabilistic linguistic negative ideal solution (PLNIS). The proposed method extends the application range of the traditional entropy-weighted method, and it does not require the decision-maker to supply the attribute weights in advance. Finally, a numerical example on supplier selection for new agricultural machinery products illustrates the proposed method. The results show that the approach is simple, effective, and easy to compute, and that it can also successfully identify suitable alternatives in other selection problems.

1. Introduction

In many decision-making problems, it has traditionally been supposed that all information is depicted in the form of crisp numbers. However, most of a decision maker's assessment information is imprecise or uncertain [1,2,3], so he or she cannot express preferences with an exact numerical value [4,5,6,7]. To depict qualitative assessment information easily, Herrera and Martinez [8] introduced linguistic term sets (LTSs) for computing with words. Herrera and Martinez [9] combined linguistic and numerical information on the basis of the two-tuple fuzzy linguistic representation model. Herrera and Martinez [10] defined linguistic two-tuples for handling multigranular hierarchical linguistic contexts. Dong and Herrera-Viedma [11] developed a consistency-driven automatic method to set interval numerical scales of LTSs for linguistic GDM with preference relations. Recently, two-tuple linguistic processing models have been extended to interval numbers [12,13], intuitionistic fuzzy sets [14,15,16], hesitant fuzzy sets [17,18,19,20], and bipolar fuzzy sets [21,22]. Furthermore, Rodriguez et al. [23] defined hesitant fuzzy linguistic term sets (HFLTSs) on the basis of hesitant fuzzy sets [24] and linguistic term sets [25], which allow DMs to provide several possible linguistic values. However, in most current research on HFLTSs, all possible values supplied by the DMs have equal weight or importance, which is obviously inconsistent with reality. In both personal MADM and multiple attribute group decision making (MAGDM) problems, the DMs may offer several possible linguistic terms, and these offered values may have different probability distributions. Thus, Pang et al. [26] proposed probabilistic linguistic term sets (PLTSs) to overcome this defect and constructed a framework for ranking PLTSs by the score or deviation degree of each PLTS. Bai et al. [27] gave a comparison method and proposed a more efficient way to handle PLTSs.
Kobina et al. [28] proposed some probabilistic linguistic power operators for MAGDM based on classical power aggregation operators [29,30,31]. Liang et al. [32] developed probabilistic linguistic grey relational analysis (PL-GRA) for MAGDM based on the geometric Bonferroni mean [33,34,35,36]. Liao et al. [37] defined a linear programming method to deal with MADM with probabilistic linguistic information. Lin et al. [38] proposed an ELECTRE II method to deal with PLTSs for edge computing. Liao et al. [39] studied novel operations of PLTSs to support a probabilistic linguistic ELECTRE III method. Feng et al. [40] proposed a probabilistic linguistic QUALIFLEX method with possibility degree comparison. Chen et al. [41] employed probabilistic linguistic MULTIMOORA for cloud-based ERP system selection.
Entropy is an important and efficient tool for measuring uncertain information. Fuzzy entropy was first defined by Zadeh [42], and the cross-entropy method has its roots in the information theory of Shannon [43]. Kullback and Leibler [44] developed a "cross-entropy distance" measure between two probability distributions. Furtan [45] studied entropy theory in firm decision-making. Dhar et al. [46] investigated investment decision making with entropy reduction. Yang and Qiu [47] researched a decision-making method based on expected utility and entropy. Muley and Bajaj [48] used an entropy-based approach to solve fuzzy MADM. Xu and Hu [49] proposed entropy-based procedures for intuitionistic fuzzy MADM. Chen et al. [50] constructed an interval-valued intuitionistic fuzzy entropy. Lotfi and Fallahnejad [51] proposed an imprecise Shannon's entropy in MADM. Khaleie and Fasanghari [52] investigated an intuitionistic fuzzy MAGDM method using entropy and an association coefficient. Zhao et al. [53] extended the VIKOR method based on cross-entropy for interval-valued intuitionistic fuzzy MAGDM. Peng et al. [54] defined a cross-entropy for intuitionistic hesitant fuzzy sets in MADM. Tao et al. [55] developed entropy measures for linguistic information in MAGDM. Song et al. [56] studied MADM with dual uncertain information on the basis of grey incidence analysis and grey relative entropy optimization. Farhadinia and Xu [57] tackled hesitant fuzzy linguistic entropy and cross-entropy measures in MADM. Xue et al. [58] proposed a Pythagorean fuzzy LINMAP method with entropy theory for railway project investment. Biswas and Sarkar [59] developed a Pythagorean fuzzy TOPSIS for MAGDM with unknown weight information through an entropy measure. Xiao [60] gave an MADM model based on D-numbers and belief entropy. Xu and Luo [61] defined an information entropy risk measure for a large-group decision-making method.
The TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method was initially proposed by Hwang and Yoon [62] for solving MADM problems; it concentrates on choosing the alternative with the smallest distance from the positive ideal solution (PIS) and the largest distance from the negative ideal solution (NIS). In recent years, many scholars have applied the TOPSIS method to MADM and MAGDM problems [63,64,65,66,67,68]. The goal of this paper is to extend the TOPSIS method to solve probabilistic linguistic MAGDM with unknown weight information on the basis of information entropy. The innovations of this paper can be summarized as follows: (1) the TOPSIS method is extended to PLTSs with unknown weight information; (2) the probabilistic linguistic TOPSIS (PL-TOPSIS) method is proposed to solve probabilistic linguistic MAGDM problems with entropy weights; (3) a case study on supplier selection for new agricultural machinery products is supplied to illustrate the developed approach; and (4) comparative studies with the probabilistic linguistic weighted average (PLWA) operator and the PL-GRA method are provided to demonstrate the rationality of the PL-TOPSIS method.
The remainder of this paper is organized as follows. Section 2 reviews some basic concepts of PLTSs. In Section 3, the TOPSIS method is proposed for probabilistic linguistic MAGDM problems with entropy weights. In Section 4, a case study on supplier selection for new agricultural machinery products is given and comparative analyses are conducted. The study finishes with conclusions in Section 5.

2. Preliminaries

In this section, we review some concepts and operations related to linguistic term sets and PLTSs.
Definition 1.
([26]) Let $L = \{ l_\alpha \mid \alpha = -\theta, \ldots, -2, -1, 0, 1, 2, \ldots, \theta \}$ be an LTS. A linguistic term $l_\alpha$ expresses information equivalent to a value $\beta \in [0,1]$ via the transformation function $g$:

$$ g: [l_{-\theta}, l_{\theta}] \to [0,1], \quad g(l_\alpha) = \frac{\alpha + \theta}{2\theta} = \beta \tag{1} $$

Conversely, $\beta$ expresses information equivalent to the linguistic term $l_\alpha$ via the inverse transformation function $g^{-1}$:

$$ g^{-1}: [0,1] \to [l_{-\theta}, l_{\theta}], \quad g^{-1}(\beta) = l_{(2\beta - 1)\theta} = l_\alpha \tag{2} $$
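As a minimal sketch, the two transformation functions can be written directly from Equations (1) and (2); the choice of a seven-term LTS with $\theta = 3$ below is an illustrative assumption, not part of the definition:

```python
# Transformation functions g and g^{-1} from Definition 1, assuming an
# LTS with theta = 3 (seven terms l_{-3}, ..., l_3) for illustration.

def g(alpha: float, theta: int = 3) -> float:
    """Map the subscript alpha of a linguistic term l_alpha into [0, 1]."""
    return (alpha + theta) / (2 * theta)

def g_inv(beta: float, theta: int = 3) -> float:
    """Map a value beta in [0, 1] back to the subscript of l_alpha."""
    return (2 * beta - 1) * theta

print(g(3))        # l_3  -> 1.0
print(g(0))        # l_0  -> 0.5
print(g_inv(0.5))  # 0.5  -> subscript 0, i.e. l_0
```

Note that $g$ and $g^{-1}$ are inverses of each other on the term subscripts, which is what lets later formulas move freely between linguistic terms and numeric values.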
Pang, Wang and Xu [26] proposed a novel concept called probabilistic linguistic term sets to depict qualitative information.
Definition 2.
([26]) Given an LTS $L = \{ l_\alpha \mid \alpha = -\theta, \ldots, -2, -1, 0, 1, 2, \ldots, \theta \}$, a PLTS is defined as:

$$ L(p) = \left\{ l^{(\phi)}\big(p^{(\phi)}\big) \;\middle|\; l^{(\phi)} \in L,\; p^{(\phi)} \ge 0,\; \phi = 1, 2, \ldots, \#L(p),\; \sum_{\phi=1}^{\#L(p)} p^{(\phi)} \le 1 \right\} \tag{3} $$

where $l^{(\phi)}(p^{(\phi)})$ is the $\phi$-th linguistic term $l^{(\phi)}$ associated with the probability $p^{(\phi)}$, and $\#L(p)$ is the number of linguistic terms in $L(p)$. The linguistic terms in $L(p)$ are arranged in ascending order.
For ease of computation, Pang, Wang and Xu [26] normalized the PLTS $L(p)$ as $\tilde{L}(\tilde{p}) = \{ l^{(\phi)}(\tilde{p}^{(\phi)}) \mid l^{(\phi)} \in L,\ \tilde{p}^{(\phi)} \ge 0,\ \phi = 1, 2, \ldots, \#\tilde{L}(\tilde{p}),\ \sum_{\phi=1}^{\#L(p)} \tilde{p}^{(\phi)} = 1 \}$, where $\tilde{p}^{(\phi)} = p^{(\phi)} \big/ \sum_{\phi=1}^{\#L(p)} p^{(\phi)}$ for all $\phi = 1, 2, \ldots, \#\tilde{L}(\tilde{p})$.
Moreover, in the MAGDM process, the numbers of linguistic terms in different PLTSs often differ, which complicates computation. In this case, we need to increase the number of linguistic terms in the PLTSs with relatively few terms, so that all PLTSs have the same number of linguistic terms.
Definition 3.
([26]) Let $L = \{ l_\alpha \mid \alpha = -\theta, \ldots, -1, 0, 1, \ldots, \theta \}$ be an LTS, and let $\tilde{L}_1(\tilde{p}) = \{ l_1^{(\phi)}(\tilde{p}_1^{(\phi)}) \mid \phi = 1, 2, \ldots, \#\tilde{L}_1(\tilde{p}) \}$ and $\tilde{L}_2(\tilde{p}) = \{ l_2^{(\phi)}(\tilde{p}_2^{(\phi)}) \mid \phi = 1, 2, \ldots, \#\tilde{L}_2(\tilde{p}) \}$ be two PLTSs, where $\#\tilde{L}_1(\tilde{p})$ and $\#\tilde{L}_2(\tilde{p})$ are the numbers of linguistic terms in $\tilde{L}_1(\tilde{p})$ and $\tilde{L}_2(\tilde{p})$, respectively. If $\#\tilde{L}_1(\tilde{p}) > \#\tilde{L}_2(\tilde{p})$, then add $\#\tilde{L}_1(\tilde{p}) - \#\tilde{L}_2(\tilde{p})$ linguistic terms to $\tilde{L}_2(\tilde{p})$. The added linguistic terms should be the smallest linguistic term in $\tilde{L}_2(\tilde{p})$, and their probabilities should be zero.
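The normalization of Definition 2 and the padding of Definition 3 can be sketched as follows; representing a PLTS as a list of (subscript, probability) pairs in ascending order of subscript is an illustrative assumption:

```python
# Sketch of PLTS normalization (probabilities rescaled to sum to 1) and
# length padding (Definition 3). A PLTS is represented as a list of
# (alpha, probability) pairs sorted by alpha; this representation is an
# assumption for illustration only.

def normalize(plts):
    """Rescale probabilities: p~(phi) = p(phi) / sum of all p(phi)."""
    total = sum(p for _, p in plts)
    return [(alpha, p / total) for alpha, p in plts]

def pad(plts1, plts2):
    """Pad the shorter PLTS with copies of its smallest linguistic term,
    each with probability zero, so both PLTSs have equal length."""
    a, b = plts1[:], plts2[:]
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    smallest = min(alpha for alpha, _ in short)
    while len(short) < len(long_):
        short.insert(0, (smallest, 0.0))
    return a, b

L1 = normalize([(0, 0.4), (1, 0.4)])   # probabilities sum to 0.8 -> rescale
print(L1)                              # [(0, 0.5), (1, 0.5)]
L1p, L2p = pad(L1, [(2, 1.0)])
print(L2p)                             # [(2, 0.0), (2, 1.0)]
```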
Having defined the concept of PLTSs, we need a method to compare them. To do so, we first define the score and deviation degree of PLTSs.
Definition 4.
([26]) For a PLTS $\tilde{L}(\tilde{p}) = \{ l^{(\phi)}(\tilde{p}^{(\phi)}) \mid \phi = 1, 2, \ldots, \#\tilde{L}(\tilde{p}) \}$, the expected value $E(\tilde{L}(\tilde{p}))$ and deviation degree $\sigma(\tilde{L}(\tilde{p}))$ of $\tilde{L}(\tilde{p})$ are defined as:

$$ E(\tilde{L}(\tilde{p})) = \sum_{\phi=1}^{\#\tilde{L}(\tilde{p})} g\big(l^{(\phi)}\big) \tilde{p}^{(\phi)} \Big/ \sum_{\phi=1}^{\#\tilde{L}(\tilde{p})} \tilde{p}^{(\phi)} \tag{4} $$

$$ \sigma(\tilde{L}(\tilde{p})) = \sqrt{ \sum_{\phi=1}^{\#\tilde{L}(\tilde{p})} \Big( g\big(l^{(\phi)}\big) \tilde{p}^{(\phi)} - E(\tilde{L}(\tilde{p})) \Big)^2 } \;\Big/ \sum_{\phi=1}^{\#\tilde{L}(\tilde{p})} \tilde{p}^{(\phi)} \tag{5} $$
Using Equations (4) and (5), the order relation between two PLTSs is defined as follows: (1) if $E(\tilde{L}_1(\tilde{p})) > E(\tilde{L}_2(\tilde{p}))$, then $\tilde{L}_1(\tilde{p}) > \tilde{L}_2(\tilde{p})$; and (2) if $E(\tilde{L}_1(\tilde{p})) = E(\tilde{L}_2(\tilde{p}))$, then if $\sigma(\tilde{L}_1(\tilde{p})) = \sigma(\tilde{L}_2(\tilde{p}))$, then $\tilde{L}_1(\tilde{p}) = \tilde{L}_2(\tilde{p})$, and if $\sigma(\tilde{L}_1(\tilde{p})) < \sigma(\tilde{L}_2(\tilde{p}))$, then $\tilde{L}_1(\tilde{p}) > \tilde{L}_2(\tilde{p})$.
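A sketch of the score-based comparison is given below, reusing the transformation function $g$ of Definition 1 with an assumed $\theta = 3$; PLTSs are again lists of (subscript, probability) pairs with probabilities already normalized:

```python
# Sketch of Definition 4: expected value (Eq. 4) and deviation degree
# (Eq. 5) of a normalized PLTS, with g from Definition 1 and theta = 3
# (an illustrative assumption).
import math

def g(alpha, theta=3):
    return (alpha + theta) / (2 * theta)

def expected_value(plts):
    """Eq. (4): sum of g(l) * p~ over terms, divided by sum of p~."""
    num = sum(g(a) * p for a, p in plts)
    den = sum(p for _, p in plts)
    return num / den

def deviation(plts):
    """Eq. (5): root of squared deviations, divided by sum of p~."""
    e = expected_value(plts)
    num = math.sqrt(sum((g(a) * p - e) ** 2 for a, p in plts))
    return num / sum(p for _, p in plts)

L1 = [(1, 0.5), (2, 0.5)]   # {l_1(0.5), l_2(0.5)}
L2 = [(0, 0.5), (1, 0.5)]   # {l_0(0.5), l_1(0.5)}
print(expected_value(L1), expected_value(L2))
# L1 ranks above L2 because E(L1) > E(L2)
assert expected_value(L1) > expected_value(L2)
```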
Definition 5.
([69]) Let $L = \{ l_\alpha \mid \alpha = -\theta, \ldots, -1, 0, 1, \ldots, \theta \}$ be an LTS, and let $\tilde{L}_1(\tilde{p}) = \{ l_1^{(\phi)}(\tilde{p}_1^{(\phi)}) \mid \phi = 1, 2, \ldots, \#\tilde{L}_1(\tilde{p}) \}$ and $\tilde{L}_2(\tilde{p}) = \{ l_2^{(\phi)}(\tilde{p}_2^{(\phi)}) \mid \phi = 1, 2, \ldots, \#\tilde{L}_2(\tilde{p}) \}$ be two PLTSs with $\#\tilde{L}_1(\tilde{p}) = \#\tilde{L}_2(\tilde{p})$. Then the Hamming distance $d(\tilde{L}_1(\tilde{p}), \tilde{L}_2(\tilde{p}))$ between $\tilde{L}_1(\tilde{p})$ and $\tilde{L}_2(\tilde{p})$ is defined as follows:

$$ d\big(\tilde{L}_1(\tilde{p}), \tilde{L}_2(\tilde{p})\big) = \frac{ \sum_{\phi=1}^{\#\tilde{L}_1(\tilde{p})} \big| \tilde{p}_1^{(\phi)} g\big(l_1^{(\phi)}\big) - \tilde{p}_2^{(\phi)} g\big(l_2^{(\phi)}\big) \big| }{ \#\tilde{L}_1(\tilde{p}) } \tag{6} $$
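The Hamming distance of Equation (6) can be sketched as follows for two equal-length PLTSs, again with $g$ from Definition 1 under the assumption $\theta = 3$ and PLTSs as lists of (subscript, probability) pairs:

```python
# Sketch of the Hamming distance in Equation (6) between two PLTSs of
# equal length; g from Definition 1 with theta = 3 is an illustrative
# assumption.

def g(alpha, theta=3):
    return (alpha + theta) / (2 * theta)

def hamming(plts1, plts2):
    """Mean absolute difference of the products p~(phi) * g(l(phi))."""
    assert len(plts1) == len(plts2), "pad the PLTSs to equal length first"
    return sum(abs(p1 * g(a1) - p2 * g(a2))
               for (a1, p1), (a2, p2) in zip(plts1, plts2)) / len(plts1)

d = hamming([(1, 0.5), (2, 0.5)], [(0, 0.5), (1, 0.5)])
print(round(d, 4))
```

Note that the distance is 0 only when both PLTSs pair identical terms with identical probabilities, which is why padding (Definition 3) is applied first.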

3. TOPSIS Method for Probabilistic Linguistic MAGDM with Entropy Weight

In this section, we propose a novel probabilistic linguistic TOPSIS method for MAGDM problems with unknown weight information. The following notation is used. Let $A = \{A_1, A_2, \ldots, A_m\}$ be a discrete set of alternatives and $G = \{G_1, G_2, \ldots, G_n\}$ a set of attributes with weight vector $w = (w_1, w_2, \ldots, w_n)$, where $w_j \in [0,1]$, $j = 1, 2, \ldots, n$, and $\sum_{j=1}^{n} w_j = 1$, and let $E = \{E_1, E_2, \ldots, E_q\}$ be a set of experts. Suppose that the $n$ qualitative attributes of the alternatives in $A$ are evaluated by the qualified experts, with the evaluations denoted as linguistic expressions $l_{ij}^{k}$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$; $k = 1, 2, \ldots, q$).
Then, the PL-TOPSIS method is designed to solve the MAGDM problems with entropy weight. The detailed calculating steps are given as follows:
Step 1. Convert the linguistic information $l_{ij}^{k}$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$; $k = 1, 2, \ldots, q$) into probabilistic linguistic information $l_{ij}^{(\phi)}(p_{ij}^{(\phi)})$, $\phi = 1, 2, \ldots, \#L_{ij}(p)$, and construct the probabilistic linguistic decision matrix $L = (L_{ij}(p))_{m \times n}$, where $L_{ij}(p) = \{ l_{ij}^{(\phi)}(p_{ij}^{(\phi)}) \mid \phi = 1, 2, \ldots, \#L_{ij}(p) \}$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$).
Step 2. Derive the normalized probabilistic linguistic decision matrix $\tilde{L} = (\tilde{L}_{ij}(\tilde{p}))_{m \times n}$, where $\tilde{L}_{ij}(\tilde{p}) = \{ l_{ij}^{(\phi)}(\tilde{p}_{ij}^{(\phi)}) \mid \phi = 1, 2, \ldots, \#L_{ij}(\tilde{p}) \}$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$). Thus, the probabilistic linguistic information for an alternative $A_i \in A$ with respect to all the attributes in $G$ can be expressed as $PLA_i = \big( l_{i1}^{(\phi)}(\tilde{p}_{i1}^{(\phi)}), l_{i2}^{(\phi)}(\tilde{p}_{i2}^{(\phi)}), \ldots, l_{in}^{(\phi)}(\tilde{p}_{in}^{(\phi)}) \big)$, $\phi = 1, 2, \ldots, \#L_{ij}(\tilde{p})$.
Step 3. Compute the weight values with entropy.
The weights of the attributes are very important in decision-making problems. Many scholars have focused on decision-making problems with incomplete or unknown attribute weight information in different fuzzy environments [70,71,72,73,74,75]. Entropy [43] is a conventional tool from information theory that can also be used to determine attribute weights. The larger the entropy value for a given attribute, the smaller the differences between the ratings of the alternatives with respect to that attribute; in turn, this means that the attribute supplies less information and should receive a smaller weight. First, the normalized decision matrix $NL_{ij}(\tilde{p})$ is derived as follows:
$$ NL_{ij}(\tilde{p}) = \frac{ \sum_{\phi=1}^{\#L_{ij}(\tilde{p})} \tilde{p}_{ij}^{(\phi)} g\big(l_{ij}^{(\phi)}\big) }{ \sum_{i=1}^{m} \sum_{\phi=1}^{\#L_{ij}(\tilde{p})} \tilde{p}_{ij}^{(\phi)} g\big(l_{ij}^{(\phi)}\big) }, \quad j = 1, 2, \ldots, n \tag{7} $$
Then, the Shannon entropy vector $E = (E_1, E_2, \ldots, E_n)$ is calculated as follows:

$$ E_j = -\frac{1}{\ln m} \sum_{i=1}^{m} NL_{ij}(\tilde{p}) \ln NL_{ij}(\tilde{p}) \tag{8} $$

where $NL_{ij}(\tilde{p}) \ln NL_{ij}(\tilde{p})$ is defined as 0 if $NL_{ij}(\tilde{p}) = 0$.
Finally, the vector of attribute weights $w = (w_1, w_2, \ldots, w_n)$ is computed:

$$ w_j = \frac{1 - E_j}{\sum_{j=1}^{n} (1 - E_j)}, \quad j = 1, 2, \ldots, n \tag{9} $$
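Step 3 can be sketched as follows. The score matrix below (each entry standing for $\sum_\phi \tilde{p}_{ij}^{(\phi)} g(l_{ij}^{(\phi)})$ for alternative $i$ and attribute $j$) holds made-up values for illustration, not data from the case study:

```python
# Sketch of the entropy-weighting step, Equations (7)-(9). The `scores`
# matrix is assumed, illustrative data: row = alternative, column =
# attribute, entry = sum over phi of p~_ij^(phi) * g(l_ij^(phi)).
import math

scores = [
    [0.60, 0.50, 0.70],
    [0.40, 0.50, 0.30],
    [0.80, 0.50, 0.50],
]
m, n = len(scores), len(scores[0])

# Equation (7): normalize each column over the m alternatives.
col_sums = [sum(row[j] for row in scores) for j in range(n)]
NL = [[scores[i][j] / col_sums[j] for j in range(n)] for i in range(m)]

# Equation (8): Shannon entropy per attribute (0 * ln 0 treated as 0).
E = [-sum(NL[i][j] * math.log(NL[i][j]) for i in range(m) if NL[i][j] > 0)
     / math.log(m) for j in range(n)]

# Equation (9): weights from the degrees of divergence 1 - E_j.
div = [1 - e for e in E]
w = [d / sum(div) for d in div]

print([round(x, 4) for x in w])
```

In this example the second attribute receives a weight of (numerically) zero, because all alternatives score identically on it, so it carries no discriminating information, exactly the behavior the entropy weighting is designed to produce.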
Step 4. Define the probabilistic linguistic positive ideal solution (PLPIS) and the probabilistic linguistic negative ideal solution (PLNIS):

$$ PLPIS = (PLPIS_1, PLPIS_2, \ldots, PLPIS_n) \tag{10} $$

$$ PLNIS = (PLNIS_1, PLNIS_2, \ldots, PLNIS_n) \tag{11} $$

where

$$ PLPIS_j = \{ pl_j^{(\phi)}(\tilde{p}_j^{(\phi)}) \mid \phi = 1, 2, \ldots, \#L_{ij}(\tilde{p}) \}, \quad E(PLPIS_j) = \max_i E\big(L_{ij}(\tilde{p})\big) \tag{12} $$

$$ PLNIS_j = \{ nl_j^{(\phi)}(\tilde{p}_j^{(\phi)}) \mid \phi = 1, 2, \ldots, \#L_{ij}(\tilde{p}) \}, \quad E(PLNIS_j) = \min_i E\big(L_{ij}(\tilde{p})\big) \tag{13} $$
Step 5. Calculate the distances of each alternative from the PLPIS and PLNIS, respectively:

$$ d(PLA_i, PLPIS) = \sum_{j=1}^{n} w_j \, d(PLA_{ij}, PLPIS_j) \tag{14} $$

$$ d(PLA_i, PLNIS) = \sum_{j=1}^{n} w_j \, d(PLA_{ij}, PLNIS_j) \tag{15} $$

where

$$ d(PLA_{ij}, PLPIS_j) = \frac{ \sum_{\phi=1}^{\#L_{ij}(\tilde{p})} \big| \tilde{p}_{ij}^{(\phi)} g\big(l_{ij}^{(\phi)}\big) - \tilde{p}_j^{(\phi)} g\big(pl_j^{(\phi)}\big) \big| }{ \#L_{ij}(\tilde{p}) } \tag{16} $$

$$ d(PLA_{ij}, PLNIS_j) = \frac{ \sum_{\phi=1}^{\#L_{ij}(\tilde{p})} \big| \tilde{p}_{ij}^{(\phi)} g\big(l_{ij}^{(\phi)}\big) - \tilde{p}_j^{(\phi)} g\big(nl_j^{(\phi)}\big) \big| }{ \#L_{ij}(\tilde{p}) } \tag{17} $$
Step 6. Calculate the probabilistic linguistic relative closeness degree (PLRCD) of each alternative from the PLPIS:

$$ PLRCD(PLA_i, PLPIS) = \frac{ d(PLA_i, PLNIS) }{ d(PLA_i, PLPIS) + d(PLA_i, PLNIS) }, \quad i = 1, 2, \ldots, m \tag{18} $$
Step 7. According to $PLRCD(PLA_i, PLPIS)$, the ranking order of all alternatives can be determined. The best alternative is the one closest to the PLPIS and farthest from the PLNIS; thus, by Equation (18), the alternative with the largest $PLRCD(PLA_i, PLPIS)$ value is the most desirable.
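Steps 5 to 7 can be sketched as follows. The weighted distance values below are made up for illustration; they are not the values computed in the case study of Section 4:

```python
# Sketch of Steps 5-7: given weighted distances of each alternative from
# the PLPIS and PLNIS, compute the relative closeness degree of Equation
# (18) and rank the alternatives (larger closeness = better). The
# distance values are assumed, illustrative data.

d_pis = [0.10, 0.30, 0.05, 0.20, 0.25]   # d(PLA_i, PLPIS), i = 1..5
d_nis = [0.25, 0.05, 0.30, 0.15, 0.10]   # d(PLA_i, PLNIS), i = 1..5

closeness = [dn / (dp + dn) for dp, dn in zip(d_pis, d_nis)]
ranking = sorted(range(len(closeness)), key=lambda i: closeness[i],
                 reverse=True)

print([round(c, 4) for c in closeness])
print([f"A{i + 1}" for i in ranking])    # best alternative first
```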

4. A Case Study and Comparative Analysis

4.1. A Case Study

With the development and improvement of agricultural mechanization and the adoption of preferential policies for farmers, the agricultural machinery industry is confronted with broad market prospects. Innovation capability has become a core competence for an agricultural company. In the process of supply chain integration, agricultural product innovation is being carried out jointly by corporations, manufacturers, and suppliers, rather than by the manufacturer alone. From the manufacturer's perspective, the participation of suppliers is significant for the success of a new product project. Most importantly, the supplier selection problem for new agricultural machinery products is becoming one of the key issues under the basic idea of supply chain management. Supplier selection for new agricultural machinery products is a classical MAGDM issue [76,77,78,79,80,81,82]. Thus, in this section we present a numerical example of supplier selection for new agricultural machinery products to illustrate the method proposed in this paper. There is a panel of five possible suppliers of new agricultural machinery products $A_i$ ($i = 1, 2, 3, 4, 5$) to choose from. The experts select four beneficial attributes to evaluate the five possible suppliers: (1) G1, the supplier's business reputation; (2) G2, the supplier's technical capability; (3) G3, the supplier's experience; and (4) G4, the supplier's willingness to cooperate. The five possible suppliers $A_i$ ($i = 1, 2, 3, 4, 5$) are evaluated by the five decision makers under the above four attributes using the linguistic term set $L = \{ l_{-3} = \text{extremely poor (EP)}, l_{-2} = \text{very poor (VP)}, l_{-1} = \text{poor (P)}, l_0 = \text{medium (M)}, l_1 = \text{good (G)}, l_2 = \text{very good (VG)}, l_3 = \text{extremely good (EG)} \}$, as listed in Table 1, Table 2, Table 3, Table 4 and Table 5.
In the following, we utilize the PL-TOPSIS method developed for supplier selection of new agricultural machinery products.
Step 1. Transform the linguistic variables into probabilistic linguistic decision matrix (Table 6).
Step 2. Calculate the normalized probabilistic linguistic decision matrix (Table 7).
Step 3. Compute the weight values for the attributes from Equations (7)–(9): $w_1 = 0.3020$, $w_2 = 0.1536$, $w_3 = 0.2856$, $w_4 = 0.2587$.
Step 4. Determine the PLPIS and PLNIS by Equations (10)–(13) (Table 8):
Step 5. Calculate the distances d ( P L A i , P L P I S ) and d ( P L A i , P L N I S ) of each alternative by Equations (14)–(17), respectively (Table 9):
Step 6. Calculate the PLRCD of each alternative from the PLPIS by Equation (18) (Table 10).
Step 7. According to $PLRCD(PLA_i, PLPIS)$ ($i = 1, 2, 3, 4, 5$), we can rank all the suppliers of new agricultural machinery products. The ranking is $A_3 > A_1 > A_4 > A_5 > A_2$, and the best supplier among the five alternatives is $A_3$.

4.2. Comparative Analysis

First, we compare our proposed method with the probabilistic linguistic weighted average (PLWA) operator [26]. The calculated results are $E(Z_1(w)) = s_{0.0036}$, $E(Z_2(w)) = s_{-0.6638}$, $E(Z_3(w)) = s_{0.5812}$, $E(Z_4(w)) = s_{-0.1787}$, and $E(Z_5(w)) = s_{-0.1273}$, from which we derive the ranking order $A_3 > A_1 > A_5 > A_4 > A_2$. Thus, we obtain the same optimal supplier $A_3$.
Next, we compare our proposed method with the probabilistic linguistic grey relational analysis (PL-GRA) method [32] (with $\rho = 0.5$). The calculated results are $\varepsilon_1^+ = 0.6279$, $\varepsilon_2^+ = 0.4457$, $\varepsilon_3^+ = 0.8410$, $\varepsilon_4^+ = 0.5039$, and $\varepsilon_5^+ = 0.5364$, from which we derive the ranking order $A_3 > A_1 > A_5 > A_4 > A_2$. Thus, we again obtain the same optimal supplier $A_3$.
From the above analysis, it can be seen that these methods identify the same optimal supplier $A_3$, while their ranking results differ slightly. This verifies that the method proposed in this paper is reasonable and effective. Each method has its own emphasis: (1) the PL-GRA method emphasizes only the shape similarity to the positive ideal solution; (2) the PLWA operator emphasizes the degree of group influence; (3) our proposed PL-TOPSIS method emphasizes the closeness in distance to both the positive and negative ideal solutions with entropy weight information; and (4) Pang, Wang and Xu [26] also developed a probabilistic linguistic TOPSIS method for MAGDM with the maximizing deviation method, but the distance measures employed in that TOPSIS method are somewhat irrational.

5. Conclusions

In this paper, we extend the TOPSIS method to probabilistic linguistic MAGDM with unknown weight information. First, the basic concepts, comparison formulas, and Hamming distance of PLTSs are briefly reviewed. Then, the scoring function is used to define a probabilistic linguistic entropy, which is employed to compute the attribute weights objectively. Next, the optimal alternative(s) is determined by calculating the shortest distance from the PLPIS and the farthest distance from the PLNIS. Finally, a practical case study on supplier selection for new agricultural machinery products is supplied to demonstrate the proposed approach, and comparative analyses are designed to verify its applicability to practical MAGDM problems. The main contributions of this paper are as follows: (1) the TOPSIS method is extended to PLTSs with unknown weight information; (2) the probabilistic linguistic TOPSIS method is proposed to solve probabilistic linguistic MAGDM problems with entropy weights; (3) a case study on supplier selection for new agricultural machinery products is supplied to illustrate the developed approach; and (4) the proposed PL-TOPSIS method emphasizes the distance similarity to both the positive and negative ideal solutions with entropy weights. The TOPSIS method may also be an effective MADM or MAGDM tool for tackling other uncertain decision-making problems. In the future, the application of the proposed models and methods with PLTSs should be investigated in other uncertain decision-making settings [83,84,85,86,87,88,89,90] and other uncertain and fuzzy environments [91,92,93,94,95,96,97,98].

Author Contributions

J.L., C.W., J.W. and G.W. conceived of and worked together to complete this work; C.W. and G.W. compiled the computing program in Excel and analyzed the data; C.W. and G.W. wrote the paper. All authors have read and approved the final manuscript.

Funding

The work was supported by the National Social Science Foundation of China under Grant No. 17BSH125 and the Humanities and Social Sciences Foundation of Ministry of Education of the People’s Republic of China under Grant No.16YJA840008. The APC was funded by National Social Science Foundation of China under Grant No. 17BSH125.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Atanassov, K.T. More on intuitionistic fuzzy-sets. Fuzzy Sets Syst. 1989, 33, 37–45. [Google Scholar] [CrossRef]
  2. Atanassov, K.T. Operators over interval-valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1994, 64, 159–174. [Google Scholar] [CrossRef]
  3. Zhou, W.; Xu, Z.S. Extended Intuitionistic Fuzzy Sets Based on the Hesitant Fuzzy Membership and their Application in Decision Making with Risk Preference. Int. J. Intell. Syst. 2018, 33, 417–443. [Google Scholar] [CrossRef]
  4. Li, Z.X.; Gao, H.; Wei, G.W. Methods for Multiple Attribute Group Decision Making Based on Intuitionistic Fuzzy Dombi Hamy Mean Operators. Symmetry 2018, 10, 574. [Google Scholar] [CrossRef]
  5. Wu, L.P.; Wei, G.W.; Gao, H.; Wei, Y. Some Interval-Valued Intuitionistic Fuzzy Dombi Hamy Mean Operators and Their Application for Evaluating the Elderly Tourism Service Quality in Tourism Destination. Mathematics 2018, 6, 294. [Google Scholar] [CrossRef]
  6. Wei, G.W. 2-tuple intuitionistic fuzzy linguistic aggregation operators in multiple attribute decision making. Iran. J. Fuzzy Syst. 2019, 16, 159–174. [Google Scholar]
  7. Wu, L.P.; Wang, J.; Gao, H. Models for competiveness evaluation of tourist destination with some interval-valued intuitionistic fuzzy Hamy mean operators. J. Intell. Fuzzy Syst. 2019, 36, 5693–5709. [Google Scholar] [CrossRef]
  8. Herrera, F.; Martinez, L. A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Trans. Fuzzy Syst. 2000, 8, 746–752. [Google Scholar]
  9. Herrera, F.; Martinez, L. An approach for combining linguistic and numerical information based on the 2-tuple fuzzy linguistic representation model in decision-making. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2000, 8, 539–562. [Google Scholar] [CrossRef]
  10. Herrera, F.; Martinez, L. A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2001, 31, 227–234. [Google Scholar] [CrossRef]
  11. Dong, Y.C.; Herrera-Viedma, E. Consistency-Driven Automatic Methodology to Set Interval Numerical Scales of 2-Tuple Linguistic Term Sets and Its Use in the Linguistic GDM With Preference Relation. IEEE Trans. Cybern. 2015, 45, 780–792. [Google Scholar] [CrossRef]
  12. Shan, M.M.; Li, P.; Liu, H.C. Interval 2-Tuple Linguistic Distance Operators and Their Applications to Supplier Evaluation and Selection. Math. Probl. Eng. 2016, 2016. [Google Scholar] [CrossRef]
  13. Shan, M.M.; You, J.X.; Liu, H.C. Some Interval 2-Tuple Linguistic Harmonic Mean Operators and Their Application in Material Selection. Adv. Mater. Sci. Eng. 2016, 2016. [Google Scholar] [CrossRef]
  14. Beg, I.; Rashid, T. An Intuitionistic 2-Tuple Linguistic Information Model and Aggregation Operators. Int. J. Intell. Syst. 2016, 31, 569–592. [Google Scholar] [CrossRef]
  15. Faizi, S.; Rashid, T.; Zafar, S. A Multicriteria Decision-Making Approach Based on Fuzzy AHP with Intuitionistic 2-Tuple Linguistic Sets. Adv. Fuzzy Syst. 2018, 2018. [Google Scholar] [CrossRef]
  16. Yu, G.F.; Li, D.F.; Qiu, J.M.; Zheng, X.X. Some operators of intuitionistic uncertain 2-tuple linguistic variables and application to multi-attribute group decision making with heterogeneous relationship among attributes. J. Intell. Fuzzy Syst. 2018, 34, 599–611. [Google Scholar] [CrossRef]
  17. Truck, I.; Abchir, M.A. Toward a Classification of Hesitant Operators in the 2-Tuple Linguistic Model. Int. J. Intell. Syst. 2014, 29, 560–578. [Google Scholar] [CrossRef]
  18. Dong, Y.C.; Li, C.C.; Herrera, F. Connecting the linguistic hierarchy and the numerical scale for the 2-tuple linguistic model and its use to deal with hesitant unbalanced linguistic information. Inf. Sci. 2016, 367, 259–278. [Google Scholar] [CrossRef]
  19. Wei, C.P.; Liao, H.C. A Multigranularity Linguistic Group Decision-Making Method Based on Hesitant 2-Tuple Sets. Int. J. Intell. Syst. 2016, 31, 612–634. [Google Scholar] [CrossRef]
  20. Si, G.S.; Liao, H.C.; Yu, D.J.; Llopis-Albert, C. Interval-valued 2-tuple hesitant fuzzy linguistic term set and its application in multiple attribute decision making. J. Intell. Fuzzy Syst. 2018, 34, 4225–4236. [Google Scholar] [CrossRef]
  21. Lu, M.; Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Bipolar 2-tuple linguistic aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 1197–1207. [Google Scholar] [CrossRef]
  22. Wei, G.W.; Gao, H.; Wang, J.; Huang, Y.H. Research on Risk Evaluation of Enterprise Human Capital Investment With Interval-Valued Bipolar 2-Tuple Linguistic Information. IEEE Access 2018, 6, 35697–35712. [Google Scholar] [CrossRef]
  23. Rodriguez, R.M.; Martinez, L.; Herrera, F. Hesitant Fuzzy Linguistic Term Sets for Decision Making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119. [Google Scholar] [CrossRef]
  24. Torra, V. Hesitant Fuzzy Sets. Int. J. Intell. Syst. 2010, 25, 529–539. [Google Scholar] [CrossRef]
Table 1. Linguistic decision matrix by the first DM.

Alternatives  G1  G2  G3  G4
A1            M   G   VG  EP
A2            VP  EP  P   VP
A3            EG  P   G   EG
A4            EP  VG  VP  G
A5            G   EG  EP  P
Table 2. Linguistic decision matrix by the second DM.

Alternatives  G1  G2  G3  G4
A1            P   VG  VG  VP
A2            EP  P   VP  P
A3            EG  P   VG  G
A4            VP  G   P   EG
A5            EG  VG  EP  EP
Table 3. Linguistic decision matrix by the third DM.

Alternatives  G1  G2  G3  G4
A1            EP  EG  VG  P
A2            P   VP  EP  VP
A3            EG  P   G   EG
A4            EP  VG  VP  G
A5            VG  G   EP  VP
Table 4. Linguistic decision matrix by the fourth DM.

Alternatives  G1  G2  G3  G4
A1            EP  G   G   P
A2            VP  EP  P   P
A3            EG  VP  VG  EG
A4            VP  G   EP  VG
A5            EG  VG  P   P
Table 5. Linguistic decision matrix by the fifth DM.

Alternatives  G1  G2  G3  G4
A1            M   VG  EG  P
A2            P   P   EP  EP
A3            EG  EP  EG  G
A4            EP  VG  P   EG
A5            G   EG  VP  EP
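The five individual matrices above are merged by treating each decision maker's judgement as equally likely, so each linguistic term's probability is its relative frequency among the five DMs; the aggregate is shown in Table 6. A minimal sketch of this counting step, assuming the seven-term scale EP = l_{-3}, VP = l_{-2}, P = l_{-1}, M = l_0, G = l_1, VG = l_2, EG = l_3 and equal DM weights of 0.2:

```python
from collections import Counter

# Assumed mapping of the seven-term linguistic scale to subscripts -3..3
SCALE = {"EP": -3, "VP": -2, "P": -1, "M": 0, "G": 1, "VG": 2, "EG": 3}

def to_plts(judgements):
    """Turn a list of linguistic labels (one per DM) into a PLTS
    {subscript: probability}, each DM weighted equally."""
    counts = Counter(SCALE[j] for j in judgements)
    n = len(judgements)
    return {term: count / n for term, count in counts.items()}

# Judgements of the five DMs for alternative A1 under attribute G1 (Tables 1-5)
print(to_plts(["M", "P", "EP", "EP", "M"]))  # {0: 0.4, -1: 0.2, -3: 0.4}
```

This reproduces the A1/G1 entry of Table 6: {l_0(0.4), l_{-1}(0.2), l_{-3}(0.4)}.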
Table 6. Probabilistic linguistic decision matrix.

Alternatives  G1                                        G2
A1            {l_0(0.4), l_{-1}(0.2), l_{-3}(0.4)}      {l_1(0.4), l_2(0.4), l_3(0.2)}
A2            {l_{-2}(0.4), l_{-1}(0.4), l_{-3}(0.2)}   {l_{-3}(0.4), l_{-2}(0.2), l_{-1}(0.4)}
A3            {l_3(1)}                                  {l_{-3}(0.2), l_{-2}(0.2), l_{-1}(0.6)}
A4            {l_{-3}(0.6), l_{-2}(0.4)}                {l_1(0.4), l_2(0.6)}
A5            {l_1(0.4), l_2(0.2), l_3(0.4)}            {l_1(0.2), l_2(0.4), l_3(0.4)}

Alternatives  G3                                        G4
A1            {l_1(0.2), l_2(0.6), l_3(0.2)}            {l_{-3}(0.2), l_{-2}(0.2), l_{-1}(0.6)}
A2            {l_{-3}(0.4), l_{-2}(0.2), l_{-1}(0.4)}   {l_{-3}(0.6), l_{-1}(0.4)}
A3            {l_1(0.4), l_2(0.4), l_3(0.2)}            {l_1(0.4), l_3(0.6)}
A4            {l_{-3}(0.2), l_{-2}(0.4), l_{-1}(0.4)}   {l_1(0.4), l_2(0.2), l_3(0.4)}
A5            {l_{-3}(0.6), l_{-2}(0.2), l_{-1}(0.2)}   {l_{-3}(0.4), l_{-2}(0.2), l_{-1}(0.4)}
Table 7. Normalized probabilistic linguistic decision matrix.

Alternatives  G1                                        G2
A1            {l_{-3}(0.4), l_{-1}(0.2), l_0(0.4)}      {l_1(0.4), l_2(0.4), l_3(0.2)}
A2            {l_{-3}(0.2), l_{-2}(0.4), l_{-1}(0.4)}   {l_{-3}(0.4), l_{-2}(0.2), l_{-1}(0.4)}
A3            {l_3(0), l_3(0), l_3(1)}                  {l_{-3}(0.2), l_{-2}(0.2), l_{-1}(0.6)}
A4            {l_{-3}(0), l_{-3}(0.6), l_{-2}(0.4)}     {l_1(0), l_1(0.4), l_2(0.6)}
A5            {l_1(0.4), l_2(0.2), l_3(0.4)}            {l_1(0.2), l_2(0.4), l_3(0.4)}

Alternatives  G3                                        G4
A1            {l_1(0.2), l_2(0.6), l_3(0.2)}            {l_{-3}(0.2), l_{-2}(0.2), l_{-1}(0.6)}
A2            {l_{-3}(0.4), l_{-2}(0.2), l_{-1}(0.4)}   {l_{-3}(0), l_{-3}(0.6), l_{-1}(0.4)}
A3            {l_1(0.4), l_2(0.4), l_3(0.2)}            {l_1(0), l_1(0.4), l_3(0.6)}
A4            {l_{-3}(0.2), l_{-2}(0.4), l_{-1}(0.4)}   {l_1(0.4), l_2(0.2), l_3(0.4)}
A5            {l_{-3}(0.6), l_{-2}(0.2), l_{-1}(0.2)}   {l_{-3}(0.4), l_{-2}(0.2), l_{-1}(0.4)}
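The normalization visible in Table 7 sorts each PLTS by ascending subscript and pads shorter sets with zero-probability copies of their smallest term, so that all sets under an attribute share the same cardinality. A sketch of that padding step, assuming PLTSs are represented as lists of (subscript, probability) pairs:

```python
def normalize(plts, target_len):
    """Sort a PLTS ascending by subscript, then pad it to target_len by
    prepending zero-probability copies of its smallest term."""
    terms = sorted(plts)
    while len(terms) < target_len:
        terms.insert(0, (terms[0][0], 0.0))
    return terms

# A4 under G1 from Table 6: {l_{-3}(0.6), l_{-2}(0.4)}, padded to cardinality 3
print(normalize([(-3, 0.6), (-2, 0.4)], 3))  # [(-3, 0.0), (-3, 0.6), (-2, 0.4)]
```

This matches the A4/G1 entry of Table 7, {l_{-3}(0), l_{-3}(0.6), l_{-2}(0.4)}.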
Table 8. Probabilistic linguistic positive ideal solution (PLPIS) and probabilistic linguistic negative ideal solution (PLNIS).

       G1                                        G2
PLPIS  {l_3(0), l_3(0), l_3(1)}                  {l_1(0.2), l_2(0.4), l_3(0.4)}
PLNIS  {l_{-3}(0), l_{-3}(0.6), l_{-2}(0.4)}     {l_{-3}(0.4), l_{-2}(0.2), l_{-1}(0.4)}

       G3                                        G4
PLPIS  {l_1(0.2), l_2(0.6), l_3(0.2)}            {l_1(0), l_1(0.4), l_3(0.6)}
PLNIS  {l_{-3}(0.6), l_{-2}(0.2), l_{-1}(0.2)}   {l_{-3}(0), l_{-3}(0.6), l_{-1}(0.4)}
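For each attribute, the PLPIS (PLNIS) collects the alternative's PLTS with the highest (lowest) score. One way to reproduce the G1 column of Table 8, assuming the score function is the probability-weighted subscript (an assumption consistent with the ideal solutions shown above):

```python
def score(plts):
    """Expected subscript of a PLTS given as (subscript, probability) pairs --
    an assumed score function consistent with Table 8."""
    return sum(r * p for r, p in plts)

# G1 column of the normalized matrix (Table 7), one PLTS per alternative
g1 = {
    "A1": [(-3, 0.4), (-1, 0.2), (0, 0.4)],
    "A2": [(-3, 0.2), (-2, 0.4), (-1, 0.4)],
    "A3": [(3, 0.0), (3, 0.0), (3, 1.0)],
    "A4": [(-3, 0.0), (-3, 0.6), (-2, 0.4)],
    "A5": [(1, 0.4), (2, 0.2), (3, 0.4)],
}
plpis = max(g1, key=lambda a: score(g1[a]))  # 'A3' supplies the PLPIS for G1
plnis = min(g1, key=lambda a: score(g1[a]))  # 'A4' supplies the PLNIS for G1
```

The selected alternatives (A3 for PLPIS, A4 for PLNIS) agree with the first column of Table 8.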
Table 9. d(PLA_i, PLPIS) and d(PLA_i, PLNIS) of each alternative.

Alternatives  d(PLA_i, PLPIS)  d(PLA_i, PLNIS)
A1            0.1589           0.1310
A2            0.2565           0.0198
A3            0.0610           0.2273
A4            0.2185           0.1006
A5            0.2342           0.1159
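The distance measure used to obtain Table 9 is defined earlier in the paper and is not reproduced in this excerpt. Purely as a generic illustration, a Hamming-type distance between two normalized PLTSs of equal length might be sketched as follows; the exact values in Table 9 depend on the paper's own definition and need not match this form:

```python
TAU = 3  # subscripts of the assumed seven-term scale run over l_{-3}..l_{3}

def hamming_distance(p1, p2):
    """Hamming-type distance between two normalized PLTSs of equal length,
    given as aligned (subscript, probability) pairs. This generic form is an
    assumption, not the paper's exact measure."""
    assert len(p1) == len(p2)
    return sum(abs(r1 * q1 - r2 * q2) / (2 * TAU)
               for (r1, q1), (r2, q2) in zip(p1, p2)) / len(p1)

# Example: distance between a poor-leaning and a good-leaning PLTS
d_example = hamming_distance([(-3, 0.6), (-1, 0.4)], [(1, 0.4), (3, 0.6)])
```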
Table 10. Probabilistic linguistic relative closeness degree (PLRCD) of each alternative from the PLPIS.

Alternatives            A1      A2      A3      A4      A5
PLRCD(PLA_i, PLPIS)     0.4518  0.0716  0.7884  0.3153  0.3310
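The closeness degrees in Table 10 follow the standard TOPSIS relative closeness formula, RC_i = d(PLA_i, PLNIS) / (d(PLA_i, PLPIS) + d(PLA_i, PLNIS)), applied to the distances of Table 9. Ranking alternatives by descending closeness then gives A3 > A1 > A5 > A4 > A2, so A3 is the best supplier:

```python
# Distances from Table 9: (d to PLPIS, d to PLNIS) for each alternative
d = {
    "A1": (0.1589, 0.1310),
    "A2": (0.2565, 0.0198),
    "A3": (0.0610, 0.2273),
    "A4": (0.2185, 0.1006),
    "A5": (0.2342, 0.1159),
}

# TOPSIS relative closeness: d^- / (d^+ + d^-)
rcd = {a: dn / (dp + dn) for a, (dp, dn) in d.items()}
ranking = sorted(rcd, key=rcd.get, reverse=True)
print(ranking)  # ['A3', 'A1', 'A5', 'A4', 'A2']
```

The computed closeness values agree with Table 10 up to rounding.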

Lu, J.; Wei, C.; Wu, J.; Wei, G. TOPSIS Method for Probabilistic Linguistic MAGDM with Entropy Weight and Its Application to Supplier Selection of New Agricultural Machinery Products. Entropy 2019, 21, 953. https://doi.org/10.3390/e21100953
