Article

Probabilistic Hesitant Intuitionistic Linguistic Term Sets in Multi-Attribute Group Decision Making

1 Department of IT, AGI Education, Auckland 1051, New Zealand
2 Department of Information Technology, Otago Polytechnic, Auckland International Campus, Auckland 1141, New Zealand
3 Department of Mathematics, Quaid-i-Azam University, Islamabad 45320, Pakistan
4 Department of Mathematics, University of Management and Technology, Lahore 54770, Pakistan
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(9), 392; https://doi.org/10.3390/sym10090392
Submission received: 22 July 2018 / Revised: 31 August 2018 / Accepted: 3 September 2018 / Published: 10 September 2018
(This article belongs to the Special Issue Fuzzy Techniques for Decision Making 2018)

Abstract:
Decision making is a key component of people's daily life, from choosing a mobile phone to engaging in a war. To model the real world more accurately, probabilistic linguistic term sets (PLTSs) were proposed to manage situations in which several possible linguistic terms, together with their corresponding probabilities, are considered at the same time. Previously, in linguistic term sets, all linguistic terms were implicitly assigned equal probability, which is unrealistic. In the process of decision making, due to the vagueness and complexity of real life, an expert usually hesitates and is unable to express his or her opinion in a single term, thus making it difficult to reach a final agreement. To handle real life scenarios of a more complex nature, membership linguistic information alone is insufficient; a mechanism is also needed to express non-membership linguistic terms, so that imprecise and uncertain information can be dealt with more efficiently. In this article, a novel notion called the probabilistic hesitant intuitionistic linguistic term set (PHILTS) is designed, which is composed of a membership PLTS and a non-membership PLTS describing the opinions of decision makers (DMs). In the setting of PHILTSs, the probabilities of membership linguistic terms and non-membership linguistic terms are considered to be independent. Then, basic operations, their governing operational laws, aggregation operators, a normalization process and a comparison method are studied for PHILTSs. Thereafter, two practical decision making models, an aggregation based model and an extended TOPSIS model for PHILTSs, are designed to rank the alternatives from best to worst, as an application of PHILTSs to multi-attribute group decision making. In the end, a practical real life problem concerning the selection of the best alternative is solved to illustrate the applicability and effectiveness of the proposed set and models.

1. Introduction

The choices we make today determine our future; therefore, choosing the best alternative subject to certain attributes is an important problem. Multi-attribute group decision making (MAGDM) has established its importance by providing the optimal solution under different attributes in many real life problems. For this purpose, many sets and models have been designed to express and comprehend the opinions of DMs. Classical set theory is too restrictive to express one's opinion: some real life scenarios are too complicated, and vague data are often involved, so the DMs are unable to form a definite opinion. Fuzzy set theory was proposed as a remedy for such real life problems. Fuzzy set approaches are suitable when human knowledge must be modelled and when human evaluations are required. However, the usual fuzzy set theory is limited when several kinds of uncertainty occur at the same time.
To overcome such situations, different extensions of fuzzy sets have been proposed to better model the real world, such as the intuitionistic fuzzy set [1], hesitant fuzzy set [2], hesitant probabilistic fuzzy set [3], hesitant probabilistic multiplicative set [4], and necessary and possible hesitant fuzzy sets [5]. Zadeh [6] suggested the concept of a linguistic variable, which is more natural for humans to express their will in situations where data are imprecise. Since then, linguistic environments have been used extensively to cope with decision making problems [7]. Merigó et al. [8] used the Dempster–Shafer theory of evidence to construct an improved linguistic representation model for the decision making process; next, they introduced several linguistic aggregation operators. Zhu et al. [9] proposed a two-dimensional linguistic lattice implication algebra to handle the aggregation of two-dimensional linguistic decision information in MAGDM problems. Meng and Tang [10] generalized the 2-tuple linguistic aggregation operators and then used them in MAGDM problems. Li and Dong [11] introduced the proportional 2-tuple linguistic form to ease the solving of MAGDM problems. Xu [12] introduced a dynamic linguistic weighted geometric operator to aggregate linguistic information and then solved MAGDM problems in which the linguistic judgments change over different periods. Li [13] applied the concept of extended linguistic variables to construct an advanced way to cope with MAGDM problems under linguistic environments. Agell et al. [14] used qualitative reasoning approaches to represent and incorporate linguistic decision information and then applied them to MAGDM problems.
Because of the uncertainty, vagueness and complexity of real world problems, it is troublesome for experts to give a linguistic judgment using a single linguistic term. Torra [2] managed the situation where several membership values of a fuzzy set are possible by defining the hesitant fuzzy set (HFS). Experts may likewise hesitate among several possible linguistic terms. For this purpose, Rodriguez et al. [15] introduced the concept of hesitant fuzzy linguistic term sets (HFLTSs) to improve the flexibility of linguistic information in hesitant situations. Zhu and Li [16] designed hesitant fuzzy linguistic aggregation operators based on the Hamacher t-norm and t-conorm. Cui and Ye [17] proposed a multiple-attribute decision-making method using similarity measures of hesitant linguistic neutrosophic numbers regarding least common multiple cardinality. Liu et al. [18] defined new kinds of similarity and distance measures based on a linguistic scale function. However, in some cases, the probabilities of these possible terms are not equal. Given this reality, Pang et al. [19] proposed the more generalized concept of probabilistic linguistic term sets (PLTSs). PLTSs allow DMs to state more than one linguistic term as an assessment of a linguistic variable. This increases the flexibility and richness of the expression of linguistic information, and it is more reasonable for DMs to state their preferences in terms of PLTSs because PLTSs can reflect different probabilities for each possible assessment of a given object. Therefore, research on PLTSs is necessary. Accordingly, they used PLTSs in a multi-attribute group decision making problem and constructed an extended TOPSIS method as well as an aggregation-based method for MAGDM. Recently, in 2017, Lin et al. [20] extended PLTSs to probabilistic uncertain linguistic term sets, in which some possible uncertain linguistic terms are coupled with their corresponding probabilities, and developed an extended preference approach to rank the alternatives.
Atanassov [1,21] presented the concept of the intuitionistic fuzzy set (IFS), which has three main parts (a membership function, a non-membership function and a hesitancy function) and is better suited to handling uncertainty than the usual fuzzy set. Many researchers have applied the IFS to multi-attribute decision making under various fuzzy environments. Up to now, the intuitionistic fuzzy set has been applied extensively to decision making problems [22,23,24,25,26,27]. Beg and Rashid [28] generalized the concept of the HFLTS to the hesitant intuitionistic fuzzy linguistic term set (HIFLTS), which is characterized by membership and non-membership functions and is more suitable for dealing with uncertainty than the HFLTS. An HIFLTS collects the possible membership and non-membership linguistic values provided by the DMs. This approach is useful for modelling more complex real life scenarios.
In this article, we introduce the concept of the PHILTS. The main idea is to allow DMs to express their opinions about membership and non-membership linguistic terms more freely, to cope with the vagueness and uncertainties of real life. To enable meaningful decision making, the basic framework of PHILTSs is developed. In this regard, a normalization process to equalize the lengths of PHILTSs, basic operations and their governing laws are presented. Furthermore, to deal with different scenarios, a range of aggregation operators, i.e., the probabilistic hesitant intuitionistic linguistic averaging operator, the probabilistic hesitant intuitionistic linguistic weighted averaging operator, the probabilistic hesitant intuitionistic linguistic geometric operator and the probabilistic hesitant intuitionistic linguistic weighted geometric operator, are proposed. The DM can choose an aggregation operator according to his preference. Lastly, for the practical use of PHILTSs in decision making, an extended TOPSIS method is derived, in which the DMs provide their opinions as PHILTSs, which are then aggregated and processed according to the proposed mechanism of extended TOPSIS to find the best alternative.
This paper is organized as follows. In Section 2, we review some basic knowledge needed to understand our proposal. In Section 3, the concept of PHILTSs is first proposed, and then some concepts concerning PHILTSs, i.e., the normalization process, deviation degree, score function, operations and comparison between probabilistic hesitant intuitionistic linguistic term elements (PHILTEs), are also discussed. In Section 4, aggregation operators, the deviation degree between two PHILTEs and the weight vector are derived. In Section 5, we propose an extended TOPSIS method and an aggregation based method designed for MAGDM with probabilistic hesitant intuitionistic linguistic information. An example is provided in Section 6 to illustrate the usefulness and practicality of our methodology by ranking alternatives. Section 7 is dedicated to highlighting the advantages of the proposed set and comparing the proposed models with existing theory. Finally, some concluding remarks are given in Section 8.

2. Preliminaries

In this section, we present some concepts and operations related to HFLTSs, HIFLTSs and PLTSs that will be used in the coming sections.

2.1. Hesitant Fuzzy Linguistic Term Set

The DMs may face problems in which they hesitate among certain possible values. For this purpose, Rodriguez et al. [15] introduced the following concept of the hesitant fuzzy linguistic term set (HFLTS).
Definition 1 
([15]). Let $S=\{s_\alpha \mid \alpha=0,1,2,\dots,g\}$ be a linguistic term set; then, an HFLTS, $H_S$, is a finite and ordered subset of the consecutive linguistic terms of $S$.
Example 1.
Let $S=\{s_0=\text{extremely poor},\ s_1=\text{very poor},\ s_2=\text{poor},\ s_3=\text{medium},\ s_4=\text{good},\ s_5=\text{very good},\ s_6=\text{extremely good}\}$ be a linguistic term set. Then, two different HFLTSs may be defined as: $H_S(x)=\{s_1=\text{very poor},\ s_2=\text{poor},\ s_3=\text{medium},\ s_4=\text{good}\}$ and $H_S(y)=\{s_3=\text{medium},\ s_4=\text{good},\ s_5=\text{very good}\}$.
Definition 2 
([15]). Let $S=\{s_\alpha \mid \alpha=0,1,2,\dots,g\}$ be an ordered finite set of linguistic terms and $E$ be an ordered finite subset of the consecutive linguistic terms of $S$. Then, the operators "max" and "min" on $E$ can be defined as follows:
(i) 
$\max E=\max(s_l)=s_m$, where $s_l\in E$ and $s_l\le s_m$ for all $l$;
(ii) 
$\min E=\min(s_l)=s_n$, where $s_l\in E$ and $s_l\ge s_n$ for all $l$.

2.2. Hesitant Intuitionistic Fuzzy Linguistic Term Set

In 2014, Beg and Rashid [28] introduced the concept of the hesitant intuitionistic fuzzy linguistic term set (HIFLTS). This concept is based on the HFLTS and the intuitionistic fuzzy set.
Definition 3 
([28]). Let $X$ be a universe of discourse and $S=\{s_\alpha \mid \alpha=0,1,2,\dots,g\}$ be a linguistic term set; then, an HIFLTS on $X$ is given by two functions $h$ and $h'$ that, when applied to an element of $X$, return finite and ordered subsets of consecutive linguistic terms of $S$. This can be presented mathematically as:
$A=\{\langle x, h(x), h'(x)\rangle \mid x\in X\},$
where $h(x)$ and $h'(x)$ denote the possible membership and non-membership degrees, in terms of consecutive linguistic terms, of the element $x\in X$ to the set $A$, such that the following conditions are satisfied:
(i) 
$\max h(x)+\min h'(x)\le s_g$;
(ii) 
$\min h(x)+\max h'(x)\le s_g$.

2.3. Probabilistic Linguistic Term Sets

Recently, in 2016, Pang et al. [19] introduced the concept of PLTSs by attaching a probability to each linguistic term, which is basically a generalization of the HFLTS; they thus opened a new dimension of research in decision theory.
Definition 4 
([19]). Let $S=\{s_\alpha \mid \alpha=0,1,2,\dots,g\}$ be a linguistic term set; then, a PLTS can be presented as follows:
$L(p)=\left\{L_i(p_i) \,\middle|\, L_i\in S,\ p_i\ge 0,\ i=1,2,\dots,\#L(p),\ \textstyle\sum_{i=1}^{\#L(p)} p_i\le 1\right\},$
where $L_i(p_i)$ is the $i$th linguistic term $L_i$ associated with the probability $p_i$, and $\#L(p)$ denotes the number of linguistic terms in $L(p)$.
Definition 5 
([19]). Let $L(p)=\{L_i(p_i) \mid i=1,2,\dots,\#L(p)\}$, and let $r_i$ be the lower index of the linguistic term $L_i$. $L(p)$ is called an ordered PLTS if all the elements $L_i(p_i)$ in $L(p)$ are ranked according to the values of $r_i\times p_i$ in descending order.
However, in a PLTS, two or more linguistic terms may have equal values of $r_i\times p_i$. Take the PLTS $L(p)=\{s_1(0.4),\ s_2(0.2),\ s_3(0.4)\}$; here $r_1\times p_1=r_2\times p_2=0.4$.
According to the above rule, these two values cannot be arranged. To handle this type of problem, Zhang et al. [29] defined the following ranking rule.
Definition 6 
([29]). Let $L(p)=\{L_i(p_i) \mid i=1,2,\dots,\#L(p)\}$, and let $r_i$ be the lower index of the linguistic term $L_i$.
(1) 
If the values of $r_i p_i$ are different for all elements in the PLTS, then arrange all the elements directly according to the values of $r_i p_i$.
(2) 
If the values of $r_i p_i$ are equal for two or more elements, then:
(a) 
when the lower indices $r_i$ are unequal, arrange these $r_i p_i$ according to the values of $r_i$ in descending order;
(b) 
when the lower indices $r_i$ are equal, arrange these $r_i p_i$ according to the values of $p_i$ in descending order.
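The complete ranking rule can be sketched in Python; as an illustration (this representation is not part of the original paper), a PLTS is given as a list of (lower index, probability) pairs:

```python
def order_plts(elems):
    """Order a PLTS, given as (r_i, p_i) pairs, by r_i * p_i in
    descending order; ties are broken first by the lower index r_i,
    then by the probability p_i (both descending), per Definition 6."""
    return sorted(elems,
                  key=lambda rp: (rp[0] * rp[1], rp[0], rp[1]),
                  reverse=True)

# L(p) = {s1(0.4), s2(0.2), s3(0.4)}: r1*p1 = r2*p2 = 0.4, so the tie
# is broken by the lower indices (2 before 1).
print(order_plts([(1, 0.4), (2, 0.2), (3, 0.4)]))
# [(3, 0.4), (2, 0.2), (1, 0.4)]
```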
Definition 7 
([19]). Let $L(p)$ be a PLTS with $\sum_{i=1}^{\#L(p)} p_i<1$; then, the associated PLTS is denoted and defined as
$\dot{L}(p)=\{L_i(\dot{p}_i) \mid i=1,2,\dots,\#L(p)\},$
where $\dot{p}_i=p_i\big/\sum_{i=1}^{\#L(p)} p_i$ for $i=1,2,\dots,\#L(p)$.
Definition 8 
([19]). Let $L_1(p)=\{L_{1i}(p_{1i}) \mid i=1,2,\dots,\#L_1(p)\}$ and $L_2(p)=\{L_{2i}(p_{2i}) \mid i=1,2,\dots,\#L_2(p)\}$ be two PLTSs, where $\#L_1(p)$ and $\#L_2(p)$ denote the numbers of linguistic terms in $L_1(p)$ and $L_2(p)$, respectively. If $\#L_1(p)>\#L_2(p)$, then $\#L_1(p)-\#L_2(p)$ linguistic terms will be added to $L_2(p)$ so that the numbers of elements in $L_1(p)$ and $L_2(p)$ become equal. The added linguistic terms are the smallest one(s) in $L_2(p)$, and the probabilities of the added linguistic terms are zero.
Let $L_1(p)=\{L_{1i}(p_{1i})\}$ and $L_2(p)=\{L_{2i}(p_{2i})\}$; then, the normalized PLTSs, denoted by $\tilde{L}_1(p)=\{\tilde{L}_{1i}(p_{1i})\}$ and $\tilde{L}_2(p)=\{\tilde{L}_{2i}(p_{2i})\}$, can be obtained by the following two steps:
(1)
If $\sum_{i=1}^{\#L_k(p)} p_{ki}<1$, then $\dot{L}_k(p)$, $k=1,2$, is calculated according to Definition 7.
(2)
If $\#L_1(p)\ne\#L_2(p)$, then, according to Definition 8, add some linguistic terms to the PLTS with the smaller number of elements.
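The two normalization steps can be sketched as follows (an illustrative reading, with each PLTS given as a list of (lower index, probability) pairs):

```python
def normalize_plts(L1, L2):
    """Normalize two PLTSs given as lists of (r_i, p_i) pairs.

    Step 1: rescale each probability vector to sum to one (Definition 7).
    Step 2: pad the shorter PLTS with its smallest linguistic term,
    carrying probability zero, until the lengths match (Definition 8).
    """
    def associate(L):
        total = sum(p for _, p in L)
        return [(r, p / total) for r, p in L]

    L1, L2 = associate(L1), associate(L2)
    short = L1 if len(L1) < len(L2) else L2
    smallest = min(r for r, _ in short)
    short += [(smallest, 0.0)] * abs(len(L1) - len(L2))
    return L1, L2

L1 = [(2, 0.3), (3, 0.7)]
L2 = [(3, 0.4), (4, 0.3), (5, 0.3)]
print(normalize_plts(L1, L2))
# ([(2, 0.3), (3, 0.7), (2, 0.0)], [(3, 0.4), (4, 0.3), (5, 0.3)])
```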
The deviation degree between PLTSs, which is analogous to the Euclidean distance between hesitant fuzzy sets [30], can be defined as:
Definition 9 
([19]). Let $L_1(p)=\{L_{1i}(p_{1i}) \mid i=1,2,\dots,\#L_1(p)\}$ and $L_2(p)=\{L_{2i}(p_{2i}) \mid i=1,2,\dots,\#L_2(p)\}$ be two PLTSs, where $\#L_1(p)$ and $\#L_2(p)$ denote the numbers of linguistic terms in $L_1(p)$ and $L_2(p)$, respectively, with $\#L_1(p)=\#L_2(p)$. Then, the deviation degree between these two PLTSs can be defined as
$d(L_1(p),L_2(p))=\sqrt{\dfrac{1}{\#L_1(p)}\sum_{i=1}^{\#L_1(p)}\left(p_{1i}r_{1i}-p_{2i}r_{2i}\right)^2},$
where $r_{1i}$ and $r_{2i}$ denote the lower indices of the linguistic terms $L_{1i}$ and $L_{2i}$, respectively.
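Definition 9 can be sketched as below; the placement of the square root follows the Euclidean-distance analogy, and PLTSs are again represented, for illustration only, as (lower index, probability) pairs:

```python
from math import sqrt

def deviation_plts(L1, L2):
    """Deviation degree between two PLTSs of equal length (Definition 9),
    each given as a list of (r_i, p_i) pairs."""
    if len(L1) != len(L2):
        raise ValueError("normalize the PLTSs to equal length first")
    terms = ((p1 * r1 - p2 * r2) ** 2
             for (r1, p1), (r2, p2) in zip(L1, L2))
    return sqrt(sum(terms) / len(L1))

# Two already-normalized PLTSs of length 2:
print(deviation_plts([(1, 0.5), (2, 0.5)], [(2, 0.5), (3, 0.5)]))  # 0.5
```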
For further details of PLTSs, one can see Ref. [19].

3. Probabilistic Hesitant Intuitionistic Linguistic Term Set

Although the HIFLTS allows the DM to state his assessments using several linguistic terms, it cannot reflect the probabilities of those assessments.
To overcome this issue, in this section, the concept of the probabilistic hesitant intuitionistic linguistic term set (PHILTS), which is based on the concepts of the HIFLTS and the PLTS, is proposed. Furthermore, some basic operations for PHILTSs are also designed.
Definition 10.
Let $X$ be a universe of discourse and $S=\{s_\alpha \mid \alpha=0,1,2,\dots,g\}$ be a linguistic term set; then, a PHILTS on $X$ is given by two functions $l$ and $l'$ that, when applied to an element of $X$, return finite and ordered subsets of the consecutive linguistic terms of $S$ together with their occurrence probabilities. This can be mathematically expressed as
$A(p)=\left\{\left\langle x,\ l(x)(p(x)),\ l'(x)(p'(x))\right\rangle \mid x\in X\right\}$, with $l(x)(p(x))=\{l_i(x)(p_i(x)) \mid p_i(x)\ge 0,\ i=1,2,\dots,\#l(x)(p(x)),\ \sum_{i=1}^{\#l(x)(p(x))}p_i(x)\le 1\}$ and $l'(x)(p'(x))=\{l'_j(x)(p'_j(x)) \mid p'_j(x)\ge 0,\ j=1,2,\dots,\#l'(x)(p'(x)),\ \sum_{j=1}^{\#l'(x)(p'(x))}p'_j(x)\le 1\},$
where $l(x)(p(x))$ and $l'(x)(p'(x))$ are PLTSs denoting, respectively, the membership and non-membership degrees of the element $x\in X$ to the set $A(p)$, such that the following two conditions are satisfied:
(i) 
$\max l(x)+\min l'(x)\le s_g$;
(ii) 
$\min l(x)+\max l'(x)\le s_g$.
For the sake of simplicity and convenience, we call the pair $A(x)(p(x))=\langle l(x)(p(x)),\ l'(x)(p'(x))\rangle$ a probabilistic hesitant intuitionistic linguistic term element (PHILTE), denoted by $A(p)=\langle l(p), l'(p')\rangle$ for short.
Remark 1.
In particular, if the probabilities of all linguistic terms in the membership part and in the non-membership part become equal, then the PHILTE reduces to an HIFLTE.
Example 2.
Let $S=\{s_0=\text{extremely poor},\ s_1=\text{very poor},\ s_2=\text{poor},\ s_3=\text{medium},\ s_4=\text{good},\ s_5=\text{very good},\ s_6=\text{extremely good}\}$ be a linguistic term set. A PHILTS is $A(p)=\{\langle x_1,\ \{s_1(0.4), s_2(0.1), s_3(0.35)\},\ \{s_3(0.3), s_4(0.4)\}\rangle,\ \langle x_2,\ \{s_4(0.33), s_5(0.5)\},\ \{s_1(0.2), s_2(0.45)\}\rangle\}$.
One can easily check the conditions of a PHILTS for $A(p)$.
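The two conditions can indeed be checked mechanically on the lower indices; a small sketch (the pair-list representation is illustrative, not the paper's notation), applied to the element $x_1$ of Example 2:

```python
def is_valid_philts(mem, nonmem, g):
    """Check conditions (i) and (ii) of Definition 10 on lower indices:
    max(l) + min(l') <= g and min(l) + max(l') <= g."""
    mem_idx = [r for r, _ in mem]
    non_idx = [r for r, _ in nonmem]
    return (max(mem_idx) + min(non_idx) <= g and
            min(mem_idx) + max(non_idx) <= g)

# x1 in Example 2: l = {s1(0.4), s2(0.1), s3(0.35)}, l' = {s3(0.3), s4(0.4)}
print(is_valid_philts([(1, 0.4), (2, 0.1), (3, 0.35)],
                      [(3, 0.3), (4, 0.4)], g=6))  # True
```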
To illustrate the PHILTS more straightforwardly, in the following, a practical life example is given to depict the difference between the PHILTS and HIFLTS:
Example 3.
Take the evaluation of a vehicle on the comfort degree attribute/criterion as an example, and let $S$ be the linguistic term set used in the above example. An expert provides the HIFLTE $\langle\{s_1, s_2, s_3\},\{s_3, s_4\}\rangle$ on the comfort degree due to his/her hesitation in this evaluation. However, he/she is more confident in the linguistic term $s_2$ for the membership degree set and in the linguistic term $s_4$ for the non-membership degree set. The HIFLTS fails to express this confidence. Therefore, we utilize the PHILTS to present his/her evaluations. In this case, the evaluations can be expressed as $A(p)=\langle\{s_1(0.2), s_2(0.6), s_3(0.2)\},\{s_3(0.2), s_4(0.8)\}\rangle$.
In the following, the ordered PHILTE is defined to make sure that the operational results among PHILTEs can be determined easily.
Definition 11.
A PHILTE $A(p)=\langle l(p), l'(p')\rangle$ is said to be an ordered PHILTE if $l(p)$ and $l'(p')$ are ordered PLTSs.
Example 4.
Consider the PHILTE $A(p)=\langle\{s_1(0.4), s_2(0.1), s_3(0.35)\},\{s_3(0.3), s_4(0.4)\}\rangle$ used in Example 2. Then, according to Definition 11, the ordered PHILTE is $A(p)=\langle\{s_3(0.35), s_1(0.4), s_2(0.1)\},\{s_4(0.4), s_3(0.3)\}\rangle$.

3.1. The Normalization of PHILTEs

Ideally, the sum of the probabilities is one; however, if, in a PHILTE, either the membership probabilities or the non-membership probabilities sum to less than one, then this issue is resolved as follows.
Definition 12.
Consider a PHILTE $A(p)=\langle l(p), l'(p')\rangle$; the associated PHILTE $\dot{A}(p)=\langle \dot{l}(p), \dot{l}'(p')\rangle$ is defined, where
$\dot{l}(p)=\left\{l_i(\dot{p}_i) \mid i=1,2,\dots,\#l(p)\right\},\qquad \dot{p}_i=p_i\Big/\textstyle\sum_{i=1}^{\#l(p)}p_i,$
and
$\dot{l}'(p')=\left\{l'_j(\dot{p}'_j) \mid j=1,2,\dots,\#l'(p')\right\},\qquad \dot{p}'_j=p'_j\Big/\textstyle\sum_{j=1}^{\#l'(p')}p'_j.$
Example 5.
Consider the PHILTE $A(p)=\langle\{s_1(0.4), s_2(0.1), s_3(0.35)\},\{s_3(0.3), s_4(0.4)\}\rangle$. Here, we see that $\sum_{i=1}^{\#l(p)}p_i=0.85<1$ and also $\sum_{j=1}^{\#l'(p')}p'_j=0.7<1$, so the associated PHILTE is $\dot{A}(p)=\langle \dot{l}(p), \dot{l}'(p')\rangle=\langle\{s_1(0.4/0.85), s_2(0.1/0.85), s_3(0.35/0.85)\},\{s_3(0.3/0.7), s_4(0.4/0.7)\}\rangle$.
In the decision making process, experts often face problems in which the lengths of PHILTEs are different. Let $A(p)=\langle l(p), l'(p')\rangle$ and $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle$ be two PHILTEs of different lengths. Then, the following three cases are possible: (I) $\#l(p)\ne\#l_1(p_1)$; (II) $\#l'(p')\ne\#l'_1(p'_1)$; (III) $\#l(p)\ne\#l_1(p_1)$ and $\#l'(p')\ne\#l'_1(p'_1)$. In such situations, the lengths must be equalized by increasing the number of probabilistic linguistic terms in the PLTS in which that number is relatively small, because PHILTEs of different lengths create great problems in operations, in aggregation operators and in finding the deviation degree between two PHILTEs.
Definition 13.
Given any two PHILTEs $A(p)=\langle l(p), l'(p')\rangle$ and $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle$, if $\#l(p)>\#l_1(p_1)$ (Case I), then $\#l(p)-\#l_1(p_1)$ linguistic terms should be added to $l_1(p_1)$ to make their cardinalities identical. The added linguistic terms are the smallest one(s) in $l_1(p_1)$, and the probabilities of the added linguistic terms are zero.
The remaining cases are analogous to Case I .
Let $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle$ and $A_2(p_2)=\langle l_2(p_2), l'_2(p'_2)\rangle$ be two PHILTEs. Then, the normalization process involves the following two simple steps.
Step 1: If $\sum_{i=1}^{\#l_j(p_j)}p_{ji}<1$ or $\sum_{i=1}^{\#l'_j(p'_j)}p'_{ji}<1$, $j=1,2$, then we calculate $\dot{l}_j(p_j)$ and $\dot{l}'_j(p'_j)$, $j=1,2$, using Equations (5) and (6).
Step 2: If $\#l_1(p_1)\ne\#l_2(p_2)$ or $\#l'_1(p'_1)\ne\#l'_2(p'_2)$, then we add some elements, according to Definition 13, to the one with the smaller number of elements.
The resultant PHILTEs are called the normalized PHILTEs, denoted by $\tilde{A}_1(p_1)$ and $\tilde{A}_2(p_2)$.
Note that, for convenience of presentation, we also denote the normalized PHILTEs by $A_1(p_1)$ and $A_2(p_2)$.
Example 6.
Let $A(p)=\langle\{s_2(0.3), s_3(0.7)\},\{s_0(0.2), s_1(0.4), s_2(0.3)\}\rangle$ and $A_1(p_1)=\langle\{s_3(0.4), s_4(0.3), s_5(0.3)\},\{s_1(0.4), s_2(0.6)\}\rangle$; then:
Step 1: According to Equation (6), $\dot{l}'(p')=\{s_0(0.2/0.9), s_1(0.4/0.9), s_2(0.3/0.9)\}$.
Step 2: Since $\#l(p)<\#l_1(p_1)$, we add the linguistic term $s_2$ to $l(p)$ so that the numbers of linguistic terms in $l(p)$ and $l_1(p_1)$ become equal; thus, $l(p)=\{s_2(0.3), s_3(0.7), s_2(0)\}$. In addition, $\#l'_1(p'_1)<\#l'(p')$, so we add the linguistic term $s_1$ to $l'_1(p'_1)$, giving $l'_1(p'_1)=\{s_1(0.4), s_2(0.6), s_1(0)\}$. Therefore, after normalization, we have
$A(p)=\langle\{s_2(0.3), s_3(0.7), s_2(0)\},\{s_0(0.2/0.9), s_1(0.4/0.9), s_2(0.3/0.9)\}\rangle$ and
$A_1(p_1)=\langle\{s_3(0.4), s_4(0.3), s_5(0.3)\},\{s_1(0.4), s_2(0.6), s_1(0)\}\rangle$.
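The normalization steps above, applied to Example 6, can be sketched as follows (each PHILTE is represented, for illustration only, as a pair of lists of (lower index, probability) tuples):

```python
def normalize_philtes(A, B):
    """Normalize two PHILTEs A = (membership, non-membership), each part
    a list of (r, p) pairs: rescale probabilities to sum to one (Step 1),
    then pad the shorter part with its smallest term at probability zero
    (Step 2, Definition 13)."""
    def associate(L):
        total = sum(p for _, p in L)
        return [(r, p / total) for r, p in L]

    def pad(L, n):
        smallest = min(r for r, _ in L)
        return L + [(smallest, 0.0)] * (n - len(L))

    parts = []
    for La, Lb in zip(A, B):
        La, Lb = associate(La), associate(Lb)
        n = max(len(La), len(Lb))
        parts.append((pad(La, n), pad(Lb, n)))
    (Am, Bm), (An, Bn) = parts
    return (Am, An), (Bm, Bn)

# Example 6:
A = ([(2, 0.3), (3, 0.7)], [(0, 0.2), (1, 0.4), (2, 0.3)])
B = ([(3, 0.4), (4, 0.3), (5, 0.3)], [(1, 0.4), (2, 0.6)])
A_norm, B_norm = normalize_philtes(A, B)
print(A_norm[0])  # [(2, 0.3), (3, 0.7), (2, 0.0)]
print(B_norm[1])  # [(1, 0.4), (2, 0.6), (1, 0.0)]
```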

3.2. The Comparison between PHILTEs

In this section, the comparison between two PHILTEs is presented. For this purpose, the score function and the deviation degree of the PHILTE are defined.
Definition 14.
Let $A(p)=\langle l(p), l'(p')\rangle=\langle\{l_i(p_i)\},\{l'_j(p'_j)\}\rangle$, $i=1,2,\dots,\#l(p)$, $j=1,2,\dots,\#l'(p')$, be a PHILTE on a linguistic term set $S=\{s_\alpha \mid \alpha=0,1,2,\dots,g\}$, and let $r_i$ and $r'_j$ denote, respectively, the lower indices of the linguistic terms $l_i$ and $l'_j$; then, the score of $A(p)$ is denoted and defined as follows:
$E(A(p))=s_{\bar{\gamma}},$
where $\bar{\gamma}=\dfrac{g+\alpha-\beta}{2}$, with $\alpha=\dfrac{\sum_{i=1}^{\#l(p)}r_i p_i}{\sum_{i=1}^{\#l(p)}p_i}$ and $\beta=\dfrac{\sum_{j=1}^{\#l'(p')}r'_j p'_j}{\sum_{j=1}^{\#l'(p')}p'_j}$.
It is easy to see that $0\le\frac{g+\alpha-\beta}{2}\le g$, which means $s_{\bar{\gamma}}\in \bar{S}=\{s_\alpha \mid \alpha\in[0,g]\}$.
Apparently, the score function represents the average linguistic term of the PHILTE.
For two PHILTEs $A(p)$ and $A_1(p_1)$: if $E(A(p))>E(A_1(p_1))$, then $A(p)$ is superior to $A_1(p_1)$, denoted $A(p)>A_1(p_1)$; if $E(A(p))<E(A_1(p_1))$, then $A(p)$ is inferior to $A_1(p_1)$, denoted $A(p)<A_1(p_1)$; and, if $E(A(p))=E(A_1(p_1))$, then we cannot distinguish between them. For this case, we define another indicator, called the deviation degree, as follows:
Definition 15.
Let $A(p)=\langle l(p), l'(p')\rangle=\langle\{l_i(p_i)\},\{l'_j(p'_j)\}\rangle$, $i=1,2,\dots,\#l(p)$, $j=1,2,\dots,\#l'(p')$, be a PHILTE, and let $r_i$ and $r'_j$ denote, respectively, the lower indices of the linguistic terms $l_i$ and $l'_j$; then, with $\bar{\gamma}$ as in Definition 14, the deviation degree of $A(p)$ is denoted and defined as follows:
$\sigma(A(p))=\left(\dfrac{\sum_{i=1}^{\#l(p)}p_i(r_i-\bar{\gamma})^2}{\sum_{i=1}^{\#l(p)}p_i}+\dfrac{\sum_{j=1}^{\#l'(p')}p'_j(r'_j-\bar{\gamma})^2}{\sum_{j=1}^{\#l'(p')}p'_j}\right)^{1/2}.$
The deviation degree shows the spread around the average value in the PHILTE: a greater value of $\sigma$ implies lower consistency, while a smaller value of $\sigma$ indicates higher consistency.
Thus, $A(p)$ and $A_1(p_1)$ can be ranked by the following procedure:
(1)
if $E(A(p))>E(A_1(p_1))$, then $A(p)>A_1(p_1)$;
(2)
if $E(A(p))=E(A_1(p_1))$ and
(a)
$\sigma(A(p))>\sigma(A_1(p_1))$, then $A(p)<A_1(p_1)$;
(b)
$\sigma(A(p))<\sigma(A_1(p_1))$, then $A(p)>A_1(p_1)$;
(c)
$\sigma(A(p))=\sigma(A_1(p_1))$, then $A(p)$ is indifferent to $A_1(p_1)$, denoted $A(p)\sim A_1(p_1)$.
Example 7.
Let $A(p)=\langle l(p), l'(p')\rangle=\langle\{s_1(0.12), s_2(0.26), s_3(0.62)\},\{s_2(0.1), s_3(0.3), s_4(0.6)\}\rangle$ and $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle=\langle\{s_2(0.3), s_3(0.3)\},\{s_3(0.35), s_4(0.35)\}\rangle$, and let $S$ be the linguistic term set used in Example 2; then
$\alpha=\dfrac{1\times 0.12+2\times 0.26+3\times 0.62}{0.12+0.26+0.62}=2.5,\qquad \beta=\dfrac{2\times 0.1+3\times 0.3+4\times 0.6}{0.1+0.3+0.6}=3.5,$
$\bar{\gamma}=\dfrac{6+2.5-3.5}{2}=2.5,\qquad E(A(p))=s_{2.5};$
$\alpha_1=\dfrac{2\times 0.3+3\times 0.3}{0.3+0.3}=2.5,\qquad \beta_1=\dfrac{0.35\times 3+0.35\times 4}{0.35+0.35}=3.5,$
$\bar{\gamma}_1=\dfrac{6+2.5-3.5}{2}=2.5,\qquad E(A_1(p_1))=s_{2.5}.$
Since $E(A(p))=E(A_1(p_1))$, we have to calculate the deviation degrees of $A(p)$ and $A_1(p_1)$:
$\sigma(A(p))=\left(\dfrac{0.12(1-2.5)^2+0.26(2-2.5)^2+0.62(3-2.5)^2}{0.12+0.26+0.62}+\dfrac{0.6(4-3.5)^2+0.3(3-3.5)^2+0.1(2-3.5)^2}{0.6+0.3+0.1}\right)^{1/2}=0.529,$
$\sigma(A_1(p_1))=\left(\dfrac{0.3(2-2.5)^2+0.3(3-2.5)^2}{0.3+0.3}+\dfrac{0.35(3-3.5)^2+0.35(4-3.5)^2}{0.35+0.35}\right)^{1/2}=0.37.$
Thus, $\sigma(A(p))>\sigma(A_1(p_1))$, so $A(p)$ is inferior to $A_1(p_1)$.
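The score computation of Definition 14 can be sketched as follows; the (lower index, probability) representation is illustrative, and the values reproduce the tie $E(A(p))=E(A_1(p_1))=s_{2.5}$ of Example 7:

```python
def score_index(mem, nonmem, g):
    """gamma-bar of Definition 14: (g + alpha - beta) / 2, where alpha and
    beta are the probability-weighted mean lower indices of the membership
    and non-membership parts, each given as (r, p) pairs."""
    alpha = sum(r * p for r, p in mem) / sum(p for _, p in mem)
    beta = sum(r * p for r, p in nonmem) / sum(p for _, p in nonmem)
    return (g + alpha - beta) / 2

# Example 7, with g = 6:
A = ([(1, 0.12), (2, 0.26), (3, 0.62)], [(2, 0.1), (3, 0.3), (4, 0.6)])
A1 = ([(2, 0.3), (3, 0.3)], [(3, 0.35), (4, 0.35)])
print(round(score_index(*A, 6), 6), round(score_index(*A1, 6), 6))
# 2.5 2.5  (equal scores, so the deviation degree of
# Definition 15 is needed to break the tie)
```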
In the following, we present a theorem showing that association does not affect the score or the deviation degree of a PHILTE.
Theorem 1.
Let $A(p)=\langle l(p), l'(p')\rangle$ be a PHILTE and $\dot{A}(p)=\langle \dot{l}(p), \dot{l}'(p')\rangle$ be the associated PHILTE; then $E(A(p))=E(\dot{A}(p))$ and $\sigma(A(p))=\sigma(\dot{A}(p))$.
Proof. 
$E(\dot{A}(p))=s_{\dot{\bar{\gamma}}}$, where $\dot{\bar{\gamma}}=\frac{g+\dot{\alpha}-\dot{\beta}}{2}$ and $\dot{\alpha}=\sum_{i=1}^{\#l(p)}r_i \dot{p}_i\big/\sum_{i=1}^{\#l(p)}\dot{p}_i$. Since $\sum_{i=1}^{\#l(p)}\dot{p}_i=1$ and $\dot{p}_i=p_i\big/\sum_{i=1}^{\#l(p)}p_i$, it follows that $\dot{\alpha}=\sum_{i=1}^{\#l(p)}r_i p_i\big/\sum_{i=1}^{\#l(p)}p_i=\alpha$. Likewise, $\dot{\beta}=\sum_{j=1}^{\#l'(p')}r'_j \dot{p}'_j\big/\sum_{j=1}^{\#l'(p')}\dot{p}'_j$; since $\sum_{j=1}^{\#l'(p')}\dot{p}'_j=1$ and $\dot{p}'_j=p'_j\big/\sum_{j=1}^{\#l'(p')}p'_j$, this further implies that $\dot{\beta}=\sum_{j=1}^{\#l'(p')}r'_j p'_j\big/\sum_{j=1}^{\#l'(p')}p'_j=\beta$. Hence, $\dot{\bar{\gamma}}=\bar{\gamma}$ and $E(\dot{A}(p))=E(A(p))$.
Next, $\sigma(\dot{A}(p))=\left(\dfrac{\sum_{i=1}^{\#l(p)}\dot{p}_i(r_i-\dot{\bar{\gamma}})^2}{\sum_{i=1}^{\#l(p)}\dot{p}_i}+\dfrac{\sum_{j=1}^{\#l'(p')}\dot{p}'_j(r'_j-\dot{\bar{\gamma}})^2}{\sum_{j=1}^{\#l'(p')}\dot{p}'_j}\right)^{1/2}.$
Since $\sum_{i=1}^{\#l(p)}\dot{p}_i=1$, $\dot{p}_i=p_i\big/\sum_{i=1}^{\#l(p)}p_i$, $\sum_{j=1}^{\#l'(p')}\dot{p}'_j=1$, $\dot{p}'_j=p'_j\big/\sum_{j=1}^{\#l'(p')}p'_j$ and $\dot{\bar{\gamma}}=\bar{\gamma}$,
it follows that $\sigma(\dot{A}(p))=\left(\dfrac{\sum_{i=1}^{\#l(p)}p_i(r_i-\bar{\gamma})^2}{\sum_{i=1}^{\#l(p)}p_i}+\dfrac{\sum_{j=1}^{\#l'(p')}p'_j(r'_j-\bar{\gamma})^2}{\sum_{j=1}^{\#l'(p')}p'_j}\right)^{1/2}=\sigma(A(p))$. ☐
The following theorem shows that the order of comparison between two PHILTEs remains unaltered after normalization.
Theorem 2.
Let $A(p)=\langle l(p), l'(p')\rangle$ and $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle$ be any two PHILTEs, and let $\tilde{A}(p)=\langle \tilde{l}(p), \tilde{l}'(p')\rangle$ and $\tilde{A}_1(p_1)=\langle \tilde{l}_1(p_1), \tilde{l}'_1(p'_1)\rangle$ be the corresponding normalized PHILTEs, respectively; then $A(p)<A_1(p_1)\Leftrightarrow \tilde{A}(p)<\tilde{A}_1(p_1)$.
Proof. 
The proof is straightforward. According to Theorem 1, $E(\dot{A}(p))=E(A(p))$ and $\sigma(\dot{A}(p))=\sigma(A(p))$, so the order of comparison is preserved in Step 1 of the normalization process. As for Step 2, in that step we add some elements to the PHILTEs, but this does not change the order either, because the added elements carry zero probabilities; this means $E(\tilde{A}(p))=E(A(p))$ and $\sigma(\tilde{A}(p))=\sigma(A(p))$, and similarly for $A_1(p_1)$. Hence, the result holds true. ☐
In the following definition, we summarize the fact that comparison of any two PHILTEs can be done by their corresponding normalized PHILTEs.
Definition 16.
Let $A(p)=\langle l(p), l'(p')\rangle$ and $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle$ be any two PHILTEs, and let $\tilde{A}(p)=\langle \tilde{l}(p), \tilde{l}'(p')\rangle$ and $\tilde{A}_1(p_1)=\langle \tilde{l}_1(p_1), \tilde{l}'_1(p'_1)\rangle$ be the corresponding normalized PHILTEs, respectively; then
(I)
If $E(\tilde{A}(p))>E(\tilde{A}_1(p_1))$, then $A(p)>A_1(p_1)$.
(II)
If $E(\tilde{A}(p))<E(\tilde{A}_1(p_1))$, then $A(p)<A_1(p_1)$.
(III)
If $E(\tilde{A}(p))=E(\tilde{A}_1(p_1))$, then we are unable to decide from the scores which one is superior. In this case, we compare the PHILTEs on the basis of the deviation degrees of the normalized PHILTEs, as follows.
(1)
If $\sigma(\tilde{A}(p))>\sigma(\tilde{A}_1(p_1))$, then $A(p)<A_1(p_1)$.
(2)
If $\sigma(\tilde{A}(p))<\sigma(\tilde{A}_1(p_1))$, then $A(p)>A_1(p_1)$.
(3)
If $\sigma(\tilde{A}(p))=\sigma(\tilde{A}_1(p_1))$, then $A(p)$ is indifferent to $A_1(p_1)$, denoted $A(p)\sim A_1(p_1)$.
Example 8.
Let $S$ be the linguistic term set used in Example 2, $A(p)=\langle l(p), l'(p')\rangle=\langle\{s_1(0.12), s_2(0.26), s_3(0.62)\},\{s_2(0.1), s_3(0.3), s_4(0.5)\}\rangle$ and $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle=\langle\{s_2(0.3), s_3(0.3)\},\{s_3(0.35), s_4(0.35)\}\rangle$; then the corresponding normalized PHILTEs are $\tilde{A}(p)=\langle \tilde{l}(p), \tilde{l}'(p')\rangle=\langle\{s_1(0.12), s_2(0.26), s_3(0.62)\},\{s_3(0.375), s_4(0.625), s_3(0)\}\rangle$ and $\tilde{A}_1(p_1)=\langle \tilde{l}_1(p_1), \tilde{l}'_1(p'_1)\rangle=\langle\{s_2(0.5), s_3(0.5), s_2(0)\},\{s_3(0.5), s_4(0.5), s_3(0)\}\rangle$.
We calculate the scores of these normalized PHILTEs:
$\alpha=\dfrac{1\times 0.12+2\times 0.26+3\times 0.62}{0.12+0.26+0.62}=2.5,\qquad \beta=\dfrac{3\times 0.375+4\times 0.625+3\times 0}{0.375+0.625+0}=3.625,$
$\bar{\gamma}=\dfrac{6+2.5-3.625}{2}=2.437,\qquad E(\tilde{A}(p))=s_{2.437};$
$\alpha_1=\dfrac{2\times 0.5+3\times 0.5+2\times 0}{0.5+0.5}=2.5,\qquad \beta_1=\dfrac{0.5\times 3+0.5\times 4+0\times 3}{0.5+0.5}=3.5,$
$\bar{\gamma}_1=\dfrac{6+2.5-3.5}{2}=2.5,\qquad E(\tilde{A}_1(p_1))=s_{2.5}.$
Since $E(\tilde{A}(p))<E(\tilde{A}_1(p_1))$, it follows that $A(p)<A_1(p_1)$.

3.3. Basic Operations of PHILTEs

Based on the operational laws of PLTSs [19], we develop a basic operational framework for PHILTEs and investigate its properties in preparation for applications to practical real life problems. Hereafter, it is assumed that all PHILTEs are normalized.
Definition 17.
Let $A(p)=\langle l(p), l'(p')\rangle=\langle\{l_i(p_i)\},\{l'_j(p'_j)\}\rangle$, $i=1,2,\dots,\#l(p)$, $j=1,2,\dots,\#l'(p')$, and $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle=\langle\{l_{1i}(p_{1i})\},\{l'_{1j}(p'_{1j})\}\rangle$, $i=1,2,\dots,\#l_1(p_1)$, $j=1,2,\dots,\#l'_1(p'_1)$, be two normalized and ordered PHILTEs; then
Addition:
$A(p)\oplus A_1(p_1)=\left\langle l(p)\oplus l_1(p_1),\ l'(p')\oplus l'_1(p'_1)\right\rangle=\left\langle \bigcup_{l_i\in l(p),\, l_{1i}\in l_1(p_1)}\{p_i l_i\oplus p_{1i}l_{1i}\},\ \bigcup_{l'_j\in l'(p'),\, l'_{1j}\in l'_1(p'_1)}\{p'_j l'_j\oplus p'_{1j}l'_{1j}\}\right\rangle$
Multiplication:
$A(p)\otimes A_1(p_1)=\left\langle l(p)\otimes l_1(p_1),\ l'(p')\otimes l'_1(p'_1)\right\rangle=\left\langle \bigcup_{l_i\in l(p),\, l_{1i}\in l_1(p_1)}\{l_i^{p_i}\otimes l_{1i}^{p_{1i}}\},\ \bigcup_{l'_j\in l'(p'),\, l'_{1j}\in l'_1(p'_1)}\{(l'_j)^{p'_j}\otimes (l'_{1j})^{p'_{1j}}\}\right\rangle$
Scalar multiplication:
$\gamma A(p)=\left\langle \gamma l(p),\ \gamma l'(p')\right\rangle=\left\langle \bigcup_{l_i\in l(p)}\{\gamma p_i l_i\},\ \bigcup_{l'_j\in l'(p')}\{\gamma p'_j l'_j\}\right\rangle$
Scalar power:
$(A(p))^{\gamma}=\left\langle (l(p))^{\gamma},\ (l'(p'))^{\gamma}\right\rangle=\left\langle \bigcup_{l_i\in l(p)}\{l_i^{\gamma p_i}\},\ \bigcup_{l'_j\in l'(p')}\{(l'_j)^{\gamma p'_j}\}\right\rangle$
where $l_i$ and $l_{1i}$ are the $i$th linguistic terms in $l(p)$ and $l_1(p_1)$, respectively; $l'_j$ and $l'_{1j}$ are the $j$th linguistic terms in $l'(p')$ and $l'_1(p'_1)$, respectively; $p_i$ and $p_{1i}$ are the probabilities of the $i$th linguistic terms in $l(p)$ and $l_1(p_1)$, respectively; $p'_j$ and $p'_{1j}$ are the probabilities of the $j$th linguistic terms in $l'(p')$ and $l'_1(p'_1)$, respectively; and $\gamma$ denotes a nonnegative scalar.
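As a rough numerical illustration of how addition and scalar multiplication act, the sketch below represents each resulting virtual term only by its index value ($p_i r_i + p_{1i} r_{1i}$, or $\gamma p_i r_i$); this index-arithmetic reading is an interpretive assumption, not the paper's notation:

```python
def philte_add(A, B):
    """Addition of two normalized, ordered PHILTEs (Definition 17).
    Each PHILTE is a pair (membership, non-membership) of (r, p) lists;
    the union over all pairings collects the virtual-term indices
    p_i * r_i + p1_i * r1_i, for membership and non-membership alike."""
    def plus(L1, L2):
        return [p1 * r1 + p2 * r2 for r1, p1 in L1 for r2, p2 in L2]
    return plus(A[0], B[0]), plus(A[1], B[1])

def philte_scalar(gamma, A):
    """Scalar multiplication: each term contributes gamma * p * r."""
    def scale(L):
        return [gamma * p * r for r, p in L]
    return scale(A[0]), scale(A[1])

A = ([(1, 0.5), (2, 0.5)], [(3, 1.0)])
B = ([(2, 1.0)], [(1, 1.0)])
print(philte_add(A, B))     # ([2.5, 3.0], [4.0])
print(philte_scalar(2, A))  # ([1.0, 2.0], [6.0])
```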
Theorem 3.
Let $A(p)=\langle l(p), l'(p')\rangle$, $A_1(p_1)=\langle l_1(p_1), l'_1(p'_1)\rangle$ and $A_2(p_2)=\langle l_2(p_2), l'_2(p'_2)\rangle$ be any three ordered and normalized PHILTEs, and let $\gamma,\gamma_1,\gamma_2\ge 0$; then
(1) 
$A(p)\oplus A_1(p_1)=A_1(p_1)\oplus A(p)$;
(2) 
$(A(p)\oplus A_1(p_1))\oplus A_2(p_2)=A(p)\oplus (A_1(p_1)\oplus A_2(p_2))$;
(3) 
$\gamma(A(p)\oplus A_1(p_1))=\gamma A(p)\oplus \gamma A_1(p_1)$;
(4) 
$(\gamma_1+\gamma_2)A(p)=\gamma_1 A(p)\oplus \gamma_2 A(p)$;
(5) 
$A(p)\otimes A_1(p_1)=A_1(p_1)\otimes A(p)$;
(6) 
$(A(p)\otimes A_1(p_1))\otimes A_2(p_2)=A(p)\otimes (A_1(p_1)\otimes A_2(p_2))$;
(7) 
$(A(p)\otimes A_1(p_1))^{\gamma}=(A(p))^{\gamma}\otimes (A_1(p_1))^{\gamma}$;
(8) 
$(A(p))^{\gamma_1+\gamma_2}=(A(p))^{\gamma_1}\otimes (A(p))^{\gamma_2}$.
Proof. 
(1) $A(p)\oplus A_1(p_1) = \langle l(p), l'(p')\rangle \oplus \langle l_1(p_1), l'_1(p'_1)\rangle = \big\langle l(p)\oplus l_1(p_1),\ l'(p')\oplus l'_1(p'_1)\big\rangle$
$= \Big\langle \bigcup_{l_i\in l(p),\, l_{1i}\in l_1(p_1)}\{p_i l_i \oplus p_{1i} l_{1i}\},\ \bigcup_{l'_j\in l'(p'),\, l'_{1j}\in l'_1(p'_1)}\{p'_j l'_j \oplus p'_{1j} l'_{1j}\}\Big\rangle$
$= \Big\langle \bigcup_{l_i,\, l_{1i}}\{p_{1i} l_{1i} \oplus p_i l_i\},\ \bigcup_{l'_j,\, l'_{1j}}\{p'_{1j} l'_{1j} \oplus p'_j l'_j\}\Big\rangle$
$= \big\langle l_1(p_1)\oplus l(p),\ l'_1(p'_1)\oplus l'(p')\big\rangle = A_1(p_1)\oplus A(p)$.
(2) $\big(A(p)\oplus A_1(p_1)\big)\oplus A_2(p_2) = \big\langle \big(l(p)\oplus l_1(p_1)\big)\oplus l_2(p_2),\ \big(l'(p')\oplus l'_1(p'_1)\big)\oplus l'_2(p'_2)\big\rangle$
$= \Big\langle \bigcup_{l_i,\, l_{1i},\, l_{2i}}\{p_i l_i \oplus p_{1i} l_{1i} \oplus p_{2i} l_{2i}\},\ \bigcup_{l'_j,\, l'_{1j},\, l'_{2j}}\{p'_j l'_j \oplus p'_{1j} l'_{1j} \oplus p'_{2j} l'_{2j}\}\Big\rangle$
$= \big\langle l(p)\oplus\big(l_1(p_1)\oplus l_2(p_2)\big),\ l'(p')\oplus\big(l'_1(p'_1)\oplus l'_2(p'_2)\big)\big\rangle = A(p)\oplus\big(A_1(p_1)\oplus A_2(p_2)\big)$.
(3) $\gamma\big(A(p)\oplus A_1(p_1)\big) = \gamma\Big\langle \bigcup_{l_i,\, l_{1i}}\{p_i l_i \oplus p_{1i} l_{1i}\},\ \bigcup_{l'_j,\, l'_{1j}}\{p'_j l'_j \oplus p'_{1j} l'_{1j}\}\Big\rangle$
$= \Big\langle \bigcup_{l_i,\, l_{1i}}\{\gamma p_i l_i \oplus \gamma p_{1i} l_{1i}\},\ \bigcup_{l'_j,\, l'_{1j}}\{\gamma p'_j l'_j \oplus \gamma p'_{1j} l'_{1j}\}\Big\rangle$
$= \gamma A(p)\oplus\gamma A_1(p_1)$.
(4) $(\gamma_1+\gamma_2)A(p) = \Big\langle \bigcup_{l_i\in l(p)}\{(\gamma_1+\gamma_2)p_i l_i\},\ \bigcup_{l'_j\in l'(p')}\{(\gamma_1+\gamma_2)p'_j l'_j\}\Big\rangle$
$= \Big\langle \bigcup_{l_i\in l(p)}\{\gamma_1 p_i l_i \oplus \gamma_2 p_i l_i\},\ \bigcup_{l'_j\in l'(p')}\{\gamma_1 p'_j l'_j \oplus \gamma_2 p'_j l'_j\}\Big\rangle$
$= \gamma_1 A(p)\oplus\gamma_2 A(p)$.
(5) $A(p)\otimes A_1(p_1) = \Big\langle \bigcup_{l_i,\, l_{1i}}\{(l_i)^{p_i}\otimes(l_{1i})^{p_{1i}}\},\ \bigcup_{l'_j,\, l'_{1j}}\{(l'_j)^{p'_j}\otimes(l'_{1j})^{p'_{1j}}\}\Big\rangle$
$= \Big\langle \bigcup_{l_i,\, l_{1i}}\{(l_{1i})^{p_{1i}}\otimes(l_i)^{p_i}\},\ \bigcup_{l'_j,\, l'_{1j}}\{(l'_{1j})^{p'_{1j}}\otimes(l'_j)^{p'_j}\}\Big\rangle = A_1(p_1)\otimes A(p)$.
(6) $\big(A(p)\otimes A_1(p_1)\big)\otimes A_2(p_2) = \Big\langle \bigcup_{l_i,\, l_{1i},\, l_{2i}}\{(l_i)^{p_i}\otimes(l_{1i})^{p_{1i}}\otimes(l_{2i})^{p_{2i}}\},\ \bigcup_{l'_j,\, l'_{1j},\, l'_{2j}}\{(l'_j)^{p'_j}\otimes(l'_{1j})^{p'_{1j}}\otimes(l'_{2j})^{p'_{2j}}\}\Big\rangle$
$= A(p)\otimes\big(A_1(p_1)\otimes A_2(p_2)\big)$.
(7) $\big(A(p)\otimes A_1(p_1)\big)^{\gamma} = \Big\langle \bigcup_{l_i,\, l_{1i}}\big\{\big((l_i)^{p_i}\otimes(l_{1i})^{p_{1i}}\big)^{\gamma}\big\},\ \bigcup_{l'_j,\, l'_{1j}}\big\{\big((l'_j)^{p'_j}\otimes(l'_{1j})^{p'_{1j}}\big)^{\gamma}\big\}\Big\rangle$
$= \Big\langle \bigcup_{l_i,\, l_{1i}}\{(l_i)^{\gamma p_i}\otimes(l_{1i})^{\gamma p_{1i}}\},\ \bigcup_{l'_j,\, l'_{1j}}\{(l'_j)^{\gamma p'_j}\otimes(l'_{1j})^{\gamma p'_{1j}}\}\Big\rangle = A(p)^{\gamma}\otimes A_1(p_1)^{\gamma}$.
(8) $A(p)^{\gamma_1+\gamma_2} = \Big\langle \bigcup_{l_i\in l(p)}\{(l_i)^{(\gamma_1+\gamma_2)p_i}\},\ \bigcup_{l'_j\in l'(p')}\{(l'_j)^{(\gamma_1+\gamma_2)p'_j}\}\Big\rangle$
$= \Big\langle \bigcup_{l_i\in l(p)}\{(l_i)^{\gamma_1 p_i}\otimes(l_i)^{\gamma_2 p_i}\},\ \bigcup_{l'_j\in l'(p')}\{(l'_j)^{\gamma_1 p'_j}\otimes(l'_j)^{\gamma_2 p'_j}\}\Big\rangle = A(p)^{\gamma_1}\otimes A(p)^{\gamma_2}$. ☐

4. Aggregation Operators and Attribute Weights

This section discusses some basic aggregation operators for PHILTSs. The deviation degree between two PHILTEs is also defined here. Finally, we calculate the attribute weights in the light of PHILTEs.

4.1. The Aggregation Operators for PHILTEs

Aggregation operators are powerful tools for dealing with linguistic information. To make better use of PHILTEs in real-world problems, aggregation operators for PHILTEs are developed in the following.
Definition 18.
Let $A_k(p_k) = \langle l_k(p_k), l'_k(p'_k)\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then
$$\mathrm{PHILA}\big(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\big) = \frac{1}{n}\Big(\langle l_1(p_1), l'_1(p'_1)\rangle \oplus \langle l_2(p_2), l'_2(p'_2)\rangle \oplus \cdots \oplus \langle l_n(p_n), l'_n(p'_n)\rangle\Big) = \frac{1}{n}\Big\langle \bigcup_{l_{1i}\in l_1(p_1),\dots,\,l_{ni}\in l_n(p_n)}\{p_{1i} l_{1i} \oplus p_{2i} l_{2i} \oplus \cdots \oplus p_{ni} l_{ni}\},\ \bigcup_{l'_{1j}\in l'_1(p'_1),\dots,\,l'_{nj}\in l'_n(p'_n)}\{p'_{1j} l'_{1j} \oplus p'_{2j} l'_{2j} \oplus \cdots \oplus p'_{nj} l'_{nj}\}\Big\rangle$$
is called the probabilistic hesitant intuitionistic linguistic averaging (PHILA) operator.
Definition 19.
Let $A_k(p_k) = \langle l_k(p_k), l'_k(p'_k)\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then
$$\mathrm{PHILWA}\big(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\big) = w_1\langle l_1(p_1), l'_1(p'_1)\rangle \oplus w_2\langle l_2(p_2), l'_2(p'_2)\rangle \oplus \cdots \oplus w_n\langle l_n(p_n), l'_n(p'_n)\rangle = \Big\langle \bigcup_{l_{1i}\in l_1(p_1),\dots,\,l_{ni}\in l_n(p_n)}\{w_1 p_{1i} l_{1i} \oplus w_2 p_{2i} l_{2i} \oplus \cdots \oplus w_n p_{ni} l_{ni}\},\ \bigcup_{l'_{1j}\in l'_1(p'_1),\dots,\,l'_{nj}\in l'_n(p'_n)}\{w_1 p'_{1j} l'_{1j} \oplus w_2 p'_{2j} l'_{2j} \oplus \cdots \oplus w_n p'_{nj} l'_{nj}\}\Big\rangle$$
is called the probabilistic hesitant intuitionistic linguistic weighted averaging (PHILWA) operator, where $w = (w_1, w_2, \dots, w_n)^T$ is the weight vector of the $A_k(p_k)$ $(k = 1, 2, \dots, n)$, with $w_k \geq 0$ and $\sum_{k=1}^{n} w_k = 1$.
Particularly, if we take w = 1 n , 1 n , , 1 n t , then the PHILWA operator reduces to the PHILA operator.
Definition 20.
Let $A_k(p_k) = \langle l_k(p_k), l'_k(p'_k)\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then
$$\mathrm{PHILG}\big(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\big) = \Big(\langle l_1(p_1), l'_1(p'_1)\rangle \otimes \langle l_2(p_2), l'_2(p'_2)\rangle \otimes \cdots \otimes \langle l_n(p_n), l'_n(p'_n)\rangle\Big)^{1/n} = \Big\langle \bigcup_{l_{1i}\in l_1(p_1),\dots,\,l_{ni}\in l_n(p_n)}\big\{\big((l_{1i})^{p_{1i}} \otimes (l_{2i})^{p_{2i}} \otimes \cdots \otimes (l_{ni})^{p_{ni}}\big)^{1/n}\big\},\ \bigcup_{l'_{1j}\in l'_1(p'_1),\dots,\,l'_{nj}\in l'_n(p'_n)}\big\{\big((l'_{1j})^{p'_{1j}} \otimes (l'_{2j})^{p'_{2j}} \otimes \cdots \otimes (l'_{nj})^{p'_{nj}}\big)^{1/n}\big\}\Big\rangle$$
is called the probabilistic hesitant intuitionistic linguistic geometric (PHILG) operator.
Definition 21.
Let $A_k(p_k) = \langle l_k(p_k), l'_k(p'_k)\rangle$ $(k = 1, 2, \dots, n)$ be $n$ ordered and normalized PHILTEs. Then
$$\mathrm{PHILWG}\big(A_1(p_1), A_2(p_2), \dots, A_n(p_n)\big) = \langle l_1(p_1), l'_1(p'_1)\rangle^{w_1} \otimes \langle l_2(p_2), l'_2(p'_2)\rangle^{w_2} \otimes \cdots \otimes \langle l_n(p_n), l'_n(p'_n)\rangle^{w_n} = \Big\langle \bigcup_{l_{1i}\in l_1(p_1),\dots,\,l_{ni}\in l_n(p_n)}\{(l_{1i})^{w_1 p_{1i}} \otimes (l_{2i})^{w_2 p_{2i}} \otimes \cdots \otimes (l_{ni})^{w_n p_{ni}}\},\ \bigcup_{l'_{1j}\in l'_1(p'_1),\dots,\,l'_{nj}\in l'_n(p'_n)}\{(l'_{1j})^{w_1 p'_{1j}} \otimes (l'_{2j})^{w_2 p'_{2j}} \otimes \cdots \otimes (l'_{nj})^{w_n p'_{nj}}\}\Big\rangle$$
is called the probabilistic hesitant intuitionistic linguistic weighted geometric (PHILWG) operator, where $w = (w_1, w_2, \dots, w_n)^T$ is the weight vector of the $A_k(p_k)$ $(k = 1, 2, \dots, n)$, with $w_k \geq 0$ and $\sum_{k=1}^{n} w_k = 1$.
Particularly, if we take w = 1 n , 1 n , , 1 n t , then the PHILWG operator reduces to the PHILG operator.
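The weighted-averaging operator can be sketched numerically. The following is an illustrative sketch of PHILWA (Definition 19), not the authors' code: each aggregated membership term has index $w_1 p_{1i} r_{1i} + \cdots + w_n p_{ni} r_{ni}$ over all term combinations, and likewise for the non-membership part. PHILTEs are modeled as (membership, non-membership) lists of (index, probability) pairs; all names are our own.

```python
from itertools import product

def philwa(philtes, weights):
    """PHILWA: weighted average over all combinations of weighted terms."""
    mem = [sum(w * p * r for w, (r, p) in zip(weights, combo))
           for combo in product(*(a[0] for a in philtes))]
    non = [sum(w * q * t for w, (t, q) in zip(weights, combo))
           for combo in product(*(a[1] for a in philtes))]
    return mem, non

def phila(philtes):
    """PHILA is the special case of PHILWA with equal weights 1/n."""
    n = len(philtes)
    return philwa(philtes, [1.0 / n] * n)

A1 = ([(2, 1.0)], [(1, 1.0)])                 # toy PHILTEs
A2 = ([(4, 0.5), (3, 0.5)], [(2, 1.0)])
print(philwa([A1, A2], [0.6, 0.4]))
print(phila([A1, A2]))
```

With the equal-weight vector, `phila` reproduces the reduction of PHILWA to PHILA noted above.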

4.2. Maximizing Deviation Method for Calculating the Attribute Weights

The choice of weights directly affects the performance of the weighted aggregation operators. For this purpose, in this subsection the effective maximizing deviation method is adopted to calculate the weights in MAGDM when they are unknown or only partly known. Based on Definition 9, the deviation degree between two PHILTEs is defined as follows:
Definition 22.
Let $A(p)$ and $A_1(p_1)$ be any two PHILTEs of equal length. Then, the deviation degree $D$ between $A(p)$ and $A_1(p_1)$ is given by
$$D\big(A(p), A_1(p_1)\big) = d\big(l(p), l_1(p_1)\big) + d\big(l'(p'), l'_1(p'_1)\big)$$
where
$$d\big(l(p), l_1(p_1)\big) = \sqrt{\frac{1}{\#l(p)}\sum_{i=1}^{\#l(p)}\big(p_i r_i - p_{1i} r_{1i}\big)^2},$$
$$d\big(l'(p'), l'_1(p'_1)\big) = \sqrt{\frac{1}{\#l'(p')}\sum_{j=1}^{\#l'(p')}\big(p'_j r'_j - p'_{1j} r'_{1j}\big)^2},$$
$r_i$ denotes the lower index of the $i$th linguistic term of $l(p)$, and $r'_j$ denotes the lower index of the $j$th linguistic term of $l'(p')$.
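Definition 22 can be sketched directly. The following is a minimal illustration under our reading of the definition: $d$ is the root-mean-square difference of the probability-weighted lower indices $p_i r_i$, and terms are compared positionally after both PHILTEs have been ordered and normalized. Function names are our own.

```python
from math import sqrt

def d_part(terms_a, terms_b):
    """RMS difference of weighted indices p_i*r_i over paired terms."""
    n = len(terms_a)
    return sqrt(sum((pa * ra - pb * rb) ** 2
                    for (ra, pa), (rb, pb) in zip(terms_a, terms_b)) / n)

def deviation(a, b):
    """Deviation degree D = d(membership parts) + d(non-membership parts)."""
    return d_part(a[0], b[0]) + d_part(a[1], b[1])

# Toy PHILTEs of equal length: (membership, non-membership) term lists.
A  = ([(3, 0.5), (2, 0.5)], [(1, 1.0)])
A1 = ([(4, 0.5), (1, 0.5)], [(2, 1.0)])
print(deviation(A, A1))
```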
Based on the above definition, we now derive the attribute weight vector: when dealing with MAGDM problems over probabilistic linguistic data in which the weight information of the attributes is completely unknown or only partly known, the attribute weights must be determined in advance.
Given the set of alternatives $x = \{x_1, x_2, \dots, x_m\}$ and the set of $n$ attributes $c = \{c_1, c_2, \dots, c_n\}$, the deviation measure between the alternative $x_i$ and all the other alternatives with respect to the attribute $c_j$, by Equation (17), is
$$D_{ij}(w) = \sum_{q=1,\,q\neq i}^{m} w_j D(h_{ij}, h_{qj}),\quad i = 1, 2, \dots, m,\ j = 1, 2, \dots, n$$
In accordance with the idea of the maximizing deviation method, if the deviation degree among the alternatives is smaller for an attribute, that attribute should be given a smaller weight; this indicates that the alternatives are homogeneous with respect to that attribute. Conversely, it should be given a larger weight. Let
$$D_j(w) = \sum_{i=1}^{m} D_{ij}(w) = \sum_{i=1}^{m}\sum_{q\neq i}^{m} w_j D(h_{ij}, h_{qj}) = \sum_{i=1}^{m}\sum_{q\neq i}^{m} w_j\Big(d\big(l_{ij}(p_{ij}), l_{qj}(p_{qj})\big) + d\big(l'_{ij}(p'_{ij}), l'_{qj}(p'_{qj})\big)\Big)$$
denote the deviation degree between one alternative and the others with respect to the attribute $c_j$, and let
$$D(w) = \sum_{j=1}^{n} D_j(w) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m} w_j D(h_{ij}, h_{qj}) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m} w_j\left(\sqrt{\frac{1}{\#l_{ij}(p_{ij})}\sum_{k_1=1}^{\#l_{ij}(p_{ij})}\big(p^{k_1}_{ij} r^{k_1}_{ij} - p^{k_1}_{qj} r^{k_1}_{qj}\big)^2} + \sqrt{\frac{1}{\#l'_{ij}(p'_{ij})}\sum_{k_2=1}^{\#l'_{ij}(p'_{ij})}\big(p'^{k_2}_{ij} r'^{k_2}_{ij} - p'^{k_2}_{qj} r'^{k_2}_{qj}\big)^2}\right)$$
express the sum of the deviation degrees over all attributes.
To obtain the attribute weight vector $w = (w_1, w_2, \dots, w_n)^T$, we build the following single-objective optimization model (named $M_1$) to make the deviation degree $D(w)$ as large as possible:
$$(M_1)\qquad \max D(w) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m} w_j D(h_{ij}, h_{qj})\qquad \text{s.t.}\ w_j \geq 0,\ j = 1, 2, \dots, n,\ \sum_{j=1}^{n} w_j^2 = 1$$
To solve the model $M_1$, we use the Lagrange function:
$$L(w, \eta) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m} w_j D(h_{ij}, h_{qj}) + \frac{\eta}{2}\Big(\sum_{j=1}^{n} w_j^2 - 1\Big)$$
where η is the Lagrange parameter.
Then, we compute the partial derivatives of the Lagrange function with respect to $w_j$ and $\eta$ and set them equal to zero:
$$\frac{\partial L(w, \eta)}{\partial w_j} = \sum_{i=1}^{m}\sum_{q\neq i}^{m} D(h_{ij}, h_{qj}) + \eta\, w_j = 0,\ j = 1, 2, \dots, n;\qquad \frac{\partial L(w, \eta)}{\partial \eta} = \sum_{j=1}^{n} w_j^2 - 1 = 0$$
By solving Equation (24), one can obtain the optimal weight vector $w = (w_1, w_2, \dots, w_n)^T$, with components
$$w_j = \frac{\displaystyle\sum_{i=1}^{m}\sum_{q\neq i}^{m} D(h_{ij}, h_{qj})}{\sqrt{\displaystyle\sum_{j=1}^{n}\Big(\sum_{i=1}^{m}\sum_{q\neq i}^{m} D(h_{ij}, h_{qj})\Big)^2}} = \frac{\displaystyle\sum_{i=1}^{m}\sum_{q\neq i}^{m}\Big(d\big(l_{ij}(p_{ij}), l_{qj}(p_{qj})\big) + d\big(l'_{ij}(p'_{ij}), l'_{qj}(p'_{qj})\big)\Big)}{\sqrt{\displaystyle\sum_{j=1}^{n}\Big(\sum_{i=1}^{m}\sum_{q\neq i}^{m}\Big(d\big(l_{ij}(p_{ij}), l_{qj}(p_{qj})\big) + d\big(l'_{ij}(p'_{ij}), l'_{qj}(p'_{qj})\big)\Big)\Big)^2}}$$
where $j = 1, 2, \dots, n$.
Obviously, $w_j \geq 0$ for all $j$. By normalizing Equation (25), we get:
$$w_j = \frac{\displaystyle\sum_{i=1}^{m}\sum_{q\neq i}^{m} D(h_{ij}, h_{qj})}{\displaystyle\sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m} D(h_{ij}, h_{qj})}$$
where $j = 1, 2, \dots, n$.
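The normalized maximizing-deviation weights can be sketched as follows. This is an illustration of Equation (26), with names of our own choosing: `matrix[i][j]` holds the evaluation $h_{ij}$ and `deviation` is any pairwise deviation degree $D$ with that signature; here the entries are plain numbers and $D$ is $|a-b|$ just to show the mechanics, whereas in the PHILTS setting the entries are PHILTEs.

```python
def attribute_weights(matrix, deviation):
    """w_j proportional to the total pairwise deviation under attribute c_j."""
    m, n = len(matrix), len(matrix[0])
    totals = [sum(deviation(matrix[i][j], matrix[q][j])
                  for i in range(m) for q in range(m) if q != i)
              for j in range(n)]
    s = sum(totals)
    return [t / s for t in totals]       # normalize so the weights sum to 1

# Toy data: 3 alternatives, 2 attributes; attribute 2 discriminates more,
# so it receives the larger weight.
matrix = [[1.0, 4.0],
          [2.0, 0.0],
          [3.0, 2.0]]
w = attribute_weights(matrix, lambda a, b: abs(a - b))
print(w)
```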
The above result can be applied when the attribute weight information is completely unknown. However, in real-life decision making problems the weight information is usually only partly known. In such cases, let H be the set of known weight information, which can be given in the following forms based on the literature [31,32,33,34].
Form 1. A weak ranking: $\{w_i \geq w_j\}$, $i \neq j$.
Form 2. A strict ranking: $\{w_i - w_j \geq \beta_i\}$, $i \neq j$.
Form 3. A ranking of differences: $\{w_i - w_j \geq w_k - w_l\}$, $j \neq k \neq l$.
Form 4. A ranking with multiples: $\{w_i \geq \beta_i w_j\}$, $i \neq j$.
Form 5. An interval form: $\{\beta_i \leq w_j \leq \beta_i + \epsilon_i\}$, $i \neq j$.
Here, $\beta_i$ and $\epsilon_i$ denote non-negative numbers.
With the set H, we can build the following model:
$$(M_2)\qquad \max D(w) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m} w_j D(h_{ij}, h_{qj})\qquad \text{s.t.}\ w \in H,\ w_j \geq 0,\ j = 1, 2, \dots, n,\ \sum_{j=1}^{n} w_j^2 = 1$$
from which the optimal weight vector $w = (w_1, w_2, \dots, w_n)^T$ is obtained.

5. MAGDM with Probabilistic Hesitant Intuitionistic Linguistic Information

In this section, two practical methods, i.e., an extended TOPSIS method and an aggregation based method, for MAGDM problems are proposed, where the opinions of DMs take the form of PHILTSs.

5.1. Extended TOPSIS Method for MAGDM with Probabilistic Hesitant Intuitionistic Linguistic Information

Of the numerous MAGDM methods, TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) is one of the most effective for ranking and selecting among a number of possible alternatives by measuring Euclidean distances. It has been successfully applied to evaluation problems with a finite number of alternatives and criteria [19,24,28], because it is easy to understand and implement and can measure the relative performance of each alternative.
In the following, we give the complete construction of the extended TOPSIS method in the PHILTS setting. The methodology involves the following steps.
Step 1: Analyze the given MAGDM problem. Since the problem involves group decision making, let there be $l$ decision makers (experts) $M = \{m_1, m_2, \dots, m_l\}$. The set of alternatives is $x = \{x_1, x_2, \dots, x_m\}$ and the set of attributes is $c = \{c_1, c_2, \dots, c_n\}$. The experts provide their linguistic evaluation values for membership and non-membership using the linguistic term set $S = \{s_0, s_1, \dots, s_g\}$ over each alternative $x_i$ $(i = 1, 2, \dots, m)$ with respect to each attribute $c_j$ $(j = 1, 2, \dots, n)$.
Each DM $m_k$ $(k = 1, 2, \dots, l)$ states his membership and non-membership linguistic evaluation values, keeping in mind all the alternatives and attributes, in the form of PHILTEs. Thus, the probabilistic hesitant intuitionistic linguistic decision matrix $H^k = \big(\langle l^k_{ij}(p_{ij}), l'^k_{ij}(p'_{ij})\rangle\big)_{m\times n}$ is constructed. It should be noted that the preference of alternative $x_i$ with respect to decision maker $m_k$ and attribute $c_j$ is denoted by the PHILTE $A^k_{ij}(p_{ij})$ in a group decision making problem with $l$ experts.
Step 2: Calculate the single probabilistic hesitant intuitionistic linguistic decision matrix $H$ by aggregating the opinions of the DMs $H^1, H^2, \dots, H^l$; $H = (h_{ij})_{m\times n}$, where
$$h_{ij} = \big\langle [s_{m_{ij}}(p_{ij}), s_{n_{ij}}(q_{ij})],\ [s'_{m_{ij}}(p'_{ij}), s'_{n_{ij}}(q'_{ij})]\big\rangle$$
with
$$s_{m_{ij}}(p_{ij}) = \min\Big\{\min_{k=1}^{l}\max l^k_{ij}(p_{ij}),\ \max_{k=1}^{l}\min l^k_{ij}(p_{ij})\Big\},$$
$$s_{n_{ij}}(q_{ij}) = \max\Big\{\min_{k=1}^{l}\max l^k_{ij}(q_{ij}),\ \max_{k=1}^{l}\min l^k_{ij}(q_{ij})\Big\},$$
$$s'_{m_{ij}}(p'_{ij}) = \min\Big\{\min_{k=1}^{l}\max l'^k_{ij}(p'_{ij}),\ \max_{k=1}^{l}\min l'^k_{ij}(p'_{ij})\Big\},$$
$$s'_{n_{ij}}(q'_{ij}) = \max\Big\{\min_{k=1}^{l}\max l'^k_{ij}(q'_{ij}),\ \max_{k=1}^{l}\min l'^k_{ij}(q'_{ij})\Big\}.$$
Here, $\max l^k_{ij}(p_{ij})$ and $\min l^k_{ij}(p_{ij})$ are taken according to the maximum and minimum values of $p_{ij}\times r^{\lambda}_{ij}$, $\lambda = 1, 2, \dots, \#l^k_{ij}(p_{ij})$, respectively, where $r^{\lambda}_{ij}$ denotes the lower index of the $\lambda$th linguistic term and $p_{ij}$ is its corresponding probability.
In the aggregated matrix $H$, the preference of alternative $x_i$ with respect to attribute $c_j$ is denoted by $h_{ij}$.
Each entry $h_{ij}$ of the aggregated matrix $H$ is itself a PHILTE. To see this, we must show that $s_{m_{ij}}(p_{ij}) + s'_{n_{ij}}(q'_{ij}) \leq s_g$ and $s_{n_{ij}}(q_{ij}) + s'_{m_{ij}}(p'_{ij}) \leq s_g$. Since $\langle l^k_{ij}(p_{ij}), l'^k_{ij}(p'_{ij})\rangle$ is a PHILTE for every expert $k$, alternative $i$ and attribute $j$, it must satisfy the conditions
$$\min l^k_{ij} + \max l'^k_{ij} \leq s_g,\qquad \max l^k_{ij} + \min l'^k_{ij} \leq s_g.$$
Thus, the above construction of $s_{m_{ij}}(p_{ij})$, $s_{n_{ij}}(q_{ij})$, $s'_{m_{ij}}(p'_{ij})$ and $s'_{n_{ij}}(q'_{ij})$ guarantees that $h_{ij}$ is a PHILTE.
Step 3: Normalize the probabilistic hesitant intuitionistic linguistic decision matrix H = h i j according to the method in Section 3.1.
Step 4: Obtain the weight vector $w = (w_1, w_2, \dots, w_n)^T$ of the attributes $c_j$ $(j = 1, 2, \dots, n)$:
$$w_j = \frac{\displaystyle\sum_{i=1}^{m}\sum_{q\neq i}^{m} D(h_{ij}, h_{qj})}{\displaystyle\sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m} D(h_{ij}, h_{qj})} = \frac{\displaystyle\sum_{i=1}^{m}\sum_{q\neq i}^{m}\Big(d\big(l_{ij}(p_{ij}), l_{qj}(p_{qj})\big) + d\big(l'_{ij}(p'_{ij}), l'_{qj}(p'_{qj})\big)\Big)}{\displaystyle\sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{q\neq i}^{m}\Big(d\big(l_{ij}(p_{ij}), l_{qj}(p_{qj})\big) + d\big(l'_{ij}(p'_{ij}), l'_{qj}(p'_{qj})\big)\Big)},\quad j = 1, 2, \dots, n$$
Step 5: The PHILTS positive ideal solution (PHILTS-PIS) of the alternatives, denoted by $A^+ = \langle l^+(p), l'^+(p')\rangle$, is defined as follows:
$$A^+ = \big\langle l^+(p) = \{l^+_1(p), l^+_2(p), \dots, l^+_n(p)\},\ l'^+(p') = \{l'^+_1(p'), l'^+_2(p'), \dots, l'^+_n(p')\}\big\rangle$$
where $l^+_j(p) = \{l^{k_1 +}_j \mid k_1 = 1, 2, \dots, \#l_{ij}(p)\}$ with $l^{k_1 +}_j = s_{\max_i p^{k_1}_{ij} r^{k_1}_{ij}}$, $j = 1, 2, \dots, n$, and $r^{k_1}_{ij}$ is the lower index of the linguistic term $l^{k_1}_{ij}$; while $l'^+_j(p') = \{l'^{k_2 +}_j \mid k_2 = 1, 2, \dots, \#l'_{ij}(p')\}$ with $l'^{k_2 +}_j = s_{\min_i p'^{k_2}_{ij} r'^{k_2}_{ij}}$, $j = 1, 2, \dots, n$, and $r'^{k_2}_{ij}$ is the lower index of the linguistic term $l'^{k_2}_{ij}$. Similarly, the PHILTS negative ideal solution (PHILTS-NIS) of the alternatives, denoted by $A^- = \langle l^-(p), l'^-(p')\rangle$, is defined as follows:
$$A^- = \big\langle l^-(p) = \{l^-_1(p), l^-_2(p), \dots, l^-_n(p)\},\ l'^-(p') = \{l'^-_1(p'), l'^-_2(p'), \dots, l'^-_n(p')\}\big\rangle$$
where $l^-_j(p) = \{l^{k_1 -}_j \mid k_1 = 1, 2, \dots, \#l_{ij}(p)\}$ with $l^{k_1 -}_j = s_{\min_i p^{k_1}_{ij} r^{k_1}_{ij}}$, $j = 1, 2, \dots, n$, and $r^{k_1}_{ij}$ is the lower index of the linguistic term $l^{k_1}_{ij}$; while $l'^-_j(p') = \{l'^{k_2 -}_j \mid k_2 = 1, 2, \dots, \#l'_{ij}(p')\}$ with $l'^{k_2 -}_j = s_{\max_i p'^{k_2}_{ij} r'^{k_2}_{ij}}$, $j = 1, 2, \dots, n$, and $r'^{k_2}_{ij}$ is the lower index of the linguistic term $l'^{k_2}_{ij}$.
Step 6: Compute the deviation degree between each alternative $x_i$ and the PHILTS-PIS $A^+$ as follows:
$$D(x_i, A^+) = \sum_{j=1}^{n} w_j D(h_{ij}, A^+) = \sum_{j=1}^{n} w_j\Big(d\big(l_{ij}(p), l^+_j(p)\big) + d\big(l'_{ij}(p'), l'^+_j(p')\big)\Big) = \sum_{j=1}^{n} w_j\left(\sqrt{\frac{1}{\#l_{ij}(p)}\sum_{k_1=1}^{\#l_{ij}(p)}\big(p^{k_1}_{ij} r^{k_1}_{ij} - p^{k_1}_{j} r^{k_1 +}_{j}\big)^2} + \sqrt{\frac{1}{\#l'_{ij}(p')}\sum_{k_2=1}^{\#l'_{ij}(p')}\big(p'^{k_2}_{ij} r'^{k_2}_{ij} - p'^{k_2}_{j} r'^{k_2 +}_{j}\big)^2}\right)$$
The smaller the deviation degree $D(x_i, A^+)$, the better the alternative $x_i$.
Similarly, compute the deviation degree between each alternative $x_i$ and the PHILTS-NIS $A^-$ as follows:
$$D(x_i, A^-) = \sum_{j=1}^{n} w_j D(h_{ij}, A^-) = \sum_{j=1}^{n} w_j\Big(d\big(l_{ij}(p), l^-_j(p)\big) + d\big(l'_{ij}(p'), l'^-_j(p')\big)\Big) = \sum_{j=1}^{n} w_j\left(\sqrt{\frac{1}{\#l_{ij}(p)}\sum_{k_1=1}^{\#l_{ij}(p)}\big(p^{k_1}_{ij} r^{k_1}_{ij} - p^{k_1}_{j} r^{k_1 -}_{j}\big)^2} + \sqrt{\frac{1}{\#l'_{ij}(p')}\sum_{k_2=1}^{\#l'_{ij}(p')}\big(p'^{k_2}_{ij} r'^{k_2}_{ij} - p'^{k_2}_{j} r'^{k_2 -}_{j}\big)^2}\right)$$
The larger the deviation degree $D(x_i, A^-)$, the better the alternative $x_i$.
Step 7: Determine $D^{\min}(x_i, A^+)$ and $D^{\max}(x_i, A^-)$, where
$$D^{\min}(x_i, A^+) = \min_{1\leq i\leq m} D(x_i, A^+)$$
and
$$D^{\max}(x_i, A^-) = \max_{1\leq i\leq m} D(x_i, A^-)$$
Step 8: Determine the closeness coefficient $Cl$ of each alternative $x_i$ to rank the alternatives:
$$Cl(x_i) = \frac{D(x_i, A^-)}{D^{\max}(x_i, A^-)} - \frac{D(x_i, A^+)}{D^{\min}(x_i, A^+)}$$
Step 9: Pick the best alternative on the basis of the closeness coefficient $Cl$: the larger the closeness coefficient $Cl(x_i)$, the better the alternative $x_i$. Thus, the best alternative is
$$x_b = \big\{x_i \mid \max_{1\leq i\leq m} Cl(x_i)\big\}$$
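Steps 6 to 9 can be sketched numerically once the deviation degrees from the ideal solutions are available. The following is an illustrative sketch, not the authors' code; the input vectors below reuse the deviation degrees reported in the case study of Section 6.1, and the function name is our own.

```python
def closeness(d_plus, d_minus):
    """Closeness coefficients Cl(x_i) = D(x_i,A-)/Dmax - D(x_i,A+)/Dmin."""
    d_min_plus = min(d_plus)       # Step 7: Dmin(x_i, A+)
    d_max_minus = max(d_minus)     #         Dmax(x_i, A-)
    return [dm / d_max_minus - dp / d_min_plus
            for dp, dm in zip(d_plus, d_minus)]

# Deviation degrees D(x_i, A+) and D(x_i, A-) from the case study.
d_plus  = [2.1211, 2.5516, 2.9129, 1.7999, 1.6494]
d_minus = [2.0142, 1.5861, 1.6204, 2.4056, 2.2812]
cl = closeness(d_plus, d_minus)
best = max(range(len(cl)), key=lambda i: cl[i])   # Step 9: index of x_b
print(cl, best)
```

On these inputs the largest closeness coefficient belongs to the fifth alternative, matching the ranking obtained in Section 6.1.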

5.2. The Aggregation-Based Method for MAGDM with Probabilistic Hesitant Intuitionistic Linguistic Information

In this subsection, the aggregation-based method for MAGDM is presented, where the preference opinions of the DMs are represented by PHILTSs. In Section 4, we developed the aggregation operators PHILA, PHILWA, PHILG and PHILWG. In this algorithm, we use the PHILWA operator to aggregate the attribute values of each alternative $x_i$ into overall attribute values. The first four steps are the same as in the extended TOPSIS method, so we go directly to Step 5.
Step 5: Determine the overall attribute values $\tilde{Z}_i(w)$ $(i = 1, 2, \dots, m)$ using the PHILWA operator, where $w = (w_1, w_2, \dots, w_n)^T$ is the weight vector of the attributes:
$$\tilde{Z}_i(w) = w_1\langle l_{i1}(p), l'_{i1}(p')\rangle \oplus w_2\langle l_{i2}(p), l'_{i2}(p')\rangle \oplus \cdots \oplus w_n\langle l_{in}(p), l'_{in}(p')\rangle = \Big\langle \bigcup_{l^{k_1}_{i1}\in l_{i1}(p),\dots,\,l^{k_1}_{in}\in l_{in}(p)}\{w_1 p^{k_1}_{i1} l^{k_1}_{i1} \oplus w_2 p^{k_1}_{i2} l^{k_1}_{i2} \oplus \cdots \oplus w_n p^{k_1}_{in} l^{k_1}_{in}\},\ \bigcup_{l'^{k_2}_{i1}\in l'_{i1}(p'),\dots,\,l'^{k_2}_{in}\in l'_{in}(p')}\{w_1 p'^{k_2}_{i1} l'^{k_2}_{i1} \oplus w_2 p'^{k_2}_{i2} l'^{k_2}_{i2} \oplus \cdots \oplus w_n p'^{k_2}_{in} l'^{k_2}_{in}\}\Big\rangle$$
where $i = 1, 2, \dots, m$.
Step 6: Compare the overall attribute values $\tilde{Z}_i(w)$ $(i = 1, 2, \dots, m)$ mutually, based on their score functions and deviation degrees, whose details are given in Section 3.2.
Step 7: Rank the alternatives x i i = 1 , 2 , , m according to the order of Z i ˜ w i = 1 , 2 , , m and pick the best alternative.
The flow chart of the proposed models is presented in Figure 1.

6. A Case Study

To validate the proposed theory and decision making models, in this section a practical example taken from [28] is solved. A group of seven people $m_l$ $(l = 1, 2, \dots, 7)$ needs to invest their savings in the most profitable way. They consider five possibilities: $x_1$ is real estate, $x_2$ is the stock market, $x_3$ is T-bills, $x_4$ is a national saving scheme, and $x_5$ is an insurance company. To determine the best option, the following attributes are taken into account: $c_1$ is the risk factor, $c_2$ is growth, $c_3$ is quick refund, and $c_4$ is the complicated-documents requirement. Based upon their knowledge and experience, they provide their opinions in terms of the following HIFLTSs.

6.1. The Extended TOPSIS Method for the Considered Case

We handle the above problem by applying the extended TOPSIS method.
Step 1: The probabilistic hesitant intuitionistic linguistic decision matrices derived from Table 1, Table 2 and Table 3 are shown in Table 4, Table 5 and Table 6, respectively.
Step 2: The decision matrix H in Table 7 is constructed by utilizing Table 4, Table 5 and Table 6.
Step 3: The normalized probabilistic hesitant intuitionistic linguistic decision matrix of the group is shown in Table 8.
Step 4: The weight vector is derived from Equation (26) as follows:
$$w = (0.2715, 0.2219, 0.2445, 0.2621)^T$$
Step 5: The PHILTS-PIS “ A + ” and the PHILTS-NIS “ A - ” of each alternative are derived using Equations (27) and (28) as follows:
A + = 3 , 3 , 0 , 0 , 3 , 2.4 , 0 , 0 , 3 , 1.6 , 0 , 0 , 3 , 2.5 , 0 , 0
A - = 0 , 0.661 , 2.25 , 1 , 1 , 1 , 2.25 , 1.25 , . 5 , 0.66 , 2 , 1.6 , 1 , 0.2 , 2 , 1.6
Step 6: Compute the deviation degrees of each alternative from $A^+$ and $A^-$ by Equations (29) and (30):
$$D(x_1, A^+) = 2.1211,\ D(x_2, A^+) = 2.5516,\ D(x_3, A^+) = 2.9129,\ D(x_4, A^+) = 1.7999,\ D(x_5, A^+) = 1.6494$$
$$D(x_1, A^-) = 2.0142,\ D(x_2, A^-) = 1.5861,\ D(x_3, A^-) = 1.6204,\ D(x_4, A^-) = 2.4056,\ D(x_5, A^-) = 2.2812$$
Step 7: Calculate $D^{\min}(x_i, A^+)$ and $D^{\max}(x_i, A^-)$ by Equations (31) and (32):
$$D^{\min}(x_i, A^+) = 1.6494,\qquad D^{\max}(x_i, A^-) = 2.4056$$
Step 8: Determine the closeness coefficient of each alternative $x_i$ by Equation (33):
$$Cl(x_1) = -0.4486,\ Cl(x_2) = -0.8876,\ Cl(x_3) = -1.0924,\ Cl(x_4) = -0.0912,\ Cl(x_5) = -0.0519$$
Step 9: Rank the alternatives according to $Cl(x_i)$ $(i = 1, 2, \dots, 5)$: $x_5 > x_4 > x_1 > x_2 > x_3$; thus, $x_5$ (insurance company) is the best alternative.

6.2. The Aggregation-Based Method for the Considered Case

We can also apply the aggregation-based method to attain the ranking of alternatives for the case study.
Step 1: Construct the probabilistic hesitant intuitionistic linguistic decision matrices of the group as listed in Table 4, Table 5 and Table 6, which are then aggregated and normalized as shown in Table 7 and Table 8.
Step 2: Utilize Equation (26) to obtain the weight vector
$$w = (0.2715, 0.2219, 0.2445, 0.2621)^T.$$
Step 3: Derive the overall attribute value of each alternative x i i = 1 , 2 , 3 , 4 , 5 by using Equation (35) :
$$\tilde{Z}_1(w) = \big\langle \{s_{1.8962}, s_{0.5187}\},\ \{s_{1.2847}, s_{0.5187}\}\big\rangle,$$
$$\tilde{Z}_2(w) = \big\langle \{s_{1.4074}, s_{0.9776}\},\ \{s_{1.4679}, s_{0.4934}\}\big\rangle,$$
$$\tilde{Z}_3(w) = \big\langle \{s_{1.7923}, s_{1.1256}\},\ \{s_{1.8096}, s_{0.9915}\}\big\rangle,$$
$$\tilde{Z}_4(w) = \big\langle \{s_{2.1467}, s_{1.642}\},\ \{s_{0.7977}, s_{0.8886}\}\big\rangle,$$
$$\tilde{Z}_5(w) = \big\langle \{s_{2.0596}, s_{1.8546}\},\ \{s_{1.0267}, s_{0.8043}\}\big\rangle.$$
Step 4: Compute the score of each attribute value Z i ˜ w by Definition 14:
E Z 1 ˜ w = s 3.1528 , E Z 2 ˜ w = s 3.1059 , E Z 3 ˜ w = s 3.0584 , E Z 4 ˜ w = s 4.0512 , E Z 5 ˜ w = s 5.8726
Step 5: Compare the overall attribute values of the alternatives according to the values of the score function. It is obvious that $x_5 > x_4 > x_1 > x_2 > x_3$. Thus, again, we get the best alternative $x_5$.

7. Discussions and Comparison

For the purpose of comparison, in this section the case study is solved again by applying the TOPSIS method with traditional HIFLTSs.
Step 1: The decision matrix X in Table 9 is constructed by utilizing Table 1, Table 2 and Table 3 as follows:
Step 2: Determine the HIFLTS-PIS “ P + ” and the HIFLTS-NIS “ P - ” for cost criteria c 1 , c 4 and benefit criteria c 2 ,c 3 as follows:
P + = s 0 , s 1 , s 3 , s 4 , s 5 , s 6 , s 0 , s 0 , s 5 , s 6 , s 0 , s 0 , s 0 , s 1 , s 3 , s 4
P - = s 6 , s 6 , s 0 , s 0 , s 1 , s 2 , s 3 , s 5 , s 0 , s 1 , s 3 , s 4 , s 6 , s 6 , s 0 , s 0
Note: One can see the detail of HIFLTS-PIS “ P + ” and the HIFLTS-NIS “ P - ” in [28].
Step 3: Calculate the positive ideal matrix D + and the negative ideal matrix D - as follows:
$$D^+ = \begin{pmatrix} 8 + 1 + 12 + 5 \\ 4 + 11 + 2 + 14 \\ 9 + 7 + 2 + 2 \\ 15 + 9 + 14 + 12 \\ 15 + 12 + 14 + 16 \end{pmatrix} = \begin{pmatrix} 26 \\ 31 \\ 20 \\ 50 \\ 57 \end{pmatrix}$$
Here, $D^+_1 = d(x_{11}, v^+_1) + d(x_{12}, v^+_2) + d(x_{13}, v^+_3) + d(x_{14}, v^+_4)$, in which
$$d(x_{11}, v^+_1) = d\big(([s_2, s_4], [s_1, s_3]),\ ([s_0, s_1], [s_3, s_4])\big) = |2-0| + |4-1| + |1-3| + |3-4| = 8$$
Other entries can be found by similar calculation.
$$D^- = \begin{pmatrix} 10 + 15 + 5 + 13 \\ 14 + 5 + 15 + 4 \\ 9 + 9 + 15 + 16 \\ 3 + 7 + 3 + 6 \\ 3 + 4 + 3 + 2 \end{pmatrix} = \begin{pmatrix} 43 \\ 38 \\ 49 \\ 19 \\ 12 \end{pmatrix}$$
Step 4: The relative closeness ( R C ) of each alternative to the ideal solution can be obtained as follows:
$$RC(x_1) = 43/(26 + 43) = 0.6232$$
$$RC(x_2) = 38/(31 + 38) = 0.5507$$
The $RC$ values of the other alternatives can be found by similar calculations:
$$RC(x_3) = 0.7101,\quad RC(x_4) = 0.2754,\quad RC(x_5) = 0.1739.$$
Step 5: The ranking of the alternatives $x_i$ $(i = 1, 2, \dots, 5)$ according to the closeness coefficient $RC(x_i)$ is:
$$x_3 > x_1 > x_2 > x_4 > x_5.$$
  • In Table 9, the disadvantages of HIFLTS are apparent: in a HIFLTS the probabilities of the linguistic terms are not considered, which means that all possible linguistic terms have the same occurrence possibility, which is unrealistic. Inspection of Table 7 shows that a PHILTS not only contains the linguistic terms but also their probabilities; thus, PHILTS constitutes an extension of HIFLTS.
  • The inspection of Table 10 reveals that the extended TOPSIS method and the aggregation-based method give the same best alternative x 5 . The TOPSIS method with the traditional HIFLTSs gives x 3 as the best alternative.
  • This difference in the best alternative in Table 10 is due to the effect of the probabilities of the membership and non-membership linguistic terms, which highlights their critical role. Thus, our methods are more rational for ranking the alternatives and, further, for finding the best one.
  • The extended TOPSIS method and aggregation-based method for MAGDM with PLTS information explained in [19] are more promising than the extended TOPSIS method and aggregation-based method for MAGDM with HFLTS information. However, a clear superiority of PHILTS is that it assigns to each element a degree of belongingness and also a degree of non-belongingness, each with a probability, whereas a PLTS assigns to each element only a belongingness degree with a probability. Various frameworks have been developed by DMs using PLTSs [19,29], but they remain limited, since there is no means of attributing reliability or confidence information to the degree of belongingness.
The comparisons and other aspects are summarized in Table 11.

8. Conclusions

Because of the vagueness of human thinking, it sometimes becomes difficult for experts to accurately express their opinions within usual fuzzy set theory, or even with HIFLTSs and PLTSs. For this purpose, in this article a new concept called PHILTS was introduced to extend the current HIFLTS and PLTS. To facilitate calculation with PHILTSs, a normalization process, basic operations and aggregation operators were also designed. An extended TOPSIS method and an aggregation-based method were proposed to solve group ranking problems with multiple conflicting criteria in the PHILTS setting, and the proposed models were compared with the existing TOPSIS model. Since PLTSs and HIFLTSs are special cases of PHILTSs, the new set grants DMs the freedom to express their opinions in a more dynamic way. Furthermore, the occurrence probabilities of the membership and non-membership linguistic term sets greatly affect the decision making, validating the importance of the theory and models designed in this manuscript. Probability is one of the best tools to handle the uncertainty of the future, so our proposed models are particularly suitable for decision making about possible future scenarios. However, their arithmetic complexity is high.
In the future, all the work done thus far for PLTSs and HIFLTSs can be studied for PHILTSs and then applied to decision making.

Author Contributions

Conceptualization, Z.B., T.R. and J.A.; Formal analysis, Z.B., M.G.A.M., T.R. and J.A.; Investigation, Z.B. and J.A.; Methodology, M.G.A.M.; Resources, T.R.; Software, M.G.A.M.

Funding

This work is partially supported by AGI Education, Auckland, New Zealand.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers, whose insightful comments and constructive suggestions helped us to significantly improve the quality of this paper. This research work is partially supported by AGI Education, Auckland, New Zealand.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  2. Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 2010, 25, 529–539. [Google Scholar] [CrossRef]
  3. Xu, Z.; Zhou, W. Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment. Fuzzy Optim. Decis. Mak. 2017, 16, 481–503. [Google Scholar] [CrossRef]
  4. Bashir, Z.; Rashid, T.; Wątróbski, J.; Sałabun, W.; Malik, A. Hesitant Probabilistic Multiplicative Preference Relations in Group Decision Making. Appl. Sci. 2018, 8, 398. [Google Scholar] [CrossRef]
  5. Alcantud, J.C.R.; Giarlotta, A. Necessary and possible hesitant fuzzy sets: A novel model for group decision making. Inf. Fusion 2019, 46, 63–76. [Google Scholar] [CrossRef]
  6. Zadeh, L.A. The concept of a linguistic variable and its applications to approximate reasoning. Inf. Sci. Part I II III 1975, 8–9, 43–80, 199–249, 301–357. [Google Scholar]
  7. Ju, Y.B.; Yang, S.H. Approaches for multi-attribute group decision making based on intuitionistic trapezoid fuzzy linguistic power aggregation operators. J. Intell. Fuzzy Syst. 2014, 27, 987–1000. [Google Scholar]
  8. Merigó, J.M.; Casanovas, M.; Martínez, L. Linguistic aggregation operators for linguistic decision making based on the Dempster–Shafer theory of evidence. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2010, 18, 287–304.
  9. Zhu, H.; Zhao, J.B.; Xu, Y. 2-Dimension linguistic computational model with 2-tuples for multi-attribute group decision making. Knowl. Based Syst. 2016, 103, 132–142.
  10. Meng, F.Y.; Tang, J. Extended 2-tuple linguistic hybrid aggregation operators and their application to multi-attribute group decision making. Int. J. Comput. Intell. Syst. 2014, 7, 771–784.
  11. Li, C.C.; Dong, Y. Multi-attribute group decision making methods with proportional 2-tuple linguistic assessments and weights. Int. J. Comput. Intell. Syst. 2014, 7, 758–770.
  12. Xu, Z.S. Multi-period multi-attribute group decision-making under linguistic assessments. Int. J. Gen. Syst. 2009, 38, 823–850.
  13. Li, D.F. Multiattribute group decision making method using extended linguistic variables. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2009, 17, 793–806.
  14. Agell, N.; Sánchez, M.; Prats, F.; Roselló, L. Ranking multi-attribute alternatives on the basis of linguistic labels in group decisions. Inf. Sci. 2012, 209, 49–60.
  15. Rodríguez, R.M.; Martínez, L.; Herrera, F. Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119.
  16. Zhu, J.; Li, Y. Hesitant Fuzzy Linguistic Aggregation Operators Based on the Hamacher t-norm and t-conorm. Symmetry 2018, 10, 189.
  17. Cui, W.; Ye, J. Multiple-Attribute Decision-Making Method Using Similarity Measures of Hesitant Linguistic Neutrosophic Numbers Regarding Least Common Multiple Cardinality. Symmetry 2018, 10, 330.
  18. Liu, D.; Liu, Y.; Chen, X. The New Similarity Measure and Distance Measure of a Hesitant Fuzzy Linguistic Term Set Based on a Linguistic Scale Function. Symmetry 2018, 10, 367.
  19. Pang, Q.; Wang, H.; Xu, Z. Probabilistic linguistic term sets in multi-attribute group decision making. Inf. Sci. 2016, 369, 128–143.
  20. Lin, M.; Xu, Z.; Zhai, Y.; Zhai, T. Multi-attribute group decision-making under probabilistic uncertain linguistic environment. J. Oper. Res. Soc. 2017.
  21. Atanassov, K. Intuitionistic Fuzzy Sets; Springer: Heidelberg, Germany, 1999.
  22. Bashir, Z.; Rashid, T.; Wątróbski, J.; Sałabun, W.; Ali, J. Intuitionistic-fuzzy goals in zero-sum multi criteria matrix games. Symmetry 2017, 9, 158.
  23. Beg, I.; Rashid, T. Group Decision making Using Intuitionistic Hesitant Fuzzy Sets. Int. J. Fuzzy Logic Intell. Syst. 2014, 14, 181–187.
  24. Boran, F.E.; Gen, S.; Kurt, M.; Akay, D. A multi-criteria intuitionistic fuzzy group decision making for supplier selection with TOPSIS method. Expert Syst. Appl. 2009, 36, 11363–11368.
  25. De, S.K.; Biswas, R.; Roy, A.R. An application of intuitionistic fuzzy sets in medical diagnosis. Fuzzy Sets Syst. 2001, 117, 209–213.
  26. Li, D.F. Multiattribute decision making models and methods using intuitionistic fuzzy sets. J. Comput. Syst. Sci. 2005, 70, 73–85.
  27. Liu, P.; Mahmood, T.; Khan, Q. Multi-Attribute Decision-Making Based on Prioritized Aggregation Operator under Hesitant Intuitionistic Fuzzy Linguistic Environment. Symmetry 2017, 9, 270.
  28. Beg, I.; Rashid, T. Hesitant intuitionistic fuzzy linguistic term sets. Notes Intuit. Fuzzy Sets 2014, 20, 53–64.
  29. Zhang, Y.; Xu, Z.; Wang, H.; Liao, H. Consistency-based risk assessment with probabilistic linguistic preference relation. Appl. Soft Comput. 2016, 49, 817–833.
  30. Xu, Z.S.; Xia, M.M. On distance and correlation measures of hesitant fuzzy information. Int. J. Intell. Syst. 2011, 26, 410–425.
  31. Kim, S.H.; Ahn, B.S. Interactive group decision making procedure under incomplete information. Eur. J. Oper. Res. 1999, 116, 498–507.
  32. Kim, S.H.; Choi, S.H.; Kim, J.K. An interactive procedure for multiple attribute group decision making with incomplete information: Range-based approach. Eur. J. Oper. Res. 1999, 118, 139–152.
  33. Park, K.S. Mathematical programming models for characterizing dominance and potential optimality when multicriteria alternative values and weights are simultaneously incomplete. IEEE Trans. Syst. Man Cybern. 2004, 34, 601–614.
  34. Xu, Z.S. An interactive procedure for linguistic multiple attribute decision making with incomplete weight information. Fuzzy Optim. Decis. Mak. 2007, 6, 17–27.
Figure 1. Extended TOPSIS and Aggregation-based models.
Table 1. Decision matrix provided by the DMs 1, 2, 3 (m1, m2, m3). Each cell lists the membership term set followed by the non-membership term set.

|    | c1 | c2 | c3 | c4 |
| x1 | ({s3, s4, s5}, {s1, s2}) | ({s4, s5}, {s0, s1}) | ({s1, s2}, {s3, s4}) | ({s1, s2}, {s3, s4}) |
| x2 | ({s1, s2}, {s3, s4}) | ({s3, s4, s5}, {s1, s2}) | ({s3, s4}, {s0, s1}) | ({s4, s5}, {s1, s2}) |
| x3 | ({s4, s5}, {s0, s1, s2}) | ({s3, s4}, {s1, s2}) | ({s5, s6}, {s0}) | ({s1, s2}, {s2, s3, s4}) |
| x4 | ({s5, s6}, {s0, s1}) | ({s1, s2}, {s3, s4}) | ({s1, s2}, {s3, s4}) | ({s3, s4, s5}, {s1, s2}) |
| x5 | ({s6}, {s0}) | ({s1, s2, s3}, {s4, s5}) | ({s0, s1}, {s2, s3}) | ({s4, s5}, {s1, s2}) |
Table 2. Decision matrix provided by the DMs 4, 5 (m4, m5).

|    | c1 | c2 | c3 | c4 |
| x1 | ({s1, s2}, {s3, s4}) | ({s5, s6}, {s0, s1}) | ({s0, s1}, {s3, s4}) | ({s3, s4}, {s1, s2}) |
| x2 | ({s0, s1}, {s2, s3}) | ({s1, s2}, {s2, s3, s4}) | ({s4, s5}, {s0, s1}) | ({s5, s6}, {s0}) |
| x3 | ({s3, s4}, {s0, s1}) | ({s1, s2}, {s3, s4}) | ({s4, s5}, {s1, s2}) | ({s0, s1}, {s2, s3}) |
| x4 | ({s5, s6}, {s0}) | ({s3, s4}, {s0, s1, s2}) | ({s1, s2}, {s2, s3, s4}) | ({s4, s5}, {s0}) |
| x5 | ({s4, s5}, {s1, s2}) | ({s3, s4}, {s1, s2, s3}) | ({s1, s2}, {s3, s4}) | ({s5, s6}, {s0}) |
Table 3. Decision matrix provided by the DMs 6, 7 (m6, m7).

|    | c1 | c2 | c3 | c4 |
| x1 | ({s4, s5}, {s0, s1}) | ({s5, s6}, {s0}) | ({s3, s4}, {s1, s2}) | ({s0, s1}, {s3, s4}) |
| x2 | ({s3, s4}, {s1, s2, s3}) | ({s1, s2}, {s3, s4}) | ({s5, s6}, {s0}) | ({s3, s4}, {s1, s2}) |
| x3 | ({s1, s2}, {s2, s3, s4}) | ({s5, s6}, {s0}) | ({s4, s5}, {s0, s1}) | ({s0, s1}, {s3, s4}) |
| x4 | ({s4, s5}, {s1, s2}) | ({s4, s5}, {s0, s1}) | ({s0, s1, s2}, {s2, s3}) | ({s3, s4, s5}, {s1, s2}) |
| x5 | ({s3, s4}, {s0, s1, s2}) | ({s1, s2}, {s2, s3, s4}) | ({s2, s3}, {s3, s4}) | ({s6}, {s0}) |
Table 4. Probabilistic hesitant intuitionistic linguistic decision matrix H1 with respect to DMs 1, 2, 3 (m1, m2, m3). The probability of each term appears in parentheses.

|    | c1 | c2 |
| x1 | ({s3(0.14), s4(0.28), s5(0.28)}, {s1(0.28), s2(0.14)}) | ({s4(0.14), s5(0.42)}, {s0(0.42), s1(0.28)}) |
| x2 | ({s1(0.28), s2(0.14)}, {s3(0.42), s4(0.14)}) | ({s3(0.14), s4(0.14), s5(0.14)}, {s1(0.14), s2(0.28)}) |
| x3 | ({s4(0.28), s5(0.14)}, {s0(0.28), s1(0.28), s2(0.28)}) | ({s3(0.14), s4(0.28)}, {s1(0.14), s2(0.14)}) |
| x4 | ({s5(0.42), s6(0.28)}, {s0(0.28), s1(0.28)}) | ({s1(0.14), s2(0.14)}, {s3(0.14), s4(0.14)}) |
| x5 | ({s6(0.14)}, {s0(0.28)}) | ({s1(0.28), s2(0.28), s3(0.42)}, {s4(0.28), s5(0.14)}) |
|    | c3 | c4 |
| x1 | ({s1(0.28), s2(0.14)}, {s3(0.28), s4(0.28)}) | ({s1(0.28), s2(0.14)}, {s3(0.28), s4(0.28)}) |
| x2 | ({s3(0.14), s4(0.28)}, {s0(0.42), s1(0.28)}) | ({s4(0.14), s5(0.28)}, {s1(0.28), s2(0.28)}) |
| x3 | ({s5(0.42), s6(0.14)}, {s0(0.28)}) | ({s1(0.42), s2(0.14)}, {s2(0.28), s3(0.42), s4(0.28)}) |
| x4 | ({s1(0.42), s2(0.42)}, {s3(0.42), s4(0.28)}) | ({s3(0.28), s4(0.42), s5(0.42)}, {s1(0.28), s2(0.28)}) |
| x5 | ({s0(0.14), s1(0.28)}, {s2(0.28), s3(0.42)}) | ({s4(0.14), s5(0.28)}, {s1(0.14), s2(0.14)}) |
Table 5. Probabilistic hesitant intuitionistic linguistic decision matrix H2 with respect to DMs 4, 5 (m4, m5).

|    | c1 | c2 |
| x1 | ({s1(0.14), s2(0.14)}, {s3(0.14), s4(0.14)}) | ({s5(0.42), s6(0.28)}, {s0(0.42), s1(0.28)}) |
| x2 | ({s0(0.14), s1(0.28)}, {s2(0.28), s3(0.42)}) | ({s1(0.28), s2(0.28)}, {s2(0.28), s3(0.28), s4(0.28)}) |
| x3 | ({s3(0.14), s4(0.28)}, {s0(0.28), s1(0.28)}) | ({s1(0.14), s2(0.14)}, {s3(0.14), s4(0.14)}) |
| x4 | ({s5(0.42), s6(0.28)}, {s0(0.28)}) | ({s3(0.14), s4(0.28)}, {s0(0.28), s1(0.28), s2(0.14)}) |
| x5 | ({s4(0.28), s5(0.14)}, {s1(0.28), s2(0.28)}) | ({s3(0.14), s4(0.14)}, {s1(0.14), s2(0.28), s3(0.42)}) |
|    | c3 | c4 |
| x1 | ({s0(0.14), s1(0.28)}, {s3(0.28), s4(0.28)}) | ({s3(0.14), s4(0.14)}, {s1(0.14), s2(0.14)}) |
| x2 | ({s4(0.28), s5(0.28)}, {s0(0.42), s1(0.28)}) | ({s5(0.28), s6(0.14)}, {s0(0.14)}) |
| x3 | ({s4(0.28), s5(0.42)}, {s1(0.28), s2(0.14)}) | ({s0(0.28), s1(0.42)}, {s2(0.28), s3(0.42)}) |
| x4 | ({s1(0.42), s2(0.42)}, {s2(0.28), s3(0.42), s4(0.28)}) | ({s4(0.42), s5(0.42)}, {s0(0.14)}) |
| x5 | ({s1(0.28), s2(0.14)}, {s3(0.42), s4(0.28)}) | ({s5(0.28), s6(0.28)}, {s0(0.28)}) |
Table 6. Probabilistic hesitant intuitionistic linguistic decision matrix H3 with respect to DMs 6, 7 (m6, m7).

|    | c1 | c2 |
| x1 | ({s4(0.28), s5(0.28)}, {s0(0.14), s1(0.28)}) | ({s5(0.42), s6(0.28)}, {s0(0.42)}) |
| x2 | ({s3(0.14), s4(0.14)}, {s1(0.14), s2(0.28), s3(0.42)}) | ({s1(0.28), s2(0.28)}, {s3(0.28), s4(0.28)}) |
| x3 | ({s1(0.14), s2(0.14)}, {s2(0.28), s3(0.14), s4(0.14)}) | ({s5(0.28), s6(0.14)}, {s0(0.14)}) |
| x4 | ({s4(0.14), s5(0.42)}, {s1(0.28), s2(0.14)}) | ({s4(0.28), s5(0.14)}, {s0(0.28), s1(0.28)}) |
| x5 | ({s3(0.14), s4(0.28)}, {s0(0.28), s1(0.28), s2(0.28)}) | ({s1(0.28), s2(0.28)}, {s2(0.28), s3(0.42), s4(0.28)}) |
|    | c3 | c4 |
| x1 | ({s3(0.14), s4(0.14)}, {s1(0.14), s2(0.14)}) | ({s0(0.14), s1(0.28)}, {s3(0.28), s4(0.28)}) |
| x2 | ({s5(0.28), s6(0.14)}, {s0(0.42)}) | ({s3(0.14), s4(0.28)}, {s1(0.28), s2(0.28)}) |
| x3 | ({s4(0.28), s5(0.42)}, {s0(0.28), s1(0.28)}) | ({s0(0.28), s1(0.42)}, {s3(0.42), s4(0.28)}) |
| x4 | ({s0(0.14), s1(0.42), s2(0.42)}, {s2(0.28), s3(0.42)}) | ({s3(0.28), s4(0.42), s5(0.42)}, {s1(0.28), s2(0.28)}) |
| x5 | ({s2(0.14), s3(0.14)}, {s3(0.28), s4(0.28)}) | ({s6(0.28)}, {s0(0.28)}) |
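The probabilities in Tables 4–6 appear to be consistent with counting, for each linguistic term, how many of the seven DMs used it and truncating the resulting fraction to two decimals (1/7 → 0.14, 2/7 → 0.28, 3/7 → 0.42). A minimal sketch of that conversion, under this count-over-total reading (an assumption on our part; the helper name `term_probabilities` and the example counts are hypothetical):

```python
import math

def term_probabilities(term_counts, total_dms):
    """Map each linguistic term to (count / total DMs), truncated to two
    decimals, matching the two-decimal probabilities shown in Tables 4-6.
    The count-over-total interpretation is an assumption, not stated in
    the tables themselves."""
    return {term: math.floor(100 * count / total_dms) / 100
            for term, count in term_counts.items()}

# Hypothetical per-term counts for one membership set over the seven DMs:
print(term_probabilities({"s3": 1, "s4": 2, "s5": 3}, 7))
# -> {'s3': 0.14, 's4': 0.28, 's5': 0.42}
```

Truncation (rather than rounding) is what reproduces 2/7 as 0.28 instead of 0.29.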
Table 7. Decision matrix (H).

|    | c1 | c2 |
| x1 | ({s2(0.14), s4(0.28)}, {s1(0.28), s3(0.14)}) | ({s6(0.28), s5(0.42)}, {s0(0.42), s0(0.42)}) |
| x2 | ({s1(0.28), s3(0.14)}, {s4(0.14), s3(0.42)}) | ({s2(0.28), s3(0.14)}, {s2(0.28), s3(0.28)}) |
| x3 | ({s2(0.14), s0(0.14)}, {s1(0.28), s3(0.14)}) | ({s2(0.14), s6(0.14)}, {s0(0.14), s3(0.14)}) |
| x4 | ({s6(0.28), s5(0.42)}, {s0(0.28), s1(0.28)}) | ({s2(0.14), s5(0.14)}, {s1(0.28), s3(0.14)}) |
| x5 | ({s6(0.14), s6(0.14)}, {s0(0.28), s1(0.28)}) | ({s3(0.14), s2(0.28)}, {s5(0.14), s3(0.42)}) |
|    | c3 | c4 |
| x1 | ({s1(0.28), s3(0.14)}, {s2(0.14), s3(0.28)}) | ({s1(0.28), s3(0.14)}, {s2(0.14), s3(0.28)}) |
| x2 | ({s4(0.28), s4(0.14)}, {s0(0.42), s0(0.42)}) | ({s1(0.28), s3(0.14)}, {s0(0.14), s3(0.28)}) |
| x3 | ({s4(0.28), s5(0.42)}, {s0(0.28), s1(0.28)}) | ({s1(0.14), s2(0.42)}, {s4(0.28), s3(0.42)}) |
| x4 | ({s1(0.42), s2(0.42)}, {s4(0.28), s3(0.42)}) | ({s4(0.42), s5(0.42)}, {s0(0.14), s2(0.28)}) |
| x5 | ({s1(0.28), s2(0.14)}, {s4(0.28), s3(0.42)}) | ({s5(0.28), s6(0.28)}, {s0(0.28), s1(0.14)}) |
Table 8. The normalized probabilistic hesitant intuitionistic linguistic decision matrix.

|    | c1 | c2 |
| x1 | ({s4(0.6666667), s2(0.3333333)}, {s3(0.3333333), s1(0.6666667)}) | ({s5(0.6), s6(0.4)}, {s0(0.5), s0(0.5)}) |
| x2 | ({s3(0.3333333), s1(0.6666667)}, {s3(0.75), s4(0.25)}) | ({s3(0.3333333), s2(0.6666667)}, {s3(0.5), s2(0.5)}) |
| x3 | ({s0(0.5), s2(0.5)}, {s3(0.3333333), s1(0.6666667)}) | ({s6(0.5), s2(0.5)}, {s3(0.5), s0(0.5)}) |
| x4 | ({s5(0.6), s6(0.4)}, {s1(0.5), s0(0.5)}) | ({s5(0.5), s2(0.5)}, {s3(0.3333333), s1(0.6666667)}) |
| x5 | ({s6(0.5), s6(0.5)}, {s0(0.5), s1(0.5)}) | ({s2(0.6666667), s3(0.3333333)}, {s3(0.75), s5(0.25)}) |
|    | c3 | c4 |
| x1 | ({s3(0.3333333), s1(0.6666667)}, {s3(0.6666667), s2(0.3333333)}) | ({s3(0.3333333), s1(0.6666667)}, {s3(0.6666667), s2(0.3333333)}) |
| x2 | ({s4(0.6666667), s4(0.3333333)}, {s0(0.5), s0(0.5)}) | ({s3(0.3333333), s1(0.6666667)}, {s3(0.6666667), s0(0.3333333)}) |
| x3 | ({s5(0.6), s4(0.4)}, {s5(0.6), s4(0.4)}) | ({s2(0.75), s1(0.25)}, {s3(0.6), s4(0.4)}) |
| x4 | ({s1(0.5), s2(0.5)}, {s3(0.6), s4(0.4)}) | ({s5(0.5), s4(0.5)}, {s0(0.3333333), s2(0.6666667)}) |
| x5 | ({s1(0.6666667), s2(0.3333333)}, {s3(0.6), s4(0.4)}) | ({s6(0.5), s5(0.5)}, {s1(0.3333333), s0(0.6666667)}) |
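The step from Table 7 to Table 8 divides each term's probability by the sum of probabilities in its set and lists the terms by descending share: for alternative x1 under c1, the membership part {s2(0.14), s4(0.28)} becomes {s4(0.28/0.42 ≈ 0.667), s2(0.14/0.42 ≈ 0.333)}. A hedged sketch of this normalization (the helper name `normalize_plts` is ours, not the paper's):

```python
def normalize_plts(terms):
    """Normalize a probabilistic linguistic term set so its probabilities
    sum to 1, returning terms ordered by descending probability (the
    ordering seen in Table 8). `terms` is a list of
    (linguistic_term, probability) pairs."""
    total = sum(p for _, p in terms)
    return sorted(((t, p / total) for t, p in terms),
                  key=lambda tp: tp[1], reverse=True)

# Membership part of cell (x1, c1) in Table 7: {s2(0.14), s4(0.28)}
normalized = normalize_plts([("s2", 0.14), ("s4", 0.28)])
# s4 gets 0.28/0.42 (about 0.667), s2 gets 0.14/0.42 (about 0.333)
```

The same division is applied independently to the membership and non-membership parts of each cell, since their probabilities are treated as independent in the PHILTS framework.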
Table 9. Decision matrix (X).

|    | c1 | c2 | c3 | c4 |
| x1 | ({s2, s4}, {s1, s3}) | ({s5, s5}, {s0, s0}) | ({s1, s3}, {s2, s3}) | ({s1, s3}, {s2, s3}) |
| x2 | ({s1, s3}, {s3, s3}) | ({s2, s3}, {s2, s3}) | ({s4, s5}, {s0, s0}) | ({s4, s5}, {s0, s1}) |
| x3 | ({s2, s4}, {s1, s2}) | ({s3, s5}, {s0, s3}) | ({s5, s5}, {s0, s1}) | ({s1, s1}, {s3, s3}) |
| x4 | ({s5, s5}, {s0, s1}) | ({s2, s4}, {s1, s3}) | ({s1, s2}, {s3, s3}) | ({s4, s5}, {s1, s2}) |
| x5 | ({s4, s6}, {s0, s1}) | ({s2, s3}, {s3, s3}) | ({s1, s2}, {s3, s3}) | ({s5, s6}, {s0, s1}) |
Table 10. Comparison of results.

| TOPSIS [28] | x3 > x1 > x2 > x4 > x5 |
| Proposed extended TOPSIS | x5 > x4 > x1 > x2 > x3 |
| Proposed aggregation model | x5 > x4 > x1 > x2 > x3 |
Table 11. The advantages and limitations of the proposed methods.

| Advantages | Limitations |
| 1. PHILTS generalizes the existing PLTS models, since PHILTS takes more information from the DMs into account. | 1. Both membership and non-membership probabilistic data must be elicited. |
| 2. PHILTS is not affected by partial vagueness. | 2. Its computational cost is high. |
| 3. PHILTS is closer to people's natural language, leading to more fruitful decisions. | |
| 4. The attribute weights are calculated objectively (without favor). | |

Share and Cite

MDPI and ACS Style

Malik, M.G.A.; Bashir, Z.; Rashid, T.; Ali, J. Probabilistic Hesitant Intuitionistic Linguistic Term Sets in Multi-Attribute Group Decision Making. Symmetry 2018, 10, 392. https://doi.org/10.3390/sym10090392
