Article

Pythagorean Fuzzy Hamy Mean Operators in Multiple Attribute Group Decision Making and Their Application to Supplier Selection

School of Business, Sichuan Normal University, Chengdu 610101, China
* Authors to whom correspondence should be addressed.
Symmetry 2018, 10(10), 505; https://doi.org/10.3390/sym10100505
Submission received: 26 September 2018 / Revised: 5 October 2018 / Accepted: 10 October 2018 / Published: 15 October 2018
(This article belongs to the Special Issue Multi-Criteria Decision Aid methods in fuzzy decision problems)

Abstract

In this paper, we extend the Hamy mean (HM) operator and the dual Hamy mean (DHM) operator to Pythagorean fuzzy numbers (PFNs) and propose the Pythagorean fuzzy Hamy mean (PFHM) operator, the weighted Pythagorean fuzzy Hamy mean (WPFHM) operator, the Pythagorean fuzzy dual Hamy mean (PFDHM) operator, and the weighted Pythagorean fuzzy dual Hamy mean (WPFDHM) operator. Multiple attribute group decision making (MAGDM) methods are then developed with these operators. Finally, we use an applicable supplier selection example to illustrate the proposed methods.

1. Introduction

Pythagorean fuzzy sets (PFSs) [1,2] are characterized by a membership degree and a non-membership degree whose sum of squares is less than or equal to 1. Zhang and Xu [3] provided TOPSIS for MADM with PFNs. Peng and Yang [4] gave the superiority and inferiority ranking method to analyze MAGDM with PFNs. Reformat and Yager [5] designed a recommender system with PFNs. Gou et al. [6] studied some characteristics of continuous PFNs. Garg [7] proposed some Einstein operators with PFNs. Zeng et al. [8] defined a hybrid model for MADM with PFNs. Wei [9] defined some novel interaction operators between PFNs. Gao et al. [10] defined some novel interaction operators with PFNs in MADM. Ren et al. [11] expanded TODIM for MADM with PFNs. Wei and Lu [12] extended the MSM operator [13] to PFNs. Wu and Wei [14] proposed Hamacher operators with PFNs. Wei and Wei [15] defined some similarity measures between PFNs with the cosine function [16,17,18]. Xue et al. [19] developed the LINMAP method with PFNs. Wei and Lu [20] developed power operators with PFNs. Wan et al. [21] used mathematical programming to solve MAGDM with PFNs. Baloglu and Demir [22] developed agent-based methods for demand analysis with PFNs. Liang [23] proposed some Bonferroni mean operators with PFNs based on the traditional Bonferroni mean operators [24,25,26,27,28]. Mandal and Ranadive [29] proposed decision-theoretic rough sets with PFNs. Chen [30] gave an outranking method under PFNs with a closeness-based assignment model. Garg [31] proposed a linear programming model for MADM with interval-valued Pythagorean fuzzy numbers (IVPFNs). Khan et al. [32] extended the TOPSIS model with IVPFNs. Garg [33] defined the exponential operational laws of IVPFNs. Li and Zeng [34] gave some distance measures for PFNs. Gao [35] defined some Hamacher prioritized operators in MADM based on traditional prioritized aggregation operators [36,37,38,39,40,41]. Wei and Lu [42] developed some Hamacher operators for aggregating dual hesitant PFNs. Lu et al. [43] proposed some Hamacher operators with hesitant PFNs, and Wei et al. [44] proposed some Pythagorean hesitant fuzzy Hamacher operators based on the traditional Hamacher operators [45,46,47]. Wei et al. [48], Tang and Wei [49] and Huang and Wei [50] proposed Pythagorean 2-tuple linguistic operators in MADM with the traditional arithmetic and geometric operators [51,52,53,54,55,56]. Wei et al. [57] proposed some q-rung orthopair fuzzy Heronian mean operators in MADM.
The HM operator [58] and the DHM operator [59] are well-known operators that can depict interrelationships among any number of arguments assigned by a variable vector. Therefore, the HM and DHM operators furnish a robust and flexible mechanism for information fusion in MAGDM problems. Because PFNs can easily capture fuzzy information and the HM can describe interrelationships among any number of arguments, it is necessary to extend the HM and DHM operators to PFNs. Thus, how to fuse PFNs with the HM and DHM operators is an interesting topic. To accomplish this goal, the remainder of this paper is arranged as follows. In the next section, we introduce some basic concepts related to PFNs. In Section 3, we propose some HM and DHM operators with PFNs. In Section 4, we present methods for MAGDM problems with the WPFHM and WPFDHM operators. In Section 5, we give a numerical example. Finally, a brief conclusion is given in Section 6.

2. Preliminaries

2.1. Pythagorean Fuzzy Sets

Yager [1,2] gave the definition of PFSs.
Definition 1.
Let X be a fixed set. A PFS in X is defined as follows [1,2]
$$P = \left\{ \left\langle x, \left( \mu_P(x), \nu_P(x) \right) \right\rangle \mid x \in X \right\}$$
where $\mu_P(x) \in [0, 1]$ and $\nu_P(x) \in [0, 1]$ are defined as the degree of membership and the degree of non-membership of the element $x \in X$ to $P$, respectively, satisfying
$$\left( \mu_P(x) \right)^2 + \left( \nu_P(x) \right)^2 \le 1 .$$
Definition 2.
Let a ˜ = ( μ , ν ) be a PFN, then the score function of a ˜ is defined [17]
$$S(\tilde{a}) = \frac{1}{2}\left( 1 + \mu^2 - \nu^2 \right), \quad S(\tilde{a}) \in [0, 1].$$
Definition 3.
Let a ˜ = ( μ , ν ) be a PFN, then the accuracy function of a ˜ is defined [11]
$$H(\tilde{a}) = \mu^2 + \nu^2, \quad H(\tilde{a}) \in [0, 1].$$
Definition 4.
Let $\tilde{a}_1 = (\mu_1, \nu_1)$ and $\tilde{a}_2 = (\mu_2, \nu_2)$ be two PFNs [17], let $S(\tilde{a}_1) = \frac{1}{2}\left(1 + \mu_1^2 - \nu_1^2\right)$ and $S(\tilde{a}_2) = \frac{1}{2}\left(1 + \mu_2^2 - \nu_2^2\right)$ be the score functions of $\tilde{a}_1$ and $\tilde{a}_2$, respectively, and let $H(\tilde{a}_1) = \mu_1^2 + \nu_1^2$ and $H(\tilde{a}_2) = \mu_2^2 + \nu_2^2$ be the accuracy functions of $\tilde{a}_1$ and $\tilde{a}_2$, respectively. If $S(\tilde{a}_1) < S(\tilde{a}_2)$, then $\tilde{a}_1 < \tilde{a}_2$; if $S(\tilde{a}_1) = S(\tilde{a}_2)$, then
(1) 
If H ( a ˜ 1 ) = H ( a ˜ 2 ) , then a ˜ 1 = a ˜ 2 ;
(2) 
If H ( a ˜ 1 ) < H ( a ˜ 2 ) , a ˜ 1 < a ˜ 2 .
Example 1.
Let a ˜ 1 = ( 0.5 , 0.3 ) , a ˜ 2 = ( 0.6 , 0.2 ) , a ˜ 3 = ( 0.4 , 0 ) , according to Definitions 2–4, we get
$S(\tilde{a}_1) = \frac{1}{2}\left(1 + 0.5^2 - 0.3^2\right) = 0.5800$, $S(\tilde{a}_2) = \frac{1}{2}\left(1 + 0.6^2 - 0.2^2\right) = 0.6600$,
$S(\tilde{a}_3) = \frac{1}{2}\left(1 + 0.4^2 - 0^2\right) = 0.5800$, $H(\tilde{a}_1) = 0.5^2 + 0.3^2 = 0.3400$,
$H(\tilde{a}_2) = 0.6^2 + 0.2^2 = 0.4000$, $H(\tilde{a}_3) = 0.4^2 + 0^2 = 0.1600$.
Then we can conclude that a ˜ 2 > a ˜ 1 > a ˜ 3 .
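To make the comparison rule concrete, the short Python sketch below implements the score function of Definition 2, the accuracy function of Definition 3 and the ranking rule of Definition 4, and reproduces the values of Example 1; the function names and data layout are our own illustrative choices, not part of the paper.

```python
from typing import Tuple

PFN = Tuple[float, float]  # (membership mu, non-membership nu) with mu^2 + nu^2 <= 1

def score(a: PFN) -> float:
    """Score function S(a) = (1 + mu^2 - nu^2) / 2 (Definition 2)."""
    mu, nu = a
    return 0.5 * (1.0 + mu**2 - nu**2)

def accuracy(a: PFN) -> float:
    """Accuracy function H(a) = mu^2 + nu^2 (Definition 3)."""
    mu, nu = a
    return mu**2 + nu**2

def rank_key(a: PFN) -> Tuple[float, float]:
    """Ranking rule of Definition 4: compare by score first, then by accuracy."""
    return (score(a), accuracy(a))

if __name__ == "__main__":
    pfns = [("a1", (0.5, 0.3)), ("a2", (0.6, 0.2)), ("a3", (0.4, 0.0))]
    for name, a in pfns:
        print(name, "S =", round(score(a), 4), "H =", round(accuracy(a), 4))
    # Sorting in decreasing order gives a2 > a1 > a3, as in Example 1.
    print([name for name, a in sorted(pfns, key=lambda t: rank_key(t[1]), reverse=True)])
```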
Definition 5.
Let a ˜ 1 = ( μ 1 , ν 1 ) , a ˜ 2 = ( μ 2 , ν 2 ) and a ˜ = ( μ , ν ) be three PFNs, and some basic operations are defined [5]:
(1) 
$\tilde{a}_1 \oplus \tilde{a}_2 = \left( \sqrt{\mu_1^2 + \mu_2^2 - \mu_1^2 \mu_2^2},\; \nu_1 \nu_2 \right)$;
(2) 
$\tilde{a}_1 \otimes \tilde{a}_2 = \left( \mu_1 \mu_2,\; \sqrt{\nu_1^2 + \nu_2^2 - \nu_1^2 \nu_2^2} \right)$;
(3) 
$\lambda \tilde{a} = \left( \sqrt{1 - \left(1 - \mu^2\right)^{\lambda}},\; \nu^{\lambda} \right)$, $\lambda > 0$;
(4) 
$(\tilde{a})^{\lambda} = \left( \mu^{\lambda},\; \sqrt{1 - \left(1 - \nu^2\right)^{\lambda}} \right)$, $\lambda > 0$;
(5) 
$\tilde{a}^c = (\nu, \mu)$.
Example 2.
Suppose that a ˜ 1 = ( 0.2 , 0.3 ) , a ˜ 2 = ( 0.6 , 0.1 ) , and λ = 5 , then we have
(1) 
a ˜ 1 a ˜ 2 = ( 0.2 2 + 0.6 2 0.2 2 × 0.6 2 , 0.3 × 0.1 ) = ( 0.5456 , 0.0300 )
(2) 
a ˜ 1 a ˜ 2 = ( 0.2 × 0.6 , 0.3 2 + 0.1 2 0.3 2 × 0.1 2 ) = ( 0.1200 , 0.3091 )
(3) 
λ a ˜ 1 = ( 1 ( 1 0.2 ) 0.5 , 0.3 0.5 ) = ( 0.1056 , 0.5477 )
(4) 
( a ˜ 1 ) λ = ( 0.2 0.5 , 1 ( 1 0.3 ) 0.5 ) = ( 0.4472 , 0.1633 )
(5) 
a ˜ 1 c = ( 0.3 , 0.2 )
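For later reference, the following is a minimal Python sketch of the operational laws of Definition 5, written in the squared (Pythagorean) form on which the closed-form results of Theorems 1 and 3 rely; the function names are our own. The algebraic identities listed in Definition 6 below can be spot-checked numerically with these functions.

```python
import math
from typing import Tuple

PFN = Tuple[float, float]  # (mu, nu)

def pfn_add(a: PFN, b: PFN) -> PFN:
    """a (+) b = (sqrt(mu1^2 + mu2^2 - mu1^2 mu2^2), nu1 nu2)."""
    (m1, n1), (m2, n2) = a, b
    return (math.sqrt(m1**2 + m2**2 - m1**2 * m2**2), n1 * n2)

def pfn_mul(a: PFN, b: PFN) -> PFN:
    """a (x) b = (mu1 mu2, sqrt(nu1^2 + nu2^2 - nu1^2 nu2^2))."""
    (m1, n1), (m2, n2) = a, b
    return (m1 * m2, math.sqrt(n1**2 + n2**2 - n1**2 * n2**2))

def pfn_scale(lam: float, a: PFN) -> PFN:
    """lam * a = (sqrt(1 - (1 - mu^2)^lam), nu^lam), lam > 0."""
    m, n = a
    return (math.sqrt(1.0 - (1.0 - m**2) ** lam), n ** lam)

def pfn_power(a: PFN, lam: float) -> PFN:
    """a^lam = (mu^lam, sqrt(1 - (1 - nu^2)^lam)), lam > 0."""
    m, n = a
    return (m ** lam, math.sqrt(1.0 - (1.0 - n**2) ** lam))

def pfn_complement(a: PFN) -> PFN:
    """a^c = (nu, mu)."""
    m, n = a
    return (n, m)

if __name__ == "__main__":
    a1, a2, lam = (0.2, 0.3), (0.6, 0.1), 2.0
    # Numerical spot-check of identity (3) in Definition 6: lam(a1 (+) a2) = lam*a1 (+) lam*a2.
    left = pfn_scale(lam, pfn_add(a1, a2))
    right = pfn_add(pfn_scale(lam, a1), pfn_scale(lam, a2))
    print(left, right)  # the two pairs agree up to rounding
```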
The following properties can be derived from Definition 5.
Definition 6.
Let $\tilde{a}_1 = (\mu_1, \nu_1)$ and $\tilde{a}_2 = (\mu_2, \nu_2)$ be two PFNs and $\lambda, \lambda_1, \lambda_2 > 0$; then [5]
(1) 
$\tilde{a}_1 \oplus \tilde{a}_2 = \tilde{a}_2 \oplus \tilde{a}_1$;
(2) 
$\tilde{a}_1 \otimes \tilde{a}_2 = \tilde{a}_2 \otimes \tilde{a}_1$;
(3) 
$\lambda \left( \tilde{a}_1 \oplus \tilde{a}_2 \right) = \lambda \tilde{a}_1 \oplus \lambda \tilde{a}_2$;
(4) 
$\left( \tilde{a}_1 \otimes \tilde{a}_2 \right)^{\lambda} = \left( \tilde{a}_1 \right)^{\lambda} \otimes \left( \tilde{a}_2 \right)^{\lambda}$;
(5) 
$\lambda_1 \tilde{a}_1 \oplus \lambda_2 \tilde{a}_1 = \left( \lambda_1 + \lambda_2 \right) \tilde{a}_1$;
(6) 
$\left( \tilde{a}_1 \right)^{\lambda_1} \otimes \left( \tilde{a}_1 \right)^{\lambda_2} = \left( \tilde{a}_1 \right)^{\lambda_1 + \lambda_2}$;
(7) 
$\left( \left( \tilde{a}_1 \right)^{\lambda_1} \right)^{\lambda_2} = \left( \tilde{a}_1 \right)^{\lambda_1 \lambda_2}$.

2.2. HM Operator

Definition 7.
The HM operator is defined as follows [58]:
$$\mathrm{HM}^{(x)}(a_1, a_2, \ldots, a_k) = \frac{\sum_{1 \le i_1 < \cdots < i_x \le k} \left( \prod_{j=1}^{x} a_{i_j} \right)^{\frac{1}{x}}}{C_k^x}$$
where $x$ is a parameter, $x = 1, 2, \ldots, k$; $i_1, i_2, \ldots, i_x$ are $x$ integer values taken from the set $\{1, 2, \ldots, k\}$ of $k$ integer values; and $C_k^x = \frac{k!}{x!(k-x)!}$ denotes the binomial coefficient.
The HM operator has three properties.
(i)
when $a_i = a$ $(i = 1, 2, \ldots, k)$, $\mathrm{HM}^{(x)}(a_1, a_2, \ldots, a_k) = a$;
(ii)
when $a_i \le \pi_i$ $(i = 1, 2, \ldots, k)$, $\mathrm{HM}^{(x)}(a_1, a_2, \ldots, a_k) \le \mathrm{HM}^{(x)}(\pi_1, \pi_2, \ldots, \pi_k)$;
(iii)
$\min\{a_i\} \le \mathrm{HM}^{(x)}(a_1, a_2, \ldots, a_k) \le \max\{a_i\}$.
Two particular cases of the HM operator are given as follows.
(1)
When $x = 1$, $\mathrm{HM}^{(1)}(a_1, a_2, \ldots, a_k) = \frac{1}{k} \sum_{i=1}^{k} a_i$, and it becomes the arithmetic mean operator.
(2)
When $x = k$, $\mathrm{HM}^{(k)}(a_1, a_2, \ldots, a_k) = \left( \prod_{i=1}^{k} a_i \right)^{\frac{1}{k}}$, and it becomes the geometric mean operator.
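As a quick illustration, the Python sketch below computes the crisp HM operator of Definition 7 directly from its definition, so the two special cases above can be checked numerically ($x = 1$ yields the arithmetic mean and $x = k$ the geometric mean); the function name is our own.

```python
import math
from itertools import combinations
from typing import Sequence

def hamy_mean(values: Sequence[float], x: int) -> float:
    """HM^(x)(a_1,...,a_k): average of (product of each x-subset)^(1/x) over all C(k, x) subsets."""
    k = len(values)
    if not 1 <= x <= k:
        raise ValueError("x must satisfy 1 <= x <= k")
    total = sum(math.prod(subset) ** (1.0 / x) for subset in combinations(values, x))
    return total / math.comb(k, x)

if __name__ == "__main__":
    a = [2.0, 4.0, 8.0]
    print(hamy_mean(a, 1))        # 4.6667  (arithmetic mean)
    print(hamy_mean(a, 2))        # ~4.1618 (interrelationships between pairs)
    print(hamy_mean(a, len(a)))   # 4.0     (geometric mean)
```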

3. The HM Operators for PFNs

In this part, we will combine PFNs and HM operator, and propose the PFHM operator and WPFHM operator.

3.1. The PFHM Operator

Definition 8.
Let a ˜ i = ( μ i , ν i ) ( i = 1 , 2 , , k ) be a set of PFNs, then we can define PFHM operator as follows:
$$\mathrm{PFHM}^{(x)}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_k) = \frac{\bigoplus_{1 \le i_1 < \cdots < i_x \le k} \left( \bigotimes_{j=1}^{x} \tilde{a}_{i_j} \right)^{\frac{1}{x}}}{C_k^x}$$
where $x$ is a parameter, $x = 1, 2, \ldots, k$; $i_1, i_2, \ldots, i_x$ are $x$ integer values taken from the set $\{1, 2, \ldots, k\}$ of $k$ integer values; and $C_k^x = \frac{k!}{x!(k-x)!}$ denotes the binomial coefficient.
Theorem 1.
Let $\tilde{a}_i = (\mu_i, \nu_i)$ $(i = 1, 2, \ldots, k)$ be a set of PFNs; then the aggregated result of Definition 8 is still a PFN, and
$$\mathrm{PFHM}^{(x)}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_k) = \frac{1}{C_k^x} \left( \bigoplus_{1 \le i_1 < \cdots < i_x \le k} \left( \bigotimes_{j=1}^{x} \tilde{a}_{i_j} \right)^{\frac{1}{x}} \right) = \left( \sqrt{1 - \left( \prod_{1 \le i_1 < \cdots < i_x \le k} \left( 1 - \left( \left( \prod_{j=1}^{x} \mu_{i_j} \right)^{\frac{1}{x}} \right)^2 \right) \right)^{\frac{1}{C_k^x}}},\; \left( \prod_{1 \le i_1 < \cdots < i_x \le k} \sqrt{1 - \left( \prod_{j=1}^{x} \left( 1 - \nu_{i_j}^2 \right) \right)^{\frac{1}{x}}} \right)^{\frac{1}{C_k^x}} \right)$$
Proof. 
(1)
First of all, we prove (7) is kept.
j = 1 x a ˜ i j = ( j = 1 x ( μ i j ) , 1 j = 1 x ( 1 ( ν i j ) 2 ) )
Therefore,
( j = 1 x a ˜ i j ) 1 x = ( ( j = 1 x μ i j ) 1 x , 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x )
Moreover,
1 i 1 < < i x k ( j = 1 x a ˜ i j ) 1 x = ( 1 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) , 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x )
Furthermore,
PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = 1 C k x ( 1 i 1 < < i x k ( j = 1 x a ˜ i j ) 1 x ) = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x , ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x )
(2)
Next, we prove (7) is a PFN. Let
p = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x , q = ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x
Then we prove the following two conditions. (i) 0 p 1 , 0 q 1 ; (ii) 0 p 2 + q 2 1 .
  • Since μ i j [ 0 , 1 ] , we can get
    j = 1 x μ i j [ 0 , 1 ] ( j = 1 x μ i j ) 1 x [ 0 , 1 ] ( ( j = 1 x μ i j ) 1 x ) 2 [ 0 , 1 ] 1 ( ( j = 1 x μ i j ) 1 x ) 2 [ 0 , 1 ]
    1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) [ 0 , 1 ] ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x [ 0 , 1 ]
    1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x [ 0 , 1 ]   i . e . ,   0 p 1 .
    Similarly, we can get
    ( 1 i 1 < < i x k 1 ( j 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x [ 0 , 1 ] , i . e . ,   0 q 1 .
  • Obviously, 0 p 2 + q 2 1 , then
    ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x ) 2 + ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x ) 2 1 ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ) 1 C k x + ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ) 1 C k x = 1
We get 0 p 2 + q 2 1 .
So the aggregated result of Definition 8 is still a PFN. Next, we discuss some properties of the PFHM operator.
Property 1.
(Idempotency). If $\tilde{a}_i$ $(i = 1, 2, \ldots, k)$ and $\tilde{a}$ are PFNs, and $\tilde{a}_i = \tilde{a} = (\mu, \nu)$ for all $i = 1, 2, \ldots, k$, then we get
PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = a ˜
Proof. 
Since a ˜ = ( μ , ν ) , based on Theorem 1, we have
PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x , ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x )
= ( 1 ( 1 i 1 < < i x k ( 1 μ i 2 ) ) 1 C k x , ( 1 i 1 < < i x k 1 ( 1 ( ν i ) 2 ) ) 1 C k x ) = ( 1 ( 1 μ i 2 ) , 1 ( 1 ( ν i ) 2 ) ) = ( 1 ( 1 μ 2 ) , ( 1 ( 1 ( ν ) 2 ) ) ) = ( μ , ν ) = a ˜
Property 2.
(Monotonicity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. If μ i j μ θ j , ν i j ν θ j for all j , then
PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) PFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Since x 1 , μ i j μ θ j 0 , ν θ j ν i j 0 , then
( ( j = 1 x μ i j ) 1 x ) 2 ( ( j = 1 x μ θ j ) 1 x ) 2 1 ( ( j = 1 x μ i j ) 1 x ) 2 1 ( ( j = 1 x μ θ j ) 1 x ) 2
( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ) 1 C k x
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ) 1 C k x
Similarly, we have
1 ( ν i j ) 2 1 ( π θ j ) 2 j = 1 x ( 1 ( ν i j ) 2 ) j = 1 x ( 1 ( ν θ j ) 2 )
1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x
1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x
( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x ) 1 C k x
Let a ˜ = PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , π ˜ = PFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) and S ( a ˜ ) , S ( π ˜ ) be the score values of a and π respectively. Based on the score value of PFN in (3) and the above inequality, we can imply that S ( a ˜ ) S ( π ˜ ) , and then we discuss the following cases:
  • If S ( a ˜ ) > S ( π ˜ ) , then we can get PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) > PFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) .
  • If S ( a ˜ ) = S ( π ˜ ) , then
    1 2 ( 1 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x ) 2 ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x ) 2 ) = 1 2 ( 1 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ) 1 C k x ) 2 ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x ) 1 C k x ) 2 )
Since μ i j μ θ j 0 , ν θ j ν i j 0 , we can deduce that
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ) 1 C k x
And
( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x = ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x ) 1 C k x
Therefore, it follows that
H ( a ˜ ) = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x ) 2 + ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x ) 2
= ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ) 1 C k x ) 2 + ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x ) 1 C k x ) 2 = H ( π ˜ )
The PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = PFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) . □
Property 3.
(Boundedness). Let a ˜ i = ( μ i j , ν i j ) , a ˜ + = ( μ max i j , ν max i j ) ( i = 1 , 2 , , k ) be a set of PFNs, and a ˜ = ( μ min i j , ν min i j ) then
$$\tilde{a}^- \le \mathrm{PFHM}^{(x)}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_k) \le \tilde{a}^+$$
Proof. 
Based on Properties 1 and 2, we have
PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) PFHM ( x ) ( a ˜ , a ˜ , , a ˜ ) = a ˜ , PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) PFHM ( x ) ( a ˜ + , a ˜ + , , a ˜ + ) = a ˜ + .
Then we have a ˜ PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) a ˜ + .
Property 4.
(Commutativity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. Suppose ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , then
PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = PFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Because ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , then 1 i 1 < < i x k ( j = 1 x a ˜ i j ) 1 x C k x = 1 i 1 < < i x k ( j = 1 x π ˜ i j ) 1 x C k x Thus, PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = PFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) .
Next, we will discuss some particular cases of PFHM operator when x takes different values.
Case 1: When $x = 1$, the PFHM operator becomes the arithmetic average operator of PFNs.
PFHM ( 1 ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( 1 i 1 < < i x k ( 1 μ i 2 ) ) 1 k , ( 1 i 1 < < i x k 1 ( 1 ( ν i ) 2 ) ) 1 k ) = ( 1 ( 1 i 1 < < i x k ( 1 μ i 2 ) ) 1 k , ( 1 i 1 < < i x k 1 ( 1 ( ν i ) 2 ) ) 1 k ) = 1 k i k a ˜ i
Case 2: When $x = k$, the PFHM operator becomes the geometric mean operator of PFNs.
PFHM ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = 1 C k k ( 1 i 1 < < i x k ( j = 1 x a ˜ i j ) 1 x ) = ( 1 ( 1 i 1 < < i k k ( 1 ( ( j = 1 k μ i j ) 1 k ) 2 ) ) 1 C k k , ( 1 i 1 < < i k k 1 ( j = 1 k ( 1 ( ν i j ) 2 ) ) 1 k ) 1 C k k ) = ( ( i = 1 k μ i ) 1 k , 1 ( i = 1 k ( 1 ( ν i ) 2 ) ) 1 k ) = i = 1 k a ˜ i 1 k
Example 3.
Let a ˜ 1 = ( 0.5 , 0.2 ) , a ˜ 2 = ( 0.6 , 0.3 ) , a ˜ 3 = ( 0.4 , 0.1 ) , a ˜ 4 = ( 0.7 , 0.3 ) be four PFNs. Then we use the proposed PFHM operator to aggregate four PFNs. (suppose x = 2 )
a ˜ = PFHM ( 2 ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ) 1 C k x , ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) 1 C k x ) = ( 1 ( 1 i 1 < < i 2 4 ( 1 ( ( j = 1 2 μ i j ) 1 2 ) 2 ) ) 1 C 4 2 , ( 1 i 1 < < i 2 4 1 ( j = 1 2 ( 1 ( ν i j ) 2 ) ) 1 2 ) 1 C 4 2 ) = ( 1 ( 1 0.5 × 0.6 ) × ( 1 0.5 × 0.4 ) × ( 1 0.5 × 0.7 ) × ( 1 0.6 × 0.4 ) × ( 1 0.6 × 0.7 ) × ( 1 0.4 × 0.7 ) 1 6 , ( ( 1 ( ( 1 0.2 2 ) × ( 1 0.3 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.2 2 ) × ( 1 0.1 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.2 2 ) × ( 1 0.3 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.3 2 ) × ( 1 0.1 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.3 2 ) × ( 1 0.3 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.1 2 ) × ( 1 0.3 2 ) ) 0.5 ) 0.5 ) 1 6 ) = ( 0.5497 , 0.2325 )
At last, we get PFHM ( 2 ) ( a ˜ 1 , a ˜ 2 , a ˜ 3 , a ˜ 4 ) = ( 0.5497 , 0.2325 ) .
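The closed form of Theorem 1 is straightforward to implement. The Python sketch below is our own illustrative code (not part of the paper); it evaluates the PFHM operator from that closed form and reproduces the result of Example 3.

```python
import math
from itertools import combinations
from typing import Sequence, Tuple

PFN = Tuple[float, float]  # (mu, nu)

def pfhm(pfns: Sequence[PFN], x: int) -> PFN:
    """Pythagorean fuzzy Hamy mean, using the closed form of Theorem 1."""
    k = len(pfns)
    c = math.comb(k, x)
    mu_factor = 1.0   # product over combinations of (1 - (prod mu)^(2/x))
    nu_factor = 1.0   # product over combinations of sqrt(1 - (prod (1 - nu^2))^(1/x))
    for subset in combinations(pfns, x):
        prod_mu = math.prod(m for m, _ in subset)
        prod_one_minus_nu2 = math.prod(1.0 - n**2 for _, n in subset)
        mu_factor *= 1.0 - prod_mu ** (2.0 / x)
        nu_factor *= math.sqrt(1.0 - prod_one_minus_nu2 ** (1.0 / x))
    mu = math.sqrt(1.0 - mu_factor ** (1.0 / c))
    nu = nu_factor ** (1.0 / c)
    return (mu, nu)

if __name__ == "__main__":
    a = [(0.5, 0.2), (0.6, 0.3), (0.4, 0.1), (0.7, 0.3)]
    print(tuple(round(v, 4) for v in pfhm(a, 2)))  # (0.5497, 0.2325), matching Example 3
```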

3.2. The WPFHM Operator

The weights of attributes play an important role in practical decision making, and they can influence the decision result. Therefore, it is necessary to consider attribute weights in aggregating information. It is obvious that the PFHM operator fails to consider the problem of attribute weights. In order to overcome this defect, we propose the WPFHM operator.
Definition 9.
Let $\tilde{a}_i = (\mu_i, \nu_i)$ $(i = 1, 2, \ldots, k)$ be a group of PFNs and $\omega = (\omega_1, \omega_2, \ldots, \omega_k)^T$ be the weight vector for $\tilde{a}_i$ $(i = 1, 2, \ldots, k)$, which satisfies $\omega_i \in [0, 1]$ and $\sum_{i=1}^{k} \omega_i = 1$; then we can define the WPFHM operator as follows:
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = { 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j ) 1 x C k 1 x ( 1 x < k ) i = 1 x a ˜ i 1 ω i k 1 ( x = k )
Theorem 2.
Let a ˜ i = ( μ i , ν i ) ( i = 1 , 2 , , k ) be a group of PFNs, and their weight vector meet ω i [ 0.1 ] and i = 1 k ω i = 1 then the result from Definition 9 is still a PFN, and have.
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j ) 1 x C k 1 x = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) ( 1 x < k )
Or
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = i = 1 x a ˜ i 1 ω i k 1 = ( i = 1 k ( μ i ) 1 ω i k 1 , 1 i = 1 k ( 1 ( ν i ) 2 ) 1 ω i k 1 ) ( x = k )
Proof. 
(1)
First of all, we prove that (19) and (20) are kept. For the first case, when ( 1 x < k ) , according to the operational laws of PFNs, we get
( j = 1 x a ˜ i j ) 1 x = ( ( j = 1 x μ i j ) 1 x , 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x )
Thereafter,
( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j ) 1 x = ( 1 ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) , ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) )
Moreover,
1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j ) 1 x = ( 1 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) , 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) )
Therefore,
1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j ) 1 x C k 1 x = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x )
For the second case, when ( x = k ) , we get
a ˜ i 1 ω i k 1 = ( ( μ i ) 1 ω i k 1 , 1 ( 1 ( ν i ) 2 ) 1 ω i k 1 )
Then,
i = 1 k a ˜ i 1 ω i k 1 = ( i = 1 k ( μ i 1 ω i k 1 ) , 1 i = 1 k ( 1 ( ν i ) 2 ) 1 ω i k 1 )
(2)
Next, we prove the (19) and (20) are PFNs. For the first case, when 1 x < k ,
Let
p = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , q = ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x .
Then we need prove the following two conditions. (i) 0 p 1 , 0 q 1 . (ii) 0 p 2 + q 2 1 .
  • Since p [ 0 , 1 ] , we can get
    ( j = 1 x μ i j ) 1 x [ 0 , 1 ] ( ( j = 1 x μ i j ) 1 x ) 2 [ 0 , 1 ] 1 ( ( j = 1 x μ i j ) 1 x ) 2 [ 0 , 1 ] 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) [ 0 , 1 ] ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x [ 0 , 1 ] 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x [ 0 , 1 ]
    Therefore, 0 p 1 .
    Similarly, we can get ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x [ 0 , 1 ]
    Therefore, 0 q 1 .
  • Since 0 p 2 + q 2 1 , we can get the following inequality:
    ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 + ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 1 ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x + ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x = 1
For the second case, when $x = k$, we can easily prove that it holds. So the aggregation result produced by Definition 9 is still a PFN. Next, we shall deduce some desirable properties of the WPFHM operator.
Property 5.
(Idempotency). If a ˜ i ( i = 1 , 2 , , k ) are equal, i.e., a ˜ i = a ˜ = ( μ , ν ) , and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 then
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = a ˜
Proof. 
Since a ˜ i = a ˜ = ( μ i , ν i ) , based on Theorem 2, we get
(1)
For the first case, when 1 x < k .
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x )
= ( 1 ( 1 i 1 < < i x k ( 1 ( μ ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , ( 1 i 1 < < i x k ( 1 ( 1 ( ν ) 2 ) ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x )
= ( 1 ( ( 1 ( μ ) 2 ) C k x 1 i 1 < < i x k ( j = 1 x ω i j ) ) 1 C k 1 x , ( ( 1 ( 1 ( ν ) 2 ) ) C k x 1 i 1 < < i x k ( j = 1 x ω i j ) ) 1 C k 1 x )
= ( 1 ( ( 1 ( μ ) 2 ) C k x k = 1 k C k 1 x 1 ω i ) 1 C k 1 x , ( ( 1 ( 1 ( ν ) 2 ) ) C k x k = 1 k C k 1 x 1 ω i ) 1 C k 1 x )
= ( 1 ( ( 1 ( μ ) 2 ) C k x C k 1 x 1 k = 1 k ω i ) 1 C k 1 x , ( ( 1 ( 1 ( ν ) 2 ) ) C k x C k 1 x 1 k = 1 k ω i ) 1 C k 1 x )
Since i = 1 k ω i = 1 , we can get
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( ( 1 ( μ ) 2 ) C k x C k 1 x 1 ) 1 C k 1 x , ( ( 1 ( 1 ( ν ) 2 ) ) C k x C k 1 x 1 ) 1 C k 1 x )
= ( 1 ( ( 1 ( μ ) 2 ) ( k 1 ) ! x ! ( k 1 x ) ! ) x ! ( k 1 x ) ! ( k 1 ) ! , ( ( 1 ( 1 ( ν ) 2 ) ) ( k 1 ) ! x ! ( k 1 x ) ! ) x ! ( k 1 x ) ! ( k 1 ) ! ) = ( μ , ν ) = a ˜
(2)
For the second case, when x = k ,
WPFHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( i = 1 k ( μ i ) 1 ω i k 1 , 1 ( i = 1 k ( 1 ( ν i ) 2 ) ) 1 ω i k 1 ) = ( ( μ ) k 1 k 1 , 1 ( 1 ( ν ) 2 ) k 1 k 1 )
Since i = 1 k ω i = 1 , we can get WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( ( μ ) k 1 k 1 , 1 ( 1 ( ν ) 2 ) k 1 k 1 ) = ( μ , ν ) = a ˜ , which proves the idempotency property of the WPFHM operator. □
Property 6.
(Monotonicity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. If μ i j μ θ j , ν i j ν θ j for all j , and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 , the a ˜ and π ˜ are equal, then we have
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) WPFHM ω ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Since x 1 , μ i j μ θ j 0 , ν θ j ν i j 0 , then
( j = 1 x μ i j ) 1 x ( j = 1 x μ θ j ) 1 x ( ( j = 1 x μ i j ) 1 x ) 2 ( ( j = 1 x μ θ j ) 1 x ) 2 1 ( ( j = 1 x μ i j ) 1 x ) 2 1 ( ( j = 1 x μ θ j ) 1 x ) 2
1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ( 1 j = 1 x ω i j )
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
Similarly, we have
1 ( ν i j ) 2 1 ( ν θ J ) 2 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ( j = 1 x ( 1 ( ν θ J ) 2 ) ) 1 x
( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ( 1 ( j = 1 x ( 1 ( ν θ J ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j )
( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
Let a = WPFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , π = WPFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) and S ( a ) , S ( π ) be the score values of a and π respectively. Based on the score value of PFN in (3) and the above inequality, we can imply that S ( a ) S ( π ) , and then we discuss the following cases:
(1)
If S ( a ) > S ( π ) , then we can get PFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) > PFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) .
(2)
If S ( a ) = S ( π ) , then
1 2 ( 1 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 ) = 1 2 ( 1 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ θ j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) )
Since μ i j μ θ j 0 , ν θ j ν i j 0 , and based on the Equations (3) and (4), we can deduce that
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x θ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
And
( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x = ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν θ j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
Therefore, it follows that H ( a ˜ ) = H ( π ˜ ) , the WPFHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = WPFHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) .
When x = k , we can prove it in a similar way.
Property 7.
(Boundedness). Let a ˜ i = ( μ i j , ν i j ) , a ˜ + = ( μ max i j , ν max i j ) ( i = 1 , 2 , , k ) be a set of PFNs, and a ˜ = ( μ min i j , ν min i j ) , and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 then
a ˜ WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) a ˜ +
Proof. 
Based on Properties 5 and 6, we have
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) WPFHM ω ( x ) ( a ˜ , a ˜ , , a ˜ ) = a ˜ , WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) WPFHM ω ( x ) ( a ˜ + , a ˜ + , , a ˜ + ) = a ˜ + ,
Then we have a ˜ WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) a ˜ + .
Property 8.
(Commutativity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. Suppose ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 , the a ˜ and π ˜ are equal, then we have
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = WPFHM ω ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Because ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , then
1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j ) 1 x C k 1 x = 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x π ˜ i j ) 1 x C k 1 x ( 1 x < k ) i = 1 x a ˜ i 1 ω i k 1 = i = 1 x π ˜ i 1 ω i k 1 ( x = k )
Thus, WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = WPFHM ω ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) . Next, we will discuss some particular cases of WPFHM operator for different value x .□
Case 1: When x = 1 , the WPFHM will reduce to the following form:
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = 1 i 1 k ( 1 j = 1 1 ω i j ) ( j = 1 x a ˜ i j ) 1 1 C k 1 1 = ( 1 ( 1 i 1 k ( 1 ( μ i ) 2 ) ( 1 ω i ) ) 1 k 1 , ( 1 i 1 k ( ν i ) ( 1 ω i ) ) 1 k 1 )
Case 2: When x = k , the proposed WPFHM operator will reduce to the following form:
WPFHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = i = 1 k a ˜ i 1 ω i k 1 = ( i = 1 k ( μ i 1 ω i k 1 ) , 1 i = 1 k ( 1 ( ν i ) 2 ) 1 ω i k 1 )
Example 4.
Let $\tilde{a}_1 = (0.6, 0.4)$, $\tilde{a}_2 = (0.7, 0.3)$, $\tilde{a}_3 = (0.5, 0.1)$, $\tilde{a}_4 = (0.4, 0.3)$ be four PFNs, and let the weighting vector of the attributes be $\omega = (0.1, 0.3, 0.4, 0.2)^T$. Then we use the proposed WPFHM operator to aggregate the four PFNs (suppose $x = 2$).
a ˜ = WPFHM ω ( 2 ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j ) 1 x C k 1 x = ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) = ( 1 ( 1 i 1 < < i 2 4 ( 1 ( ( j = 1 2 μ i j ) 1 2 ) 2 ) ( 1 j = 1 2 ω i j ) ) 1 C 3 2 , ( 1 i 1 < < i 2 4 ( 1 ( j = 1 2 ( 1 ( ν i j ) 2 ) ) 1 2 ) ( 1 j = 1 2 ω i j ) ) 1 C 3 2 )
= ( 1 ( ( 1 0.6 × 0.7 ) 1 0.1 × 0.3 × ( 1 0.6 × 0.5 ) 1 0.1 × 0.4 × ( 1 0.6 × 0.4 ) 1 0.1 × 0.2 × ( 1 0.7 × 0.5 ) 1 0.3 × 0.4 × ( 1 0.7 × 0.4 ) 1 0.3 × 0.2 × ( 1 0.5 × 0.4 ) 1 0.4 × 0.2 ) 1 3 , ( ( 1 ( ( 1 0.4 2 ) × ( 1 0.3 2 ) ) 0.5 ) 0.5 × ( 1 0.1 × 0.3 ) × ( 1 ( ( 1 0.4 2 ) × ( 1 0.1 2 ) ) 0.5 ) 0.5 × ( 1 0.1 × 0.4 ) × ( 1 ( ( 1 0.4 2 ) × ( 1 0.3 2 ) ) 0.5 ) 0.5 × ( 1 0.1 × 0.2 ) × ( 1 ( ( 1 0.3 2 ) × ( 1 0.1 2 ) ) 0.5 ) 0.5 × ( 1 0.3 × 0.4 ) × ( 1 ( ( 1 0.3 2 ) × ( 1 0.1 2 ) ) 0.5 ) 0.5 × ( 1 0.3 × 0.2 ) × ( 1 ( ( 1 0.1 2 ) × ( 1 0.3 2 ) ) 0.5 ) 0.5 × ( 1 0.4 × 0.2 ) ) 1 3 ) = ( 0.7015 , 0.3114 )
At last, we get WPFHM ω ( 2 ) ( a ˜ 1 , a ˜ 2 , a ˜ 3 , a ˜ 4 ) = ( 0.7015 , 0.3114 ) .

3.3. The PFDHM Operator

Wu et al. [59] proposed the DHM operator.
Definition 10.
The DHM operator is defined [59]:
$$\mathrm{DHM}^{(x)}(\varphi_1, \varphi_2, \ldots, \varphi_n) = \left( \prod_{1 \le i_1 < \cdots < i_x \le n} \frac{\sum_{j=1}^{x} \varphi_{i_j}}{x} \right)^{\frac{1}{C_n^x}}$$
where $x$ is a parameter, $x = 1, 2, \ldots, n$; $i_1, i_2, \ldots, i_x$ are $x$ integer values taken from the set $\{1, 2, \ldots, n\}$ of $n$ integer values; and $C_n^x = \frac{n!}{x!(n-x)!}$ denotes the binomial coefficient.
In this section, we propose the Pythagorean fuzzy DHM (PFDHM) operator.
Definition 11.
Let a ˜ i = ( μ i , ν i ) ( i = 1 , 2 , , k ) be a set of PFNs, then we define PFDHM operator as follows:
$$\mathrm{PFDHM}^{(x)}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_k) = \left( \bigotimes_{1 \le i_1 < \cdots < i_x \le k} \frac{\bigoplus_{j=1}^{x} \tilde{a}_{i_j}}{x} \right)^{\frac{1}{C_k^x}}$$
where $x$ is a parameter, $x = 1, 2, \ldots, k$; $i_1, i_2, \ldots, i_x$ are $x$ integer values taken from the set $\{1, 2, \ldots, k\}$ of $k$ integer values; and $C_k^x = \frac{k!}{x!(k-x)!}$ denotes the binomial coefficient.
Theorem 3.
Let $\tilde{a}_i = (\mu_i, \nu_i)$ $(i = 1, 2, \ldots, k)$ be a collection of PFNs; then the aggregated result of Definition 11 is still a PFN, and
$$\mathrm{PFDHM}^{(x)}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_k) = \left( \bigotimes_{1 \le i_1 < \cdots < i_x \le k} \frac{\bigoplus_{j=1}^{x} \tilde{a}_{i_j}}{x} \right)^{\frac{1}{C_k^x}} = \left( \left( \prod_{1 \le i_1 < \cdots < i_x \le k} \sqrt{1 - \left( \prod_{j=1}^{x} \left( 1 - \mu_{i_j}^2 \right) \right)^{\frac{1}{x}}} \right)^{\frac{1}{C_k^x}},\; \sqrt{1 - \left( \prod_{1 \le i_1 < \cdots < i_x \le k} \left( 1 - \left( \left( \prod_{j=1}^{x} \nu_{i_j} \right)^{\frac{1}{x}} \right)^2 \right) \right)^{\frac{1}{C_k^x}}} \right)$$
Proof. 
(1)
First of all, we prove (35) is kept.
j = 1 x a ˜ i j = ( 1 j = 1 x ( 1 ( μ i j ) 2 ) , j = 1 x ( ν i j ) )
Then,
j = 1 x a ˜ i j x = ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x , ( j = 1 x ν i j ) 1 x )
Thereafter,
1 i 1 < < i x k ( j = 1 x a ˜ i j x ) = ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x , 1 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) )
Furthermore,
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 i 1 < < i x k ( j = 1 x a ˜ i j x ) ) 1 C k x = ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x )
(2)
Next, we prove (34) is a PFN.
Let
p = ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x ,
q = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x
Then we need to prove that it satisfies the following two conditions. (i) 0 p 1 , 0 q 1 ; (ii) 0 p 2 + q 2 1 .
  • Since μ i j [ 0 , 1 ] , we can get
    ( μ i j ) 2 [ 0 , 1 ] 1 ( μ i j ) 2 [ 0 , 1 ] j 1 x ( 1 ( μ i j ) 2 ) [ 0 , 1 ]
    ( μ i j ) 2 [ 0 , 1 ] 1 ( μ i j ) 2 [ 0 , 1 ] j 1 x ( 1 ( μ i j ) 2 ) [ 0 , 1 ]
    1 ( j 1 x ( 1 ( μ i j ) 2 ) ) 1 x [ 0 , 1 ] 1 i 1 < < i x k 1 ( j 1 x ( 1 ( μ i j ) 2 ) ) 1 x [ 0 , 1 ]
    ( 1 i 1 < < i x k 1 ( j 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x [ 0 , 1 ] .
    Therefore 0 p 1 , similarly, we can get
    1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x [ 0 , 1 ] ,   therefore ,   0 q 1 .
  • Obviously, 0 p 2 + q 2 1 , then
    ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x ) 2 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x ) 2 ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ) 1 C k x + 1 ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ) 1 C k x = 1
We get 0 p 2 + q 2 1 .
So the aggregated result of Definition 11 is still a PFN. Next, we discuss some properties of the PFDHM operator.
Property 9.
(Idempotency). If $\tilde{a}_i$ $(i = 1, 2, \ldots, k)$ and $\tilde{a}$ are PFNs, and $\tilde{a}_i = \tilde{a} = (\mu, \nu)$ for all $i = 1, 2, \ldots, k$, then we get
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = a ˜
Proof. 
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x )
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x )
= ( ( 1 i 1 < < i x k 1 ( 1 ( μ i ) 2 ) ) 1 C k x , 1 ( 1 i 1 < < i x k ( 1 ν i 2 ) ) 1 C k x )
= ( 1 ( 1 ( μ i ) 2 ) , 1 ( 1 ν i 2 ) ) = ( 1 ( 1 μ 2 ) , ( 1 ( 1 ( ν ) 2 ) ) ) = ( μ , ν ) = a ˜
Property 10.
(Monotonicity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. If μ i j μ θ j , ν i j ν θ j for all j , then
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) PFDHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Since x 1 , μ i j μ θ j 0 , ν θ j ν i j 0 , then
( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x
( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x
( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) 1 C k x
Similarly, we have
( ( j = 1 x ν i j ) 1 x ) 2 ( ( j = 1 x ν θ j ) 1 x ) 2 1 ( ( j = 1 x ν i j ) 1 x ) 2 1 ( ( j = 1 x ν θ j ) 1 x )
( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ) 1 C k x
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ) 1 C k x
Let a ˜ = PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , π ˜ = PFDHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) and S ( a ˜ ) , S ( π ˜ ) be the score values of a and π respectively. Based on the score value of PFN in (3) and the above inequality, we can imply that S ( a ˜ ) S ( π ˜ ) , and then we discuss the following cases:
  • If S ( a ˜ ) > S ( π ˜ ) , then we can get PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) > PFDHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
  • If S ( a ˜ ) = S ( π ˜ ) , then
    1 2 ( 1 + ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x ) 2 ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x ) 2 ) = 1 2 ( 1 + ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) 1 C k x ) 2 ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ) 1 C k x ) 2 )
Since μ i j μ θ j 0 , ν θ j ν i j 0 , and based on the Equations (3) and (4), we can deduce that
( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x = ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) 1 C k x
And
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ) 1 C k x
Therefore, it follows that
H ( a ˜ ) = ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x ) 2 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x ) 2
= ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) 1 C k x ) 2 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ) 1 C k x ) 2 = H ( π ˜ )
The PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = PFDHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Property 11.
(Boundedness). Let a ˜ i = ( μ i j , ν i j ) , a ˜ + = ( μ max i j , ν max i j ) ( i = 1 , 2 , , k ) be a set of PFNs, and a ˜ = ( μ min i j , ν min i j ) then
$$\tilde{a}^- \le \mathrm{PFDHM}^{(x)}(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_k) \le \tilde{a}^+$$
Proof. 
Based on Properties 9 and 10, we have
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) PFDHM ( x ) ( a ˜ , a ˜ , , a ˜ ) = a ˜ , PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) PFDHM ( x ) ( a ˜ + , a ˜ + , , a ˜ + ) = a ˜ + ,
Then we have a ˜ PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) a ˜ + .
Property 12.
(Commutativity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. Suppose ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , then
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = PFDHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Because ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , then
( 1 i 1 < < i x k ( j = 1 x a ˜ i j x ) ) 1 C k x = ( 1 i 1 < < i x k ( j = 1 x π ˜ i j x ) ) 1 C k x
Thus, PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = PFDHM ( x ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) .
Next, we discuss some particular cases of PFDHM operator.
Case 1: When $x = 1$, the PFDHM operator becomes the geometric mean operator of PFNs.
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( 1 i 1 < < i x k ( 1 μ i 2 ) ) 1 k , ( 1 i 1 < < i x k 1 ( 1 ( ν i ) 2 ) ) 1 k )
= ( ( 1 i 1 < < i x k 1 ( 1 ( μ i ) 2 ) ) 1 k , 1 ( 1 i 1 < < i x k ( 1 ν i 2 ) ) 1 k ) = 1 k i k a ˜ i
Case 2: When $x = k$, the PFDHM operator becomes the arithmetic average operator of PFNs.
PFDHM ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 i 1 < < i x k ( j = 1 x a ˜ i j x ) ) 1 C k k
= ( ( 1 i 1 < < i k k 1 ( j = 1 k ( 1 ( μ i j ) 2 ) ) 1 k ) 1 C k k , 1 ( 1 i 1 < < i k k ( 1 ( ( j = 1 k ν i j ) 1 k ) 2 ) ) 1 C k k )
= ( 1 ( i = 1 k ( 1 ( μ i ) 2 ) ) 1 k , ( i = 1 k ν i ) 1 k ) = i = 1 k a ˜ i 1 k
Example 5.
Let a ˜ 1 = ( 0.6 , 0.2 ) , a ˜ 2 = ( 0.5 , 0.3 ) , a ˜ 3 = ( 0.7 , 0.1 ) , a ˜ 4 = ( 0.8 , 0.2 ) be four PFNs. Then we use the PFDHM operator to fuse four PFNs. (suppose x = 2 ),
a ˜ = PFDHM ( 2 ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( μ , ν ) = ( ( 1 i 1 < < i x k 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) 1 C k x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ) 1 C k x )
= ( ( 1 i 1 < < i 2 4 1 ( j = 1 2 ( 1 ( μ i j ) 2 ) ) 1 2 ) 1 C 4 2 , 1 ( 1 i 1 < < i 2 4 ( 1 ( ( j = 1 2 ν i j ) 1 2 ) 2 ) ) 1 C 4 2 ) = ( ( ( 1 ( ( 1 0.6 2 ) × ( 1 0.5 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.6 2 ) × ( 1 0.7 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.6 2 ) × ( 1 0.8 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.5 2 ) × ( 1 0.7 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.5 2 ) × ( 1 0.8 2 ) ) 0.5 ) 0.5 × ( 1 ( ( 1 0.7 2 ) × ( 1 0.8 2 ) ) 0.5 ) 0.5 ) 1 6 , ( 1 ( ( 1 0.2 × 0.3 ) × ( 1 0.2 × 0.1 ) × ( 1 0.2 × 0.2 ) × ( 1 0.3 × 0.1 ) × ( 1 0.3 × 0.2 ) × ( 1 0.1 × 0.2 ) ) 1 6 ) 0.5 ) = ( 0.6627 , 0.1962 )
At last, we get PFDHM ( 2 ) ( a ˜ 1 , a ˜ 2 , a ˜ 3 , a ˜ 4 ) = ( 0.6627 , 0.1962 ) .
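Analogously, the closed form of Theorem 3 can be coded directly. The Python sketch below is our own illustrative code; it reproduces the result of Example 5.

```python
import math
from itertools import combinations
from typing import Sequence, Tuple

PFN = Tuple[float, float]  # (mu, nu)

def pfdhm(pfns: Sequence[PFN], x: int) -> PFN:
    """Pythagorean fuzzy dual Hamy mean, using the closed form of Theorem 3."""
    k = len(pfns)
    c = math.comb(k, x)
    mu_factor = 1.0   # product over combinations of sqrt(1 - (prod (1 - mu^2))^(1/x))
    nu_factor = 1.0   # product over combinations of (1 - (prod nu)^(2/x))
    for subset in combinations(pfns, x):
        prod_one_minus_mu2 = math.prod(1.0 - m**2 for m, _ in subset)
        prod_nu = math.prod(n for _, n in subset)
        mu_factor *= math.sqrt(1.0 - prod_one_minus_mu2 ** (1.0 / x))
        nu_factor *= 1.0 - prod_nu ** (2.0 / x)
    mu = mu_factor ** (1.0 / c)
    nu = math.sqrt(1.0 - nu_factor ** (1.0 / c))
    return (mu, nu)

if __name__ == "__main__":
    a = [(0.6, 0.2), (0.5, 0.3), (0.7, 0.1), (0.8, 0.2)]
    print(tuple(round(v, 4) for v in pfdhm(a, 2)))  # (0.6627, 0.1962), matching Example 5
```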

3.4. The WPFDHM Operator

The weights of attributes play an important role in practical decision making, and they can influence the decision result. Therefore, it is necessary to consider attribute weights in aggregating information. It is obvious that the PFDHM operator fails to consider the problem of attribute weights. In order to overcome this defect, we propose the WPFDHM operator.
Definition 12.
Let $\tilde{a}_i = (\mu_i, \nu_i)$ $(i = 1, 2, \ldots, k)$ be a group of PFNs and $\omega = (\omega_1, \omega_2, \ldots, \omega_k)^T$ be the weight vector for $\tilde{a}_i$ $(i = 1, 2, \ldots, k)$, which satisfies $\omega_i \in [0, 1]$ and $\sum_{i=1}^{k} \omega_i = 1$; then we can define the WPFDHM operator as follows:
WPFDHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = { ( 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j x ) ) 1 C k 1 x ( 1 x < k ) i = 1 x a ˜ i 1 ω i k 1 ( x = k )
Theorem 4.
Let a ˜ i = ( μ i , ν i ) ( i = 1 , 2 , , k ) be a group of PFNs, and their weight vector meet ω i [ 0.1 ] and i = 1 k ω i = 1 then the result from Definition 11 is still a PFN, and have
WPFDHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j x ) ) 1 C k 1 x ( 1 x < k ) = ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x )
Or
WPFDHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = i = 1 x a ˜ i 1 ω i k 1 = ( 1 i = 1 k ( 1 ( μ i ) 2 ) 1 ω i k 1 , i = 1 k ( ν i ) 1 ω i k 1 ) ( x = k )
Proof. 
(1)
First of all, we prove that (45) and (46) are kept.
For the first case, when ( 1 x < k ) , we get
j = 1 x a ˜ i j x = ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x , ( j = 1 x ν i j ) 1 x )
Then,
( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j x ) = ( ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) , 1 ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) )
Thereafter,
1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j x ) = ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) , 1 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) )
Furthermore,
( 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j x ) ) 1 C k 1 x = ( , ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x )
For the second case, when ( x = k ) , we get
a ˜ i 1 ω i k 1 = ( ( μ i ) 1 ω i k 1 , 1 ( 1 ( ν i ) 2 ) 1 ω i k 1 ) ,
Then,
i = 1 x a ˜ i 1 ω i k 1 = ( 1 ( 1 ( μ i ) 2 ) 1 ω i k 1 , ( ν i ) 1 ω i k 1 )
(2)
Next, we prove the (45) and (46) are PFNs. For the first case, when 1 x < k
p = ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , q = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
Then we need prove the following two conditions. (i) 0 p 1 , 0 q 1 . (ii) 0 p 2 + q 2 1 .
  • Since p [ 0 , 1 ] , we can get
    j = 1 x ( 1 ( μ i j ) 2 ) [ 0 , 1 ] ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x [ 0 , 1 ] 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x [ 0 , 1 ]
    1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) [ 0 , 1 ] ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x [ 0 , 1 ]
    Therefore, 0 p 1 . Similarly, we can get
    1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x [ 0 , 1 ] .
    Therefore, 0 q 1 .
  • Since 0 p 2 + q 2 1 , we can get the following inequality:
    ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 + ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2
    ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x + 1 ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( ν i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x = 1
For the second case, when $x = k$, we can easily prove that it holds. So the aggregation result produced by Definition 12 is still a PFN. Next, we shall deduce some desirable properties of the WPFDHM operator.
Property 13.
(Idempotency). If a ˜ i ( i = 1 , 2 , , k ) are equal, i.e., a ˜ i = a ˜ = ( μ , ν ) and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 then
WPFDHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = a ˜ ,
Proof. 
Since a ˜ i = a ˜ = ( μ i , ν i ) , based on Theorem 4, we get
(1)
For the first case, when 1 x < k .
WPFDHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x )
= ( ( ( 1 ( 1 ( μ ) 2 ) ) C k x 1 i 1 < < i x k ( j = 1 x ω i j ) ) 1 C k 1 x , 1 ( ( 1 ( ν ) 2 ) C k x 1 i 1 < < i x k ( j = 1 x ω i j ) ) 1 C k 1 x )
= ( ( ( 1 ( 1 ( μ ) 2 ) ) C k x k = 1 k C k 1 x 1 ω i ) 1 C k 1 x , 1 ( ( 1 ( ν ) 2 ) C k x k = 1 k C k 1 x 1 ω i ) 1 C k 1 x ) = ( ( ( 1 ( 1 ( μ ) 2 ) ) C k x C k 1 x 1 k = 1 k ω i ) 1 C k 1 x 1 ( ( 1 ( ν ) 2 ) C k x C k 1 x 1 k = 1 k ω i ) 1 C k 1 x )
Since i = 1 k ω i = 1 , we can get
WPFDHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( ( ( 1 ( 1 ( μ ) 2 ) ) C k x C k 1 x 1 ) 1 C k 1 x , 1 ( ( 1 ( ν ) 2 ) C k x C k 1 x 1 ) 1 C k 1 x ) = ( ( ( 1 ( 1 ( μ ) 2 ) ) ( k 1 ) ! x ! ( k 1 x ) ! ) x ! ( k 1 x ) ! ( k 1 ) ! , 1 ( ( 1 ( ν ) 2 ) ( k 1 ) ! x ! ( k 1 x ) ! ) x ! ( k 1 x ) ! ( k 1 ) ! ) = ( μ , ν ) = a ˜
(2)
For the second case, when x = k
WPFDHM ω ( x ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( i = 1 k ( 1 ( μ i ) 2 ) ) 1 ω i k 1 , i = 1 k ( ν i ) 1 ω i k 1 ) = ( 1 ( 1 ( μ ) 2 ) k 1 k 1 , ( ν ) k 1 k 1 )
Since i = 1 k ω i = 1 , we can get
WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 ( 1 ( ν ) 2 ) k 1 k 1 , ( μ ) k 1 k 1 ) = ( μ , ν ) = a ˜ ,
which proves the idempotency property of the WPFDHM operator. □
Property 14.
(Monotonicity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. If μ i j μ θ j , ν i j ν θ j for all j , and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 , the a ˜ and π ˜ are equal, then we have
WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) WPFDHM ω ( k ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Since x 1 , μ i j μ θ j 0 , ν θ j ν i j 0 , then
1 ( μ i j ) 2 1 ( μ θ j ) 2 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x
( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ( 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j )
( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
Similarly, we have
( j = 1 x ν i j ) 1 x ( j = 1 x ν θ j ) 1 x ( ( j = 1 x ν i j ) 1 x ) 2 ( ( j = 1 x ν θ j ) 1 x ) 2 1 ( ( j = 1 x ν i j ) 1 x ) 2 1 ( ( j = 1 x ν θ j ) 1 x ) 2
1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ( 1 j = 1 x ω i j )
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
Let a = WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , π = WPFDHM ω ( k ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) and S ( a ) , S ( π ) be the score values of a and π respectively. Based on the score value of PFN in (3) and the above inequality, we can imply that S ( a ) S ( π ) , and then we discuss the following cases:
(1)
If S ( a ) > S ( π ) , then we can get WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) > WPFDHM ω ( k ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) .
(2)
If S ( a ) = S ( π ) , then
1 2 ( 1 + ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x μ i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 ) = 1 2 ( 1 + ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 ( 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) 2 )
Since μ i j μ θ j 0 , ν θ j ν i j 0 , and based on the Equations (3) and (4), we can deduce that
( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x = ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ θ j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
And
1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x = 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν θ j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x
Therefore, it follows that H ( a ˜ ) = H ( π ˜ ) , the WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = WPFDHM ω ( k ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) . When x = k , we can prove it in a similar way. □
Property 15.
(Boundedness). Let a ˜ i = ( μ i j , ν i j ) , a ˜ + = ( μ max i j , ν max i j ) ( i = 1 , 2 , , k ) be a set of PFNs, and a ˜ = ( μ min i j , ν min i j ) , and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 then
a ˜ WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) a ˜ +
Proof. 
Based on Properties 13 and 14, we have
WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) WPFDHM ω ( k ) ( a ˜ , a ˜ , , a ˜ ) = a ˜ , WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) WPFDHM ω ( k ) ( a ˜ + , a ˜ + , , a ˜ + ) = a ˜ + ,
Then we have a ˜ WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) a ˜ + .
Property 16.
(Commutativity). Let a ˜ i = ( μ i j , ν i j ) , π ˜ i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , k ) be two sets of PFNs. Suppose ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , and weight vector meets ω i [ 0 , 1 ] and i = 1 k ω i = 1 , the a ˜ and π ˜ are equal, then we have
WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = WPFDHM ω ( k ) ( π ˜ 1 , π ˜ 2 , , π ˜ k )
Proof. 
Because ( π ˜ 1 , π ˜ 2 , , π ˜ k ) is any permutation of ( a ˜ 1 , a ˜ 2 , , a ˜ k ) , then
( 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j x ) ) 1 C k x = ( 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x π ˜ i j x ) ) 1 C k x ( 1 x < k ) i = 1 x a ˜ i 1 ω i k 1 = i = 1 x π ˜ i 1 ω i k 1 ( x = k )
Thus WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = WPFDHM ω ( k ) ( π ˜ 1 , π ˜ 2 , , π ˜ k ) . Next, we will discuss some particular cases of WPFDHM operator for different value x .
Case 1: When x = 1 , the WPFDHM will reduce to the following form:
WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 i 1 < < i x k ( 1 j = 1 1 ω i j ) ( j = 1 x a ˜ i j ) ) 1 C k 1 1 = ( ( 1 i 1 k ( μ i ) ( 1 ω i ) ) 1 k 1 , 1 ( 1 i 1 k ( 1 ( ν i ) 2 ) ( 1 ω i ) ) 1 k 1 )
Case 2: When x = k , the proposed WPFDHM operator will reduce to the following form:
WPFDHM ω ( k ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = i = 1 k a ˜ i 1 ω i k 1 = ( 1 i = 1 k ( 1 ( μ i ) 2 ) 1 ω i k 1 , i = 1 k ( ν i 1 ω i k 1 ) )
Example 6.
Let $\tilde{a}_1 = (0.8, 0.2)$, $\tilde{a}_2 = (0.6, 0.3)$, $\tilde{a}_3 = (0.5, 0.2)$, $\tilde{a}_4 = (0.5, 0.4)$ be four PFNs, and let the weighting vector of the attributes be $\omega = (0.3, 0.2, 0.4, 0.1)^T$. Then we use the proposed WPFDHM operator to aggregate the four PFNs (suppose $x = 2$).
a ˜ = ( μ , ν ) = WPFDHM ω ( 2 ) ( a ˜ 1 , a ˜ 2 , , a ˜ k ) = ( 1 i 1 < < i x k ( 1 j = 1 x ω i j ) ( j = 1 x a ˜ i j x ) ) 1 C k 1 x ( 1 x < k ) = ( ( 1 i 1 < < i x k ( 1 ( j = 1 x ( 1 ( μ i j ) 2 ) ) 1 x ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x , 1 ( 1 i 1 < < i x k ( 1 ( ( j = 1 x ν i j ) 1 x ) 2 ) ( 1 j = 1 x ω i j ) ) 1 C k 1 x ) = ( ( 1 i 1 < < i 2 4 ( 1 ( j = 1 2 ( 1 ( μ i j ) 2 ) ) 1 2 ) ( 1 j = 1 2 ω i j ) ) 1 C 3 2 , 1 ( 1 i 1 < < i 2 4 ( 1 ( ( j = 1 2 ν i j ) 1 2 ) 2 ) ( 1 j = 1 2 ω i j ) ) 1 C 3 2 )
= ( ( ( ( 1 ( ( 1 0.8 2 ) × ( 1 0.6 2 ) ) 0.5 ) 0.5 ) 1 0.3 × 0.2 × ( ( 1 ( ( 1 0.8 2 ) × ( 1 0.5 2 ) ) 0.5 ) 0.5 ) 1 0.3 × 0.4 × ( ( 1 ( ( 1 0.8 2 ) × ( 1 0.4 2 ) ) 0.5 ) 0.5 ) 1 0.3 × 0.1 × ( ( 1 ( ( 1 0.6 2 ) × ( 1 0.5 2 ) ) 0.5 ) 0.5 ) 1 0.4 × 0.2 × ( ( 1 ( ( 1 0.6 2 ) × ( 1 0.5 2 ) ) 0.5 ) 0.5 ) 1 0.1 × 0.2 × ( ( 1 ( ( 1 0.5 2 ) × ( 1 0.5 2 ) ) 0.5 ) 0.5 ) 1 0.4 × 0.1 ) 1 3 , ( 1 ( ( 1 0.2 × 0.3 ) 1 0.3 × 0.2 × ( 1 0.2 × 0.2 ) 1 0.3 × 0.4 × ( 1 0.2 × 0.4 ) 1 0.3 × 0.1 × ( 1 0.3 × 0.2 ) 1 0.2 × 0.4 × ( 1 0.3 × 0.4 ) 1 0.2 × 0.1 × ( 1 0.2 × 0.4 ) 1 0.4 × 0.1 ) 1 3 ) 0.5 ) = ( 0.6334 , 0.2636 )
At last, we get WPFDHM ω ( 2 ) ( a ˜ 1 , a ˜ 2 , a ˜ 3 , a ˜ 4 ) = ( 0.6334 , 0.2636 ) .□

4. A MAGDM Approach Based on the Proposed PFHM Operator

In this part, we apply the WPFHM operator to MAGDM problems in which the information is expressed by PFNs. Let $X = \{x_1, x_2, \ldots, x_m\}$ be a set of alternatives and $C = \{c_1, c_2, \ldots, c_n\}$ be a collection of attributes with weighting vector $\omega = (\omega_1, \omega_2, \ldots, \omega_n)$, where $\omega_j \in [0, 1]$, $j = 1, 2, \ldots, n$, and $\sum_{j=1}^{n} \omega_j = 1$. A group of experts $Y = \{y_1, y_2, \ldots, y_z\}$ is invited to give the evaluation information, with weighting vector $w = (w_1, w_2, \ldots, w_z)^T$, where $w_b \in [0, 1]$ $(b = 1, 2, \ldots, z)$ and $\sum_{b=1}^{z} w_b = 1$. The expert $y_b$ evaluates each attribute $c_j$ of each alternative $x_i$ in the form of a PFN $\tilde{a}_{ij}^{b} = (\mu_{ij}^{b}, \nu_{ij}^{b})$ $(i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n)$, and the decision matrix $\tilde{A}^{b} = (\tilde{a}_{ij}^{b})_{m \times n} = ((\mu_{ij}^{b}, \nu_{ij}^{b}))_{m \times n}$ $(b = 1, 2, \ldots, z)$ is constructed. The ultimate goal is to obtain a ranking of all the alternatives.
Then, we will give the steps for solving this problem.
Step 1: Based on the WPFHM operator, calculate the collective evaluation value of each attribute for each alternative by $\tilde{a}_{ij} = \mathrm{WPFHM}_{w}(\tilde{a}_{ij}^{1}, \tilde{a}_{ij}^{2}, \ldots, \tilde{a}_{ij}^{z})$.
Step 2: Based on the WPFHM operator, calculate the comprehensive decision-making information of each alternative by $\tilde{a}_{i} = \mathrm{WPFHM}_{\omega}(\tilde{a}_{i1}, \tilde{a}_{i2}, \ldots, \tilde{a}_{in})$.
Step 3: According to Definitions 2 and 3, calculate $S(\tilde{a}_i)$ and $H(\tilde{a}_i)$.
Step 4: Sort all alternatives { x 1 , x 2 , , x m } and choose the best one.
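The four steps above can be wired together as in the following Python sketch. It is only a schematic: the parameter `aggregate` stands for whichever weighted Pythagorean fuzzy operator is chosen (e.g., the WPFHM or WPFDHM operator), and the function names and data layout are our own assumptions rather than part of the paper.

```python
from typing import Callable, List, Sequence, Tuple

PFN = Tuple[float, float]                                     # (mu, nu)
Aggregator = Callable[[Sequence[PFN], Sequence[float]], PFN]  # weighted PFN aggregation operator

def score(a: PFN) -> float:
    """Score function of Definition 2."""
    mu, nu = a
    return 0.5 * (1.0 + mu**2 - nu**2)

def magdm_rank(matrices: List[List[List[PFN]]],   # matrices[b][i][j]: expert b, alternative i, attribute j
               expert_weights: Sequence[float],
               attribute_weights: Sequence[float],
               aggregate: Aggregator) -> List[int]:
    """Steps 1-4: fuse experts, fuse attributes, score, and return alternative indices ranked best first."""
    m = len(matrices[0])      # number of alternatives
    n = len(matrices[0][0])   # number of attributes
    overall: List[PFN] = []
    for i in range(m):
        # Step 1: collective value of attribute j for alternative i over all experts.
        row = [aggregate([mat[i][j] for mat in matrices], expert_weights) for j in range(n)]
        # Step 2: comprehensive value of alternative i over all attributes.
        overall.append(aggregate(row, attribute_weights))
    # Steps 3-4: compute scores and sort the alternatives.
    return sorted(range(m), key=lambda i: score(overall[i]), reverse=True)
```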

5. An Illustrative Example

In this section, we give an example to illustrate the proposed method. A company wants to select a supplier, and there are four candidate suppliers $A_i$ $(i = 1, 2, 3, 4)$. Each supplier is evaluated from four aspects $G_i$ $(i = 1, 2, 3, 4)$: “production cost”, “production quality”, “supplier's service performance” and “risk factor”. The weight vector of the attributes is $\omega = (0.1, 0.3, 0.4, 0.2)^T$. There are four experts, and the weight vector of the experts is $w = (0.2, 0.4, 0.1, 0.3)^T$. The decision matrices $\tilde{R}_b = (\tilde{a}_{ij}^{b})_{4 \times 4}$ $(b = 1, 2, 3, 4)$ are shown in Table 1, Table 2, Table 3 and Table 4, and our goal is to rank the four suppliers and select the best one.

5.1. Decision-Making Processes

Step 1: Since the four attributes are of the same type, we do not need to normalize the matrices $\tilde{R}_1 \sim \tilde{R}_4$.
Step 2: Use the WPFHM operator to aggregate the four decision matrices $\tilde{R}_b = (\tilde{a}_{ij}^{b})_{m \times n}$ into a collective matrix $\tilde{R} = (\tilde{a}_{ij})_{m \times n}$, which is listed in Table 5 (suppose $x = 2$).
Use the WPFDHM operator to aggregate the four decision matrices $\tilde{R}_b = (\tilde{a}_{ij}^{b})_{m \times n}$ into a collective matrix $\tilde{R} = (\tilde{a}_{ij})_{m \times n}$, which is shown in Table 6 (suppose $x = 2$).
Step 3: Use the WPFHM (WPFDHM) operator to fuse all the attribute values $\tilde{a}_{ij}$ $(j = 1, 2, 3, 4)$ and obtain the comprehensive evaluation value of each alternative (suppose $x = 2$).
With the WPFHM operator: $\tilde{a}_1 = (0.3015, 0.0138)$, $\tilde{a}_2 = (0.3021, 0.0370)$, $\tilde{a}_3 = (0.3083, 0.0443)$, $\tilde{a}_4 = (0.3003, 0.0868)$.
With the WPFDHM operator: $\tilde{a}_1 = (0.0849, 0.7369)$, $\tilde{a}_2 = (0.2034, 0.7190)$, $\tilde{a}_3 = (0.3482, 0.5772)$, $\tilde{a}_4 = (0.0573, 0.9129)$.
Step 4: Obtain the score values.
With the WPFHM operator: $S(\tilde{a}_1) = 0.5382$, $S(\tilde{a}_2) = 0.5454$, $S(\tilde{a}_3) = 0.5508$, $S(\tilde{a}_4) = 0.5272$.
With the WPFDHM operator: $S(\tilde{a}_1) = 0.2321$, $S(\tilde{a}_2) = 0.2622$, $S(\tilde{a}_3) = 0.3940$, $S(\tilde{a}_4) = 0.0849$.
Step 5: Rank all the alternatives: $\tilde{a}_3 \succ \tilde{a}_2 \succ \tilde{a}_1 \succ \tilde{a}_4$; thus, the best choice is $\tilde{a}_3$.
Since different parameter values of the WPFHM operator may have an impact on the ordering results, we calculate the scores produced by different values of $x$; the results are shown in Table 7.
Since different parameter values of the WPFDHM operator may have an impact on the ordering results, we calculate the scores produced by different values of $x$; the results are shown in Table 8.
From Table 7 and Table 8, we can get following conclusions.
When $x = 1$, the ranking of the alternatives is $\tilde{a}_3 \succ \tilde{a}_2 \succ \tilde{a}_1 \succ \tilde{a}_4$, and the best choice is $\tilde{a}_3$.
When $x = 2, 3, 4$, the ranking of the alternatives is $\tilde{a}_3 \succ \tilde{a}_2 \succ \tilde{a}_1 \succ \tilde{a}_4$, and the best choice is $\tilde{a}_3$.

5.2. Comparative Analysis

Then, we compare our proposed method with the PFWA operator and the PFWG operator [60]; the comparative results are listed in Table 9.
From Table 9, we can see that the same ranking results are obtained. However, the PFWA and PFWG operators fail to consider the relationships between the arguments, whereas our proposed WPFHM and WPFDHM operators capture the relationships among the arguments being aggregated.

6. Conclusions

In this paper, we investigated MAGDM problems with PFNs. We utilized the HM operator, the DHM operator, the weighted HM operator and the weighted DHM operator to develop the PFHM operator, the WPFHM operator, the PFDHM operator and the WPFDHM operator, and the prominent properties of these operators were analyzed. We then developed methods to solve MAGDM problems with PFNs. Finally, a practical example of supplier selection was given. In future work, the extension and application of these operators to other MADM problems [61,62], risk analysis and uncertain contexts [63,64] need to be investigated.

Author Contributions

Z.L., G.W. and M.L. conceived of and worked together on this study; G.W. wrote the computing program in Matlab and analyzed the data; Z.L. and G.W. wrote the paper. All authors have read and approved the final manuscript.

Funding

The work was supported by the National Natural Science Foundation of China (Grant No. 71571128), the Humanities and Social Sciences Foundation of the Ministry of Education of the People's Republic of China (16YJA630033), and the Construction Plan of Scientific Research Innovation Team for Colleges and Universities in Sichuan Province (15TD0004).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yager, R.R. Pythagorean fuzzy subsets. In Proceedings of the Joint IFSA World Congress and NAFIPS Annual Meeting, Edmonton, AB, Canada, 24–28 June 2013; pp. 57–61. [Google Scholar]
  2. Yager, R.R. Pythagorean membership grades in multicriteria decision making. IEEE Trans. Fuzzy Syst. 2014, 22, 958–965. [Google Scholar] [CrossRef]
  3. Zhang, X.L.; Xu, Z.S. Extension of TOPSIS to multiple criteria decision making with Pythagorean fuzzy sets. Int. J. Intell. Syst. 2014, 29, 1061–1078. [Google Scholar] [CrossRef]
  4. Peng, X.; Yang, Y. Some results for Pythagorean Fuzzy Sets. Int. J. Intell. Syst. 2015, 30, 1133–1160. [Google Scholar] [CrossRef]
  5. Reformat, M.Z.; Yager, R.R. Suggesting Recommendations Using Pythagorean Fuzzy Sets illustrated Using Netflix Movie Data. In Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Montpellier, France, 15–19 July 2014; pp. 546–556. [Google Scholar]
  6. Gou, X.; Xu, Z.; Ren, P. The Properties of Continuous Pythagorean Fuzzy Information. Int. J. Intell. Syst. 2016, 31, 401–424. [Google Scholar] [CrossRef]
  7. Garg, H. A New Generalized Pythagorean Fuzzy Information Aggregation Using Einstein Operations and Its Application to Decision Making. Int. J. Intell. Syst. 2016, 31, 886–920. [Google Scholar] [CrossRef]
  8. Zeng, S.; Chen, J.; Li, X. A Hybrid Method for Pythagorean Fuzzy Multiple-Criteria Decision Making. Int. J. Inf. Technol. Decis. Making 2016, 15, 403–422. [Google Scholar] [CrossRef]
  9. Wei, G.W. Pythagorean fuzzy interaction aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 2119–2132. [Google Scholar] [CrossRef]
  10. Gao, H.; Lu, M.; Wei, G.W.; Wei, Y. Some novel Pythagorean fuzzy interaction aggregation operators in multiple attribute decision making. Fundam. Inf. 2018, 159, 385–428. [Google Scholar] [CrossRef]
  11. Ren, P.J.; Xu, Z.S.; Gou, X.J. Pythagorean fuzzy TODIM approach to multi-criteria decision making. Appl. Soft Comput. 2016, 42, 246–259. [Google Scholar] [CrossRef]
  12. Wei, G.W.; Lu, M. Pythagorean Fuzzy Maclaurin Symmetric Mean Operators in multiple attribute decision making. Int. J. Intell. Syst. 2018, 33, 1043–1070. [Google Scholar] [CrossRef]
  13. Maclaurin, C. A second letter to Martin Folkes, Esq.; concerning the roots of equations, with demonstration of other rules of algebra. Philos. Trans. R. Soc. Lond. Ser. A 1729, 36, 59–96. [Google Scholar]
  14. Wu, S.J.; Wei, G.W. Pythagorean fuzzy Hamacher aggregation operators and their application to multiple attribute decision making. Int. J. Knowl. Based Intell. Eng. Syst. 2017, 21, 189–201. [Google Scholar] [CrossRef]
  15. Wei, G.W.; Wei, Y. Similarity measures of Pythagorean fuzzy sets based on cosine function and their applications. Int. J. Intell. Syst. 2018, 33, 634–652. [Google Scholar] [CrossRef]
  16. Wei, G.W.; Gao, H. The generalized Dice similarity measures for picture fuzzy sets and their applications. Informatica 2018, 29, 107–124. [Google Scholar] [CrossRef]
  17. Wei, G.W. Some similarity measures for picture fuzzy sets and their applications. Iran. J. Fuzzy Syst. 2018, 15, 77–89. [Google Scholar]
  18. Wei, G.W. Some cosine similarity measures for picture fuzzy sets and their applications to strategic decision making. Informatica 2017, 28, 547–564. [Google Scholar] [CrossRef]
  19. Xue, W.T.; Xu, Z.S.; Zhang, X.L.; Tian, X.L. Pythagorean Fuzzy LINMAP Method Based on the Entropy Theory for Railway Project Investment Decision Making. Int. J. Intell. Syst. 2018, 33, 93–125. [Google Scholar] [CrossRef]
  20. Wei, G.W.; Lu, M. Pythagorean fuzzy power aggregation operators in multiple attribute decision making. Int. J. Intell. Syst. 2018, 33, 169–186. [Google Scholar] [CrossRef]
  21. Wan, S.-P.; Jin, Z.; Dong, J.-Y. Pythagorean fuzzy mathematical programming method for multi-attribute group decision making with Pythagorean fuzzy truth degrees. Knowl. Inf. Syst. 2018, 55, 437–466. [Google Scholar] [CrossRef]
  22. Baloglu, U.B.; Demir, Y. An Agent-Based Pythagorean Fuzzy Approach for Demand Analysis with Incomplete Information. Int. J. Intell. Syst. 2018, 33, 983–997. [Google Scholar] [CrossRef]
  23. Liang, D.C.; Zhang, Y.R.J.; Xu, Z.S.; Darko, A.P. Pythagorean fuzzy Bonferroni mean aggregation operator and its accelerative calculating algorithm with the multithreading. Int. J. Intell. Syst. 2018, 33, 615–633. [Google Scholar] [CrossRef]
  24. Wei, G.W.; Zhang, Z.P. Some Single-Valued Neutrosophic Bonferroni Power Aggregation Operators in Multiple Attribute Decision Making. J. Ambient Intell. Humaniz. Comput. 2018. [Google Scholar] [CrossRef]
  25. Wang, J.; Wei, G.W.; Wei, Y. Models for Green Supplier Selection with Some 2-Tuple Linguistic Neutrosophic Number Bonferroni Mean Operators. Symmetry 2018, 10, 131. [Google Scholar] [CrossRef]
  26. Wei, G.W. Picture uncertain linguistic Bonferroni mean operators and their application to multiple attribute decision making. Kybernetes 2017, 46, 1777–1800. [Google Scholar] [CrossRef]
  27. Wei, G.W. Picture 2-tuple linguistic Bonferroni mean operators and their application to multiple attribute decision making. Int. J. Fuzzy Syst. 2017, 19, 997–1010. [Google Scholar] [CrossRef]
  28. Jiang, X.P.; Wei, G.W. Some Bonferroni mean operators with 2-tuple linguistic information and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2014, 27, 2153–2162. [Google Scholar]
  29. Mandal, P.; Ranadive, A.S. Decision-theoretic rough sets under Pythagorean fuzzy information. Int. J. Intell. Syst. 2018, 33, 818–835. [Google Scholar] [CrossRef]
  30. Chen, T.-Y. An Interval-Valued Pythagorean Fuzzy Outranking Method with a Closeness-Based Assignment Model for Multiple Criteria Decision Making. Int. J. Intell. Syst. 2018, 33, 126–168. [Google Scholar] [CrossRef]
  31. Garg, H. A Linear Programming Method Based on an Improved Score Function for Interval-Valued Pythagorean Fuzzy Numbers and Its Application to Decision-Making. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2018, 26, 67–80. [Google Scholar] [CrossRef]
  32. Khan, M.S.A.; Abdullah, S.; Ali, M.Y.; Hussain, I.; Farooq, M. Extension of TOPSIS method base on Choquet integral under interval-valued Pythagorean fuzzy environment. J. Intell. Fuzzy Syst. 2018, 34, 267–282. [Google Scholar] [CrossRef]
  33. Garg, H. New exponential operational laws and their aggregation operators for interval-valued Pythagorean fuzzy multicriteria decision-making. Int. J. Intell. Syst. 2018, 33, 653–683. [Google Scholar] [CrossRef]
  34. Li, D.Q.; Zeng, W.Y. Distance Measure of Pythagorean Fuzzy Sets. Int. J. Intell. Syst. 2018, 33, 348–361. [Google Scholar] [CrossRef]
  35. Gao, H. Pythagorean Fuzzy Hamacher Prioritized Aggregation Operators in Multiple Attribute Decision Making. J. Intell. Fuzzy Syst. 2018, 35, 2229–2245. [Google Scholar] [CrossRef]
  36. Wei, G.; Wei, Y. Some single-valued neutrosophic dombi prioritized weighted aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2018, 35, 2001–2013. [Google Scholar] [CrossRef]
  37. Gao, H.; Wei, G.W.; Huang, Y.H. Dual hesitant bipolar fuzzy Hamacher prioritized aggregation operators in multiple attribute decision making. IEEE Access 2018, 6, 11508–11522. [Google Scholar] [CrossRef]
  38. Ran, L.G.; Wei, G.W. Uncertain prioritized operators and their application to multiple attribute group decision making. Technol. Econ. Dev. Econ. 2015, 21, 118–139. [Google Scholar] [CrossRef]
  39. Zhao, X.F.; Li, Q.X.; Wei, G.W. Some prioritized aggregating operators with linguistic information and their application to multiple attribute group decision making. J. Intell. Fuzzy Syst. 2014, 26, 1619–1630. [Google Scholar]
  40. Zhou, L.Y.; Lin, R.; Zhao, X.F.; Wei, G.W. Uncertain linguistic prioritized aggregation operators and their application to multiple attribute group decision making. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 2013, 21, 603–627. [Google Scholar] [CrossRef]
  41. Lin, R.; Zhao, X.F.; Wei, G.W. Fuzzy number intuitionistic fuzzy prioritized operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2013, 24, 879–888. [Google Scholar]
  42. Wei, G.W.; Lu, M. Dual hesitant Pythagorean fuzzy Hamacher aggregation operators in multiple attribute decision making. Arch. Control Sci. 2017, 27, 365–395. [Google Scholar] [CrossRef]
  43. Lu, M.; Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Hesitant pythagorean fuzzy hamacher aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 1105–1117. [Google Scholar] [CrossRef]
  44. Wei, G.W.; Lu, M.; Tang, X.Y.; Wei, Y. Pythagorean Hesitant Fuzzy Hamacher Aggregation Operators and Their Application to Multiple Attribute Decision Making. J. Intell. Fuzzy Syst. 2018, 33, 1197–1233. [Google Scholar] [CrossRef]
  45. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Bipolar fuzzy Hamacher aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2018, 20, 1–12. [Google Scholar] [CrossRef]
  46. Wei, G.W. Picture fuzzy Hamacher aggregation operators and their application to multiple attribute decision making. Fundam. Inf. 2018, 157, 271–320. [Google Scholar] [CrossRef]
  47. Zhou, L.Y.; Zhao, X.F.; Wei, G.W. Hesitant fuzzy Hamacher aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2014, 26, 2689–2699. [Google Scholar]
  48. Wei, G.W.; Lu, M.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Pythagorean 2-tuple linguistic aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 1129–1142. [Google Scholar] [CrossRef]
  49. Tang, X.Y.; Wei, G.W. Models for green supplier selection in green supply chain management with Pythagorean 2-tuple linguistic information. IEEE Access 2018, 6, 8042–8060. [Google Scholar] [CrossRef]
  50. Huang, Y.H.; Wei, G.W. TODIM Method for Pythagorean 2-tuple Linguistic Multiple Attribute Decision Making. J. Intell. Fuzzy Syst. 2018, 35, 901–915. [Google Scholar] [CrossRef]
  51. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. A linear assignment method for multiple criteria decision analysis with hesitant fuzzy sets based on fuzzy measure. Int. J. Fuzzy Syst. 2017, 19, 607–614. [Google Scholar] [CrossRef]
  52. Wei, G.W.; Gao, H.; Wang, J.; Huang, Y.H. Research on Risk Evaluation of Enterprise Human Capital Investment with Interval-valued bipolar 2-tuple linguistic Information. IEEE Access 2018, 6, 35697–35712. [Google Scholar] [CrossRef]
  53. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Picture 2-tuple linguistic aggregation operators in multiple attribute decision making. Soft Comput. 2018, 22, 989–1002. [Google Scholar] [CrossRef]
  54. Wei, G.W. Interval-valued dual hesitant fuzzy uncertain linguistic aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 1881–1893. [Google Scholar] [CrossRef]
  55. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Hesitant bipolar fuzzy aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 1119–1128. [Google Scholar] [CrossRef]
  56. Wei, G.W. Picture fuzzy aggregation operators and their application to multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 713–724. [Google Scholar] [CrossRef]
  57. Wei, G.W.; Gao, H.; Wei, Y. Some q-Rung Orthopair Fuzzy Heronian Mean Operators in Multiple Attribute Decision Making. Int. J. Intell. Syst. 2018, 33, 1426–1458. [Google Scholar] [CrossRef]
  58. Hara, T.; Uchiyama, M.; Takahasi, S.E. A refinement of various mean inequalities. J. Inequal. Appl. 1998, 2, 387–395. [Google Scholar] [CrossRef]
  59. Wu, S.; Wang, J.; Wei, G.; Wei, Y. Research on Construction Engineering Project Risk Assessment with Some 2-Tuple Linguistic Neutrosophic Hamy Mean Operators. Sustainability 2018, 10, 1536. [Google Scholar] [CrossRef]
  60. Ma, Z.M.; Xu, Z.S. Symmetric Pythagorean Fuzzy Weighted Geometric/Averaging Operators and Their Application in Multicriteria Decision-Making Problems. Int. J. Intell. Syst. 2016, 31, 1198–1219. [Google Scholar] [CrossRef]
  61. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Projection models for multiple attribute decision making with picture fuzzy information. Int. J. Mach. Learn. Cybern. 2018, 9, 713–719. [Google Scholar] [CrossRef]
  62. Lu, M.; Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Bipolar 2-tuple linguistic aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2017, 33, 1197–1207. [Google Scholar] [CrossRef]
  63. Merigo, J.M.; Gil-Lafuente, A.M. Fuzzy induced generalized aggregation operators and its application in multi-person decision making. Expert Syst. Appl. 2011, 38, 9761–9772. [Google Scholar] [CrossRef]
  64. Chen, T.Y. Remoteness index-based Pythagorean fuzzy VIKOR methods with a generalized distance measure for multiple criteria decision analysis. Inf. Fusion 2018, 41, 129–150. [Google Scholar] [CrossRef]
Table 1. Decision matrix R̃1.
       G1             G2             G3             G4
A1     (0.50, 0.30)   (0.40, 0.20)   (0.50, 0.40)   (0.60, 0.50)
A2     (0.70, 0.20)   (0.60, 0.30)   (0.50, 0.40)   (0.40, 0.30)
A3     (0.80, 0.10)   (0.60, 0.20)   (0.70, 0.20)   (0.90, 0.10)
A4     (0.60, 0.50)   (0.30, 0.40)   (0.50, 0.80)   (0.40, 0.70)
Table 2. Decision matrix R̃2.
       G1             G2             G3             G4
A1     (0.60, 0.50)   (0.70, 0.60)   (0.50, 0.40)   (0.40, 0.20)
A2     (0.80, 0.10)   (0.40, 0.30)   (0.60, 0.10)   (0.80, 0.30)
A3     (0.70, 0.40)   (0.80, 0.20)   (0.70, 0.60)   (0.60, 0.20)
A4     (0.40, 0.60)   (0.50, 0.40)   (0.30, 0.80)   (0.30, 0.60)
Table 3. Decision matrix R̃3.
       G1             G2             G3             G4
A1     (0.40, 0.30)   (0.40, 0.30)   (0.50, 0.70)   (0.40, 0.20)
A2     (0.50, 0.30)   (0.80, 0.60)   (0.50, 0.70)   (0.60, 0.50)
A3     (0.80, 0.20)   (0.40, 0.70)   (0.90, 0.10)   (0.60, 0.30)
A4     (0.50, 0.60)   (0.60, 0.40)   (0.40, 0.60)   (0.50, 0.60)
Table 4. Decision matrix R̃4.
       G1             G2             G3             G4
A1     (0.70, 0.60)   (0.30, 0.40)   (0.40, 0.20)   (0.50, 0.30)
A2     (0.50, 0.80)   (0.70, 0.60)   (0.80, 0.30)   (0.70, 0.20)
A3     (0.70, 0.50)   (0.60, 0.30)   (0.90, 0.10)   (0.80, 0.30)
A4     (0.50, 0.70)   (0.30, 0.50)   (0.40, 0.40)   (0.50, 0.40)
Table 5. The collective decision matrix R̃ obtained by the WPFHM operator (x = 2).
       G1                 G2                 G3                 G4
A1     (0.3857, 0.1549)   (0.4084, 0.2074)   (0.3622, 0.1334)   (0.3695, 0.1728)
A2     (0.4071, 0.1010)   (0.4373, 0.0533)   (0.4257, 0.3014)   (0.4497, 0.2756)
A3     (0.4673, 0.0270)   (0.4557, 0.1155)   (0.4483, 0.1293)   (0.4683, 0.1242)
A4     (0.3657, 0.3949)   (0.3432, 0.3752)   (0.3893, 0.3329)   (0.3551, 0.2813)
Table 6. The collective decision matrix R̃ obtained by the WPFDHM operator (x = 2).
       G1                 G2                 G3                 G4
A1     (0.2751, 0.4911)   (0.3346, 0.6162)   (0.2014, 0.5589)   (0.2609, 0.5144)
A2     (0.3357, 0.4469)   (0.4803, 0.2891)   (0.3954, 0.7189)   (0.4951, 0.6645)
A3     (0.6078, 0.2394)   (0.5182, 0.5385)   (0.5584, 0.4363)   (0.6114, 0.3937)
A4     (0.2323, 0.7933)   (0.1602, 0.8023)   (0.2737, 0.7366)   (0.2046, 0.6824)
Table 7. Score and ranking of the alternatives with different parameter values x (WPFHM operator).
x      S(ã1)     S(ã2)     S(ã3)     S(ã4)     Ranking
1      0.5693    0.5454    0.7716    0.4473    ã3 ≻ ã2 ≻ ã1 ≻ ã4
2      0.5382    0.5454    0.5508    0.5272    ã3 ≻ ã2 ≻ ã1 ≻ ã4
3      0.6529    0.8097    0.9126    0.6028    ã3 ≻ ã2 ≻ ã1 ≻ ã4
4      0.5251    0.5794    0.7034    0.4030    ã3 ≻ ã2 ≻ ã1 ≻ ã4
Table 8. Score and ranking of the alternatives with different parameter values x (WPFDHM operator).
x      S(ã1)     S(ã2)     S(ã3)     S(ã4)     Ranking
1      0.5251    0.5794    0.7034    0.4030    ã3 ≻ ã2 ≻ ã1 ≻ ã4
2      0.2321    0.2622    0.3940    0.0849    ã3 ≻ ã2 ≻ ã1 ≻ ã4
3      0.0583    0.0861    0.1941    0.0018    ã3 ≻ ã2 ≻ ã1 ≻ ã4
4      0.8042    0.9026    0.9639    0.6632    ã3 ≻ ã2 ≻ ã1 ≻ ã4
Table 9. Ordering of the suppliers.
           Ordering
PFWA       ã3 ≻ ã2 ≻ ã1 ≻ ã4
PFWG       ã3 ≻ ã2 ≻ ã1 ≻ ã4
