Article

Methods for Multiple Attribute Group Decision Making Based on Intuitionistic Fuzzy Dombi Hamy Mean Operators

School of Business, Sichuan Normal University, Chengdu 610101, China
*
Author to whom correspondence should be addressed.
Symmetry 2018, 10(11), 574; https://doi.org/10.3390/sym10110574
Submission received: 28 September 2018 / Revised: 24 October 2018 / Accepted: 27 October 2018 / Published: 1 November 2018
(This article belongs to the Special Issue Multi-Criteria Decision Aid methods in fuzzy decision problems)

Abstract:
In this paper, we extend the Hamy mean (HM) operator, the Dombi Hamy mean (DHM) operator, and the Dombi dual Hamy mean (DDHM) operator to intuitionistic fuzzy numbers (IFNs), proposing the intuitionistic fuzzy Dombi Hamy mean (IFDHM) operator, the intuitionistic fuzzy weighted Dombi Hamy mean (IFWDHM) operator, the intuitionistic fuzzy Dombi dual Hamy mean (IFDDHM) operator, and the intuitionistic fuzzy weighted Dombi dual Hamy mean (IFWDDHM) operator. Following this, multiple attribute group decision-making (MAGDM) methods are proposed with these operators. To conclude, we utilize an applicable example of the selection of a car supplier to demonstrate the proposed methods.

1. Introduction

Multiple attribute decision making (MADM) is a key branch of decision theory. The definition of intuitionistic fuzzy sets (IFSs) [1,2] has been utilized to deal with uncertainty and imprecision. The introduction of intuitionistic fuzzy entropy by Burillo and Bustince [3] caught the attention of researchers. Xu [4,5] developed a number of aggregation operators with intuitionistic fuzzy numbers (IFNs). Xu [6] defined the intuitionistic preference relations for multiple attribute group decision making (MAGDM). Li [7] proposed the Linear Programming Technique for Multidimensional Analysis of Preference (LINMAP) models for MADM. Xu [8] developed the Choquet integrals of weighted IFNs. Ye [9] gave some cosine similarity measures for intuitionistic fuzzy sets (IFSs). Li and Ren [10] considered the amount and reliability of IFNs for MADM. Wei [11] proposed some induced geometric aggregation operators with IFNs. Wei and Zhao [12] gave some induced correlated aggregating operators with IFNs. Wei [13,14] developed the gray relational analysis method for MADM with IFNs. Zhao and Wei [15] defined some Einstein hybrid aggregation operators with IFNs. Garg [16] proposed the generalized interactive geometric interaction operators using the Einstein T-norm and T-conorm with IFNs. Chu et al. [17] gave a MAGDM model that considered both the additive consistency and group consensus with IFNs. Wan et al. [18] researched a novel risk attitudinal ranking method for MADM with IFNs. Zhao et al. [19] proposed the VIKOR (VIseKriterijumska Optimizacija I KOmpromisno Resenje) method using IFSs. Liu [20] proposed MADM methods with normal intuitionistic fuzzy interaction operators. Shi [21] developed some constructive methods for intuitionistic fuzzy implication operators. Otay et al. [22] studied the multi-expert performance evaluation of healthcare institutions with intuitionistic fuzzy Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) methodology.
Ai and Xu [23] proposed the multiple definite integrals of intuitionistic fuzzy calculus and isomorphic mappings. Montes et al. [24] defined the entropy measures for IFNs based on divergence. Liu et al. [25] evaluated the commercial bank counterparty credit risk management with IFNs. Some similarity measures and information aggregating operators between intuitionistic fuzzy sets [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41] and their extension [42,43,44,45,46,47,48,49,50,51,52,53] have been proposed.
Dombi [54] proposed the operations of the Dombi T-norm and T-conorm. Following this, Liu et al. [55] proposed the Dombi operations with IFNs. Chen and Ye [56] proposed the Dombi weighted arithmetic average and geometric average operators to fuse single-valued neutrosophic numbers (SVNNs). Wei and Wei [57] gave some Dombi prioritized weighted aggregation operators with single-valued neutrosophic numbers.
From the existing studies, we can see that the combination of the Hamy mean (HM) operator [58,59] with Dombi operations has not yet been extended to IFNs. In order to develop Hamy mean operators and Dombi operations for IFNs, the main purposes of this study are (1) to develop some Dombi Hamy mean aggregation operators for IFNs and to investigate their properties, and (2) to propose two models to solve MAGDM problems based on these operators with IFNs.
To do so, the rest of this paper is organized as follows. In the next section, we introduce some basic concepts of IFSs, Dombi operations, and HM operators. In Section 3, we propose some intuitionistic fuzzy Hamy mean operators based on the Dombi T-norm and T-conorm. In Section 4, we apply these operators to solve MAGDM problems with IFNs. In Section 5, a practical example for the selection of a car supplier is given. In Section 6, we conclude the paper and give some remarks.

2. Preliminaries

In this section, we introduce the concepts of IFSs, the HM operator, and the Dombi T-conorm and T-norm.

2.1. Intuitionistic Fuzzy Sets

Definition 1.
Let X be a fixed set, with a generic element of X denoted by x. An intuitionistic fuzzy set (IFS) I in X is defined as follows [1,2]:
I = { ⟨x, μ_I(x), ν_I(x)⟩ | x ∈ X } (1)
where μ_I(x) is the membership function and ν_I(x) is the non-membership function. For each point x in X, we have μ_I(x), ν_I(x) ∈ [0, 1] and 0 ≤ μ_I(x) + ν_I(x) ≤ 1.
For each IFS I in X, let π_I(x) = 1 − μ_I(x) − ν_I(x), x ∈ X; we call π_I(x) the indeterminacy degree of the element x to the set I. It can easily be proved that 0 ≤ π_I(x) ≤ 1, x ∈ X. For convenience, we call k = (μ_k, ν_k) an IFN, where μ_k ∈ [0, 1], ν_k ∈ [0, 1], and 0 ≤ μ_k + ν_k ≤ 1.
Definition 2.
Let k_1 = (μ_1, ν_1) and k_2 = (μ_2, ν_2) be two IFNs; then the operational laws are defined as follows [4,5]:
1. k_1 ⊕ k_2 = (μ_1 + μ_2 − μ_1 μ_2, ν_1 ν_2)
2. k_1 ⊗ k_2 = (μ_1 μ_2, ν_1 + ν_2 − ν_1 ν_2)
3. λk_1 = (1 − (1 − μ_1)^λ, ν_1^λ), λ > 0
4. k_1^λ = (μ_1^λ, 1 − (1 − ν_1)^λ), λ > 0
Example 1.
Suppose that k_1 = (0.4, 0.5), k_2 = (0.6, 0.4), and λ = 4; then we have
1. k_1 ⊕ k_2 = (0.4 + 0.6 − 0.4 × 0.6, 0.5 × 0.4) = (0.7600, 0.2000)
2. k_1 ⊗ k_2 = (0.4 × 0.6, 0.5 + 0.4 − 0.5 × 0.4) = (0.2400, 0.7000)
3. λk_1 = (1 − (1 − 0.4)^4, 0.5^4) = (0.8704, 0.0625)
4. k_1^λ = (0.4^4, 1 − (1 − 0.5)^4) = (0.0256, 0.9375)
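The four operational laws above can be sketched directly in code. The helper names below are ours, not from the paper, and IFNs are represented as plain (μ, ν) tuples:

```python
# Sketch of Definition 2's operational laws for IFNs as (mu, nu) tuples.
# Function names are ours, not from the paper.

def ifn_add(k1, k2):
    """k1 (+) k2 = (mu1 + mu2 - mu1*mu2, nu1*nu2)."""
    (m1, n1), (m2, n2) = k1, k2
    return (m1 + m2 - m1 * m2, n1 * n2)

def ifn_mul(k1, k2):
    """k1 (x) k2 = (mu1*mu2, nu1 + nu2 - nu1*nu2)."""
    (m1, n1), (m2, n2) = k1, k2
    return (m1 * m2, n1 + n2 - n1 * n2)

def ifn_scale(lam, k):
    """lam * k = (1 - (1 - mu)**lam, nu**lam), lam > 0."""
    m, n = k
    return (1 - (1 - m) ** lam, n ** lam)

def ifn_power(k, lam):
    """k**lam = (mu**lam, 1 - (1 - nu)**lam), lam > 0."""
    m, n = k
    return (m ** lam, 1 - (1 - n) ** lam)

# Example 1: k1 = (0.4, 0.5), k2 = (0.6, 0.4), lam = 4
k1, k2 = (0.4, 0.5), (0.6, 0.4)
print(ifn_add(k1, k2))    # ≈ (0.7600, 0.2000)
print(ifn_mul(k1, k2))    # ≈ (0.2400, 0.7000)
print(ifn_scale(4, k1))   # ≈ (0.8704, 0.0625)
print(ifn_power(k1, 4))   # ≈ (0.0256, 0.9375)
```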
Definition 3.
Let k = (μ_k, ν_k) be an IFN; then a score function is [60]:
S(k) = μ_k − ν_k (2)
where S(k) ∈ [−1, 1]. From (2), we can give a comparison method for IFNs on the basis of the score function: the larger the difference μ_k − ν_k, i.e., the larger S(k), the greater the IFN k.
Example 2.
Let k_1 = (0.5, 0.4), k_2 = (0.6, 0.2) be two IFNs; we can get the scores of k_1 and k_2: S(k_1) = 0.5 − 0.4 = 0.1, S(k_2) = 0.6 − 0.2 = 0.4. Since S(k_2) > S(k_1), we get k_2 > k_1.
Definition 4.
Let k = (μ_k, ν_k) be an IFN; then an accuracy function H of k can be defined as follows [61]:
H(k) = μ_k + ν_k (3)
where H(k) ∈ [0, 1]. The larger the sum μ_k + ν_k, i.e., the larger H(k), the higher the degree of accuracy of the IFN k.
Xu and Yager [5] developed a comparison method for IFNs.
Definition 5.
Let k_1 = (μ_1, ν_1) and k_2 = (μ_2, ν_2) be two IFNs, let S(k_1) and S(k_2) be the score values of k_1 and k_2, and let H(k_1) and H(k_2) be the accuracy values of k_1 and k_2, respectively. Then,
(1) If S(k_1) > S(k_2), then k_1 > k_2;
(2) If S(k_1) = S(k_2), then:
(a) If H(k_1) > H(k_2), then k_1 > k_2;
(b) If H(k_1) = H(k_2), then k_1 = k_2.
Example 3.
Let k_1 = (0.6, 0.3), k_2 = (0.4, 0.1) be two IFNs; we can get the scores and accuracies of k_1 and k_2: S(k_1) = 0.6 − 0.3 = 0.3, S(k_2) = 0.4 − 0.1 = 0.3. Since S(k_1) = S(k_2), we cannot distinguish k_1 and k_2 by their scores, so we turn to the accuracy function: H(k_1) = 0.6 + 0.3 = 0.9, H(k_2) = 0.4 + 0.1 = 0.5. Since H(k_1) > H(k_2), we get k_1 > k_2.
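Definitions 3 to 5 amount to a two-stage lexicographic comparison: score first, accuracy as tie-breaker. A minimal sketch follows (helper names are ours; a small tolerance guards against floating-point ties):

```python
# Sketch of the comparison method of Definition 5; names are ours.

def score(k):
    mu, nu = k
    return mu - nu          # S(k) = mu - nu (Definition 3)

def accuracy(k):
    mu, nu = k
    return mu + nu          # H(k) = mu + nu (Definition 4)

def compare(k1, k2, eps=1e-9):
    """Return 1 if k1 > k2, -1 if k1 < k2, 0 if k1 = k2 (Definition 5).

    eps absorbs floating-point noise when two scores are meant to tie.
    """
    s1, s2 = score(k1), score(k2)
    if abs(s1 - s2) > eps:
        return 1 if s1 > s2 else -1
    h1, h2 = accuracy(k1), accuracy(k2)
    if abs(h1 - h2) > eps:
        return 1 if h1 > h2 else -1
    return 0

# Example 2: decided by score alone
print(compare((0.5, 0.4), (0.6, 0.2)))   # -1, i.e. k1 < k2
# Example 3: tied scores (0.3 each), decided by accuracy
print(compare((0.6, 0.3), (0.4, 0.1)))   # 1, i.e. k1 > k2
```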

2.2. HM Operator

Definition 6.
The HM operator is defined as follows [58]:
HM^(x)(k_1, k_2, …, k_n) = (1/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} ( ∏_{j=1}^{x} k_{i_j} )^{1/x}
where x is a parameter, x = 1, 2, …, n; i_1, i_2, …, i_x are x integer values taken from the set {1, 2, …, n} of n integer values; and C_n^x denotes the binomial coefficient, C_n^x = n!/(x!(n − x)!). The properties of the operator are shown as follows:
(i) When k_i = k (i = 1, 2, …, n), HM^(x)(k_1, k_2, …, k_n) = k;
(ii) When k_i ≤ π_i (i = 1, 2, …, n), HM^(x)(k_1, k_2, …, k_n) ≤ HM^(x)(π_1, π_2, …, π_n);
(iii) min{k_i} ≤ HM^(x)(k_1, k_2, …, k_n) ≤ max{k_i}.
Two particular cases of the HM operator are given as follows.
(i) When x = 1, HM^(1)(k_1, k_2, …, k_n) = (1/n) Σ_{i=1}^{n} k_i; it becomes the arithmetic mean operator.
(ii) When x = n, HM^(n)(k_1, k_2, …, k_n) = ( ∏_{i=1}^{n} k_i )^{1/n}; it becomes the geometric mean operator.
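For crisp numbers, the HM operator of Definition 6 can be evaluated directly over all x-element subsets; the sketch below (function name ours) illustrates the two special cases:

```python
# Numeric sketch of the HM operator of Definition 6 for crisp numbers;
# the function name is ours, not from the paper.
import math
from itertools import combinations

def hamy_mean(values, x):
    """HM^(x) = (1 / C(n, x)) * sum over x-subsets of (subset product)**(1/x)."""
    n = len(values)
    total = sum(math.prod(c) ** (1.0 / x) for c in combinations(values, x))
    return total / math.comb(n, x)

data = [1.0, 2.0, 3.0, 4.0]
print(hamy_mean(data, 1))          # arithmetic mean: 2.5
print(hamy_mean(data, len(data)))  # geometric mean: 24**(1/4) ≈ 2.2134
print(hamy_mean(data, 2))          # intermediate value between the two
```

The x = 1 and x = n cases reproduce the arithmetic and geometric means exactly, and intermediate x interpolates between them.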

2.3. Dombi T-Conorm and T-Norm

Dombi operations involve the Dombi product and Dombi sum, which are special cases of T-norms and T-conorms, respectively.
Definition 7.
Suppose M = { ⟨x, μ_M(x), ν_M(x)⟩ } and N = { ⟨x, μ_N(x), ν_N(x)⟩ } are any two IFSs; then the generalized intersection and generalized union are proposed as follows [54]:
M ∩_{T,T*} N = { ⟨x, T(μ_M(x), μ_N(x)), T*(ν_M(x), ν_N(x))⟩ | x ∈ X }
M ∪_{T,T*} N = { ⟨x, T*(μ_M(x), μ_N(x)), T(ν_M(x), ν_N(x))⟩ | x ∈ X }
where T denotes a T-norm and T* denotes a T-conorm.
Dombi proposed a generator to produce the Dombi T-norm and T-conorm, which are shown as follows:
T_{D,λ}(x, y) = 1 / ( 1 + ( ((1 − x)/x)^λ + ((1 − y)/y)^λ )^{1/λ} )
T*_{D,λ}(x, y) = 1 − 1 / ( 1 + ( (x/(1 − x))^λ + (y/(1 − y))^λ )^{1/λ} )
where λ > 0 and x, y ∈ [0, 1].
Based on the Dombi T-norm and T-conorm, we can give the operational rules of IFNs as follows. Suppose k_1 = (μ_1, ν_1) and k_2 = (μ_2, ν_2) are any two IFNs; then the operational laws of IFNs based on the Dombi T-norm and T-conorm can be defined as follows (λ > 0):
• k_1 ⊕ k_2 = ( 1 − 1/(1 + ( (μ_1/(1 − μ_1))^λ + (μ_2/(1 − μ_2))^λ )^{1/λ}), 1/(1 + ( ((1 − ν_1)/ν_1)^λ + ((1 − ν_2)/ν_2)^λ )^{1/λ}) )
• k_1 ⊗ k_2 = ( 1/(1 + ( ((1 − μ_1)/μ_1)^λ + ((1 − μ_2)/μ_2)^λ )^{1/λ}), 1 − 1/(1 + ( (ν_1/(1 − ν_1))^λ + (ν_2/(1 − ν_2))^λ )^{1/λ}) )
• n k_1 = ( 1 − 1/(1 + ( n (μ_1/(1 − μ_1))^λ )^{1/λ}), 1/(1 + ( n ((1 − ν_1)/ν_1)^λ )^{1/λ}) ) (n > 0)
• k_1^n = ( 1/(1 + ( n ((1 − μ_1)/μ_1)^λ )^{1/λ}), 1 − 1/(1 + ( n (ν_1/(1 − ν_1))^λ )^{1/λ}) ) (n > 0)
Example 4.
Suppose that k_1 = (0.6, 0.1), k_2 = (0.7, 0.3), λ = 2, and n = 3; then we have
(1) k_1 ⊕ k_2 = ( 1 − 1/(1 + ((0.6/(1 − 0.6))^2 + (0.7/(1 − 0.7))^2)^{1/2}), 1/(1 + (((1 − 0.1)/0.1)^2 + ((1 − 0.3)/0.3)^2)^{1/2}) ) = (0.7350, 0.0971)
(2) k_1 ⊗ k_2 = ( 1/(1 + (((1 − 0.6)/0.6)^2 + ((1 − 0.7)/0.7)^2)^{1/2}), 1 − 1/(1 + ((0.1/(1 − 0.1))^2 + (0.3/(1 − 0.3))^2)^{1/2}) ) = (0.5579, 0.3069)
(3) n k_1 = ( 1 − 1/(1 + (3 × (0.6/(1 − 0.6))^2)^{1/2}), 1/(1 + (3 × ((1 − 0.1)/0.1)^2)^{1/2}) ) = (0.7221, 0.0603)
(4) k_1^n = ( 1/(1 + (3 × ((1 − 0.6)/0.6)^2)^{1/2}), 1 − 1/(1 + (3 × (0.1/(1 − 0.1))^2)^{1/2}) ) = (0.4641, 0.1613)
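Under the assumption that μ and ν stay strictly inside (0, 1), so that the Dombi ratios are defined, the four Dombi operational laws can be sketched as follows; function names are ours:

```python
# Sketch of the Dombi operational laws for IFNs (lambda > 0); names are
# ours. Inputs must keep mu, nu strictly inside (0, 1).

def dombi_add(k1, k2, lam):
    (m1, n1), (m2, n2) = k1, k2
    u = (m1 / (1 - m1)) ** lam + (m2 / (1 - m2)) ** lam
    v = ((1 - n1) / n1) ** lam + ((1 - n2) / n2) ** lam
    return (1 - 1 / (1 + u ** (1 / lam)), 1 / (1 + v ** (1 / lam)))

def dombi_mul(k1, k2, lam):
    (m1, n1), (m2, n2) = k1, k2
    u = ((1 - m1) / m1) ** lam + ((1 - m2) / m2) ** lam
    v = (n1 / (1 - n1)) ** lam + (n2 / (1 - n2)) ** lam
    return (1 / (1 + u ** (1 / lam)), 1 - 1 / (1 + v ** (1 / lam)))

def dombi_scale(n, k, lam):
    m, nu = k
    u = n * (m / (1 - m)) ** lam
    v = n * ((1 - nu) / nu) ** lam
    return (1 - 1 / (1 + u ** (1 / lam)), 1 / (1 + v ** (1 / lam)))

def dombi_power(k, n, lam):
    m, nu = k
    u = n * ((1 - m) / m) ** lam
    v = n * (nu / (1 - nu)) ** lam
    return (1 / (1 + u ** (1 / lam)), 1 - 1 / (1 + v ** (1 / lam)))

# Example 4: k1 = (0.6, 0.1), k2 = (0.7, 0.3), lam = 2, n = 3
k1, k2 = (0.6, 0.1), (0.7, 0.3)
print(dombi_add(k1, k2, 2))    # ≈ (0.7350, 0.0971)
print(dombi_mul(k1, k2, 2))    # ≈ (0.5579, 0.3069)
print(dombi_scale(3, k1, 2))   # ≈ (0.7221, 0.0603)
print(dombi_power(k1, 3, 2))   # ≈ (0.4641, 0.1614)
```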

3. Intuitionistic Fuzzy Hamy Mean Operators Based on Dombi T-Norm and T-Conorm

In this section, we propose the intuitionistic fuzzy Dombi Hamy mean (IFDHM) operator and intuitionistic fuzzy weighted Dombi Hamy mean (IFWDHM) operator.

3.1. The IFDHM Operator

Definition 8.
Let k_i = (μ_i, ν_i) (i = 1, 2, …, n) be a collection of IFNs; then we can define the IFDHM operator as follows:
IFDHM^(x)(k_1, k_2, …, k_n) = (1/C_n^x) ( ⊕_{1≤i_1<⋯<i_x≤n} ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} )
where x is a parameter, x = 1, 2, …, n; i_1, i_2, …, i_x are x integer values taken from the set {1, 2, …, n} of n integer values; and C_n^x denotes the binomial coefficient, C_n^x = n!/(x!(n − x)!).
Theorem 1.
Let k_i = (μ_i, ν_i) (i = 1, 2, …, n) be a collection of IFNs; then the aggregated result of Definition 8 is still an IFN, and
IFDHM^(x)(k_1, k_2, …, k_n) = (1/C_n^x) ( ⊕_{1≤i_1<⋯<i_x≤n} ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} ) = ( 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) ) (10)
Proof. 
• First of all, we prove that (10) holds. According to the operational laws of IFNs, we have
⊗_{j=1}^{x} k_{i_j} = ( 1/(1 + (Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1 − 1/(1 + (Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
( ⊗_{j=1}^{x} k_{i_j} )^{1/x} = ( 1/(1 + ((1/x) Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1 − 1/(1 + ((1/x) Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
Moreover,
⊕_{1≤i_1<⋯<i_x≤n} ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} = ( 1 − 1/(1 + (Σ_{1≤i_1<⋯<i_x≤n} x/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1/(1 + (Σ_{1≤i_1<⋯<i_x≤n} x/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
Furthermore,
IFDHM^(x)(k_1, k_2, …, k_n) = (1/C_n^x) ( ⊕_{1≤i_1<⋯<i_x≤n} ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} ) = ( 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
• Next, we prove that (10) is an IFN. Let
a = 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), b = 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ})
Then we need to prove that the following two conditions are satisfied:
(i) 0 ≤ a ≤ 1, 0 ≤ b ≤ 1;
(ii) 0 ≤ a + b ≤ 1.
(i) Since μ_{i_j} ∈ [0, 1], we can get
1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ} ≥ 1, so 1/(1 + (·)^{1/λ}) ∈ [0, 1] and hence a = 1 − 1/(1 + (·)^{1/λ}) ∈ [0, 1].
Therefore, 0 ≤ a ≤ 1. Similarly, 0 ≤ b ≤ 1.
(ii) Since μ_{i_j} ≤ 1 − ν_{i_j}, we have (1 − μ_{i_j})/μ_{i_j} ≥ ν_{i_j}/(1 − ν_{i_j}), and therefore
a + b ≤ 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) + 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) = 1
We get 0 ≤ a + b ≤ 1, so the aggregated result of Definition 8 is still an IFN. Next, we will discuss some of the properties of the IFDHM operator. □
Property 1 (Idempotency).
If k_i (i = 1, 2, …, n) are IFNs with k_i = k = (μ, ν) for all i = 1, 2, …, n, then
IFDHM^(x)(k_1, k_2, …, k_n) = k
Proof. 
Since k_i = k = (μ, ν), based on Theorem 1, we have
IFDHM^(x)(k_1, k_2, …, k_n) = ( 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ)/μ)^λ)^{1/λ}), 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν/(1 − ν))^λ)^{1/λ}) )
= ( 1 − 1/(1 + ((x/C_n^x) · C_n^x · 1/(x ((1 − μ)/μ)^λ))^{1/λ}), 1/(1 + ((x/C_n^x) · C_n^x · 1/(x (ν/(1 − ν))^λ))^{1/λ}) )
= ( 1 − 1/(1 + μ/(1 − μ)), 1/(1 + (1 − ν)/ν) ) = (μ, ν) = k □
Property 2 (Monotonicity).
Let k_i = (μ_{i_j}, ν_{i_j}), π_i = (μ_{θ_j}, ν_{θ_j}) (i = 1, 2, …, n) be two sets of IFNs. If μ_{i_j} ≤ μ_{θ_j} and ν_{i_j} ≥ ν_{θ_j} for all j, then
IFDHM^(x)(k_1, k_2, …, k_n) ≤ IFDHM^(x)(π_1, π_2, …, π_n)
Proof. 
Since x ≥ 1, μ_{θ_j} ≥ μ_{i_j} ≥ 0, and ν_{i_j} ≥ ν_{θ_j} ≥ 0, then
Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ ≥ Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ ⇒ 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ ≤ 1/Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ
⇒ (x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ ≤ (x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ
⇒ 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}) ≤ 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ)^{1/λ})
Similarly,
Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ ≥ Σ_{j=1}^{x} (ν_{θ_j}/(1 − ν_{θ_j}))^λ
⇒ 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) ≥ 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{θ_j}/(1 − ν_{θ_j}))^λ)^{1/λ})
Let k = IFDHM^(x)(k_1, k_2, …, k_n), π = IFDHM^(x)(π_1, π_2, …, π_n), and let S(k), S(π) be the score values of k and π, respectively. Based on the score function (2) and the above inequalities, we have S(k) ≤ S(π), and then we discuss the following cases:
(1) If S(k) < S(π), then IFDHM^(x)(k_1, k_2, …, k_n) < IFDHM^(x)(π_1, π_2, …, π_n).
(2) If S(k) = S(π), then, combined with μ_{i_j} ≤ μ_{θ_j} and ν_{i_j} ≥ ν_{θ_j}, the two membership parts and the two non-membership parts must be equal term by term:
1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}) = 1 − 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ)^{1/λ})
and
1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) = 1/(1 + ((x/C_n^x) Σ_{1≤i_1<⋯<i_x≤n} 1/Σ_{j=1}^{x} (ν_{θ_j}/(1 − ν_{θ_j}))^λ)^{1/λ})
Therefore, H(k) = H(π), and thus IFDHM^(x)(k_1, k_2, …, k_n) = IFDHM^(x)(π_1, π_2, …, π_n).
 □
Property 3 (Boundedness).
Let k_i = (μ_i, ν_i) (i = 1, 2, …, n) be a set of IFNs, and let k^+ = (max_i μ_i, min_i ν_i) and k^− = (min_i μ_i, max_i ν_i); then
k^− ≤ IFDHM^(x)(k_1, k_2, …, k_n) ≤ k^+
Proof. 
Based on Properties 1 and 2, we have
IFDHM^(x)(k_1, k_2, …, k_n) ≥ IFDHM^(x)(k^−, k^−, …, k^−) = k^−, IFDHM^(x)(k_1, k_2, …, k_n) ≤ IFDHM^(x)(k^+, k^+, …, k^+) = k^+.
Then we have k^− ≤ IFDHM^(x)(k_1, k_2, …, k_n) ≤ k^+. □
Property 4 (Commutativity).
Let (π_1, π_2, …, π_n) be any permutation of (k_1, k_2, …, k_n); then
IFDHM^(x)(k_1, k_2, …, k_n) = IFDHM^(x)(π_1, π_2, …, π_n)
Proof. 
Because (π_1, π_2, …, π_n) is a permutation of (k_1, k_2, …, k_n), we have (1/C_n^x) ( ⊕_{1≤i_1<⋯<i_x≤n} ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} ) = (1/C_n^x) ( ⊕_{1≤i_1<⋯<i_x≤n} ( ⊗_{j=1}^{x} π_{i_j} )^{1/x} ); thus
IFDHM^(x)(k_1, k_2, …, k_n) = IFDHM^(x)(π_1, π_2, …, π_n). □
Example 5.
Let k_1 = (0.6, 0.3), k_2 = (0.5, 0.1), k_3 = (0.7, 0.2), k_4 = (0.8, 0.1) be four IFNs. Then we use the proposed IFDHM operator to aggregate the four IFNs (suppose x = 2, λ = 2). Here n = 4 and C_n^x = C_4^2 = 6, so x/C_n^x = 1/3, and by Theorem 1
μ = 1 − 1/(1 + ( (1/3) × ( 1/(((1 − 0.6)/0.6)^2 + ((1 − 0.5)/0.5)^2) + 1/(((1 − 0.6)/0.6)^2 + ((1 − 0.7)/0.7)^2) + 1/(((1 − 0.6)/0.6)^2 + ((1 − 0.8)/0.8)^2) + 1/(((1 − 0.5)/0.5)^2 + ((1 − 0.7)/0.7)^2) + 1/(((1 − 0.5)/0.5)^2 + ((1 − 0.8)/0.8)^2) + 1/(((1 − 0.7)/0.7)^2 + ((1 − 0.8)/0.8)^2) ) )^{1/2}) = 0.6473
ν = 1/(1 + ( (1/3) × ( 1/((0.3/(1 − 0.3))^2 + (0.1/(1 − 0.1))^2) + 1/((0.3/(1 − 0.3))^2 + (0.2/(1 − 0.2))^2) + 1/((0.3/(1 − 0.3))^2 + (0.1/(1 − 0.1))^2) + 1/((0.1/(1 − 0.1))^2 + (0.2/(1 − 0.2))^2) + 1/((0.1/(1 − 0.1))^2 + (0.1/(1 − 0.1))^2) + 1/((0.2/(1 − 0.2))^2 + (0.1/(1 − 0.1))^2) ) )^{1/2}) = 0.1610
At last, we get IFDHM^(2)(k_1, k_2, k_3, k_4) = (0.6473, 0.1610).
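Theorem 1's closed form is mechanical to evaluate over all x-element subsets with itertools.combinations. The sketch below (function name ours, values assumed strictly inside (0, 1)) evaluates the formula as stated and also checks the idempotency of Property 1:

```python
# Sketch of the IFDHM operator of Theorem 1; the function name is ours,
# and all mu, nu must stay strictly inside (0, 1).
import math
from itertools import combinations

def ifdhm(ifns, x, lam):
    n = len(ifns)
    coeff = x / math.comb(n, x)
    # sum over x-subsets of 1 / sum_j ((1-mu)/mu)**lam  (membership part)
    su = sum(1 / sum(((1 - m) / m) ** lam for m, _ in c)
             for c in combinations(ifns, x))
    # sum over x-subsets of 1 / sum_j (nu/(1-nu))**lam  (non-membership part)
    sv = sum(1 / sum((v / (1 - v)) ** lam for _, v in c)
             for c in combinations(ifns, x))
    mu = 1 - 1 / (1 + (coeff * su) ** (1 / lam))
    nu = 1 / (1 + (coeff * sv) ** (1 / lam))
    return (mu, nu)

# Example 5: four IFNs, x = 2, lam = 2
ks = [(0.6, 0.3), (0.5, 0.1), (0.7, 0.2), (0.8, 0.1)]
print(ifdhm(ks, 2, 2))                 # ≈ (0.6473, 0.1610)
# Idempotency check (Property 1): equal inputs aggregate to themselves
print(ifdhm([(0.6, 0.3)] * 4, 2, 2))   # ≈ (0.6, 0.3)
```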

3.2. The IFWDHM Operator

The weights of attributes play an important role in practical decision making, and they can influence the decision result. Therefore, it is necessary to consider attribute weights when aggregating information. It is obvious that the IFDHM operator fails to consider the problem of attribute weights. In order to overcome this defect, we propose the IFWDHM operator.
Definition 9.
Let k_i = (μ_i, ν_i) (i = 1, 2, …, n) be a group of IFNs and ω = (ω_1, ω_2, …, ω_n)^T be the weight vector of k_i (i = 1, 2, …, n), which satisfies ω_i ∈ [0, 1] and Σ_{i=1}^{n} ω_i = 1; then we can define the IFWDHM operator as follows:
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = (1/C_{n−1}^x) ⊕_{1≤i_1<⋯<i_x≤n} ( (1 − Σ_{j=1}^{x} ω_{i_j}) ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} ) for 1 ≤ x < n, and IFWDHM_ω^(x)(k_1, k_2, …, k_n) = ⊗_{i=1}^{n} k_i^{(1−ω_i)/(n−1)} for x = n.
Theorem 2.
Let k_i = (μ_i, ν_i) (i = 1, 2, …, n) be a group of IFNs whose weight vector satisfies ω_i ∈ [0, 1] and Σ_{i=1}^{n} ω_i = 1; then the result of Definition 9 is still an IFN, and
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = (1/C_{n−1}^x) ⊕_{1≤i_1<⋯<i_x≤n} ( (1 − Σ_{j=1}^{x} ω_{i_j}) ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} ) = ( 1 − 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) ) (1 ≤ x < n) (20)
or
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = ⊗_{i=1}^{n} k_i^{(1−ω_i)/(n−1)} = ( 1/(1 + (Σ_{i=1}^{n} ((1 − ω_i)/(n − 1)) ((1 − μ_i)/μ_i)^λ)^{1/λ}), 1 − 1/(1 + (Σ_{i=1}^{n} ((1 − ω_i)/(n − 1)) (ν_i/(1 − ν_i))^λ)^{1/λ}) ) (x = n) (21)
Proof. 
(1) First of all, we prove that (20) and (21) hold. For the first case, when 1 ≤ x < n, according to the operational laws of IFNs, we get
⊗_{j=1}^{x} k_{i_j} = ( 1/(1 + (Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1 − 1/(1 + (Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
( ⊗_{j=1}^{x} k_{i_j} )^{1/x} = ( 1/(1 + ((1/x) Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1 − 1/(1 + ((1/x) Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
Thereafter,
(1 − Σ_{j=1}^{x} ω_{i_j}) ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} = ( 1 − 1/(1 + ((1 − Σ_{j=1}^{x} ω_{i_j}) x/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1/(1 + ((1 − Σ_{j=1}^{x} ω_{i_j}) x/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
Moreover,
⊕_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j}) ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} = ( 1 − 1/(1 + (Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j}) x/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1/(1 + (Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j}) x/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
Therefore,
(1/C_{n−1}^x) ⊕_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j}) ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} = ( 1 − 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}), 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) )
For the second case, when x = n, we get
k_i^{(1−ω_i)/(n−1)} = ( 1/(1 + (((1 − ω_i)/(n − 1)) ((1 − μ_i)/μ_i)^λ)^{1/λ}), 1 − 1/(1 + (((1 − ω_i)/(n − 1)) (ν_i/(1 − ν_i))^λ)^{1/λ}) )
Then,
⊗_{i=1}^{n} k_i^{(1−ω_i)/(n−1)} = ( 1/(1 + (Σ_{i=1}^{n} ((1 − ω_i)/(n − 1)) ((1 − μ_i)/μ_i)^λ)^{1/λ}), 1 − 1/(1 + (Σ_{i=1}^{n} ((1 − ω_i)/(n − 1)) (ν_i/(1 − ν_i))^λ)^{1/λ}) )
(2) Next, we prove that (20) and (21) are IFNs. For the first case, when 1 ≤ x < n, let
a = 1 − 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ})
b = 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ})
Then we need to prove the following two conditions: (I) 0 ≤ a ≤ 1, 0 ≤ b ≤ 1; (II) 0 ≤ a + b ≤ 1.
(I) Since μ_{i_j} ∈ [0, 1] and 1 − Σ_{j=1}^{x} ω_{i_j} ≥ 0, we can get
1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ} ≥ 1
so 1/(1 + (·)^{1/λ}) ∈ [0, 1] and hence a ∈ [0, 1]. Therefore, 0 ≤ a ≤ 1; similarly, 0 ≤ b ≤ 1.
(II) Since μ_{i_j} ≤ 1 − ν_{i_j}, we have (1 − μ_{i_j})/μ_{i_j} ≥ ν_{i_j}/(1 − ν_{i_j}), and therefore
a + b ≤ 1 − 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) + 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) = 1
For the second case, when x = n, it can be proved in the same way. So the aggregation result produced by Definition 9 is still an IFN. Next, we shall deduce some desirable properties of the IFWDHM operator. □
Property 5 (Idempotency).
If k_i (i = 1, 2, …, n) are all equal, i.e., k_i = k = (μ, ν), and the weight vector satisfies ω_i ∈ [0, 1] and Σ_{i=1}^{n} ω_i = 1, then
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = k
Proof. 
Since k_i = k = (μ, ν), based on Theorem 2, we get:
(1) For the first case, when 1 ≤ x < n,
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = ( 1 − 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/(x ((1 − μ)/μ)^λ))^{1/λ}), 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/(x (ν/(1 − ν))^λ))^{1/λ}) )
Since Σ_{i=1}^{n} ω_i = 1, we have Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j}) = C_n^x − C_{n−1}^{x−1} = C_{n−1}^x, so
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = ( 1 − 1/(1 + ((μ/(1 − μ))^λ)^{1/λ}), 1/(1 + (((1 − ν)/ν)^λ)^{1/λ}) ) = ( 1 − 1/(1 + μ/(1 − μ)), 1/(1 + (1 − ν)/ν) ) = (μ, ν) = k
(2) For the second case, when x = n, since Σ_{i=1}^{n} (1 − ω_i)/(n − 1) = (n − 1)/(n − 1) = 1,
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = ( 1/(1 + (((1 − μ)/μ)^λ)^{1/λ}), 1 − 1/(1 + ((ν/(1 − ν))^λ)^{1/λ}) ) = ( 1/(1 + (1 − μ)/μ), 1 − 1/(1 + ν/(1 − ν)) ) = (μ, ν) = k
which proves the idempotency property of the IFWDHM operator. □
Property 6 (Monotonicity).
Let k_i = (μ_{i_j}, ν_{i_j}), π_i = (μ_{θ_j}, ν_{θ_j}) (i = 1, 2, …, n) be two sets of IFNs with μ_{i_j} ≤ μ_{θ_j} and ν_{i_j} ≥ ν_{θ_j} for all j, and let the weight vector satisfy ω_i ∈ [0, 1] and Σ_{i=1}^{n} ω_i = 1; then
IFWDHM_ω^(x)(k_1, k_2, …, k_n) ≤ IFWDHM_ω^(x)(π_1, π_2, …, π_n)
Proof. 
Since x ≥ 1, μ_{θ_j} ≥ μ_{i_j} ≥ 0, and ν_{i_j} ≥ ν_{θ_j} ≥ 0, we have
Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ ≥ Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ
⇒ (x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ ≤ (x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ
⇒ 1 − 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{i_j})/μ_{i_j})^λ)^{1/λ}) ≤ 1 − 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} ((1 − μ_{θ_j})/μ_{θ_j})^λ)^{1/λ})
Similarly, for the non-membership parts,
1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} (ν_{i_j}/(1 − ν_{i_j}))^λ)^{1/λ}) ≥ 1/(1 + ((x/C_{n−1}^x) Σ_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j})/Σ_{j=1}^{x} (ν_{θ_j}/(1 − ν_{θ_j}))^λ)^{1/λ})
Let k = IFWDHM_ω^(x)(k_1, k_2, …, k_n), π = IFWDHM_ω^(x)(π_1, π_2, …, π_n), and let S(k), S(π) be their score values. From the score function (2) and the above inequalities, S(k) ≤ S(π), and we discuss the following cases:
(1) If S(k) < S(π), then IFWDHM_ω^(x)(k_1, k_2, …, k_n) < IFWDHM_ω^(x)(π_1, π_2, …, π_n).
(2) If S(k) = S(π), then, combined with μ_{i_j} ≤ μ_{θ_j} and ν_{i_j} ≥ ν_{θ_j}, the membership parts and the non-membership parts must be equal term by term, so H(k) = H(π) and IFWDHM_ω^(x)(k_1, k_2, …, k_n) = IFWDHM_ω^(x)(π_1, π_2, …, π_n). When x = n, the proof is similar. □
Property 7 (Boundedness).
Let k_i = (μ_i, ν_i) (i = 1, 2, …, n) be a set of IFNs, let k^+ = (max_i μ_i, min_i ν_i) and k^− = (min_i μ_i, max_i ν_i), and let the weight vector satisfy ω_i ∈ [0, 1] and Σ_{i=1}^{n} ω_i = 1; then
k^− ≤ IFWDHM_ω^(x)(k_1, k_2, …, k_n) ≤ k^+
Proof. 
Based on Properties 5 and 6, we have
IFWDHM_ω^(x)(k_1, k_2, …, k_n) ≥ IFWDHM_ω^(x)(k^−, k^−, …, k^−) = k^−, IFWDHM_ω^(x)(k_1, k_2, …, k_n) ≤ IFWDHM_ω^(x)(k^+, k^+, …, k^+) = k^+
Then we have k^− ≤ IFWDHM_ω^(x)(k_1, k_2, …, k_n) ≤ k^+. □
Property 8 (Commutativity).
Let k_i = (μ_{i_j}, ν_{i_j}), π_i = (μ_{θ_j}, ν_{θ_j}) (i = 1, 2, …, n) be two sets of IFNs. Suppose (π_1, π_2, …, π_n) is any permutation of (k_1, k_2, …, k_n), and the weight vector satisfies ω_i ∈ [0, 1] and Σ_{i=1}^{n} ω_i = 1; then
IFWDHM_ω^(x)(k_1, k_2, …, k_n) = IFWDHM_ω^(x)(π_1, π_2, …, π_n)
Proof. 
Because (π_1, π_2, …, π_n) is a permutation of (k_1, k_2, …, k_n), we have
(1/C_{n−1}^x) ⊕_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j}) ( ⊗_{j=1}^{x} k_{i_j} )^{1/x} = (1/C_{n−1}^x) ⊕_{1≤i_1<⋯<i_x≤n} (1 − Σ_{j=1}^{x} ω_{i_j}) ( ⊗_{j=1}^{x} π_{i_j} )^{1/x} (1 ≤ x < n)
⊗_{i=1}^{n} k_i^{(1−ω_i)/(n−1)} = ⊗_{i=1}^{n} π_i^{(1−ω_i)/(n−1)} (x = n)
Thus, IFWDHM_ω^(x)(k_1, k_2, …, k_n) = IFWDHM_ω^(x)(π_1, π_2, …, π_n). □
Example 6.
Let k 1 = ( 0.8 , 0.2 ) , k 2 = ( 0.6 , 0.1 ) , k 3 = ( 0.7 , 0.3 ) , k 4 = ( 0.4 , 0.2 ) be four IFNs, the weighting vector of attributes is ω = { 0.2 , 0.3 , 0.4 , 0.1 } . Then we use the proposed IFWDHM operator to aggregate four IFNs (suppose x = 2 , λ = 2 ).
Let
IFWDHM ω ( x ) ( k 1 , k 2 , , k n ) = 1 i 1 < < i x n ( 1 j = 1 x ω i j ) ( j = 1 x k i j ) 1 x C n 1 x = ( 1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( 1 μ i j μ i j ) λ ) 1 λ , 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( ν i j 1 ν i j ) λ ) 1 λ ) = ( 1 1 1 + ( 2 C 4 1 2 1 i 1 < < i 2 4 ( 1 j = 1 2 ω i j ) 1 j = 1 2 ( 1 μ i j μ i j ) 2 ) 1 2 , 1 1 + ( 2 C 4 1 2 1 i 1 < < i 2 4 ( 1 j = 1 2 ω i j ) 1 j = 1 2 ( ν i j 1 ν i j ) 2 ) 1 2 )   = ( 1 1 1 + ( 2 3 ( 1 0.2 0.3 ( 1 0.8 0.8 ) 2 + ( 1 0.6 0.6 ) 2 + 1 0.2 0.4 ( 1 0.8 0.8 ) 2 + ( 1 0.7 0.7 ) 2 + 1 0.2 0.1 ( 1 0.8 0.8 ) 2 + ( 1 0.4 0.4 ) 2 + 1 0.3 0.4 ( 1 0.6 0.6 ) 2 + ( 1 0.7 0.7 ) 2 + 1 0.3 0.1 ( 1 0.6 0.6 ) 2 + ( 1 0.4 0.4 ) 2 + 1 0.4 0.1 ( 1 0.7 0.7 ) 2 + ( 1 0.4 0.4 ) 2 ) ) 1 2 , 1 1 1 + ( 2 3 ( 1 0.2 0.3 ( 0.2 1 0.2 ) 2 + ( 0.1 1 0.1 ) 2 + 1 0.2 0.4 ( 0.2 1 0.2 ) 2 + ( 0.3 1 0.3 ) 2 + 1 0.2 0.1 ( 0.2 1 0.2 ) 2 + ( 0.2 1 0.2 ) 2 + 1 0.3 0.4 ( 0.1 1 0.1 ) 2 + ( 0.3 1 0.3 ) 2 + 1 0.3 0.1 ( 0.1 1 0.1 ) 2 + ( 0.2 1 0.2 ) 2 + 1 0.4 0.1 ( 0.3 1 0.3 ) 2 + ( 0.2 1 0.2 ) 2 ) ) 1 2 ) = ( 0.5302 , 0.1952 )
Finally, we get IFWDHM ω ( 2 ) ( k 1 , k 2 , k 3 , k 4 ) = ( 0.5302 , 0.1952 ) .
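The IFWDHM aggregation used in Example 6 can be sketched in a few lines of Python. This is an illustrative implementation of the 1 ≤ x < n branch only; the function name, data layout, and tolerance checks are our own and are not the authors' Matlab code. By the idempotency property, aggregating n copies of the same IFN with weights summing to one must return that IFN, which the demo checks.

```python
from itertools import combinations
from math import comb

def ifwdhm(ifns, w, x, lam):
    """IFWDHM over a list of IFNs (mu, nu) with weights w, for 1 <= x < n."""
    n = len(ifns)
    acc_mu = acc_nu = 0.0
    for idx in combinations(range(n), x):
        coeff = 1.0 - sum(w[i] for i in idx)  # 1 - sum of weights in the subset
        acc_mu += coeff / sum(((1 - ifns[i][0]) / ifns[i][0]) ** lam for i in idx)
        acc_nu += coeff / sum((ifns[i][1] / (1 - ifns[i][1])) ** lam for i in idx)
    factor = x / comb(n - 1, x)  # the x / C_{n-1}^x normalization
    mu = 1 - 1 / (1 + (factor * acc_mu) ** (1 / lam))
    nu = 1 / (1 + (factor * acc_nu) ** (1 / lam))
    return mu, nu

# Idempotency check: four equal IFNs aggregate to themselves.
res = ifwdhm([(0.6, 0.3)] * 4, [0.1, 0.2, 0.3, 0.4], 2, 2)
assert abs(res[0] - 0.6) < 1e-9 and abs(res[1] - 0.3) < 1e-9
```

Running the function on the data of Example 6 also keeps the aggregated membership and non-membership inside the ranges spanned by the inputs, as boundedness requires.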

3.3. The IFDDHM Operator

Wu et al. [59] proposed the dual Hamy mean (DHM) operator.
Definition 10.
The DHM operator is defined as follows [59]:
$$\operatorname{DHM}^{(x)}(k_1,k_2,\ldots,k_n)=\left(\prod_{1\le i_1<\cdots<i_x\le n}\frac{\sum_{j=1}^{x}k_{i_j}}{x}\right)^{\frac{1}{C_n^{x}}}$$
where x is a parameter with x = 1 , 2 , ⋯ , n ; i 1 , i 2 , ⋯ , i x are x distinct integers taken from the set { 1 , 2 , ⋯ , n } of n integers; and C n x denotes the binomial coefficient, C n x = n ! / ( x ! ( n − x ) ! ) .
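Before moving to IFNs, Definition 10 can be checked on ordinary positive reals. The helper below is a hypothetical sketch (not from the paper); it makes the two limiting cases visible: for x = 1 the DHM reduces to the geometric mean, and for x = n to the arithmetic mean.

```python
from itertools import combinations
from math import prod

def dhm(values, x):
    """Dual Hamy mean of positive reals: the geometric mean, over all
    x-element subsets, of each subset's arithmetic mean."""
    subsets = list(combinations(values, x))
    return prod(sum(s) / x for s in subsets) ** (1 / len(subsets))

v = [1.0, 2.0, 3.0, 4.0]
assert abs(dhm(v, 1) - 24 ** 0.25) < 1e-12  # geometric mean of v
assert abs(dhm(v, 4) - 2.5) < 1e-12         # arithmetic mean of v
```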
In this section, we will propose the intuitionistic fuzzy Dombi dual Hamy mean (IFDDHM) operator.
Definition 11.
Let k i = ( μ i , ν i ) ( i = 1 , 2 , ⋯ , n ) be a collection of IFNs; then we can define the IFDDHM operator as follows:
$$\operatorname{IFDDHM}^{(x)}(k_1,k_2,\ldots,k_n)=\left(\bigotimes_{1\le i_1<\cdots<i_x\le n}\frac{\bigoplus_{j=1}^{x}k_{i_j}}{x}\right)^{\frac{1}{C_n^{x}}}$$
where x is a parameter with x = 1 , 2 , ⋯ , n ; i 1 , i 2 , ⋯ , i x are x distinct integers taken from the set { 1 , 2 , ⋯ , n } of n integers; and C n x denotes the binomial coefficient, C n x = n ! / ( x ! ( n − x ) ! ) .
Theorem 3.
Let k i = ( μ i , ν i ) ( i = 1 , 2 , ⋯ , n ) be a collection of IFNs; then the aggregated result of Definition 11 is still an IFN, and we have
$$\operatorname{IFDDHM}^{(x)}(k_1,k_2,\ldots,k_n)=\left(\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}\right)$$
Proof. 
(1) First, we prove that (42) holds. According to the operational laws of IFNs, we get
$$\bigoplus_{j=1}^{x}k_{i_j}=\left(1-\frac{1}{1+\left(\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}\right)^{1/\lambda}},\ \frac{1}{1+\left(\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}\right)^{1/\lambda}}\right)$$
$$\frac{\bigoplus_{j=1}^{x}k_{i_j}}{x}=\left(1-\frac{1}{1+\left(\frac{1}{x}\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}\right)^{1/\lambda}},\ \frac{1}{1+\left(\frac{1}{x}\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}\right)^{1/\lambda}}\right)$$
Moreover,
$$\bigotimes_{1\le i_1<\cdots<i_x\le n}\frac{\bigoplus_{j=1}^{x}k_{i_j}}{x}=\left(\frac{1}{1+\left(\sum_{1\le i_1<\cdots<i_x\le n}\frac{x}{\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\sum_{1\le i_1<\cdots<i_x\le n}\frac{x}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}\right)$$
Furthermore,
$$\operatorname{IFDDHM}^{(x)}(k_1,k_2,\ldots,k_n)=\left(\bigotimes_{1\le i_1<\cdots<i_x\le n}\frac{\bigoplus_{j=1}^{x}k_{i_j}}{x}\right)^{\frac{1}{C_n^{x}}}=\left(\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}\right)$$
(2) Next, we prove that (42) is an IFN.
Let
$$a=\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}},\qquad b=1-\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}$$
Then we need to prove that the following two conditions are satisfied:
(i) 0 a 1 , 0 b 1 ;
(ii) 0 a + b 1 .
(i) Since μ i j [ 0 , 1 ] , we can get
1 + ( x C n x 1 i 1 < < i x n x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ 1 1 1 + ( x C n x 1 i 1 < < i x n x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ [ 0 , 1 ]  
Therefore, 0 a 1 . Similarly, we can get 0 b 1 .
(ii) Since μ i j ≤ 1 − ν i j , and hence μ i j / ( 1 − μ i j ) ≤ ( 1 − ν i j ) / ν i j , we have
$$a+b\le\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}+1-\frac{1}{1+\left(\frac{x}{C_n^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}=1$$
We get 0 ≤ a + b ≤ 1 , so the aggregated result of Definition 11 is still an IFN. Next, we will discuss some properties of the IFDDHM operator. □
Property 9 (Idempotency).
If k i ( i = 1 , 2 , ⋯ , n ) are all equal, i.e., k i = k = ( μ , ν ) for all i = 1 , 2 , ⋯ , n , then we get
IFDDHM ( x ) ( k 1 , k 2 , k n ) = k  
Proof. 
Since k = ( μ , ν ) , based on Theorem 3, we have
$$\operatorname{IFDDHM}^{(x)}(k,k,\ldots,k)=\left(\frac{1}{1+\left(\frac{x}{C_n^{x}}\cdot\frac{C_n^{x}}{x\left(\frac{\mu}{1-\mu}\right)^{\lambda}}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\frac{x}{C_n^{x}}\cdot\frac{C_n^{x}}{x\left(\frac{1-\nu}{\nu}\right)^{\lambda}}\right)^{1/\lambda}}\right)=\left(\frac{1}{1+\frac{1-\mu}{\mu}},\ 1-\frac{1}{1+\frac{\nu}{1-\nu}}\right)=(\mu,\nu)=k$$
 □
Property 10 (Monotonicity).
Let k i = ( μ i j , ν i j ) , π i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , n ) be two sets of IFNs. If μ i j μ θ j , ν i j ν θ j , for all j , then
IFDDHM ( x ) ( k 1 , k 2 , , k n ) IFDDHM ( x ) ( π 1 , π 2 , , π n )  
Proof. 
Since x ≥ 1 and 0 ≤ μ i j ≤ μ θ j , 0 ≤ ν θ j ≤ ν i j , then
j = 1 x ( μ i j 1 μ i j ) λ j = 1 x ( μ θ j 1 μ θ j ) λ 1 / j = 1 x ( μ i j 1 μ i j ) λ 1 / j = 1 x ( μ θ j 1 μ θ j ) λ x C n x 1 i 1 < < i x n 1 / j = 1 x ( μ i j 1 μ i j ) λ x C n x 1 i 1 < < i x n 1 / j = 1 x ( μ θ j 1 μ θ j ) λ 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ
Similarly, we have
j = 1 x ( 1 ν i j ν i j ) λ j = 1 x ( 1 ν θ j ν θ j ) λ 1 / j = 1 x ( 1 ν i j ν i j ) λ 1 / j = 1 x ( 1 ν θ j ν θ j ) λ x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν i j ν i j ) λ x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν θ j ν θ j ) λ 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ 1 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ 1 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ 1 1 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ 1 1 1 + ( x C n x 1 i 1 < < i x n 1 / j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ
Let k = IFDDHM ( x ) ( k 1 , k 2 , ⋯ , k n ) , π = IFDDHM ( x ) ( π 1 , π 2 , ⋯ , π n ) , and let S ( k ) , S ( π ) be the score values of k and π , respectively. Based on the score value of an IFN in (2) and the above inequalities, we can infer that S ( k ) ≤ S ( π ) ; we then discuss the following cases:
(1) If S ( k ) < S ( π ) , then we get IFDDHM ( x ) ( k 1 , k 2 , ⋯ , k n ) < IFDDHM ( x ) ( π 1 , π 2 , ⋯ , π n ) .
(2) If S ( k ) = S ( π ) , then
1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ + 1 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ = 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ + 1 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ
Since 0 ≤ μ i j ≤ μ θ j and 0 ≤ ν θ j ≤ ν i j , we can deduce that
1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ = 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ  
and
1 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ = 1 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ  
Therefore, it follows that
H ( k ) = 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ + 1 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ = 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ + 1 1 1 + ( x C n x 1 i 1 < < i x n 1 j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ = H ( π )
Then IFDDHM ( x ) ( k 1 , k 2 , , k n ) = IFDDHM ( x ) ( π 1 , π 2 , , π n ) .  □
Property 11 (Boundedness).
Let k i = ( μ i , ν i ) ( i = 1 , 2 , ⋯ , n ) be a set of IFNs, and let k + = ( max i μ i , min i ν i ) and k − = ( min i μ i , max i ν i ) ; then
k − ≤ IFDDHM ( x ) ( k 1 , k 2 , ⋯ , k n ) ≤ k +
Proof. 
Based on Properties 9 and 10, we have
IFDDHM ( x ) ( k 1 , k 2 , ⋯ , k n ) ≥ IFDDHM ( x ) ( k − , k − , ⋯ , k − ) = k − , IFDDHM ( x ) ( k 1 , k 2 , ⋯ , k n ) ≤ IFDDHM ( x ) ( k + , k + , ⋯ , k + ) = k + .
Then we have k − ≤ IFDDHM ( x ) ( k 1 , k 2 , ⋯ , k n ) ≤ k + . □
Property 12 (Commutativity).
Let k i = ( μ i j , ν i j ) , π i = ( μ θ j , ν θ j ) ( i = 1 , 2 , , n ) be two sets of IFNs. Suppose ( π 1 , π 2 , , π n ) is any permutation of ( k 1 , k 2 , , k n ) , then
IFDDHM ( x ) ( k 1 , k 2 , , k n ) = IFDDHM ( x ) ( π 1 , π 2 , , π n )  
Proof. 
Because ( π 1 , π 2 , , π n ) is any permutation of ( k 1 , k 2 , , k n ) , then
( 1 i 1 < < i x n ( j = 1 x k i j x ) ) 1 C n x = ( 1 i 1 < < i x n ( j = 1 x π i j x ) ) 1 C n x  
thus
IFDDHM ( x ) ( k 1 , k 2 , , k n ) = IFDDHM ( x ) ( π 1 , π 2 , , π n )
 □
Example 7.
Let k 1 = ( 0.7 , 0.3 ) , k 2 = ( 0.4 , 0.1 ) , k 3 = ( 0.5 , 0.2 ) , k 4 = ( 0.6 , 0.2 ) be four IFNs. Then we use the proposed IFDDHM operator to aggregate four IFNs (suppose x = 2 , λ = 2 ).
Let
$$\operatorname{IFDDHM}^{(2)}(k_1,k_2,k_3,k_4)=\left(\frac{1}{1+\left(\frac{2}{C_4^{2}}\sum_{1\le i_1<i_2\le 4}\frac{1}{\sum_{j=1}^{2}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{2}}\right)^{1/2}},\ 1-\frac{1}{1+\left(\frac{2}{C_4^{2}}\sum_{1\le i_1<i_2\le 4}\frac{1}{\sum_{j=1}^{2}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{2}}\right)^{1/2}}\right)$$
$$\begin{aligned}&=\Bigg(\frac{1}{1+\Big(\frac{2}{6}\Big(\frac{1}{(\frac{0.7}{0.3})^{2}+(\frac{0.4}{0.6})^{2}}+\frac{1}{(\frac{0.7}{0.3})^{2}+(\frac{0.5}{0.5})^{2}}+\frac{1}{(\frac{0.7}{0.3})^{2}+(\frac{0.6}{0.4})^{2}}+\frac{1}{(\frac{0.4}{0.6})^{2}+(\frac{0.5}{0.5})^{2}}+\frac{1}{(\frac{0.4}{0.6})^{2}+(\frac{0.6}{0.4})^{2}}+\frac{1}{(\frac{0.5}{0.5})^{2}+(\frac{0.6}{0.4})^{2}}\Big)\Big)^{1/2}},\\&\qquad 1-\frac{1}{1+\Big(\frac{2}{6}\Big(\frac{1}{(\frac{0.7}{0.3})^{2}+(\frac{0.9}{0.1})^{2}}+\frac{1}{(\frac{0.7}{0.3})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1}{(\frac{0.7}{0.3})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1}{(\frac{0.9}{0.1})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1}{(\frac{0.9}{0.1})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1}{(\frac{0.8}{0.2})^{2}+(\frac{0.8}{0.2})^{2}}\Big)\Big)^{1/2}}\Bigg)\\&=(0.5617,0.1596)\end{aligned}$$
Finally, we get IFDDHM ( 2 ) ( k 1 , k 2 , k 3 , k 4 ) = ( 0.5617 , 0.1596 ) .
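The unweighted IFDDHM of Theorem 3 can likewise be sketched in Python. This is an illustrative implementation, not the authors' code; the demo checks the two properties proved above that are easy to verify numerically: idempotency (equal inputs aggregate to themselves) and monotonicity (raising one input membership raises the aggregated membership).

```python
from itertools import combinations
from math import comb

def ifddhm(ifns, x, lam):
    """IFDDHM over a list of IFNs (mu, nu), per the closed form of Theorem 3."""
    n = len(ifns)
    acc_mu = acc_nu = 0.0
    for idx in combinations(range(n), x):
        acc_mu += 1 / sum((ifns[i][0] / (1 - ifns[i][0])) ** lam for i in idx)
        acc_nu += 1 / sum(((1 - ifns[i][1]) / ifns[i][1]) ** lam for i in idx)
    factor = x / comb(n, x)  # the x / C_n^x normalization
    mu = 1 / (1 + (factor * acc_mu) ** (1 / lam))
    nu = 1 - 1 / (1 + (factor * acc_nu) ** (1 / lam))
    return mu, nu

# Idempotency (Property 9): equal inputs return the same IFN.
mu, nu = ifddhm([(0.7, 0.2)] * 4, 2, 2)
assert abs(mu - 0.7) < 1e-9 and abs(nu - 0.2) < 1e-9

# Monotonicity (Property 10): raising one membership raises the result.
lo = ifddhm([(0.7, 0.3), (0.4, 0.1), (0.5, 0.2), (0.6, 0.2)], 2, 2)
hi = ifddhm([(0.75, 0.3), (0.4, 0.1), (0.5, 0.2), (0.6, 0.2)], 2, 2)
assert hi[0] > lo[0]
```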

3.4. The IFWDDHM Operator

The weights of attributes play an important role in practical decision making, and they can influence the decision result. Therefore, it is necessary to consider attribute weights when aggregating information. It is obvious that the IFDDHM operator fails to consider attribute weights. In order to overcome this defect, we propose the IFWDDHM operator.
Definition 12.
Let k i = ( μ i , ν i ) ( i = 1 , 2 , ⋯ , n ) be a group of IFNs, and let ω = ( ω 1 , ω 2 , ⋯ , ω n ) T be the weight vector of k i ( i = 1 , 2 , ⋯ , n ) , which satisfies ω i ∈ [ 0 , 1 ] and ∑ i = 1 n ω i = 1 ; then we can define the IFWDDHM operator as follows:
$$\operatorname{IFWDDHM}_{\omega}^{(x)}(k_1,k_2,\ldots,k_n)=\begin{cases}\left(\bigotimes\limits_{1\le i_1<\cdots<i_x\le n}\left(\dfrac{\bigoplus_{j=1}^{x}k_{i_j}}{x}\right)^{1-\sum_{j=1}^{x}\omega_{i_j}}\right)^{\frac{1}{C_{n-1}^{x}}}, & 1\le x<n\\[2mm]\bigotimes\limits_{i=1}^{n}k_i^{\frac{1-\omega_i}{n-1}}, & x=n\end{cases}$$
Theorem 4.
Let k i = ( μ i , ν i ) ( i = 1 , 2 , ⋯ , n ) be a group of IFNs whose weight vector meets ω i ∈ [ 0 , 1 ] and ∑ i = 1 n ω i = 1 ; then the result from Definition 12 is still an IFN, and we have
$$\operatorname{IFWDDHM}_{\omega}^{(x)}(k_1,k_2,\ldots,k_n)=\left(\frac{1}{1+\left(\frac{x}{C_{n-1}^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1-\sum_{j=1}^{x}\omega_{i_j}}{\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\frac{x}{C_{n-1}^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1-\sum_{j=1}^{x}\omega_{i_j}}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}\right)\quad(1\le x<n)$$
or
$$\operatorname{IFWDDHM}_{\omega}^{(x)}(k_1,k_2,\ldots,k_n)=\bigotimes_{i=1}^{n}k_i^{\frac{1-\omega_i}{n-1}}=\left(1-\frac{1}{1+\left(\sum_{i=1}^{n}\frac{1-\omega_i}{n-1}\left(\frac{\mu_i}{1-\mu_i}\right)^{\lambda}\right)^{1/\lambda}},\ \frac{1}{1+\left(\sum_{i=1}^{n}\frac{1-\omega_i}{n-1}\left(\frac{1-\nu_i}{\nu_i}\right)^{\lambda}\right)^{1/\lambda}}\right)\quad(x=n)$$
Proof. 
j = 1 x k i j = ( 1 1 1 + ( j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ , 1 1 + ( j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ )  
( j = 1 x k i j ) 1 x = ( 1 1 1 + ( 1 x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ , 1 1 + ( 1 x j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ )  
Thereafter,
$$\left(\frac{\bigoplus_{j=1}^{x}k_{i_j}}{x}\right)^{1-\sum_{j=1}^{x}\omega_{i_j}}=\left(\frac{1}{1+\left(\frac{\left(1-\sum_{j=1}^{x}\omega_{i_j}\right)x}{\sum_{j=1}^{x}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\frac{\left(1-\sum_{j=1}^{x}\omega_{i_j}\right)x}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}\right)$$
Moreover,
1 i 1 < < i x n ( 1 j = 1 x ω i j ) ( j = 1 x k i j ) 1 x = ( 1 1 + ( 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ , 1 1 1 + ( 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ )  
Therefore,
1 C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) ( j = 1 x k i j ) 1 x = ( 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ , 1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ )
For the second case, when ( x = n ) , we get
k i 1 ω i n 1 = ( 1 1 + ( ( 1 ω i n 1 ) ( 1 μ i μ i ) λ ) 1 λ , 1 1 1 + ( ( 1 ω i n 1 ) ( ν i 1 ν i ) λ ) 1 λ )  
Then,
i = 1 x k i 1 ω i n 1 = ( 1 1 1 + ( i = 1 x ( 1 ω i n 1 ) ( μ i 1 μ i ) λ ) 1 λ , 1 1 + ( i = 1 x ( 1 ω i n 1 ) ( 1 ν i ν i ) λ ) 1 λ )  
Next, we prove that (52) and (53) are IFNs. For the first case, when 1 ≤ x < n ,
Let
a = 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ  
b = 1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ  
Then we need to prove the following two conditions.
(I) 0 a 1 , 0 b 1 ;
(II) 0 a + b 1 .
(I) Since μ i j ∈ [ 0 , 1 ] , we can get
1 + ( x C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ > 1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) x j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ [ 0 , 1 ]
Therefore, 0 a 1 . Similarly, we can get
1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 i = 1 x ω i ) 1 j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ [ 0 , 1 ]  
Therefore, 0 b 1 .
(II) Since μ i j ≤ 1 − ν i j , and hence μ i j / ( 1 − μ i j ) ≤ ( 1 − ν i j ) / ν i j , we can get the following inequality:
$$a+b\le\frac{1}{1+\left(\frac{x}{C_{n-1}^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1-\sum_{j=1}^{x}\omega_{i_j}}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}+1-\frac{1}{1+\left(\frac{x}{C_{n-1}^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1-\sum_{j=1}^{x}\omega_{i_j}}{\sum_{j=1}^{x}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{\lambda}}\right)^{1/\lambda}}=1$$
For the second case, when x = n , the claim can easily be proved in the same way. So the aggregation result produced by Definition 12 is still an IFN. Next, we shall deduce some desirable properties of the IFWDDHM operator. □
Property 13 (Idempotency).
If k i ( i = 1 , 2 , ⋯ , n ) are all equal, i.e., k i = k = ( μ , ν ) , and the weight vector meets ω i ∈ [ 0 , 1 ] and ∑ i = 1 n ω i = 1 , then
IFWDDHM ω ( x ) ( k 1 , k 2 , , k n ) = k  
Proof. 
Since k i = k = ( μ , ν ) , based on Theorem 4, we get
(1) For the first case, when 1 ≤ x < n :
$$\operatorname{IFWDDHM}_{\omega}^{(x)}(k,k,\ldots,k)=\left(\frac{1}{1+\left(\frac{x}{C_{n-1}^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1-\sum_{j=1}^{x}\omega_{i_j}}{x\left(\frac{\mu}{1-\mu}\right)^{\lambda}}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\frac{x}{C_{n-1}^{x}}\sum_{1\le i_1<\cdots<i_x\le n}\frac{1-\sum_{j=1}^{x}\omega_{i_j}}{x\left(\frac{1-\nu}{\nu}\right)^{\lambda}}\right)^{1/\lambda}}\right)$$
Since $\sum_{i=1}^{n}\omega_i=1$, we have $\sum_{1\le i_1<\cdots<i_x\le n}\left(1-\sum_{j=1}^{x}\omega_{i_j}\right)=C_n^{x}-C_{n-1}^{x-1}=C_{n-1}^{x}$, and therefore
$$\operatorname{IFWDDHM}_{\omega}^{(x)}(k,k,\ldots,k)=\left(\frac{1}{1+\left(\left(\frac{\mu}{1-\mu}\right)^{-\lambda}\right)^{1/\lambda}},\ 1-\frac{1}{1+\left(\left(\frac{1-\nu}{\nu}\right)^{-\lambda}\right)^{1/\lambda}}\right)=\left(\frac{1}{1+\frac{1-\mu}{\mu}},\ 1-\frac{1}{1+\frac{\nu}{1-\nu}}\right)=(\mu,\nu)=k$$
(2) For the second case, when x = n ,
IFWDDHM ω ( x ) ( k 1 , k 2 , , k n ) = ( 1 1 1 + ( i = 1 x ( 1 ω i n 1 ) ( μ i 1 μ i ) λ ) 1 λ , 1 1 + ( i = 1 x ( 1 ω i n 1 ) ( 1 ν i ν i ) λ ) 1 λ ) = ( 1 1 1 + ( n 1 n 1 ( μ 1 μ ) λ ) 1 λ , 1 1 + ( n 1 n 1 ( 1 ν ν ) λ ) 1 λ ) = ( μ , ν ) = k
which proves the idempotency property of the IFWDDHM operator. □
Property 14 (Monotonicity).
Let k i = ( μ i j , ν i j ) , π i = ( μ θ j , ν θ j ) ( i = 1 , 2 , ⋯ , n ) be two sets of IFNs. If μ i j ≤ μ θ j and ν i j ≥ ν θ j for all j , and the weight vector meets ω i ∈ [ 0 , 1 ] and ∑ i = 1 n ω i = 1 , then we have
IFWDDHM ω ( x ) ( k 1 , k 2 , , k n ) IFWDDHM ω ( x ) ( π 1 , π 2 , , π n )  
Proof. 
Since x ≥ 1 and 0 ≤ μ i j ≤ μ θ j , 0 ≤ ν θ j ≤ ν i j , then
j = 1 x ( μ i j 1 μ i j ) λ j = 1 x ( μ θ j 1 μ θ j ) λ 1 / j = 1 x ( μ i j 1 μ i j ) λ 1 / j = 1 x ( μ θ j 1 μ θ j ) λ ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ i j 1 μ i j ) λ ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ θ j 1 μ θ j ) λ x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ i j 1 μ i j ) λ x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ θ j 1 μ θ j ) λ 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ
Similarly, we have
j = 1 x ( 1 ν i j ν i j ) λ j = 1 x ( 1 ν θ j ν θ j ) λ 1 / j = 1 x ( 1 ν i j ν i j ) λ 1 / j = 1 x ( 1 ν θ j ν θ j ) λ ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν i j ν i j ) λ ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν θ j ν θ j ) λ ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν i j ν i j ) λ ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν θ j ν θ j ) λ x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν i j ν i j ) λ x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν θ j ν θ j ) λ 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 / j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ
Let k = IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) , π = IFWDDHM ω ( x ) ( π 1 , π 2 , ⋯ , π n ) , and let S ( k ) , S ( π ) be the score values of k and π , respectively. Based on the score value of an IFN in (2) and the above inequalities, we can infer that S ( k ) ≤ S ( π ) ; we then discuss the following cases:
(1) If S ( k ) < S ( π ) , then we can get
IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) < IFWDDHM ω ( x ) ( π 1 , π 2 , ⋯ , π n )
(2) If S ( k ) = S ( π ) , then
  1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ ( 1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ ) = 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ ( 1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ )  
Since 0 ≤ μ i j ≤ μ θ j and 0 ≤ ν θ j ≤ ν i j , and based on Equations (2) and (3), we can deduce
1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( μ i j 1 μ i j ) λ ) 1 λ = 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( μ θ j 1 μ θ j ) λ ) 1 λ  
and
  1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( 1 ν i j ν i j ) λ ) 1 λ = 1 1 1 + ( x C n 1 x 1 i 1 < < i x n ( 1 j = 1 x ω i j ) 1 j = 1 x ( 1 ν θ j ν θ j ) λ ) 1 λ  
Therefore, it follows that H ( k ) = H ( π ) and IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) = IFWDDHM ω ( x ) ( π 1 , π 2 , ⋯ , π n ) . When x = n , the result can be proved in a similar way. □
Property 15 (Boundedness).
Let k i = ( μ i , ν i ) ( i = 1 , 2 , ⋯ , n ) be a set of IFNs, let k + = ( max i μ i , min i ν i ) and k − = ( min i μ i , max i ν i ) , and let the weight vector meet ω i ∈ [ 0 , 1 ] and ∑ i = 1 n ω i = 1 ; then
k − ≤ IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) ≤ k +
Proof. 
Based on Properties 13 and 14, we have
IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) ≥ IFWDDHM ω ( x ) ( k − , k − , ⋯ , k − ) = k − , IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) ≤ IFWDDHM ω ( x ) ( k + , k + , ⋯ , k + ) = k + .
Then we have k − ≤ IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) ≤ k + . □
Property 16 (Commutativity).
Let k i = ( μ i j , ν i j ) , π i = ( μ θ j , ν θ j ) ( i = 1 , 2 , ⋯ , n ) be two sets of IFNs. Suppose ( π 1 , π 2 , ⋯ , π n ) is any permutation of ( k 1 , k 2 , ⋯ , k n ) , and the weight vector meets ω i ∈ [ 0 , 1 ] and ∑ i = 1 n ω i = 1 ; then we have
IFWDDHM ω ( x ) ( k 1 , k 2 , , k n ) = IFWDDHM ω ( x ) ( π 1 , π 2 , , π n )  
Proof. 
Because ( π 1 , π 2 , , π n ) is any permutation of ( k 1 , k 2 , , k n ) , then
$$\left(\bigotimes_{1\le i_1<\cdots<i_x\le n}\left(\frac{\bigoplus_{j=1}^{x}k_{i_j}}{x}\right)^{1-\sum_{j=1}^{x}\omega_{i_j}}\right)^{\frac{1}{C_{n-1}^{x}}}=\left(\bigotimes_{1\le i_1<\cdots<i_x\le n}\left(\frac{\bigoplus_{j=1}^{x}\pi_{i_j}}{x}\right)^{1-\sum_{j=1}^{x}\omega_{i_j}}\right)^{\frac{1}{C_{n-1}^{x}}}\quad(1\le x<n),\qquad\bigotimes_{i=1}^{n}k_i^{\frac{1-\omega_i}{n-1}}=\bigotimes_{i=1}^{n}\pi_i^{\frac{1-\omega_i}{n-1}}\quad(x=n)$$
Thus, IFWDDHM ω ( x ) ( k 1 , k 2 , ⋯ , k n ) = IFWDDHM ω ( x ) ( π 1 , π 2 , ⋯ , π n ) . □
Example 8.
Let k 1 = ( 0.6 , 0.1 ) , k 2 = ( 0.5 , 0.4 ) , k 3 = ( 0.8 , 0.3 ) , k 4 = ( 0.7 , 0.2 ) be four IFNs, and let the weighting vector of the attributes be ω = ( 0.2 , 0.3 , 0.4 , 0.1 ) . Then we use the IFWDDHM operator to fuse the four IFNs (suppose x = 2 , λ = 2 ).
Let
$$\operatorname{IFWDDHM}_{\omega}^{(2)}(k_1,k_2,k_3,k_4)=\left(\frac{1}{1+\left(\frac{2}{C_3^{2}}\sum_{1\le i_1<i_2\le 4}\frac{1-\omega_{i_1}-\omega_{i_2}}{\sum_{j=1}^{2}\left(\frac{\mu_{i_j}}{1-\mu_{i_j}}\right)^{2}}\right)^{1/2}},\ 1-\frac{1}{1+\left(\frac{2}{C_3^{2}}\sum_{1\le i_1<i_2\le 4}\frac{1-\omega_{i_1}-\omega_{i_2}}{\sum_{j=1}^{2}\left(\frac{1-\nu_{i_j}}{\nu_{i_j}}\right)^{2}}\right)^{1/2}}\right)$$
$$\begin{aligned}&=\Bigg(\frac{1}{1+\Big(\frac{2}{3}\Big(\frac{1-0.2-0.3}{(\frac{0.6}{0.4})^{2}+(\frac{0.5}{0.5})^{2}}+\frac{1-0.2-0.4}{(\frac{0.6}{0.4})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1-0.2-0.1}{(\frac{0.6}{0.4})^{2}+(\frac{0.7}{0.3})^{2}}+\frac{1-0.3-0.4}{(\frac{0.5}{0.5})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1-0.3-0.1}{(\frac{0.5}{0.5})^{2}+(\frac{0.7}{0.3})^{2}}+\frac{1-0.4-0.1}{(\frac{0.8}{0.2})^{2}+(\frac{0.7}{0.3})^{2}}\Big)\Big)^{1/2}},\\&\qquad 1-\frac{1}{1+\Big(\frac{2}{3}\Big(\frac{1-0.2-0.3}{(\frac{0.9}{0.1})^{2}+(\frac{0.6}{0.4})^{2}}+\frac{1-0.2-0.4}{(\frac{0.9}{0.1})^{2}+(\frac{0.7}{0.3})^{2}}+\frac{1-0.2-0.1}{(\frac{0.9}{0.1})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1-0.3-0.4}{(\frac{0.6}{0.4})^{2}+(\frac{0.7}{0.3})^{2}}+\frac{1-0.3-0.1}{(\frac{0.6}{0.4})^{2}+(\frac{0.8}{0.2})^{2}}+\frac{1-0.4-0.1}{(\frac{0.7}{0.3})^{2}+(\frac{0.8}{0.2})^{2}}\Big)\Big)^{1/2}}\Bigg)\\&=(0.6592,0.2153)\end{aligned}$$
Finally, we get IFWDDHM ω ( 2 ) ( k 1 , k 2 , k 3 , k 4 ) = ( 0.6592 , 0.2153 ) .
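Both branches of the IFWDDHM operator (Theorem 4) can be sketched together. This is an illustrative implementation with names of our own choosing, not the authors' code; the demo checks idempotency for the 1 ≤ x < n branch and the x = n branch, which Property 13 proves must hold.

```python
from itertools import combinations
from math import comb

def ifwddhm(ifns, w, x, lam):
    """IFWDDHM over IFNs (mu, nu) with weights w, covering both cases of Theorem 4."""
    n = len(ifns)
    if x < n:
        acc_mu = acc_nu = 0.0
        for idx in combinations(range(n), x):
            coeff = 1.0 - sum(w[i] for i in idx)
            acc_mu += coeff / sum((ifns[i][0] / (1 - ifns[i][0])) ** lam for i in idx)
            acc_nu += coeff / sum(((1 - ifns[i][1]) / ifns[i][1]) ** lam for i in idx)
        factor = x / comb(n - 1, x)  # the x / C_{n-1}^x normalization
        mu = 1 / (1 + (factor * acc_mu) ** (1 / lam))
        nu = 1 - 1 / (1 + (factor * acc_nu) ** (1 / lam))
    else:  # x == n
        s_mu = sum((1 - w[i]) / (n - 1) * (m / (1 - m)) ** lam
                   for i, (m, _) in enumerate(ifns))
        s_nu = sum((1 - w[i]) / (n - 1) * ((1 - v) / v) ** lam
                   for i, (_, v) in enumerate(ifns))
        mu = 1 - 1 / (1 + s_mu ** (1 / lam))
        nu = 1 / (1 + s_nu ** (1 / lam))
    return mu, nu

# Idempotency (Property 13) in both branches.
for x in (2, 4):
    res = ifwddhm([(0.8, 0.1)] * 4, [0.1, 0.2, 0.3, 0.4], x, 2)
    assert abs(res[0] - 0.8) < 1e-9 and abs(res[1] - 0.1) < 1e-9
```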

4. A MAGDM Approach Based on the Proposed Operators

In this section, we will apply the proposed IFWDHM (IFWDDHM) operator to cope with MAGDM problems with IFNs. Let X = { x 1 , x 2 , ⋯ , x m } be a set of alternatives and C = { c 1 , c 2 , ⋯ , c n } be a set of attributes, with attribute weighting vector ω = ( ω 1 , ω 2 , ⋯ , ω n ) , where ω j ∈ [ 0 , 1 ] ( j = 1 , 2 , ⋯ , n ) and ∑ j = 1 n ω j = 1 . There are z experts Y = { y 1 , y 2 , ⋯ , y z } who are invited to give the evaluation information, and their weighting vector is w = ( w 1 , w 2 , ⋯ , w z ) T with w t ∈ [ 0 , 1 ] ( t = 1 , 2 , ⋯ , z ) and ∑ t = 1 z w t = 1 . The expert y t evaluates each attribute c j of each alternative x i in the form of an IFN a i j t = ( μ i j t , ν i j t ) ( i = 1 , 2 , ⋯ , m ; j = 1 , 2 , ⋯ , n ) , and then the decision matrices A t = ( a ˜ i j t ) m × n = ( ( μ i j t , ν i j t ) ) m × n ( t = 1 , 2 , ⋯ , z ) are constructed. The ultimate goal is to give a ranking of all alternatives.
Then, we will give the steps for solving this problem.
Step 1: Calculate the collective evaluation value of each attribute for each alternative by
a ˜ i j = IFWDHM w ( x ) ( a ˜ i j 1 , a ˜ i j 2 , ⋯ , a ˜ i j z ) or a ˜ i j = IFWDDHM w ( x ) ( a ˜ i j 1 , a ˜ i j 2 , ⋯ , a ˜ i j z )
Step 2: Calculate the overall value of each alternative with the IFWDHM (IFWDDHM) operator:
a ˜ i = IFWDHM ω ( x ) ( a ˜ i 1 , a ˜ i 2 , ⋯ , a ˜ i n ) or a ˜ i = IFWDDHM ω ( x ) ( a ˜ i 1 , a ˜ i 2 , ⋯ , a ˜ i n )
Step 3: Calculate the score values S ( a ˜ i ) and, if needed to break ties, the accuracy values H ( a ˜ i ) .
Step 4: Sort all alternatives { x 1 , x 2 , ⋯ , x m } according to their scores and choose the best one.
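The four steps above can be sketched end to end. The data below are hypothetical (two alternatives, four attributes, three experts, made-up weights), and the embedded `ifwdhm` helper is our own illustrative implementation of the 1 ≤ x < n branch, not the authors' Matlab code. Because alternative "A" dominates "B" in every cell, monotonicity guarantees it must rank first.

```python
from itertools import combinations
from math import comb

def ifwdhm(ifns, w, x, lam=2):
    """Illustrative IFWDHM (1 <= x < n branch) over IFNs (mu, nu)."""
    n = len(ifns)
    acc_mu = acc_nu = 0.0
    for idx in combinations(range(n), x):
        c = 1.0 - sum(w[i] for i in idx)
        acc_mu += c / sum(((1 - ifns[i][0]) / ifns[i][0]) ** lam for i in idx)
        acc_nu += c / sum((ifns[i][1] / (1 - ifns[i][1])) ** lam for i in idx)
    f = x / comb(n - 1, x)
    return (1 - 1 / (1 + (f * acc_mu) ** (1 / lam)),
            1 / (1 + (f * acc_nu) ** (1 / lam)))

def score(a):
    return a[0] - a[1]  # S(a) = mu - nu

# Hypothetical data: 2 alternatives, 4 attributes, 3 experts.
w_attr = [0.1, 0.4, 0.3, 0.2]
w_exp = [0.5, 0.3, 0.2]
matrices = {"A": [[(0.7, 0.2)] * 4] * 3, "B": [[(0.5, 0.4)] * 4] * 3}

overall = {}
for alt, mats in matrices.items():
    # Step 1: fuse the three expert opinions attribute by attribute.
    fused = [ifwdhm([mats[t][j] for t in range(3)], w_exp, 2) for j in range(4)]
    # Step 2: fuse the attribute values into one overall IFN.
    overall[alt] = ifwdhm(fused, w_attr, 2)

# Steps 3-4: score each alternative and rank.
ranking = sorted(overall, key=lambda a: score(overall[a]), reverse=True)
assert ranking == ["A", "B"]
```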

5. An Illustrate Example

In this section, we give an example to illustrate the proposed method. A transportation company wants to select a car supplier, and there are four candidates M 1 , M 2 , M 3 , M 4 . We evaluate each supplier on four aspects E 1 , E 2 , E 3 , E 4 , which are “production price”, “production quality”, “production’s service performance”, and “risk factor”. The weight vector of the attributes is ω = ( 0.1 , 0.4 , 0.3 , 0.2 ) T . There are four experts, and the weight vector of the experts is ( 0.3 , 0.4 , 0.2 , 0.1 ) T . The decision matrices R t = ( a i j t ) 4 × 4 ( t = 1 , 2 , 3 , 4 ) are shown in Table 1, Table 2, Table 3 and Table 4, and our goal is to rank the four candidates and select the best one.

5.1. Decision-Making Processes

Step 1: Since the four attributes are all of the same type, we do not need to normalize the matrices R 1 ~ R 4 .
Step 2: Use the IFWDHM operator to fuse the four decision matrices R t = ( a i j t ) m × n into a collective matrix R = ( a i j ) m × n , which is shown in Table 5 (suppose x = 2 , λ = 2 ).
Likewise, use the IFWDDHM operator to aggregate the four decision matrices R t = ( a i j t ) m × n into a collective matrix, which is shown in Table 6 (suppose x = 2 , λ = 2 ).
Step 3: Use the IFWDHM (IFWDDHM) operator to aggregate the attribute values a i j ( j = 1 , 2 , 3 , 4 ) of each alternative and get the comprehensive evaluation values (suppose x = 2 , λ = 2 ).
By the IFWDHM operator: a 1 = ( 0.0694 , 0.4051 ) , a 2 = ( 0.5357 , 0.2264 ) , a 3 = ( 0.1464 , 0.3736 ) , a 4 = ( 0.0330 , 0.6366 ) . By the IFWDDHM operator: a 1 = ( 0.8010 , 0.0103 ) , a 2 = ( 0.9380 , 0.0011 ) , a 3 = ( 0.8584 , 0.0087 ) , a 4 = ( 0.6690 , 0.0290 ) .
Step 4: Obtain the score values.
By the IFWDHM operator: S ( a 1 ) = − 0.3357 , S ( a 2 ) = 0.3093 , S ( a 3 ) = − 0.2272 , S ( a 4 ) = − 0.6036 . By the IFWDDHM operator: S ( a 1 ) = 0.7907 , S ( a 2 ) = 0.9369 , S ( a 3 ) = 0.8497 , S ( a 4 ) = 0.6399 .
Step 5: Rank all alternatives: a 2 ≻ a 3 ≻ a 1 ≻ a 4 ; thus, the best choice is a 2 .
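To make Steps 4 and 5 concrete, the short snippet below scores the IFWDHM aggregates from Step 3 with the standard score function S(a) = μ − ν and reproduces the ranking a2 ≻ a3 ≻ a1 ≻ a4 (the dictionary layout is our own illustration).

```python
def score(a):
    """Score of an IFN a = (mu, nu): S(a) = mu - nu."""
    return a[0] - a[1]

aggregated = {
    "a1": (0.0694, 0.4051),
    "a2": (0.5357, 0.2264),
    "a3": (0.1464, 0.3736),
    "a4": (0.0330, 0.6366),
}
ranking = sorted(aggregated, key=lambda k: score(aggregated[k]), reverse=True)
print(ranking)  # ['a2', 'a3', 'a1', 'a4']
```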
Considering that different parameter values of the IFWDHM operator may affect the ranking results, we calculated the scores produced by different values of x ; the results are listed in Table 7. Similarly, considering that different parameter values of the IFWDDHM operator may affect the ordering results, we calculated the scores with different x ; the results are listed in Table 8.
From Table 7 and Table 8, we can draw the following conclusions.
When x = 1 , the sorting of the alternatives is a 2 ≻ a 3 ≻ a 1 ≻ a 4 , and the best choice is a 2 .
When x = 2 , 3 , 4 , the sorting of the alternatives is also a 2 ≻ a 3 ≻ a 1 ≻ a 4 , and the best choice is again a 2 .
Although the best selection is the same in every case, the role of the parameter differs: when x = 1 , the interrelationship between the attributes is not considered, while when x = 2 , 3 , 4 , the interrelationship among different numbers of attributes is captured. The results are therefore reasonable under both conditions.

5.2. Comparative Analysis

Following this, we compare the proposed method with the IFWA operator [4], IFWG operator [5], IFWMM operator [62], and IFDWMM operator [62]; the comparative results are depicted in Table 9.
From the above analysis, all methods arrive at the same best alternative. However, the existing operators, such as the IFWA and IFWG operators, do not consider the relationships between the arguments and thus cannot eliminate the influence of unfair arguments on the decision result. The IFWMM, IFDWMM, IFWDHM, and IFWDDHM operators, by contrast, do consider the relationships among the arguments.

6. Conclusions

In this paper, we investigated MADM problems with IFNs. We utilized the HM operator, the DHM operator, and the Dombi operations to develop some novel operators with IFNs: the intuitionistic fuzzy Dombi Hamy mean (IFDHM) operator, the intuitionistic fuzzy weighted Dombi Hamy mean (IFWDHM) operator, the intuitionistic fuzzy Dombi dual Hamy mean (IFDDHM) operator, and the intuitionistic fuzzy weighted Dombi dual Hamy mean (IFWDDHM) operator. The prominent characteristics of these proposed operators were studied. Moreover, we utilized these operators to develop some models to solve MAGDM problems with IFNs. Finally, a practical example for the selection of a car supplier was given. In the future, the application of IFNs needs to be explored in decision-making processes [63,64,65,66,67,68,69,70,71,72], risk analysis [73,74], and other fuzzy environments [75,76,77,78,79,80].

Author Contributions

Z.L., H.G., and G.W. conceived of this work and worked on it together; Z.L. compiled the computing program in Matlab and analyzed the data; Z.L. and G.W. wrote the paper. All the authors have read and approved the final manuscript.

Funding

The work was supported by the National Natural Science Foundation of China under Grant No. 71571128 and the Humanities and Social Sciences Foundation of Ministry of Education of the People’s Republic of China (17XJA630003) and the Construction Plan of Scientific Research Innovation Team for Colleges and Universities in Sichuan Province (15TD0004).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Atanassov, K. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  2. Atanassov, K. More on intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 33, 37–46. [Google Scholar] [CrossRef]
  3. Burillo, P.; Bustince, H. Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets. Fuzzy Sets Syst. 2001, 118, 305–316. [Google Scholar] [CrossRef]
  4. Xu, Z.S. Intuitionistic fuzzy aggregation operators. IEEE Trans. Fuzzy Syst. 2007, 15, 1179–1187. [Google Scholar]
  5. Xu, Z.S.; Yager, R.R. Some geometric aggregation operators based on intuitionistic fuzzy sets. Int. J. Gener. Syst. 2006, 35, 417–433. [Google Scholar] [CrossRef]
  6. Xu, Z.S. Intuitionistic preference relations and their application in group decision making. Inform. Sci. 2007, 177, 2363–2379. [Google Scholar] [CrossRef]
  7. Li, D.F. Extension of the LINMAP for multiattribute decision making under Atanassov’s intuitionistic fuzzy environment. Fuzzy Optim. Decis. Mak. 2008, 7, 17–34. [Google Scholar] [CrossRef]
  8. Xu, Z.S. Choquet integrals of weighted intuitionistic fuzzy information. Inform. Sci. 2010, 180, 726–736. [Google Scholar] [CrossRef]
  9. Ye, J. Cosine similarity measures for intuitionistic fuzzy sets and their applications. Math. Comput. Model. 2011, 53, 91–97. [Google Scholar] [CrossRef]
  10. Li, D.-F.; Ren, H.-P. Multi-attribute decision making method considering the amount and reliability of intuitionistic fuzzy information. J. Intell. Fuzzy Syst. 2015, 28, 1877–1883. [Google Scholar]
  11. Wei, G.W. Some induced geometric aggregation operators with intuitionistic fuzzy information and their application to group decision making. Appl. Soft Comput. 2010, 10, 423–431. [Google Scholar] [CrossRef]
  12. Wei, G.W.; Zhao, X.F. Some induced correlated aggregating operators with intuitionistic fuzzy information and their application to multiple attribute group decision making. Expert Syst. Appl. 2012, 39, 2026–2034. [Google Scholar] [CrossRef]
  13. Wei, G. Gray relational analysis method for intuitionistic fuzzy multiple attribute decision making. Expert Syst. Appl. 2011, 38, 11671–11677. [Google Scholar] [CrossRef]
  14. Wei, G.W. GRA method for multiple attribute decision making with incomplete weight information in intuitionistic fuzzy setting. Knowl.-Based Syst. 2010, 23, 243–247. [Google Scholar] [CrossRef]
  15. Zhao, X.; Wei, G. Some Intuitionistic Fuzzy Einstein Hybrid Aggregation Operators and Their Application to Multiple Attribute Decision Making. Knowl.-Based Syst. 2013, 37, 472–479. [Google Scholar] [CrossRef]
  16. Garg, H. Generalized intuitionistic fuzzy interactive geometric interaction operators using Einstein t-norm and t-conorm and their application to decision making. Comput. Ind. Eng. 2016, 101, 53–69. [Google Scholar] [CrossRef]
  17. Chu, J.; Liu, X.; Wang, Y.-M.; Chin, K.-S. A group decision making model considering both the additive consistency and group consensus of intuitionistic fuzzy preference relations. Comput. Ind. Eng. 2016, 101, 227–242. [Google Scholar] [CrossRef]
  18. Wan, S.-P.; Wang, F.; Dong, J.-Y. A novel risk attitudinal ranking method for intuitionistic fuzzy values and application to MADM. Appl. Soft Comput. 2016, 40, 98–112. [Google Scholar] [CrossRef]
  19. Zhao, J.; You, X.-Y.; Liu, H.-C.; Wu, S.-M. An Extended VIKOR Method Using Intuitionistic Fuzzy Sets and Combination Weights for Supplier Selection. Symmetry 2017, 9, 169. [Google Scholar] [CrossRef]
  20. Liu, P. Multiple Attribute Decision-Making Methods Based on Normal Intuitionistic Fuzzy Interaction Aggregation Operators. Symmetry 2017, 9, 261. [Google Scholar] [CrossRef]
  21. Shi, Y.; Yuan, X.; Zhang, Y. Constructive methods for intuitionistic fuzzy implication operators. Soft Comput. 2017, 21, 5245–5264. [Google Scholar] [CrossRef]
  22. Otay, I.; Öztaysi, B.; Onar, S.Ç.; Kahraman, C. Multi-expert performance evaluation of healthcare institutions using an integrated intuitionistic fuzzy AHP&DEA methodology. Knowl.-Based Syst. 2017, 133, 90–106. [Google Scholar]
  23. Ai, Z.; Xu, Z. Multiple Definite Integrals of Intuitionistic Fuzzy Calculus and Isomorphic Mappings. IEEE Trans. Fuzzy Syst. 2018, 26, 670–680. [Google Scholar] [CrossRef]
  24. Montes, I.; Pal, N.R.; Montes, S. Entropy measures for Atanassov intuitionistic fuzzy sets based on divergence. Soft Comput. 2018, 22, 5051–5071. [Google Scholar] [CrossRef]
  25. Liu, Q.; Wu, C.; Lou, L. Evaluation research on commercial bank counterparty credit risk management based on new intuitionistic fuzzy method. Soft Comput. 2018, 22, 5363–5375. [Google Scholar] [CrossRef]
  26. Li, Y.H.; Olson, D.L.; Zheng, Q. Similarity measures between intuitionistic fuzzy (vague) sets: A comparative analysis. Pattern Recognit. Lett. 2007, 28, 278–285. [Google Scholar] [CrossRef]
  27. Szmidt, E.; Kacprzyk, J. Distances between intuitionistic fuzzy sets. Fuzzy Sets Syst. 2000, 114, 505–518. [Google Scholar] [CrossRef]
  28. Szmidt, E.; Kacprzyk, J. A new concept of a similarity measure for intuitionistic fuzzy sets and its use in group decision making. In Lecture Notes in Computer Science, (Subseries LNAI); Springer: Cham, Switzerland, 2005; Volume 3558, pp. 272–282. [Google Scholar]
  29. Hung, W.L.; Yang, M.S. Similarity measures of intuitionistic fuzzy sets based on Hausdorff distance. Pattern Recognit. Lett. 2004, 25, 1603–1611. [Google Scholar] [CrossRef]
  30. Ziemba, P. NEAT F-PROMETHEE—A new fuzzy multiple criteria decision making method based on the adjustment of mapping trapezoidal fuzzy numbers. Expert Syst. Appl. 2018, 110, 363–380. [Google Scholar] [CrossRef]
  31. Hung, W.L.; Yang, M.S. Similarity measures of intuitionistic fuzzy sets based on Lp metric. Int. J. Approx. Reason. 2007, 46, 120–136. [Google Scholar] [CrossRef]
  32. Xu, Z.S.; Xia, M.M. Some new similarity measures for intuitionistic fuzzy values and their application in group decision making. J. Syst. Sci. Eng. 2010, 19, 430–452. [Google Scholar]
  33. Li, Z.; Wei, G.; Gao, H. Methods for Multiple Attribute Decision Making with Interval-Valued Pythagorean Fuzzy Information. Mathematics 2018, 6, 228. [Google Scholar] [CrossRef]
  34. Rajarajeswari, P.; Uma, N. Intuitionistic fuzzy multi similarity measure based on cotangent function. Int. J. Eng. Res. Technol. 2013, 2, 1323–1329. [Google Scholar]
  35. Hwang, C.-M.; Yang, M.-S.; Hung, W.-L. New similarity measures of intuitionistic fuzzy sets based on the Jaccard index with its application to clustering. Int. J. Intell. Syst. 2018, 33, 1672–1688. [Google Scholar] [CrossRef]
  36. Ye, J. Similarity measures of intuitionistic fuzzy sets based on cosine function for the decision making of mechanical design schemes. J. Intell. Fuzzy Syst. 2016, 30, 151–158. [Google Scholar] [CrossRef]
  37. Ye, J. Multicriteria group decision-making method using vector similarity measures for trapezoidal intuitionistic fuzzy numbers. Group Decis. Negotiat. 2012, 21, 519–530. [Google Scholar] [CrossRef]
  38. Wei, G.W. Some similarity measures for picture fuzzy sets and their applications. Iran. J. Fuzzy Syst. 2018, 15, 77–89. [Google Scholar]
  39. Wei, G.W.; Gao, H. The generalized Dice similarity measures for picture fuzzy sets and their applications. Informatica 2018, 29, 1–18. [Google Scholar] [CrossRef]
  40. Wei, G.W.; Wei, Y. Similarity measures of Pythagorean fuzzy sets based on cosine function and their applications. Int. J. Intell. Syst. 2018, 33, 634–652. [Google Scholar] [CrossRef]
  41. Wei, G.W. Some cosine similarity measures for picture fuzzy sets and their applications to strategic decision making. Informatica 2017, 28, 547–564. [Google Scholar] [CrossRef]
  42. Wang, C.-Y.; Chen, S.-M. A new multiple attribute decision making method based on linear programming methodology and novel score function and novel accuracy function of interval-valued intuitionistic fuzzy values. Inf. Sci. 2018, 438, 145–155. [Google Scholar] [CrossRef]
  43. Merigo, J.M.; Gil-Lafuente, A.M. Fuzzy induced generalized aggregation operators and its application in multi-person decision making. Expert Syst. Appl. 2011, 38, 9761–9772. [Google Scholar] [CrossRef]
  44. Wei, G.W. Picture fuzzy Hamacher aggregation operators and their application to multiple attribute decision making. Fundam. Inform. 2018, 157, 271–320. [Google Scholar] [CrossRef]
  45. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Picture 2-tuple linguistic aggregation operators in multiple attribute decision making. Soft Comput. 2018, 22, 989–1002. [Google Scholar] [CrossRef]
  46. Gao, H.; Lu, M.; Wei, G.W.; Wei, Y. Some novel Pythagorean fuzzy interaction aggregation operators in multiple attribute decision making. Fundam. Inform. 2018, 159, 385–428. [Google Scholar] [CrossRef]
  47. Wei, G.W.; Gao, H.; Wei, Y. Some q-Rung Orthopair Fuzzy Heronian Mean Operators in Multiple Attribute Decision Making. Int. J. Intell. Syst. 2018, 33, 1426–1458. [Google Scholar] [CrossRef]
  48. Mohagheghi, V.; Mousavi, S.M.; Vahdani, B. Enhancing decision-making flexibility by introducing a new last aggregation evaluating approach based on multi-criteria group decision making and Pythagorean fuzzy sets. Appl. Soft Comput. 2017, 61, 527–535. [Google Scholar] [CrossRef]
  49. Wei, G.W. Picture uncertain linguistic Bonferroni mean operators and their application to multiple attribute decision making. Kybernetes 2017, 46, 1777–1800. [Google Scholar] [CrossRef]
  50. Zhang, L.; Zhan, J.; Alcantud, J.C.R. Novel classes of fuzzy soft β-coverings-based fuzzy rough sets with applications to multi-criteria fuzzy group decision making. Soft Comput. 2018. [Google Scholar] [CrossRef]
  51. Wei, G.W.; Gao, H.; Wang, J.; Huang, Y.H. Research on Risk Evaluation of Enterprise Human Capital Investment with Interval-valued bipolar 2-tuple linguistic Information. IEEE Access 2018, 6, 35697–35712. [Google Scholar] [CrossRef]
  52. Ziemba, P.; Jankowski, J.; Watróbski, J. Online Comparison System with Certain and Uncertain Criteria Based on Multi-criteria Decision Analysis Method. In Proceedings of the Computational Collective Intelligence ICCCI 2017, Nicosia, Cyprus, 27 September 2017; pp. 579–589. [Google Scholar]
  53. Wang, J.; Wei, G.; Lu, M. TODIM Method for Multiple Attribute Group Decision Making under 2-Tuple Linguistic Neutrosophic Environment. Symmetry 2018, 10, 486. [Google Scholar] [CrossRef]
  54. Dombi, J. A general class of fuzzy operators, the demorgan class of fuzzy operators and fuzziness measures induced by fuzzy operators. Fuzzy Sets Syst. 1982, 8, 149–163. [Google Scholar] [CrossRef]
  55. Liu, P.D.; Liu, J.L.; Chen, S.M. Some intuitionistic fuzzy Dombi Bonferroni mean operators and their application to multi-attribute group decision making. J. Oper. Res. Soc. 2018, 69. [Google Scholar] [CrossRef]
  56. Chen, J.Q.; Ye, J. Some Single-Valued Neutrosophic Dombi Weighted Aggregation Operators for Multiple Attribute Decision-Making. Symmetry 2017, 9, 82. [Google Scholar] [CrossRef]
  57. Wei, G.; Wei, Y. Some single-valued neutrosophic dombi prioritized weighted aggregation operators in multiple attribute decision making. J. Intell. Fuzzy Syst. 2018, 35, 2001–2013. [Google Scholar] [CrossRef]
  58. Hara, T.; Uchiyama, M.; Takahasi, S.E. A refinement of various mean inequalities. J. Inequal. Appl. 1998, 2, 387–395. [Google Scholar] [CrossRef]
  59. Wu, S.; Wang, J.; Wei, G.; Wei, Y. Research on Construction Engineering Project Risk Assessment with Some 2-Tuple Linguistic Neutrosophic Hamy Mean Operators. Sustainability 2018, 10, 1536. [Google Scholar] [CrossRef]
  60. Chen, S.M.; Tan, J.M. Handling multicriteria fuzzy decision-making problems based on vague set theory. Fuzzy Sets Syst. 1994, 67, 163–172. [Google Scholar] [CrossRef]
  61. Hong, D.H.; Choi, C.H. Multicriteria fuzzy problems based on vague set theory. Fuzzy Sets Syst. 2000, 114, 103–113. [Google Scholar] [CrossRef]
  62. Liu, P.D.; Li, D.F. Some Muirhead Mean Operators for Intuitionistic Fuzzy Numbers and Their Applications to Group Decision Making. PLoS ONE 2017, 12, e0168767. [Google Scholar] [CrossRef] [PubMed]
  63. Gao, H.; Wei, G.W.; Huang, Y.H. Dual hesitant bipolar fuzzy Hamacher prioritized aggregation operators in multiple attribute decision making. IEEE Access 2018, 6, 11508–11522. [Google Scholar] [CrossRef]
  64. Merigó, J.M.; Gil-Lafuente, A.M. Induced 2-tuple linguistic generalized aggregation operators and their application in decision-making. Inf. Sci. 2013, 236, 1–16. [Google Scholar] [CrossRef]
  65. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Projection models for multiple attribute decision making with picture fuzzy information. Int. J. Mach. Learn. Cybern. 2018, 9, 713–719. [Google Scholar] [CrossRef]
  66. Chen, T.-Y. The inclusion-based TOPSIS method with interval-valued intuitionistic fuzzy sets for multiple criteria group decision making. Appl. Soft Comput. 2015, 26, 57–73. [Google Scholar] [CrossRef]
  67. Li, Z.; Wei, G.; Lu, M. Pythagorean Fuzzy Hamy Mean Operators in Multiple Attribute Group Decision Making and Their Application to Supplier Selection. Symmetry 2018, 10, 505. [Google Scholar] [CrossRef]
  68. Al-Quran, A.; Hassan, N. The Complex Neutrosophic Soft Expert Relation and Its Multiple Attribute Decision-Making Method. Entropy 2018, 20, 101. [Google Scholar] [CrossRef]
  69. Wei, G.W.; Alsaadi, F.E.; Hayat, T.; Alsaedi, A. Bipolar fuzzy Hamacher aggregation operators in multiple attribute decision making. Int. J. Fuzzy Syst. 2018, 20, 1–12. [Google Scholar] [CrossRef]
  70. Verma, R. Multiple attribute group decision making based on generalized trapezoid fuzzy linguistic prioritized weighted average operator. Int. J. Mach. Learn. Cybern. 2017, 8, 1993–2007. [Google Scholar] [CrossRef]
  71. Wang, J.; Wei, G.; Lu, M. An Extended VIKOR Method for Multiple Criteria Group Decision Making with Triangular Fuzzy Neutrosophic Numbers. Symmetry 2018, 10, 497. [Google Scholar] [CrossRef]
  72. Wang, J.; Wei, G.; Gao, H. Approaches to Multiple Attribute Decision Making with Interval-Valued 2-Tuple Linguistic Pythagorean Fuzzy Information. Mathematics 2018, 6, 201. [Google Scholar] [CrossRef]
  73. Wei, Y.; Liu, J.; Lai, X.; Hu, Y. Which determinant is the most informative in forecasting crude oil market volatility: Fundamental, speculation, or uncertainty? Energy Econ. 2017, 68, 141–150. [Google Scholar] [CrossRef]
  74. Wei, Y.; Yu, Q.; Liu, J.; Cao, Y. Hot money and China’s stock market volatility: Further evidence using the GARCH-MIDAS model. Phys. A Stat. Mech. Appl. 2018, 492, 923–930. [Google Scholar] [CrossRef]
  75. Wei, G.W. TODIM method for picture fuzzy multiple attribute decision making. Informatica 2018, 29, 555–566. [Google Scholar]
  76. Deng, X.M.; Wei, G.W.; Gao, H.; Wang, J. Models for safety assessment of construction project with some 2-tuple linguistic Pythagorean fuzzy Bonferroni mean operators. IEEE Access 2018, 6, 52105–52137. [Google Scholar] [CrossRef]
  77. Akram, M.; Shahzadi, S. Novel intuitionistic fuzzy soft multiple-attribute decision-making methods. Neural Comput. Appl. 2018, 29, 435–447. [Google Scholar] [CrossRef]
  78. Huang, Y.H.; Wei, G.W. TODIM Method for Pythagorean 2-tuple Linguistic Multiple Attribute Decision Making. J. Intell. Fuzzy Syst. 2018, 35, 901–915. [Google Scholar] [CrossRef]
  79. Hao, Y.; Chen, X. Study on the ranking problems in multiple attribute decision making based on interval-valued intuitionistic fuzzy numbers. Int. J. Intell. Syst. 2018, 33, 560–572. [Google Scholar] [CrossRef]
  80. Alcantud, J.C.R. Some formal relationships among soft sets, fuzzy sets, and their extensions. Int. J. Approx. Reason. 2016, 68, 45–53. [Google Scholar]
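The operators proposed in this paper are built on the Dombi T-norm and T-conorm of Dombi [54]. As a minimal sketch of those two operations on membership degrees (the function names are ours; `x > 0` is the Dombi shape parameter used throughout the paper):

```python
def dombi_tnorm(a, b, x=1.0):
    # Dombi T-norm (Dombi, 1982 [54]) for a, b in (0, 1); x > 0 is the shape parameter.
    if a == 0.0 or b == 0.0:
        return 0.0
    return 1.0 / (1.0 + (((1.0 - a) / a) ** x + ((1.0 - b) / b) ** x) ** (1.0 / x))

def dombi_tconorm(a, b, x=1.0):
    # Dual Dombi T-conorm, obtained from the T-norm via De Morgan duality.
    if a == 1.0 or b == 1.0:
        return 1.0
    return 1.0 - 1.0 / (1.0 + ((a / (1.0 - a)) ** x + (b / (1.0 - b)) ** x) ** (1.0 / x))
```

For x = 1 and a = b = 0.5, the T-norm evaluates to 1/3 and the T-conorm to 2/3; the two are duals, i.e., `dombi_tconorm(a, b, x) == 1 - dombi_tnorm(1 - a, 1 - b, x)`.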
Table 1. Decision matrix R1.

      E1          E2          E3          E4
M1    (0.5, 0.3)  (0.6, 0.3)  (0.5, 0.2)  (0.6, 0.4)
M2    (0.7, 0.3)  (0.9, 0.1)  (0.8, 0.1)  (0.7, 0.2)
M3    (0.7, 0.2)  (0.5, 0.4)  (0.6, 0.1)  (0.4, 0.2)
M4    (0.5, 0.3)  (0.3, 0.4)  (0.5, 0.4)  (0.4, 0.5)
Table 2. Decision matrix R2.

      E1          E2          E3          E4
M1    (0.6, 0.2)  (0.7, 0.1)  (0.6, 0.2)  (0.6, 0.3)
M2    (0.9, 0.1)  (0.8, 0.2)  (0.7, 0.1)  (0.6, 0.4)
M3    (0.5, 0.2)  (0.6, 0.3)  (0.7, 0.2)  (0.8, 0.1)
M4    (0.7, 0.2)  (0.4, 0.3)  (0.5, 0.5)  (0.6, 0.3)
Table 3. Decision matrix R3.

      E1          E2          E3          E4
M1    (0.6, 0.4)  (0.7, 0.2)  (0.6, 0.3)  (0.5, 0.4)
M2    (0.8, 0.6)  (0.7, 0.1)  (0.6, 0.4)  (0.9, 0.1)
M3    (0.5, 0.2)  (0.4, 0.5)  (0.4, 0.3)  (0.5, 0.4)
M4    (0.2, 0.5)  (0.5, 0.4)  (0.7, 0.2)  (0.5, 0.4)
Table 4. Decision matrix R4.

      E1          E2          E3          E4
M1    (0.6, 0.2)  (0.5, 0.4)  (0.6, 0.4)  (0.4, 0.5)
M2    (0.7, 0.3)  (0.8, 0.1)  (0.6, 0.2)  (0.9, 0.1)
M3    (0.6, 0.4)  (0.3, 0.6)  (0.2, 0.6)  (0.5, 0.3)
M4    (0.3, 0.5)  (0.2, 0.7)  (0.5, 0.4)  (0.3, 0.6)
Table 5. The collective decision matrix R.

      G1                G2                G3                G4
A1    (0.2976, 0.3875)  (0.3504, 0.2771)  (0.2156, 0.2689)  (0.3996, 0.3304)
A2    (0.5818, 0.2103)  (0.5000, 0.1638)  (0.4172, 0.1781)  (0.4554, 0.2073)
A3    (0.3282, 0.2872)  (0.4554, 0.3079)  (0.3095, 0.2316)  (0.2857, 0.3472)
A4    (0.2411, 0.5299)  (0.3671, 0.2684)  (0.1813, 0.2504)  (0.0371, 0.7363)
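The collective matrix is obtained by fusing the four experts' ratings attribute by attribute. The paper does this with the proposed IFWDHM operator; as a simpler, hedged stand-in, the sketch below aggregates one cell with the classical IFWA operator of Xu [4], using illustrative expert weights that we assume here for demonstration only:

```python
def ifwa(pairs, weights):
    # Intuitionistic fuzzy weighted average (Xu [4]) of (mu, nu) pairs:
    # mu = 1 - prod((1 - mu_k)^w_k),  nu = prod(nu_k^w_k).
    mu_prod, nu_prod = 1.0, 1.0
    for (m, n), w in zip(pairs, weights):
        mu_prod *= (1.0 - m) ** w
        nu_prod *= n ** w
    return 1.0 - mu_prod, nu_prod

# Ratings of alternative M1 on attribute E1 from Tables 1-4,
# with assumed (hypothetical) expert weights summing to 1.
ratings = [(0.5, 0.3), (0.6, 0.2), (0.6, 0.4), (0.6, 0.2)]
weights = [0.2, 0.3, 0.3, 0.2]
mu, nu = ifwa(ratings, weights)  # a single collective IFN, with mu + nu <= 1
```

With equal inputs the operator is idempotent, e.g. aggregating two copies of (0.5, 0.5) with weights (0.5, 0.5) returns (0.5, 0.5).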
Table 6. The collective decision matrix R.

      G1                G2                G3                G4
A1    (0.5372, 0.0288)  (0.6100, 0.0063)  (0.5618, 0.0043)  (0.6535, 0.0116)
A2    (0.7000, 0.0021)  (0.6667, 0.0006)  (0.6422, 0.0009)  (0.6646, 0.0012)
A3    (0.6300, 0.0066)  (0.6344, 0.0105)  (0.6066, 0.0025)  (0.5969, 0.0196)
A4    (0.5827, 0.1360)  (0.6729, 0.0052)  (0.5818, 0.0035)  (0.4172, 0.6485)
Table 7. Score and ranking of the alternatives with different parameter values x.

x      Score S(ai)                                                            Ranking
x = 1  S(a1) = 0.0073, S(a2) = 0.0352, S(a3) = 0.0002, S(a4) = 0.0383   a2 ≻ a3 ≻ a1 ≻ a4
x = 2  S(a1) = 0.3357, S(a2) = 0.3093, S(a3) = 0.2272, S(a4) = 0.6036   a2 ≻ a3 ≻ a1 ≻ a4
x = 3  S(a1) = 0.2988, S(a2) = 0.1265, S(a3) = 0.2595, S(a4) = 0.4853   a2 ≻ a3 ≻ a1 ≻ a4
x = 4  S(a1) = 0.1391, S(a2) = 0.3007, S(a3) = 0.1860, S(a4) = 0.1440   a2 ≻ a3 ≻ a1 ≻ a4
Table 8. Score and order of the alternatives with different parameter values x.

x      Score S(ai)                                                            Ranking
x = 1  S(a1) = 0.0437, S(a2) = 0.0915, S(a3) = 0.0548, S(a4) = 0.0110   a2 ≻ a3 ≻ a1 ≻ a4
x = 2  S(a1) = 0.7907, S(a2) = 0.9369, S(a3) = 0.8497, S(a4) = 0.6399   a2 ≻ a3 ≻ a1 ≻ a4
x = 3  S(a1) = 0.2059, S(a2) = 0.3597, S(a3) = 0.2397, S(a4) = 0.0320   a2 ≻ a3 ≻ a1 ≻ a4
x = 4  S(a1) = 0.2398, S(a2) = 0.3695, S(a3) = 0.2477, S(a4) = 0.0424   a2 ≻ a3 ≻ a1 ≻ a4
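The rankings above come from ordering the alternatives by the score function S(a) = μ − ν of Chen and Tan [60]. A minimal sketch, using as illustrative inputs the (μ, ν) pairs of the first column of Table 5 (in the paper each alternative's overall value is first fused across all attributes):

```python
def score(mu, nu):
    # Score function S = mu - nu (Chen and Tan [60]); higher is better.
    return mu - nu

# Illustrative collective IFNs for the four alternatives (column G1 of Table 5).
overall = {
    "a1": (0.2976, 0.3875),
    "a2": (0.5818, 0.2103),
    "a3": (0.3282, 0.2872),
    "a4": (0.2411, 0.5299),
}
# Sort alternatives by descending score to obtain the ranking.
ranking = sorted(overall, key=lambda a: score(*overall[a]), reverse=True)
# -> ['a2', 'a3', 'a1', 'a4'], i.e. a2 > a3 > a1 > a4
```

For these values the ranking agrees with Tables 7 and 8: a2 ≻ a3 ≻ a1 ≻ a4.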
Table 9. Ordering of the green suppliers.

Method    Ordering
IFWA      a2 ≻ a3 ≻ a1 ≻ a4
IFWG      a2 ≻ a3 ≻ a1 ≻ a4
IFWMM     a2 ≻ a3 ≻ a1 ≻ a4
IFDWMM    a2 ≻ a3 ≻ a1 ≻ a4

Li, Z.; Gao, H.; Wei, G. Methods for Multiple Attribute Group Decision Making Based on Intuitionistic Fuzzy Dombi Hamy Mean Operators. Symmetry 2018, 10, 574. https://doi.org/10.3390/sym10110574