Article

Bonferroni Mean Operators of Linguistic Neutrosophic Numbers and Their Multiple Attribute Group Decision-Making Methods

1 Department of Computer Science, Shaoxing University, 508 Huancheng West Road, Shaoxing, Zhejiang 312000, China
2 Department of Electrical and Information Engineering, Shaoxing University, 508 Huancheng West Road, Shaoxing, Zhejiang 312000, China
* Author to whom correspondence should be addressed.
Information 2017, 8(3), 107; https://doi.org/10.3390/info8030107
Submission received: 11 August 2017 / Revised: 31 August 2017 / Accepted: 31 August 2017 / Published: 1 September 2017

Abstract

Linguistic neutrosophic numbers (LNNs), presented by Fang and Ye in 2017, can describe the truth, falsity, and indeterminacy linguistic information independently. In this paper, the LNN and the Bonferroni mean operator are merged to propose an LNN normalized weighted Bonferroni mean (LNNNWBM) operator and an LNN normalized weighted geometric Bonferroni mean (LNNNWGBM) operator, and the properties of these two operators are proved. Further, multiple attribute group decision-making methods are introduced based on the proposed LNNNWBM and LNNNWGBM operators, and an example is provided to demonstrate the application and validity of the proposed methods. In addition, to examine the effect of the parameters p and q on the decision results, different pairs of parameter values are employed to verify the decision results.

1. Introduction

In dealing with complex, unknown, and uncertain decision-making problems, a group of decision-makers is usually employed to analyze a set of alternatives and to obtain the optimal result in a certain way. Such a decision-making process is called a multiple attribute group decision-making (MAGDM) problem. When making decisions, decision-makers tend to use words such as “excellent”, “good”, and “poor” to express their evaluations of objects. Zadeh proposed a linguistic variable set S = {S_0, S_1, S_2, S_3, …, S_g} (g is an even number) to deal with approximate reasoning problems [1,2]. The linguistic variable is an effective tool that improves the reliability and flexibility of classical decision models [3,4]. In recent years, linguistic variables have frequently been linked to other theories. Liu proposed the intuitionistic linguistic set (ILS) composed of linguistic variables and intuitionistic fuzzy sets (IFSs), where the first component provides the qualitative evaluation value/linguistic value and the second component gives the credibility of the intuitionistic fuzzy value for the given linguistic value [5]. Then, Chen et al. proposed the linguistic intuitionistic fuzzy number (LIFN), which is composed of the intuitionistic fuzzy number (the basic element in an IFS) and the linguistic variable [6]. On the other hand, some methods for MAGDM were proposed based on two-dimension uncertain linguistic variables [7,8], and some improved linguistic intuitionistic fuzzy aggregation operators and several corresponding applications were given for decision-making [9]. Although the IFS theory considers not only the truth-membership degree T(x) but also the falsity-membership degree F(x), an IFS is still not expressive enough because it ignores indeterminate and inconsistent information. Thus, the intuitionistic fuzzy number can only express incomplete information, but not indeterminate and inconsistent information. To make up for this insufficiency of the IFS theory, Smarandache put forward the neutrosophic set (NS) composed of three parts: the truth T(x), falsity F(x), and indeterminacy I(x) [10,11]. Wang et al. and Smarandache also proposed the concept of a single-valued neutrosophic set (SVNS) satisfying T(x), I(x), F(x) ∈ [0, 1] and 0 ≤ T(x) + I(x) + F(x) ≤ 3 [10,11,12]. Ye proposed an extended TOPSIS (technique for order preference by similarity to an ideal solution) method for MAGDM based on single-valued neutrosophic linguistic numbers (SVNLNs), which are the basic elements in a single-valued neutrosophic linguistic set (SVNLS) [13]. Liu and Shi presented some neutrosophic uncertain linguistic number Heronian mean operators and their application to MAGDM [14]. Since the Bonferroni mean (BM) is a useful operator in decision-making [15], it was extended to hesitant fuzzy sets, IFSs, and interval-valued IFSs to propose various Bonferroni mean operators for decision-making [16,17,18,19,20]. Then, Fang and Ye proposed linguistic neutrosophic numbers (LNNs) and their basic operational laws [21]. An LNN consists of the truth, indeterminacy, and falsity linguistic degrees and can be expressed in the form a = ⟨l_T, l_I, l_F⟩, whereas the LIFN and the SVNLN cannot express such a linguistic evaluation value. In [21], Fang and Ye also presented an LNN-weighted arithmetic averaging (LNNWAA) operator and an LNN-weighted geometric averaging (LNNWGA) operator for MAGDM. However, the Bonferroni mean operator has not yet been extended to LNNs.
Hence, this paper proposes an LNN normalized weighted Bonferroni mean (LNNNWBM) operator, an LNN normalized weighted geometric Bonferroni mean (LNNNWGBM) operator, and their MAGDM methods. Compared with the aggregation operators in [14,21], the LNNNWBM and LNNNWGBM operators determine the final weights from the relationships between attribute values, which makes the information aggregation more objective and reliable.
The rest of this paper is organized as follows. Section 2 describes some basic concepts of LNNs, the basic operational laws of LNNs, and the basic concepts of the BM and the normalized weighted BM. Section 3 proposes the LNNNWBM and LNNNWGBM operators and investigates their properties. Section 4 establishes MAGDM methods based on the LNNNWBM and LNNNWGBM operators. Section 5 provides an illustrative example with different values of the parameters p and q to demonstrate the application of the proposed methods. Section 6 gives conclusions.

2. Some Concepts of LNNs and BM

2.1. Linguistic Neutrosophic Numbers and Their Operational Laws

Definition 1
[21]. Let L = {l_0, l_1, l_2, …, l_g} be a linguistic term set, where g is an even number and g + 1 is the cardinality (granularity) of L. If a = ⟨l_T, l_I, l_F⟩ is defined for l_T, l_I, l_F ∈ L and T, I, F ∈ [0, g], where l_T expresses the truth degree, l_I expresses the indeterminacy degree, and l_F expresses the falsity degree by linguistic terms, then a is called an LNN.
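For instance (an illustrative example added here, using the linguistic term set of Section 5 with g = 8), a = ⟨l_6, l_1, l_2⟩ is an LNN stating that the truth degree of a judgment is “good” (l_6), its indeterminacy degree is “very bad”, i.e., very low (l_1), and its falsity degree is “bad”, i.e., low (l_2).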
Definition 2
[21]. Let a = ⟨l_T, l_I, l_F⟩, a_1 = ⟨l_{T_1}, l_{I_1}, l_{F_1}⟩, and a_2 = ⟨l_{T_2}, l_{I_2}, l_{F_2}⟩ be three LNNs in L and let λ ≥ 0. Then, the following operational laws hold:
$$a_1 \oplus a_2 = \langle l_{T_1}, l_{I_1}, l_{F_1}\rangle \oplus \langle l_{T_2}, l_{I_2}, l_{F_2}\rangle = \left\langle l_{T_1 + T_2 - \frac{T_1 T_2}{g}},\ l_{\frac{I_1 I_2}{g}},\ l_{\frac{F_1 F_2}{g}}\right\rangle \quad (1)$$

$$a_1 \otimes a_2 = \langle l_{T_1}, l_{I_1}, l_{F_1}\rangle \otimes \langle l_{T_2}, l_{I_2}, l_{F_2}\rangle = \left\langle l_{\frac{T_1 T_2}{g}},\ l_{I_1 + I_2 - \frac{I_1 I_2}{g}},\ l_{F_1 + F_2 - \frac{F_1 F_2}{g}}\right\rangle \quad (2)$$

$$\lambda a = \lambda\langle l_T, l_I, l_F\rangle = \left\langle l_{g - g\left(1 - \frac{T}{g}\right)^{\lambda}},\ l_{g\left(\frac{I}{g}\right)^{\lambda}},\ l_{g\left(\frac{F}{g}\right)^{\lambda}}\right\rangle \quad (3)$$

$$a^{\lambda} = \langle l_T, l_I, l_F\rangle^{\lambda} = \left\langle l_{g\left(\frac{T}{g}\right)^{\lambda}},\ l_{g - g\left(1 - \frac{I}{g}\right)^{\lambda}},\ l_{g - g\left(1 - \frac{F}{g}\right)^{\lambda}}\right\rangle \quad (4)$$
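To illustrate how these operational laws act on the subscripts, the following Python sketch (illustrative code added here, not part of the original paper; the class name LNN and the default g = 8 are our own assumptions) implements Equations (1)–(4):

```python
from dataclasses import dataclass

G = 8  # granularity parameter g, assumed to be 8 as in the example of Section 5

@dataclass
class LNN:
    """A linguistic neutrosophic number <l_T, l_I, l_F>, stored by its subscripts."""
    T: float
    I: float
    F: float

    def add(self, other):   # Equation (1): a1 (+) a2
        return LNN(self.T + other.T - self.T * other.T / G,
                   self.I * other.I / G,
                   self.F * other.F / G)

    def mul(self, other):   # Equation (2): a1 (x) a2
        return LNN(self.T * other.T / G,
                   self.I + other.I - self.I * other.I / G,
                   self.F + other.F - self.F * other.F / G)

    def scale(self, lam):   # Equation (3): lambda * a
        return LNN(G - G * (1 - self.T / G) ** lam,
                   G * (self.I / G) ** lam,
                   G * (self.F / G) ** lam)

    def power(self, lam):   # Equation (4): a ** lambda
        return LNN(G * (self.T / G) ** lam,
                   G - G * (1 - self.I / G) ** lam,
                   G - G * (1 - self.F / G) ** lam)
```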
Definition 3
[21]. Let a = ⟨l_T, l_I, l_F⟩ be an LNN in L. Then the expectation E(a) and the accuracy H(a) are defined as follows:

$$E(a) = \frac{2g + T - I - F}{3g} \quad (5)$$

$$H(a) = \frac{T - F}{g} \quad (6)$$
Definition 4
[21]. Let a_1 = ⟨l_{T_1}, l_{I_1}, l_{F_1}⟩ and a_2 = ⟨l_{T_2}, l_{I_2}, l_{F_2}⟩ be two LNNs. Then:
  • If E(a_1) > E(a_2), then a_1 ≻ a_2;
  • If E(a_1) < E(a_2), then a_1 ≺ a_2;
  • If E(a_1) = E(a_2), then:
    • If H(a_1) > H(a_2), then a_1 ≻ a_2;
    • If H(a_1) = H(a_2), then a_1 = a_2;
    • If H(a_1) < H(a_2), then a_1 ≺ a_2.
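A minimal sketch of the expectation, accuracy, and comparison rules of Definitions 3 and 4 (illustrative code added here; the function names and the default g = 8 are our own choices):

```python
def expectation(T, I, F, g=8):
    # Equation (5): E(a) = (2g + T - I - F) / (3g)
    return (2 * g + T - I - F) / (3 * g)

def accuracy(T, F, g=8):
    # Equation (6): H(a) = (T - F) / g
    return (T - F) / g

def compare(a, b, g=8):
    """Return 1 if a > b, -1 if a < b, 0 if a = b (Definition 4).
    a and b are (T, I, F) subscript triplets."""
    Ea, Eb = expectation(*a, g), expectation(*b, g)
    if Ea != Eb:
        return 1 if Ea > Eb else -1
    Ha, Hb = accuracy(a[0], a[2], g), accuracy(b[0], b[2], g)
    return (Ha > Hb) - (Ha < Hb)
```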

2.2. Bonferroni Mean Operators

Definition 5
[15]. Let (a_1, a_2, …, a_n) be a collection of non-negative numbers and let BM: R^n → R be a function. If p, q ≥ 0 and BM satisfies

$$BM^{p,q}(a_1, a_2, \ldots, a_n) = \left(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ j\neq i}}^{n} a_i^{p} a_j^{q}\right)^{\frac{1}{p+q}} \quad (7)$$

then BM^{p,q} is called a Bonferroni mean (BM) operator.
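For crisp non-negative numbers, the BM operator of Definition 5 can be computed directly, e.g., as in the following illustrative sketch (not taken from the paper; it assumes n ≥ 2 and p + q > 0):

```python
def bonferroni_mean(a, p, q):
    """Classical Bonferroni mean BM^{p,q} of a list of non-negative numbers."""
    n = len(a)
    s = sum(a[i] ** p * a[j] ** q
            for i in range(n) for j in range(n) if j != i)
    return (s / (n * (n - 1))) ** (1 / (p + q))
```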
Definition 6
[16]. Let (a_1, a_2, …, a_n) be a collection of non-negative numbers, let NWBM: R^n → R be a function, and let w_i (i = 1, 2, …, n) be the relative weight of a_i with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. If p, q ≥ 0 and NWBM satisfies

$$NWBM^{p,q}(a_1, a_2, \ldots, a_n) = \left(\sum_{\substack{i,j=1 \\ j\neq i}}^{n} \frac{w_i w_j}{1 - w_i}\, a_i^{p} a_j^{q}\right)^{\frac{1}{p+q}} \quad (8)$$

then NWBM^{p,q} is called a normalized weighted BM (NWBM) operator.
Definition 7
[17]. Let (a_1, a_2, …, a_n) be a collection of non-negative numbers and let GBM: R^n → R be a function. If p, q ≥ 0 and GBM satisfies:
$$GBM^{p,q}(a_1, a_2, \ldots, a_n) = \left(\frac{1}{n}\sum_{i=1}^{n} a_i^{p}\left(\prod_{\substack{j=1 \\ j\neq i}}^{n} a_j^{q}\right)^{\frac{1}{n-1}}\right)^{\frac{1}{p+q}} \quad (9)$$
then GBM^{p,q} is called a geometric BM (GBM) operator.
Definition 8
[18,19,20]. Let (a_1, a_2, …, a_n) be a collection of non-negative numbers, let NWGBM: R^n → R be a function, and let w_i (i = 1, 2, …, n) be the relative weight of a_i with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1. If p, q ≥ 0 and NWGBM satisfies

$$NWGBM^{p,q}(a_1, a_2, \ldots, a_n) = \frac{1}{p+q}\prod_{\substack{i,j=1 \\ j\neq i}}^{n}\left(p a_i + q a_j\right)^{\frac{w_i w_j}{1 - w_i}} \quad (10)$$

then NWGBM^{p,q} is called a normalized weighted geometric BM (NWGBM) operator.
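The weighted variants of Definitions 6 and 8 can likewise be sketched for crisp numbers (illustrative code under the assumptions that the weights sum to 1, every w_i < 1, and p + q > 0):

```python
import math

def nwbm(a, w, p, q):
    """Normalized weighted Bonferroni mean (Definition 6) for crisp numbers."""
    n = len(a)
    s = sum(w[i] * w[j] / (1 - w[i]) * a[i] ** p * a[j] ** q
            for i in range(n) for j in range(n) if j != i)
    return s ** (1 / (p + q))

def nwgbm(a, w, p, q):
    """Normalized weighted geometric Bonferroni mean (Definition 8) for crisp numbers."""
    n = len(a)
    prod = math.prod((p * a[i] + q * a[j]) ** (w[i] * w[j] / (1 - w[i]))
                     for i in range(n) for j in range(n) if j != i)
    return prod / (p + q)
```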

3. Two BM Aggregation Operators of LNNs

3.1. Normalized Weighted BM Operators of LNNs

Definition 9.
Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L. Then the LNNNWBM operator is defined as follows:

$$LNNNWBM^{p,q}(a_1, a_2, \ldots, a_n) = \left(\bigoplus_{\substack{i,j=1 \\ j\neq i}}^{n} \frac{w_i w_j}{1 - w_i}\left(a_i^{p} \otimes a_j^{q}\right)\right)^{\frac{1}{p+q}} \quad (11)$$

where w_i (i = 1, 2, …, n) is the relative weight of a_i with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1.
According to Definitions 2 and 9, we can get the following theorem:
Theorem 1.
Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L. Then the aggregated result obtained by Equation (11) is still an LNN, and the following aggregation formula holds:

$$LNNNWBM^{p,q}(a_1, a_2, \ldots, a_n) = \left(\bigoplus_{\substack{i,j=1 \\ j\neq i}}^{n} \frac{w_i w_j}{1 - w_i}\left(a_i^{p} \otimes a_j^{q}\right)\right)^{\frac{1}{p+q}} = \left\langle l_{g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{T_i}{g}\right)^{p}\left(\frac{T_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{I_i}{g}\right)^{p}\left(1 - \frac{I_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{F_i}{g}\right)^{p}\left(1 - \frac{F_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}}\right\rangle \quad (12)$$

where w_i (i = 1, 2, …, n) is the relative weight of a_i with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1.
Proof 1.
(1) By Equation (4), $a_i^{p} = \left\langle l_{g\left(\frac{T_i}{g}\right)^{p}},\ l_{g - g\left(1 - \frac{I_i}{g}\right)^{p}},\ l_{g - g\left(1 - \frac{F_i}{g}\right)^{p}}\right\rangle$.
(2) Similarly, $a_j^{q} = \left\langle l_{g\left(\frac{T_j}{g}\right)^{q}},\ l_{g - g\left(1 - \frac{I_j}{g}\right)^{q}},\ l_{g - g\left(1 - \frac{F_j}{g}\right)^{q}}\right\rangle$.
(3) By Equation (2), $a_i^{p} \otimes a_j^{q} = \left\langle l_{g\left(\frac{T_i}{g}\right)^{p}\left(\frac{T_j}{g}\right)^{q}},\ l_{g - g\left(1 - \frac{I_i}{g}\right)^{p}\left(1 - \frac{I_j}{g}\right)^{q}},\ l_{g - g\left(1 - \frac{F_i}{g}\right)^{p}\left(1 - \frac{F_j}{g}\right)^{q}}\right\rangle$.
(4) By Equation (3), $\frac{w_i w_j}{1 - w_i}\left(a_i^{p} \otimes a_j^{q}\right) = \left\langle l_{g - g\left(1 - \left(\frac{T_i}{g}\right)^{p}\left(\frac{T_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}},\ l_{g\left(1 - \left(1 - \frac{I_i}{g}\right)^{p}\left(1 - \frac{I_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}},\ l_{g\left(1 - \left(1 - \frac{F_i}{g}\right)^{p}\left(1 - \frac{F_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}}\right\rangle$.
(5) By Equation (1), summing over all pairs gives $\bigoplus_{\substack{i,j=1 \\ j\neq i}}^{n}\frac{w_i w_j}{1 - w_i}\left(a_i^{p} \otimes a_j^{q}\right) = \left\langle l_{g - g\prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{T_i}{g}\right)^{p}\left(\frac{T_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}},\ l_{g\prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{I_i}{g}\right)^{p}\left(1 - \frac{I_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}},\ l_{g\prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{F_i}{g}\right)^{p}\left(1 - \frac{F_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}}\right\rangle$.
(6) Finally, raising this LNN to the power $\frac{1}{p+q}$ by Equation (4) yields the right-hand side of Equation (12).
The proof of Theorem 1 is completed. □
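The closed-form expression of Theorem 1 can be transcribed directly into code; the following Python sketch is illustrative (the function name lnnnwbm, the representation of an LNN by its (T, I, F) subscript triplet, and the default g = 8 are our own choices):

```python
def lnnnwbm(lnns, w, p, q, g=8):
    """LNNNWBM operator: lnns is a list of (T, I, F) subscript triplets, w the weights."""
    n = len(lnns)
    prod_T = prod_I = prod_F = 1.0
    for i in range(n):
        Ti, Ii, Fi = lnns[i]
        for j in range(n):
            if j == i:
                continue
            Tj, Ij, Fj = lnns[j]
            e = w[i] * w[j] / (1 - w[i])
            prod_T *= (1 - (Ti / g) ** p * (Tj / g) ** q) ** e
            prod_I *= (1 - (1 - Ii / g) ** p * (1 - Ij / g) ** q) ** e
            prod_F *= (1 - (1 - Fi / g) ** p * (1 - Fj / g) ** q) ** e
    r = 1 / (p + q)
    # Components of Equation (12): <l_T, l_I, l_F> returned as a subscript triplet.
    return (g * (1 - prod_T) ** r,
            g - g * (1 - prod_I) ** r,
            g - g * (1 - prod_F) ** r)
```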
Theorem 2.
(Idempotency). Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L. If a_i = a for all i, then

$$LNNNWBM^{p,q}(a_1, a_2, \ldots, a_n) = LNNNWBM^{p,q}(a, a, \ldots, a) = a.$$
Proof 2.
Since a_i = a, i.e., T_i = T, I_i = I, and F_i = F for i = 1, 2, …, n, and since $\sum_{i=1}^{n}\sum_{\substack{j=1 \\ j\neq i}}^{n}\frac{w_i w_j}{1 - w_i} = \sum_{i=1}^{n}\frac{w_i}{1 - w_i}\sum_{\substack{j=1 \\ j\neq i}}^{n} w_j = \sum_{i=1}^{n} w_i = 1$, we obtain:

$$LNNNWBM^{p,q}(a_1, a_2, \ldots, a_n) = \left(\bigoplus_{\substack{i,j=1 \\ j\neq i}}^{n}\frac{w_i w_j}{1 - w_i}\left(a^{p} \otimes a^{q}\right)\right)^{\frac{1}{p+q}} = \left\langle l_{g\left(1 - \left(1 - \left(\frac{T}{g}\right)^{p+q}\right)^{\sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n}\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \left(1 - \left(1 - \frac{I}{g}\right)^{p+q}\right)^{\sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n}\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \left(1 - \left(1 - \frac{F}{g}\right)^{p+q}\right)^{\sum_{i=1}^{n}\sum_{j=1, j\neq i}^{n}\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}}\right\rangle = \left\langle l_{g\left(\left(\frac{T}{g}\right)^{p+q}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(\left(1 - \frac{I}{g}\right)^{p+q}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(\left(1 - \frac{F}{g}\right)^{p+q}\right)^{\frac{1}{p+q}}}\right\rangle = \langle l_T, l_I, l_F\rangle = a.$$
The proof of Theorem 2 is completed. □
Theorem 3.
(Monotonicity). Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ and b_i = ⟨l_{T'_i}, l_{I'_i}, l_{F'_i}⟩ (i = 1, 2, …, n) be two collections of LNNs in L. If T_i ≤ T'_i, I_i ≥ I'_i, and F_i ≥ F'_i for all i, then LNNNWBM^{p,q}(a_1, a_2, …, a_n) ≤ LNNNWBM^{p,q}(b_1, b_2, …, b_n).
Proof 3.
Since T_i ≤ T'_i, I_i ≥ I'_i, and F_i ≥ F'_i, we can easily obtain:

$$1 - \left(\frac{T_i}{g}\right)^{p}\left(\frac{T_j}{g}\right)^{q} \ge 1 - \left(\frac{T'_i}{g}\right)^{p}\left(\frac{T'_j}{g}\right)^{q},$$
$$1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{T_i}{g}\right)^{p}\left(\frac{T_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}} \le 1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{T'_i}{g}\right)^{p}\left(\frac{T'_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}},$$
$$g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{T_i}{g}\right)^{p}\left(\frac{T_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}} \le g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{T'_i}{g}\right)^{p}\left(\frac{T'_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}.$$

Similarly,

$$\left(1 - \frac{I_i}{g}\right)^{p}\left(1 - \frac{I_j}{g}\right)^{q} \le \left(1 - \frac{I'_i}{g}\right)^{p}\left(1 - \frac{I'_j}{g}\right)^{q},$$
$$g - g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{I_i}{g}\right)^{p}\left(1 - \frac{I_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}} \ge g - g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{I'_i}{g}\right)^{p}\left(1 - \frac{I'_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}},$$

and

$$g - g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{F_i}{g}\right)^{p}\left(1 - \frac{F_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}} \ge g - g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{F'_i}{g}\right)^{p}\left(1 - \frac{F'_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}.$$
Thus, by Definitions 3 and 4 (the comparison of LNNs via the expected value), LNNNWBM^{p,q}(a_1, a_2, …, a_n) ≤ LNNNWBM^{p,q}(b_1, b_2, …, b_n) holds. Therefore, the proof of Theorem 3 is completed. □
Theorem 4.
(Boundedness). Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L, and let a⁻ = ⟨min_i(l_{T_i}), max_i(l_{I_i}), max_i(l_{F_i})⟩ and a⁺ = ⟨max_i(l_{T_i}), min_i(l_{I_i}), min_i(l_{F_i})⟩. Then:

$$a^{-} \le LNNNWBM^{p,q}(a_1, a_2, \ldots, a_n) \le a^{+}$$
Proof 4.
According to Theorem 2, we have a⁻ = LNNNWBM^{p,q}(a⁻, a⁻, …, a⁻) and a⁺ = LNNNWBM^{p,q}(a⁺, a⁺, …, a⁺).
According to Theorem 3, we have LNNNWBM^{p,q}(a⁻, a⁻, …, a⁻) ≤ LNNNWBM^{p,q}(a_1, a_2, …, a_n) ≤ LNNNWBM^{p,q}(a⁺, a⁺, …, a⁺).
Then a⁻ ≤ LNNNWBM^{p,q}(a_1, a_2, …, a_n) ≤ a⁺.
The proof of Theorem 4 is completed. □

3.2. Normalized Weighted Geometric BM Operators of LNNs

Definition 10.
Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L. Then the LNNNWGBM operator is defined as follows:

$$LNNNWGBM^{p,q}(a_1, a_2, \ldots, a_n) = \frac{1}{p+q}\bigotimes_{i=1}^{n}\bigotimes_{\substack{j=1 \\ j\neq i}}^{n}\left(p a_i \oplus q a_j\right)^{\frac{w_i w_j}{1 - w_i}} \quad (13)$$

where w_i (i = 1, 2, …, n) is the relative weight of a_i with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1.
According to Definitions 2 and 10, we can get the following theorem:
Theorem 5.
Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L. Then the aggregated result obtained by Equation (13) is still an LNN, and the following aggregation formula holds:

$$LNNNWGBM^{p,q}(a_1, a_2, \ldots, a_n) = \frac{1}{p+q}\bigotimes_{i=1}^{n}\bigotimes_{\substack{j=1 \\ j\neq i}}^{n}\left(p a_i \oplus q a_j\right)^{\frac{w_i w_j}{1 - w_i}} = \left\langle l_{g - g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(1 - \frac{T_i}{g}\right)^{p}\left(1 - \frac{T_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}},\ l_{g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{I_i}{g}\right)^{p}\left(\frac{I_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}},\ l_{g\left(1 - \prod_{i=1}^{n}\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(1 - \left(\frac{F_i}{g}\right)^{p}\left(\frac{F_j}{g}\right)^{q}\right)^{\frac{w_i w_j}{1 - w_i}}\right)^{\frac{1}{p+q}}}\right\rangle \quad (14)$$

where w_i (i = 1, 2, …, n) is the relative weight of a_i with w_i ∈ [0, 1] and ∑_{i=1}^{n} w_i = 1.
The proof of Theorem 5 is similar to that of Theorem 1, so it is not repeated here.
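Analogously, the closed-form expression of Theorem 5 can be sketched as follows (illustrative code added here, with the same conventions as the LNNNWBM sketch above):

```python
def lnnnwgbm(lnns, w, p, q, g=8):
    """LNNNWGBM operator: lnns is a list of (T, I, F) subscript triplets, w the weights."""
    n = len(lnns)
    prod_T = prod_I = prod_F = 1.0
    for i in range(n):
        Ti, Ii, Fi = lnns[i]
        for j in range(n):
            if j == i:
                continue
            Tj, Ij, Fj = lnns[j]
            e = w[i] * w[j] / (1 - w[i])
            prod_T *= (1 - (1 - Ti / g) ** p * (1 - Tj / g) ** q) ** e
            prod_I *= (1 - (Ii / g) ** p * (Ij / g) ** q) ** e
            prod_F *= (1 - (Fi / g) ** p * (Fj / g) ** q) ** e
    r = 1 / (p + q)
    # Components of Equation (14): <l_T, l_I, l_F> returned as a subscript triplet.
    return (g - g * (1 - prod_T) ** r,
            g * (1 - prod_I) ** r,
            g * (1 - prod_F) ** r)
```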
Theorem 6.
(Idempotency). Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L. If a_i = a for all i, then

$$LNNNWGBM^{p,q}(a_1, a_2, \ldots, a_n) = LNNNWGBM^{p,q}(a, a, \ldots, a) = a.$$
The proof of Theorem 6 is similar to that of Theorem 2, so it is not repeated here.
Theorem 7.
(Monotonicity). Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ and b_i = ⟨l_{T'_i}, l_{I'_i}, l_{F'_i}⟩ (i = 1, 2, …, n) be two collections of LNNs in L. If T_i ≤ T'_i, I_i ≥ I'_i, and F_i ≥ F'_i for all i, then:

$$LNNNWGBM^{p,q}(a_1, a_2, \ldots, a_n) \le LNNNWGBM^{p,q}(b_1, b_2, \ldots, b_n)$$
The proof of Theorem 7 is similar to that of Theorem 3, so it is not repeated here.
Theorem 8.
(Boundedness). Let a_i = ⟨l_{T_i}, l_{I_i}, l_{F_i}⟩ (i = 1, 2, …, n) be a collection of LNNs in L, and let a⁻ = ⟨min_i(l_{T_i}), max_i(l_{I_i}), max_i(l_{F_i})⟩ and a⁺ = ⟨max_i(l_{T_i}), min_i(l_{I_i}), min_i(l_{F_i})⟩. Then:

$$a^{-} \le LNNNWGBM^{p,q}(a_1, a_2, \ldots, a_n) \le a^{+}$$
The proof of Theorem 8 is similar to that of Theorem 4, so it is not repeated here.

4. MAGDM Methods Based on the LNNNWBM or LNNNWGBM Operator

In this section, we will use the LNNNWBM or LNNNWGBM operator to deal with the MAGDM problems with LNN information.
In a MAGDM problem, there is a set of m alternatives A = {A_1, A_2, …, A_m} and a set of n attributes C = {C_1, C_2, …, C_n}. The vector λ = (λ_1, λ_2, …, λ_n)^T with λ_j ≥ 0 and ∑_{j=1}^{n} λ_j = 1 contains the weights of the attributes C_j (j = 1, 2, …, n). A set of t experts E = {E_1, E_2, …, E_t} is invited to evaluate the MAGDM problem, and w = (w_1, w_2, …, w_t)^T with w_y ≥ 0 and ∑_{y=1}^{t} w_y = 1 is the weight vector of the experts E_y (y = 1, 2, …, t). Let L = {l_0, l_1, …, l_g} be the given linguistic term set. The value assessed by the expert E_y for the alternative A_i with respect to the attribute C_j is the LNN a_ij^(y) = ⟨l_{T_ij^y}, l_{I_ij^y}, l_{F_ij^y}⟩ (y = 1, 2, …, t; i = 1, 2, …, m; j = 1, 2, …, n) with l_{T_ij^y}, l_{I_ij^y}, l_{F_ij^y} ∈ L. Then, we can construct the linguistic neutrosophic decision matrix R_y of each expert, which is shown in Table 1.
Then, based on the LNNNWBM or LNNNWGBM operator, we propose two decision-making methods, which are described as the following decision steps:
Step 1: According to the weight vector w = (w_1, w_2, …, w_t)^T of the experts and the LNNNWBM operator, we obtain the integrated matrix R = (a_ij)_{m×n}, where each collective LNN a_ij is obtained by the following formula (here the sums and products run over the expert indices y and z):

$$a_{ij} = LNNNWBM^{p,q}\left(a_{ij}^{(1)}, a_{ij}^{(2)}, \ldots, a_{ij}^{(t)}\right) = \left(\bigoplus_{\substack{y,z=1 \\ z\neq y}}^{t}\frac{w_y w_z}{1 - w_y}\left(\left(a_{ij}^{(y)}\right)^{p} \otimes \left(a_{ij}^{(z)}\right)^{q}\right)\right)^{\frac{1}{p+q}} = \left\langle l_{g\left(1 - \prod_{y=1}^{t}\prod_{\substack{z=1 \\ z\neq y}}^{t}\left(1 - \left(\frac{T_{ij}^{y}}{g}\right)^{p}\left(\frac{T_{ij}^{z}}{g}\right)^{q}\right)^{\frac{w_y w_z}{1 - w_y}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \prod_{y=1}^{t}\prod_{\substack{z=1 \\ z\neq y}}^{t}\left(1 - \left(1 - \frac{I_{ij}^{y}}{g}\right)^{p}\left(1 - \frac{I_{ij}^{z}}{g}\right)^{q}\right)^{\frac{w_y w_z}{1 - w_y}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \prod_{y=1}^{t}\prod_{\substack{z=1 \\ z\neq y}}^{t}\left(1 - \left(1 - \frac{F_{ij}^{y}}{g}\right)^{p}\left(1 - \frac{F_{ij}^{z}}{g}\right)^{q}\right)^{\frac{w_y w_z}{1 - w_y}}\right)^{\frac{1}{p+q}}}\right\rangle \quad (15)$$
Step 2: According to the weight vector λ = (λ_1, λ_2, …, λ_n)^T of the attributes and the LNNNWBM operator or the LNNNWGBM operator, we obtain the total collective LNN a_i for each alternative A_i (i = 1, 2, …, m):

$$a_i = LNNNWBM^{p,q}(a_{i1}, a_{i2}, \ldots, a_{in}) = \left(\bigoplus_{\substack{j,k=1 \\ k\neq j}}^{n}\frac{\lambda_j \lambda_k}{1 - \lambda_j}\left(a_{ij}^{p} \otimes a_{ik}^{q}\right)\right)^{\frac{1}{p+q}} = \left\langle l_{g\left(1 - \prod_{j=1}^{n}\prod_{\substack{k=1 \\ k\neq j}}^{n}\left(1 - \left(\frac{T_{ij}}{g}\right)^{p}\left(\frac{T_{ik}}{g}\right)^{q}\right)^{\frac{\lambda_j \lambda_k}{1 - \lambda_j}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \prod_{j=1}^{n}\prod_{\substack{k=1 \\ k\neq j}}^{n}\left(1 - \left(1 - \frac{I_{ij}}{g}\right)^{p}\left(1 - \frac{I_{ik}}{g}\right)^{q}\right)^{\frac{\lambda_j \lambda_k}{1 - \lambda_j}}\right)^{\frac{1}{p+q}}},\ l_{g - g\left(1 - \prod_{j=1}^{n}\prod_{\substack{k=1 \\ k\neq j}}^{n}\left(1 - \left(1 - \frac{F_{ij}}{g}\right)^{p}\left(1 - \frac{F_{ik}}{g}\right)^{q}\right)^{\frac{\lambda_j \lambda_k}{1 - \lambda_j}}\right)^{\frac{1}{p+q}}}\right\rangle \quad (16)$$

or:

$$a_i = LNNNWGBM^{p,q}(a_{i1}, a_{i2}, \ldots, a_{in}) = \frac{1}{p+q}\bigotimes_{j=1}^{n}\bigotimes_{\substack{k=1 \\ k\neq j}}^{n}\left(p a_{ij} \oplus q a_{ik}\right)^{\frac{\lambda_j \lambda_k}{1 - \lambda_j}} = \left\langle l_{g - g\left(1 - \prod_{j=1}^{n}\prod_{\substack{k=1 \\ k\neq j}}^{n}\left(1 - \left(1 - \frac{T_{ij}}{g}\right)^{p}\left(1 - \frac{T_{ik}}{g}\right)^{q}\right)^{\frac{\lambda_j \lambda_k}{1 - \lambda_j}}\right)^{\frac{1}{p+q}}},\ l_{g\left(1 - \prod_{j=1}^{n}\prod_{\substack{k=1 \\ k\neq j}}^{n}\left(1 - \left(\frac{I_{ij}}{g}\right)^{p}\left(\frac{I_{ik}}{g}\right)^{q}\right)^{\frac{\lambda_j \lambda_k}{1 - \lambda_j}}\right)^{\frac{1}{p+q}}},\ l_{g\left(1 - \prod_{j=1}^{n}\prod_{\substack{k=1 \\ k\neq j}}^{n}\left(1 - \left(\frac{F_{ij}}{g}\right)^{p}\left(\frac{F_{ik}}{g}\right)^{q}\right)^{\frac{\lambda_j \lambda_k}{1 - \lambda_j}}\right)^{\frac{1}{p+q}}}\right\rangle \quad (17)$$

where T_ij, I_ij, and F_ij denote the subscripts of the collective LNN a_ij obtained in Step 1.
Step 3: According to Equation (5) (and Equation (6) if necessary), we calculate the expected value E(a_i) (and the accuracy H(a_i)) of the LNN a_i (i = 1, 2, …, m).
Step 4: According to the values of E(a_i) (and H(a_i) if necessary), we rank the alternatives and choose the best one.
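The four steps can be chained together as in the following illustrative sketch (our own code, not from the paper; it reuses the hypothetical lnnnwbm and expectation functions sketched earlier and assumes the evaluations are given as nested lists of (T, I, F) subscript triplets):

```python
def magdm_lnnnwbm(matrices, expert_w, attr_w, p=1, q=1, g=8):
    """matrices[y][i][j] is the LNN triplet given by expert y for alternative i, attribute j."""
    t, m, n = len(matrices), len(matrices[0]), len(matrices[0][0])
    # Step 1: aggregate the expert matrices into the integrated matrix R.
    R = [[lnnnwbm([matrices[y][i][j] for y in range(t)], expert_w, p, q, g)
          for j in range(n)] for i in range(m)]
    # Step 2: aggregate the attribute values of each alternative into one collective LNN.
    overall = [lnnnwbm(R[i], attr_w, p, q, g) for i in range(m)]
    # Steps 3 and 4: score by the expectation E(a_i) and rank the alternatives.
    scores = [expectation(*a, g) for a in overall]
    ranking = sorted(range(m), key=lambda i: scores[i], reverse=True)
    return scores, ranking
```

With the data of Tables 2–4, the expert weights (0.37, 0.33, 0.3), the attribute weights (0.35, 0.25, 0.4), and p = q = 1, this sketch should reproduce the expected values reported in Section 5, up to rounding.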

5. Illustrative Examples

The decision-making problem used in the literature [21] is considered here. There are four companies as a set of alternatives A = {A_1, A_2, A_3, A_4}: a car company (A_1), a food company (A_2), a computer company (A_3), and an arms company (A_4). An investment company wants to invest in the best company, so it invites a set of three experts E = {E_1, E_2, E_3} to evaluate these four companies. The evaluation of the alternatives is based on a set of three attributes C = {C_1, C_2, C_3}: the risk (C_1), the growth (C_2), and the environmental impact (C_3). The importance of the three experts is given by the weight vector w = (0.37, 0.33, 0.3)^T, and the importance of the three attributes is given by the weight vector λ = (0.35, 0.25, 0.4)^T. The evaluations are expressed using the linguistic term set L = {l_0 = extremely bad, l_1 = very bad, l_2 = bad, l_3 = slightly bad, l_4 = medium, l_5 = slightly good, l_6 = good, l_7 = very good, l_8 = extremely good}. Thus, we can establish the LNN decision matrices R_i (i = 1, 2, 3), which are listed in Table 2, Table 3 and Table 4.

5.1. The Decision-Making Process Based on the LNNNWBM Operator or LNNNWGBM Operator

Step 1: According to the weight vector w = (0.37, 0.33, 0.3)^T of the experts and the LNNNWBM operator (with p = 1 and q = 1), we obtain the integrated matrix R = (a_ij)_{m×n}, which is listed in Table 5.
Step 2: According to the weight vector λ = (0.35, 0.25, 0.4)^T of the attributes and the LNNNWBM operator (with p = 1 and q = 1), we obtain the collective overall LNNs a_i for A_i (i = 1, 2, 3, 4) as follows:
a_1 = ⟨l_5.9328, l_1.8388, l_2.5784⟩, a_2 = ⟨l_6.1489, l_1.8908, l_2.2399⟩,
a_3 = ⟨l_6.0032, l_1.8430, l_2.3427⟩, and a_4 = ⟨l_6.1675, l_1.5412, l_1.7536⟩.
Step 3: Calculate the expected values E(a_i) of a_i (i = 1, 2, 3, 4):
E(a1) = 0.7298, E(a2) = 0.7508, E(a3) = 0.7424, and E(a4) = 0.7864.
According to these results, the ranking is E(a_4) > E(a_2) > E(a_3) > E(a_1), i.e., A_4 ≻ A_2 ≻ A_3 ≻ A_1, so the company A_4 is the best choice among the four companies.
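As a check of Step 3 (a worked computation added here, applying Equation (5) with g = 8 to a_4):

$$E(a_4) = \frac{2 \times 8 + 6.1675 - 1.5412 - 1.7536}{3 \times 8} = \frac{18.8727}{24} \approx 0.7864.$$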
On the other hand, we also use the LNNNWGBM operator (set p = 1 and q = 1) to deal with this decision-making problem:
Step 1′: The same as Step 1;
Step 2′: According to the weight vector λ = (0.35, 0.25, 0.4)^T of the attributes and the LNNNWGBM operator (with p = 1 and q = 1), we obtain the collective overall LNNs a_i for A_i (i = 1, 2, 3, 4) as follows:
a_1 = ⟨l_5.9970, l_1.8333, l_2.5434⟩, a_2 = ⟨l_6.1790, l_1.8897, l_2.2324⟩,
a_3 = ⟨l_6.0032, l_1.7928, l_2.3332⟩, and a_4 = ⟨l_6.1824, l_1.5362, l_1.7500⟩.
Step 3′: Calculate the expected values E(a_i) of a_i (i = 1, 2, 3, 4):
E(a1) = 0.7342, E(a2) = 0.7524, E(a3) = 0.7449, and E(a4) = 0.7873.
According to the results, the ranking is E ( a 4 ) > E ( a 2 ) > E ( a 3 ) > E ( a 1 ) , so the company A4 is the best choice among all the companies.

5.2. Analysis of the Influence of the Parameters p and q on the Decision Results

In order to analyze the effects of different parameters p and q on the decision results, we take different values of p and q in Steps 1 and 2; all the results are shown in Table 6 and Table 7.
From the above two tables, we can see that the ranking results remain the same when the parameters p and q take different values. Therefore, the influence of the two parameters is very small in this decision-making problem.
In the literature [21], the ranking is A_4 ≻ A_2 ≻ A_3 ≻ A_1, which agrees with the ranking result of this paper. Compared with the method in [21], the LNNNWBM and LNNNWGBM operators take the correlation between attributes into account in MAGDM, which makes the information aggregation more objective and reliable. Hence, the proposed MAGDM methods with different values of p and q are more flexible than the method in [21]. Compared with the literature [14], the method in [14] cannot express and handle decision-making problems with purely linguistic information such as LNNs; in contrast, the decision-making methods proposed in this paper based on the LNNNWBM and LNNNWGBM operators provide a new way for decision-makers in an LNN environment.

6. Conclusions

In MAGDM, how to handle the interdependence between attributes is a challenging issue. Thus, MAGDM methods based on the LNNNWBM and LNNNWGBM operators for LNNs are proposed in this paper. First, an LNN normalized weighted Bonferroni mean (LNNNWBM) operator and an LNN normalized weighted geometric Bonferroni mean (LNNNWGBM) operator are proposed based on the BM operator, and the related properties of these operators are discussed. Second, based on the LNNNWBM and LNNNWGBM operators, this paper puts forward two MAGDM methods in an LNN setting. Finally, an illustrative example was presented to show how these two methods are used for solving a MAGDM problem with LNN information. In addition, different values of the parameters p and q may affect the decision results of the proposed methods in some decision-making problems.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under grant Nos. 61603258 and 61703280.

Author Contributions

Changxing Fan originally proposed the LNNNWBM and LNNNWGBM operators and investigated their properties; Jun Ye, Keli Hu and En Fan provided the calculation and comparative analysis; and we wrote the paper together.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L.A. The concept of a linguistic variable and its application to approximate reasoning-I. Inf. Sci. 1975, 8, 199–249. [Google Scholar] [CrossRef]
  2. Zadeh, L.A. A concept of a linguistic variable and its application to approximate reasoning-II. Inf. Sci. 1975, 8, 301–357. [Google Scholar] [CrossRef]
  3. Herrera, F.; Herrera-Viedma, E. Linguistic decision analysis: Steps for solving decision problems under linguistic information. Fuzzy Sets Syst. 2000, 115, 67–82. [Google Scholar] [CrossRef]
  4. Herrera, F.; Herrera-Viedma, E. A model of consensus in group decision making under linguistic assessments. Fuzzy Sets Syst. 1996, 78, 73–87. [Google Scholar] [CrossRef]
  5. Liu, P.D. Some generalized dependent aggregation operators with intuitionistic linguistic numbers and their application to group decision making. J. Comput. Syst. Sci. 2013, 79, 131–143. [Google Scholar] [CrossRef]
  6. Chen, Z.C.; Liu, P.D.; Pei, Z. An approach to multiple attribute group decision making based on linguistic intuitionistic fuzzy numbers. Int. J. Comput. Intell. Syst. 2015, 8, 747–760. [Google Scholar] [CrossRef]
  7. Liu, P.D.; Teng, F. An extended TODIM method for multiple attribute group decision-making based on 2-dimension uncertain linguistic variable. Complexity 2016, 21, 20–30. [Google Scholar] [CrossRef]
  8. Liu, P.D.; Li, H.; Yu, X.C. Generalized hybrid aggregation operators based on the 2-dimension uncertain linguistic information for multiple attribute group decision making. Group Decis. Negot. 2016, 25, 103–126. [Google Scholar] [CrossRef]
  9. Liu, P.D.; Wang, P. Some improved linguistic intuitionistic fuzzy aggregation operators and their applications to multiple-attribute decision making. Int. J. Inf. Technol. Decis. Mak. 2017, 16, 817–850. [Google Scholar] [CrossRef]
  10. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic; American Research Press: Rehoboth, DE, USA, 1998. [Google Scholar]
  11. Smarandache, F. A unifying field in logics. In Neutrosophy: Neutrosophic Probability, Set and Logic; American Research Press: Rehoboth, DE, USA, 1999. [Google Scholar]
  12. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Multispace Multistruct. 2010, 4, 410–413. [Google Scholar]
  13. Ye, J. An extended TOPSIS method for multiple attribute group decision making based on single valued neutrosophic linguistic numbers. J. Intell. Fuzzy Syst. 2015, 28, 247–255. [Google Scholar]
  14. Liu, P.D.; Shi, L.L. Some neutrosophic uncertain linguistic number Heronian mean operators and their application to multi-attribute group decision making. Neural Comput. Appl. 2017, 28, 1079–1093. [Google Scholar] [CrossRef]
  15. Bonferroni, C. Sulle medie multiple di potenze. Bollettino dell'Unione Matematica Italiana 1950, 5, 267–270. (In Italian) [Google Scholar]
  16. Zhou, W.; He, J.M. Intuitionistic fuzzy normalized weighted Bonferroni mean and its application in multicriteria decision making. J. Appl. Math. 2012, 2012, 1–22. [Google Scholar] [CrossRef]
  17. Zhu, B.; Xu, Z.S.; Xia, M.M. Hesitant fuzzy geometric Bonferroni means. Inf. Sci. 2012, 205, 72–85. [Google Scholar] [CrossRef]
  18. Sun, M.; Liu, J. Normalized geometric Bonferroni operators of hesitant fuzzy sets and their application in multiple attribute decision making. J. Inf. Comput. Sci. 2013, 10, 2815–2822. [Google Scholar] [CrossRef]
  19. Liu, P.D.; Chen, S.M.; Liu, J.L. Some intuitionistic fuzzy interaction partitioned Bonferroni mean operators and their application to multi-attribute group decision making. Inf. Sci. 2017, 411, 98–121. [Google Scholar] [CrossRef]
  20. Liu, P.D.; Li, H.G. Interval-valued intuitionistic fuzzy power Bonferroni aggregation operators and their application to group decision making. Cogn. Comput. 2017, 9, 494–512. [Google Scholar] [CrossRef]
  21. Fang, Z.B.; Ye, J. Multiple Attribute Group Decision-Making Method Based on Linguistic Neutrosophic Numbers. Symmetry 2017, 9, 111. [Google Scholar] [CrossRef]
Table 1. The linguistic neutrosophic decision matrix R_y of the expert E_y.
    | C_1 | … | C_n
A_1 | ⟨l_{T_11^y}, l_{I_11^y}, l_{F_11^y}⟩ | … | ⟨l_{T_1n^y}, l_{I_1n^y}, l_{F_1n^y}⟩
A_2 | ⟨l_{T_21^y}, l_{I_21^y}, l_{F_21^y}⟩ | … | ⟨l_{T_2n^y}, l_{I_2n^y}, l_{F_2n^y}⟩
⋮   | ⋮ | ⋱ | ⋮
A_m | ⟨l_{T_m1^y}, l_{I_m1^y}, l_{F_m1^y}⟩ | … | ⟨l_{T_mn^y}, l_{I_mn^y}, l_{F_mn^y}⟩
Table 2. The LNN decision matrix R_1 of the expert E_1.
    | C_1 | C_2 | C_3
A_1 | ⟨l_6^1, l_1^1, l_2^1⟩ | ⟨l_7^1, l_2^1, l_1^1⟩ | ⟨l_6^1, l_2^1, l_2^1⟩
A_2 | ⟨l_7^1, l_1^1, l_1^1⟩ | ⟨l_7^1, l_3^1, l_2^1⟩ | ⟨l_7^1, l_2^1, l_1^1⟩
A_3 | ⟨l_6^1, l_2^1, l_2^1⟩ | ⟨l_7^1, l_1^1, l_1^1⟩ | ⟨l_6^1, l_2^1, l_2^1⟩
A_4 | ⟨l_7^1, l_1^1, l_2^1⟩ | ⟨l_7^1, l_2^1, l_3^1⟩ | ⟨l_7^1, l_2^1, l_1^1⟩
Table 3. The LNN decision matrix R_2 of the expert E_2.
    | C_1 | C_2 | C_3
A_1 | ⟨l_6^2, l_1^2, l_2^2⟩ | ⟨l_6^2, l_1^2, l_1^2⟩ | ⟨l_4^2, l_2^2, l_3^2⟩
A_2 | ⟨l_7^2, l_2^2, l_3^2⟩ | ⟨l_6^2, l_1^2, l_1^2⟩ | ⟨l_4^2, l_2^2, l_3^2⟩
A_3 | ⟨l_5^2, l_1^2, l_2^2⟩ | ⟨l_5^2, l_1^2, l_2^2⟩ | ⟨l_5^2, l_4^2, l_2^2⟩
A_4 | ⟨l_6^2, l_1^2, l_1^2⟩ | ⟨l_5^2, l_1^2, l_1^2⟩ | ⟨l_5^2, l_2^2, l_3^2⟩
Table 4. The LNN decision matrix R_3 of the expert E_3.
    | C_1 | C_2 | C_3
A_1 | ⟨l_7^3, l_3^3, l_4^3⟩ | ⟨l_7^3, l_3^3, l_3^3⟩ | ⟨l_5^3, l_2^3, l_5^3⟩
A_2 | ⟨l_6^3, l_3^3, l_4^3⟩ | ⟨l_5^3, l_1^3, l_2^3⟩ | ⟨l_6^3, l_2^3, l_3^3⟩
A_3 | ⟨l_7^3, l_2^3, l_4^3⟩ | ⟨l_6^3, l_1^3, l_2^3⟩ | ⟨l_7^3, l_2^3, l_4^3⟩
A_4 | ⟨l_7^3, l_2^3, l_3^3⟩ | ⟨l_5^3, l_2^3, l_1^3⟩ | ⟨l_6^3, l_1^3, l_1^3⟩
Table 5. The integrated matrix R.
    | C_1 | C_2 | C_3
A_1 | ⟨l_6.3176, l_1.5682, l_2.6129⟩ | ⟨l_6.6819, l_1.9641, l_1.5682⟩ | ⟨l_5.0059, l_2.000, l_3.2898⟩
A_2 | ⟨l_6.7045, l_1.9476, l_2.6308⟩ | ⟨l_6.0524, l_1.6728, l_1.6636⟩ | ⟨l_5.7033, l_2.000, l_2.3074⟩
A_3 | ⟨l_5.9943, l_1.6636, l_2.6129⟩ | ⟨l_6.0264, l_1.000, l_1.6430⟩ | ⟨l_5.9943, l_2.6613, l_2.6129⟩
A_4 | ⟨l_6.6819, l_1.2955, l_1.9641⟩ | ⟨l_5.6926, l_1.6636, l_1.6728⟩ | ⟨l_6.0264, l_1.6824, l_1.6170⟩
Table 6. The ranking based on the LNNNWBM operator with different values of p and q.
p, q | LNNNWBM Operator | Ranking
p = 1, q = 0 | E(a1) = 0.7528, E(a2) = 0.7777, E(a3) = 0.7613, E(a4) = 0.8060 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 1, q = 0.5 | E(a1) = 0.7311, E(a2) = 0.7534, E(a3) = 0.7435, E(a4) = 0.7886 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 1, q = 2 | E(a1) = 0.7329, E(a2) = 0.7545, E(a3) = 0.7453, E(a4) = 0.7897 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 0, q = 0 | E(a1) = 0.7573, E(a2) = 0.7766, E(a3) = 0.7656, E(a4) = 0.8046 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 0.5, q = 1 | E(a1) = 0.7326, E(a2) = 0.7530, E(a3) = 0.7449, E(a4) = 0.7879 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 2, q = 1 | E(a1) = 0.7349, E(a2) = 0.7562, E(a3) = 0.7463, E(a4) = 0.7902 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 2, q = 2 | E(a1) = 0.7343, E(a2) = 0.7537, E(a3) = 0.7458, E(a4) = 0.7884 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
Table 7. The ranking based on the LNNNWGBM operator with different values of p and q.
p, q | LNNNWGBM Operator | Ranking
p = 1, q = 0 | E(a1) = 0.7397, E(a2) = 0.7747, E(a3) = 0.7531, E(a4) = 0.8035 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 1, q = 0.5 | E(a1) = 0.7342, E(a2) = 0.7545, E(a3) = 0.7453, E(a4) = 0.7891 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 1, q = 2 | E(a1) = 0.7343, E(a2) = 0.7548, E(a3) = 0.7457, E(a4) = 0.7889 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 0, q = 1 | E(a1) = 0.7437, E(a2) = 0.7730, E(a3) = 0.7570, E(a4) = 0.8019 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 0.5, q = 1 | E(a1) = 0.7356, E(a2) = 0.7541, E(a3) = 0.7467, E(a4) = 0.7885 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 2, q = 1 | E(a1) = 0.7330, E(a2) = 0.7553, E(a3) = 0.7445, E(a4) = 0.7895 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
p = 2, q = 2 | E(a1) = 0.7334, E(a2) = 0.7530, E(a3) = 0.7441, E(a4) = 0.7877 | A_4 ≻ A_2 ≻ A_3 ≻ A_1
