Article

Linguistic Neutrosophic Numbers Einstein Operator and Its Application in Decision Making

Department of Computer Science, Shaoxing University, Shaoxing 312000, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(5), 389; https://doi.org/10.3390/math7050389
Submission received: 28 March 2019 / Revised: 24 April 2019 / Accepted: 25 April 2019 / Published: 28 April 2019
(This article belongs to the Special Issue New Challenges in Neutrosophic Theory and Applications)

Abstract

Linguistic neutrosophic numbers (LNNs), proposed by Fang and Ye, combine single-valued neutrosophic numbers and linguistic variables. In this paper, we define the LNN Einstein sum, LNN Einstein product, and LNN Einstein exponentiation operations based on the Einstein operations, and we analyze some of the relationships between these operations. For LNN aggregation problems, we put forward two kinds of LNN aggregation operators: the LNN Einstein weighted average (LNNEWA) operator and the LNN Einstein weighted geometric (LNNEWG) operator. We then present a method for solving decision-making problems based on the LNNEWA and LNNEWG operators in a linguistic neutrosophic environment. Finally, we apply an example to verify the feasibility of these two methods.

1. Introduction

Smarandache [1] proposed the neutrosophic set (NS) in 1998. Compared with intuitionistic fuzzy sets (IFSs), the NS adds a measurement of indeterminacy, so decision makers can describe an evaluation with truth, indeterminacy, and falsity degrees, respectively. In the NS, the degree of indeterminacy is quantified, and the three degrees are completely independent of each other, so the NS is a generalized set with greater capacity to express and handle fuzzy data. At present, research on NS theory mainly includes the basic theory of NSs, fuzzy decision making with NSs, and extensions of the NS [2,3,4,5,6,7,8,9,10,11,12,13,14]. Recently, Fang and Ye [15] presented the linguistic neutrosophic number (LNN). Soon afterwards, many research topics on LNNs were proposed [16,17,18].
Information aggregation operators have become an important research topic with a wide range of results. Yager [19] put forward the ordered weighted average (OWA) operator, which accounts for the sorted position of the data. Xu [20] presented arithmetic aggregation (AA) operators for IFSs. Xu and Yager [21] presented geometric aggregation (GA) operators for IFSs. Zhao et al. [22] proposed generalized aggregation operators based on IFSs and proved that AA and GA are special cases of the generalized aggregation operator. The operators mentioned above are established on the algebraic sum and algebraic product of number sets, which are, respectively, special cases of Archimedean t-conorms and t-norms for building the union or intersection operations of a number set. The Einstein union and intersection form a kind of Archimedean t-conorm and t-norm with good smoothness characteristics [23]. Wang and Liu [24] built some IF Einstein aggregation operators and proved that the Einstein aggregation operator has better smoothness than the arithmetic aggregation operator. Zhao and Wei [25] put forward the IF Einstein hybrid-average (IFEHA) operator and the IF Einstein hybrid-geometric (IFEHG) operator. Further, Guo et al. [26] applied the Einstein operation to hesitant fuzzy sets. Yang et al. [27] put forward novel power aggregation operators based on Einstein operations for interval neutrosophic linguistic sets. However, neutrosophic linguistic sets differ from linguistic neutrosophic sets: the former still use two kinds of values to describe an evaluation, while the latter can describe an evaluation with purely linguistic values. As far as we know, this is the first work on Einstein aggregation operators for LNNs.
It must be noted that the aggregation operators in References [15,16,17,18] are mostly based on the common algebraic product and algebraic sum of LNNs for the combination process, which is not the only choice of operational laws for modeling the intersection and union of LNNs. Thus, we establish operational rules for LNNs based on the Einstein operations and put forward the LNN Einstein weighted-average (LNNEWA) operator and the LNN Einstein weighted-geometric (LNNEWG) operator. These operators are then utilized to solve some relevant decision-making problems.
The rest of this paper is organized as follows. In Section 2, the concepts of LNNs and the Einstein operations are described, operational laws of LNNs based on the Einstein operations are defined, and their properties are analyzed. In Section 3, the LNNEWA and LNNEWG operators are proposed. In Section 4, multiple attribute group decision making (MAGDM) methods are built based on the LNNEWA and LNNEWG operators. In Section 5, an illustrative example is given. In Section 6, conclusions and future research are given.

2. Basic Theories

2.1. LNN and Its Operational Laws

Definition 1.
[15] Let $\Psi=\{\psi_t \mid t\in[0,k]\}$ be a finite linguistic term set, where $\psi_t$ is a linguistic variable and $k+1$ is the cardinality of $\Psi$. Define $u=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle$, in which $\psi_\beta,\psi_\gamma,\psi_\delta\in\Psi$ and $\beta,\gamma,\delta\in[0,k]$; $\psi_\beta$, $\psi_\gamma$, and $\psi_\delta$ express the truth, indeterminacy, and falsity degrees, respectively. Then $u$ is called an LNN.
Definition 2.
[15] Let $u=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle$, $u_1=\langle\psi_{\beta_1},\psi_{\gamma_1},\psi_{\delta_1}\rangle$, and $u_2=\langle\psi_{\beta_2},\psi_{\gamma_2},\psi_{\delta_2}\rangle$ be three LNNs in $\Psi$ and $\lambda\ge 0$. The operational rules are as follows:

$$u_1\oplus u_2=\langle\psi_{\beta_1},\psi_{\gamma_1},\psi_{\delta_1}\rangle\oplus\langle\psi_{\beta_2},\psi_{\gamma_2},\psi_{\delta_2}\rangle=\left\langle \psi_{\beta_1+\beta_2-\frac{\beta_1\beta_2}{k}},\ \psi_{\frac{\gamma_1\gamma_2}{k}},\ \psi_{\frac{\delta_1\delta_2}{k}}\right\rangle;$$

$$u_1\otimes u_2=\langle\psi_{\beta_1},\psi_{\gamma_1},\psi_{\delta_1}\rangle\otimes\langle\psi_{\beta_2},\psi_{\gamma_2},\psi_{\delta_2}\rangle=\left\langle \psi_{\frac{\beta_1\beta_2}{k}},\ \psi_{\gamma_1+\gamma_2-\frac{\gamma_1\gamma_2}{k}},\ \psi_{\delta_1+\delta_2-\frac{\delta_1\delta_2}{k}}\right\rangle;$$

$$\lambda u=\lambda\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle=\left\langle \psi_{k-k\left(1-\frac{\beta}{k}\right)^{\lambda}},\ \psi_{k\left(\frac{\gamma}{k}\right)^{\lambda}},\ \psi_{k\left(\frac{\delta}{k}\right)^{\lambda}}\right\rangle;$$

$$u^{\lambda}=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle^{\lambda}=\left\langle \psi_{k\left(\frac{\beta}{k}\right)^{\lambda}},\ \psi_{k-k\left(1-\frac{\gamma}{k}\right)^{\lambda}},\ \psi_{k-k\left(1-\frac{\delta}{k}\right)^{\lambda}}\right\rangle.$$
Definition 3.
[15] Let $u=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle$ be an LNN in $\Psi$. We define the expectation $\zeta(u)$ and the accuracy $\eta(u)$ as:

$$\zeta(u)=\frac{2k+\beta-\gamma-\delta}{3k},$$

$$\eta(u)=\frac{\beta-\delta}{k}.$$
Definition 4.
[15] Let $u_1=\langle\psi_{\beta_1},\psi_{\gamma_1},\psi_{\delta_1}\rangle$ and $u_2=\langle\psi_{\beta_2},\psi_{\gamma_2},\psi_{\delta_2}\rangle$ be two LNNs in $\Psi$. Then:
If $\zeta(u_1)>\zeta(u_2)$, then $u_1\succ u_2$;
If $\zeta(u_1)=\zeta(u_2)$, then:
If $\eta(u_1)>\eta(u_2)$, then $u_1\succ u_2$;
If $\eta(u_1)=\eta(u_2)$, then $u_1\sim u_2$.
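The expectation and accuracy functions above are simple to compute. The following sketch (Python; the index-triple representation, the scale top index k = 8, and the names `expectation`, `accuracy`, and `ranks_higher` are our illustrative choices, not part of the original definitions) implements the two-stage ranking rule of Definition 4.

```python
# Expectation and accuracy of an LNN (Definition 3) and the ranking rule
# of Definition 4. An LNN <psi_beta, psi_gamma, psi_delta> is represented
# by its index triple (beta, gamma, delta); k is the top index of the
# linguistic term set (k = 8 is an illustrative choice).

def expectation(u, k=8):
    beta, gamma, delta = u
    return (2 * k + beta - gamma - delta) / (3 * k)

def accuracy(u, k=8):
    beta, _, delta = u
    return (beta - delta) / k

def ranks_higher(u1, u2, k=8):
    """True if u1 ranks strictly above u2: expectation first, accuracy second."""
    z1, z2 = expectation(u1, k), expectation(u2, k)
    if z1 != z2:
        return z1 > z2
    return accuracy(u1, k) > accuracy(u2, k)

print(ranks_higher((6, 2, 3), (5, 2, 3)))  # the higher truth degree wins
```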

2.2. Einstein Operation

Definition 5.
[28,29] For any two real numbers $a,b\in[0,1]$, the Einstein t-conorm $\oplus_e$ (an Archimedean t-conorm) and the Einstein t-norm $\otimes_e$ (an Archimedean t-norm) are defined as:

$$a\oplus_e b=\frac{a+b}{1+ab},\qquad a\otimes_e b=\frac{ab}{1+(1-a)(1-b)}.$$
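As a quick numerical illustration of Definition 5, both Einstein operations are one-liners; in this sketch (Python, function names ours) the endpoints 0 and 1 act as the neutral elements of the t-conorm and t-norm, respectively.

```python
# Einstein t-conorm and t-norm on [0, 1] (Definition 5).

def einstein_sum(a: float, b: float) -> float:
    """Einstein t-conorm: (a + b) / (1 + a*b)."""
    return (a + b) / (1 + a * b)

def einstein_product(a: float, b: float) -> float:
    """Einstein t-norm: a*b / (1 + (1 - a)*(1 - b))."""
    return (a * b) / (1 + (1 - a) * (1 - b))

print(einstein_sum(0.5, 0.5))      # (0.5 + 0.5) / (1 + 0.25) = 0.8
print(einstein_product(0.5, 0.5))  # 0.25 / (1 + 0.25) = 0.2
```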

2.3. Einstein Operation Under the Linguistic Neutrosophic Number

Definition 6.
Let $u=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle$, $u_1=\langle\psi_{\beta_1},\psi_{\gamma_1},\psi_{\delta_1}\rangle$, and $u_2=\langle\psi_{\beta_2},\psi_{\gamma_2},\psi_{\delta_2}\rangle$ be three LNNs in $\Psi$ and $\lambda\ge 0$. The Einstein operations $\oplus_e$ and $\otimes_e$ on linguistic neutrosophic numbers are defined as follows:

$$u_1\oplus_e u_2=\left\langle \psi_{\frac{k^2(\beta_1+\beta_2)}{k^2+\beta_1\beta_2}},\ \psi_{\frac{k\gamma_1\gamma_2}{k^2+(k-\gamma_1)(k-\gamma_2)}},\ \psi_{\frac{k\delta_1\delta_2}{k^2+(k-\delta_1)(k-\delta_2)}}\right\rangle;$$

$$u_1\otimes_e u_2=\left\langle \psi_{\frac{k\beta_1\beta_2}{k^2+(k-\beta_1)(k-\beta_2)}},\ \psi_{\frac{k^2(\gamma_1+\gamma_2)}{k^2+\gamma_1\gamma_2}},\ \psi_{\frac{k^2(\delta_1+\delta_2)}{k^2+\delta_1\delta_2}}\right\rangle;$$

$$\lambda u=\left\langle \psi_{\frac{k\left[(k+\beta)^\lambda-(k-\beta)^\lambda\right]}{(k+\beta)^\lambda+(k-\beta)^\lambda}},\ \psi_{\frac{2k\gamma^\lambda}{(2k-\gamma)^\lambda+\gamma^\lambda}},\ \psi_{\frac{2k\delta^\lambda}{(2k-\delta)^\lambda+\delta^\lambda}}\right\rangle;$$

$$u^\lambda=\left\langle \psi_{\frac{2k\beta^\lambda}{(2k-\beta)^\lambda+\beta^\lambda}},\ \psi_{\frac{k\left[(k+\gamma)^\lambda-(k-\gamma)^\lambda\right]}{(k+\gamma)^\lambda+(k-\gamma)^\lambda}},\ \psi_{\frac{k\left[(k+\delta)^\lambda-(k-\delta)^\lambda\right]}{(k+\delta)^\lambda+(k-\delta)^\lambda}}\right\rangle.$$
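To make these four operations concrete, they can be coded directly from the formulas above. The sketch below (Python; the tuple representation and the k = 8 scale are our illustrative choices) also makes properties such as commutativity easy to check numerically.

```python
# Einstein operations on LNNs (Definition 6). An LNN <psi_beta, psi_gamma,
# psi_delta> is represented by the index triple (beta, gamma, delta) with
# 0 <= beta, gamma, delta <= k; k = 8 is an illustrative scale.

def lnn_esum(u1, u2, k=8):
    (b1, g1, d1), (b2, g2, d2) = u1, u2
    return (k * k * (b1 + b2) / (k * k + b1 * b2),
            k * g1 * g2 / (k * k + (k - g1) * (k - g2)),
            k * d1 * d2 / (k * k + (k - d1) * (k - d2)))

def lnn_eprod(u1, u2, k=8):
    (b1, g1, d1), (b2, g2, d2) = u1, u2
    return (k * b1 * b2 / (k * k + (k - b1) * (k - b2)),
            k * k * (g1 + g2) / (k * k + g1 * g2),
            k * k * (d1 + d2) / (k * k + d1 * d2))

def lnn_scale(lam, u, k=8):
    b, g, d = u
    return (k * ((k + b) ** lam - (k - b) ** lam) / ((k + b) ** lam + (k - b) ** lam),
            2 * k * g ** lam / ((2 * k - g) ** lam + g ** lam),
            2 * k * d ** lam / ((2 * k - d) ** lam + d ** lam))

def lnn_power(u, lam, k=8):
    b, g, d = u
    return (2 * k * b ** lam / ((2 * k - b) ** lam + b ** lam),
            k * ((k + g) ** lam - (k - g) ** lam) / ((k + g) ** lam + (k - g) ** lam),
            k * ((k + d) ** lam - (k - d) ** lam) / ((k + d) ** lam + (k - d) ** lam))

print(lnn_scale(1, (6, 2, 3)))  # lambda = 1 returns the LNN unchanged: (6.0, 2.0, 3.0)
```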
Theorem 1.
Let $u=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle$, $u_1=\langle\psi_{\beta_1},\psi_{\gamma_1},\psi_{\delta_1}\rangle$, and $u_2=\langle\psi_{\beta_2},\psi_{\gamma_2},\psi_{\delta_2}\rangle$ be three LNNs in $\Psi$ and $\lambda\ge 0$. Then the Einstein operations $\oplus_e$ and $\otimes_e$ have the following properties:
(1) $u_1\oplus_e u_2=u_2\oplus_e u_1$;
(2) $u_1\otimes_e u_2=u_2\otimes_e u_1$;
(3) $\lambda(u_1\oplus_e u_2)=\lambda u_1\oplus_e\lambda u_2$;
(4) $(u_1\otimes_e u_2)^\lambda=u_1^\lambda\otimes_e u_2^\lambda$.
Proof. 
Properties (1) and (2) follow directly from the symmetry of the $\oplus_e$ and $\otimes_e$ rules, so we omit them. We now prove property (3).
According to Definition 6,

$$u_1\oplus_e u_2=\left\langle \psi_{\frac{k^2(\beta_1+\beta_2)}{k^2+\beta_1\beta_2}},\ \psi_{\frac{k\gamma_1\gamma_2}{k^2+(k-\gamma_1)(k-\gamma_2)}},\ \psi_{\frac{k\delta_1\delta_2}{k^2+(k-\delta_1)(k-\delta_2)}}\right\rangle.$$

Substituting this into the scalar-multiplication rule of Definition 6 and simplifying each component gives

$$\lambda(u_1\oplus_e u_2)=\left\langle \psi_{\frac{k\left[(k+\beta_1)^\lambda(k+\beta_2)^\lambda-(k-\beta_1)^\lambda(k-\beta_2)^\lambda\right]}{(k+\beta_1)^\lambda(k+\beta_2)^\lambda+(k-\beta_1)^\lambda(k-\beta_2)^\lambda}},\ \psi_{\frac{2k(\gamma_1\gamma_2)^\lambda}{(2k-\gamma_1)^\lambda(2k-\gamma_2)^\lambda+(\gamma_1\gamma_2)^\lambda}},\ \psi_{\frac{2k(\delta_1\delta_2)^\lambda}{(2k-\delta_1)^\lambda(2k-\delta_2)^\lambda+(\delta_1\delta_2)^\lambda}}\right\rangle.$$

On the other hand,

$$\lambda u_1=\left\langle \psi_{\frac{k\left[(k+\beta_1)^\lambda-(k-\beta_1)^\lambda\right]}{(k+\beta_1)^\lambda+(k-\beta_1)^\lambda}},\ \psi_{\frac{2k\gamma_1^\lambda}{(2k-\gamma_1)^\lambda+\gamma_1^\lambda}},\ \psi_{\frac{2k\delta_1^\lambda}{(2k-\delta_1)^\lambda+\delta_1^\lambda}}\right\rangle,\qquad \lambda u_2=\left\langle \psi_{\frac{k\left[(k+\beta_2)^\lambda-(k-\beta_2)^\lambda\right]}{(k+\beta_2)^\lambda+(k-\beta_2)^\lambda}},\ \psi_{\frac{2k\gamma_2^\lambda}{(2k-\gamma_2)^\lambda+\gamma_2^\lambda}},\ \psi_{\frac{2k\delta_2^\lambda}{(2k-\delta_2)^\lambda+\delta_2^\lambda}}\right\rangle,$$

and combining these with the $\oplus_e$ rule of Definition 6 and simplifying yields exactly the same expression. So $\lambda(u_1\oplus_e u_2)=\lambda u_1\oplus_e\lambda u_2$.
Now we prove property (4). By Definition 6,

$$u_1^\lambda=\left\langle \psi_{\frac{2k\beta_1^\lambda}{(2k-\beta_1)^\lambda+\beta_1^\lambda}},\ \psi_{\frac{k\left[(k+\gamma_1)^\lambda-(k-\gamma_1)^\lambda\right]}{(k+\gamma_1)^\lambda+(k-\gamma_1)^\lambda}},\ \psi_{\frac{k\left[(k+\delta_1)^\lambda-(k-\delta_1)^\lambda\right]}{(k+\delta_1)^\lambda+(k-\delta_1)^\lambda}}\right\rangle,$$

and similarly for $u_2^\lambda$. Applying the $\otimes_e$ rule and simplifying gives

$$u_1^\lambda\otimes_e u_2^\lambda=\left\langle \psi_{\frac{2k(\beta_1\beta_2)^\lambda}{(2k-\beta_1)^\lambda(2k-\beta_2)^\lambda+(\beta_1\beta_2)^\lambda}},\ \psi_{\frac{k\left[(k+\gamma_1)^\lambda(k+\gamma_2)^\lambda-(k-\gamma_1)^\lambda(k-\gamma_2)^\lambda\right]}{(k+\gamma_1)^\lambda(k+\gamma_2)^\lambda+(k-\gamma_1)^\lambda(k-\gamma_2)^\lambda}},\ \psi_{\frac{k\left[(k+\delta_1)^\lambda(k+\delta_2)^\lambda-(k-\delta_1)^\lambda(k-\delta_2)^\lambda\right]}{(k+\delta_1)^\lambda(k+\delta_2)^\lambda+(k-\delta_1)^\lambda(k-\delta_2)^\lambda}}\right\rangle.$$

Meanwhile, substituting

$$u_1\otimes_e u_2=\left\langle \psi_{\frac{k\beta_1\beta_2}{k^2+(k-\beta_1)(k-\beta_2)}},\ \psi_{\frac{k^2(\gamma_1+\gamma_2)}{k^2+\gamma_1\gamma_2}},\ \psi_{\frac{k^2(\delta_1+\delta_2)}{k^2+\delta_1\delta_2}}\right\rangle$$

into the exponentiation rule of Definition 6 and simplifying gives the same expression. So $(u_1\otimes_e u_2)^\lambda=u_1^\lambda\otimes_e u_2^\lambda$. □

3. Einstein Aggregation Operators

3.1. LNNEWA Operator

Definition 7.
Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ ($i=1,2,\ldots,z$) be a collection of LNNs in $\Psi$. We define the LNNEWA operator as

$$LNNEWA(u_1,u_2,\ldots,u_z)=\mathop{\oplus_e}\limits_{i=1}^{z}\epsilon_i u_i,$$

with the weight vector $\epsilon=(\epsilon_1,\epsilon_2,\ldots,\epsilon_z)^T$, where $\sum_{i=1}^{z}\epsilon_i=1$ and $\epsilon_i\in[0,1]$.
Theorem 2.
Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ ($i=1,2,\ldots,z$) be a collection of LNNs in $\Psi$. Then the LNNEWA aggregation operator yields

$$LNNEWA(u_1,u_2,\ldots,u_z)=\mathop{\oplus_e}\limits_{i=1}^{z}\epsilon_i u_i=\left\langle \psi_{\frac{k\left[\prod_{i=1}^{z}(k+\beta_i)^{\epsilon_i}-\prod_{i=1}^{z}(k-\beta_i)^{\epsilon_i}\right]}{\prod_{i=1}^{z}(k+\beta_i)^{\epsilon_i}+\prod_{i=1}^{z}(k-\beta_i)^{\epsilon_i}}},\ \psi_{\frac{2k\prod_{i=1}^{z}\gamma_i^{\epsilon_i}}{\prod_{i=1}^{z}(2k-\gamma_i)^{\epsilon_i}+\prod_{i=1}^{z}\gamma_i^{\epsilon_i}}},\ \psi_{\frac{2k\prod_{i=1}^{z}\delta_i^{\epsilon_i}}{\prod_{i=1}^{z}(2k-\delta_i)^{\epsilon_i}+\prod_{i=1}^{z}\delta_i^{\epsilon_i}}}\right\rangle \tag{17}$$

with the weight vector $\epsilon=(\epsilon_1,\epsilon_2,\ldots,\epsilon_z)^T$, $\sum_{i=1}^{z}\epsilon_i=1$ and $\epsilon_i\in[0,1]$.
Proof. 
By Definition 6,

$$\epsilon_i u_i=\left\langle \psi_{\frac{k\left[(k+\beta_i)^{\epsilon_i}-(k-\beta_i)^{\epsilon_i}\right]}{(k+\beta_i)^{\epsilon_i}+(k-\beta_i)^{\epsilon_i}}},\ \psi_{\frac{2k\gamma_i^{\epsilon_i}}{(2k-\gamma_i)^{\epsilon_i}+\gamma_i^{\epsilon_i}}},\ \psi_{\frac{2k\delta_i^{\epsilon_i}}{(2k-\delta_i)^{\epsilon_i}+\delta_i^{\epsilon_i}}}\right\rangle.$$

We proceed by induction on $z$. For $z=2$, applying the $\oplus_e$ rule of Definition 6 to $\epsilon_1 u_1$ and $\epsilon_2 u_2$ and simplifying each component gives

$$LNNEWA(u_1,u_2)=\epsilon_1 u_1\oplus_e\epsilon_2 u_2=\left\langle \psi_{\frac{k\left[\prod_{i=1}^{2}(k+\beta_i)^{\epsilon_i}-\prod_{i=1}^{2}(k-\beta_i)^{\epsilon_i}\right]}{\prod_{i=1}^{2}(k+\beta_i)^{\epsilon_i}+\prod_{i=1}^{2}(k-\beta_i)^{\epsilon_i}}},\ \psi_{\frac{2k\prod_{i=1}^{2}\gamma_i^{\epsilon_i}}{\prod_{i=1}^{2}(2k-\gamma_i)^{\epsilon_i}+\prod_{i=1}^{2}\gamma_i^{\epsilon_i}}},\ \psi_{\frac{2k\prod_{i=1}^{2}\delta_i^{\epsilon_i}}{\prod_{i=1}^{2}(2k-\delta_i)^{\epsilon_i}+\prod_{i=1}^{2}\delta_i^{\epsilon_i}}}\right\rangle,$$

so Equation (17) holds for $z=2$. Suppose Equation (17) holds for $z=m$. For $z=m+1$,

$$LNNEWA(u_1,\ldots,u_m,u_{m+1})=\Big(\mathop{\oplus_e}\limits_{i=1}^{m}\epsilon_i u_i\Big)\oplus_e\epsilon_{m+1}u_{m+1};$$

applying the $\oplus_e$ rule once more, each product over $i=1,\ldots,m$ absorbs the corresponding factor for $i=m+1$, which gives Equation (17) with the products running to $m+1$. Hence Equation (17) is satisfied for every $z$.
This proves Theorem 2. □
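The closed form of Theorem 2 translates directly into code. The following sketch (Python, using `math.prod`; the index-triple representation and k = 8 are our illustrative conventions) computes an LNNEWA value, and the idempotency of Theorem 3 below provides a convenient numerical sanity check.

```python
from math import prod

# Closed-form LNNEWA operator (Theorem 2). Each LNN is an index triple
# (beta, gamma, delta) on a scale with top index k; eps is the weight
# vector with entries in [0, 1] summing to 1.

def lnnewa(lnns, eps, k=8):
    pb = prod((k + b) ** e for (b, _, _), e in zip(lnns, eps))
    mb = prod((k - b) ** e for (b, _, _), e in zip(lnns, eps))
    pg = prod(g ** e for (_, g, _), e in zip(lnns, eps))
    qg = prod((2 * k - g) ** e for (_, g, _), e in zip(lnns, eps))
    pd = prod(d ** e for (_, _, d), e in zip(lnns, eps))
    qd = prod((2 * k - d) ** e for (_, _, d), e in zip(lnns, eps))
    return (k * (pb - mb) / (pb + mb),
            2 * k * pg / (qg + pg),
            2 * k * pd / (qd + pd))

# Idempotency check: aggregating copies of one LNN returns that LNN.
print(lnnewa([(6, 2, 3)] * 3, [0.2, 0.3, 0.5]))
```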
Theorem 3.
(Idempotency). Let $u=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle$ be an LNN in $\Psi$. If every $u_i=u$ ($i=1,2,\ldots,z$), then:

$$LNNEWA(u_1,u_2,\ldots,u_z)=LNNEWA(u,u,\ldots,u)=u.$$
Proof. 
Since $u_i=u$, we have $\beta_i=\beta$, $\gamma_i=\gamma$, and $\delta_i=\delta$ ($i=1,2,\ldots,z$). Because $\sum_{i=1}^{z}\epsilon_i=1$, every product in Theorem 2 collapses, e.g. $\prod_{i=1}^{z}(k+\beta)^{\epsilon_i}=k+\beta$, and therefore

$$LNNEWA(u_1,u_2,\ldots,u_z)=LNNEWA(u,u,\ldots,u)=\left\langle \psi_{\frac{k\left[(k+\beta)-(k-\beta)\right]}{(k+\beta)+(k-\beta)}},\ \psi_{\frac{2k\gamma}{(2k-\gamma)+\gamma}},\ \psi_{\frac{2k\delta}{(2k-\delta)+\delta}}\right\rangle=\langle\psi_\beta,\psi_\gamma,\psi_\delta\rangle=u. \quad\square$$
Theorem 4.
(Monotonicity). Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ and $u_i'=\langle\psi_{\beta_i'},\psi_{\gamma_i'},\psi_{\delta_i'}\rangle$ ($i=1,2,\ldots,z$) be two collections of LNNs in $\Psi$. If $u_i\le u_i'$ for every $i$, then

$$LNNEWA(u_1,u_2,\ldots,u_z)\le LNNEWA(u_1',u_2',\ldots,u_z').$$
Proof. 
Since $u_i\le u_i'$, we have $\epsilon_i u_i\le\epsilon_i u_i'$, so we can easily obtain:

$$\mathop{\oplus_e}\limits_{i=1}^{z}\epsilon_i u_i\le\mathop{\oplus_e}\limits_{i=1}^{z}\epsilon_i u_i'.$$

As $LNNEWA(u_1,u_2,\ldots,u_z)=\mathop{\oplus_e}\limits_{i=1}^{z}\epsilon_i u_i$ and $LNNEWA(u_1',u_2',\ldots,u_z')=\mathop{\oplus_e}\limits_{i=1}^{z}\epsilon_i u_i'$, it follows that $LNNEWA(u_1,u_2,\ldots,u_z)\le LNNEWA(u_1',u_2',\ldots,u_z')$. □
Theorem 5.
(Boundedness). Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ ($i=1,2,\ldots,z$) be a collection of LNNs in $\Psi$, and let $u^-=\langle\min_i(\psi_{\beta_i}),\max_i(\psi_{\gamma_i}),\max_i(\psi_{\delta_i})\rangle$ and $u^+=\langle\max_i(\psi_{\beta_i}),\min_i(\psi_{\gamma_i}),\min_i(\psi_{\delta_i})\rangle$. Then:

$$u^-\le LNNEWA(u_1,u_2,\ldots,u_z)\le u^+.$$
Proof. 
The following can be obtained by using Theorem 3:

$$u^-=LNNEWA(u^-,u^-,\ldots,u^-),\qquad u^+=LNNEWA(u^+,u^+,\ldots,u^+).$$

The following can be obtained by using Theorem 4:

$$LNNEWA(u^-,u^-,\ldots,u^-)\le LNNEWA(u_1,u_2,\ldots,u_z)\le LNNEWA(u^+,u^+,\ldots,u^+).$$

Combining the above, we get:

$$u^-\le LNNEWA(u_1,u_2,\ldots,u_z)\le u^+.$$
 □

3.2. LNNEWG Operators

Definition 8.
Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ ($i=1,2,\ldots,z$) be a collection of LNNs in $\Psi$. We define the LNNEWG operator as

$$LNNEWG(u_1,u_2,\ldots,u_z)=\mathop{\otimes_e}\limits_{i=1}^{z}(u_i)^{\epsilon_i},$$

with the weight vector $\epsilon=(\epsilon_1,\epsilon_2,\ldots,\epsilon_z)^T$, where $\sum_{i=1}^{z}\epsilon_i=1$ and $\epsilon_i\in[0,1]$.
Theorem 6.
Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ ($i=1,2,\ldots,z$) be a collection of LNNs in $\Psi$. Then the LNNEWG aggregation operator yields

$$LNNEWG(u_1,u_2,\ldots,u_z)=\mathop{\otimes_e}\limits_{i=1}^{z}(u_i)^{\epsilon_i}=\left\langle \psi_{\frac{2k\prod_{i=1}^{z}\beta_i^{\epsilon_i}}{\prod_{i=1}^{z}(2k-\beta_i)^{\epsilon_i}+\prod_{i=1}^{z}\beta_i^{\epsilon_i}}},\ \psi_{\frac{k\left[\prod_{i=1}^{z}(k+\gamma_i)^{\epsilon_i}-\prod_{i=1}^{z}(k-\gamma_i)^{\epsilon_i}\right]}{\prod_{i=1}^{z}(k+\gamma_i)^{\epsilon_i}+\prod_{i=1}^{z}(k-\gamma_i)^{\epsilon_i}}},\ \psi_{\frac{k\left[\prod_{i=1}^{z}(k+\delta_i)^{\epsilon_i}-\prod_{i=1}^{z}(k-\delta_i)^{\epsilon_i}\right]}{\prod_{i=1}^{z}(k+\delta_i)^{\epsilon_i}+\prod_{i=1}^{z}(k-\delta_i)^{\epsilon_i}}}\right\rangle$$

with the weight vector $\epsilon=(\epsilon_1,\epsilon_2,\ldots,\epsilon_z)^T$, $\sum_{i=1}^{z}\epsilon_i=1$ and $\epsilon_i\in[0,1]$.
Theorem 7.
(Idempotency). Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ ($i=1,2,\ldots,z$) be a collection of LNNs in $\Psi$. If every $u_i=u$, then:

$$LNNEWG(u_1,u_2,\ldots,u_z)=LNNEWG(u,u,\ldots,u)=u.$$
Theorem 8.
(Monotonicity). Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ and $u_i'=\langle\psi_{\beta_i'},\psi_{\gamma_i'},\psi_{\delta_i'}\rangle$ ($i=1,2,\ldots,z$) be two collections of LNNs in $\Psi$. If $u_i\le u_i'$ for every $i$, then

$$LNNEWG(u_1,u_2,\ldots,u_z)\le LNNEWG(u_1',u_2',\ldots,u_z').$$
Theorem 9.
(Boundedness). Let $u_i=\langle\psi_{\beta_i},\psi_{\gamma_i},\psi_{\delta_i}\rangle$ ($i=1,2,\ldots,z$) be a collection of LNNs in $\Psi$, and let $u^-=\langle\min_i(\psi_{\beta_i}),\max_i(\psi_{\gamma_i}),\max_i(\psi_{\delta_i})\rangle$ and $u^+=\langle\max_i(\psi_{\beta_i}),\min_i(\psi_{\gamma_i}),\min_i(\psi_{\delta_i})\rangle$. Then:

$$u^-\le LNNEWG(u_1,u_2,\ldots,u_z)\le u^+.$$
We omit the proofs here because they are similar to those of Theorems 2–5.
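For completeness, the LNNEWG closed form of Theorem 6 can be sketched the same way (Python; the same illustrative index-triple representation and k = 8 scale as before); Theorem 7's idempotency again serves as a numerical check.

```python
from math import prod

# Closed-form LNNEWG operator (Theorem 6). Each LNN is an index triple
# (beta, gamma, delta) on a scale with top index k; eps is the weight vector.

def lnnewg(lnns, eps, k=8):
    pb = prod(b ** e for (b, _, _), e in zip(lnns, eps))
    qb = prod((2 * k - b) ** e for (b, _, _), e in zip(lnns, eps))
    pg = prod((k + g) ** e for (_, g, _), e in zip(lnns, eps))
    mg = prod((k - g) ** e for (_, g, _), e in zip(lnns, eps))
    pd = prod((k + d) ** e for (_, _, d), e in zip(lnns, eps))
    md = prod((k - d) ** e for (_, _, d), e in zip(lnns, eps))
    return (2 * k * pb / (qb + pb),
            k * (pg - mg) / (pg + mg),
            k * (pd - md) / (pd + md))

# Idempotency check (Theorem 7): aggregating copies of one LNN returns it.
print(lnnewg([(6, 2, 3)] * 3, [0.2, 0.3, 0.5]))
```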

4. Methods with LNNEWA or LNNEWG Operator

We introduce two MAGDM methods based on the LNNEWA or LNNEWG operator under LNN information.
Suppose the collection of alternatives is $\Theta=\{\Theta_1,\Theta_2,\ldots,\Theta_m\}$ and the collection of attributes is $E=\{E_1,E_2,\ldots,E_n\}$. Let $\epsilon=(\epsilon_1,\epsilon_2,\ldots,\epsilon_n)^T$, with $\sum_{i=1}^{n}\epsilon_i=1$ and $\epsilon_i\in[0,1]$, be the weight vector of the attributes $E_i$ ($i=1,2,\ldots,n$). A set of experts $D=\{D_1,D_2,\ldots,D_t\}$ is established, with weight vector $\mu=(\mu_1,\mu_2,\ldots,\mu_t)^T$, where $0\le\mu_j\le 1$ and $\sum_{j=1}^{t}\mu_j=1$. Each expert $D_y$ ($y=1,2,\ldots,t$) uses LNNs to give the assessed value $\theta_{ij}^{(y)}=\langle\psi_{\beta_{ij}^{y}},\psi_{\gamma_{ij}^{y}},\psi_{\delta_{ij}^{y}}\rangle$ for alternative $\Theta_i$ with respect to attribute $E_j$ ($i=1,2,\ldots,m$; $j=1,2,\ldots,n$), where $\psi_{\beta_{ij}^{y}},\psi_{\gamma_{ij}^{y}},\psi_{\delta_{ij}^{y}}\in\Psi$. The decision evaluation matrices can then be formed; Table 1 shows the decision matrix.
The decision steps are described as follows:
Step 1: The collective matrix is obtained by the LNNEWA operator:

$$\theta_{ij}=\langle\psi_{\beta_{ij}},\psi_{\gamma_{ij}},\psi_{\delta_{ij}}\rangle=LNNEWA(\theta_{ij}^{(1)},\theta_{ij}^{(2)},\ldots,\theta_{ij}^{(t)})=\mathop{\oplus_e}\limits_{l=1}^{t}\mu_l\theta_{ij}^{(l)}=\left\langle \psi_{\frac{k\left[\prod_{l=1}^{t}(k+\beta_{ij}^{l})^{\mu_l}-\prod_{l=1}^{t}(k-\beta_{ij}^{l})^{\mu_l}\right]}{\prod_{l=1}^{t}(k+\beta_{ij}^{l})^{\mu_l}+\prod_{l=1}^{t}(k-\beta_{ij}^{l})^{\mu_l}}},\ \psi_{\frac{2k\prod_{l=1}^{t}(\gamma_{ij}^{l})^{\mu_l}}{\prod_{l=1}^{t}(2k-\gamma_{ij}^{l})^{\mu_l}+\prod_{l=1}^{t}(\gamma_{ij}^{l})^{\mu_l}}},\ \psi_{\frac{2k\prod_{l=1}^{t}(\delta_{ij}^{l})^{\mu_l}}{\prod_{l=1}^{t}(2k-\delta_{ij}^{l})^{\mu_l}+\prod_{l=1}^{t}(\delta_{ij}^{l})^{\mu_l}}}\right\rangle$$
Step 2: The total collective LNN $\theta_i$ ($i=1,2,\ldots,m$) is obtained by the LNNEWA or LNNEWG operator:

$$\theta_i=LNNEWA(\theta_{i1},\theta_{i2},\ldots,\theta_{in})=\mathop{\oplus_e}\limits_{j=1}^{n}\epsilon_j\theta_{ij}=\left\langle \psi_{\frac{k\left[\prod_{j=1}^{n}(k+\beta_{ij})^{\epsilon_j}-\prod_{j=1}^{n}(k-\beta_{ij})^{\epsilon_j}\right]}{\prod_{j=1}^{n}(k+\beta_{ij})^{\epsilon_j}+\prod_{j=1}^{n}(k-\beta_{ij})^{\epsilon_j}}},\ \psi_{\frac{2k\prod_{j=1}^{n}\gamma_{ij}^{\epsilon_j}}{\prod_{j=1}^{n}(2k-\gamma_{ij})^{\epsilon_j}+\prod_{j=1}^{n}\gamma_{ij}^{\epsilon_j}}},\ \psi_{\frac{2k\prod_{j=1}^{n}\delta_{ij}^{\epsilon_j}}{\prod_{j=1}^{n}(2k-\delta_{ij})^{\epsilon_j}+\prod_{j=1}^{n}\delta_{ij}^{\epsilon_j}}}\right\rangle$$

or

$$\theta_i=LNNEWG(\theta_{i1},\theta_{i2},\ldots,\theta_{in})=\mathop{\otimes_e}\limits_{j=1}^{n}(\theta_{ij})^{\epsilon_j}=\left\langle \psi_{\frac{2k\prod_{j=1}^{n}\beta_{ij}^{\epsilon_j}}{\prod_{j=1}^{n}(2k-\beta_{ij})^{\epsilon_j}+\prod_{j=1}^{n}\beta_{ij}^{\epsilon_j}}},\ \psi_{\frac{k\left[\prod_{j=1}^{n}(k+\gamma_{ij})^{\epsilon_j}-\prod_{j=1}^{n}(k-\gamma_{ij})^{\epsilon_j}\right]}{\prod_{j=1}^{n}(k+\gamma_{ij})^{\epsilon_j}+\prod_{j=1}^{n}(k-\gamma_{ij})^{\epsilon_j}}},\ \psi_{\frac{k\left[\prod_{j=1}^{n}(k+\delta_{ij})^{\epsilon_j}-\prod_{j=1}^{n}(k-\delta_{ij})^{\epsilon_j}\right]}{\prod_{j=1}^{n}(k+\delta_{ij})^{\epsilon_j}+\prod_{j=1}^{n}(k-\delta_{ij})^{\epsilon_j}}}\right\rangle.$$
Step 3: According to Definition 3, we calculate $\zeta(\theta_i)$ and, if needed, $\eta(\theta_i)$ for every alternative $\Theta_i$ ($i=1,2,\ldots,m$).
Step 4: According to $\zeta(\theta_i)$, we rank the alternatives and choose the best one.
Step 5: End.
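Putting Steps 1–4 together, the whole procedure fits in a short script. The sketch below (Python) follows the LNNEWA route end to end; the expert matrices, weights, and alternative names are illustrative toy data, not the data of the example in Section 5.

```python
from math import prod

# End-to-end sketch of the MAGDM method (Steps 1-4), LNNEWA route only.

def lnnewa(lnns, eps, k=8):
    """Closed-form LNNEWA operator of Theorem 2 on (beta, gamma, delta) triples."""
    pb = prod((k + b) ** e for (b, _, _), e in zip(lnns, eps))
    mb = prod((k - b) ** e for (b, _, _), e in zip(lnns, eps))
    pg = prod(g ** e for (_, g, _), e in zip(lnns, eps))
    qg = prod((2 * k - g) ** e for (_, g, _), e in zip(lnns, eps))
    pd = prod(d ** e for (_, _, d), e in zip(lnns, eps))
    qd = prod((2 * k - d) ** e for (_, _, d), e in zip(lnns, eps))
    return (k * (pb - mb) / (pb + mb), 2 * k * pg / (qg + pg), 2 * k * pd / (qd + pd))

def score(u, k=8):
    """Expectation of Definition 3, used for the ranking in Steps 3-4."""
    b, g, d = u
    return (2 * k + b - g - d) / (3 * k)

# Illustrative toy data: 2 experts, 2 alternatives, 2 attributes.
mu = [0.6, 0.4]     # expert weights
eps = [0.5, 0.5]    # attribute weights
evals = {           # evals[alt][expert][attribute] = (beta, gamma, delta)
    "A": [[(7, 1, 1), (6, 2, 2)], [(7, 2, 1), (6, 1, 2)]],
    "B": [[(5, 3, 3), (4, 3, 2)], [(5, 2, 3), (4, 4, 3)]],
}

ranking = []
for alt, by_expert in evals.items():
    # Step 1: fuse the experts' values attribute by attribute
    fused = [lnnewa([by_expert[0][j], by_expert[1][j]], mu) for j in range(2)]
    # Step 2: aggregate over the attributes; Step 3: expectation score
    ranking.append((score(lnnewa(fused, eps)), alt))
ranking.sort(reverse=True)  # Step 4: rank; the best alternative comes first
print(ranking[0][1])        # the componentwise-dominating alternative wins
```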

5. Illustrative Examples

5.1. Numerical Example

Now, we adopt an illustrative MAGDM problem to verify the proposed decision methods. An investment company wants to find a company to invest in. There are four candidate companies $\Theta=\{\Theta_1,\Theta_2,\Theta_3,\Theta_4\}$: the first sells cars ($\Theta_1$), the second sells food ($\Theta_2$), the third sells computers ($\Theta_3$), and the last sells arms ($\Theta_4$). Three experts $D=\{D_1,D_2,D_3\}$ are invited to evaluate these companies; their weight vector is $\mu=(0.37,0.33,0.3)^T$. The experts evaluate the alternatives according to three attributes $E=\{E_1,E_2,E_3\}$: $E_1$ is the risk ability, $E_2$ is the growth ability, and $E_3$ is the environmental-impact ability; the attribute weight vector is $\epsilon=(0.35,0.25,0.4)^T$. The experts express their evaluation values as LNNs over the linguistic term set $\Psi=\{\psi_0=\text{extremely poor},\ \psi_1=\text{very poor},\ \psi_2=\text{poor},\ \psi_3=\text{slightly poor},\ \psi_4=\text{medium},\ \psi_5=\text{slightly good},\ \psi_6=\text{good},\ \psi_7=\text{very good},\ \psi_8=\text{extremely good}\}$. The decision evaluation matrices can then be established; they are shown in Table 2, Table 3 and Table 4.
Now, the proposed method is applied to manage this MAGDM problem and the computational procedures are as follows:
Step 1: The collective decision matrix is obtained by the LNNEWA operator; it is shown in Table 5.
Step 2: The total collective LNNs $\theta_i$ ($i=1,2,\ldots,m$) are obtained by the LNNEWA operator:
$\theta_1=\langle\psi_{6.0661},\psi_{1.7313},\psi_{2.3644}\rangle$, $\theta_2=\langle\psi_{6.0961},\psi_{1.7929},\psi_{1.9840}\rangle$, $\theta_3=\langle\psi_{5.7523},\psi_{1.7260},\psi_{2.2199}\rangle$, and $\theta_4=\langle\psi_{6.4198},\psi_{1.4753},\psi_{1.5957}\rangle$.
Step 3: According to Definition 3, the expected values $\zeta(\theta_i)$ for $\theta_i$ ($i=1,2,3,4$) are calculated:
$\zeta(\theta_1)=0.7488$, $\zeta(\theta_2)=0.7633$, $\zeta(\theta_3)=0.7419$, and $\zeta(\theta_4)=0.8062$.
Based on the expected values, the four alternatives are ranked as $\Theta_4\succ\Theta_2\succ\Theta_1\succ\Theta_3$; thus, company $\Theta_4$ is the optimal choice.
Now, the LNNEWG operator is used to manage this MAGDM problem:
Step 1′: The collective decision matrix is obtained by the LNNEWA operator, as before;
Step 2′: The total collective LNNs $\theta_i$ ($i=1,2,\ldots,m$) are obtained by the LNNEWG operator as follows:
$\theta_1=\langle\psi_{5.9491},\psi_{1.7507},\psi_{2.4660}\rangle$, $\theta_2=\langle\psi_{6.5864},\psi_{1.8026},\psi_{2.0000}\rangle$, $\theta_3=\langle\psi_{6.8354},\psi_{1.8390},\psi_{2.2614}\rangle$, and $\theta_4=\langle\psi_{6.3950},\psi_{1.4868},\psi_{1.6033}\rangle$.
Step 3′: According to Definition 3, the expected values $\zeta(\theta_i)$ for $\theta_i$ ($i=1,2,3,4$) are calculated:
$\zeta(\theta_1)=0.7389$, $\zeta(\theta_2)=0.7827$, $\zeta(\theta_3)=0.7806$, and $\zeta(\theta_4)=0.8043$.
Based on the expected values, the four alternatives are ranked as $\Theta_4\succ\Theta_2\succ\Theta_3\succ\Theta_1$; thus, company $\Theta_4$ is still the optimal choice.
Clearly, there is a small difference between the rankings produced by the two methods. However, the LNNEWA and LNNEWG operators yield the same optimal choice. The proposed methods are therefore effective ranking methods for MAGDM problems.

5.2. Comparative Analysis

Now, we compare the proposed methods with other related methods for LNNs; all the results are shown in Table 6.
As shown in Table 6, company $\Theta_4$ is the best investment choice under all four methods. Many aggregation techniques, such as arithmetic averaging, geometric averaging, and the Bonferroni mean, can be used with LNNs to handle multiple attribute decision-making problems and yield similar results. Additionally, the Einstein aggregation operators are smoother than the algebraic aggregation operators used in the literature [15,16]. Compared with the methods in the existing literature [2,3,4,5,6,7,8,9,10,11,12,13,14], LNNs can express and manage purely linguistic evaluation values, which those methods cannot. In this paper, a new MAGDM method was presented using the LNNEWA or LNNEWG operator in the LNN environment.

6. Conclusions

A new approach for solving MAGDM problems was proposed in this paper. First, we applied the Einstein operations to linguistic neutrosophic sets and established new operational rules for them based on the Einstein operator. Second, we combined aggregation operators with linguistic neutrosophic sets and, according to the new operational rules, defined the linguistic neutrosophic number Einstein weighted average (LNNEWA) operator and the linguistic neutrosophic number Einstein weighted geometric (LNNEWG) operator. Finally, two methods for handling MAGDM problems were presented using the LNNEWA and LNNEWG operators. These two methods were applied to a concrete example to show the practicality and advantages of the proposed approach. In the future, we will further study the Einstein operations in other neutrosophic environments, such as the refined neutrosophic set [30]. At the same time, we will use these aggregation operators in many practical fields, such as campaign management, decision making, and clustering analysis [31,32,33].

Author Contributions

C.F. originally proposed the LNNEWA and LNNEWG operators and their properties; C.F., S.F. and K.H. wrote the paper together.

Acknowledgments

This research was funded by the National Natural Science Foundation of China grant number [61603258], [61703280]; General Research Project of Zhejiang Provincial Department of Education grant number [Y201839944]; Public Welfare Technology Research Project of Zhejiang Province grant number [LGG19F020007]; Public Welfare Technology Application Research Project of Shaoxing City grant number [2018C10013].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic, ProQuest Information & Learning; Infolearnquest: Ann Arbor, MI, USA, 1998; p. 105.
2. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Multisp. Multi Struct. 2010, 4, 410–413.
3. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Interval Neutrosophic Sets and Logic: Theory and Applications in Computing; Hexis: Phoenix, AZ, USA, 2005.
4. Ye, S.; Ye, J. Dice similarity measure between single valued neutrosophic multisets and its application in medical diagnosis. Neutrosophic Sets Syst. 2014, 6, 49–54.
5. Ye, J. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses. Artif. Intell. Med. 2015, 63, 171–179.
6. Ye, J. An extended TOPSIS method for multiple attribute group decision making based on single valued neutrosophic linguistic numbers. J. Intell. Fuzzy Syst. 2015, 28, 247–255.
7. Ye, J.; Florentin, S. Similarity Measure of Refined Single-Valued Neutrosophic Sets and Its Multicriteria Decision Making Method. Neutrosophic Sets Syst. 2016, 12, 41–44.
8. Fan, C.X.; Fan, E.; Hu, K. New form of single valued neutrosophic uncertain linguistic variables aggregation operators for decision-making. Cogn. Syst. Res. 2018, 52, 1045–1055.
9. Fan, C.X.; Ye, J. Heronian Mean Operator of Linguistic Neutrosophic Cubic Numbers and Their Multiple Attribute Decision-Making Methods. Math. Probl. Eng. 2018, 2018, 4158264.
10. Fan, C.; Ye, J. The cosine measure of refined-single valued neutrosophic sets and refined-interval neutrosophic sets for multiple attribute decision-making. J. Intell. Fuzzy Syst. 2017, 33, 2281–2289.
11. Ye, J. Multiple attribute group decision making based on interval neutrosophic uncertain linguistic variables. Int. J. Mach. Learn. Cybern. 2017, 8, 837–848.
12. Liu, P.D.; Shi, L.L. Some neutrosophic uncertain linguistic number Heronian mean operators and their application to multi-attribute group decision making. Neural Comput. Appl. 2017, 28, 1079–1093.
13. Jun, Y.; Shigui, D. Some distances, similarity and entropy measures for interval-valued neutrosophic sets and their relationship. Int. J. Mach. Learn. Cybern. 2019, 10, 347–355.
14. Fan, C.X.; Fan, E.; Ye, J. The Cosine Measure of Single-Valued Neutrosophic Multisets for Multiple Attribute Decision-Making. Symmetry 2018, 10, 154.
15. Fang, Z.B.; Ye, J. Multiple Attribute Group Decision-Making Method Based on Linguistic Neutrosophic Numbers. Symmetry 2017, 9, 111.
16. Fan, C.; Ye, J.; Hu, K.; Fan, E. Bonferroni Mean Operators of Linguistic Neutrosophic Numbers and Their Multiple Attribute Group Decision-Making Methods. Information 2017, 8, 107.
17. Li, Y.Y.; Zhang, H.Y.; Wang, J.Q. Linguistic Neutrosophic Sets and Their Application in Multicriteria Decision-Making Problems. Int. J. Uncertain. Quantif. 2017, 7, 135–154.
18. Shi, L.; Ye, J. Cosine Measures of Linguistic Neutrosophic Numbers and Their Application in Multiple Attribute Group Decision-Making. Information 2017, 8, 10.
19. Yager, R.R. On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190.
20. Xu, Z.S. Intuitionistic fuzzy aggregation operators. IEEE Trans. Fuzzy Syst. 2007, 15, 1179–1187.
21. Xu, Z.S.; Yager, R.R. Some geometric aggregation operators based on intuitionistic fuzzy sets. Int. J. Gen. Syst. 2006, 35, 417–433.
22. Zhao, H.; Xu, Z.S.; Ni, M.F.; Liu, S.S. Generalized Aggregation Operators for Intuitionistic Fuzzy Sets. Int. J. Intell. Syst. 2010, 25, 1–30.
23. Klement, E.P.; Mesiar, R.; Pap, E. Triangular norms. Position paper I: Basic analytical and algebraic properties. Fuzzy Sets Syst. 2004, 143, 5–26.
24. Wang, W.Z.; Liu, X.W. Intuitionistic fuzzy geometric aggregation operators based on Einstein operations. Int. J. Intell. Syst. 2011, 26, 1049–1075.
25. Zhao, X.F.; Wei, G.W. Some intuitionistic fuzzy Einstein hybrid aggregation operators and their application to multiple attribute decision making. Knowl. Based Syst. 2013, 37, 472–479.
26. Guo, S.; Jin, F.F.; Chen, Y.H. Application of hesitate fuzzy Einstein geometry operator. Comput. Eng. Appl. 2013.
27. Yang, L.; Li, B.; Xu, H. Novel Power Aggregation Operators Based on Einstein Operations for Interval Neutrosophic Linguistic Sets. IAENG Int. J. Appl. Math. 2018, 48, 4.
  28. Xia, M.M.; Xu, Z.S.; Zhu, B. Some issues on intuitionistic fuzzy aggregation operators based on Archimedean t-conorm and t-norm. Knowl. Based Syst. 2012, 31, 78–88. [Google Scholar] [CrossRef]
  29. Wang, W.Z.; Liu, X.W. Intuitionistic fuzzy information aggregation using Einstein operations. IEEE Trans. Fuzzy Systs. 2012, 20, 923–938. [Google Scholar] [CrossRef]
  30. Smarandache, F. N-Valued Refined Neutrosophic Logic and Its Applications in Physics. Prog. Phys. 2013, 4, 143–146. [Google Scholar]
  31. Morente-Molinera, J.A.; Kou, G.; González-Crespo, R.; Corchado, J.M. Solving multi-criteria group decision making problems under environments with a high number of alternatives using fuzzy ontologies and multi-granular linguistic modelling methods. Knowl. Based Syst. 2017, 137, 54–64. [Google Scholar] [CrossRef]
  32. Carrasco, R.A.; Blasco, M.F.; García-Madariaga, J.; Herrera-Viedma, E. A Fuzzy Linguistic RFM Model Applied to Campaign Management. Int. J. Interact. Multimedia Artif. Intell. 2019, 5, 21–27. [Google Scholar] [CrossRef]
  33. Khiat, S.; Djamila, H. A Temporal Distributed Group Decision Support System Based on Multi-Criteria Analysis. Int. J. Interact. Multimed. Artif. Intell. 2019, 1–15, In Press. [Google Scholar] [CrossRef]
Table 1. The decision matrix using linguistic neutrosophic numbers (LNN).

|      | E_1 | ⋯ | E_n |
|------|-----|---|-----|
| Θ_1 | ⟨ψ_{β_{11}}^y, ψ_{γ_{11}}^y, ψ_{δ_{11}}^y⟩ | ⋯ | ⟨ψ_{β_{1n}}^y, ψ_{γ_{1n}}^y, ψ_{δ_{1n}}^y⟩ |
| Θ_2 | ⟨ψ_{β_{21}}^y, ψ_{γ_{21}}^y, ψ_{δ_{21}}^y⟩ | ⋯ | ⟨ψ_{β_{2n}}^y, ψ_{γ_{2n}}^y, ψ_{δ_{2n}}^y⟩ |
| ⋮    | ⋮ | ⋱ | ⋮ |
| Θ_m | ⟨ψ_{β_{m1}}^y, ψ_{γ_{m1}}^y, ψ_{δ_{m1}}^y⟩ | ⋯ | ⟨ψ_{β_{mn}}^y, ψ_{γ_{mn}}^y, ψ_{δ_{mn}}^y⟩ |
Table 2. The decision matrix based on the data of D_1.

|      | E_1 | E_2 | E_3 |
|------|-----|-----|-----|
| Θ_1 | ⟨ψ_6^1, ψ_1^1, ψ_2^1⟩ | ⟨ψ_7^1, ψ_2^1, ψ_1^1⟩ | ⟨ψ_6^1, ψ_2^1, ψ_2^1⟩ |
| Θ_2 | ⟨ψ_7^1, ψ_1^1, ψ_1^1⟩ | ⟨ψ_7^1, ψ_3^1, ψ_2^1⟩ | ⟨ψ_7^1, ψ_2^1, ψ_1^1⟩ |
| Θ_3 | ⟨ψ_6^1, ψ_2^1, ψ_2^1⟩ | ⟨ψ_7^1, ψ_1^1, ψ_1^1⟩ | ⟨ψ_6^1, ψ_2^1, ψ_2^1⟩ |
| Θ_4 | ⟨ψ_7^1, ψ_1^1, ψ_2^1⟩ | ⟨ψ_7^1, ψ_2^1, ψ_3^1⟩ | ⟨ψ_7^1, ψ_2^1, ψ_1^1⟩ |
Table 3. The decision matrix based on the data of D_2.

|      | E_1 | E_2 | E_3 |
|------|-----|-----|-----|
| Θ_1 | ⟨ψ_6^2, ψ_1^2, ψ_2^2⟩ | ⟨ψ_6^2, ψ_1^2, ψ_1^2⟩ | ⟨ψ_4^2, ψ_2^2, ψ_3^2⟩ |
| Θ_2 | ⟨ψ_7^2, ψ_2^2, ψ_3^2⟩ | ⟨ψ_6^2, ψ_1^2, ψ_1^2⟩ | ⟨ψ_4^2, ψ_2^2, ψ_3^2⟩ |
| Θ_3 | ⟨ψ_5^2, ψ_1^2, ψ_2^2⟩ | ⟨ψ_5^2, ψ_1^2, ψ_2^2⟩ | ⟨ψ_5^2, ψ_4^2, ψ_2^2⟩ |
| Θ_4 | ⟨ψ_6^2, ψ_1^2, ψ_1^2⟩ | ⟨ψ_5^2, ψ_1^2, ψ_1^2⟩ | ⟨ψ_5^2, ψ_2^2, ψ_3^2⟩ |
Table 4. The decision matrix based on the data of D_3.

|      | E_1 | E_2 | E_3 |
|------|-----|-----|-----|
| Θ_1 | ⟨ψ_7^3, ψ_3^3, ψ_4^3⟩ | ⟨ψ_7^3, ψ_3^3, ψ_3^3⟩ | ⟨ψ_5^3, ψ_2^3, ψ_5^3⟩ |
| Θ_2 | ⟨ψ_6^3, ψ_3^3, ψ_4^3⟩ | ⟨ψ_5^3, ψ_1^3, ψ_2^3⟩ | ⟨ψ_6^3, ψ_2^3, ψ_3^3⟩ |
| Θ_3 | ⟨ψ_7^3, ψ_2^3, ψ_4^3⟩ | ⟨ψ_6^3, ψ_1^3, ψ_2^3⟩ | ⟨ψ_7^3, ψ_2^3, ψ_4^3⟩ |
| Θ_4 | ⟨ψ_7^3, ψ_2^3, ψ_3^3⟩ | ⟨ψ_5^3, ψ_2^3, ψ_1^3⟩ | ⟨ψ_6^3, ψ_1^3, ψ_1^3⟩ |
Table 5. The overall decision matrix.

|      | E_1 | E_2 | E_3 |
|------|-----|-----|-----|
| Θ_1 | ⟨ψ_{6.3671}, ψ_{1.4116}, ψ_{2.4888}⟩ | ⟨ψ_{6.7366}, ψ_{1.8191}, ψ_{1.4116}⟩ | ⟨ψ_{5.1343}, ψ_{2.000}, ψ_{3.0637}⟩ |
| Θ_2 | ⟨ψ_{6.7630}, ψ_{1.7705}, ψ_{2.2397}⟩ | ⟨ψ_{6.2295}, ψ_{1.5275}, ψ_{1.5997}⟩ | ⟨ψ_{6.0042}, ψ_{2.000}, ψ_{2.0355}⟩ |
| Θ_3 | ⟨ψ_{6.1200}, ψ_{1.5997}, ψ_{2.4888}⟩ | ⟨ψ_{6.2067}, ψ_{1.000}, ψ_{1.5564}⟩ | ⟨ψ_{6.1200}, ψ_{2.5427}, ψ_{2.4888}⟩ |
| Θ_4 | ⟨ψ_{6.7366}, ψ_{1.2370}, ψ_{1.8191}⟩ | ⟨ψ_{5.9645}, ψ_{1.5997}, ψ_{1.5275}⟩ | ⟨ψ_{6.2067}, ψ_{1.6329}, ψ_{1.4602}⟩ |
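The overall matrix in Table 5 is obtained by fusing the three decision makers' matrices (Tables 2–4) with the LNN Einstein weighted average. The following is a minimal sketch, assuming an LNN is stored as a triple of linguistic subscripts (β, γ, δ) on a scale ψ_0, …, ψ_t and that the operator takes the usual Einstein-sum form for the truth part and the Einstein-product form for the indeterminacy and falsity parts; the weights used below are illustrative assumptions, not the paper's actual decision-maker weights.

```python
import math

def lnn_einstein_wa(lnns, weights, t=8):
    """LNN Einstein weighted average (sketch).

    lnns    -- list of (beta, gamma, delta) subscript triples on scale [0, t]
    weights -- nonnegative weights summing to 1, one per LNN
    """
    def truth(parts):
        # Einstein sum applied to the normalized truth degrees beta_j / t
        up = math.prod((1 + b / t) ** w for b, w in zip(parts, weights))
        dn = math.prod((1 - b / t) ** w for b, w in zip(parts, weights))
        return t * (up - dn) / (up + dn)

    def nontruth(parts):
        # Einstein product applied to the normalized degrees gamma_j / t (or delta_j / t)
        prod = math.prod((p / t) ** w for p, w in zip(parts, weights))
        comp = math.prod((2 - p / t) ** w for p, w in zip(parts, weights))
        return t * 2 * prod / (comp + prod)

    betas, gammas, deltas = zip(*lnns)
    return truth(betas), nontruth(gammas), nontruth(deltas)

# Example: fuse one attribute's evaluations from three decision makers
# with hypothetical weights (0.4, 0.35, 0.25).
fused = lnn_einstein_wa([(6, 1, 2), (6, 1, 2), (7, 3, 4)], [0.4, 0.35, 0.25])
```

A useful sanity check on any such operator is idempotency: aggregating identical LNNs must return that LNN, and every aggregated subscript must stay within the scale [0, t].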
Table 6. The ranking orders by utilizing three different methods.

| Method | Result | Ranking Order | The Best Alternative |
|--------|--------|---------------|----------------------|
| Method 1 based on arithmetic averaging in [15] | ζ(θ_1) = 0.7528, ζ(θ_2) = 0.7777, ζ(θ_3) = 0.7613, ζ(θ_4) = 0.8060 | θ_4 ≻ θ_2 ≻ θ_3 ≻ θ_1 | θ_4 |
| Method 2 based on geometric averaging in [15] | ζ(θ_1) = 0.7397, ζ(θ_2) = 0.7747, ζ(θ_3) = 0.7531, ζ(θ_4) = 0.8035 | θ_4 ≻ θ_2 ≻ θ_3 ≻ θ_1 | θ_4 |
| Method 3 based on the Bonferroni mean in [16] (p = q = 1) | ζ(θ_1) = 0.7298, ζ(θ_2) = 0.7508, ζ(θ_3) = 0.7424, ζ(θ_4) = 0.7864 | θ_4 ≻ θ_2 ≻ θ_3 ≻ θ_1 | θ_4 |
| The proposed method | ζ(θ_1) = 0.7488, ζ(θ_2) = 0.7633, ζ(θ_3) = 0.7419, ζ(θ_4) = 0.8062 | θ_4 ≻ θ_2 ≻ θ_1 ≻ θ_3 | θ_4 |
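The final ranking step behind Table 6 reduces each alternative's aggregated LNN to a score ζ and sorts the alternatives. The sketch below assumes the common LNN score function ζ(e) = (2t + β − γ − δ) / (3t) on a scale with maximum subscript t = 8; both are assumptions drawn from the LNN framework of [15], not definitions stated in this section.

```python
def lnn_score(lnn, t=8):
    """Score of an LNN (beta, gamma, delta): higher truth and lower
    indeterminacy/falsity give a larger score in [0, 1]."""
    beta, gamma, delta = lnn
    return (2 * t + beta - gamma - delta) / (3 * t)

def rank_alternatives(scores):
    """Return alternative names sorted by descending score, best first."""
    return [name for name, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

# The proposed method's scores from Table 6 reproduce its ranking order:
scores = {"theta_1": 0.7488, "theta_2": 0.7633,
          "theta_3": 0.7419, "theta_4": 0.8062}
ranking = rank_alternatives(scores)  # theta_4 first
```

With these scores the best alternative is θ_4, matching the last row of Table 6.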

Share and Cite

MDPI and ACS Style

Fan, C.; Feng, S.; Hu, K. Linguistic Neutrosophic Numbers Einstein Operator and Its Application in Decision Making. Mathematics 2019, 7, 389. https://doi.org/10.3390/math7050389
