Article

Cloud Generalized Power Ordered Weighted Average Operator and Its Application to Linguistic Group Decision-Making

School of Economics and Management, North China Electric Power University, Beijing 102206, China
*
Author to whom correspondence should be addressed.
Symmetry 2017, 9(8), 156; https://doi.org/10.3390/sym9080156
Submission received: 6 July 2017 / Revised: 4 August 2017 / Accepted: 11 August 2017 / Published: 15 August 2017

Abstract:
In this paper, we develop a new linguistic aggregation operator based on the cloud model for solving linguistic group decision-making problems. First, an improved generating cloud method is proposed to transform linguistic variables into clouds, which overcomes the limitations of the classical generating cloud method. We then address some new cloud algorithms, such as the cloud possibility degree and cloud support degree, which can be used to compare clouds and to determine weights, respectively. Combining the cloud support degree with the power aggregation operator, we develop a new cloud aggregation operator dubbed the cloud generalized power ordered weighted average (CGPOWA) operator. We study the properties of the CGPOWA operator and investigate its family, which includes a wide range of aggregation operators such as the CGPA, CPOWA, CPOWGA and CPWQA operators and the CWAA and CWGA operators. Furthermore, a new approach for linguistic group decision-making is presented on the basis of the improved generating cloud method and the CGPOWA operator. Finally, an illustrative example is provided to examine the effectiveness and validity of the proposed approach.

1. Introduction

As an important part of modern decision science, multiple criteria decision-making (MCDM) is the process of finding the best option from all of the feasible alternatives. It consists of a single decision maker (DM), multiple decision criteria and multiple decision alternatives [1]. However, the increasing complexity of the socioeconomic environment makes it less possible for a single DM to consider all relevant aspects of a problem, as many decision-making processes take place in group settings. This makes multiple criteria group decision-making (MCGDM) increasingly attractive in management [2,3,4,5,6]. Due to the complexity of objects and the vagueness of the human mind, it is more appropriate for the DMs to use linguistic descriptors than other descriptors to express their assessments in the actual process of MCGDM [7,8,9]. For example, when evaluating the “comfort” or “design” of a car, terms such as “good”, “medium”, and “bad” are frequently used, and when evaluating a car’s speed, terms such as “very fast”, “fast”, and “slow” can be used instead of numerical values. In such situations, the use of a linguistic approach is necessary. The objective of linguistic multiple criteria group decision-making (LMCGDM) is to find the optimal solution(s) from a set of feasible alternatives by means of linguistic information provided by the DMs. To realize this objective, aggregating linguistic information is the key point, and linguistic aggregation operators are commonly used.
Until now, many linguistic aggregation operators have been proposed, and these operators can be classified into six types: (1) the first is based on linear ordering, such as the linguistic max and min operators [10,11,12], linguistic max-min weighted averaging operator [13], linguistic median operator [14], ordinal ordered weighted averaging operator [15], and linguistic weighted conjunction operator [16]; (2) the second is built on the extension principle [17,18] and makes computations on fuzzy numbers that support the semantics of the linguistic labels, such as the linguistic OWA operator [19], linguistic weighted OWA operator [20], inverse linguistic OWA operator [21], distance measure operator with linguistic information [22], induced linguistic continuous ordered weighted geometric operator [23], linguistic distances with continuous aggregation operator [24], and linguistic probabilistic weighted average operator [25]; (3) the third is based upon the 2-tuple representation, including the 2-tuple arithmetic mean operator [26], 2-tuple OWA operator [27], dependent 2-tuple ordered weighted geometric operator [28], and 2-tuple linguistic hybrid arithmetic aggregation operator [29]; (4) the fourth computes directly with words, such as the linguistic weighted averaging operator [30], extended ordered weighted geometric operator [31], linguistic weighted arithmetic averaging operator [32], linguistic ordered weighted geometric averaging operator [33], uncertain linguistic weighted averaging operator [34], induced uncertain linguistic OWA operator [35], and uncertain linguistic geometric mean operator [36]; (5) the fifth is based on the power ordered weighted average operator [37], including the linguistic power ordered weighted average (LPOWA) operator [38] and the linguistic generalized power average (LGPA) operator [39]; and (6) the last is a class of cloud aggregation operators which introduce the cloud model [40] into LMCGDM, such as the cloud weighted arithmetic averaging (CWAA) operator and cloud weighted geometric averaging (CWGA) operator [41], and the trapezium cloud ordered weighted arithmetic averaging (TCOWA) operator [42]. A detailed description of the LPOWA, CWAA, and CWGA operators will be presented in Section 2 of the paper.
The above-mentioned operators of types (1)–(2) develop approximation processes to express the results in the initial expression domain, but they produce a consequent loss of information and thus a lack of precision [26]. This shortcoming is overcome by operators of types (3)–(4), which allow a continuous representation of the linguistic information on their domains and can therefore represent any counting of information obtained in an aggregation process without loss [26,27,28,29]. However, operators of types (3)–(4) do not consider the information about the relationship between the values being combined [38]. This weakness can be corrected by operators of type (5), whose weighting vectors depend on the input arguments and allow the values being aggregated to support and reinforce each other [37,38,39]. In this way, operators of type (5) consider the information about the relationship between the values being fused, but they cannot describe the randomness of languages [41].
The limitation of operators of type (5) can be explained by the following fact. Natural languages generally include uncertainty, of which randomness and fuzziness are the two most important aspects; here, fuzziness mainly refers to uncertainty regarding the range of extension of a concept, and randomness implies that any concept is related to the external world in various ways [42]. Fuzziness and randomness together describe the uncertainty of natural languages. For instance, in a linguistic decision-making problem, DM A may think that 75% fulfillment of a task is “good”, while DM B may regard less than 80% fulfillment of the same task as not “good” under the same linguistic term scale. Thus, when considering the degree of certainty of an element belonging to a qualitative concept in a specific universe, it is more feasible to allow a stochastic disturbance of the membership degree encircling a determined central value than to allow a fixed number [41,42]. The cloud model, based on fuzzy set theory and probability statistics [40,43,44], can describe the fuzziness with a normal membership function and the randomness by means of three numerical characteristics (expectation, entropy and hyper-entropy). Hence, the cloud aggregation operators of type (6) overcome the limitation of operators of type (5). Nevertheless, the cloud aggregation operators of type (6) do not take into account the information about the relationship between the values being fused.
Based on the above analyses, we find that the limitations of linguistic power aggregation operators of type (5) and cloud aggregation operators of type (6) are mutually complementary. In other words, the linguistic power aggregation operators focus on the information about the relationship between the values being fused, while they ignore the randomness of qualitative concept; the cloud aggregation operators can capture the fuzziness and randomness of linguistic information, but they neglect the information about the relationship between values being fused.
Therefore, this paper aims to propose a new cloud generalized power ordered weighted average (CGPOWA) operator so as to overcome the limitations of existing linguistic power aggregation operators of type (5) and cloud aggregation operators of type (6). The novelty of this paper is as follows.
(i)
We present an improved generating cloud method to transform linguistic variables into clouds. The key to linguistic decision-making based on cloud models is the transformation between linguistic variables and clouds, for which Wang and Feng [45] proposed a method of generating five clouds on the basis of the golden ratio; however, this method has three weaknesses: (a) it is limited to a linguistic term set of 5 labels; (b) the expectation of clouds sometimes exceeds the range of the universe; and (c) it cannot effectively distinguish the linguistic evaluation scale over a symmetrical interval. Regarding these limitations, we present an improved method by applying the cloud construction principle. This method can transform a linguistic term set with any odd number of labels rather than only five, and guarantees that all the expectations of clouds fall within the range of the universe. Meanwhile, it can effectively distinguish the linguistic evaluation scale over a symmetrical interval. In this way, the method overcomes the weaknesses of the classical generating cloud method.
(ii)
We address some new cloud algorithms such as the cloud possibility degree and cloud support degree. Based on the “3En rules” of the cloud model, a cloud distance is defined. We further put forward a cloud possibility degree based on this cloud distance, which can be used to compare clouds, and define a cloud support degree, which is a similarity index: the greater the similarity is, the closer the two clouds are, and consequently the more they support each other. The support degree can be used to determine the weights of the aggregation operator.
(iii)
We develop a new CGPOWA operator. By combining the cloud support degree defined here with the power aggregation operator [39], we develop the CGPOWA operator, which overcomes the limitations of existing linguistic power aggregation operators and cloud aggregation operators while maintaining the advantages of both types of operators. By studying its properties, we find that the CGPOWA operator is idempotent, commutative and bounded. In addition, we investigate the family of the CGPOWA operator, which contains a wide range of aggregation operators such as the CGPA, CPOWA, CPOWGA and CPWQA operators, the CWAA and CWGA operators, and the maximum and minimum operators.
(iv)
A new approach for LMCGDM is developed by applying the improved generating cloud method and CGPOWA operator. The main advantage of this approach is that it gives a completely objective view of the decision problem because the CGPOWA operator and the weighting method depend on the arguments completely. Comparing our method with three traditional LMCGDM approaches (linguistic symbolic model, linguistic membership function model, 2-tuple linguistic model) and the cloud aggregating method [41,42], we find that:
(a)
Compared with the three traditional LMCGDM approaches, our method adopts a multi-granular linguistic assessment scale of greater psychological sense, while the three traditional LMCGDM approaches only use a uniform granular linguistic assessment scale. In other words, when the alternatives are assessed, these three traditional approaches regard the average level as the unique criterion, which makes the evaluations rough and one-sided. Our method, however, considers not only the average level but also the fluctuation and stability of qualitative concepts via the cloud model;
(b)
Compared with cloud aggregating method [41,42], our method provides a completely objective weighting model by using the cloud support degree, while the weights in Wang et al. [41,42] are subjectively given by the DMs which may result in different ranking results if the DMs provide different weight vectors. In addition, the CGPOWA operator considers the relationships between the arguments provided by the DMs, while the cloud aggregating operators in Wang et al. [41,42] do not;
(c)
Our method presents a simple measure to compare different clouds by the cloud possibility degree (Equation (11)) and the ranking vector (Equation (13)), which requires no knowledge about the distribution of cloud drops; this differs from the score function [41], which needs to know the distribution of cloud drops. This is also an attractive feature because in most cases the distribution of cloud drops is unknown and it is difficult to acquire cloud drops.
This approach is also applicable to different linguistic decision-making problems such as strategic decision-making, human resource management, product management and financial management.
The rest of the paper is organized as follows. Section 2 reviews the LPOWA, CWAA and CWGA operators and the cloud model. Section 3 presents an improved method of transforming linguistic variables into clouds, and provides some new cloud algorithms. Section 4 develops the CGPOWA operator and studies its properties. Section 5 develops an approach for LMCGDM. Section 6 presents an illustrative example and the conclusions are drawn in Section 7.

2. Preliminaries

In this section, we briefly review the LPOWA operator, the definitions and operational rules of clouds, and the CWAA and CWGA operators.

2.1. The LPOWA Operator

The linguistic approach is an approximate technique that represents qualitative aspects as linguistic values using linguistic variables. Let $S = \{ s_i \mid i = -t, \dots, t \}$ be a finite and completely ordered discrete term set, which stands for the possible values of a linguistic variable. For instance, a set of nine terms $S$ could be [8,23]:
$S = \{ s_{-4} = \text{extremely poor},\ s_{-3} = \text{very poor},\ s_{-2} = \text{poor},\ s_{-1} = \text{medium poor},\ s_0 = \text{fair},\ s_1 = \text{medium good},\ s_2 = \text{good},\ s_3 = \text{very good},\ s_4 = \text{extremely good} \}$
In many real problems, the input linguistic arguments may not match any of the original linguistic labels, or may be located between two of them. For such cases, Xu [36] presents some operational laws. Let $\tilde{s}_1 = [ s_{\alpha_1}, s_{\beta_1} ]$ and $\tilde{s}_2 = [ s_{\alpha_2}, s_{\beta_2} ]$; the operational rules are as follows:
(i)
$\tilde{s}_1 \oplus \tilde{s}_2 = [ s_{\alpha_1}, s_{\beta_1} ] \oplus [ s_{\alpha_2}, s_{\beta_2} ] = [ s_{\alpha_1} \oplus s_{\alpha_2},\ s_{\beta_1} \oplus s_{\beta_2} ] = [ s_{\alpha_1 + \alpha_2},\ s_{\beta_1 + \beta_2} ]$;
(ii)
$\lambda \tilde{s} = \lambda [ s_\alpha, s_\beta ] = [ s_{\lambda \alpha},\ s_{\lambda \beta} ]$, where $\lambda \in [0, 1]$;
(iii)
$\tilde{s}_1 \oplus \tilde{s}_2 = \tilde{s}_2 \oplus \tilde{s}_1$;
(iv)
$\lambda ( \tilde{s}_1 \oplus \tilde{s}_2 ) = \lambda \tilde{s}_1 \oplus \lambda \tilde{s}_2$, where $\lambda \in [0, 1]$.
Yager [37] introduced a nonlinear ordered weighted-average aggregation tool, called the power ordered weighted average (POWA) operator, which can be defined as follows:
$\mathrm{POWA}(a_1, a_2, \dots, a_n) = \sum_{j=1}^{n} w_j \beta_j,$
where $w_j = g\!\left( \frac{R_j}{TV} \right) - g\!\left( \frac{R_{j-1}}{TV} \right)$, $R_j = \sum_{i=1}^{j} V_{\beta_i}$, $TV = \sum_{i=1}^{n} V_{\beta_i}$, $V_{\beta_i} = 1 + T(\beta_i)$, $T(\beta_i) = \sum_{j=1, j \neq i}^{n} Sup(\beta_i, \beta_j)$.
Here $Sup(\beta_i, \beta_j)$ is the support for $\beta_i$ from $\beta_j$ such that $Sup(\beta_i, \beta_j) \in [0, 1]$, $Sup(\beta_i, \beta_j) = Sup(\beta_j, \beta_i)$ and $Sup(\beta_i, \beta_j) \geq Sup(x, y)$ for $|\beta_i - \beta_j| < |x - y|$; $\beta_i$ is the $i$th largest of the arguments, and the basic unit-interval monotonic (BUM) function $g(\cdot)$ satisfies $g(0) = 0$, $g(1) = 1$ and $g(x) \geq g(y)$ if $x > y$.
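As a concrete illustration, the POWA operator can be sketched in Python. The definition leaves the support function open, so the normalized-distance support and the linear BUM function g(x) = x used below are our illustrative assumptions (with a linear g, the weights reduce to $w_j = V_{\beta_j}/TV$, i.e., the power average):

```python
def powa(args, sup=None, g=lambda x: x):
    """Power ordered weighted average of a list of numbers.

    sup: support function Sup(a, b); the default below is a hypothetical
    choice based on normalized distance, not fixed by the definition.
    g:   BUM function; the linear choice g(x) = x is also an assumption.
    """
    if sup is None:
        rng = (max(args) - min(args)) or 1.0          # avoid 0-division
        sup = lambda a, b: 1.0 - abs(a - b) / rng
    beta = sorted(args, reverse=True)                 # beta_j = jth largest
    n = len(beta)
    # V_j = 1 + T(beta_j), where T sums the support from all other arguments
    V = [1.0 + sum(sup(beta[i], beta[j]) for j in range(n) if j != i)
         for i in range(n)]
    TV = sum(V)
    total, R, prev = 0.0, 0.0, 0.0
    for j in range(n):
        R += V[j]
        w = g(R / TV) - g(prev / TV)                  # w_j from the BUM function
        prev = R
        total += w * beta[j]
    return total
```

With equal arguments every pair gives full support, the weights are uniform, and the operator is idempotent.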
Based on the POWA operator, Xu, Merigó and Wang [38] provided a linguistic power ordered weighted average (LPOWA) operator, which is defined as follows.
Definition 1 (Xu, Merigó and Wang, [38]).
Let $s_{\alpha_j}$ $(j = 1, 2, \dots, n)$ be a collection of linguistic variables. A linguistic power ordered weighted averaging (LPOWA) operator is a mapping LPOWA: $\bar{S}^n \to \bar{S}$ such that
$\mathrm{LPOWA}(s_{\alpha_1}, s_{\alpha_2}, \dots, s_{\alpha_n}) = \bigoplus_{j=1}^{n} u_j s_{\alpha_{\sigma(j)}}$
where $(\sigma(1), \sigma(2), \dots, \sigma(n))$ is a permutation of $(1, 2, \dots, n)$ such that $s_{\alpha_{\sigma(j-1)}} \geq s_{\alpha_{\sigma(j)}}$ for all $j$, and
$u_j = g\!\left( \frac{B_j}{TV} \right) - g\!\left( \frac{B_{j-1}}{TV} \right), \quad B_j = \sum_{i=1}^{j} V_{\sigma(i)}, \quad TV = \sum_{j=1}^{n} V_{\sigma(j)}, \quad V_{\sigma(j)} = 1 + T(s_{\alpha_{\sigma(j)}}),$
where $g: [0, 1] \to [0, 1]$ is a basic unit-interval monotonic (BUM) function satisfying $g(0) = 0$, $g(1) = 1$ and $g(x) \geq g(y)$ if $x > y$. $T(s_{\alpha_{\sigma(j)}})$ denotes the support of the $j$th largest argument by all the other arguments.
Remark 1.
The LPOWA operator considers the linguistic information about the relationship between the values being combined, since it allows exact arguments to support each other in the aggregation process and its weighting vector depends on the input arguments. However, this type of operator cannot characterize the randomness of languages. Here, the randomness implies that any language is related to the external world in various ways [44]. In fact, natural languages usually involve randomness and fuzziness (the latter referring to uncertainty regarding the range of extension of languages). For example, DM A may think 75% fulfillment of a task is “good”, but DM B may think that less than 80% fulfillment of the same task cannot be considered “good” with the same linguistic term scale. When considering the degree of certainty of an element belonging to a qualitative concept in a specific universe, it is more feasible to allow a stochastic disturbance of the membership degree encircling a determined central value than to allow a fixed number.

2.2. Cloud Model

The cloud model, based on the fuzzy set theory and probability statistics [40], can describe the fuzziness with membership function and the randomness via probability distribution.
Definition 2 (Li, Meng and Shi, [40]).
Let $U$ be a quantitative domain expressed by precise values, and $C$ a qualitative concept on the domain. If the quantitative value $x$ $(x \in U)$ is a random instantiation of $C$, whose membership degree $\mu(x) \in [0, 1]$ for $C$ is a random number with a stable tendency:
$\mu: U \to [0, 1], \quad x \in U, \quad x \mapsto \mu(x),$
then the distribution of $x$ on the domain is called a cloud, and each $x$ is called a droplet.
The normal cloud model is applicable and universal for it is based on normal distribution and on the Gauss membership function [43].
Definition 3 (Li and Du, [43]).
Suppose that $U$ is the universe of discourse and $T$ is a qualitative concept in $U$. If $x$ $(x \in U)$ is a random instantiation of the concept $T$ satisfying $x \sim N(Ex, {En'}^2)$, where $En' \sim N(En, He^2)$, and the certainty degree of $x$ belonging to $T$ satisfies $\mu = \exp\!\left( -\frac{(x - Ex)^2}{2 (En')^2} \right)$, then the distribution of $x$ in the universe $U$ is called a normal cloud.
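Definition 3 directly suggests a forward normal cloud generator: draw an entropy sample $En'$, draw a droplet $x$, and compute its certainty degree. A minimal Python sketch (function name ours) is:

```python
import math
import random

def normal_cloud_droplets(Ex, En, He, n, seed=0):
    """Generate n droplets (x, mu) of the normal cloud C(Ex, En, He)
    following Definition 3."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_p = rng.gauss(En, He)              # En' ~ N(En, He^2)
        x = rng.gauss(Ex, abs(En_p))          # x ~ N(Ex, En'^2)
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_p ** 2))  # certainty degree
        drops.append((x, mu))
    return drops
```

For clouds with $He \ll En$ (as in the examples later in this paper), $En'$ stays safely away from zero, and the droplets scatter around $Ex$ with certainty degrees in $(0, 1]$.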
The cloud model can effectively integrate the randomness and fuzziness of concepts and describe the overall quantitative property of a concept via Expectation Ex, Entropy En, and Hyper entropy He. If A is a cloud with three numerical characteristics Ex, En, and He, then cloud A can be described as A ( E x , E n , H e ) . Li, Liu and Gan [44] provided operation rules of clouds as follows. Assume that there are two clouds A ( E x 1 , E n 1 , H e 1 ) and B ( E x 2 , E n 2 , H e 2 ) , operations between cloud A and cloud B are given by:
(i) 
$A + B = \left( Ex_1 + Ex_2,\ \sqrt{En_1^2 + En_2^2},\ \sqrt{He_1^2 + He_2^2} \right)$;
(ii) 
$A - B = \left( Ex_1 - Ex_2,\ \sqrt{En_1^2 + En_2^2},\ \sqrt{He_1^2 + He_2^2} \right)$;
(iii) 
$A \times B = \left( Ex_1 Ex_2,\ \sqrt{(En_1 Ex_2)^2 + (En_2 Ex_1)^2},\ \sqrt{(He_1 Ex_2)^2 + (He_2 Ex_1)^2} \right)$;
(iv) 
$\lambda A = \left( \lambda Ex_1,\ \sqrt{\lambda}\, En_1,\ \sqrt{\lambda}\, He_1 \right)$;
(v) 
$A^\lambda = \left( Ex_1^\lambda,\ \sqrt{\lambda}\, Ex_1^{\lambda - 1} En_1,\ \sqrt{\lambda}\, Ex_1^{\lambda - 1} He_1 \right)$.
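Rules (i) and (iv), the two needed for weighted arithmetic aggregation later on, can be sketched as follows (the `Cloud` tuple and function names are ours): expectations combine linearly, while entropy and hyper-entropy combine in quadrature.

```python
import math
from collections import namedtuple

Cloud = namedtuple("Cloud", ["Ex", "En", "He"])

def cloud_add(a, b):
    # Rule (i): expectations add; En and He combine in quadrature.
    return Cloud(a.Ex + b.Ex,
                 math.hypot(a.En, b.En),
                 math.hypot(a.He, b.He))

def cloud_scale(lam, a):
    # Rule (iv): scalar multiplication scales Ex by lam, En and He by sqrt(lam).
    return Cloud(lam * a.Ex,
                 math.sqrt(lam) * a.En,
                 math.sqrt(lam) * a.He)
```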
Figure 1 shows that fuzziness is about the extension range of x, such as [37, 62]. Randomness is about the various cognitions for the DMs. In linguistic decision-making, there are occasions for which different individuals attribute different meanings to a linguistic expression. The same individual may even interpret the same linguistic expression differently in different situations. For instance, DM A may believe the membership degree of 45 belonging to the “number near 40” is 0.8, whereas DM B may regard it to be 0.85. Non-uniform cognition exists among the DMs. The process of aggregation of linguistic information will be distorted owing to the lack of uniformity. The cloud model allows the certainty degree of x to follow a probability distribution, which allows the distortion held by the DMs in the aggregation process to be neutralized to a great extent [41].

2.3. The CWAA Operator and CWGA Operator

Wang, Peng and Zhang [41] introduced the cloud model into LMCGDM and presented the cloud weighted arithmetic averaging (CWAA) operator and cloud weighted geometric averaging (CWGA) operator.
Definition 4 (Wang, Peng and Zhang, [41]).
Let $\Omega$ be the set of all clouds and $Y_i(Ex_i, En_i, He_i)$ $(i = 1, 2, \dots, n)$ be a subset of $\Omega$. A mapping CWAA: $\Omega^n \to \Omega$ is defined as the cloud weighted arithmetic averaging (CWAA) operator such that the following is true:
$\mathrm{CWAA}_w(Y_1, Y_2, \dots, Y_n) = \bigoplus_{i=1}^{n} w_i Y_i,$
where $W = (w_1, w_2, \dots, w_n)$ is the associated weight vector of $Y_i(Ex_i, En_i, He_i)$ $(i = 1, 2, \dots, n)$, $w_i \in [0, 1]$ $(i = 1, 2, \dots, n)$ and $\sum_{i=1}^{n} w_i = 1$.
Definition 5 (Wang, Peng and Zhang, [41]).
Let $\Omega$ be the set of all clouds and $Y_i(Ex_i, En_i, He_i)$ $(i = 1, 2, \dots, n)$ be a subset of $\Omega$. A mapping CWGA: $\Omega^n \to \Omega$ is defined as the CWGA operator such that the following is true:
$\mathrm{CWGA}_w(Y_1, Y_2, \dots, Y_n) = \bigotimes_{i=1}^{n} Y_i^{w_i}$
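Combining operation rules (i) and (iv), the CWAA operator of Definition 4 averages the expectations and combines the entropies in weighted quadrature. A sketch, assuming clouds are plain `(Ex, En, He)` tuples (the function name is ours):

```python
import math

def cwaa(clouds, weights):
    """Cloud weighted arithmetic average: oplus_i w_i * Y_i, built from
    cloud operation rules (i) and (iv); clouds are (Ex, En, He) tuples."""
    assert abs(sum(weights) - 1.0) < 1e-9        # weights must sum to 1
    Ex = sum(w * ex for w, (ex, _, _) in zip(weights, clouds))
    En = math.sqrt(sum(w * en * en for w, (_, en, _) in zip(weights, clouds)))
    He = math.sqrt(sum(w * he * he for w, (_, _, he) in zip(weights, clouds)))
    return (Ex, En, He)
```

A quick sanity check is idempotency: aggregating identical clouds with any normalized weight vector returns the same cloud, since $\sqrt{\sum_i w_i En^2} = En$ when $\sum_i w_i = 1$.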
Remark 2.
The CWAA and CWGA operators characterize the fuzziness and randomness of languages with cloud model, while they do not take into account the information about the relationship between the values being fused.

3. An Improved Generating Cloud Method and Cloud Algorithms

This section provides an improved method to transform linguistic variables into clouds and defines some new cloud algorithms, such as the cloud possibility degree and cloud support degree.

3.1. An Improved Generating Cloud Method

For an LMCGDM problem, natural languages generally include vague and imprecise information which is too complex and ill-defined to describe using conventional quantitative expressions, and thus there is a barrier to transforming linguistic information into quantitative values. The cloud model describes linguistic concepts via three numerical characteristics, which realizes an objective and interchangeable transformation between qualitative concepts and quantitative values. Hence, it is necessary to transform linguistic variables into clouds. The key to this transformation is the choice of a transformation method. To this end, Wang and Feng [45] proposed a classical method for generating five clouds on the basis of the golden ratio, which equals $\frac{1}{2}(1 + \sqrt{5})$.
Let n be the linguistic evaluation scale and U = [ X min ,   X max ] be an effective universe given by the DMs. Assume that the intermediate cloud is expressed by Y 0 ( E x 0 ,   E n 0 ,   H e 0 ) . The adjacent clouds around Y 0 ( E x 0 ,   E n 0 ,   H e 0 ) are respectively expressed by:
$Y_{-1}(Ex_{-1}, En_{-1}, He_{-1})$, $Y_1(Ex_1, En_1, He_1)$, $Y_{-2}(Ex_{-2}, En_{-2}, He_{-2})$, $Y_2(Ex_2, En_2, He_2)$, $\dots$, $Y_{-(n-1)/2}(Ex_{-(n-1)/2}, En_{-(n-1)/2}, He_{-(n-1)/2})$, $Y_{(n-1)/2}(Ex_{(n-1)/2}, En_{(n-1)/2}, He_{(n-1)/2})$.
The numerical characteristics of five clouds are shown as follows (Wang and Feng, [45]):
$Ex_0 = (X_{\min} + X_{\max})/2$, $En_1 = En_{-1} = 0.382 (X_{\max} - X_{\min})/6$, $En_0 = 0.618\, En_1$, $En_2 = En_{-2} = En_1 / 0.618$, $Ex_{-1} = Ex_0 - 0.382 (X_{\min} + X_{\max})/2$, $Ex_{-2} = X_{\min}$, $Ex_2 = X_{\max}$, $He_1 = He_{-1} = He_0 / 0.618$, $Ex_1 = Ex_0 + 0.382 (X_{\min} + X_{\max})/2$, $He_2 = He_{-2} = He_1 / 0.618$,
Here H e 0 is given beforehand.
However, we find that there are three weaknesses in the method of Wang and Feng [45].
  • First, the expectation of clouds may exceed the range of the universe $U$. For example, if $U = [10, 20]$, then $Ex_{-1} = Ex_0 - 0.382 \frac{X_{\min} + X_{\max}}{2} = 9.27 < 10$, and $Ex_1 = Ex_0 + 0.382 \frac{X_{\min} + X_{\max}}{2} = 20.73 > 20$.
  • Second, the method of Wang and Feng [45] cannot be widely used, for it is limited to a linguistic evaluation scale of five labels.
  • Third, the method cannot effectively distinguish the linguistic evaluation scale over a symmetrical interval. For instance, if $U = [-a, a]$, $a > 0$, then the expectation values $Ex_1 = Ex_{-1} = Ex_0 = 0$, which leaves the linguistic evaluation scale undistinguished.
To overcome the above weaknesses, we present an improved method to transform linguistic variables into clouds by means of the cloud construction principle, which is shown as follows.
Procedure for transforming linguistic variables into clouds:
Step 1. Calculate E x
$Ex_0 = (X_{\min} + X_{\max})/2$, $Ex_i = Ex_0 + 0.382\, i \left( \frac{X_{\max} - X_{\min}}{2} \right) \Big/ \frac{n-3}{2}$, $Ex_{(n-1)/2} = X_{\max}$, $Ex_{-(n-1)/2} = X_{\min}$, $Ex_{-i} = Ex_0 - 0.382\, i \left( \frac{X_{\max} - X_{\min}}{2} \right) \Big/ \frac{n-3}{2}$, $\left( 1 \leq i \leq \frac{n-3}{2} \right)$.
Step 2. Compute E n
$En_1 = En_{-1} = 0.382 \times (X_{\max} - X_{\min})/6$, $En_0 = 0.618\, En_1$, $En_i = En_{-i} = En_{i-1}/0.618$, $\left( 2 \leq i \leq \frac{n-1}{2} \right)$.
Step 3. Calculate H e
$He_i = He_{-i} = He_{i-1}/0.618$, $1 \leq i \leq (n-1)/2$; here $He_0$ is given beforehand.
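Steps 1–3 can be sketched in Python (function name ours). For $n = 5$, $U = [10, 20]$ and $He_0 = 0.05$, it reproduces, up to rounding, the improved clouds of Example 1 below:

```python
def generate_clouds(n, x_min, x_max, he0):
    """Improved generating cloud method (Steps 1-3) for an odd linguistic
    scale of n labels over U = [x_min, x_max].
    Returns {i: (Ex, En, He)} for i = -(n-1)//2 .. (n-1)//2."""
    assert n >= 5 and n % 2 == 1
    half, inner = (n - 1) // 2, (n - 3) // 2
    ex0 = (x_min + x_max) / 2.0
    step = 0.382 * ((x_max - x_min) / 2.0) / inner    # Step 1 increment
    Ex = {0: ex0, half: x_max, -half: x_min}
    for i in range(1, inner + 1):
        Ex[i], Ex[-i] = ex0 + i * step, ex0 - i * step
    En = {1: 0.382 * (x_max - x_min) / 6.0}           # Step 2
    En[0] = 0.618 * En[1]
    for i in range(2, half + 1):
        En[i] = En[i - 1] / 0.618
    He = {0: he0}                                     # Step 3
    for i in range(1, half + 1):
        He[i] = He[i - 1] / 0.618
    return {i: (Ex[i], En[abs(i)], He[abs(i)]) for i in Ex}
```

Note that all expectations stay inside $[x_{\min}, x_{\max}]$, as Theorem 1 below guarantees.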
The following Theorem proves that our method can overcome the weaknesses of method given by Wang and Feng [45].
Theorem 1.
Let $n$ be the linguistic evaluation scale and $U = [X_{\min}, X_{\max}]$ be a valid universe given by the DMs. If $Y_i(Ex_i, En_i, He_i)$ $(i = -(n-1)/2, \dots, 0, \dots, (n-1)/2)$ are the clouds in $U$, then $Ex_i \neq Ex_j$ $(i \neq j$; $i, j = -(n-1)/2, \dots, 0, \dots, (n-1)/2)$, and $X_{\min} \leq Ex_i \leq X_{\max}$ $(i = -(n-1)/2, \dots, 0, \dots, (n-1)/2)$.
Proof. 
(1)
First, we prove that the expectations of clouds are different from each other.
Let $l_U = X_{\max} - X_{\min} \neq 0$; according to Step 1 of the procedure for transforming linguistic variables into clouds, we get:
$0.382 \times (X_{\max} - X_{\min})/2 = 0.382\, l_U / 2 \neq 0.$
Therefore,
$Ex_i = Ex_0 + 0.382\, i \times l_U/(n-3) \neq Ex_j = Ex_0 + 0.382\, j \times l_U/(n-3) \quad (i \neq j),$
$Ex_{-i} = Ex_0 - 0.382\, i \times l_U/(n-3) \neq Ex_{-j} = Ex_0 - 0.382\, j \times l_U/(n-3) \quad (i \neq j).$
It follows from expressions (6) and (7) that expectations of clouds are different from each other.
(2)
Second, we prove that all the expectations of clouds fall within the range of the universe.
From Step 1 of the procedure for transforming linguistic variables into clouds, we see that:
$Ex_1 = \min \{ Ex_i \}$, $Ex_{(n-3)/2} = \max \{ Ex_i \}$, $(1 \leq i \leq (n-3)/2)$.
Since $Ex_1 = X_{\max} - \left( 0.5 - \frac{0.382}{n-3} \right) l_U$, it can be concluded that:
$X_{\min} < Ex_1 < X_{\max}.$
Similarly, noting that $Ex_{(n-3)/2} = X_{\max} - 0.309\, l_U$, we then have:
$X_{\min} < Ex_{(n-3)/2} < X_{\max}.$
Therefore, the expectations of the clouds $Y_i$ $(1 \leq i \leq \frac{n-3}{2})$ fall into the range of the universe, and the boundary expectation $Ex_{(n-1)/2} = X_{\max}$ lies in the range by construction.
By the same token, it is easy to verify that the expectations of the clouds $Y_{-i}$ $(1 \leq i \leq \frac{n-1}{2})$ fall into the range of the universe. Based on the above analysis, we can conclude that all the expectations of clouds fall into the range of the universe.☐
Remark 3.
Theorem 1 shows that the improved generating cloud method can guarantee that all the expectations fall into the range of the universe; meanwhile, this method can effectively distinguish the linguistic evaluation scale over a symmetrical interval and can transform linguistic term sets with any odd number of labels into clouds, rather than only five labels.
Example 1.
Let $U = [10, 20]$, $He_0 = 0.05$ and the linguistic assessment set $H = \{ h_{-2} = \text{very poor},\ h_{-1} = \text{poor},\ h_0 = \text{fair},\ h_1 = \text{good},\ h_2 = \text{very good} \}$. Then the five clouds can be obtained by using the classical method and the improved generating cloud method, respectively.
  • The classical method given by Wang and Feng [45]:
    $Y_{-2} = (10.0, 1.031, 0.13)$, $Y_{-1} = (9.27, 0.637, 0.08)$, $Y_0 = (15.0, 0.394, 0.05)$, $Y_1 = (20.73, 0.637, 0.08)$, $Y_2 = (20.0, 1.031, 0.13)$.
  • The improved generating cloud method:
    $Y_{-2} = (10.0, 1.031, 0.13)$, $Y_{-1} = (13.1, 0.637, 0.08)$, $Y_0 = (15.0, 0.394, 0.05)$, $Y_1 = (16.9, 0.637, 0.08)$, $Y_2 = (20.0, 1.031, 0.13)$.
From Example 1, we find that some expectations of clouds obtained by the classical method exceed the range of the universe, e.g., $Ex_{-1} = 9.27 < 10$, $Ex_1 = 20.73 > 20.0$. In particular, we see that $Ex_1 > Ex_2$, $En_1 < En_2$, $He_1 < He_2$. That is, cloud $Y_1$ is absolutely better than cloud $Y_2$, which is obviously inconsistent with the fact that linguistic variable $h_2$ is absolutely better than $h_1$. Fortunately, these weaknesses are corrected by the improved generating cloud method.

3.2. New Algorithms of the Cloud Model

This subsection defines the cloud distance, cloud possibility degree and cloud support degree, which will be used for cloud comparison and the weight determination, respectively.
Based on “3En rules” of cloud model, the distance between clouds is defined as follows.
Definition 6.
Let $Y_1 = Y_1(Ex_1, En_1, He_1)$ and $Y_2 = Y_2(Ex_2, En_2, He_2)$ be two clouds in the universe $U$. Then, the distance $d(Y_1, Y_2)$ between the clouds $Y_1$ and $Y_2$ is given by:
$d(Y_1, Y_2) = \frac{1}{2} \left( \underline{d}(Y_1, Y_2) + \overline{d}(Y_1, Y_2) \right),$
where $\underline{d}(Y_1, Y_2) = \left| \left( 1 - \frac{3\sqrt{En_1^2 + He_1^2}}{Ex_1} \right) Ex_1 - \left( 1 - \frac{3\sqrt{En_2^2 + He_2^2}}{Ex_2} \right) Ex_2 \right|$, and $\overline{d}(Y_1, Y_2) = \left| \left( 1 + \frac{3\sqrt{En_1^2 + He_1^2}}{Ex_1} \right) Ex_1 - \left( 1 + \frac{3\sqrt{En_2^2 + He_2^2}}{Ex_2} \right) Ex_2 \right|$.
Proposition 1.
The cloud distance has the following properties:
(i) 
d ( Y 1 ,   Y 2 ) 0 ;
(ii) 
d ( Y 1 ,   Y 2 ) = d ( Y 2 ,   Y 1 ) ;
(iii) 
For   Y 3 F , d ( Y 1 ,   Y 3 ) d ( Y 1 ,   Y 2 ) + d ( Y 2 ,   Y 3 ) .
where F stands for the collection of all clouds in U .
Proof. 
See Appendix A.☐
Remark 4.
If $En_1 = He_1 = En_2 = He_2 = 0$, then the clouds degenerate into real numbers; in this case, $d(Y_1, Y_2) = |Ex_1 - Ex_2|$.
Based on the cloud distance, a cloud possibility degree can be defined as follows.
Definition 7.
Let $Y_1 = Y_1(Ex_1, En_1, He_1)$ and $Y_2 = Y_2(Ex_2, En_2, He_2)$ be two clouds in the universe $U$, and $Y^* = Y(\max_i Ex_i, \min_i En_i, \min_i He_i)$ $(i = 1, 2)$ be the positive ideal cloud; then the cloud possibility degree is defined as:
$p(Y_1 \geq Y_2) = \frac{d(Y^*, Y_2)}{d(Y^*, Y_1) + d(Y^*, Y_2)},$
where $d(Y^*, Y_1)$ and $d(Y^*, Y_2)$ are the distances between $Y^*$ and $Y_1$, $Y_2$, respectively.
Definition 7 shows that the cloud possibility degree $p(Y_1 \geq Y_2)$ is described by the distances $d(Y^*, Y_1)$ and $d(Y^*, Y_2)$. The larger the distance between $Y_2$ and $Y^*$ is, the larger the cloud possibility degree $p(Y_1 \geq Y_2)$ is. The cloud possibility degree can be used for cloud comparison.
From Definition 7, we can easily obtain the following properties of cloud possibility degree.
Proposition 2.
Let Y 1 = Y 1 ( E x 1   ,   E n 1   ,   H e 1 ) , Y 2 = Y 2 ( E x 2 ,   E n 2 ,   H e 2 ) and Y 3 = Y 3 ( E x 3 ,   E n 3 ,   H e 3 ) be three cloud variables. Then, the cloud possibility degree has the following properties:
(i) 
$0\le p(Y_1\ge Y_2)\le 1$;
(ii) 
$p(Y_1\ge Y_2)=1 \Leftrightarrow Y^*=Y_1$;
(iii) 
$p(Y_1\ge Y_2)=0 \Leftrightarrow Y^*=Y_2$;
(iv) 
$p(Y_1\ge Y_2)+p(Y_2\ge Y_1)=1$; in particular, $p(Y_1\ge Y_1)=0.5$;
(v) 
if $p(Y_1\ge Y_2)=1$ and $p(Y_2\ge Y_3)=1$, then $p(Y_1\ge Y_3)=1$;
(vi) 
if $p(Y_1\ge Y_2)=1$, then $p(Y_1\ge Y_3)\ge p(Y_2\ge Y_3)$.
To rank clouds Y i ( i = 1 ,   2 ,   ,   m ) , following Wan and Dong [46] who ranked interval-valued intuitionistic fuzzy numbers via possibility degree, we can construct a fuzzy complementary matrix of cloud possibility degree as follows:
$$P=\begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1m}\\ p_{21} & p_{22} & \cdots & p_{2m}\\ \vdots & \vdots & \ddots & \vdots\\ p_{m1} & p_{m2} & \cdots & p_{mm} \end{bmatrix},$$
where $Y^*=Y(\max Ex_i,\ \min En_i,\ \min He_i)$, $p_{ij}\ge 0$, $p_{ij}+p_{ji}=1$ and $p_{ii}=0.5$. Then, the ranking vector $V=(v_1,\,v_2,\,\dots,\,v_m)^T$ is determined by:
$$v_i=\frac{1}{m(m-1)}\left(\sum_{j=1}^m p_{ij}+\frac{m}{2}-1\right)\quad (i=1,\,2,\,\dots,\,m),$$
and consequently, the clouds $Y_i$ ($i=1,\,2,\,\dots,\,m$) can be ranked in descending order of the values $v_i$: the larger the value of $v_i$ is, the higher the corresponding cloud $Y_i$ ranks.
The advantage of utilizing the vector $V=(v_1,\,v_2,\,\dots,\,v_m)^T$ for ranking clouds lies in the fact that it fully uses the decision-making information and keeps the calculation simple.
Proposition 3.
Suppose that $Y_1(Ex_1,\,En_1,\,He_1)$ and $Y_2(Ex_2,\,En_2,\,He_2)$ are two clouds in the universe $U$. If $Ex_1\ge Ex_2$, $En_1\le En_2$ and $He_1\le He_2$, then $Y_1\succ Y_2$.
Proof. 
See Appendix A.☐
Example 2.
Let Y 1 ( 3.80 ,   0.663 ,   0.09 ) , Y 2 ( 4.30 ,   1.025 ,   0.13 ) , Y 3 ( 7.58 ,   1.320 ,   0.17 ) , Y 4 ( 4.76 ,   0.676 ,   0.09 ) be four normal clouds, and these clouds can be ranked by the values of v i ( i = 1 ,   2 ,   ,   m ) .
Note that the positive ideal cloud is $Y^*=Y(7.58,\,0.663,\,0.09)$; according to Equation (10), we have $d(Y^*,\,Y_1)=3.78$, $d(Y^*,\,Y_2)=3.28$, $d(Y^*,\,Y_3)=1.96$ and $d(Y^*,\,Y_4)=2.82$.
Consequently, based on Equation (11), the possibility degree matrix can be derived as follows:
$$P=\begin{bmatrix} 0.500 & 0.465 & 0.344 & 0.427\\ 0.535 & 0.500 & 0.377 & 0.462\\ 0.656 & 0.623 & 0.500 & 0.587\\ 0.573 & 0.538 & 0.413 & 0.500 \end{bmatrix}.$$
According to Equation (13), we further derive the ranking vector V = ( 0.228 ,   0.240 ,   0.280 ,   0.252 ) T . So the ranking of the normal clouds is: Y 3 > Y 4 > Y 2 > Y 1 .
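Example 2 can be reproduced end to end in Python. All names below are ours; the distance function restates Definition 6, the matrix is built from Equation (11), and the ranking vector from Equation (13). The computed values agree with the printed ones to within the paper's rounding.

```python
import math

def dist(y1, y2):
    # Definition 6: average distance between the two 3-sigma endpoints
    r = lambda c: 3 * math.sqrt(c[1]**2 + c[2]**2)
    return 0.5 * (abs((y1[0] - r(y1)) - (y2[0] - r(y2)))
                  + abs((y1[0] + r(y1)) - (y2[0] + r(y2))))

clouds = [(3.80, 0.663, 0.09), (4.30, 1.025, 0.13),
          (7.58, 1.320, 0.17), (4.76, 0.676, 0.09)]
m = len(clouds)
# positive ideal cloud: max Ex, min En, min He
ystar = (max(c[0] for c in clouds),
         min(c[1] for c in clouds),
         min(c[2] for c in clouds))
d = [dist(ystar, c) for c in clouds]
# fuzzy complementary possibility matrix, Equation (11)
P = [[d[j] / (d[i] + d[j]) for j in range(m)] for i in range(m)]
# ranking vector, Equation (13)
V = [(sum(P[i]) + m / 2 - 1) / (m * (m - 1)) for i in range(m)]
# descending order of v_i: indices of Y3, Y4, Y2, Y1
order = sorted(range(m), key=lambda i: V[i], reverse=True)
```

Running this yields the ranking $Y_3 > Y_4 > Y_2 > Y_1$ of Example 2.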
Following Yager [37], we can define the cloud support degree.
Definition 8.
Let $F$ be the set of all clouds and let support (hereafter, Sup) be a mapping from $F\times F$ to $\mathbb{R}$. For any $Y_\alpha$ and $Y_\beta$, Sup is required to satisfy:
(i) 
$Sup(Y_\alpha,\,Y_\beta)\in[0,\,1]$;
(ii) 
$Sup(Y_\alpha,\,Y_\beta)=Sup(Y_\beta,\,Y_\alpha)$;
(iii) 
$Sup(Y_\alpha,\,Y_\beta)\ge Sup(Y_i,\,Y_j)$ if $d(Y_\alpha,\,Y_\beta)<d(Y_i,\,Y_j)$, where $d$ is a distance measure for clouds.
Then $Sup(Y_\alpha,\,Y_\beta)$ is called the support degree for $Y_\alpha$ from $Y_\beta$.
Note that the Sup measure is essentially a similarity index: the closer two clouds are, the more they support each other. The support degree will be used to determine the weights of the aggregation operator.

4. Cloud Generalized Power Ordered Weighted Average Operator

For an LMCGDM problem, when the linguistic information is converted to clouds, an aggregation step must be performed for a collective evaluation. In this section, we provide a cloud generalized power ordered weighted average (CGPOWA) operator and study its family which includes many different operators.
Following LPOWA operator of Xu, Merigó and Wang [38] and using the cloud support degree, we can define a cloud generalized power ordered weighted average (CGPOWA) operator as follows.
Definition 9.
Let $F$ be the set of all clouds and $\{Y_i(Ex_i,\,En_i,\,He_i)\,|\,i=1,\,2,\,\dots,\,n\}$ be a subset of $F$. A mapping $CGPOWA: F^n\to F$ is called a cloud generalized power ordered weighted average (CGPOWA) operator if
$$f(Y_1,\,Y_2,\,\dots,\,Y_n)=\left(\sum_{j=1}^n w_j\,Y_j^\lambda\right)^{1/\lambda},$$
where $\lambda$ is a parameter satisfying $\lambda\in(0,\,+\infty)$,
$$w_j=\varphi\left(\frac{R_j}{TV}\right)-\varphi\left(\frac{R_{j-1}}{TV}\right),\quad R_j=\sum_{i=1}^{j} V_i,\quad TV=\sum_{i=1}^{n} V_i,\quad V_i=1+T(Y_i),\quad T(Y_i)=\sum_{j=1,\,j\ne i}^{n} Sup(Y_i,\,Y_j),$$
and $Y_j$ is the $j$th largest cloud of the $Y_i$ ($i=1,\,2,\,\dots,\,n$); the function $\varphi:[0,\,1]\to[0,\,1]$ is a BUM function satisfying $\varphi(0)=0$, $\varphi(1)=1$ and $\varphi(x)\ge\varphi(y)$ if $x>y$.
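The weight construction in Definition 9 can be sketched directly from a support matrix. This is a minimal Python version with the identity BUM function as default; `power_weights` is our name and the support values used in the test are illustrative, not from the paper:

```python
def power_weights(sup, phi=lambda x: x):
    """Weights of Definition 9: w_j = phi(R_j/TV) - phi(R_{j-1}/TV),
    with V_i = 1 + T(Y_i), T(Y_i) the summed supports from the other
    arguments, and R_0 = 0."""
    n = len(sup)
    V = [1 + sum(sup[i][j] for j in range(n) if j != i) for i in range(n)]
    TV = sum(V)
    w, R = [], 0.0
    for j in range(n):
        w.append(phi((R + V[j]) / TV) - phi(R / TV))
        R += V[j]
    return w
```

With the identity BUM function this reduces to $w_j=V_j/TV$, and for any valid BUM function the weights telescope to sum to one.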
There is a noteworthy theorem that can be deduced from the definition given above.
Theorem 2.
The value aggregated by the CGPOWA operator is still a cloud, and:
$$f(Y_1,\,Y_2,\,\dots,\,Y_n)=\left(\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}},\ \sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}En_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1},\ \sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}He_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1}\right).$$
Proof. 
From operational rules of the cloud given by Li, Liu and Gan [44], we have
$$w_i Y_i^\lambda=\left(w_i Ex_i^\lambda,\ \sqrt{\lambda w_i}\,Ex_i^{\lambda-1}En_i,\ \sqrt{\lambda w_i}\,Ex_i^{\lambda-1}He_i\right),$$
and
$$\sum_{i=1}^n w_i Y_i^\lambda=\left(\sum_{i=1}^n w_i Ex_i^\lambda,\ \sqrt{\lambda\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}En_i\right)^2},\ \sqrt{\lambda\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}He_i\right)^2}\right).$$
Therefore, from Definition 9, we derive that
$$f(Y_1,\,Y_2,\,\dots,\,Y_n)=\left(\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}},\ \sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}En_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1},\ \sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}He_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1}\right).$$
☐
The CGPOWA operator given in Definition 9 has the following properties.
Proposition 4.
(i) 
(Idempotency). If Y i = Y = ( E x ,   E n ,   H e ) for i = 1 ,   2 ,   ,   n , then f ( Y 1 ,   Y 2 ,   ,   Y n ) = Y .
(ii) 
(Commutativity). If Y ˜ i is any permutation of Y i , then f ( Y 1 ,   Y 2 ,   ,   Y n ) = f ( Y ˜ 1 ,   Y ˜ 2 ,   ,   Y ˜ n ) .
(iii) 
(Boundedness). If $P(Y_i\ge Y_1)=1$ and $P(Y_n\ge Y_i)=1$ ($i=1,\,2,\,\dots,\,n$), then $P(Y\ge Y_1)=1$ and $P(Y_n\ge Y)=1$, where $Y=f(Y_1,\,Y_2,\,\dots,\,Y_n)$.
Proof. 
See Appendix A. ☐
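The closed form of Theorem 2, together with the idempotency of Proposition 4, can be checked numerically. A minimal sketch, assuming clouds are (Ex, En, He) tuples and the weights sum to one; `cgpowa` is our name, not the paper's:

```python
import math

def cgpowa(clouds, w, lam):
    """Component-wise closed form of Theorem 2."""
    s = sum(wi * ex**lam for wi, (ex, en, he) in zip(w, clouds))
    scale = s ** (1 / lam - 1)
    en_out = math.sqrt(sum(wi * (ex**(lam - 1) * en)**2
                           for wi, (ex, en, he) in zip(w, clouds))) * scale
    he_out = math.sqrt(sum(wi * (ex**(lam - 1) * he)**2
                           for wi, (ex, en, he) in zip(w, clouds))) * scale
    return (s ** (1 / lam), en_out, he_out)
```

Feeding identical clouds reproduces the input cloud exactly, which is the idempotency property; with $\lambda=1$ the $Ex$ component collapses to the weighted arithmetic mean.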
Remark 5.
The CGPOWA operator possesses the following attractive features: (a) it considers the importance of the ordered position of each input argument, where each input argument is a cloud; (b) it has the basic features of the LPOWA operator; for instance, it considers the relationships between the arguments and gauges their similarity degrees; (c) the weighting vector associated with the CGPOWA operator can be determined by Equation (15), which provides an objective weighting model based on the objective data rather than on the preferences and knowledge of the DMs; moreover, it reduces the influence of unduly high (or low) arguments on the decision result by using the support measure to assign them lower weights; (d) the CGPOWA operator considers the decision arguments and their relationships, which are neglected by existing cloud aggregation operators; in addition, it can describe the randomness of linguistic terms, which linguistic power aggregation operators cannot; (e) if the linguistic information is converted to a sequence of random variables with certain distribution and moment properties, the CGPOWA operator can be formulated in an abstract stochastic model.
Table 1 shows that the CGPOWA operator can degenerate into many aggregation operators (here $\varphi(x)=x$), such as the cloud power ordered weighted quadratic average (CPOWQA) operator, the cloud power ordered weighted average (CPOWA) operator, the cloud power ordered weighted geometric average (CPOWGA) operator, the CGPA operator, the CGM operator, the cloud power weighted quadratic average (CPWQA) operator, and the CWAA and CWGA operators (see Appendix B for a proof).
By taking different weighting vector W = ( w 1 ,   w 2 ,   ,   w n ) T in CGPOWA operator, we can obtain some other aggregation operators such as the maximum operator, the minimum operator, the cloud generalized mean operator and the Window-CGPOWA operator (See Table 2).

5. An Approach for LGDM Based on the CGPOWA Operator

The LMCGDM problem is the process of finding the best alternative among all feasible alternatives, which are evaluated according to a number of criteria with linguistic information. In general, an LMCGDM problem involves multiple experts (the DMs), multiple decision criteria and multiple alternatives.
To better understand the procedure for solving the LMCGDM problem on the basis of a cloud model, we develop a general framework for the LMCGDM aggregation procedure (see Figure 2), which contains two stages: (i) individual aggregation, an MCDM process for each DM; and (ii) group aggregation, a multiple-expert decision-making process composed of multiple experts and multiple alternatives. For individual aggregation, we determine the weights of criteria and then aggregate the criteria values of each alternative into one collective value by means of the CGPOWA operator, deriving a collective decision matrix composed of the DMs and alternatives. For group aggregation, we determine the weights of the DMs based on the collective decision matrix, and further aggregate the collective values of each alternative into one result by using the CGPOWA operator. Finally, we can assess the alternatives.
We develop a new algorithm for LMCGDM based on the improved generating cloud method and the CGPOWA operator, with the weight information completely unknown. The algorithm consists of six steps. We first describe the algorithm inputs.
Input data of our new LMCGDM algorithm. Let $A=\{A_1,\,\dots,\,A_i,\,\dots,\,A_m\}$ be the set of $m$ alternatives, $C=\{c_1,\,\dots,\,c_j,\,\dots,\,c_n\}$ be the set of $n$ criteria, and $D=\{d_1,\,\dots,\,d_k,\,\dots,\,d_t\}$ be the set of $t$ DMs. Assume that the DM $d_k$ provides his/her preference value $\tilde b_{ij}^{(k)}$ for the alternative $A_i\in A$ with respect to the criterion $c_j\in C$, where $\tilde b_{ij}^{(k)}$ takes the form of a linguistic variable; consequently, we can construct a decision matrix $\tilde B_k=(\tilde b_{ij}^{(k)})_{m\times n}$ for $d_k\in D$. We summarize all input data below:
$$\{A,\ C,\ D,\ \tilde B_k\}.$$
Given the input data in (17), our objective is to determine the optimal alternative A * A . We give a new LMCGDM approach below.
An LMCGDM algorithm.
Step 1. Transform the linguistic information into clouds.
Transform the linguistic decision matrix B ˜ k = ( b ˜ i j ( k ) ) m × n into a cloud decision matrix R ^ k = ( r ^ i j ( k ) ) m × n ( k = 1 , 2 , , t ) by applying the improved generating cloud method developed in Section 3.
Step 2. Determine the weights of criteria.
Calculate the cloud support degrees:
$$Sup(\hat r_{sj}^{\,k},\ \hat r_{qj}^{\,k})=1-\frac{2\,d(\hat r_{sj}^{\,k},\ \hat r_{qj}^{\,k})}{\sum_{q=1,\,q\ne s}^{n} d(\hat r_{sj}^{\,k},\ \hat r_{qj}^{\,k})+\sum_{s=1,\,s\ne q}^{n} d(\hat r_{qj}^{\,k},\ \hat r_{sj}^{\,k})},\quad q=1,\,2,\,\dots,\,n,$$
which satisfy the support conditions (i)–(iii) in Definition 8. Here, the cloud distance measure is expressed by Equation (10), and S u p ( r ^ s j k , r ^ q j k ) denotes the similarity between the sth largest cloud preference value r ^ s j k and the qth largest cloud preference value r ^ q j k . We further calculate the weights of criteria by means of Equation (15).
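The support computation in Step 2 can be sketched as follows; here the support between two cloud values is taken as one minus twice their distance, divided by the summed distances from each member of the pair to all other values (our reading of Equation (18)). All names are ours, the distance function restates Definition 6, and the clouds used in the test are illustrative:

```python
import math

def dist(y1, y2):
    # Definition 6: average distance between the two 3-sigma endpoints
    r = lambda c: 3 * math.sqrt(c[1]**2 + c[2]**2)
    return 0.5 * (abs((y1[0] - r(y1)) - (y2[0] - r(y2)))
                  + abs((y1[0] + r(y1)) - (y2[0] + r(y2))))

def support_matrix(clouds):
    """Pairwise supports: 1 - 2*d(pair) / (summed distances from each
    member of the pair to all other clouds)."""
    n = len(clouds)
    row = [sum(dist(clouds[s], clouds[q]) for q in range(n) if q != s)
           for s in range(n)]
    sup = [[0.0] * n for _ in range(n)]
    for s in range(n):
        for q in range(n):
            if s != q:
                sup[s][q] = 1 - 2 * dist(clouds[s], clouds[q]) / (row[s] + row[q])
    return sup
```

Under this reading the three support conditions of Definition 8 hold: each off-diagonal entry lies in [0, 1], the matrix is symmetric, and closer pairs receive larger supports.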
Step 3. Aggregate the criteria values of each alternative into a collective value.
Utilize Equation (14) to aggregate all cloud decision matrices R ^ k = ( r ^ i j ( k ) ) m × n ( k = 1 ,   2 ,   ,   t ) into a collective cloud decision matrix R = ( r i j ) m × t .
Step 4. Calculate the weights of the DMs.
Calculate the cloud support degrees:
$$Sup(r_{hj},\ r_{fj})=1-\frac{2\,d(r_{hj},\ r_{fj})}{\sum_{f=1,\,f\ne h}^{t} d(r_{hj},\ r_{fj})+\sum_{h=1,\,h\ne f}^{t} d(r_{fj},\ r_{hj})},\quad f=1,\,2,\,\dots,\,t,$$
which satisfy the support conditions (i)–(iii) in Definition 8. Here, the cloud distance measure is calculated by Equation (10). According to Equation (15), we can calculate the weights of the DMs.
Step 5. Aggregate the collective values of each alternative into one result.
Utilize Equation (14) to compute the collective overall preference value r i of the alternative A i .
Step 6. Rank the alternatives and choose the best one(s).
According to the cloud possibility degree Equation (11) and the ranking vector Equation (13), we rank the collective overall preference values $r_i$ ($i=1,\,2,\,\dots,\,m$) in descending order and select the best alternative accordingly.
Remark 6.
Compared with the traditional linguistic approaches (e.g., linguistic membership function model, linguistic symbolic model, 2-tuple linguistic model) and the existing cloud aggregating method (e.g., [41]), the attractive features of our approach are as follows.
(a) 
The three traditional LMCGDM approaches only use a uniform granular linguistic assessment scale, while ours adopts a multi-granular linguistic assessment scale of great psychological sense. In other words, when assessing the alternatives, the three traditional approaches regard the average level as the unique criterion, which makes the evaluations rough and one-sided. Our method, however, considers not only the average level but also the fluctuation and stability of qualitative concepts by using En and He, respectively. These statements are also examined by the numerical analysis in Section 6.2.
(b) 
In addition, the corresponding aggregation operators for the three traditional LMCGDM methods are the linguistic power average (LPA) operator, triangular fuzzy weighted averaging (TFWA) operator and 2-tuple weighted averaging (TWA) operator, respectively. Note that these operators have their own weaknesses when describing the randomness, while our aggregation operator can effectively reduce the loss and distortion of information in aggregating process, and correspondingly improve the precision of the results.
For instance, the LPA and TWA operators cannot precisely depict randomness: when converting the linguistic variables into real numbers, they directly map the random decision-making information into a precise domain, so part of the linguistic information is lost. The TFWA operator can describe fuzziness but cannot describe randomness.
(c) 
Compared with the cloud aggregating method (cf., [41]), our method provides a completely objective weighting model by using the cloud support degree, while the weights in Wang et al. [41] are subjectively given by the DMs which may result in different ranking results if the DMs provide different weight vectors. In addition, the CGPOWA operator considers the relationships between the arguments provided by the DMs, while the cloud aggregating operators in Wang et al. [41] do not.
(d) 
Our method presents a simple measure for comparing different clouds via the cloud possibility degree Equation (11) and the ranking vector Equation (13), which requires no knowledge about the distribution of cloud drops. This differs from the score function [41], which needs to know the distribution of cloud drops. This is also an attractive feature because, in most cases, the distribution of cloud drops is unknown and cloud drops are difficult to acquire.

6. Illustrative Example

This section provides a numerical example to illustrate the application of the approach proposed in Section 5 and makes a comparative study to examine the validity of our approach.

6.1. An Investment Selection Problem

Following [41], we assume that there is an investment company that wants to invest a sum of money in another company. There are five possible alternatives for investing the money: a car company $A_1$, a food company $A_2$, a computer company $A_3$, an arms company $A_4$, and a TV company $A_5$. The investment company will make a decision according to the following six criteria: financial risk $c_1$; technical risk $c_2$; production risk $c_3$; market risk $c_4$; management risk $c_5$; and environmental risk $c_6$. The five possible alternatives $A_i$ ($i=1,\,\dots,\,5$) are evaluated by the three DMs $d_k$ ($k=1,\,2,\,3$) on these six criteria using the linguistic term set $H=\{h_{-3}=\text{very poor},\ h_{-2}=\text{poor},\ h_{-1}=\text{medium poor},\ h_0=\text{fair},\ h_1=\text{medium good},\ h_2=\text{good},\ h_3=\text{very good}\}$. The linguistic decision matrix is shown in Table 3.
To simplify the calculation, throughout the numerical analysis we assume that $\varphi(x)=x$ and $\lambda=2$ in the CGPOWA operator, and that the universe is $U=[0,\,10]$ with $He_0=0.05$ in the improved generating cloud method. Based on the approach developed in Section 5 and the given parameters, the alternatives can be ranked by applying the MATLAB or Lingo software package.
Procedures of LMCGDM based on the cloud model.
Step 1. Transform the linguistic decision matrix into the corresponding cloud decision matrix R ^ by using the new cloud generating method. (See Table 4)
Step 2. Calculate the weights of criteria by means of Equation (15). (See Table 5)
Step 3. Aggregate the criteria values of each alternative into a collective value by using Equation (14). (See Table 6)
Step 4. Calculate the weights of the DMs by means of Equation (15). (See Table 7)
Step 5. Utilize Equation (14) to compute the collective overall preference value r i of the alternative A i .
r 1 : Y 1 ( 3.85 , 0.656 , 0.08 ) ,   r 2 : Y 2 ( 4.73 , 0.679 , 0.09 ) ,   r 3 : Y 3 ( 7.52 , 1.309 , 0.16 ) ,   r 4 : Y 4 ( 4.72 , 0.702 , 0.09 ) ,   r 5 : Y 5 ( 4.41 , 1.037 , 0.13 ) .
Step 6. Rank the alternatives and choose the best one(s).
From Step 5, we get the positive ideal cloud $Y^*=Y(7.52,\,0.656,\,0.08)$. Then, the ranking vector is derived by Equations (11) and (13): $V=(0.1836,\,0.2006,\,0.2216,\,0.2003,\,0.1939)^T$. Consequently, the ranking of the clouds is $r_3>r_4>r_2>r_5>r_1$, and the ranking order in the light of the overall collective preference values $r_i$ ($i=1,\,2,\,3,\,4,\,5$) is:
A 3 A 4 A 2 A 5 A 1 .
Thus, the best investment alternative is the computer company $A_3$, which is in accordance with the result of Wang et al. [41]. However, compared with the cloud aggregating method, our approach has the following features:
(1)
As for the weighting method, we provide an objective weighting model based on the cloud support degree, while the weights in Wang et al. [41] are subjectively given by the DMs.
For example, Wang et al. [41] supposed that the weights of the DMs and criteria are respectively given by λ = ( 0.35 , 0.4 , 0.25 ) T and W = ( 0.12 , 0.15 , 0.18 , 0.25 , 0.2 , 0.1 ) T , which completely rely on the subjective preferences and knowledge of the DMs. Thus, the ranking results obtained by Wang et al. [41] are not stable because there may exist different ranking results if the DMs provide different weight vectors (See Table 8). However, our method does not require the DMs to provide weighting information and the weights are derived based on the objective data and the cloud support degree Equation (15), and then the ranking results generally remain unchanged (See Table 9).
(2)
Our method considers the relationship between arguments given by the DMs, while Wang et al. [41] neglect it.
For instance, CWAA and CWGA operators provided by Wang et al. [41] do not consider the relationship between the arguments, while the CGPOWA operator we derived considers the relationship among input arguments by allowing values being aggregated to support and reinforce each other via cloud support degrees. Here, cloud support degrees of the arguments can be calculated by applying Equations (18) and (19).

6.2. Comparative Analysis

To validate the feasibility of our method, a comparative study is conducted by applying three traditional LMCGDM methods, i.e., the linguistic symbolic model, the linguistic membership function model, and the 2-tuple linguistic model. The corresponding aggregation operators for these three traditional LMCGDM methods are, respectively, the LPA operator, the TFWA operator and the TWA operator. This comparative analysis is based on the same illustrative example given in Section 6.1. The weights of the DMs and criteria are respectively taken from Table 7 and Table 5 so as to make the results easy to compare with those of our method.
• Linguistic symbolic model
First, aggregate all linguistic decision matrices B ˜ ( k ) = ( b ˜ i j ( k ) ) 5 × 6 into a collective linguistic decision matrix B = ( b i j ) 5 × 6 by applying LPA operator (See Table 10).
Second, utilize LPA operator to derive the overall collective preference values.
$t_1: h_{-1.328}$, $t_2: h_{0.453}$, $t_3: h_{1.868}$, $t_4: h_{-0.455}$, $t_5: h_{-0.948}$.
Finally, rank the order of the five alternatives: A 3 A 2 A 4 A 5 A 1 .
• Linguistic membership function model
First, transform linguistic variables into triangular fuzzy numbers by the method of Iraj et al. [47]:
$h_3:(9,\,10,\,10)$, $h_2:(7,\,9,\,10)$, $h_1:(5,\,7,\,9)$, $h_0:(3,\,5,\,7)$, $h_{-1}:(1,\,3,\,5)$, $h_{-2}:(0,\,1,\,3)$, $h_{-3}:(0,\,0,\,1)$.
Second, utilize TFWA operator to derive the individual overall evaluation value T i k :
$T_1^1=(0.67,\,1.84,\,3.64)$, $T_1^2=(1.39,\,3.14,\,5.14)$, $T_1^3=(1.50,\,2.72,\,4.42)$,
$T_2^1=(2.45,\,3.83,\,5.68)$, $T_2^2=(2.61,\,4.46,\,6.26)$, $T_2^3=(2.90,\,4.60,\,6.60)$,
$T_3^1=(6.40,\,8.15,\,9.25)$, $T_3^2=(6.80,\,8.40,\,9.50)$, $T_3^3=(7.04,\,8.64,\,9.62)$,
$T_4^1=(2.01,\,3.66,\,5.66)$, $T_4^2=(2.57,\,4.42,\,6.42)$, $T_4^3=(3.08,\,4.96,\,6.81)$,
$T_5^1=(1.56,\,3.16,\,5.06)$, $T_5^2=(2.03,\,3.23,\,4.74)$, $T_5^3=(1.10,\,2.50,\,4.40)$.
Third, use TFWA operator to determine the overall collective evaluation value T ˙ i :
T ˙ 1 = ( 1.17 , 2.58 , 4.44 ) ,   T ˙ 2 = ( 2.63 , 4.27 , 6.14 ) ,   T ˙ 3 = ( 6.72 , 8.37 , 9.44 ) , T ˙ 4 = ( 2.50 , 4.23 , 6.25 ) ,   T ˙ 5 = ( 1.63 , 3.02 , 4.77 ) .
Finally, rank the order of the five alternatives via the method of comparing triangular fuzzy numbers (Chang & Wang, [48]): A 3 A 2 A 4 A 5 A 1 .
• 2-tuple linguistic model
First, utilize TWA operator to derive the individual overall evaluation value:
$S_1^1=(2,\,0.32)$, $S_2^1=(1,\,0.34)$, $S_3^1=(2,\,0.3)$, $S_4^1=(1,\,0.33)$, $S_5^1=(1,\,0.08)$,
$S_1^2=(1,\,0.07)$, $S_2^2=(0,\,0.27)$, $S_3^2=(2,\,0.1)$, $S_4^2=(0,\,0.29)$, $S_5^2=(1,\,0.05)$,
$S_1^3=(1,\,0.29)$, $S_2^3=(0,\,0.02)$, $S_3^3=(2,\,0.02)$, $S_4^3=(0,\,0.02)$, $S_5^3=(1,\,0.3)$.
Second, apply TWA operator to get the overall collective evaluation values S ˙ i :
$\dot S_1=(-1,\,-0.283)$, $\dot S_2=(0,\,0.344)$, $\dot S_3=(2,\,0.14)$, $\dot S_4=(0,\,-0.356)$, $\dot S_5=(-1,\,0.027)$.
Third, rank the order of the five alternatives: A 3 A 2 A 4 A 5 A 1 .
Table 11 shows the ranking results with three different aggregation operators (i.e., LPA, TFWA, TWA). Comparing Table 11 with Table 9, we find that the ranking results of the three operators are all the same, but the ranking result differs when the CGPOWA operator is applied; the difference lies in the ranking order of $A_2$ and $A_4$.
The above difference can be explained by the following fact: when the alternatives are assessed, the three traditional methods regard the average level as the unique criterion, which makes the evaluations rough and one-sided. Notice that the average level of $A_2$ is higher than that of $A_4$, so the ranking order of the three traditional methods becomes $A_2\succ A_4$. Our method, however, considers not only the average level but also the fluctuation and stability of qualitative concepts by using $En$ and $He$, respectively. In other words, the three traditional methods use a uniform granular linguistic assessment scale, while our method takes a multi-granular linguistic assessment scale of great psychological sense. This causes the average level of $A_2$ to be lower than that of $A_4$, namely, $Ex_2<Ex_4$. In addition, we derive that $En_2>En_4$ and $He_2>He_4$ in this example. Therefore, according to Equation (13), we conclude that the ranking result becomes $A_4\succ A_2$ in our method.

7. Conclusions

LMCGDM problems are widespread in various fields such as economics, management, medical care, social sciences, engineering, and military applications. However, traditional aggregation methods are not robust enough to convert qualitative concepts into quantitative information in LMCGDM problems. Among the existing aggregation operators, linguistic power aggregation operators and cloud aggregation operators have the most merits, but each has its own weaknesses. If combined, the two types of operators can overcome their respective weaknesses; that is, their characteristics are mutually complementary. This paper developed a new class of aggregation operator that successfully unifies the advantages of the existing linguistic power aggregation operators and cloud aggregation operators, and simultaneously overcomes their limitations. First, we presented an improved method to transform linguistic variables into clouds, which corrects the weaknesses of the classical generating cloud method. Based on this method, we developed some new cloud algorithms, such as the cloud possibility degree and the cloud support degree, which can be used for cloud comparison and weight determination, respectively. Furthermore, a new CGPOWA operator was developed, which considers the decision arguments and their relationships and characterizes the fuzziness and randomness of linguistic terms. By studying the properties of the CGPOWA operator, we found that it is commutative, idempotent and bounded. Moreover, the CGPOWA operator can degenerate into many different operators, including the CGPA, CPOWA, CPOWGA, CPWQA, CWAA and CWGA operators, as well as the maximum and minimum operators. In particular, based on the new generating cloud method and the CGPOWA operator, a new approach for LGDM was developed.
In the end, to show the effectiveness and the good performance of our approach in practice, we provided an example of investment selection and made a comparative analysis.
In further research, it would be very interesting to extend our analysis to more sophisticated situations, such as introducing the behavior theory of the DMs into the context of the CGPOWA operator. Nevertheless, we leave this point to future research, since our methodology cannot be directly applied to that extended framework, which would entail more sophisticated calculations than we can tackle here.

Acknowledgments

This research was supported by the Natural Science Foundation of China under Grant No. 71671064 and the Humanity and Social Science Fund Major Project of Beijing under Grant No. 15ZDA19.

Author Contributions

Jianwei Gao and Ru Yi contributed equally to this work. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Proposition 1.

From Definition 6, it is easy to verify that conclusions (i) and (ii) hold. For (iii), from Definition 6 we have:
$$\begin{aligned}\underline{d}(Y_1,\,Y_3)&=\left|\left(1-\frac{3\sqrt{En_1^2+He_1^2}}{Ex_1}\right)Ex_1-\left(1-\frac{3\sqrt{En_3^2+He_3^2}}{Ex_3}\right)Ex_3\right|\\&=\left|\left(1-\frac{3\sqrt{En_1^2+He_1^2}}{Ex_1}\right)Ex_1-\left(1-\frac{3\sqrt{En_2^2+He_2^2}}{Ex_2}\right)Ex_2+\left(1-\frac{3\sqrt{En_2^2+He_2^2}}{Ex_2}\right)Ex_2-\left(1-\frac{3\sqrt{En_3^2+He_3^2}}{Ex_3}\right)Ex_3\right|\\&\le\left|\left(1-\frac{3\sqrt{En_1^2+He_1^2}}{Ex_1}\right)Ex_1-\left(1-\frac{3\sqrt{En_2^2+He_2^2}}{Ex_2}\right)Ex_2\right|+\left|\left(1-\frac{3\sqrt{En_2^2+He_2^2}}{Ex_2}\right)Ex_2-\left(1-\frac{3\sqrt{En_3^2+He_3^2}}{Ex_3}\right)Ex_3\right|\\&=\underline{d}(Y_1,\,Y_2)+\underline{d}(Y_2,\,Y_3).\end{aligned}$$
Similarly, we can obtain that:
d ¯ ( Y 1 ,   Y 3 ) d ¯ ( Y 1 ,   Y 2 ) + d ¯ ( Y 2 ,   Y 3 ) .
Therefore,
$$\begin{aligned}d(Y_1,\,Y_3)&=\tfrac{1}{2}\left\{\underline{d}(Y_1,\,Y_3)+\overline{d}(Y_1,\,Y_3)\right\}\le\tfrac{1}{2}\left\{\underline{d}(Y_1,\,Y_2)+\underline{d}(Y_2,\,Y_3)+\overline{d}(Y_1,\,Y_2)+\overline{d}(Y_2,\,Y_3)\right\}\\&=\tfrac{1}{2}\left\{\underline{d}(Y_1,\,Y_2)+\overline{d}(Y_1,\,Y_2)\right\}+\tfrac{1}{2}\left\{\underline{d}(Y_2,\,Y_3)+\overline{d}(Y_2,\,Y_3)\right\}=d(Y_1,\,Y_2)+d(Y_2,\,Y_3).\end{aligned}$$

Proof of Proposition 3.

Notice that if $Ex_1\ge Ex_2$, $En_1\le En_2$ and $He_1\le He_2$, then the positive ideal cloud becomes $Y^*=Y(Ex_1,\,En_1,\,He_1)$. According to Definition 6, we derive $d(Y^*,\,Y_1)=0$. Then, based on Equation (11), the possibility degree matrix can be obtained as follows:
$$P=\begin{bmatrix}0.5 & 1.0\\ 0.0 & 0.5\end{bmatrix}.$$
According to Equation (13), we can get the ranking vector v = ( 0.75 ,   0.25 ) T . Thus, we have:
$Y_1\succ Y_2$.

Proof of Proposition 4.

(i)
(Idempotency). If $Y_i=Y=(Ex,\,En,\,He)$ for $i=1,\,2,\,\dots,\,n$, then according to Theorem 2, we have:
$$\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}}=\left(Ex^\lambda\sum_{i=1}^n w_i\right)^{\frac{1}{\lambda}}=Ex,$$
$$\sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}En_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1}=Ex^{\lambda-1}En\times Ex^{1-\lambda}=En,$$
$$\sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}He_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1}=Ex^{\lambda-1}He\times Ex^{1-\lambda}=He.$$
Hence,
f ( Y 1 ,   Y 2 ,   ,   Y n ) = ( E x ,   E n ,   H e ) .
(ii)
(Commutativity). Assume that $(\tilde Y_1,\,\tilde Y_2,\,\dots,\,\tilde Y_n)$ is any permutation of $(Y_1,\,Y_2,\,\dots,\,Y_n)$; then for each $\tilde Y_i$, there exists one and only one $Y_j$ such that $\tilde Y_i=Y_j$, and vice versa. Therefore, from Theorem 2, we have: $f(Y_1,\,Y_2,\,\dots,\,Y_n)=f(\tilde Y_1,\,\tilde Y_2,\,\dots,\,\tilde Y_n)$.
(iii)
(Boundedness). Note that if $P(Y_i\ge Y_1)=1$ ($i=1,\,2,\,\dots,\,n$), according to Proposition 2, we have:
$$Ex_i\ge Ex_1,\quad En_i\le En_1,\quad He_i\le He_1.$$
So,
$$\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}}\ge\left(\sum_{i=1}^n w_i Ex_1^\lambda\right)^{\frac{1}{\lambda}}=Ex_1,$$
$$\sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}En_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1}\le Ex_n^{\lambda-1}En_1\times Ex_n^{1-\lambda}=En_1,$$
$$\sqrt{\sum_{i=1}^n w_i\left(Ex_i^{\lambda-1}He_i\right)^2}\times\left(\sum_{i=1}^n w_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1}\le Ex_n^{\lambda-1}He_1\times Ex_n^{1-\lambda}=He_1.$$
Then,
$P(Y\ge Y_1)=1.$
By the same token, if $P(Y_n\ge Y_i)=1$ ($i=1,\,2,\,\dots,\,n$), we obtain:
$$P(Y_n\ge Y)=1.$$

Appendix B

Proof of the Family of CGPOWA Operator (See Table 1)

Considering that it is not easy to find the result of the cloud power weighted quadratic average (CPWQA) operator (cf. No.6 in Table 1), without loss of generality, we only need to prove the operator of No.6 in Table 1. Other operators can be derived via the similar proof. ☐

Proof of No.6 in Table 1.

If $\varphi(x)=x$, the CGPOWA operator degenerates into the CGPA operator. According to Theorem 2, we derive that:
$$f(Y_1,\,Y_2,\,\dots,\,Y_n)=\left(\left(\sum_{i=1}^n v_i Ex_i^\lambda\right)^{\frac{1}{\lambda}},\ \sqrt{\sum_{i=1}^n v_i\left(Ex_i^{\lambda-1}En_i\right)^2}\times\left(\sum_{i=1}^n v_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1},\ \sqrt{\sum_{i=1}^n v_i\left(Ex_i^{\lambda-1}He_i\right)^2}\times\left(\sum_{i=1}^n v_i Ex_i^\lambda\right)^{\frac{1}{\lambda}-1}\right).$$
Thus, setting $\lambda=2$ in the above equation, we have:
$$\left(\sum_{i=1}^n v_i Ex_i^2\right)^{\frac{1}{2}}=\sqrt{\sum_{i=1}^n v_i Ex_i^2},\qquad \sqrt{\sum_{i=1}^n v_i\left(Ex_i En_i\right)^2}\times\left(\sum_{i=1}^n v_i Ex_i^2\right)^{-\frac{1}{2}}=\sqrt{\frac{\sum_{i=1}^n v_i Ex_i^2 En_i^2}{\sum_{i=1}^n v_i Ex_i^2}},$$
and similarly for the $He$ component; hence
$$f(Y_1,\,Y_2,\,\dots,\,Y_n)=\left(\sqrt{\sum_{i=1}^n v_i Ex_i^2},\ \sqrt{\frac{\sum_{i=1}^n v_i Ex_i^2 En_i^2}{\sum_{i=1}^n v_i Ex_i^2}},\ \sqrt{\frac{\sum_{i=1}^n v_i Ex_i^2 He_i^2}{\sum_{i=1}^n v_i Ex_i^2}}\right).$$
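The $\lambda=2$ reduction can be verified numerically against the general closed form of Theorem 2. A sketch with illustrative weights and clouds; `cgpowa` and `cpwqa` are our names:

```python
import math

def cgpowa(clouds, v, lam):
    # general closed form of Theorem 2 with weights v
    s = sum(vi * ex**lam for vi, (ex, en, he) in zip(v, clouds))
    scale = s ** (1 / lam - 1)
    return (s ** (1 / lam),
            math.sqrt(sum(vi * (ex**(lam - 1) * en)**2
                          for vi, (ex, en, he) in zip(v, clouds))) * scale,
            math.sqrt(sum(vi * (ex**(lam - 1) * he)**2
                          for vi, (ex, en, he) in zip(v, clouds))) * scale)

def cpwqa(clouds, v):
    # the lambda = 2 special case (CPWQA form derived above)
    s = sum(vi * ex**2 for vi, (ex, en, he) in zip(v, clouds))
    return (math.sqrt(s),
            math.sqrt(sum(vi * ex**2 * en**2 for vi, (ex, en, he) in zip(v, clouds)) / s),
            math.sqrt(sum(vi * ex**2 * he**2 for vi, (ex, en, he) in zip(v, clouds)) / s))
```

For any inputs the two functions agree up to floating-point error, confirming the algebraic reduction.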

References

1. Wiecek, M.M.; Ehrgott, M.; Fadel, G.; Figueira, J.R. Multiple criteria decision-making for engineering. Omega 2008, 36, 337–339.
2. Wu, J.; Li, J.C.; Li, H.; Duan, W.Q. The induced continuous ordered weighted geometric operators and their application in group decision-making. Comput. Ind. Eng. 2009, 58, 1545–1552.
3. Xia, M.M.; Xu, Z.S. Methods for fuzzy complementary preference relations based on multiplicative consistency. Comput. Ind. Eng. 2011, 61, 930–935.
4. Merigó, J.M.; Casanovas, M. Induced and uncertain heavy OWA operators. Comput. Ind. Eng. 2011, 60, 106–116.
5. Gong, Y.B.; Hu, N.; Zhang, J.G.; Liu, G.F.; Deng, J.G. Multi-attribute group decision-making method based on geometric Bonferroni mean operator of trapezoidal interval type-2 fuzzy numbers. Comput. Ind. Eng. 2015, 81, 167–176.
6. Wan, S.P.; Wang, F.; Lin, L.L.; Dong, J.Y. Some new generalized aggregation operators for triangular intuitionistic fuzzy numbers and application to multi-attribute group decision-making. Comput. Ind. Eng. 2016, 93, 286–301.
7. Ngan, S.C. A type-2 linguistic set theory and its application to multi-criteria decision-making. Comput. Ind. Eng. 2013, 64, 721–730.
8. Wei, G.W.; Zhao, X.F.; Lin, R. Some hybrid aggregating operators in linguistic decision-making with Dempster-Shafer belief structure. Comput. Ind. Eng. 2013, 65, 646–651.
9. Wang, Z.Q.; Richard, Y.K.F.; Li, Y.L.; Pu, Y. A group multi-granularity linguistic-based methodology for prioritizing engineering characteristics under uncertainties. Comput. Ind. Eng. 2016, 91, 178–187.
10. Yager, R.R. Fusion of ordinal information using weighted median aggregation. Int. J. Approx. Reason. 1998, 18, 35–52.
11. Xu, Z.S. An overview of operators for aggregating information. Int. J. Intell. Syst. 2003, 18, 953–969.
12. Xu, Z.S. Deviation measures of linguistic preference relations in group decision-making. Omega 2005, 33, 249–254.
13. Yager, R.R. An approach to ordinal decision-making. Int. J. Approx. Reason. 1995, 12, 237–261.
14. Yager, R.R.; Rybalov, A. Understanding the median as a fusion operator. Int. J. Gen. Syst. 1997, 26, 239–263.
15. Yager, R.R. Applications and extensions of OWA aggregations. Int. J. Man-Mach. Stud. 1992, 37, 103–132.
16. Herrera, F.; Herrera-Viedma, E. Aggregation operators for linguistic weighted information. IEEE Trans. Syst. Man Cybern. Part A 1997, 27, 646–656.
17. Lee, H.M. Applying fuzzy set theory to evaluate the rate of aggregative risk in software development. Fuzzy Sets Syst. 1996, 79, 323–336.
18. Lee, H.M.; Lee, S.Y.; Lee, T.Y.; Chen, J.J. A new algorithm for applying fuzzy set theory to evaluate the rate of aggregative risk in software development. Inf. Sci. 2003, 153, 177–197.
  19. Herrera, F.; Herrera-Viedma, E.; Verdegay, J.L. A sequential selection process in group decision-making with a linguistic assessment approach. Inf. Sci. 1995, 85, 223–239. [Google Scholar] [CrossRef]
  20. Torra, V. The weighted OWA operator. Int. J. Intell. Syst. 1997, 12, 153–166. [Google Scholar] [CrossRef]
  21. Herrera, F.; Herrera-Viedma, E. Linguistic decision analysis: Steps for solving decision problems under linguistic information. Fuzzy Sets Syst. 2000, 115, 67–82. [Google Scholar] [CrossRef]
  22. Merigó, J.M.; Casanovas, M. Decision-making with distance measures and linguistic aggregation operators. Int. J. Fuzzy Syst. 2010, 12, 190–198. [Google Scholar]
  23. Zhou, L.G.; Chen, H.Y. The induced linguistic continuous ordered weighted geometric operator and its application to group decision-making. Comput. Ind. Eng. 2013, 66, 222–232. [Google Scholar] [CrossRef]
  24. Zhou, L.; Wu, J.; Chen, H.Y. Linguistic continuous ordered weighted distance measure and its application to multiple attributes group decision-making. Appl. Soft Comput. 2014, 25, 266–276. [Google Scholar] [CrossRef]
  25. Merigó, J.M.; Palacios-Marqué, D.; Zeng, S.Z. Subjective and objective information in linguistic multi-criteria group decision-making. Eur. J. Oper. Res. 2016, 248, 522–531. [Google Scholar] [CrossRef]
  26. Herrera, F.; Martínez, L. A model based on linguistic 2-tuples for dealing with multi granular hierarchical linguistic contexts in multi-expert decision-making. IEEE Trans. Syst. Man Cybern. Part B 2001, 31, 227–234. [Google Scholar] [CrossRef] [PubMed]
  27. Herrera, F.; Martínez, L. A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Trans. Fuzzy Syst. 2000, 8, 746–752. [Google Scholar]
  28. Wei, G.; Zhao, X. Some dependent aggregation operators with 2-tuple linguistic information and their application to multiple attribute group decision-making. Expert Syst. Appl. 2012, 39, 5881–5886. [Google Scholar] [CrossRef]
  29. Wan, S.P. 2-Tuple linguistic hybrid arithmetic aggregation operators and application to multi-attribute group decision-making. Knowl.-Based Syst. 2013, 45, 31–40. [Google Scholar] [CrossRef]
  30. Xu, Z.S. On generalized induced linguistic aggregation operators. Int. J. Gen. Syst. 2006, 35, 17–28. [Google Scholar] [CrossRef]
  31. Xu, Z.S. EOWA and EOWG operators for aggregating linguistic labels based on linguistic preference relations. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2004, 12, 791–810. [Google Scholar] [CrossRef]
  32. Xu, Y.J.; Da, Q.L. Standard and mean deviation methods for linguistic group decision-making and their applications. Expert Syst. Appl. 2010, 37, 5905–5912. [Google Scholar] [CrossRef]
  33. Xu, Z.S. A method based on linguistic aggregation operators for group decision-making with linguistic preference relations. Inf. Sci. 2004, 166, 19–30. [Google Scholar] [CrossRef]
  34. Xu, Y.J.; Da, Q.L.; Zhao, C.X. Interactive approach for multiple attribute decision-making with incomplete weight information under uncertain linguistic environment. Syst. Eng. Electron. 2009, 31, 597–601. [Google Scholar]
  35. Xu, Z.S. Induced uncertain linguistic OWA operators applied to group decision-making. Inf. Fusion 2006, 7, 231–238. [Google Scholar] [CrossRef]
  36. Xu, Z.S. An approach based on the uncertain LOWG and induced uncertain LOWG operators to group decision-making with uncertain multiplicative linguistic preference relations. Decis. Support Syst. 2006, 41, 488–499. [Google Scholar] [CrossRef]
  37. Yager, R.R. The power average operator. IEEE Trans. Syst. Man Cybern. Part A 2001, 31, 724–731. [Google Scholar] [CrossRef]
  38. Xu, Y.; Merigó, J.M.; Wang, H. Linguistic power aggregation operators and their application to multiple attribute group decision-making. Appl. Math. Model. 2012, 36, 5427–5444. [Google Scholar] [CrossRef]
  39. Zhou, L.; Chen, H.Y. A generalization of the power aggregation operators for linguistic environment and its application in group decision-making. Knowl.-Based Syst. 2012, 26, 216–224. [Google Scholar] [CrossRef]
  40. Li, D.; Meng, H.; Shi, X. Membership clouds and membership cloud generators. J. Comput. Res. Dev. 1995, 32, 16–21. [Google Scholar]
  41. Wang, J.Q.; Peng, L.; Zhang, H.Y.; Chen, X.H. Method of multi-criteria group decision-making based on cloud aggregation operators with linguistic information. Inf. Sci. 2014, 274, 177–191. [Google Scholar] [CrossRef]
  42. Wang, J.Q.; Wang, P.; Wang, J.; Zhang, H.Y.; Chen, X.H. Atanassov’s interval-valued intuitionistic linguistic multi-criteria group decision-making method based on trapezium cloud model. IEEE Trans. Fuzzy Syst. 2015, 23, 542–554. [Google Scholar] [CrossRef]
  43. Li, D.Y.; Du, Y. Artificial Intelligence with Uncertainty; Chapman & Hall/CRC, Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  44. Li, D.Y.; Liu, C.Y.; Gan, W.Y. A new cognitive model: Cloud model. Int. J. Intell. Syst. 2009, 24, 357–375. [Google Scholar] [CrossRef]
  45. Wang, H.L.; Feng, Y.Q. On multiple attribute group decision-making with linguistic assessment information based on cloud model. Control Decis. 2005, 20, 679–685. [Google Scholar]
  46. Wan, S.P.; Dong, J.Y. A possibility degree method for interval-valued intuitionistic fuzzy multi-attribute group decision-making. J. Comput. Syst. Sci. 2014, 80, 237–256. [Google Scholar] [CrossRef]
  47. Iraj, M.; Nezam, M.A.; Armaghan, H.; Rahele, N. Designing a model of fuzzy TOPSIS in multiple criteria decision-making. Appl. Math. Comput. 2008, 206, 607–617. [Google Scholar]
  48. Chang, T.H.; Wang, T.C. Using the fuzzy multi-criteria decision-making approach for measuring the possibility of successful knowledge management. Inf. Sci. 2009, 179, 355–370. [Google Scholar] [CrossRef]
Figure 1. Cloud (50, 3.93, 0.1).
Figure 2. General Framework of the CGPOWA Operator for an LMCGDM Procedure.
Table 1. Family of CGPOWA operator.
No. | Condition | Formulation | Name of Operator
1 | λ = 2 | f(Y_1, Y_2, …, Y_n) = (Σ_{i=1}^{n} ω_i Y_i^2)^{1/2} | CPOWQA operator
2 | λ = 1 | f(Y_1, Y_2, …, Y_n) = Σ_{i=1}^{n} ω_i Y_i | CPOWA operator
3 | λ → 0 | f(Y_1, Y_2, …, Y_n) = Π_{i=1}^{n} (Y_i)^{ω_i} | CPOWGA operator
4 | — | f(Y_1, Y_2, …, Y_n) = ( Σ_{i=1}^{n} [ (1 + T(Y_i)) / Σ_{j=1}^{n} (1 + T(Y_j)) ] Y_i^λ )^{1/λ} | CGPA operator
5 | Sup(Y_i, Y_j) = k (i ≠ j) | f(Y_1, Y_2, …, Y_n) = ( Σ_{i=1}^{n} (1/n) Y_i^λ )^{1/λ} | CGM operator
6 | λ = 2 | f(Y_1, Y_2, …, Y_n) = ( √(Σ_{i=1}^{n} w_i Ex_i^2), √(Σ_{i=1}^{n} w_i (Ex_i En_i)^2) / √(Σ_{i=1}^{n} w_i Ex_i^2), √(Σ_{i=1}^{n} w_i (Ex_i He_i)^2) / √(Σ_{i=1}^{n} w_i Ex_i^2) ) | CPWQA operator
7 | λ = 1 | f(Y_1, Y_2, …, Y_n) = ( Σ_{i=1}^{n} w_i Ex_i, √(Σ_{i=1}^{n} w_i En_i^2), √(Σ_{i=1}^{n} w_i He_i^2) ) | CWAA operator
8 | λ → 0 | f(Y_1, Y_2, …, Y_n) = ( Π_{i=1}^{n} Ex_i^{w_i}, √(Σ_{i=1}^{n} w_i (En_i/Ex_i)^2) × Π_{i=1}^{n} Ex_i^{w_i}, √(Σ_{i=1}^{n} w_i (He_i/Ex_i)^2) × Π_{i=1}^{n} Ex_i^{w_i} ) | CWGA operator
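The family in Table 1 is generated by a single parameterized formula, with λ selecting the member. A minimal sketch of the expectation component (our own illustration; the function name `cgpowa_ex` and the sample values are assumptions, not from the paper):

```python
def cgpowa_ex(exs, weights, lam):
    """Expectation component of the CGPOWA family:
    (sum_i w_i Ex_i^lam)^(1/lam)."""
    return sum(w * x ** lam for w, x in zip(weights, exs)) ** (1.0 / lam)

exs = [3.0, 5.0, 7.0]
w = [0.2, 0.5, 0.3]
arithmetic = cgpowa_ex(exs, w, 1)         # CPOWA / CWAA case (lambda = 1)
quadratic = cgpowa_ex(exs, w, 2)          # CPOWQA / CPWQA case (lambda = 2)
near_geometric = cgpowa_ex(exs, w, 1e-6)  # approaches CPOWGA / CWGA as lambda -> 0
```

By the power-mean inequality, the result is non-decreasing in λ, so the quadratic member always lies at or above the arithmetic one, which in turn lies at or above the geometric limit.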
Table 2. Particular cases of CGPOWA operator.
No. | Weight Vector W | Formulation | Remarks
1 | (1, 0, …, 0)^T | f(Y_1, Y_2, …, Y_n) = Y_1 | The maximum operator
2 | (0, 0, …, 1)^T | f(Y_1, Y_2, …, Y_n) = Y_n | The minimum operator
3 | (1/n, 1/n, …, 1/n)^T | f(Y_1, Y_2, …, Y_n) = ( Σ_{i=1}^{n} (1/n) Y_i^λ )^{1/λ} | The cloud generalized mean operator
4 | w_1 = α, w_n = 1 − α, w_i = 0 (i ≠ 1, n) | f(Y_1, Y_2, …, Y_n) = ( α Y_1^λ + (1 − α) Y_n^λ )^{1/λ} | Y_1 = max_{1 ≤ i ≤ n} {Y_i}, Y_n = min_{1 ≤ i ≤ n} {Y_i}; it includes the maximum and minimum aggregation operators
5 | w_i = 1/p (k ≤ i ≤ k + p − 1), w_i = 0 (i < k or i ≥ k + p) | Window-CGPOWA = ( Σ_{i=k}^{k+p−1} (1/p) Y_i^λ )^{1/λ} | Window-CGPOWA operator
6 | w_{n/2} = w_{n/2+1} = 1/2, w_i = 0 (i ≠ n/2, n/2 + 1) | f(Y_1, Y_2, …, Y_n) = ( (1/2) Y_{n/2}^λ + (1/2) Y_{n/2+1}^λ )^{1/λ} | n is even; Y_{n/2} is the (n/2)th largest of the Y_i (i = 1, 2, …, n) and Y_{n/2+1} is the (n/2 + 1)th largest
7 | w_{(n+1)/2} = 1, w_i = 0 (i ≠ (n + 1)/2) | f(Y_1, Y_2, …, Y_n) = Y_{(n+1)/2} | n is odd; Y_{(n+1)/2} is the ((n + 1)/2)th largest of the Y_i (i = 1, 2, …, n)
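Most of the special cases in Table 2 differ only in how the ordered weight vector W is chosen. A small sketch of the Window-CGPOWA weights, from which the maximum, minimum, and generalized-mean cases fall out (the helper name `window_weights` is our own, not from the paper):

```python
def window_weights(n, k, p):
    """Weight vector of the Window-CGPOWA case (row 5 of Table 2):
    w_i = 1/p for k <= i <= k + p - 1, and w_i = 0 otherwise (1-based i)."""
    return [1.0 / p if k <= i <= k + p - 1 else 0.0 for i in range(1, n + 1)]

max_w = window_weights(5, 1, 1)   # (1, 0, 0, 0, 0): the maximum operator (row 1)
min_w = window_weights(5, 5, 1)   # (0, 0, 0, 0, 1): the minimum operator (row 2)
mean_w = window_weights(5, 1, 5)  # (1/5, ..., 1/5): the generalized mean (row 3)
```

Because the Y_i are ordered from largest to smallest before weighting, a window at the top of the order emphasizes optimistic assessments and a window at the bottom emphasizes pessimistic ones.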
Table 3. Linguistic decision matrix B̃.
DMs | Alternatives | c_1 | c_2 | c_3 | c_4 | c_5 | c_6
d_1 | A_1 | h_{−1} | h_0 | h_{−2} | h_{−2} | h_{−3} | h_{−1}
d_1 | A_2 | h_{−2} | h_{−3} | h_1 | h_1 | h_{−2} | h_0
d_1 | A_3 | h_2 | h_1 | h_2 | h_3 | h_0 | h_2
d_1 | A_4 | h_{−1} | h_{−2} | h_0 | h_1 | h_{−2} | h_{−1}
d_1 | A_5 | h_{−1} | h_{−2} | h_0 | h_{−2} | h_{−1} | h_2
d_2 | A_1 | h_0 | h_{−2} | h_{−1} | h_{−1} | h_0 | h_{−2}
d_2 | A_2 | h_{−1} | h_{−2} | h_0 | h_{−1} | h_2 | h_0
d_2 | A_3 | h_1 | h_3 | h_1 | h_3 | h_1 | h_2
d_2 | A_4 | h_{−1} | h_{−2} | h_1 | h_{−1} | h_1 | h_0
d_2 | A_5 | h_3 | h_{−3} | h_{−2} | h_0 | h_{−1} | h_{−3}
d_3 | A_1 | h_1 | h_{−1} | h_{−2} | h_0 | h_{−3} | h_{−3}
d_3 | A_2 | h_{−2} | h_1 | h_{−1} | h_1 | h_0 | h_0
d_3 | A_3 | h_2 | h_3 | h_1 | h_3 | h_1 | h_2
d_3 | A_4 | h_{−2} | h_2 | h_{−1} | h_0 | h_1 | h_{−1}
d_3 | A_5 | h_0 | h_{−2} | h_0 | h_{−2} | h_{−1} | h_{−3}
Table 4. Cloud decision matrix R̂.
DMs and Alternatives c 1 c 2 c 3 c 4 c 5 c 6
d 1 A 1 (4.05, 0.637, 0.08)(5.00, 0.394, 0.05)(3.09, 1.031, 0.13)(3.09, 1.031, 0.13)(0.00, 1.668, 0.21)(4.05, 0.637, 0.08)
A 2 (3.09, 1.031, 0.13)(0.00, 1.668, 0.21)(5.96, 0.637, 0.08)(5.96, 0.637, 0.08)(3.09, 1.031, 0.13)(5.00, 0.394, 0.05)
A 3 (6.93, 1.031, 0.13)(5.96, 0.637, 0.08)(6.93, 1.031, 0.13)(10.0, 1.668, 0.21)(5.00, 0.394, 0.05)(6.93, 1.031, 0.13)
A 4 (4.05, 0.637, 0.08)(3.09, 1.031, 0.13)(5.00, 0.394, 0.05)(5.96, 0.637, 0.08)(3.09, 1.031, 0.13)(4.05, 0.637, 0.08)
A 5 (4.05, 0.637, 0.08)(3.09, 1.031, 0.13)(5.00, 0.394, 0.05)(3.09, 1.031, 0.13)(4.05, 0.637, 0.08)(6.93, 1.031, 0.13)
d 2 A 1 (5.00, 0.394, 0.05)(3.09, 1.031, 0.13)(4.05, 0.637, 0.08)(4.05, 0.637, 0.08)(5.00, 0.394, 0.05)(3.09, 1.031, 0.13)
A 2 (4.05, 0.637, 0.08)(3.09, 1.031, 0.13)(5.00, 0.394, 0.05)(4.05, 0.637, 0.08)(6.93, 1.031, 0.13)(5.00, 0.394, 0.05)
A 3 (5.96, 0.637, 0.08)(10.0, 1.668, 0.21)(5.96, 0.637, 0.08)(10.0, 1.668, 0.21)(5.96, 0.637, 0.08)(6.93, 1.031, 0.13)
A 4 (4.05, 0.637, 0.08)(3.09, 1.031, 0.13)(5.96, 0.637, 0.08)(4.05, 0.637, 0.08)(5.96, 0.637, 0.08)(5.00, 0.394, 0.05)
A 5 (10.0, 1.668, 0.21)(0.00, 1.668, 0.21)(3.09, 1.031, 0.13)(5.00, 0.394, 0.05)(4.05, 0.637, 0.08)(0.00, 1.668, 0.21)
d 3 A 1 (5.96, 0.637, 0.08)(4.05, 0.637, 0.08)(3.09, 1.031, 0.13)(5.00, 0.394, 0.05)(0.00, 1.668, 0.21)(0.00, 1.668, 0.21)
A 2 (3.09, 1.031, 0.13)(5.96, 0.637, 0.08)(4.05, 0.637, 0.08)(5.96, 0.637, 0.08)(5.00, 0.394, 0.05)(5.00, 0.394, 0.05)
A 3 (6.93, 1.031, 0.13)(10.0, 1.668, 0.21)(5.96, 0.637, 0.08)(10.0, 1.668, 0.21)(5.96, 0.637, 0.08)(6.93, 1.031, 0.13)
A 4 (3.09, 1.031, 0.13)(6.93, 1.031, 0.13)(4.05, 0.637, 0.08)(5.00, 0.394, 0.05)(5.96, 0.637, 0.08)(4.05, 0.637, 0.08)
A 5 (5.00, 0.394, 0.05)(3.09, 1.031, 0.13)(5.00, 0.394, 0.05)(3.09, 1.031, 0.13)(4.05, 0.637, 0.08)(0.00, 1.668, 0.21)
Table 5. The weights of criteria.
DMs | Alternatives | c_1 | c_2 | c_3 | c_4 | c_5 | c_6
d_1 | A_1 | 0.172 | 0.164 | 0.171 | 0.171 | 0.150 | 0.172
d_1 | A_2 | 0.170 | 0.154 | 0.168 | 0.168 | 0.170 | 0.171
d_1 | A_3 | 0.175 | 0.166 | 0.175 | 0.150 | 0.160 | 0.175
d_1 | A_4 | 0.173 | 0.166 | 0.166 | 0.157 | 0.166 | 0.173
d_1 | A_5 | 0.173 | 0.167 | 0.167 | 0.167 | 0.173 | 0.153
d_2 | A_1 | 0.165 | 0.163 | 0.172 | 0.172 | 0.165 | 0.163
d_2 | A_2 | 0.172 | 0.160 | 0.172 | 0.172 | 0.153 | 0.172
d_2 | A_3 | 0.172 | 0.157 | 0.172 | 0.157 | 0.172 | 0.169
d_2 | A_4 | 0.170 | 0.160 | 0.165 | 0.170 | 0.165 | 0.171
d_2 | A_5 | 0.154 | 0.166 | 0.172 | 0.170 | 0.172 | 0.166
d_3 | A_1 | 0.164 | 0.171 | 0.171 | 0.169 | 0.162 | 0.162
d_3 | A_2 | 0.155 | 0.167 | 0.164 | 0.167 | 0.173 | 0.173
d_3 | A_3 | 0.172 | 0.159 | 0.169 | 0.159 | 0.169 | 0.172
d_3 | A_4 | 0.162 | 0.158 | 0.172 | 0.171 | 0.165 | 0.172
d_3 | A_5 | 0.168 | 0.171 | 0.168 | 0.171 | 0.171 | 0.151
Table 6. Collective cloud decision matrix R.
Alternatives D M   ( d 1 ) D M   ( d 2 ) D M   ( d 3 )
A 1 (3.61, 0.701, 0.09)(4.12, 0.636, 0.09)(3.81, 0.638, 0.08)
A 2 (4.41, 0.678, 0.09)(4.81, 0.752, 0.09)(4.97, 0.598, 0.08)
A 3 (7.07, 1.195, 0.15)(7.61, 1.337, 0.17)(7.77, 1.356, 0.17)
A 4 (4.31, 0.680, 0.09)(4.80, 0.635, 0.08)(4.99, 0.777, 0.10)
A 5 (4.51, 0.833, 0.11)(4.91, 1.387, 0.17)(3.80, 0.639, 0.08)
Table 7. The weights of the DMs.
Alternatives | DM (d_1) | DM (d_2) | DM (d_3)
A_1 | 0.330 | 0.307 | 0.363
A_2 | 0.332 | 0.344 | 0.323
A_3 | 0.289 | 0.364 | 0.346
A_4 | 0.321 | 0.350 | 0.329
A_5 | 0.361 | 0.298 | 0.341
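The DM weights above come from the power-average mechanism, in which each weight grows with the total support a value receives from the others: v_i = (1 + T(Y_i)) / Σ_j (1 + T(Y_j)), with T(Y_i) = Σ_{j ≠ i} Sup(Y_i, Y_j). A minimal sketch (the function name `power_weights` and the matrix layout for the support degrees are our assumptions):

```python
def power_weights(sup):
    """Power-average weights from an n x n matrix of pairwise support
    degrees: T_i = sum_{j != i} Sup(Y_i, Y_j),
    v_i = (1 + T_i) / sum_j (1 + T_j)."""
    n = len(sup)
    t = [sum(sup[i][j] for j in range(n) if j != i) for i in range(n)]
    total = sum(1 + ti for ti in t)
    return [(1 + ti) / total for ti in t]

# Three DMs; supports are symmetric and the diagonal is unused
sup = [[0.0, 0.9, 0.8],
       [0.9, 0.0, 0.7],
       [0.8, 0.7, 0.0]]
v = power_weights(sup)
```

A DM whose assessment is closer to the others (higher support) receives a larger weight, which dampens the influence of outlying judgments.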
Table 8. Ranking results for different weights of the DMs and weights of criteria.
Weight Vector λ of the DMs | Weight Vector W of Criteria | Ranking Results
(0.1, 0.6, 0.3)^T | (0.12, 0.15, 0.18, 0.25, 0.2, 0.1)^T | A_3 ≻ A_4 ≻ A_2 ≻ A_5 ≻ A_1
(0.35, 0.4, 0.25)^T | (0.12, 0.25, 0.18, 0.1, 0.15, 0.2)^T | A_3 ≻ A_4 ≻ A_5 ≻ A_2 ≻ A_1
(0.35, 0.4, 0.25)^T | (0.12, 0.15, 0.18, 0.1, 0.2, 0.25)^T | A_3 ≻ A_5 ≻ A_4 ≻ A_2 ≻ A_1
(0.35, 0.4, 0.25)^T | (0.4, 0.2, 0.1, 0.1, 0.1, 0.1)^T | A_3 ≻ A_4 ≻ A_5 ≻ A_1 ≻ A_2
(0.35, 0.4, 0.25)^T | (0.2, 0.4, 0.1, 0.1, 0.1, 0.1)^T | A_3 ≻ A_4 ≻ A_1 ≻ A_5 ≻ A_2
(0.35, 0.4, 0.25)^T | (0.2, 0.1, 0.1, 0.1, 0.1, 0.4)^T | A_3 ≻ A_5 ≻ A_2 ≻ A_4 ≻ A_1
Table 9. Comparison with different parameter λ.
Aggregation Operators | Ranking Vector | Ranking Results
CGPOWA (λ = 1) | (0.1769, 0.1962, 0.2468, 0.1977, 0.1824)^T | A_3 ≻ A_4 ≻ A_2 ≻ A_5 ≻ A_1
CGPOWA (λ = 2) | (0.1836, 0.2003, 0.2216, 0.2006, 0.1939)^T | A_3 ≻ A_4 ≻ A_2 ≻ A_5 ≻ A_1
CGPOWA (λ = 3) | (0.1872, 0.2021, 0.2052, 0.2034, 0.2020)^T | A_3 ≻ A_4 ≻ A_2 ≻ A_5 ≻ A_1
Table 10. Collective linguistic decision matrix B.
Alternatives c 1 c 2 c 3 c 4 c 5 c 6
A 1 h 0 h 1 h 1.736 h 1 h 2.208 h 2
A 2 h 1.736 h 1.388 h 0 h 0.276 h 0 h 0
A 3 h 1.736 h 2.472 h 1.264 h 3 h 0.736 h 2
A 4 h 1.264 h 0.944 h 0 h 0 h 0.208 h 0.736
A 5 h 0.529 h 2.264 h 0.528 h 0.736 h 1 h 1.68
Table 11. Comparison with different models.
Aggregation Operators | Ranking Results
LPA | A_3 ≻ A_2 ≻ A_4 ≻ A_5 ≻ A_1
TFWA | A_3 ≻ A_2 ≻ A_4 ≻ A_5 ≻ A_1
TWA | A_3 ≻ A_2 ≻ A_4 ≻ A_5 ≻ A_1

Share and Cite

MDPI and ACS Style

Gao, J.; Yi, R. Cloud Generalized Power Ordered Weighted Average Operator and Its Application to Linguistic Group Decision-Making. Symmetry 2017, 9, 156. https://doi.org/10.3390/sym9080156
