Article

A Linguistic Neutrosophic Multi-Criteria Group Decision-Making Method to University Human Resource Management

1 School of Business, Central South University, Changsha 410083, China
2 College of Business Administration, Hunan University, Changsha 410082, China
* Author to whom correspondence should be addressed.
Symmetry 2018, 10(9), 364; https://doi.org/10.3390/sym10090364
Submission received: 17 July 2018 / Revised: 11 August 2018 / Accepted: 22 August 2018 / Published: 26 August 2018

Abstract: Competition among universities depends largely on the competition for talent. Talent evaluation and selection is one of the main activities of human resource management (HRM) and is critical for university development. Firstly, linguistic neutrosophic sets (LNSs) are introduced to better express the multiple kinds of uncertain information that arise during the evaluation procedure. We then merge the power averaging operator with LNSs for information aggregation and propose a linguistic neutrosophic power weighted averaging (LNPWA) operator and a linguistic neutrosophic power weighted geometric (LNPWG) operator. An extended technique for order preference by similarity to ideal solution (TOPSIS) method is then developed to solve a case of the university HRM evaluation problem. The main contribution and novelty of the proposed method lie in allowing the information provided by different decision makers (DMs) to support and reinforce each other, which is more consistent with the actual situation of university HRM evaluation. In addition, its effectiveness and advantages over existing methods are verified through sensitivity and comparative analyses. The results show that the proposal is effective in the domain of university HRM evaluation and may contribute to talent introduction in universities.

1. Introduction

Human resource management (HRM) refers to the process of hiring and developing employees to enhance the core competitiveness of an organization [1]. As a root of national competitiveness, success in HRM can benefit both the organization and employee well-being; thus, effective HRM has seen growing demand and recognition in the 21st century. Over the past three decades, theory and research on HRM have made considerable progress in various fields, such as the tourism industry, health services and universities [2,3,4,5]. For example, Zhang et al. [5] investigated a case of HRM for teaching quality assessment using a multi-criteria group decision-making (MCGDM) framework. This framework aimed to improve the teaching quality of college teachers and further enhance the competitiveness of colleges and universities. Apart from classroom teaching quality evaluation, talent introduction also plays a significant role in universities’ HRM. In particular, selecting or evaluating applicants with inappropriate methods may lead to a failure in HRM and even affect the overall efficiency of the university. Since several decision makers (DMs), various applicants and multiple influential criteria are usually involved in the evaluation procedures of HRM, the evaluation should be recognized as an MCGDM problem.
The theory of fuzzy sets (FSs) can handle uncertainty and fuzziness. The neutrosophic set (NS) [6] was proposed to express membership, nonmembership and indeterminacy, and is a generalization of the FS [7]. Later, many extensions emerged to tackle real engineering and scientific problems [8], among which the most popular forms are the simplified neutrosophic set (SNS) [9] and the single-valued trapezoidal neutrosophic set (SVTNS) [10,11,12]. These extensions have been successfully applied in various domains, including green product development [13], outsourcing provider selection [14] and clustering analysis [15,16].
However, on some real occasions, people may prefer to provide their evaluation information in natural language rather than through the above extensions, which are too complex to elicit. For example, people can use linguistic terms like “excellent”, “medium” or “poor” to evaluate the performance of a company’s staff against various criteria. Moreover, it may be difficult for a single person to evaluate all alternatives under each influential aspect due to the high complexity of decision environments. Therefore, linguistic MCGDM under fuzzy environments has received extensive research attention and produced many excellent results [17]. To date, various extensions have been studied in depth to describe linguistic information, such as the hesitant fuzzy linguistic term set and some of its extended forms [18,19,20,21,22], the linguistic intuitionistic fuzzy set (LIFS) [23,24], the Z-number [25], and the probabilistic linguistic term set [26,27], etc. However, the drawback of these extensions for linguistic MCGDM is that they cannot cover the inconsistent linguistic decision information that appears as the internal and external decision-making environments grow more complex. For instance, when a DM is asked to evaluate a teacher from overseas under the aspect of teaching skill, the DM may express negative judgments on the teaching attitude alongside good or neutral judgments on the teacher’s teaching capacity and teaching method. Consider the evaluation: “The teacher is rather average in writing and oral language, and he is able to tailor his teaching method to different students. But my only complaint is that the teacher is a little strict in teaching attitude”. This evaluation includes positive, neutral and negative information all at once; capturing such inconsistent information therefore poses a great challenge for linguistic MCGDM methods.
To tackle the above problem, Fang and Ye [28] proposed the linguistic neutrosophic set (LNS), which generalizes the concept of the LIFS [23,24]. An LNS is represented by three independent functions of truth-membership, indeterminacy-membership, and falsity-membership, each expressed as a linguistic term. Thus, the LNS has prominent advantages in depicting inconsistent and indeterminate linguistic information, and several scholars have extended the LNS in several directions, such as aggregation operators and similarity (or distance) measures. Li et al. [29] introduced a linguistic neutrosophic geometric Heronian mean (LNGHM) operator and a linguistic neutrosophic prioritized geometric Heronian mean (LNPGHM) operator. Fan et al. [30] merged LNSs with the Bonferroni mean operator and proposed a linguistic neutrosophic number normalized weighted Bonferroni mean (LNNNWBM) operator and a linguistic neutrosophic number normalized weighted geometric Bonferroni mean (LNNNWGBM) operator. Shi and Ye [31] introduced two cosine similarity measures of LNSs to tackle MCGDM problems. Liang et al. [32] defined several distance measures of LNSs and presented an extended TOPSIS method under the LNS environment.
To facilitate mathematical operations, several quantification tools for natural language have been introduced, such as the 2-tuple model [33], the triangular (or trapezoidal) fuzzy number [34,35], the cloud model [36] and the symbolic model [37,38]. These models have greatly eased the computation of linguistic information; however, they cannot cover all types of problems and have some limitations to be addressed. To overcome the limitations of prior research, Wang et al. [39] introduced a series of linguistic scale functions (LSFs) for converting linguistic information into real numbers. With this model, the flexibility of information modeling is greatly enhanced by considering different semantic situations, and the loss and distortion of information are mitigated to a great extent. Thus, we apply LSFs to handle linguistic neutrosophic information in this paper.
The power averaging (PA) operator, proposed by Yager [40], has served as an effective information aggregation tool for solving MCDM problems [41,42,43] since its appearance. Unlike other common aggregation tools such as weighted averaging [44] and ordered weighted averaging [45,46], which assume the inputs are independent, the PA operator allows the information between inputs to support and reinforce each other. In HRM evaluation problems, the PA operator is well suited to integrating the evaluation information of different teams of DMs, as these DMs are not completely independent and the PA operator can measure the degree of support among them.
The TOPSIS method was first presented by Hwang and Yoon [47]. Its premise is that the better scheme should be closer to the ideal solution [48]. Due to the inevitable vagueness inherent in decision information, fuzzy TOPSIS and its extensions have been deployed in real-world applications [49,50,51]. Considering the advantages of this method, an extended TOPSIS technique is introduced here to evaluate alternatives.
As discussed above, our study develops an integrated method that combines the PA operator with LNSs and constructs an extended TOPSIS technique to tackle the university HRM evaluation problem. The novelties and contributions of the proposal are as follows. (1) New operations for LNNs based on LSFs are defined, which can reflect the differences between various semantics. (2) Based on the LSFs and the new operations, a generalized distance measure for LNNs is introduced, which reduces to the Hamming and Euclidean distances of LNNs as special cases. The proposed distance measure is more flexible than those in prior studies because of the application of LSFs and the novel operations. (3) Considering that the DMs in university HRM evaluation may support each other, this paper merges the PA operator with LNSs for information fusion. The proposed method can improve the adaptability of LNNs in real decision making.
The rest of this paper is organized as follows: Section 2 defines some operations and a distance measure for LNSs. Section 3 proposes two aggregation operators for LNSs and investigates their properties. Next, the detailed procedures for a linguistic MCGDM problem are given in Section 4. Then, a case of the university HRM evaluation problem verifies the feasibility and validity of our method in Section 5. Finally, Section 6 presents the conclusion and future work.

2. New Operations and Distance Measure for LNNs

After introducing the concepts of linguistic term set (LTS) and LNS, this section defines some new operations and a distance measure for LNNs based on the Archimedean t-norm and t-conorm. For better representation, some preliminaries about LSFs and the Archimedean t-norm and t-conorm are provided in Appendix A and Appendix B, respectively.

2.1. Linguistic Neutrosophic Set

Let $H = \{ h_\tau \mid \tau = 0, 1, \ldots, 2t,\; t \in N^* \}$ be a discrete linguistic term set, which is finite and totally ordered, where $N^*$ denotes the set of positive integers and $h_\tau$ is a possible value of a linguistic variable. The linguistic terms in $H$ satisfy the following two properties [34]: (1) the LTS is ordered: $h_\tau < h_\upsilon$ if and only if $\tau < \upsilon$, where $h_\tau, h_\upsilon \in H$; and (2) there exists a negation operator: $neg(h_\tau) = h_{2t - \tau}$ $(\tau, \upsilon = 0, 1, \ldots, 2t)$.
In order to preserve as much of the given information as possible and avoid information loss, Xu [52] extended the discrete set $H = \{ h_\tau \mid \tau = 0, 1, \ldots, 2t \}$ into a continuous LTS $\bar{H} = \{ h_\tau \mid 0 \le \tau \le L \}$, which satisfies the properties of the discrete term set $H$. If $h_\tau \in H$, then $h_\tau$ is called an original linguistic term; otherwise, $h_\tau$ is called a virtual linguistic term.
Definition 1
([28,29]). Let $X$ be a universe of discourse and $\bar{H} = \{ h_\alpha \mid h_0 \le h_\alpha \le h_{2t},\; \alpha \in [0, 2t] \}$; then an LNS can be defined as follows:
$$\tilde{a} = \{ \langle x, h_{T_{\tilde{a}}}(x), h_{I_{\tilde{a}}}(x), h_{F_{\tilde{a}}}(x) \rangle \mid x \in X \},$$
where $0 \le T_{\tilde{a}} + I_{\tilde{a}} + F_{\tilde{a}} \le 6t$ and $h_{T_{\tilde{a}}}(x), h_{I_{\tilde{a}}}(x), h_{F_{\tilde{a}}}(x) \in \bar{H}$ represent the degrees of truth-membership, indeterminacy-membership, and falsity-membership, respectively.
Notably, if $X$ contains only one element, $\tilde{a}$ is called a linguistic neutrosophic number (LNN); for notational simplicity, it is denoted by $\tilde{a} = \langle h_{T_{\tilde{a}}}, h_{I_{\tilde{a}}}, h_{F_{\tilde{a}}} \rangle$.

2.2. New Operations for LNNs

According to the LSFs in Appendix A and the Archimedean t-norm and t-conorm presented in Appendix B, some novel operations for LNNs are defined as follows.
Definition 2.
Let $\tilde{a} = \langle h_{T_{\tilde{a}}}, h_{I_{\tilde{a}}}, h_{F_{\tilde{a}}} \rangle$ and $\tilde{b} = \langle h_{T_{\tilde{b}}}, h_{I_{\tilde{b}}}, h_{F_{\tilde{b}}} \rangle$ be two arbitrary LNNs, and $\zeta \ge 0$. For brevity, write $T_a = f^*(h_{T_{\tilde{a}}})$, $I_a = f^*(h_{I_{\tilde{a}}})$, $F_a = f^*(h_{F_{\tilde{a}}})$, and similarly for $\tilde{b}$. Then the operations for LNNs are defined as follows:
(1)
$$\tilde{a} \oplus \tilde{b} = \left\langle f^{*-1}\!\left(\frac{T_a + T_b}{1 + T_a T_b}\right),\; f^{*-1}\!\left(\frac{I_a I_b}{1 + (1 - I_a)(1 - I_b)}\right),\; f^{*-1}\!\left(\frac{F_a F_b}{1 + (1 - F_a)(1 - F_b)}\right) \right\rangle;$$
(2)
$$\tilde{a} \otimes \tilde{b} = \left\langle f^{*-1}\!\left(\frac{T_a T_b}{1 + (1 - T_a)(1 - T_b)}\right),\; f^{*-1}\!\left(\frac{I_a + I_b}{1 + I_a I_b}\right),\; f^{*-1}\!\left(\frac{F_a + F_b}{1 + F_a F_b}\right) \right\rangle;$$
(3)
$$\zeta \tilde{a} = \left\langle f^{*-1}\!\left(\frac{(1 + T_a)^\zeta - (1 - T_a)^\zeta}{(1 + T_a)^\zeta + (1 - T_a)^\zeta}\right),\; f^{*-1}\!\left(\frac{2 I_a^\zeta}{(2 - I_a)^\zeta + I_a^\zeta}\right),\; f^{*-1}\!\left(\frac{2 F_a^\zeta}{(2 - F_a)^\zeta + F_a^\zeta}\right) \right\rangle;$$
(4)
$$\tilde{a}^\zeta = \left\langle f^{*-1}\!\left(\frac{2 T_a^\zeta}{(2 - T_a)^\zeta + T_a^\zeta}\right),\; f^{*-1}\!\left(\frac{(1 + I_a)^\zeta - (1 - I_a)^\zeta}{(1 + I_a)^\zeta + (1 - I_a)^\zeta}\right),\; f^{*-1}\!\left(\frac{(1 + F_a)^\zeta - (1 - F_a)^\zeta}{(1 + F_a)^\zeta + (1 - F_a)^\zeta}\right) \right\rangle; \text{ and}$$
(5)
$$neg(\tilde{a}) = \left\langle h_{F_{\tilde{a}}},\; f^{*-1}\!\left(1 - I_a\right),\; h_{T_{\tilde{a}}} \right\rangle.$$
Example 1.
Let $H = \{ h_0, h_1, h_2, h_3, h_4, h_5, h_6 \} = \{$very poor, poor, slightly poor, fair, slightly good, good, very good$\}$, $\tilde{a} = \langle h_3, h_2, h_2 \rangle$, $\tilde{b} = \langle h_2, h_3, h_3 \rangle$, and $\zeta = 2$, and adopt the linear LSF $f_1^*(h_x) = \theta_x = x/2t$ $(x = 0, 1, \ldots, 2t)$. The calculated results are as follows:
(1)
$\tilde{a} \oplus \tilde{b} = \langle h_{4.29}, h_{0.75}, h_{0.75} \rangle$;
(2)
$\tilde{a} \otimes \tilde{b} = \langle h_{0.75}, h_{4.29}, h_{4.29} \rangle$;
(3)
$2\tilde{a} = \langle h_{4.8}, h_{0.46}, h_{0.46} \rangle$; and
(4)
$\tilde{a}^2 = \langle h_{1.2}, h_{3.6}, h_{3.6} \rangle$.
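The operations in Definition 2 are easy to sketch in code. The following is a minimal illustration, not the authors' implementation; it assumes the linear LSF $f_1^*(h_x) = x/(2t)$ with $2t = 6$ and Einstein-type t-norms/t-conorms, and the function names are ours:

```python
# Sketch of the LNN operations of Definition 2 under the linear LSF
# f*(h_x) = x/(2t), with 2t = 6 as in Example 1. Einstein-type
# operations are assumed; this is an illustration, not the paper's code.
T2 = 6  # 2t, the upper label of the linguistic term set

def f(x):     # linear LSF: label -> [0, 1]
    return x / T2

def finv(v):  # inverse LSF: [0, 1] -> (virtual) label
    return v * T2

def es(x, y):  # Einstein sum (t-conorm)
    return (x + y) / (1 + x * y)

def ep(x, y):  # Einstein product (t-norm)
    return x * y / (1 + (1 - x) * (1 - y))

def lnn_add(a, b):  # a (+) b for LNNs given as label triples (T, I, F)
    (Ta, Ia, Fa), (Tb, Ib, Fb) = a, b
    return (finv(es(f(Ta), f(Tb))),
            finv(ep(f(Ia), f(Ib))),
            finv(ep(f(Fa), f(Fb))))

def lnn_scale(z, a):  # zeta * a, operation (3)
    Ta, Ia, Fa = map(f, a)
    t = ((1 + Ta)**z - (1 - Ta)**z) / ((1 + Ta)**z + (1 - Ta)**z)
    i = 2 * Ia**z / ((2 - Ia)**z + Ia**z)
    fl = 2 * Fa**z / ((2 - Fa)**z + Fa**z)
    return (finv(t), finv(i), finv(fl))

a, b = (3, 2, 2), (2, 3, 3)
print([round(v, 2) for v in lnn_add(a, b)])    # truth label ≈ 4.29
print([round(v, 2) for v in lnn_scale(2, a)])  # ≈ [4.8, 0.46, 0.46]
```

Note that `lnn_scale(2, a)` coincides with `lnn_add(a, a)` under these operations, consistent with Theorem 1.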
Theorem 1.
Let $\tilde{a}$, $\tilde{b}$, and $\tilde{c}$ be three LNNs, and $\zeta \ge 0$; then the following equations hold:
(1)
$\tilde{a} \oplus \tilde{b} = \tilde{b} \oplus \tilde{a}$;
(2)
$(\tilde{a} \oplus \tilde{b}) \oplus \tilde{c} = \tilde{a} \oplus (\tilde{b} \oplus \tilde{c})$;
(3)
$\tilde{a} \otimes \tilde{b} = \tilde{b} \otimes \tilde{a}$;
(4)
$(\tilde{a} \otimes \tilde{b}) \otimes \tilde{c} = \tilde{a} \otimes (\tilde{b} \otimes \tilde{c})$;
(5)
$\zeta \tilde{a} \oplus \zeta \tilde{b} = \zeta (\tilde{a} \oplus \tilde{b})$; and
(6)
$(\tilde{a} \otimes \tilde{b})^\zeta = \tilde{a}^\zeta \otimes \tilde{b}^\zeta$.
Theorem 1 holds according to Definition 2, so the proof is omitted here.

2.3. Distance between Two LNNs

Definition 3.
Let $\tilde{a} = \langle h_{T_{\tilde{a}}}, h_{I_{\tilde{a}}}, h_{F_{\tilde{a}}} \rangle$ and $\tilde{b} = \langle h_{T_{\tilde{b}}}, h_{I_{\tilde{b}}}, h_{F_{\tilde{b}}} \rangle$ be two arbitrary LNNs, and let $f^*$ be an LSF. Then, the generalized distance measure between $\tilde{a}$ and $\tilde{b}$ is defined as follows:
$$d(\tilde{a}, \tilde{b}) = \left[ \frac{1}{3} \left( \left| f^*(h_{T_{\tilde{a}}}) - f^*(h_{T_{\tilde{b}}}) \right|^\lambda + \left| f^*(h_{I_{\tilde{a}}}) - f^*(h_{I_{\tilde{b}}}) \right|^\lambda + \left| f^*(h_{F_{\tilde{a}}}) - f^*(h_{F_{\tilde{b}}}) \right|^\lambda \right) \right]^{1/\lambda}, \quad \lambda > 0.$$
When $\lambda = 1$, the above distance measure reduces to the Hamming distance; when $\lambda = 2$, it reduces to the Euclidean distance. Thus, Equation (2) is a generalized form of distance measure.
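As a quick numeric illustration (ours, not from the paper), the generalized distance under the linear LSF $f^*(h_x) = x/6$ can be computed as follows, with `lam` playing the role of $\lambda$:

```python
# Generalized LNN distance of Definition 3 with the linear LSF f*(h_x) = x/6.
# lam = 1 gives the Hamming distance, lam = 2 the Euclidean distance.
def lnn_distance(a, b, lam=2, t2=6):
    fa = [x / t2 for x in a]   # map the labels (T, I, F) through the LSF
    fb = [x / t2 for x in b]
    s = sum(abs(x - y) ** lam for x, y in zip(fa, fb)) / 3
    return s ** (1 / lam)

a, b = (3, 2, 2), (2, 3, 3)
print(lnn_distance(a, b, lam=1))  # Hamming: here 1/6, all label gaps are 1
print(lnn_distance(a, b, lam=2))  # Euclidean: also 1/6 for this symmetric pair
```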
Theorem 2.
Let $\tilde{a} = \langle h_{T_{\tilde{a}}}, h_{I_{\tilde{a}}}, h_{F_{\tilde{a}}} \rangle$, $\tilde{b} = \langle h_{T_{\tilde{b}}}, h_{I_{\tilde{b}}}, h_{F_{\tilde{b}}} \rangle$ and $\tilde{c} = \langle h_{T_{\tilde{c}}}, h_{I_{\tilde{c}}}, h_{F_{\tilde{c}}} \rangle$ be three arbitrary LNNs; then the generalized distance measure in Definition 3 satisfies the following properties:
(1)
$d(\tilde{a}, \tilde{b}) \ge 0$;
(2)
$d(\tilde{a}, \tilde{a}) = 0$;
(3)
$d(\tilde{a}, \tilde{b}) = d(\tilde{b}, \tilde{a})$; and
(4)
$d(\tilde{a}, \tilde{c}) \le d(\tilde{a}, \tilde{b}) + d(\tilde{b}, \tilde{c})$.
The proof of Theorem 2 is given in Appendix C.

3. Linguistic Neutrosophic Aggregation Operators

Yager [40] introduced the PA operator to allow input arguments to support each other. Thus, the traditional PA operator is first reviewed; then, the LNPWA and LNPWG operators are proposed for the LNN environment.
Definition 4
([40]). Let $a_j$ $(j = 1, 2, \ldots, n)$ be a collection of positive values and $\Omega$ be the set of all given values; then the PA operator is the mapping $PA: \Omega^n \to \Omega$ defined as follows:
$$PA(a_1, a_2, \ldots, a_n) = \frac{\sum_{j=1}^{n} (1 + G(a_j)) \, a_j}{\sum_{j=1}^{n} (1 + G(a_j))},$$
where
$$G(a_j) = \sum_{i=1, i \ne j}^{n} Sup(a_j, a_i),$$
and $Sup(a_j, a_i)$ represents the support for $a_j$ from $a_i$, which satisfies the following properties:
(1)
$Sup(a_i, a_j) \in [0, 1]$;
(2)
$Sup(a_i, a_j) = Sup(a_j, a_i)$; and
(3)
$Sup(a_i, a_j) \ge Sup(a_l, a_r)$ when $d(a_i, a_j) < d(a_l, a_r)$, where $d(a_i, a_j)$ is the distance between $a_i$ and $a_j$.
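A small numeric sketch of Definition 4 (our own illustration): we take crisp inputs in $[0, 1]$ and the simple distance-based support $Sup(a_i, a_j) = 1 - |a_i - a_j|$, which satisfies properties (1)–(3) above but is only one possible choice:

```python
# Yager's PA operator on crisp inputs, with the illustrative support
# Sup(a_i, a_j) = 1 - |a_i - a_j| (an assumption, not prescribed above).
def pa(values):
    n = len(values)
    # G(a_j) = sum of supports for a_j from all other inputs
    G = [sum(1 - abs(values[j] - values[i])
             for i in range(n) if i != j) for j in range(n)]
    weights = [1 + g for g in G]       # unnormalized power weights
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

print(pa([0.5, 0.5, 0.5]))  # identical inputs: returns 0.5
print(pa([0.1, 0.5, 0.55])) # the outlier 0.1 receives relatively less weight
```

Because mutually close inputs support each other more, an outlying value is down-weighted rather than simply averaged in, which is the behavior exploited for DM aggregation below.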

3.1. Linguistic Neutrosophic Power Weighted Averaging Operator

This subsection extends the traditional PA operator to LNNs. Then, an LNPWA operator is proposed and discussed.
Definition 5.
Let $\tilde{a}_j = \langle h_{T_{\tilde{a}_j}}, h_{I_{\tilde{a}_j}}, h_{F_{\tilde{a}_j}} \rangle$ $(j = 1, 2, \ldots, n)$ be a set of LNNs. Then, the LNPWA operator can be defined as
$$LNPWA(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \bigoplus_{j=1}^{n} \frac{w_j (1 + G(\tilde{a}_j))}{\sum_{j=1}^{n} w_j (1 + G(\tilde{a}_j))} \, \tilde{a}_j,$$
where $w = (w_1, w_2, \ldots, w_n)^T$ is the weight vector of $\tilde{a}_j$, $w_j \in [0, 1]$, $\sum_{j=1}^{n} w_j = 1$, and $G(\tilde{a}_j) = \sum_{i=1, i \ne j}^{n} w_i \, Sup(\tilde{a}_j, \tilde{a}_i)$, in which $Sup(\tilde{a}_j, \tilde{a}_i)$ is the support for $\tilde{a}_j$ from $\tilde{a}_i$ and satisfies the properties in Definition 4.
Theorem 3.
Let $\tilde{a}_j = \langle h_{T_{\tilde{a}_j}}, h_{I_{\tilde{a}_j}}, h_{F_{\tilde{a}_j}} \rangle$ $(j = 1, 2, \ldots, n)$ be a set of LNNs, and let $w = (w_1, w_2, \ldots, w_n)^T$ be the weight vector of $\tilde{a}_j$ with $w_j \in [0, 1]$ and $\sum_{j=1}^{n} w_j = 1$. Then, the aggregated result of Equation (5) is also an LNN. For notational simplicity, let $\zeta_j = w_j (1 + G(\tilde{a}_j)) / \sum_{j=1}^{n} w_j (1 + G(\tilde{a}_j))$ and $T_j = f^*(h_{T_{\tilde{a}_j}})$, $I_j = f^*(h_{I_{\tilde{a}_j}})$, $F_j = f^*(h_{F_{\tilde{a}_j}})$. Then
$$LNPWA(\tilde{a}_1, \ldots, \tilde{a}_n) = \left\langle f^{*-1}\!\left( \frac{\prod_{j=1}^{n} (1 + T_j)^{\zeta_j} - \prod_{j=1}^{n} (1 - T_j)^{\zeta_j}}{\prod_{j=1}^{n} (1 + T_j)^{\zeta_j} + \prod_{j=1}^{n} (1 - T_j)^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2 \prod_{j=1}^{n} I_j^{\zeta_j}}{\prod_{j=1}^{n} (2 - I_j)^{\zeta_j} + \prod_{j=1}^{n} I_j^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2 \prod_{j=1}^{n} F_j^{\zeta_j}}{\prod_{j=1}^{n} (2 - F_j)^{\zeta_j} + \prod_{j=1}^{n} F_j^{\zeta_j}} \right) \right\rangle.$$
Appendix D details the proof of Theorem 3.
The traditional PA operator has the properties of idempotency, monotonicity, and boundedness. It can be proved that the LNPWA operator also satisfies these properties.
Theorem 4.
Let $\tilde{a}_j = \langle h_{T_{\tilde{a}_j}}, h_{I_{\tilde{a}_j}}, h_{F_{\tilde{a}_j}} \rangle$ $(j = 1, 2, \ldots, n)$ be a set of LNNs, and let $w = (w_1, w_2, \ldots, w_n)^T$ be the weight vector of $\tilde{a}_j$ with $w_j \in [0, 1]$ and $\sum_{j=1}^{n} w_j = 1$. If $Sup(\tilde{a}_j, \tilde{a}_i) = 0$ or $Sup(\tilde{a}_j, \tilde{a}_i) = k$ $(k \in [0, 1])$ for all $\tilde{a}_i$ and $\tilde{a}_j$, then the LNPWA operator reduces to the linguistic neutrosophic weighted averaging (LNWA) operator. With $T_j = f^*(h_{T_{\tilde{a}_j}})$, $I_j = f^*(h_{I_{\tilde{a}_j}})$ and $F_j = f^*(h_{F_{\tilde{a}_j}})$,
$$LNWA(\tilde{a}_1, \ldots, \tilde{a}_n) = \bigoplus_{j=1}^{n} w_j \tilde{a}_j = \left\langle f^{*-1}\!\left( \frac{\prod_{j=1}^{n} (1 + T_j)^{w_j} - \prod_{j=1}^{n} (1 - T_j)^{w_j}}{\prod_{j=1}^{n} (1 + T_j)^{w_j} + \prod_{j=1}^{n} (1 - T_j)^{w_j}} \right),\; f^{*-1}\!\left( \frac{2 \prod_{j=1}^{n} I_j^{w_j}}{\prod_{j=1}^{n} (2 - I_j)^{w_j} + \prod_{j=1}^{n} I_j^{w_j}} \right),\; f^{*-1}\!\left( \frac{2 \prod_{j=1}^{n} F_j^{w_j}}{\prod_{j=1}^{n} (2 - F_j)^{w_j} + \prod_{j=1}^{n} F_j^{w_j}} \right) \right\rangle.$$
The proof for Theorem 4 is similar to the proof for Theorem 3; thus, it is omitted here.
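The closed form of the LNWA special case is easy to sketch. The following is a hypothetical implementation (ours), again assuming the linear LSF with $2t = 6$; LNNs are passed as label triples $(T, I, F)$:

```python
# LNWA closed form (the equal-support special case of the LNPWA) under
# the linear LSF f*(h_x) = x/6. Illustrative code, not the paper's.
from math import prod

T2 = 6
def f(x): return x / T2
def finv(v): return v * T2

def lnwa(lnns, w):
    T = [f(a[0]) for a in lnns]
    I = [f(a[1]) for a in lnns]
    F = [f(a[2]) for a in lnns]
    # truth part: weighted Einstein-sum form
    p1 = prod((1 + t) ** wj for t, wj in zip(T, w))
    p2 = prod((1 - t) ** wj for t, wj in zip(T, w))
    truth = (p1 - p2) / (p1 + p2)
    # indeterminacy/falsity part: weighted Einstein-product form
    def geo(vals):
        q1 = prod(v ** wj for v, wj in zip(vals, w))
        q2 = prod((2 - v) ** wj for v, wj in zip(vals, w))
        return 2 * q1 / (q2 + q1)
    return (finv(truth), finv(geo(I)), finv(geo(F)))

res = lnwa([(3, 2, 2), (2, 3, 3)], [0.5, 0.5])
print([round(v, 3) for v in res])  # an LNN lying between the two inputs
```

Idempotency is immediate from the formula: aggregating a single LNN with weight 1 returns it unchanged, and the aggregated labels always stay within the range of the input labels (boundedness).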

3.2. Linguistic Neutrosophic Power Weighted Geometric Operator

Definition 6.
Let $\tilde{a}_j = \langle h_{T_{\tilde{a}_j}}, h_{I_{\tilde{a}_j}}, h_{F_{\tilde{a}_j}} \rangle$ $(j = 1, 2, \ldots, n)$ be a set of LNNs. Then, the LNPWG operator can be defined as
$$LNPWG(\tilde{a}_1, \tilde{a}_2, \ldots, \tilde{a}_n) = \bigotimes_{j=1}^{n} (\tilde{a}_j)^{w_j (1 + G(\tilde{a}_j)) / \sum_{j=1}^{n} w_j (1 + G(\tilde{a}_j))},$$
where $w = (w_1, w_2, \ldots, w_n)^T$ is the weight vector of $\tilde{a}_j$, $w_j \in [0, 1]$, $\sum_{j=1}^{n} w_j = 1$, and $G(\tilde{a}_j) = \sum_{i=1, i \ne j}^{n} w_i \, Sup(\tilde{a}_j, \tilde{a}_i)$, in which $Sup(\tilde{a}_j, \tilde{a}_i)$ is the support for $\tilde{a}_j$ from $\tilde{a}_i$ and satisfies the properties in Definition 4.
Theorem 5.
Let $\tilde{a}_j = \langle h_{T_{\tilde{a}_j}}, h_{I_{\tilde{a}_j}}, h_{F_{\tilde{a}_j}} \rangle$ $(j = 1, 2, \ldots, n)$ be a set of LNNs, and let $w = (w_1, w_2, \ldots, w_n)^T$ be the weight vector of $\tilde{a}_j$ with $w_j \in [0, 1]$ and $\sum_{j=1}^{n} w_j = 1$. Then, the aggregated result of Equation (8) is still an LNN. For notational simplicity, let $\zeta_j = w_j (1 + G(\tilde{a}_j)) / \sum_{j=1}^{n} w_j (1 + G(\tilde{a}_j))$ and $T_j = f^*(h_{T_{\tilde{a}_j}})$, $I_j = f^*(h_{I_{\tilde{a}_j}})$, $F_j = f^*(h_{F_{\tilde{a}_j}})$. Then
$$LNPWG(\tilde{a}_1, \ldots, \tilde{a}_n) = \left\langle f^{*-1}\!\left( \frac{2 \prod_{j=1}^{n} T_j^{\zeta_j}}{\prod_{j=1}^{n} (2 - T_j)^{\zeta_j} + \prod_{j=1}^{n} T_j^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{\prod_{j=1}^{n} (1 + I_j)^{\zeta_j} - \prod_{j=1}^{n} (1 - I_j)^{\zeta_j}}{\prod_{j=1}^{n} (1 + I_j)^{\zeta_j} + \prod_{j=1}^{n} (1 - I_j)^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{\prod_{j=1}^{n} (1 + F_j)^{\zeta_j} - \prod_{j=1}^{n} (1 - F_j)^{\zeta_j}}{\prod_{j=1}^{n} (1 + F_j)^{\zeta_j} + \prod_{j=1}^{n} (1 - F_j)^{\zeta_j}} \right) \right\rangle.$$
The proof of Theorem 5 proceeds in the same way as that of Theorem 3 and is therefore omitted.

4. MCGDM Method Based on the LNPWA and LNPWG Operators

In this part, a MCGDM method based on the LNPWA and LNPWG operators is developed to solve university HRM evaluation problems.
For an MCGDM problem with a finite set of $m$ alternatives, let $D = \{D_1, D_2, \ldots, D_s\}$ be the set of DMs, $A = \{A_1, A_2, \ldots, A_m\}$ the set of alternatives, and $C = \{C_1, C_2, \ldots, C_n\}$ the set of criteria. Assume that the weight vector of the criteria is $\varpi = (\varpi_1, \varpi_2, \ldots, \varpi_n)^T$, such that $\varpi_j \in [0, 1]$ and $\sum_{j=1}^{n} \varpi_j = 1$. Analogously, the weight vector of the DMs is $w = (w_1, w_2, \ldots, w_s)^T$, where $w_k \ge 0$ and $\sum_{k=1}^{s} w_k = 1$. The evaluation values provided by the DMs are transformed into LNNs, and $\tilde{a}_{ij}^k = \langle h_{T_{\tilde{a}_{ij}^k}}, h_{I_{\tilde{a}_{ij}^k}}, h_{F_{\tilde{a}_{ij}^k}} \rangle$ $(k = 1, 2, \ldots, s;\; j = 1, 2, \ldots, n;\; i = 1, 2, \ldots, m)$ represents the evaluation value given by DM $D_k$ for alternative $A_i$ on criterion $C_j$.
The detailed procedures of the MCGDM method involve the following steps:
Step 1: Normalize the decision matrices.
In general, criteria can be divided into two categories: benefit type and cost type. Using operation (5) in Definition 2, the cost criteria can be transformed into benefit ones as follows:
$$r_{ij}^k = \begin{cases} \langle h_{T_{\tilde{a}_{ij}^k}}, h_{I_{\tilde{a}_{ij}^k}}, h_{F_{\tilde{a}_{ij}^k}} \rangle, & \text{for a benefit criterion } C_j, \\[4pt] \langle h_{F_{\tilde{a}_{ij}^k}},\; f^{*-1}(1 - f^*(h_{I_{\tilde{a}_{ij}^k}})),\; h_{T_{\tilde{a}_{ij}^k}} \rangle, & \text{otherwise.} \end{cases}$$
Step 2: Obtain the weighted decision matrices.
Using operations in Definition 2, the weighted decision matrices can be constructed by multiplying the given criteria weight vector into the decision matrices.
Step 3: Calculate the supports.
Utilizing the distance measure defined in Definition 3, the support degrees can be obtained by Equation (11):
$$Sup(r_{ij}^{k_1}, r_{ij}^{k_2}) = 1 - d(r_{ij}^{k_1}, r_{ij}^{k_2}) \quad (i = 1, \ldots, m;\; j = 1, \ldots, n;\; k_1, k_2 = 1, \ldots, s).$$
Step 4: Calculate the weights associated with $r_{ij}^{k_1}$ $(k_1 = 1, 2, \ldots, s)$.
$$\eta_{ij}^{k_1} = \frac{w_{k_1} (1 + G(r_{ij}^{k_1}))}{\sum_{k_1=1}^{s} w_{k_1} (1 + G(r_{ij}^{k_1}))},$$
where $G(r_{ij}^{k_1}) = \sum_{k_2 = 1, k_2 \ne k_1}^{s} w_{k_2} \, Sup(r_{ij}^{k_1}, r_{ij}^{k_2})$, and $w_{k_2}$ is the weight of DM $D_{k_2}$.
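Steps 3 and 4 can be sketched for a single matrix cell $(i, j)$ assessed by $s$ DMs. The snippet below is an illustration with made-up data and our own function names, using the $\lambda = 2$ distance under the linear LSF:

```python
# Steps 3-4 for one (alternative, criterion) cell: supports Sup = 1 - d,
# then the power weights eta of Equation (12). Illustrative data.
def lnn_dist(a, b, lam=2, t2=6):
    # generalized distance of Definition 3 with the linear LSF f*(h_x) = x/6
    return (sum(abs(x / t2 - y / t2) ** lam for x, y in zip(a, b)) / 3) ** (1 / lam)

def power_weights(lnns, w):
    s = len(lnns)
    # G(r^{k1}) = weighted sum of supports from the other DMs (Step 3)
    G = [sum(w[k2] * (1 - lnn_dist(lnns[k1], lnns[k2]))
             for k2 in range(s) if k2 != k1) for k1 in range(s)]
    num = [w[k] * (1 + G[k]) for k in range(s)]   # Step 4 numerators
    tot = sum(num)
    return [v / tot for v in num]                 # eta values, summing to 1

# three DM teams' LNNs for one cell, equal DM weights as in Section 5
eta = power_weights([(5, 1, 1), (4, 2, 2), (2, 3, 3)], [1/3, 1/3, 1/3])
print([round(e, 4) for e in eta])  # the outlying third judgment gets less weight
```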
Step 5: Obtain the comprehensive evaluation information.
Using Equation (5) or Equation (9), the normalized evaluation information provided by DMs can be aggregated, and the integrated decision matrix R = [ r i j ] m × n can be obtained.
Step 6: Determine the ideal decision vectors of all alternative decisions.
After aggregating the DMs’ evaluation information, the integrated decision matrix $R = [r_{ij}]_{m \times n}$ is
$$R = [r_{ij}]_{m \times n} = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & r_{mn} \end{pmatrix},$$
where the rows correspond to the alternatives $A_1, \ldots, A_m$ and the columns to the criteria $C_1, \ldots, C_n$.
The ideal alternative vector $A^*$ among all the alternatives is
$$A^* = (\langle h_{2t}, h_0, h_0 \rangle, \langle h_{2t}, h_0, h_0 \rangle, \ldots, \langle h_{2t}, h_0, h_0 \rangle).$$
Similarly, the negative ideal alternative vector $A^{c*}$, which has the maximum separation from $A^*$, is obtained by the negation of $A^*$:
$$A^{c*} = (\langle h_0, h_{2t}, h_{2t} \rangle, \langle h_0, h_{2t}, h_{2t} \rangle, \ldots, \langle h_0, h_{2t}, h_{2t} \rangle).$$
In addition, we can obtain the left maximum separation from $A^*$, denoted $A^{*-}$:
$$A^{*-} = (\langle h_{T_1^-}, h_{I_1^-}, h_{F_1^-} \rangle, \langle h_{T_2^-}, h_{I_2^-}, h_{F_2^-} \rangle, \ldots, \langle h_{T_n^-}, h_{I_n^-}, h_{F_n^-} \rangle),$$
where $h_{T_j^-} = \min_i \{ h_{T_{ij}} \}$, $h_{I_j^-} = \max_i \{ h_{I_{ij}} \}$, and $h_{F_j^-} = \max_i \{ h_{F_{ij}} \}$.
In the same way, we can obtain the right maximum separation from $A^*$, denoted $A^{*+}$:
$$A^{*+} = (\langle h_{T_1^+}, h_{I_1^+}, h_{F_1^+} \rangle, \langle h_{T_2^+}, h_{I_2^+}, h_{F_2^+} \rangle, \ldots, \langle h_{T_n^+}, h_{I_n^+}, h_{F_n^+} \rangle),$$
where $h_{T_j^+} = \max_i \{ h_{T_{ij}} \}$, $h_{I_j^+} = \min_i \{ h_{I_{ij}} \}$, and $h_{F_j^+} = \min_i \{ h_{F_{ij}} \}$.
Step 7: Calculate the separations of each alternative decision vector from the ideal decision vector.
Utilizing the distance measure in Definition 3, the separations between each alternative vector $A_i = (r_{i1}, \ldots, r_{in})$ and the ideal decision vectors are, respectively,
$$d(A_i, A^*) = \sum_{j=1}^{n} \left[ \frac{1}{3} \left( |f^*(h_{T_{ij}}) - f^*(h_{2t})|^\lambda + |f^*(h_{I_{ij}}) - f^*(h_0)|^\lambda + |f^*(h_{F_{ij}}) - f^*(h_0)|^\lambda \right) \right]^{1/\lambda},$$
$$d(A_i, A^{c*}) = \sum_{j=1}^{n} \left[ \frac{1}{3} \left( |f^*(h_{T_{ij}}) - f^*(h_0)|^\lambda + |f^*(h_{I_{ij}}) - f^*(h_{2t})|^\lambda + |f^*(h_{F_{ij}}) - f^*(h_{2t})|^\lambda \right) \right]^{1/\lambda},$$
$$d(A_i, A^{*-}) = \sum_{j=1}^{n} \left[ \frac{1}{3} \left( |f^*(h_{T_{ij}}) - f^*(h_{T_j^-})|^\lambda + |f^*(h_{I_{ij}}) - f^*(h_{I_j^-})|^\lambda + |f^*(h_{F_{ij}}) - f^*(h_{F_j^-})|^\lambda \right) \right]^{1/\lambda},$$
$$d(A_i, A^{*+}) = \sum_{j=1}^{n} \left[ \frac{1}{3} \left( |f^*(h_{T_{ij}}) - f^*(h_{T_j^+})|^\lambda + |f^*(h_{I_{ij}}) - f^*(h_{I_j^+})|^\lambda + |f^*(h_{F_{ij}}) - f^*(h_{F_j^+})|^\lambda \right) \right]^{1/\lambda}.$$
Step 8: Calculate the relative closeness of each alternative decision.
The relative closeness of each alternative decision can be obtained using the following formula:
$$I_i = \frac{d(A_i, A^{c*}) + d(A_i, A^{*-}) + d(A_i, A^{*+})}{d(A_i, A^*) + d(A_i, A^{c*}) + d(A_i, A^{*-}) + d(A_i, A^{*+})}.$$
Step 9: Rank all the alternatives.
According to the relative closeness $I_i$ of each alternative decision, we can rank all the alternatives. The larger the value of $I_i$, the better the alternative $A_i$.
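To make Steps 6–9 concrete, here is a toy walk-through on a made-up $2 \times 2$ LNN matrix (illustrative only; the real case in Section 5 uses $6 \times 4$ matrices and three DM teams), with $\lambda = 2$ and the linear LSF:

```python
# Toy walk-through of Steps 6-9: ideal vectors, separations, and relative
# closeness for a 2-alternative x 2-criterion LNN matrix. Data are invented.
T2 = 6
def f(x): return x / T2

def dist(a, b, lam=2):
    # per-cell generalized distance of Definition 3
    return (sum(abs(f(x) - f(y)) ** lam for x, y in zip(a, b)) / 3) ** (1 / lam)

R = [[(5, 1, 1), (4, 2, 2)],   # alternative A1 (dominates)
     [(3, 3, 2), (2, 2, 4)]]   # alternative A2
n = len(R[0])

A_star = [(T2, 0, 0)] * n       # ideal vector A*
A_cstar = [(0, T2, T2)] * n     # negative ideal vector A^{c*}
# left/right maximum separations A*- and A*+, taken over the alternatives
A_minus = [(min(R[i][j][0] for i in range(len(R))),
            max(R[i][j][1] for i in range(len(R))),
            max(R[i][j][2] for i in range(len(R)))) for j in range(n)]
A_plus = [(max(R[i][j][0] for i in range(len(R))),
           min(R[i][j][1] for i in range(len(R))),
           min(R[i][j][2] for i in range(len(R)))) for j in range(n)]

def sep(Ai, ref):  # Step 7: separation = sum of distances over the criteria
    return sum(dist(a, r) for a, r in zip(Ai, ref))

closeness = []
for Ai in R:
    dp, dc = sep(Ai, A_star), sep(Ai, A_cstar)
    dm, dpl = sep(Ai, A_minus), sep(Ai, A_plus)
    closeness.append((dc + dm + dpl) / (dp + dc + dm + dpl))  # Step 8

print(closeness)  # Step 9: larger relative closeness means a better rank
```

Here A1 weakly dominates A2 in every component, so its relative closeness comes out strictly larger, matching the ranking rule of Step 9.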

5. A Case of Human Resource Management Problem

5.1. Problem Definition

The present study focuses on a case of an HRM problem in a Chinese university to test the proposed MCGDM method. Specifically, the school of management in the university plans to introduce talents from home and abroad to strengthen discipline construction and pursue the goal of building a high-level innovative university. Three teams of DMs are assembled as a committee and take full responsibility for the recruitment process; these teams consist of university presidents $D_1$, deans of the management school $D_2$, and human resource officers $D_3$. After a strict first interview, six candidates $A_i$ $(i = 1, 2, \ldots, 6)$ remain for the second review. Before the evaluation procedures, an appropriate evaluation index system should be constructed through literature review and expert consultation. In the literature, Abdullah et al. [1] and Chou et al. [53] identified three dimensions and eight criteria for the HRM evaluation problem; the three dimensions used in their work were infrastructure, input and output. Zhang et al. [5] constructed an evaluation index system of classroom teaching quality; the dimensions in their work were teaching attitude, teaching capacity, teaching content, teaching method and teaching effect. Evidently, different evaluation index systems serve different purposes of HRM evaluation in various industries. This study mainly tackles the HRM evaluation for talent introduction in universities as it occurs in real-life decision environments. According to Ref. [54], experts agree on four criteria for the evaluation of HRM: teaching skill ($C_1$), morality ($C_2$), education background ($C_3$) and research capability ($C_4$). A brief description of each criterion follows.
Teaching skill is an overall reflection of a teacher’s classroom teaching quality and includes several sub-attributes, such as teaching attitude, teaching capacity, teaching content, teaching method and teaching effect.
Morality refers to the teachers’ morality in this study. It is a kind of professional morality that comes first in education and can greatly affect the level and quality of education as a whole. More specifically, the teachers’ morality comprises the moral consciousness, moral relations and moral activity of teachers in universities.
Education background is an overview of a person’s learning environment and learning ability. It includes the person’s educational level, graduate school, major courses, academic achievements, and some other highlights.
Research capability denotes either the ability required for scientific research or the competence someone shows during the research process. The former is closer to potential, including abilities in logical thinking, writing and oral language, etc., whereas the latter emphasizes practical scientific research capacity.
With the reform of education and fierce competition among universities, modern university education increasingly needs teachers with the above four abilities. Therefore, this study applies these four criteria to the case of HRM evaluation, and the six candidates $A_i$ $(i = 1, 2, \ldots, 6)$ are evaluated by the three teams of DMs under each criterion. The weight vector of the criteria was assigned by the DMs as $\varpi = (0.3, 0.12, 0.31, 0.27)^T$, and the weight vector of the DMs was $w = (1/3, 1/3, 1/3)^T$. In addition, the LTS was $H = \{h_0, h_1, \ldots, h_6\} = \{$extremely poor, very poor, poor, medium, good, very good, extremely good$\}$. By interviewing the DMs one by one anonymously, all of their linguistic assessments for each alternative under each criterion were collected. During this process, the DMs in each group were isolated and did not negotiate with each other; consequently, the decision information was provided independently in the form of linguistic terms. Take the evaluation value $\tilde{a}_{11}^1 = \langle h_5, h_3, h_2 \rangle$ as an example, which represents the evaluation given by DM team $D_1$ for alternative $A_1$ under criterion $C_1$. Since criterion $C_1$ (teaching skill) covers various aspects, such as teaching attitude, teaching capacity, teaching content, teaching method and teaching effect, the team $D_1$ may hold inconsistent linguistic judgments for alternative $A_1$ with respect to $C_1$. After collecting all the linguistic assessments for alternative $A_1$, the linguistic neutrosophic information $\tilde{a}_{11}^1 = \langle h_5, h_3, h_2 \rangle$ is obtained by taking the weighted mean of the labels of the linguistic terms with respect to the positive, neutral and negative information, respectively. Similarly, the overall evaluation information provided by the teams of DMs is represented in the form of LNNs in Table 1, Table 2 and Table 3.

5.2. Evaluation Steps of the Proposed Method

The following steps describe the evaluation procedure for all candidates, from which the ranking order of the six alternatives can be obtained. For simplicity of calculation, we chose the LSF $f_1^*$.
Step 1: Normalize the decision matrices.
All four criteria are of the benefit type, so there is no need for normalization.
Step 2: Obtain the weighted decision matrices.
Using the operations in Definition 2, the weighted decision matrices are constructed in Table 4, Table 5 and Table 6:
Step 3: Calculate the supports.
Utilizing the distance measure defined in Definition 3 and Equation (11), the supports can be obtained. Here, we assume that λ = 2 in the distance measure.
sup ( r i j 1 , r i j 2 ) = sup ( r i j 2 , r i j 1 ) =
[ 0.6647 0.6988 0.7481 0.738
  0.7816 1      1      0.8957
  0.7456 1      0.7481 0.7157
  0.7816 0.5848 1      0.738
  0.7428 0.6988 0.6689 1
  0.7816 1      0.6689 0.8906 ],
sup ( r i j 1 , r i j 3 ) = sup ( r i j 3 , r i j 1 ) =
[ 0.6647 1      0.6689 0.738
  0.964  1      0.7481 0.718
  1      1      0.7852 1
  1      1      0.7852 0.718
  1      0.6988 0.7852 0.8957
  0.7816 0.695  1      0.718 ], and
sup ( r i j 2 , r i j 3 ) = sup ( r i j 3 , r i j 2 ) =
[ 1      0.6988 0.7852 1
  0.7456 1      0.7481 0.738
  0.7456 1      0.6689 0.7157
  0.7816 0.5848 0.7852 0.8957
  0.7428 1      0.7481 0.8957
  1      0.695  0.6689 0.771 ]
(each matrix is 6 × 4: rows correspond to the alternatives A 1 – A 6 and columns to the criteria C 1 – C 4 ).
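Equation (11) is not reproduced in this excerpt, so the sketch below assumes the common construction sup(a, b) = 1 − d(a, b) over the distance of Definition 3, under which identical weighted evaluations (the unit entries above) lend each other full support; the function names and the placement of the 1/3 normalization are illustrative assumptions.

```python
# Hedged sketch of Step 3: distance between two LNNs and a support measure
# assumed to be sup(a, b) = 1 - d(a, b), so closer evaluations support each
# other more.

def f1(label, t=3):
    """Linear LSF f1*: maps a label subscript in [0, 2t] onto [0, 1]."""
    return label / (2 * t)

def distance(a, b, lam=2, t=3):
    """Distance between two LNNs given as (T, I, F) label triples,
    assumed here as [ (1/3) * sum |f(x) - f(y)|^lam ]^(1/lam)."""
    total = sum(abs(f1(x, t) - f1(y, t)) ** lam for x, y in zip(a, b))
    return (total / 3) ** (1 / lam)

def support(a, b, lam=2, t=3):
    """Assumed support measure: sup(a, b) = 1 - d(a, b)."""
    return 1 - distance(a, b, lam, t)
```

With this construction, identical LNNs obtain support 1 and maximally different LNNs obtain support 0, so every support lies in [0, 1], as in the matrices above.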
Step 4: Calculate the weights associated with r i j k 1   ( k 1 = 1 , 2 , , s ) .
The weights can be calculated by Equation (12) as follows:
η i j 1 =
[ 0.317  0.3406 0.3295 0.3208
  0.3394 0.3333 0.3393 0.3367
  0.3394 0.3333 0.3382 0.3402
  0.3385 0.3437 0.3384 0.3252
  0.3395 0.3188 0.3323 0.3357
  0.323  0.3407 0.3414 0.3349 ],
η i j 2 =
[ 0.3415 0.3188 0.3382 0.3396
  0.3238 0.3333 0.3393 0.3381
  0.3212 0.3333 0.3295 0.3197
  0.323  0.3126 0.3384 0.3381
  0.3211 0.3406 0.3295 0.3357
  0.3385 0.3407 0.3172 0.3388 ], and
η i j 3 =
[ 0.3415 0.3406 0.3323 0.3396
  0.3368 0.3333 0.3213 0.3252
  0.3394 0.3333 0.3323 0.3402
  0.3385 0.3437 0.3232 0.3367
  0.3395 0.3406 0.3382 0.3286
  0.3385 0.3186 0.3414 0.3263 ]
(each matrix is 6 × 4: rows correspond to the alternatives and columns to the criteria; for each cell, the three weights sum to 1).
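Equation (12) is not shown in this excerpt, but the η values above are reproduced by the standard power-average weighting in which each DM's weight grows with the DM-weight-weighted sum of supports his or her evaluation receives from the others; the sketch below encodes that assumed form.

```python
# Hedged sketch of Step 4 (assumed form of Equation (12)):
#   eta_k = w_k * (1 + T_k) / sum_l w_l * (1 + T_l),
# where T_k = sum over l != k of w_l * sup(r^k, r^l).

def pa_weights(dm_weights, sup):
    """dm_weights: DM weight vector; sup[k][l]: support between DMs k and l."""
    s = len(dm_weights)
    T = [sum(dm_weights[l] * sup[k][l] for l in range(s) if l != k)
         for k in range(s)]
    num = [dm_weights[k] * (1 + T[k]) for k in range(s)]
    total = sum(num)
    return [v / total for v in num]

# Cell (1, 1) of the matrices in Step 3: sup12 = sup13 = 0.6647, sup23 = 1.
eta = pa_weights([1/3, 1/3, 1/3],
                 [[0, 0.6647, 0.6647],
                  [0.6647, 0, 1],
                  [0.6647, 1, 0]])
```

This reproduces the reported values η 11 1 ≈ 0.317 and η 11 2 = η 11 3 ≈ 0.3415, which supports the assumed form.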
Step 5: Obtain the comprehensive evaluation information.
Using Equation (5) or Equation (9), the integrated decision matrix R = [ r i j ] m × n is calculated below:
(i) When using Equation (5), the results are listed in Table 7.
(ii) When using Equation (9), the results are listed in Table 8.
Step 6: Determine the ideal decision vectors of all alternative decisions.
(i) When using Equation (5), we can determine the ideal alternative vectors among all the alternatives respectively as follows:
A * = ( ⟨ h 6 , h 0 , h 0 ⟩ , ⟨ h 6 , h 0 , h 0 ⟩ , ⟨ h 6 , h 0 , h 0 ⟩ , ⟨ h 6 , h 0 , h 0 ⟩ ) ,
A c * = ( ⟨ h 0 , h 6 , h 6 ⟩ , ⟨ h 0 , h 6 , h 6 ⟩ , ⟨ h 0 , h 6 , h 6 ⟩ , ⟨ h 0 , h 6 , h 6 ⟩ ) ,
A * − = ( ⟨ h 2.0696 , h 5.2356 , h 4.579 ⟩ , ⟨ h 0.5864 , h 5.6051 , h 0 ⟩ , ⟨ h 2.1327 , h 4.9881 , h 4.5335 ⟩ , ⟨ h 0.6358 , h 5.1166 , h 4.7165 ⟩ ) , and
A * + = ( ⟨ h 6 , h 5.0201 , h 0 ⟩ , ⟨ h 6 , h 5.6051 , h 0 ⟩ , ⟨ h 6 , h 4.9881 , h 0 ⟩ , ⟨ h 1.8772 , h 0 , h 0 ⟩ ) .
(ii) When using Equation (9), the results are:
A * = ( ⟨ h 6 , h 0 , h 0 ⟩ , ⟨ h 6 , h 0 , h 0 ⟩ , ⟨ h 6 , h 0 , h 0 ⟩ , ⟨ h 6 , h 0 , h 0 ⟩ ) ,
A c * = ( ⟨ h 0 , h 6 , h 6 ⟩ , ⟨ h 0 , h 6 , h 6 ⟩ , ⟨ h 0 , h 6 , h 6 ⟩ , ⟨ h 0 , h 6 , h 6 ⟩ ) ,
A * − = ( ⟨ h 2.0696 , h 5.3227 , h 4.579 ⟩ , ⟨ h 0 , h 5.6051 , h 2.6119 ⟩ , ⟨ h 2.1327 , h 4.9881 , h 4.5335 ⟩ , ⟨ h 0 , h 5.1166 , h 4.7165 ⟩ ) , and
A * + = ( ⟨ h 4.5387 , h 5.0201 , h 1.8417 ⟩ , ⟨ h 1.7569 , h 5.6051 , h 0 ⟩ , ⟨ h 4.4051 , h 4.9881 , h 1.8174 ⟩ , ⟨ h 1.8772 , h 4.182 , h 0 ⟩ ) .
Step 7: Calculate the separations of each alternative decision vector from the ideal decision vector.
The separations between each alternative and the ideal decision vectors obtained by the LNPWA and LNPWG operators are shown in Table 9 and Table 10, respectively.
Step 8: Calculate the relative closeness of each alternative decision.
The results of relative closeness of each alternative decision are shown in the last column of Table 9 and Table 10.
Step 9: Rank all the alternatives.
According to the relative closeness I i of each alternative decision, we can rank all the alternatives. When using the LNPWA operator, the ranking result is A 3 ≻ A 1 ≻ A 6 ≻ A 2 = A 4 ≻ A 5 , whereas when using the LNPWG operator, the result is A 3 ≻ A 1 ≻ A 2 ≻ A 6 ≻ A 4 ≻ A 5 . There is a subtle distinction between the results obtained by the LNPWA and LNPWG operators, but alternative A 3 remains the best-performing and most competitive candidate.

5.3. Sensitivity Analysis and Discussion

The aim of the sensitivity analysis is to investigate the effects of different semantics and of the distance parameter λ on the final ranking of alternatives. The calculated results are shown in Table 11 and Table 12 and in Figure 1 and Figure 2, respectively.
It can be seen from Table 11 and Figure 1 and Figure 2 that alternative A 3 remained the best choice and A 5 the worst, no matter how the aggregation operator or the semantics changed. When using the LNPWA operator, the ranking result remained A 3 ≻ A 1 ≻ A 6 ≻ A 2 = A 4 ≻ A 5 ; differences in semantics slightly influenced the values of I i but did not change the ranking order. Similarly, when using the LNPWG operator, the ranking result was always A 3 ≻ A 1 ≻ A 2 ≻ A 6 ≻ A 4 ≻ A 5 . The ranking results did vary between the two aggregation operators, which may be caused by their distinct inherent characteristics: the LNPWA operator is based on the arithmetic average, whereas the LNPWG operator is based on the geometric average. Overall, this demonstrates that the ranking results produced by the proposed method are stable to some degree.
Table 12 shows the influence of the distance parameter λ on the final ranking of alternatives when the semantics were fixed as f * = f 1 * . It can be seen that the ranking result remained A 3 ≻ A 1 ≻ A 2 ≻ A 6 ≻ A 4 ≻ A 5 when using the LNPWG operator. However, the results obtained by the LNPWA operator varied among A 3 ≻ A 1 ≻ A 2 ≻ A 6 ≻ A 4 ≻ A 5 , A 3 ≻ A 1 ≻ A 6 ≻ A 2 = A 4 ≻ A 5 and A 1 ≻ A 6 ≻ A 2 = A 4 ≻ A 3 ≻ A 5 . Thus, we can conclude that the choice of aggregation operator and of the parameter λ can influence the evaluation results, and DMs should choose them according to the actual situation and the operators' inherent characteristics.

5.4. Comparison Analysis and Discussion

This subsection conducts a comparative study to validate the practicality and advantages of the proposed method in the LNS contexts, and the results are shown in Table 13. Brief descriptions about the comparative methods are as follows.
(1) Weighted arithmetic and geometric averaging operators of LNNs [28]: the concept of LNNs was first proposed by Fang and Ye [28]. In their study, two aggregation operators including the LNN-weighted arithmetic averaging (LNNWAA) operator and LNN-weighted geometric averaging (LNNWGA) operator are utilized to derive collective evaluations. Then, based on their proposed score function and accuracy function of LNNs, the ranking order of alternatives is obtained.
(2) Bonferroni mean operators of LNNs [30]: the LNNNWBM and LNNNWGBM operators are proposed to aggregate evaluations into a collective LNN for each alternative. Subsequently, the ranking results are derived from expected values.
(3) An extended TOPSIS method [32]: a weighted model based on maximizing deviation is used to determine criteria weights. Subsequently, an extended TOPSIS method with LNNs is proposed to rank alternatives.
As shown in Table 13, different methods produced different ranking results, but the optimal candidate remained A 3 , except for the result obtained by the Bonferroni mean operators of LNNs [30]. The main reasons for these differences may be as follows: (1) The operations for LNNs in this study and in the comparative methods are remarkably different. The operations in the existing methods [28,30,32] consider only the linguistic variables' labels, which may cause information loss and distortion. (2) Different aggregation operators and ranking rules might also cause different ranking results. Specifically, the LNNWAA and LNNWGA operators defined in [28] are based on the arithmetic and geometric means, respectively, whereas the Bonferroni mean operators of LNNs [30] incorporate the interaction among inputs. Unlike these existing aggregation tools, the proposed PA operators for LNNs allow the information provided by different DMs to support and reinforce each other, and they are nonlinear weighted average operators.
From above discussions, the unique features of the proposal and its main advantages over others can be simply summarized below.
(1) The comparative methods [28,30,32] deal with LNNs considering only the labels of linguistic variables while ignoring the differences among various semantics. It has been contended that the same linguistic variable may possess different meanings for different people, and even for the same person in different situations [55]. Therefore, directly using the labels of linguistic variables may lead to information loss during aggregation. To address this challenge, this study redefines the operations for LNNs based on LSFs and the Archimedean t-norm and t-conorm, which increases the flexibility and accuracy of linguistic information transformation.
(2) The extended TOPSIS method [32] considered only the relatively positive and negative ideal solutions when determining the correlation coefficient for each alternative. By contrast, this study takes both the relative and the absolute positive and negative ideal solutions into account. Therefore, the ranking result obtained by the proposed method may be somewhat more comprehensive than that of the existing method [32].
(3) For information fusion, the existing methods [28,30,32] fail to consider the support degree among different DMs during the aggregation process. Although different aggregation operators cater to different practical decision situations, the proposed PA operators within LNN contexts are more suitable for the university HRM evaluation problem in this study.

6. Conclusions and Future Work

Talent introduction plays an important role in the long-term development of a university and is closely related to the university's discipline development and comprehensive strength. Therefore, proper HRM evaluation that uses group decision-making methods efficiently is needed in order to make full use of human resources. This study treated the HRM evaluation procedure as a complex MCGDM problem within the LNN environment. By merging the PA operator with LNSs, we developed two aggregation operators (LNPWA and LNPWG) for information fusion. Then, we modified the classical TOPSIS method to determine the ranking order of alternatives. The strengths of the proposed method have been discussed via comparative analysis.
Nevertheless, this study also has several limitations which suggest avenues for future research. First, the information fusion process adds computational complexity because the proposed LNPWA and LNPWG operators are both nonlinear weighted average operators, in which the weight associated with each DM must be calculated from the input arguments. Fortunately, this computational burden can be remarkably eased with the assistance of programming software. Second, with the rapid development of information technology, it is also possible to extend the current results to other management systems in network-based environments [56,57].
By analyzing the achieved results, the practical implications of our research may be summarized in two aspects. On the one hand, this study proposes a novel linguistic neutrosophic MCGDM method, which contributes to expanding the theoretical depth of university HRM. It may offer comprehensive support for decision-making in modern universities' talent introduction, and the developed method can also be extended to group decision-making problems in other fields, such as tourism. On the other hand, this study further explores the application of linguistic MCGDM methods in HRM. The obtained knowledge can be very helpful for improving the performance of university human resources accordingly.

Author Contributions

R.-x.L., J.-q.W. and Z.-b.J. conceived and worked together to achieve this work; and R.-x.L. and Z.-b.J. wrote the paper.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities of Central South University (No. 502211710), which also funded the APC.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their great help on this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Linguistic Scale Function

Based on a review of the literature, the following functions can serve as LSFs.
(1) The LSF $f_1$ is based on the subscript function $\mathrm{sub}(h_\tau) = \tau$:
\[ f_1(h_x) = \theta_x = \frac{x}{2t} \quad (x = 0, 1, \ldots, 2t), \qquad \theta_x \in [0, 1]. \]
The above function partitions the scale uniformly. It is commonly used for its simple form and easy calculation, but it lacks a solid theoretical basis [58].
(2) The LSF f 2 is based on the exponential scale:
\[ f_2(h_y) = \theta_y = \begin{cases} \dfrac{\alpha^{t} - \alpha^{t-y}}{2\alpha^{t} - 2}, & y = 0, 1, \ldots, t, \\[4pt] \dfrac{\alpha^{t} + \alpha^{y-t} - 2}{2\alpha^{t} - 2}, & y = t+1, t+2, \ldots, 2t. \end{cases} \]
Here, the absolute deviation between any two adjacent linguistic labels decreases with the increase of y in the interval [ 0 , t ] , and increases with the increase of y in the interval [ t + 1 , 2 t ] .
(3) The LSF f 3 is based on prospect theory:
\[ f_3(h_z) = \theta_z = \begin{cases} \dfrac{t^{\beta} - (t-z)^{\beta}}{2t^{\beta}}, & z = 0, 1, \ldots, t, \\[4pt] \dfrac{t^{\gamma} + (z-t)^{\gamma}}{2t^{\gamma}}, & z = t+1, t+2, \ldots, 2t. \end{cases} \]
Here, $\beta, \gamma \in [0, 1]$, and when $\beta = \gamma = 1$, the LSF $f_3$ reduces to $f_1$. Moreover, the absolute deviation between any two adjacent linguistic labels increases with the increase of $z$ in the interval $[0, t]$, and decreases with the increase of $z$ in the interval $[t+1, 2t]$.
Each of the above LSFs $f_1$, $f_2$, and $f_3$ can be expanded to a strictly monotonically increasing and continuous function $f^{*}: \bar{S} \to R^{+}$ ($R^{+} = \{ r \mid r \ge 0, r \in R \}$) satisfying $f^{*}(s_{\tau}) = \theta_{\tau}$. Therefore, the inverse function of $f^{*}$, denoted by $f^{*-1}$, exists due to this monotonicity.
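The three LSFs can be written as plain functions of the label subscript. The sketch below uses $t = 3$ for the seven-term set of the case study; the parameter values $\alpha = 1.4$ and $\beta = \gamma = 0.8$ are illustrative assumptions, since the paper leaves them open here.

```python
# The three LSFs from Appendix A as functions of the label subscript,
# with t = 3 for the seven-term set H = {h_0, ..., h_6}.

def f1(x, t=3):
    """Uniform LSF: theta_x = x / (2t)."""
    return x / (2 * t)

def f2(y, t=3, alpha=1.4):
    """Exponential-scale LSF; alpha > 1 is an assumed illustrative value."""
    if y <= t:
        return (alpha ** t - alpha ** (t - y)) / (2 * alpha ** t - 2)
    return (alpha ** t + alpha ** (y - t) - 2) / (2 * alpha ** t - 2)

def f3(z, t=3, beta=0.8, gamma=0.8):
    """Prospect-theory LSF; beta, gamma in [0, 1] are assumed values."""
    if z <= t:
        return (t ** beta - (t - z) ** beta) / (2 * t ** beta)
    return (t ** gamma + (z - t) ** gamma) / (2 * t ** gamma)
```

All three map $h_0 \mapsto 0$, $h_t \mapsto 1/2$ and $h_{2t} \mapsto 1$, and $f_3$ with $\beta = \gamma = 1$ coincides with $f_1$, as stated above.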

Appendix B. The Archimedean T-norm and T-conorm

According to Reference [59], a t-norm $T(x, y)$ is called an Archimedean t-norm if it is continuous and $T(x, x) < x$ for all $x \in (0, 1)$; it is called a strict Archimedean t-norm if it is also strictly increasing in each variable for $x, y \in (0, 1)$. Similarly, a t-conorm $S(x, y)$ is called an Archimedean t-conorm if it is continuous and $S(x, x) > x$ for all $x \in (0, 1)$, and a strict Archimedean t-conorm if it is also strictly increasing in each variable for $x, y \in (0, 1)$.
In this study, we apply one well-known Archimedean t-conorm and t-norm pair [60], namely $S(x, y) = (x + y)/(1 + xy)$ and $T(x, y) = xy/[1 + (1 - x)(1 - y)]$ (the Einstein sum and product), respectively.
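As a concrete check of these definitions, the quoted pair can be written out directly; both operations map $[0, 1] \times [0, 1]$ into $[0, 1]$.

```python
# The Archimedean pair quoted above (the Einstein sum and product).

def S(x, y):
    """Einstein t-conorm: S(x, y) = (x + y) / (1 + xy)."""
    return (x + y) / (1 + x * y)

def T(x, y):
    """Einstein t-norm: T(x, y) = xy / (1 + (1 - x)(1 - y))."""
    return (x * y) / (1 + (1 - x) * (1 - y))
```

Evaluating at $x = 0.5$ gives $T(0.5, 0.5) = 0.2 < 0.5$ and $S(0.5, 0.5) = 0.8 > 0.5$, consistent with the Archimedean conditions.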

Appendix C. The Proof of Theorem 2

Proof. 
It is clear that properties (1)–(3) in Theorem 2 hold. The proof of property (4) in Theorem 2 is shown below.
First, the distances d ( a ˜ , c ˜ ) , d ( a ˜ , b ˜ ) and d ( b ˜ , c ˜ ) can be easily determined respectively as follows:
\[ d(\tilde a, \tilde c) = \left[ \frac{1}{3}\left( \left|f^{*}(h_{T_{\tilde a}}) - f^{*}(h_{T_{\tilde c}})\right|^{\lambda} + \left|f^{*}(h_{I_{\tilde a}}) - f^{*}(h_{I_{\tilde c}})\right|^{\lambda} + \left|f^{*}(h_{F_{\tilde a}}) - f^{*}(h_{F_{\tilde c}})\right|^{\lambda} \right) \right]^{1/\lambda}, \]
\[ d(\tilde a, \tilde b) = \left[ \frac{1}{3}\left( \left|f^{*}(h_{T_{\tilde a}}) - f^{*}(h_{T_{\tilde b}})\right|^{\lambda} + \left|f^{*}(h_{I_{\tilde a}}) - f^{*}(h_{I_{\tilde b}})\right|^{\lambda} + \left|f^{*}(h_{F_{\tilde a}}) - f^{*}(h_{F_{\tilde b}})\right|^{\lambda} \right) \right]^{1/\lambda}, \text{ and} \]
\[ d(\tilde b, \tilde c) = \left[ \frac{1}{3}\left( \left|f^{*}(h_{T_{\tilde b}}) - f^{*}(h_{T_{\tilde c}})\right|^{\lambda} + \left|f^{*}(h_{I_{\tilde b}}) - f^{*}(h_{I_{\tilde c}})\right|^{\lambda} + \left|f^{*}(h_{F_{\tilde b}}) - f^{*}(h_{F_{\tilde c}})\right|^{\lambda} \right) \right]^{1/\lambda}. \]
Since $|a + b| \le |a| + |b|$, we have
\[ \left|f^{*}(h_{T_{\tilde a}}) - f^{*}(h_{T_{\tilde c}})\right| = \left|f^{*}(h_{T_{\tilde a}}) - f^{*}(h_{T_{\tilde b}}) + f^{*}(h_{T_{\tilde b}}) - f^{*}(h_{T_{\tilde c}})\right| \le \left|f^{*}(h_{T_{\tilde a}}) - f^{*}(h_{T_{\tilde b}})\right| + \left|f^{*}(h_{T_{\tilde b}}) - f^{*}(h_{T_{\tilde c}})\right|. \]
Similarly, $|f^{*}(h_{I_{\tilde a}}) - f^{*}(h_{I_{\tilde c}})| \le |f^{*}(h_{I_{\tilde a}}) - f^{*}(h_{I_{\tilde b}})| + |f^{*}(h_{I_{\tilde b}}) - f^{*}(h_{I_{\tilde c}})|$ and $|f^{*}(h_{F_{\tilde a}}) - f^{*}(h_{F_{\tilde c}})| \le |f^{*}(h_{F_{\tilde a}}) - f^{*}(h_{F_{\tilde b}})| + |f^{*}(h_{F_{\tilde b}}) - f^{*}(h_{F_{\tilde c}})|$.
Then, by the monotonicity of the power mean and the Minkowski inequality,
\[ \left[ \frac{1}{3}\left( |f^{*}(h_{T_{\tilde a}}) - f^{*}(h_{T_{\tilde c}})|^{\lambda} + |f^{*}(h_{I_{\tilde a}}) - f^{*}(h_{I_{\tilde c}})|^{\lambda} + |f^{*}(h_{F_{\tilde a}}) - f^{*}(h_{F_{\tilde c}})|^{\lambda} \right) \right]^{1/\lambda} \le \left[ \frac{1}{3}\left( |f^{*}(h_{T_{\tilde a}}) - f^{*}(h_{T_{\tilde b}})|^{\lambda} + |f^{*}(h_{I_{\tilde a}}) - f^{*}(h_{I_{\tilde b}})|^{\lambda} + |f^{*}(h_{F_{\tilde a}}) - f^{*}(h_{F_{\tilde b}})|^{\lambda} \right) \right]^{1/\lambda} + \left[ \frac{1}{3}\left( |f^{*}(h_{T_{\tilde b}}) - f^{*}(h_{T_{\tilde c}})|^{\lambda} + |f^{*}(h_{I_{\tilde b}}) - f^{*}(h_{I_{\tilde c}})|^{\lambda} + |f^{*}(h_{F_{\tilde b}}) - f^{*}(h_{F_{\tilde c}})|^{\lambda} \right) \right]^{1/\lambda}, \]
that is, $d(\tilde a, \tilde c) \le d(\tilde a, \tilde b) + d(\tilde b, \tilde c)$.
Thus, property (4) in Theorem 2 holds. □
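Property (4) can also be spot-checked numerically; the sketch below assumes the linear LSF $f_1^*$ with $t = 3$ and $\lambda = 2$, and verifies the triangle inequality over all triples drawn from a small pool of LNNs.

```python
# Numerical spot-check of the triangle inequality for the distance of
# Definition 3, under the linear LSF f1* with t = 3 and lambda = 2.
import itertools

def d(a, b, lam=2, t=3):
    f = lambda x: x / (2 * t)
    s = sum(abs(f(u) - f(v)) ** lam for u, v in zip(a, b))
    return (s / 3) ** (1 / lam)

pool = [(5, 3, 2), (6, 0, 0), (0, 6, 6), (5, 3, 0), (6, 3, 2)]
ok = all(d(a, c) <= d(a, b) + d(b, c) + 1e-12
         for a, b, c in itertools.product(pool, repeat=3))
```

Of course, such a check only illustrates the property; the proof above establishes it in general.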

Appendix D. The Proof of Theorem 3

For ease of computation, we assume that $\zeta_j = w_j (1 + G(\tilde a_j)) \big/ \sum_{j=1}^{n} w_j (1 + G(\tilde a_j))$. In the following steps, Equation (5) is proven by mathematical induction on $n$.
(1) Utilizing the operations for LNNs defined in Definition 2, when n = 2 , we have
\[
\begin{aligned}
\mathrm{LNPWA}(\tilde a_1, \tilde a_2) = \zeta_1 \tilde a_1 \oplus \zeta_2 \tilde a_2 = \Big\langle\, & f^{*-1}\!\left( \frac{(1+f^{*}(h_{T_{\tilde a_1}}))^{\zeta_1}(1+f^{*}(h_{T_{\tilde a_2}}))^{\zeta_2} - (1-f^{*}(h_{T_{\tilde a_1}}))^{\zeta_1}(1-f^{*}(h_{T_{\tilde a_2}}))^{\zeta_2}}{(1+f^{*}(h_{T_{\tilde a_1}}))^{\zeta_1}(1+f^{*}(h_{T_{\tilde a_2}}))^{\zeta_2} + (1-f^{*}(h_{T_{\tilde a_1}}))^{\zeta_1}(1-f^{*}(h_{T_{\tilde a_2}}))^{\zeta_2}} \right), \\
& f^{*-1}\!\left( \frac{2(f^{*}(h_{I_{\tilde a_1}}))^{\zeta_1}(f^{*}(h_{I_{\tilde a_2}}))^{\zeta_2}}{(2-f^{*}(h_{I_{\tilde a_1}}))^{\zeta_1}(2-f^{*}(h_{I_{\tilde a_2}}))^{\zeta_2} + (f^{*}(h_{I_{\tilde a_1}}))^{\zeta_1}(f^{*}(h_{I_{\tilde a_2}}))^{\zeta_2}} \right), \\
& f^{*-1}\!\left( \frac{2(f^{*}(h_{F_{\tilde a_1}}))^{\zeta_1}(f^{*}(h_{F_{\tilde a_2}}))^{\zeta_2}}{(2-f^{*}(h_{F_{\tilde a_1}}))^{\zeta_1}(2-f^{*}(h_{F_{\tilde a_2}}))^{\zeta_2} + (f^{*}(h_{F_{\tilde a_1}}))^{\zeta_1}(f^{*}(h_{F_{\tilde a_2}}))^{\zeta_2}} \right) \Big\rangle.
\end{aligned}
\]
That is
\[
\mathrm{LNPWA}(\tilde a_1, \tilde a_2) = \zeta_1 \tilde a_1 \oplus \zeta_2 \tilde a_2 = \left\langle f^{*-1}\!\left( \frac{\prod_{j=1}^{2}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} - \prod_{j=1}^{2}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{2}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{2}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{2}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{2}(2-f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{2}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{2}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{2}(2-f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{2}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}} \right) \right\rangle.
\]
Thus, when n = 2 , Equation (5) is true.
(2) Suppose that when n = k , Equation (5) is true. That is,
\[
\mathrm{LNPWA}(\tilde a_1, \tilde a_2, \ldots, \tilde a_k) = \left\langle f^{*-1}\!\left( \frac{\prod_{j=1}^{k}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} - \prod_{j=1}^{k}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{k}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k}(2-f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{k}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k}(2-f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}} \right) \right\rangle.
\]
Then, when n = k + 1 , the following result can be obtained:
\[
\begin{aligned}
\mathrm{LNPWA}(\tilde a_1, \tilde a_2, \ldots, \tilde a_{k+1}) &= \mathrm{LNPWA}(\tilde a_1, \tilde a_2, \ldots, \tilde a_k) \oplus \zeta_{k+1}\tilde a_{k+1} \\
&= \left\langle f^{*-1}\!\left( \frac{\prod_{j=1}^{k}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} - \prod_{j=1}^{k}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{k}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k}(2-f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{k}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k}(2-f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}} \right) \right\rangle \\
&\quad \oplus \left\langle f^{*-1}\!\left( \frac{(1+f^{*}(h_{T_{\tilde a_{k+1}}}))^{\zeta_{k+1}} - (1-f^{*}(h_{T_{\tilde a_{k+1}}}))^{\zeta_{k+1}}}{(1+f^{*}(h_{T_{\tilde a_{k+1}}}))^{\zeta_{k+1}} + (1-f^{*}(h_{T_{\tilde a_{k+1}}}))^{\zeta_{k+1}}} \right),\; f^{*-1}\!\left( \frac{2(f^{*}(h_{I_{\tilde a_{k+1}}}))^{\zeta_{k+1}}}{(2-f^{*}(h_{I_{\tilde a_{k+1}}}))^{\zeta_{k+1}} + (f^{*}(h_{I_{\tilde a_{k+1}}}))^{\zeta_{k+1}}} \right),\; f^{*-1}\!\left( \frac{2(f^{*}(h_{F_{\tilde a_{k+1}}}))^{\zeta_{k+1}}}{(2-f^{*}(h_{F_{\tilde a_{k+1}}}))^{\zeta_{k+1}} + (f^{*}(h_{F_{\tilde a_{k+1}}}))^{\zeta_{k+1}}} \right) \right\rangle \\
&= \left\langle f^{*-1}\!\left( \frac{\prod_{j=1}^{k+1}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} - \prod_{j=1}^{k+1}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k+1}(1+f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k+1}(1-f^{*}(h_{T_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{k+1}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k+1}(2-f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k+1}(f^{*}(h_{I_{\tilde a_j}}))^{\zeta_j}} \right),\; f^{*-1}\!\left( \frac{2\prod_{j=1}^{k+1}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}}{\prod_{j=1}^{k+1}(2-f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j} + \prod_{j=1}^{k+1}(f^{*}(h_{F_{\tilde a_j}}))^{\zeta_j}} \right) \right\rangle.
\end{aligned}
\]
That is, Equation (5) is true when n = k + 1. Therefore, by induction, Equation (5) holds for all n. □
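The closed form just proven can be transcribed directly into code. The sketch below assumes the linear LSF $f_1^*$ with $t = 3$ and that the weights $\zeta_j$ are already normalized to sum to one.

```python
# Minimal sketch of Equation (5) (the LNPWA operator) under the linear LSF
# f1* with t = 3; lnns is a list of (T, I, F) label triples and zeta the
# normalized power-average weights.
from math import prod

T_SET = 3  # t for the seven-term set H = {h_0, ..., h_6}

def f(x):
    return x / (2 * T_SET)       # f1*(h_x) = x / (2t)

def f_inv(v):
    return 2 * T_SET * v         # inverse of f1*

def lnpwa(lnns, zeta):
    t_vals = [a[0] for a in lnns]
    plus = prod((1 + f(x)) ** z for x, z in zip(t_vals, zeta))
    minus = prod((1 - f(x)) ** z for x, z in zip(t_vals, zeta))
    truth = f_inv((plus - minus) / (plus + minus))

    def aggregate_neg(idx):
        # Shared form of the indeterminacy (idx=1) and falsity (idx=2) parts.
        vals = [a[idx] for a in lnns]
        top = prod(f(x) ** z for x, z in zip(vals, zeta))
        bot = prod((2 - f(x)) ** z for x, z in zip(vals, zeta))
        return f_inv(2 * top / (bot + top))

    return (truth, aggregate_neg(1), aggregate_neg(2))
```

As a sanity check, aggregating ⟨h 5 , h 3 , h 2 ⟩ with itself under equal weights returns the same triple, reflecting the idempotency of the operator.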

References

1. Abdullah, L.; Zulkifli, N. Integration of fuzzy AHP and interval type-2 fuzzy DEMATEL: An application to human resource management. Expert Syst. Appl. 2015, 42, 4397–4409.
2. Filho, C.F.F.C.; Rocha, D.A.R.; Costa, M.G.F. Using constraint satisfaction problem approach to solve human resource allocation problems in cooperative health services. Expert Syst. Appl. 2012, 39, 385–394.
3. Marco-Lajara, B.; Úbeda-García, M. Human resource management approaches in Spanish hotels: An introductory analysis. Int. J. Hosp. Manag. 2013, 35, 339–347.
4. Bohlouli, M.; Mittas, N.; Kakarontzas, G.; Theodosiou, T.; Angelis, L.; Fathi, M. Competence assessment as an expert system for human resource management: A mathematical approach. Expert Syst. Appl. 2017, 70, 83–102.
5. Zhang, X.; Wang, J.; Zhang, H.; Hu, J. A heterogeneous linguistic MAGDM framework to classroom teaching quality evaluation. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 4929–4956.
6. Smarandache, F. A Unifying Field in Logics: Neutrosophic Logic: Neutrosophy, Neutrosophic Set, Neutrosophic Probability; American Research Press: Rehoboth, DE, USA, 1999; pp. 1–141.
7. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
8. Wu, X.H.; Wang, J.Q.; Peng, J.J.; Qian, J. A novel group decision-making method with probability hesitant interval neutrosophic set and its application in middle level manager's selection. Int. J. Uncertain. Quantif. 2018, 8, 291–319.
9. Ji, P.; Wang, J.Q.; Zhang, H.Y. Frank prioritized Bonferroni mean operator with single-valued neutrosophic sets and its application in selecting third-party logistics providers. Neural Comput. Appl. 2016, 30, 799–823.
10. Liang, R.; Wang, J.; Zhang, H. Evaluation of e-commerce websites: An integrated approach under a single-valued trapezoidal neutrosophic environment. Knowl.-Based Syst. 2017, 135, 44–59.
11. Liang, R.X.; Wang, J.Q.; Zhang, H.Y. A multi-criteria decision-making method based on single-valued trapezoidal neutrosophic preference relations with complete weight information. Neural Comput. Appl. 2017.
12. Liang, R.X.; Wang, J.Q.; Li, L. Multi-criteria group decision making method based on interdependent inputs of single valued trapezoidal neutrosophic information. Neural Comput. Appl. 2018, 30, 241–260.
13. Tian, Z.P.; Wang, J.; Wang, J.Q.; Zhang, H.Y. Simplified neutrosophic linguistic multi-criteria group decision-making approach to green product development. Group Decis. Negot. 2017, 26, 597–627.
14. Ji, P.; Zhang, H.Y.; Wang, J.Q. Selecting an outsourcing provider based on the combined MABAC–ELECTRE method using single-valued neutrosophic linguistic sets. Comput. Ind. Eng. 2018, 120, 429–441.
15. Karaaslan, F. Correlation coefficients of single-valued neutrosophic refined soft sets and their applications in clustering analysis. Neural Comput. Appl. 2017, 28, 2781–2793.
16. Ye, J. Single-valued neutrosophic clustering algorithms based on similarity measures. J. Classif. 2017, 34, 148–162.
17. Li, Y.Y.; Wang, J.Q.; Wang, T.L. A linguistic neutrosophic multi-criteria group decision-making approach with EDAS method. Arab. J. Sci. Eng. 2018.
18. Chen, Z.S.; Chin, K.S.; Li, Y.L.; Yang, Y. Proportional hesitant fuzzy linguistic term set for multiple criteria group decision making. Inf. Sci. 2016, 357, 61–87.
19. Rodríguez, R.M.; Martínez, L.; Herrera, F. Hesitant fuzzy linguistic term sets for decision making. IEEE Trans. Fuzzy Syst. 2012, 20, 109–119.
20. Wang, H. Extended hesitant fuzzy linguistic term sets and their aggregation in group decision making. Int. J. Comput. Intell. Syst. 2014, 8, 14–33.
21. Wang, X.K.; Peng, H.G.; Wang, J.Q. Hesitant linguistic intuitionistic fuzzy sets and their application in multi-criteria decision-making problems. Int. J. Uncertain. Quantif. 2018, 8, 321–341.
22. Tian, Z.P.; Wang, J.Q.; Zhang, H.Y.; Wang, T.L. Signed distance-based consensus in multi-criteria group decision-making with multi-granular hesitant unbalanced linguistic information. Comput. Ind. Eng. 2018, 124, 125–138.
23. Zhang, H.M. Linguistic intuitionistic fuzzy sets and application in MAGDM. J. Appl. Math. 2014, 2014.
24. Chen, Z.C.; Liu, P.H.; Pei, Z. An approach to multiple attribute group decision making based on linguistic intuitionistic fuzzy numbers. Int. J. Comput. Intell. Syst. 2015, 8, 747–760.
25. Peng, H.G.; Wang, J.Q. A multicriteria group decision-making method based on the normal cloud model with Zadeh's Z-numbers. IEEE Trans. Fuzzy Syst. 2018.
26. Peng, H.G.; Zhang, H.Y.; Wang, J.Q. Cloud decision support model for selecting hotels on TripAdvisor.com with probabilistic linguistic information. Int. J. Hosp. Manag. 2018, 68, 124–138.
27. Luo, S.Z.; Zhang, H.Y.; Wang, J.Q.; Li, L. Group decision-making approach for evaluating the sustainability of constructed wetlands with probabilistic linguistic preference relations. J. Oper. Res. Soc. 2018.
28. Fang, Z.B.; Ye, J. Multiple attribute group decision-making method based on linguistic neutrosophic numbers. Symmetry 2017, 9, 111.
29. Li, Y.Y.; Zhang, H.Y.; Wang, J.Q. Linguistic neutrosophic sets and their application in multicriteria decision-making problems. Int. J. Uncertain. Quantif. 2017, 7, 135–154.
30. Fan, C.X.; Ye, J.; Hu, K.L.; Fan, E. Bonferroni mean operators of linguistic neutrosophic numbers and their multiple attribute group decision-making methods. Information 2017, 8, 107.
31. Shi, L.L.; Ye, J. Cosine measures of linguistic neutrosophic numbers and their application in multiple attribute group decision-making. Information 2017, 8, 117.
32. Liang, W.Z.; Zhao, G.Y.; Wu, H. Evaluating investment risks of metallic mines using an extended TOPSIS method with linguistic neutrosophic numbers. Symmetry 2017, 9, 149.
33. Herrera, F.; Martínez, L. A 2-tuple fuzzy linguistic representation model for computing with words. IEEE Trans. Fuzzy Syst. 2000, 8, 746–752.
34. Delgado, M.; Verdegay, J.L.; Vila, M.A. Linguistic decision-making models. Int. J. Intell. Syst. 1992, 7, 479–492.
35. Hu, J.; Zhang, X.; Yang, Y.; Liu, Y.; Chen, X. New doctors ranking system based on VIKOR method. Int. Trans. Oper. Res. 2018.
36. Li, D.Y.; Meng, H.J.; Shi, X.M. Membership clouds and membership cloud generators. Comput. Res. Dev. 1995, 32, 16–21.
37. Bordogna, G.; Fedrizzi, M.; Pasi, G. A linguistic modeling of consensus in group decision making based on OWA operators. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 1997, 27, 126–133.
38. Doukas, H.; Karakosta, C.; Psarras, J. Computing with words to assess the sustainability of renewable energy options. Expert Syst. Appl. 2010, 37, 5491–5497.
39. Wang, J.Q.; Wu, J.T.; Wang, J.; Zhang, H.Y.; Chen, X.H. Interval-valued hesitant fuzzy linguistic sets and their applications in multi-criteria decision-making problems. Inf. Sci. 2014, 288, 55–72.
40. Yager, R.R. The power average operator. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2001, 31, 724–731.
41. Jiang, W.; Wei, B.; Zhan, J.; Xie, C.; Zhou, D. A visibility graph power averaging aggregation operator: A methodology based on network analysis. Comput. Ind. Eng. 2016, 101, 260–268.
42. Gong, Z.; Xu, X.; Zhang, H.; Aytun Ozturk, U.; Herrera-Viedma, E.; Xu, C. The consensus models with interval preference opinions and their economic interpretation. Omega 2015, 55, 81–90.
43. Liu, P.D.; Qin, X.Y. Power average operators of linguistic intuitionistic fuzzy numbers and their application to multiple-attribute decision making. J. Intell. Fuzzy Syst. 2017, 32, 1029–1043.
44. Yager, R.R. Applications and extensions of OWA aggregations. Int. J. Man-Mach. Stud. 1992, 37, 103–132.
45. Yager, R.R. On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190.
46. Yager, R.R. Families of OWA operators. Fuzzy Sets Syst. 1993, 59, 125–148.
47. Huang, J.J.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications; Chapman and Hall/CRC: Boca Raton, FL, USA, 2011.
48. Baykasoğlu, A.; Gölcük, İ. Development of an interval type-2 fuzzy sets based hierarchical MADM model by combining DEMATEL and TOPSIS. Expert Syst. Appl. 2017, 70, 37–51.
49. Joshi, D.; Kumar, S. Interval-valued intuitionistic hesitant fuzzy Choquet integral based TOPSIS method for multi-criteria group decision making. Eur. J. Oper. Res. 2016, 248, 183–191.
50. Mehrdad, A.M.A.K.; Aghdas, B.; Alireza, A.; Mahdi, G.; Hamed, K. Introducing a procedure for developing a novel centrality measure (Sociability Centrality) for social networks using TOPSIS method and genetic algorithm. Comput. Hum. Behav. 2016, 56, 295–305.
51. Afsordegan, A.; Sánchez, M.; Agell, N.; Zahedi, S.; Cremades, L.V. Decision making under uncertainty using a qualitative TOPSIS method for selecting sustainable energy alternatives. Int. J. Environ. Sci. Technol. 2016, 13, 1419–1432.
52. Xu, Z.S. A method based on linguistic aggregation operators for group decision making with linguistic preference relations. Inf. Sci. 2004, 166, 19–30.
53. Chou, Y.C.; Sun, C.C.; Yen, H.Y. Evaluating the criteria for human resource for science and technology (HRST) based on an integrated fuzzy AHP and fuzzy DEMATEL approach. Appl. Soft Comput. 2012, 12, 64–71.
54. Yu, D.; Wu, Y.; Lu, T. Interval-valued intuitionistic fuzzy prioritized operators and their application in group decision making. Knowl.-Based Syst. 2012, 30, 57–66.
55. Yu, S.M.; Wang, J.; Wang, J.Q.; Li, L. A multi-criteria decision-making model for hotel selection with linguistic distribution assessments. Appl. Soft Comput. 2018, 67, 741–755.
56. Qiu, J.; Wang, T.; Yin, S.; Gao, H. Data-based optimal control for networked double-layer industrial processes. IEEE Trans. Ind. Electron. 2017, 64, 4179–4186.
57. Qiu, J.; Wei, Y.; Karimi, H.R.; Gao, H. Reliable control of discrete-time piecewise-affine time-delay systems via output feedback. IEEE Trans. Reliab. 2017, 67, 79–91.
58. Liu, A.Y.; Liu, F.J. Research on method of analyzing the posterior weight of experts based on new evaluation scale of linguistic information. Chin. J. Manag. Sci. 2011, 19, 149–155.
59. Klement, E.P.; Mesiar, R. Logical, Algebraic, Analytic, and Probabilistic Aspects of Triangular Norms; Elsevier: New York, NY, USA, 2005.
60. Beliakov, G.; Pradera, A.; Calvo, T. Aggregation Functions: A Guide for Practitioners; Springer: Berlin, Germany, 2007; Volume 12, pp. 139–141.
Figure 1. Ranking results by the LNPWA operator.
Figure 2. Ranking results by the LNPWG operator.
Table 1. Evaluation information of D 1 .
D1 | C1 | C2 | C3 | C4
A1 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h2⟩
A2 | ⟨h5, h3, h1⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h0, h3, h0⟩
A3 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩
A4 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩
A5 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h2⟩ | ⟨h5, h3, h2⟩ | ⟨h0, h3, h2⟩
A6 | ⟨h6, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h0, h3, h2⟩
Table 2. Evaluation information of D 2 .
D2 | C1 | C2 | C3 | C4
A1 | ⟨h6, h3, h0⟩ | ⟨h5, h3, h2⟩ | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩
A2 | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩
A3 | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h2⟩ | ⟨h5, h0, h0⟩
A4 | ⟨h6, h3, h2⟩ | ⟨h6, h3, h2⟩ | ⟨h5, h3, h2⟩ | ⟨h5, h3, h2⟩
A5 | ⟨h5, h5, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h6, h3, h0⟩ | ⟨h0, h3, h2⟩
A6 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h6, h3, h2⟩ | ⟨h5, h3, h1⟩
Table 3. Evaluation information of D 3 .
D3 | C1 | C2 | C3 | C4
A1 | ⟨h6, h3, h0⟩ | ⟨h5, h3, h0⟩ | ⟨h6, h3, h2⟩ | ⟨h5, h3, h0⟩
A2 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h2⟩ | ⟨h5, h3, h2⟩
A3 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h6, h3, h0⟩ | ⟨h5, h3, h0⟩
A4 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h6, h3, h2⟩ | ⟨h0, h3, h2⟩
A5 | ⟨h5, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h6, h3, h2⟩ | ⟨h5, h3, h2⟩
A6 | ⟨h5, h3, h2⟩ | ⟨h0, h3, h2⟩ | ⟨h5, h3, h0⟩ | ⟨h5, h3, h0⟩
Table 4. Weighted evaluation information of D 1 .
D1 | C1 | C2 | C3 | C4
A1 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h0⟩ | ⟨h1.8772, h5.1166, h4.7165⟩
A2 | ⟨h2.0696, h5.0201, h3.9304⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h0⟩ | ⟨h0, h5.1166, h0⟩
A3 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h0⟩ | ⟨h0, h5.1166, h0⟩
A4 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h4.5335⟩ | ⟨h0, h5.1166, h0⟩
A5 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h5.4224⟩ | ⟨h2.1327, h4.9881, h4.5335⟩ | ⟨h0, h5.1166, h4.7165⟩
A6 | ⟨h6, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h0⟩ | ⟨h0, h5.1166, h4.7165⟩
Table 5. Weighted evaluation information of D 2 .
D2 | C1 | C2 | C3 | C4
A1 | ⟨h6, h5.0201, h0⟩ | ⟨h0.8573, h5.6051, h5.4224⟩ | ⟨h2.1327, h4.9881, h4.5335⟩ | ⟨h1.8772, h5.1166, h0⟩
A2 | ⟨h2.0696, h5.0201, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h0⟩ | ⟨h1.8772, h5.1166, h0⟩
A3 | ⟨h2.0696, h5.0201, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h4.5335⟩ | ⟨h1.8772, h0, h0⟩
A4 | ⟨h6, h5.0201, h4.579⟩ | ⟨h6, h5.6051, h5.4224⟩ | ⟨h2.1327, h4.9881, h4.5335⟩ | ⟨h1.8772, h5.1166, h4.7165⟩
A5 | ⟨h2.0696, h5.6974, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h0⟩ | ⟨h0, h5.1166, h4.7165⟩
A6 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h4.5335⟩ | ⟨h1.8772, h5.1166, h4.1228⟩
Table 6. Weighted evaluation information of D 3 .
D3 | C1 | C2 | C3 | C4
A1 | ⟨h6, h5.0201, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h4.5335⟩ | ⟨h1.8772, h5.1166, h0⟩
A2 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h4.5335⟩ | ⟨h1.8772, h5.1166, h4.7165⟩
A3 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h0⟩ | ⟨h1.8772, h5.1166, h0⟩
A4 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h4.5335⟩ | ⟨h0, h5.1166, h4.7165⟩
A5 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h4.5335⟩ | ⟨h1.8772, h5.1166, h4.7165⟩
A6 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0, h5.6051, h5.4224⟩ | ⟨h2.1327, h4.9881, h0⟩ | ⟨h1.8772, h5.1166, h0⟩
Table 7. Comprehensive evaluation information by LNPWA operator.
 | C1 | C2 | C3 | C4
A1 | ⟨h6, h5.0201, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h0⟩ | ⟨h1.8772, h5.1166, h0⟩
A2 | ⟨h2.0696, h5.0201, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h0⟩ | ⟨h1.2689, h5.1166, h0⟩
A3 | ⟨h2.0696, h5.0201, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h0⟩ | ⟨h1.8772, h0, h0⟩
A4 | ⟨h6, h5.0201, h4.579⟩ | ⟨h6, h5.6051, h0⟩ | ⟨h6, h4.9881, h4.5335⟩ | ⟨h1.2689, h5.1166, h0⟩
A5 | ⟨h2.0696, h5.2356, h0⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h6, h4.9881, h0⟩ | ⟨h0.6358, h5.1166, h4.7165⟩
A6 | ⟨h2.0696, h5.0201, h4.579⟩ | ⟨h0.5864, h5.6051, h0⟩ | ⟨h6, h4.9881, h0⟩ | ⟨h1.2721, h5.1166, h0⟩
Table 8. Comprehensive evaluation information by LNPWG operator.
 | C1 | C2 | C3 | C4
A1 | ⟨h4.5387, h5.0201, h1.8471⟩ | ⟨h0.8573, h5.6051, h2.6567⟩ | ⟨h3.1737, h4.9881, h3.4741⟩ | ⟨h1.8772, h5.1166, h1.9671⟩
A2 | ⟨h2.0696, h5.0201, h3.2402⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h2.1327, h4.9881, h1.8396⟩ | ⟨h0, h5.1166, h1.9918⟩
A3 | ⟨h2.0696, h5.0201, h3.5544⟩ | ⟨h0.8573, h5.6051, h0⟩ | ⟨h3.1737, h4.9881, h1.8834⟩ | ⟨h1.8772, h4.182, h0⟩
A4 | ⟨h3.0839, h5.0201, h4.579⟩ | ⟨h1.7569, h5.6051, h2.6119⟩ | ⟨h3.1414, h4.9881, h4.5335⟩ | ⟨h0, h5.1166, h3.6868⟩
A5 | ⟨h2.0696, h5.3227, h3.5549⟩ | ⟨h0.8573, h5.6051, h2.6553⟩ | ⟨h4.5051, h4.9881, h3.4741⟩ | ⟨h0, h5.1166, h4.7165⟩
A6 | ⟨h3.0839, h5.0201, h4.579⟩ | ⟨h0, h5.6051, h2.6553⟩ | ⟨h3.1201, h4.9881, h1.8174⟩ | ⟨h0, h5.1166, h3.3929⟩
Table 9. Separations by the LNPWA operator.
Alternative | d(Ai, A*) | d(Ai, A^c*) | d(Ai, A*−) | d(Ai, A*+) | I_i
A1 | 2.1903 | 2.1162 | 1.6257 | 1.7055 | 0.7132
A2 | 2.3229 | 2.0653 | 1.5863 | 1.7175 | 0.698
A3 | 1.3743 | 2.8968 | 2.3562 | 0 | 0.7926
A4 | 2.3229 | 2.0653 | 1.5863 | 1.7175 | 0.698
A5 | 2.9288 | 0.5610 | 0 | 2.3562 | 0.499
A6 | 2.3222 | 2.0656 | 1.5864 | 1.7174 | 0.6981
Table 10. Separations by the LNPWG operator.
Alternative | d(Ai, A*) | d(Ai, A^c*) | d(Ai, A*−) | d(Ai, A*+) | I_i
A1 | 2.2863 | 1.5118 | 1.1097 | 0.7259 | 0.5942
A2 | 2.711 | 1.3681 | 0.9082 | 0.9641 | 0.5445
A3 | 1.9575 | 2.1815 | 1.7206 | 0 | 0.6659
A4 | 2.9016 | 0.8254 | 0.3432 | 1.4138 | 0.4709
A5 | 3.0628 | 0.5194 | 0 | 1.7205 | 0.4224
A6 | 2.8615 | 0.9176 | 0.4412 | 1.3295 | 0.4844
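Tables 9 and 10 report the separations of each alternative from the ideal solutions together with the closeness index I_i. For readers who want to reproduce the ranking step, the sketch below implements the textbook TOPSIS relative closeness, assuming the separation distances are already computed; note that the index I_i in this paper aggregates four separation measures, so this is an illustrative analogue rather than the exact formula used here.

```python
# Minimal sketch of the classic TOPSIS ranking step. It assumes the
# separations d+ (distance to the positive ideal) and d- (distance to
# the negative ideal) are already known; the paper's index I_i combines
# four separation measures, so this is an illustration, not its formula.
def closeness(d_pos, d_neg):
    """Relative closeness CC = d- / (d+ + d-); larger means better."""
    return d_neg / (d_pos + d_neg)

def rank(alternatives, d_pos, d_neg):
    """Order alternatives by descending relative closeness."""
    scores = {a: closeness(dp, dn)
              for a, dp, dn in zip(alternatives, d_pos, d_neg)}
    return sorted(scores, key=scores.get, reverse=True)
```

An alternative equidistant from both ideals scores 0.5; one closer to the negative ideal scores below 0.5 and is ranked last.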
Table 11. Results of different LSFs f * ( λ = 2 ).
LSF | Operator | A1 | A2 | A3 | A4 | A5 | A6 | Ranking Results
f1* | LNPWA | 0.713 | 0.698 | 0.793 | 0.698 | 0.499 | 0.698 | A3 ≻ A1 ≻ A6 ≻ A2 = A4 ≻ A5
f1* | LNPWG | 0.594 | 0.544 | 0.666 | 0.471 | 0.422 | 0.484 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
f2* | LNPWA | 0.7 | 0.69 | 0.773 | 0.69 | 0.48 | 0.69 | A3 ≻ A1 ≻ A6 ≻ A2 ≻ A4 ≻ A5
f2* | LNPWG | 0.578 | 0.549 | 0.64 | 0.447 | 0.401 | 0.462 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
f3* | LNPWA | 0.721 | 0.704 | 0.806 | 0.704 | 0.514 | 0.704 | A3 ≻ A1 ≻ A6 ≻ A2 = A4 ≻ A5
f3* | LNPWG | 0.608 | 0.54 | 0.684 | 0.486 | 0.439 | 0.496 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
Table 12. Results of different parameter λ ( f * = f 1 * ).
λ | Ranking by LNPWA operator | Ranking by LNPWG operator
1 | A3 ≻ A1 ≻ A6 ≻ A2 = A4 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
2 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
3 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
4 | A3 ≻ A1 ≻ A6 ≻ A2 = A4 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
5 | A3 ≻ A1 ≻ A2 = A6 = A4 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
6 | A3 ≻ A1 ≻ A6 ≻ A2 = A4 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
7 | A1 ≻ A3 ≻ A6 ≻ A2 = A4 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
8 | A1 ≻ A6 ≻ A2 = A4 ≻ A3 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
9 | A1 ≻ A6 ≻ A2 = A4 ≻ A3 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
10 | A1 ≻ A6 ≻ A2 = A4 ≻ A3 ≻ A5 | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
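Table 12 shows that the LNPWA rankings are stable for small λ but drift once λ exceeds 6, because a larger λ lets the biggest single-criterion deviation dominate the separation distances. The behaviour can be illustrated with a Minkowski-type distance of order λ; the paper's distance operates on linguistic neutrosophic triples rather than plain score vectors, so this is a hedged analogue only.

```python
# Hedged analogue of the λ-parameterised distance behind the sensitivity
# analysis: a Minkowski distance of order lam between two score vectors.
# lam = 1 behaves like a Hamming distance, lam = 2 like a Euclidean one,
# and as lam grows the largest coordinate gap dominates the result.
def minkowski(x, y, lam):
    return sum(abs(a - b) ** lam for a, b in zip(x, y)) ** (1.0 / lam)
```

For the vectors (0, 0) and (3, 4), the distance is 7 at λ = 1, 5 at λ = 2, and approaches 4 (the largest gap) as λ grows, which is why high-λ rankings can reorder alternatives that small-λ rankings tie.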
Table 13. Comparison results with the existing methods.
MCGDM method | Ranking Results
Proposed method by LNPWA operator | A3 ≻ A1 ≻ A6 ≻ A2 = A4 ≻ A5
Proposed method by LNPWG operator | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
LNNWAA operator [28] | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A5 ≻ A4
LNNWGA operator [28] | A3 ≻ A1 ≻ A2 ≻ A6 ≻ A4 ≻ A5
LNNNWBM operator [30] (p = q = 1) | A1 ≻ A3 ≻ A6 ≻ A2 ≻ A5 ≻ A4
LNNNWGBM operator [30] (p = q = 1) | A1 ≻ A3 ≻ A6 ≻ A2 ≻ A4 ≻ A5
An extended TOPSIS method [32] (λ = 2) | A3 ≻ A1 ≻ A6 ≻ A2 = A4 ≻ A5
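The distinguishing ingredient of the LNPWA and LNPWG operators in the comparison above is the power-average weighting, in which each assessment is weighted by the support it receives from the other assessments, so mutually reinforcing evaluations count for more. The sketch below illustrates that idea on crisp surrogate scores; the support function the paper defines on linguistic neutrosophic numbers replaces the simple absolute-difference support assumed here.

```python
# Sketch of the power-average weighting idea (Yager) behind the LNPWA
# and LNPWG operators: each input's weight grows with the total support
# it receives from the other inputs. Support here is 1 minus the
# absolute difference of scores assumed to lie in [0, 1]; the paper
# defines support via a distance between linguistic neutrosophic numbers.
def power_weights(scores):
    n = len(scores)
    support = lambda a, b: 1.0 - abs(a - b)
    T = [sum(support(scores[i], scores[j]) for j in range(n) if j != i)
         for i in range(n)]                    # total support per input
    total = sum(1.0 + t for t in T)
    return [(1.0 + t) / total for t in T]      # normalised power weights
```

With identical scores every input gets the same weight, while an outlying assessment receives less support and hence a smaller weight, which is the reinforcement effect claimed for the proposed operators.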

Liang, R.-x.; Jiang, Z.-b.; Wang, J.-q. A Linguistic Neutrosophic Multi-Criteria Group Decision-Making Method to University Human Resource Management. Symmetry 2018, 10, 364. https://doi.org/10.3390/sym10090364
