Article

A Consensus-Based 360 Degree Feedback Evaluation Method with Linguistic Distribution Assessments

by Chuanhao Fan 1, Jiaxin Wang 1, Yan Zhu 2,* and Hengjie Zhang 1

1 Business School, Hohai University, Nanjing 211100, China
2 Business School, Sichuan University, Chengdu 610065, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1883; https://doi.org/10.3390/math12121883
Submission received: 16 May 2024 / Revised: 11 June 2024 / Accepted: 12 June 2024 / Published: 17 June 2024
(This article belongs to the Special Issue Advances in Fuzzy Decision Theory and Applications, 2nd Edition)

Abstract: The 360 degree feedback evaluation method is a multidimensional, comprehensive assessment method. Evaluators may hesitate among multiple evaluation values, and because the evaluation is simultaneously constrained by evaluators' biases and cognitive errors, the results are prone to unfairness and conflict. To overcome these issues, this paper proposes a consensus-based 360 degree feedback evaluation method with linguistic distribution assessments. Firstly, evaluators provide evaluation information in the form of linguistic distributions. Secondly, utilizing an enhanced ordered weighted averaging (OWA) operator, the model aggregates multi-source evaluation information to handle biased evaluation information effectively. Subsequently, a consensus-reaching process is established to coordinate conflicting viewpoints among the evaluators, and a feedback adjustment mechanism is designed to guide evaluators in refining their evaluation information, facilitating the attainment of a unanimous evaluation outcome. Finally, the improved 360 degree feedback evaluation method was applied to the performance evaluation of the project leaders in company J, thereby validating the effectiveness and rationality of the method.

1. Introduction

The 360 degree feedback evaluation method, also known as comprehensive or multi-source feedback evaluation [1], differs from traditional top-down, unidirectional methods in that it gathers evaluation information from multiple stakeholders who interact with the evaluated individuals, providing comprehensive feedback from various perspectives [2,3]. In the early stages of its use in enterprise human resource management, 360 degree feedback evaluation was primarily employed for developmental assessments of individuals. This approach involved evaluating the personal capabilities and career development status of employees or organizational members [4,5]. Later, 360 degree feedback evaluation was gradually introduced into performance evaluations, assessing the job performance of employees or organizational members. These evaluations served as the basis for salary adjustments and position changes. The indicators used in 360 degree feedback evaluations are primarily qualitative [6], and the evaluation process consists of the preparation phase, training phase, evaluation implementation phase, and feedback coaching phase [7]. As a tool for assessing employee development and performance, 360 degree feedback evaluation has gained favor among numerous benchmark enterprises in human resource management due to its comprehensiveness and relative objectivity. It has been widely applied in corporate management [2,8,9]. Today, 360 degree feedback evaluation has expanded into various domains, including medicine, engineering projects, education, and beyond [10,11,12,13,14,15].
The key to the effective implementation of 360 degree feedback evaluation lies in assessing the evaluated individuals in an objective and fair manner [3,16]. Within this system, multiple evaluators are involved, including superiors, subordinates, peers, customers, and the individuals being assessed [6,17]. By receiving evaluations from multiple sources, individuals gain diverse and valuable perspectives, enabling them to accurately assess themselves and identify their strengths and areas for improvement [3,18]. Traditional 360 degree feedback evaluation methods rely on precise numerical assessments and use subjective weighting to determine indicator weights. By aggregating evaluation information from various evaluators through weighted averaging, these methods provide feedback to the individuals being assessed. This feedback aims to help individuals improve their behavior, enhance their abilities, and boost their performance [18,19,20].
Currently, scholars primarily explore and analyze the reliability and validity of 360 degree feedback from a theoretical perspective [21,22,23]. However, three issues persist in the implementation process. Firstly, the evaluation content is complex and abstract. Due to the qualitative nature of the indicators, evaluators’ limited capabilities, and the uncertainty in judgment, it is challenging for evaluators to provide precise numerical information [24,25,26,27,28]. Secondly, the determination of 360 degree feedback indicator weights often relies solely on experts’ subjective judgment, leading to significant arbitrariness [29]. Lastly, evaluations are influenced by individual cognition, and cognitive errors and biases can lead to inaccuracies, affecting the quality of the results [1,8,30]. Given these factors, 360 degree feedback evaluation may provide organizations and employees a false sense of objectivity and rationality. Addressing these challenges and improving the effectiveness of 360 degree feedback evaluation outcomes holds significant research importance.
Group decision making refers to the process in which multiple decision-makers provide their evaluation perspectives on a given set of options according to specific evaluation rules. These individual perspectives are then aggregated to form a collective evaluation [31,32]. The process of 360 degree feedback evaluation involves multiple evaluators, necessitating the gathering and integration of their evaluation information, making it a typical group decision-making problem [33,34]. Significant progress has been made in the acquisition and aggregation of decision information within group decision making, and it has been widely applied in various fields [35]. This progress holds important guiding significance for management, investment, and other practical problems [36,37,38,39].
In response to the aforementioned issues, scholars have begun employing group decision-making methods to conduct improvement research. For example, Anisseh et al. [40] utilized intuitionistic fuzzy numbers to represent evaluation information. Espinilla et al. [41] proposed using the linguistic two-tuple representation model and its extended approach to address the heterogeneous information involved in 360 degree evaluations. Cheng [42] employed the entropy method to determine objective weights for performance appraisal indicators, thereby avoiding the subjectivity and arbitrariness previously associated with determining these weights. These improved techniques have played a positive role in overcoming the deficiencies of traditional 360 degree feedback evaluation methods. However, to ensure the objectivity, fairness, and effectiveness of 360 degree feedback evaluations, there are still several issues that need further discussion:
(1)
The evaluators hesitate between multiple values. In enterprises that use 360 degree feedback evaluation, evaluators typically assign scores to individuals using precise numerical values, such as a scale from 1 to 10 [43]. However, in many cases, due to the qualitative nature of the evaluation indicators and the inherent complexity of the 360 degree evaluation process, evaluators may encounter hesitation or uncertainty when making their assessments [28]. Some scholars have proposed using linguistic variables to represent evaluation information, but it is rarely used in 360 degree feedback evaluation. It is necessary to explore further applications of linguistic variables in 360 degree feedback evaluation.
(2)
Evaluation information exists with individual biases [44]. The participation of multiple evaluators can ensure relatively objective results [45], but it cannot eliminate subjectivity. During the evaluation process, emotional factors and personal interests can easily infiltrate, leading evaluators to adopt strategic assessments [43]. For example, subordinates may offer excessively high ratings due to interpersonal needs [23], while colleagues might provide extremely low ratings due to competitive relationships [46]. Handling these extreme evaluations and reasonably aggregating them to ensure the results are as objective and fair as possible is key to effective assessment.
(3)
There are discrepancies in the evaluative information from multiple evaluators. Multiple evaluators come from diverse backgrounds with varying knowledge structures, levels of judgment, and familiarity with the evaluated individuals’ work [47]. As a result, discrepancies may arise in evaluators’ preference information and result rankings [9]. The current 360 degree feedback evaluation method aggregates individual evaluations into collective data without ensuring consensus among evaluators. Handling the discrepancies between evaluators’ assessments to ensure that the evaluation results are as acceptable as possible to all evaluators is also an important issue in the evaluation process.
In summary, to address the challenges of 360 degree feedback evaluation in complex environments, this study aims to present a consensus-based 360 degree feedback evaluation method with linguistic distribution assessments. This approach encompasses the following aspects:
(1)
This paper investigates the 360 degree feedback evaluation method within a linguistic context. Considering the uncertainty and hesitation in evaluators’ expressions, it proposes using linguistic distribution assessments to represent evaluation information. The use of linguistic distribution assessments not only aligns with the evaluators’ expression habits but also captures the uncertainty of the evaluation information, thereby helping to obtain results that closely reflect the evaluators’ cognition.
(2)
The enhanced ordered weighted averaging (OWA) operator is utilized to aggregate evaluative information from multiple evaluators, forming a collective assessment. By assigning relatively small weights to unfair (extreme) evaluation values, the influence of these values on the decision outcome is mitigated. This approach effectively addresses the issue of evaluator biases, ensuring fairness throughout the evaluation process.
(3)
Group consensus decision-making methods are utilized to address conflicts in evaluators' viewpoints during the evaluation process. This paper designs a consensus-reaching process that embeds a new feedback regulation mechanism to guide evaluators in adjusting their viewpoints to achieve consistent evaluation perspectives. This process improves the reliability and validity of the evaluation results and enhances evaluators' acceptance of the outcomes.
The rest of the paper is organized as follows. Section 2 introduces the foundational knowledge utilized in this study. Section 3 proposes a consensus-based 360 degree feedback evaluation method, elucidating its specific solution steps. Section 4 employs the proposed 360 degree feedback evaluation method to assess the performance of enterprise employees, examining the effectiveness and practicality of this approach. In Section 5, some discussions are conducted, while Section 6 outlines the research conclusions of this paper.

2. Preliminaries

This section introduces several basics involved in constructing a consensus-based 360 degree feedback evaluation method.

2.1. Two-Tuple Linguistic Model

There are some inevitable uncertainties in the evaluation process, which bring additional challenges. Therefore, in decision making, linguistic terms are often more convenient than precise numerical scales. Let $S = \{s_i \mid i = 0, 1, 2, \ldots, g\}$ be a linguistic term set, where $s_i$ represents a possible linguistic value and $g + 1$ represents the granularity of $S$. The following essential characteristics are commonly fulfilled: (1) $S$ is ordered: $s_i > s_j$ if and only if $i > j$. (2) A negation operator is defined as $Neg(s_i) = s_{g-i}$, where $i = 0, 1, \ldots, g$. For more details, kindly refer to Herrera and Martínez [48].
The two-tuple linguistic model is a symbolic computational model for linguistic information. The equivalent information of a numerical value $\beta$ can be derived from the following functions, resulting in a two-tuple representation:
$$\Delta : [0, g] \to S \times [-0.5, 0.5) \tag{1}$$
$$\Delta(\beta) = (s_i, \sigma), \quad \text{with} \quad i = \mathrm{round}(\beta), \quad \sigma = \beta - i, \quad \sigma \in [-0.5, 0.5) \tag{2}$$
where $\mathrm{round}(\cdot)$ is the usual rounding operation, $s_i$ is the term whose index is closest to $\beta$, and $\sigma$ is the value of the symbolic translation.
Let $S = \{s_i \mid i = 0, 1, 2, \ldots, g\}$ be a linguistic term set and $(s_i, \sigma)$ be a two-tuple. There is always a function $\Delta^{-1}$ such that, from a two-tuple, it returns its equivalent numerical value $\beta \in [0, g] \subset \mathbb{R}$. Specifically, the following function can be used to convert a linguistic two-tuple into its equivalent numerical value:
$$\Delta^{-1} : S \times [-0.5, 0.5) \to [0, g], \qquad \Delta^{-1}(s_i, \sigma) = i + \sigma = \beta \tag{3}$$
Based on the discussion of the two-tuple linguistic model, it is evident that converting a linguistic term into a linguistic two-tuple simply adds a symbolic translation of value 0: $s_i \in S \Rightarrow (s_i, 0)$.
In the above linguistic model, $\beta \in [0, g]$ is the result of an operation on the elements of the set $S$, $\Delta$ is a one-to-one mapping function, and $\Delta^{-1}$ is the inverse operator of $\Delta$. Here, $(s_i, \sigma)$ and $(s_j, \mu)$ are used to express two linguistic two-tuples. When $\Delta^{-1}(s_i, \sigma) > \Delta^{-1}(s_j, \mu)$, then $(s_i, \sigma)$ is larger than $(s_j, \mu)$.
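As a minimal illustration of the two functions above (a sketch only; the function and variable names are ours, not from the paper), the conversions $\Delta$ and $\Delta^{-1}$ can be coded directly:

```python
# Minimal sketch of the two-tuple linguistic functions Delta and Delta^{-1};
# names are illustrative, not taken from the paper.

def delta(beta, g):
    """Delta: [0, g] -> S x [-0.5, 0.5): map beta to the two-tuple (s_i, sigma)."""
    if not 0 <= beta <= g:
        raise ValueError("beta must lie in [0, g]")
    i = int(round(beta))      # index of the closest linguistic term s_i
    sigma = beta - i          # symbolic translation in [-0.5, 0.5)
    return i, sigma

def delta_inverse(i, sigma):
    """Delta^{-1}: recover the equivalent numerical value beta = i + sigma."""
    return i + sigma

# Example with g = 4 (five terms s_0, ..., s_4):
i, sigma = delta(2.75, g=4)                 # (3, -0.25): the two-tuple (s_3, -0.25)
print(i, sigma, delta_inverse(i, sigma))    # 3 -0.25 2.75
```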

2.2. Numerical Scale Function and Linguistic Distribution Assessments

2.2.1. Numerical Scale Function

The numerical scales of the linguistic terms are used to handle the linguistic distribution assessments and convert linguistic terms into precise numbers.
Definition 1
([49]). Let $S = \{s_i \mid i = 0, 1, 2, \ldots, g\}$ be as shown above, and let $\mathbb{R}$ be the set of real numbers. Then, the function $NS : S \to \mathbb{R}$ is regarded as a numerical scale of $S$, and $NS(s_t)$ is referred to as the numerical index of the term $s_t$. In this paper, for the following use, we assume that $NS(s_t) = t$ $(t = 0, 1, \ldots, g)$.

2.2.2. Linguistic Distribution Assessments

Linguistic distribution assessments provide symbolic proportion information concerning linguistic terms. The fundamental concept of linguistic distribution is outlined as follows:
Definition 2
([50]). Let $S = \{s_i \mid i = 0, 1, 2, \ldots, g\}$ be as shown above; $A = \{(s_t, \beta_t) \mid t = 0, 1, 2, \ldots, g\}$ is a linguistic distribution assessment over $S$, where $s_t \in S$, $0 \le \beta_t \le 1$ is the symbolic proportion of the linguistic term $s_t$, and $\sum_{t=0}^{g} \beta_t = 1$.
Definition 3
([50]). Let $A = \{(s_t, \beta_t) \mid t = 0, 1, 2, \ldots, g\}$ be as shown above. A linguistic two-tuple $E(A)$ is used to define the expectation of $A$. The expectation of $A$ is
$$E(A) = \Delta\left( \sum_{t=0}^{g} \beta_t \times NS(s_t) \right) \tag{4}$$
Obviously, $E(A)$ is a linguistic two-tuple defined over $S$. Let $A_1$ and $A_2$ be two linguistic distribution assessments. The comparison operator and negation operator for linguistic distributions are defined as follows:
(1) 
Comparison operator: If $E(A_1) > E(A_2)$, then $A_1$ is higher than $A_2$. If $E(A_1) = E(A_2)$, then $A_1$ is equal to $A_2$.
(2) 
Negation operator: $Neg\{(s_t, \beta_t) \mid t = 0, 1, 2, \ldots, g\} = \{(s_t, \beta_{g-t}) \mid t = 0, 1, 2, \ldots, g\}$.
Definition 4
([50]). Let $\{A_1, A_2, \ldots, A_n\}$ be a set of $n$ linguistic distribution assessments over $S$, where $A_i = \{(s_t, \beta_t^i) \mid t = 0, 1, 2, \ldots, g\}$, $i = 1, 2, \ldots, n$, and let $w = (w_1, w_2, \ldots, w_n)^T$ be the associated weight vector satisfying $w_i \ge 0$ and $\sum_{i=1}^{n} w_i = 1$. The weighted averaging operator of $\{A_1, A_2, \ldots, A_n\}$ is then given as follows:
$$DAWA(A_1, A_2, \ldots, A_n) = \{(s_t, \beta_t) \mid t = 0, 1, 2, \ldots, g\} \tag{5}$$
where $\beta_t = \sum_{i=1}^{n} w_i \times \beta_t^i$.
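To make Definitions 3 and 4 concrete, the short Python sketch below (an illustration under the assumption $NS(s_t) = t$ made above; a distribution is represented simply by its proportion vector $(\beta_0, \ldots, \beta_g)$) computes the expectation of a linguistic distribution assessment and the DAWA weighted average. The resulting numerical expectation can be turned into a two-tuple with the $\Delta$ function of Section 2.1.

```python
# Minimal sketch of the expectation E(A) (Definition 3) and the DAWA operator
# (Definition 4) for linguistic distribution assessments; names are illustrative.

def expectation(dist):
    """E(A) = sum_t beta_t * NS(s_t), with NS(s_t) = t."""
    assert abs(sum(dist) - 1.0) < 1e-9, "proportions must sum to 1"
    return sum(t * beta for t, beta in enumerate(dist))

def dawa(dists, weights):
    """Weighted average of n distributions: beta_t = sum_i w_i * beta_t^i."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return [sum(w * d[t] for w, d in zip(weights, dists)) for t in range(len(dists[0]))]

# Example on the five-grade set S = {s_0, ..., s_4}:
A1 = [0.0, 0.0, 0.25, 0.75, 0.0]    # 25% s_2, 75% s_3
A2 = [0.0, 0.0, 0.0, 0.5, 0.5]      # 50% s_3, 50% s_4
print(expectation(A1))              # 2.75, i.e. the two-tuple (s_3, -0.25)
print(dawa([A1, A2], [0.5, 0.5]))   # [0.0, 0.0, 0.125, 0.625, 0.25]
```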

2.3. OWA Operator

The OWA operator, proposed by Yager [51], involves three main steps: reordering the input data in descending order, determining the weights of the OWA using an appropriate method, and aggregating these reordered data using the OWA weights. The mathematical expression is as follows:
Definition 5
([51]). An OWA operator of dimension $n$ is a mapping $h : \mathbb{R}^n \to \mathbb{R}$ defined by
$$h(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} \lambda_i a_i \tag{6}$$
where $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_n)$ is the weighting vector associated with the function $h$, $\lambda_i \in [0, 1]$, $\sum_{i=1}^{n} \lambda_i = 1$, $i = 1, 2, \ldots, n$, and $a_i$ is the $i$-th largest of the values $(x_1, x_2, \ldots, x_n)$. $\mathbb{R}$ represents the set of real numbers, and the function $h$ is termed an OWA operator of dimension $n$. It is important to note that $\lambda_i$ is unrelated to $x_i$ and is associated only with the $i$-th position in the aggregation process.
Yager introduced two measures of representativeness, namely, the "orness measure" and the "dispersion measure". The "orness measure" quantifies the degree to which the aggregation behaves like an "or" rather than an "and" operation and is defined as follows:
$$orness(\lambda) = \frac{1}{n-1} \sum_{i=1}^{n} (n - i)\,\lambda_i \tag{7}$$
The "dispersion measure" is employed to quantify the contribution of each piece of data to the result and is defined as follows:
$$disp(\lambda) = -\sum_{i=1}^{n} \lambda_i \ln \lambda_i \tag{8}$$
A key step in the ordered weighted averaging (OWA) operator is determining the position weights for each piece of data. The size of these position weights is crucial to the final aggregation result. Wang and Xu [52] discussed the position weights from the perspective of the normal distribution and linked these weights to the decision-making data, proposing a method for assigning weights dependent on the decision data. This paper employs this method to aggregate multi-source evaluation information. The specific implementation steps are as follows:
(1) For a set of decision data denoted as $D = (d_1, d_2, \ldots, d_n)^T$ with corresponding weights $f = (f_1, f_2, \ldots, f_n)$, satisfying $f_i \ge 0$ and $\sum_{i=1}^{n} f_i = 1$, the mean of this data set is defined as $\bar{d}$ and the standard deviation is denoted as $\sigma$:
$$\bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i \tag{9}$$
$$\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \bar{d} - d_i \right)^2} \tag{10}$$
(2) Standardize the data to obtain $O = (o_1, o_2, \ldots, o_n)^T$, where
$$o_i = \frac{d_i - \bar{d}}{\sigma} \tag{11}$$
(3) For a continuous random variable $x$ with density function
$$h_{\mu,\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \tag{12}$$
the corresponding continuous distribution is the normal distribution with parameters $\mu$ and $\sigma$, denoted as $N(\mu, \sigma^2)$. Specifically, when the parameters are 0 and 1, it is the standard normal distribution $N(0, 1)$, with density function $h(x)$, where
$$h(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}} \tag{13}$$
Calculate the density function values $Q = (q_1, q_2, \ldots, q_n)^T$ for $O = (o_1, o_2, \ldots, o_n)^T$, where
$$q_i = h(o_i) \tag{14}$$
(4) Calculate the weights $f = (f_1, f_2, \ldots, f_n)$ for the data $D = (d_1, d_2, \ldots, d_n)^T$, where
$$f_i = q_i \Big/ \sum_{i=1}^{n} q_i \tag{15}$$
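The following Python sketch illustrates the weight-assignment procedure of Equations (9)–(15) (a minimal sketch assuming the input is a plain list of numerical evaluation values; names are illustrative). It shows how an extreme, possibly biased rating automatically receives a small weight:

```python
import math

def normal_owa_weights(data):
    """Data-dependent weights of Equations (9)-(15): values near the mean of the
    data receive larger weights; extreme values receive smaller ones."""
    n = len(data)
    mean = sum(data) / n                                        # Equation (9)
    std = math.sqrt(sum((mean - d) ** 2 for d in data) / n)     # Equation (10)
    if std == 0:                        # all values equal: fall back to uniform weights
        return [1.0 / n] * n
    o = [(d - mean) / std for d in data]                        # Equation (11)
    q = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in o]  # Equations (13)-(14)
    total = sum(q)
    return [qi / total for qi in q]                             # Equation (15)

# Example: six evaluators, one of whom gives an extreme (possibly biased) rating.
scores = [3.1, 3.0, 3.3, 0.5, 2.9, 3.2]
w = normal_owa_weights(scores)
print([round(x, 3) for x in w])       # the outlier 0.5 gets a weight of roughly 0.02
print(round(sum(wi * s for wi, s in zip(w, scores)), 2))  # aggregate stays close to 3
```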

3. Consensus-Based 360 Degree Feedback Evaluation Method

Based on the analysis conducted in the preceding sections, it is evident that the implementation of 360 degree feedback evaluation has limitations. To address the complexities associated with 360 degree feedback evaluation, a consensus-based 360 degree feedback evaluation method has been developed. The method employs an improved OWA operator to aggregate evaluation information, mitigating the impact of unfair (biased) evaluation values on evaluation outcomes. Additionally, a group consensus decision-making approach is utilized to manage divergent viewpoints among evaluators, aiming to achieve consensus and enhance the fairness and rationality of 360 degree feedback evaluation results.

3.1. Introduction to the Consensus-Based 360 Degree Feedback Evaluation Method

This paper enhances the traditional 360 degree feedback evaluation method to address prevalent issues during the evaluation process. It comprises four specific phases: the preparation phase, training phase, evaluation implementation phase, and feedback coaching phase.
Preparation phase: Establish the objectives of the 360 degree feedback evaluation, its principles, the evaluation indicators, and the corresponding evaluators and evaluated employees. This ensures they understand the purpose of implementing 360 degree feedback evaluation within the enterprise, enabling them to participate in the entire evaluation process fairly, scientifically, rigorously, and reasonably. Let $M = \{M_1, M_2, \ldots, M_m\}$ be the set of $m$ evaluators, let $I = \{I_1, I_2, \ldots, I_y\}$ be the set of $y$ evaluation indicators with the corresponding weight vector $w = (w_1, w_2, \ldots, w_y)^T$, where $\sum_{j=1}^{y} w_j = 1$, and let $X = \{X_1, X_2, \ldots, X_n\}$ be the set of $n$ evaluated employees. Each evaluator $M_i$ $(i = 1, 2, \ldots, m)$ assesses each evaluated employee $X_k$ $(k = 1, 2, \ldots, n)$ on the 360 degree feedback evaluation indicators $I_j$ $(j = 1, 2, \ldots, y)$.
Training phase: Establish an evaluation workgroup. The evaluation workgroup consists of relevant individuals, including the superior, subordinates, colleagues, and customers of the evaluated employees (experts may be involved in the review as needed based on specific circumstances). Explain the optimized 360 degree feedback evaluation process to all evaluators and the evaluated employees, ensuring their familiarity and correct utilization of the evaluation tools to provide genuine and objective feedback.
Evaluation implementation phase: The 360 degree feedback evaluation in this study comprises four main stages: (1) Collecting evaluation information from evaluators. (2) Aggregating the information using the enhanced OWA operator. (3) Calculating the weights of the evaluation indicators through a combination weighting method. (4) Establishing a consensus-reaching process to address conflicts among evaluators’ assessments. This process guides evaluators to adjust their evaluation information of the assessed individuals through a feedback regulation mechanism, ultimately leading to satisfactory evaluation results for the assessed employees.
Feedback coaching phase: During the feedback stage, the evaluation results and outcomes are communicated to the evaluated employees, encouraging self-reflection, identification of potential issues, and the active adoption of improvement measures. This continuous process aims to enhance work performance and professional capabilities, further propelling individual career development in the future.
The specific steps of the proposed consensus-based 360 degree feedback evaluation method are illustrated in Figure 1.

3.2. The Construction of the Consensus-Based 360 Degree Feedback Evaluation Method

This section provides an in-depth analysis of the evaluation implementation phase within the consensus-based 360 degree feedback evaluation method. In addressing the challenges of 360 degree feedback evaluation in complex environments, this paper attempts to utilize an improved OWA operator to aggregate evaluative information from multiple evaluators. Additionally, it employs group consensus decision-making methods to handle discrepancies in evaluators’ viewpoints. This approach aims to make the implementation process of 360 degree feedback evaluation more scientific and rational. The method diagram is illustrated as shown in Figure 2.
The crucial steps involve four distinct stages:
1.
Collecting evaluation information from evaluators.
The evaluators employ the linguistic term set $S = \{s_0, s_1, s_2, \ldots, s_g\}$ to assess the performance of the employees being evaluated on each evaluation indicator, thereby obtaining the evaluation matrix $D^k = (d_{ij}^k)_{m \times y}$, $d_{ij}^k = \{(s_t, \beta_{ij,t}^k) \mid t = 0, 1, 2, \ldots, g\}$, where $s_t \in S$, $\beta_{ij,t}^k \ge 0$, $\sum_{t=0}^{g} \beta_{ij,t}^k = 1$, and $k = 1, 2, \ldots, n$.
2.
Aggregating the collected evaluation information to obtain a provisional collective evaluation.
Step 1: Calculate the expectation matrix $ED^k$ for each evaluated employee $X_k \in X$ using Definition 3, satisfying $ED^k = (ed_{ij}^k)_{m \times y}$, where
$$ed_{ij}^k = \Delta\left( \sum_{t=0}^{g} NS(s_t)\,\beta_{ij,t}^k \right) \tag{16}$$
Step 2: Utilize Equations (9)–(14) to standardize the data in the evaluation matrix and calculate the probability density of the evaluation data, denoted as $G^k = (g_{ij}^k)_{m \times y}$.
Step 3: Utilize Equation (15) to compute the weight matrix for the evaluation data, denoted as $F^k = (f_{ij}^k)_{m \times y}$, where $\sum_{i=1}^{m} f_{ij}^k = 1$, $k = 1, 2, \ldots, n$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, y$. Here, $f_{ij}^k$ represents the weight assigned to evaluator $M_i$'s assessment of the evaluated employee $X_k$ on indicator $I_j$ among all the evaluation data provided by the evaluators.
Step 4: Calculate the comprehensive evaluation $C = (c_{kj}^c)_{n \times y}$ for the evaluated employee $X_k$ on the evaluation indicator $I_j$ (a compact numerical sketch of this aggregation stage is given after the consensus-reaching process below), where
$$c_{kj}^c = \Delta\left( \sum_{i=1}^{m} f_{ij}^k\, \Delta^{-1}\!\left(ed_{ij}^k\right) \right) \tag{17}$$
3.
Calculating the weights of the 360 degree feedback evaluation indicators.
Using the improved analytic hierarchy process [53] and the entropy method [42], calculate the subjective weights $w_j^s$ and the objective weights $w_j^o$ of the evaluation indicators, respectively.
Compute the comprehensive weight $w = (w_1, w_2, \ldots, w_y)^T$ for the evaluation indicators, where
$$w_j = \varepsilon\, w_j^s + (1 - \varepsilon)\, w_j^o \tag{18}$$
4.
Consensus reaching process.
Step 1: Consensus measurement
To measure the consistency level among evaluators on the evaluated employees, we first use the indicator weights from the third stage to consolidate the evaluation information for each evaluated individual, producing the integrated evaluation vector $Z^i = (z_1^i, z_2^i, \ldots, z_n^i)^T$ for each evaluator, where
$$z_k^i = \Delta\left( \sum_{j=1}^{y} \Delta^{-1}\!\left(ed_{ij}^k\right) w_j \right) \tag{19}$$
Aggregate the comprehensive evaluations acquired in the second stage to form the collective evaluation vector $Z^c = (z_1^c, z_2^c, \ldots, z_n^c)^T$, where
$$z_k^c = \Delta\left( \sum_{j=1}^{y} \Delta^{-1}\!\left(c_{kj}^c\right) w_j \right) \tag{20}$$
Let $O = (o_1, o_2, \ldots, o_n)^T$ be called a preference ordering, where $o_i$ $(i = 1, 2, \ldots, n)$ indicates the position of $X_i$ among $X = \{X_1, X_2, \ldots, X_n\}$. Let $O^i = (o_1^i, o_2^i, \ldots, o_n^i)^T$ $(i = 1, 2, \ldots, m)$ be the preference ordering of evaluator $M_i$; this paper calculates these preference orderings from the individual evaluation vectors $Z^i = (z_1^i, z_2^i, \ldots, z_n^i)^T$. The higher the evaluation value, the smaller the position index $o_k^i$. Likewise, the collective preference ordering $O^c = (o_1^c, o_2^c, \ldots, o_n^c)^T$ can be calculated from the collective evaluation vector $Z^c = (z_1^c, z_2^c, \ldots, z_n^c)^T$.
Definition 6
([54]). The ordinal consensus degree (OCD) of each evaluator $M_i$, where $i = 1, 2, \ldots, m$, is defined as follows:
$$OCD(M_i) = \frac{1}{n^2} \sum_{k=1}^{n} \left| o_k^i - o_k^c \right| \tag{21}$$
Utilize Equations (9)–(15) to compute the position weights for the data $OCD(M_1), OCD(M_2), \ldots, OCD(M_m)$, denoted as $\mu = (\mu_1, \mu_2, \ldots, \mu_m)$, where $\sum_{i=1}^{m} \mu_i = 1$ and $\mu_i \ge 0$; subsequently, the ordinal consensus degree among all the evaluators can be denoted as follows:
$$OCD\{M_1, M_2, \ldots, M_m\} = \sum_{i=1}^{m} \mu_i\, OCD(M_i) \tag{22}$$
If $OCD\{M_1, M_2, \ldots, M_m\} = 0$, all evaluators reach a consensus on the evaluation opinion. The higher the $OCD\{M_1, M_2, \ldots, M_m\}$ value is, the lower the degree of consensus, and the greater the conflict between the evaluators.
Step 2: Adjustment of the evaluation information
If a satisfactory consensus is not reached, the evaluators could adjust their evaluation information according to specific rules in order to facilitate achieving the desired consensus level. The adjustment rules are detailed as follows:
Similarly, referring to Dong et al. [55], and based on the linguistic distribution assessments utilized in this study, adjustments to the evaluation information should adhere to the guidelines specified below:
If $ed_{ij}^k > c_{kj}^c$, $M_i$ should lower their evaluation information of $X_k$ over $I_j$;
If $ed_{ij}^k = c_{kj}^c$, the evaluation information of $X_k$ over $I_j$ should remain the same;
If $ed_{ij}^k < c_{kj}^c$, $M_i$ should raise their evaluation information of $X_k$ over $I_j$.
By modifying the new round of evaluation information, a new consensus level among evaluators can be obtained. If the adjusted consensus level is acceptable, the final evaluation result is determined based on this collective agreement. If not, return to the first stage until a satisfactory consensus level is achieved.
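Two minimal Python sketches follow (illustrative code under the assumptions stated in their comments, not the authors' implementation). The first is the compact numerical sketch of the aggregation stage referenced in Step 4 above: for one indicator $I_j$ and one employee $X_k$, it computes the evaluators' expectations (Equation (16)), their data-dependent weights (Equations (9)–(15)), and the collective value (Equation (17)); evaluations are assumed to be given as proportion vectors over the five-grade term set with $NS(s_t) = t$.

```python
import math

def expectation(dist):
    """E(A) = sum_t t * beta_t, with NS(s_t) = t (Definition 3)."""
    return sum(t * b for t, b in enumerate(dist))

def normal_weights(values):
    """Data-dependent weights of Equations (9)-(15): standard normal density of
    the standardized values, normalized to sum to one."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((mean - v) ** 2 for v in values) / n)
    if std == 0:                        # all values equal: uniform weights
        return [1.0 / n] * n
    q = [math.exp(-((v - mean) / std) ** 2 / 2) / math.sqrt(2 * math.pi) for v in values]
    s = sum(q)
    return [qi / s for qi in q]

def aggregate_column(dists):
    """Collective value c_kj^c for one indicator and one employee (Eqs. (16)-(17))."""
    ed = [expectation(d) for d in dists]    # expectations ed_ij^k of the evaluators
    f = normal_weights(ed)                  # their data-dependent weights f_ij^k
    return sum(fi * e for fi, e in zip(f, ed))

# Three evaluators rating one indicator on the five-grade set S = {s_0, ..., s_4}:
column = [
    [0.0, 0.0, 0.2, 0.8, 0.0],   # expectation 2.8
    [0.0, 0.0, 0.0, 0.6, 0.4],   # expectation 3.4
    [0.0, 0.8, 0.2, 0.0, 0.0],   # expectation 1.2 (extreme, hence down-weighted)
]
print(round(aggregate_column(column), 3))    # about 2.66
```

The second sketch covers the consensus measurement of Equations (21) and (22) and the direction rules of the feedback adjustment mechanism; rankings are position vectors in which 1 denotes the best position, and all names are again illustrative.

```python
def ocd(individual, collective):
    """OCD(M_i) = (1 / n^2) * sum_k |o_k^i - o_k^c| (Equation (21))."""
    n = len(individual)
    return sum(abs(a - b) for a, b in zip(individual, collective)) / n ** 2

def collective_ocd(ocds, position_weights):
    """Weighted ordinal consensus degree over all evaluators (Equation (22))."""
    return sum(w * d for w, d in zip(position_weights, ocds))

def adjustment_direction(ed_ijk, c_kjc):
    """Feedback rule: compare an evaluator's expectation with the collective value."""
    if ed_ijk > c_kjc:
        return "lower the assessment"
    if ed_ijk < c_kjc:
        return "raise the assessment"
    return "keep the assessment"

# Example with four evaluated employees:
O1 = [2, 1, 3, 4]                       # an evaluator's ranking
Oc = [3, 1, 2, 4]                       # collective ranking
print(ocd(O1, Oc))                      # (1 + 0 + 1 + 0) / 16 = 0.125
print(collective_ocd([0.125, 0.0, 0.25], [0.4, 0.2, 0.4]))   # ~0.15
print(adjustment_direction(3.4, 3.1))   # -> "lower the assessment"
```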
Note 1: This paper focuses on proposing a consensus-based 360 degree feedback evaluation method with linguistic distribution assessments. An improved OWA operator is employed to aggregate evaluation information from different evaluators, forming a collective evaluation. A consensus-reaching process is embedded within the evaluation process to help evaluators achieve a consensus on the evaluation. This study focuses on the practical issues associated with 360 degree feedback evaluation, specifically addressing evaluator bias and conflicts in evaluation viewpoints. It emphasizes the application of this method in corporate performance assessment. In future research, we will include axiomatic analysis to explore the theoretical implications of this method in greater depth and propose relevant lemmas to further support and validate our theoretical framework.

4. Case Study

4.1. Background

J company, established in 2011 in Nanjing, Jiangsu Province, China, specializes in the electric power industry. As a high-tech enterprise, it focuses on developing application systems, integrating information systems, conducting data mining and analysis, and applying visualization technology. The company combines research and development, production, sales, and services to provide comprehensive information solutions for the electric power industry. Its business scope includes computer software development, technical services, system integration, and automation control system research and design.
The power dispatch department is the core of the company, responsible for product development, design, and customer technical support. It is divided into four project teams, each consisting of a project leader, development engineer, testing engineer, implementation engineer, and operation and maintenance engineer. The project leader is the central figure, bearing dual responsibilities for leadership and management. They oversee project implementation, track progress, and manage processes, playing an indispensable role. Evaluating the project leader involves scrutinizing their performance. Summarizing and assessing their work helps identify issues, enhance performance, and improve project completion. Therefore, this study uses the consensus-based 360 degree feedback evaluation method to assess the performance of project leaders at J Company, validating the method’s effectiveness and rationality.

4.2. Construction of the 360 Degree Feedback Evaluation Indicators System

In the context of 360 degree feedback evaluation, the accurate assessment of an individual’s work capabilities and development potential relies on scientifically sound performance indicators. When selecting these indicators, it is crucial to consider not only the project leader’s work performance but also their work attitude and abilities. Accordingly, this study identifies work performance, work attitude, and work ability as the first-level indicators, with project schedule, project delivery quality, project standardization, work responsibility, collaboration, organizational discipline, professional ability, team management ability, and decision-making ability as the second-level indicators. The 360 degree performance evaluation indicators system constructed in this study is illustrated in Figure 3.

4.3. Implementation of the Consensus-Based 360 Degree Feedback Evaluation Method

The consensus-based 360 degree feedback evaluation method consists of the following four phases, with specific steps outlined as follows:
Preparation phase. Determine the objectives and principles of the 360 degree feedback evaluation. To enhance the efficiency of project leaders; achieve J Company’s overall operational objectives; and provide accurate bases for salary adjustments, training, and promotions, the 360 degree feedback evaluation is employed. The evaluation process adheres to the following principles:
(1)
The three public principles: Fairness, impartiality, and transparency in the evaluation process and results;
(2)
Objectivity principle: Basing evaluations on facts and avoiding excessive subjective judgments;
(3)
Results-oriented principle: Emphasizing outcomes, key behaviors, and value contributions;
(4)
Performance coaching: Consistently implementing coaching as the core of performance management throughout the process.
Training phase: Form an evaluation working group. The 360 degree feedback evaluation involves six evaluators, namely, the project director, subordinates, customers, and relevant personnel from the project service center and marketing commerce department. These six evaluators assess the performance of four project leaders, denoted as $X = \{X_1, X_2, X_3, X_4\}$. There are three first-level indicators, denoted as $I = \{I_1, I_2, I_3\}$, with nine corresponding second-level evaluation indicators, denoted as $I_1 = \{I_1^1, I_1^2, I_1^3\}$, $I_2 = \{I_2^1, I_2^2, I_2^3\}$, $I_3 = \{I_3^1, I_3^2, I_3^3\}$. Evaluators use a five-grade linguistic term set to assess the performance of the project leaders across the evaluation indicators, represented as $S = \{s_0 = \text{very poor}, s_1 = \text{poor}, s_2 = \text{fair}, s_3 = \text{good}, s_4 = \text{very good}\}$.
Evaluation implementation phase: The specific procedure consists of the following four steps.
1.
Collecting evaluation information from evaluators.
Evaluators use a set of linguistic terms to assess the performance of the project leaders on each evaluation indicator, resulting in an evaluation matrix $D^k = (d_{ij}^k)_{m \times y}$, as shown in Table 1, Table 2, Table 3 and Table 4.
2.
Evaluation information aggregation.
The OWA operator is used to aggregate evaluation information, specifically as follows:
Step 1: Calculate the expectation matrix $ED^k$ of the project leader $X_k \in \{X_1, X_2, X_3, X_4\}$ using Equation (16);
Step 2: Use Equations (9)–(14) to standardize the data in the evaluation matrix, and calculate the probability density of the evaluation data as $G^k = (g_{ij}^k)_{m \times y}$;
Step 3: Using Equation (15), obtain the positional weight matrix, denoted as $F^k = (f_{ij}^k)_{m \times y}$;
Step 4: Use Equation (17) to calculate the comprehensive evaluation $C = (c_{kj}^c)_{n \times y}$ of the evaluated employee $X_k$.
3.
Determine the weights of performance evaluation indicators.
Step 1: Objective weights
The objective weights are obtained by the entropy method. The objective weights for each indicator are denoted as $w^o = (0.104, 0.175, 0.110, 0.130, 0.097, 0.131, 0.078, 0.081, 0.095)^T$.
Step 2: Subjective weights
The subjective weights are determined based on expert opinions. The subjective weights for each indicator are denoted as $w^s = (0.210, 0.131, 0.094, 0.083, 0.069, 0.049, 0.144, 0.120, 0.100)^T$.
Step 3: Combination weights
Based on Equation (18), with $\varepsilon$ set to 0.5 in this study, the combined weights for the 360 degree performance indicators are determined as $w = (0.157, 0.153, 0.102, 0.107, 0.083, 0.090, 0.111, 0.100, 0.097)^T$.
4.
Consensus reaching process.
Step 1: Consensus measurement
To measure the consensus level among evaluators $M = \{M_1, M_2, \ldots, M_6\}$ on the performance evaluation of the assessed employees $X = \{X_1, X_2, X_3, X_4\}$, the evaluation information is aggregated using Equation (19). This results in the evaluation matrix shown in Table 5.
Then, by utilizing Equation (20), the comprehensive collective evaluation is obtained: $Z^c = ((s_3, 0.12), (s_3, 0.23), (s_3, 0.15), (s_3, 0.16))$.
Applying the definition of preference ordering to the evaluators and the group yields the preference ordering vectors $O^1 = (2, 1, 3, 4)^T$, $O^2 = (3, 1, 2, 4)^T$, $O^3 = (2, 3, 1, 4)^T$, $O^4 = (2, 1, 3, 4)^T$, $O^5 = (2, 1, 4, 3)^T$, $O^6 = (2, 3, 1, 4)^T$, and $O^c = (3, 1, 2, 4)^T$.
In this study, the threshold of the ordinal consensus degree was set to 0.1: if $OCD\{M_1, M_2, \ldots, M_m\} > 0.1$, then every evaluator $M_i$ with $OCD(M_i) > 0.1$ needs to adjust their evaluation information using the feedback adjustment mechanism. Using Equation (21), the $OCD$ for each $M_i$ $(i = 1, 2, \ldots, 6)$ is calculated, with the following results: $OCD(M_1) = 0.13$, $OCD(M_2) = 0$, $OCD(M_3) = 0.25$, $OCD(M_4) = 0.13$, $OCD(M_5) = 0.25$, $OCD(M_6) = 0.25$. Utilizing Equations (9)–(15), the position weights for the evaluators' ordinal consensus degrees are calculated as $\mu = (0.22, 0.06, 0.17, 0.22, 0.17, 0.17)^T$. Based on this, the collective $OCD\{M_1, M_2, \ldots, M_6\} = 0.18$ is computed by Equation (22) (a short numerical check of this computation is given at the end of this phase). Since an acceptable consensus level has not been reached among the evaluators, evaluators $M_1$, $M_3$, $M_4$, $M_5$, and $M_6$ need to adjust their evaluation information according to the feedback adjustment mechanism. The specific adjustment process is detailed in the second round of the 360 degree performance evaluation process.
Step 2: Adjustment of the evaluation information
The adjusted evaluation matrix is represented as $D^k = (d_{ij}^k)_{m \times y}$, as shown in Table 6, Table 7, Table 8 and Table 9.
Repeating the previously described 360 degree feedback evaluation process yields the collective evaluation matrix $C = (c_{kj}^c)_{n \times y}$. The weights for the 360 degree feedback evaluation indicators are now $w = (0.155, 0.143, 0.104, 0.121, 0.082, 0.088, 0.110, 0.099, 0.098)^T$. The comprehensive evaluation matrix for each evaluated employee is shown in Table 10.
Equation (20) is utilized to obtain the comprehensive collective evaluation $Z^c = ((s_3, 0.13), (s_3, 0.23), (s_3, 0.10), (s_3, 0.13))$. The $OCD$ for each $M_i$ $(i = 1, 2, \ldots, 6)$ is calculated again, with the following results: $OCD(M_1) = 0$, $OCD(M_2) = 0.13$, $OCD(M_3) = 0$, $OCD(M_4) = 0$, $OCD(M_5) = 0.13$, $OCD(M_6) = 0.25$. The collective $OCD\{M_1, M_2, \ldots, M_6\} = 0.07$ is calculated by Equation (22). At this point, because $OCD\{M_1, M_2, \ldots, M_6\} < 0.1$, an acceptable level of consensus has been reached among the evaluators. Therefore, the evaluation information obtained in this consensus round represents the final evaluation result for the project leaders, namely the ranking $X_2 \succ X_1 \succ X_3 \succ X_4$.
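As referenced in the consensus measurement step above, the following short check (a sketch assuming the reported rounded values: the subjective and objective indicator weights with $\varepsilon = 0.5$, and the first-round individual OCD values with their position weights) reproduces the first-round combined weights of Equation (18) and the first-round collective consensus degree of Equation (22):

```python
# Numerical check of Equation (18) (first-round combined indicator weights,
# epsilon = 0.5) and Equation (22) (first-round collective consensus degree),
# using the rounded values reported above.
ws = [0.210, 0.131, 0.094, 0.083, 0.069, 0.049, 0.144, 0.120, 0.100]   # subjective
wo = [0.104, 0.175, 0.110, 0.130, 0.097, 0.131, 0.078, 0.081, 0.095]   # objective
w = [0.5 * s + 0.5 * o for s, o in zip(ws, wo)]
print([f"{x:.4f}" for x in w])
# ['0.1570', '0.1530', '0.1020', '0.1065', '0.0830', '0.0900', '0.1110', '0.1005', '0.0975'],
# which matches w = (0.157, 0.153, 0.102, 0.107, 0.083, 0.090, 0.111, 0.100, 0.097) up to rounding.

ocds = [0.13, 0.00, 0.25, 0.13, 0.25, 0.25]   # OCD(M_1), ..., OCD(M_6)
mu = [0.22, 0.06, 0.17, 0.22, 0.17, 0.17]     # position weights
print(round(sum(m * d for m, d in zip(mu, ocds)), 2))   # 0.18 > 0.1, so adjustment was needed
```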
Feedback stage: Finally, the evaluation results are provided as feedback to each project leader. This feedback helps them identify areas of improvement, address some issues, and enhance their performance for the next evaluation cycle.

5. Discussion

In this section, the proposed consensus-based 360 degree feedback evaluation method is compared with related methods to highlight the contribution of this study.
(1)
Evaluation information representation: Evaluation information representation is a foundational task of 360 degree feedback evaluation. Traditional 360 degree feedback evaluation methods often use numerical forms, such as assigning scores within a 1–10 range [43,46,56]. Archer et al. [56] employed a six-point scale to assess the clinical performance of participants. Baker et al. [46] used a five-point rating scale to assess the performance of the evaluated individuals. However, due to the qualitative nature of evaluation criteria, limited capacity of evaluators, and time constraints, evaluators may hesitate among multiple assessment values. Consequently, linguistic assessment approaches have been employed [40,41,57,58]. Andrés et al. [58] presented a multi-granular linguistic 360 degree feedback evaluation model using a symbolic approach based on the fuzzy linguistic two-tuple. Utilizing linguistic variables is a flexible and practical approach for portraying evaluators’ cognitive information. In contrast to these methods, this study employed linguistic distribution assessments to represent evaluators’ evaluation information, providing a more general linguistic model that can effectively represent qualitative or uncertain evaluation data [59].
(2)
Aggregation of evaluation information: Information aggregation refers to consolidating evaluation information from various sources or types into a comprehensive assessment result [35]. In the 360 degree feedback evaluation process, effectively handling the collected multi-source evaluation information and utilizing a specific aggregation mechanism to integrate opinions from multiple evaluators are critical factors for the successful implementation of 360 degree feedback evaluation. Traditional 360 degree feedback evaluation methods employ weighted average operators or simple average operators to aggregate evaluation information from multiple evaluators [43,60]. Joshi et al. [60] used the weighted average operator to calculate the total score of the residents. Saberzadeh-Ardestani et al. [43] generated an overall score for each individual in the multi-source feedback (MSF) assessment system by averaging the scores received from all evaluators. This study utilized the improved OWA operator to determine the positional weights for each piece of evaluation information. In this approach, weights are assigned based on the evaluation data, giving relatively lower weights to information located at the ends and higher weights to information in the middle [53]. This approach effectively addresses evaluation information with extreme highs or lows that may contain biases, encouraging evaluators to express their opinions more genuinely and objectively.
(3)
Consensus issue: In implementing the 360 degree feedback evaluation method, multiple evaluators may come from different professional backgrounds. Due to variations in their understanding of the individuals being assessed, the evaluation information provided by evaluators may exhibit significant conflicts. Addressing the conflicts among multi-source evaluation information to render the final evaluation results more acceptable to both evaluators and the evaluated individuals is a critical issue in 360 degree feedback evaluation. However, traditional 360 degree feedback evaluation methods merely aggregate individual evaluation information into collective evaluation information to obtain the ranking of evaluation results, without fully addressing the issue of evaluators’ consensus [41,58,60]. In this study, the integration of a consensus-reaching process into the 360 degree feedback evaluation method was developed, and a feedback mediation mechanism was designed to guide evaluators in adjusting their evaluation information. This approach aims to handle conflicts in evaluators’ assessment opinions during the evaluation process, encouraging evaluators to achieve as much consensus as possible. A detailed comparison highlighting the characteristics of the proposed consensus-based 360 degree feedback evaluation method is provided in Table 11.

6. Conclusions

The primary contribution of this study is the enhancement of the 360 degree feedback evaluation method. A consensus-based 360 degree feedback evaluation method has been developed to elevate the precision and scientific validity of the final evaluation results. Additionally, it is crucial to acknowledge that within the domain of human resource management in enterprises, there are various assessment issues, such as employee recruitment, talent selection, performance appraisal, and diverse decision-making matters associated with personnel management. The improved 360 degree feedback evaluation method proposed in this paper is equally applicable to a range of employee assessment challenges in human resource management.
The primary research contents of this paper are as follows:
(1)
We investigated a 360 degree feedback evaluation method with linguistic distribution assessments. The study utilized linguistic distribution assessments to represent evaluators’ evaluation information, adapting to evaluators’ expression habits while capturing the uncertainty in the evaluation data. Additionally, a combination weighting method was employed to determine the indicator weights, striking a balance between the subjective importance of the indicators and the objective data information.
(2)
We designed a 360 degree feedback evaluation aggregation mechanism based on an improved OWA operator. The improved OWA operator determines the positional weight of each piece of evaluation information, applying lower weights to extreme evaluation information. Research findings indicate that this approach mitigates the significant impact of extreme appraisal information arising from individual biases, resulting in fairer evaluation outcomes.
(3)
We constructed a consensus-based 360 degree feedback evaluation method. We utilized group consensus decision-making methods to measure the degree of consensus among evaluators and devised a feedback adjustment mechanism to guide evaluators with lower consensus levels within the group to adjust their appraisal perspectives effectively, facilitating the achievement of consensus among evaluators.
We acknowledge several limitations in our proposed 360 degree feedback evaluation method, and these issues warrant resolution in future research.
(1)
In the evaluation process, the subjective factors such as the knowledge, experience, and cognitive levels of the individuals being evaluated, as well as objective factors like the complexity and uncertainty of evaluation issues, can lead different people to have varying interpretations of the same linguistic terms [61]. Research that incorporates personalized individual semantics should also be integrated into the 360 degree feedback evaluation method.
(2)
In the 360 degree feedback evaluation process, the evaluation indicators are not isolated, and there exists interaction among them, thereby influencing the final evaluation results. The presence of such interaction and correlation adds complexity to the evaluation process, potentially necessitating a more careful design of the evaluation indicator system or the adoption of more sophisticated methods or models, such as the Sugeno integral-based ordered weighted maximum operator (OWMax) [62], to handle the interactions among indicators and ensure the accuracy of comprehensive assessments.
(3)
With the development of information network technology, an organization’s human resource management is moving towards digitization and informatization, significantly enhancing the efficiency of human resource management. Information-based human resource management has seen vigorous growth [63]. The 360 degree feedback evaluation consensus model constructed in this paper can be further optimized with the help of information technology to design organizational performance evaluation systems or employee evaluation systems. This can enable networked 360 degree feedback evaluation management, ultimately improving the efficiency of performance evaluation work.

Author Contributions

Conceptualization, methodology, writing—original draft preparation, funding acquisition, and supervision were performed by C.F. Data analysis and writing—original draft preparation were performed by J.W. Data collection and writing—review and editing were performed by Y.Z. Methodology was performed by H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was jointly supported by the Social Science Fund of Jiangsu Province (No. 23GLB004).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors have no competing interests to declare that are relevant to the content of this study.

References

  1. London, M.; Volmer, J.; Zyberaj, J.; Kluger, A.N. Gaining feedback acceptance: Leader-member attachment style and psychological safety. Hum. Resour. Manage Rev. 2023, 33, 100953. [Google Scholar] [CrossRef]
  2. Semeijn, J.H.; Van Der Heijden, B.I.; Van Der Lee, A. Multisource ratings of managerial competencies and their predictive value for managerial and organizational effectiveness. Hum. Resour. Manag. 2014, 53, 773–794. [Google Scholar] [CrossRef]
  3. Jackson, D.J.R.; Michaelides, G.; Dewberry, C.; Schwencke, B.; Toms, S. The implications of unconfounding multisource performance ratings. J. Appl. Psychol. 2020, 105, 312–329. [Google Scholar] [CrossRef]
  4. Day, D.V.; Dragoni, L. Leadership development: An outcome-oriented review based on time and levels of analyses. Ann. Rev. Organ. Psychol. Organ. Behav. 2015, 2, 133–156. [Google Scholar] [CrossRef]
  5. Ock, J. Construct validity evidence for multisource performance ratings: Is interrater reliability enough? Ind. Organ. Psychol. 2016, 9, 329–333. [Google Scholar] [CrossRef]
  6. Vergauwe, J.; Hofmans, J.; Wille, B. The Leadership Arena–Reputation–Identity (LARI) model: Distinguishing shared and unique perspectives in multisource leadership ratings. J. Appl. Psychol. 2022, 107, 2243–2268. [Google Scholar] [CrossRef] [PubMed]
  7. Yahiaoui, D.; Nakhle, S.F.; Farndale, E. Culture and performance appraisal in multinational enterprises: Implementing French headquarters’ practices in Middle East and North Africa subsidiaries. Hum. Resour. Manag. 2021, 60, 771–785. [Google Scholar] [CrossRef]
  8. Atwater, L.E.; Brett, J.F.; Charles, A.C. Multisource feedback: Lessons learned and implications for practice. Hum. Resour. Manag. 2007, 46, 285–307. [Google Scholar] [CrossRef]
  9. Kostopoulos, K.; Syrigos, E.; Kuusela, P. Responding to inconsistent performance feedback on multiple goals: The contingency role of decision maker’s status in introducing changes. Long Range Plan. 2023, 56, 102269. [Google Scholar] [CrossRef]
  10. Hill, J.J.; Asprey, A.; Richards, S.H.; Campbell, J.L. Multisource feedback questionnaires in appraisal and for revalidation: A qualitative study in UK general practice. Br. J. Gen. Pract. 2012, 62, e314–e321. [Google Scholar] [CrossRef]
  11. Al Khalifa, K.; Al Ansari, A.; Violato, C.; Donnon, T. Multisource feedback to assess surgical practice: A systematic review. J. Surg. Educ. 2013, 70, 475–486. [Google Scholar] [CrossRef] [PubMed]
  12. Donnon, T.; Al Ansari, A.; Al Alawi, S.; Violato, C. The reliability, validity, and feasibility of multisource feedback physician assessment: A systematic review. Acad. Med. 2014, 89, 511–516. [Google Scholar] [CrossRef]
  13. Watling, C.J.; Ginsburg, S. Assessment, feedback and the alchemy of learning. Med. Educ. 2019, 53, 76–85. [Google Scholar] [CrossRef] [PubMed]
  14. Zuo, W.J.; Liu, L.J.; Hu, Q.; Zeng, S.Z.; Hu, Z.M. A property perceived service quality evaluation method for public buildings based on multisource heterogeneous information fusion. Eng. Appl. Artif. Intel. 2023, 122, 106070. [Google Scholar] [CrossRef]
  15. Xu, L.L.; Zhang, T.F. Engaging with multiple sources of feedback in academic writing: Postgraduate students’ perspectives. Assess. Eval. High. Educ. 2023, 48, 995–1008. [Google Scholar] [CrossRef]
  16. Van der Heijden, B.I.; Nijhof, A.H. The value of subjectivity: Problems and prospects for 360-degree appraisal systems. Int. J. Hum. Resour. Manag. 2004, 15, 493–511. [Google Scholar] [CrossRef]
  17. Jiao, W. Performance evaluation of state-owned enterprises based on fuzzy neural network combination model. Soft Comput. 2022, 26, 11105–11113. [Google Scholar] [CrossRef]
  18. Selvarajan, T.; Cloninger, P.A. Can performance appraisals motivate employees to improve performance? A Mexican study. Int. J. Hum. Resour. Manag. 2012, 23, 3063–3084. [Google Scholar] [CrossRef]
  19. Brown, T.C.; O’Kane, P.; Mazumdar, B.; McCracken, M. Performance management: A scoping review of the literature and an agenda for future research. Hum. Resour. Dev. Rev. 2019, 18, 47–82. [Google Scholar] [CrossRef]
  20. Lockyer, J.; Sargeant, J. Multisource feedback: An overview of its use and application as a formative assessment. Can. Med. Educ. J. 2022, 13, 30–35. [Google Scholar] [CrossRef]
  21. Ferguson, J.; Wakeling, J.; Bowie, P. Factors influencing the effectiveness of multisource feedback in improving the professional practice of medical doctors: A systematic review. BMC Med. Educ. 2014, 14, 76. [Google Scholar] [CrossRef]
  22. Brett, J.F.; Atwater, L.E. 360° feedback: Accuracy, reactions, and perceptions of usefulness. J. Appl. Psychol. 2001, 86, 930–942. [Google Scholar] [CrossRef]
  23. Bing-You, R.; Varaklis, K.; Hayes, V.; Trowbridge, R.; Kemp, H.; McKelvy, D. The feedback tango: An integrative review and analysis of the content of the teacher–learner feedback exchange. Acad. Med. 2018, 93, 657–663. [Google Scholar] [CrossRef] [PubMed]
  24. Ng, K.Y.; Koh, C.; Ang, S.; Kennedy, J.C.; Chan, K.Y. Rating leniency and halo in multisource feedback ratings: Testing cultural assumptions of power distance and individualism-collectivism. J. Appl. Psychol. 2011, 96, 1033–1044. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, Z.; Guo, C.H. A method for multi-granularity uncertain linguistic group decision making with incomplete weight information. Knowl.-Based Syst. 2012, 26, 111–119. [Google Scholar] [CrossRef]
  26. Zhang, Z.; Guo, C.H.; Martínez, L. Managing multigranular linguistic distribution assessments in large-scale multiattribute group decision making. IEEE Trans. Syst. Man Cybern. Syst. 2016, 47, 3063–3076. [Google Scholar] [CrossRef]
  27. Zhang, G.Q.; Wu, Y.Z.; Dong, Y.C. Generalizing linguistic distributions in hesitant decision context. Int. J. Comput. Intell. Syst. 2017, 10, 970–985. [Google Scholar] [CrossRef]
  28. Jin, L.S.; Chen, Z.S.; Yager, R.R.; Langari, R. Interval type interval and cognitive uncertain information in information fusion and decision making. Int. J. Comput. Intell. Syst. 2023, 16, 60. [Google Scholar] [CrossRef]
  29. Zhou, M.; Liu, X.B.; Chen, Y.W.; Yang, J.B. Evidential reasoning rule for MADM with both weights and reliabilities in group decision making. Knowl.-Based. Syst. 2018, 143, 142–161. [Google Scholar] [CrossRef]
  30. Zhang, Y.; Weng, Q.X. The analysis of characteristics and internal mechanisms of multisource feedback. Adv. Psychol. Sci. 2018, 26, 1131–1140. [Google Scholar] [CrossRef]
  31. Liu, W.Q.; Zhang, H.J.; Liang, H.M.; Li, C.C.; Dong, Y.C. Managing consistency and consensus issues in group decision-making with self-confident additive preference relations and without feedback: A nonlinear optimization method. Group. Decis. Negot. 2022, 31, 213–240. [Google Scholar] [CrossRef]
  32. Yang, Y.L.; Gai, T.T.; Cao, M.S.; Zhang, Z.; Zhang, H.J.; Wu, J. Application of group decision making in shipping industry 4.0: Bibliometric analysis, trends, and future directions. Systems 2023, 11, 69. [Google Scholar] [CrossRef]
  33. Smither, J.W.; London, M.; Reilly, R.R. Does performance improve following multisource feedback? A theoretical model, meta-analysis, and review of empirical findings. Pers. Psychol. 2005, 58, 33–66. [Google Scholar] [CrossRef]
  34. Manoharan, T.; Muralidharan, C.; Deshmukh, S. An integrated fuzzy multi-attribute decision-making model for employees’ performance appraisal. Int. J. Hum. Resour. Manag. 2011, 22, 722–745. [Google Scholar] [CrossRef]
  35. Chen, Z.S.; Yu, C.; Chin, K.S.; Martínez, L. An enhanced ordered weighted averaging operators generation algorithm with applications for multicriteria decision making. Appl. Math. Model. 2019, 71, 467–490. [Google Scholar] [CrossRef]
  36. Chen, Z.S.; Zhang, X.; Govindan, K.; Wang, X.J.; Chin, K.S. Third-party reverse logistics provider selection: A computational semantic analysis-based multi-perspective multi-attribute decision-making approach. Expert. Syst. Appl. 2021, 166, 114051. [Google Scholar] [CrossRef]
  37. Chen, Y.W.; Zhao, P.W.; Zhang, Z.; Bai, J.C.; Guo, Y.Q. A stock price forecasting model integrating complementary ensemble empirical mode decomposition and independent component analysis. Int. J. Comput. Intell. Syst. 2022, 15, 75. [Google Scholar] [CrossRef]
  38. Gao, Y.; Zhang, Z. Consensus reaching with non-cooperative behavior management for personalized individual semantics-based social network group decision making. J. Oper. Res. Soc. 2021, 73, 2518–2535. [Google Scholar] [CrossRef]
  39. Xu, X.J.; Liu, Y.; Liu, S.T. Supplier selection method for complex product based on grey group clustering and improved criteria importance. Int. J. Comput. Intell. Syst. 2023, 16, 195. [Google Scholar] [CrossRef]
40. Anisseh, M.; Yusuff, R.b.M.; Shakarami, A. Aggregating group MCDM problems using a fuzzy Delphi model for personnel performance appraisal. Sci. Res. Essays 2009, 4, 381–391. [Google Scholar]
  41. Espinilla, M.; de Andres, R.; Martinez, F.J.; Martinez, L. A 360-degree performance appraisal model dealing with heterogeneous information and dependent criteria. Inf. Sci. 2013, 222, 459–471. [Google Scholar] [CrossRef]
42. Cheng, S. The KPI design method of performance assessment of hydraulic engineering construction enterprise based on entropy method. J. Shandong Univ. Eng. Sci. 2020, 50, 80–84. [Google Scholar]
43. Saberzadeh-Ardestani, B.; Sima, A.R.; Khosravi, B.; Young, M.; Mortaz Hejri, S. The impact of prior performance information on subsequent assessment: Is there evidence of retaliation in an anonymous multisource assessment system? Adv. Health Sci. Educ. 2024, 29, 531–550. [Google Scholar] [CrossRef] [PubMed]
44. Bizzarri, F.; Mocenni, C.; Tiezzi, S. A Markov decision process with awareness and present bias in decision-making. Mathematics 2023, 11, 2588. [Google Scholar] [CrossRef]
  45. DeNisi, A.S.; Murphy, K.R. Performance appraisal and performance management: 100 years of progress? J. Appl. Psychol. 2017, 102, 421–433. [Google Scholar] [CrossRef] [PubMed]
  46. Baker, K.; Haydar, B.; Mankad, S. A feedback and evaluation system that provokes minimal retaliation by trainees. Anesthesiology 2017, 126, 327–337. [Google Scholar] [CrossRef] [PubMed]
47. Gai, T.T.; Cao, M.S.; Chiclana, F.; Zhang, Z.; Dong, Y.C.; Herrera-Viedma, E.; Wu, J. Consensus-trust driven bidirectional feedback mechanism for improving consensus in social network large-group decision making. Group Decis. Negot. 2023, 32, 45–74. [Google Scholar] [CrossRef]
  48. Herrera, F.; Martinez, L. The 2-tuple linguistic computational model: Advantages of its linguistic description, accuracy and consistency. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2001, 9 (Suppl. S1), 33–48. [Google Scholar]
49. Dong, Y.C.; Zhang, G.Q.; Hong, W.C.; Yu, S. Linguistic computational model based on 2-tuples and intervals. IEEE Trans. Fuzzy Syst. 2013, 21, 1006–1018. [Google Scholar] [CrossRef]
50. Zhang, G.Q.; Dong, Y.C.; Xu, Y.F. Consistency and consensus measures for linguistic preference relations based on distribution assessments. Inf. Fusion 2014, 17, 46–55. [Google Scholar] [CrossRef]
51. Yager, R.R. On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190. [Google Scholar] [CrossRef]
52. Wang, Y.; Xu, Z.S. A new method of giving OWA weights. Math. Pract. Theory 2008, 38, 51–61. [Google Scholar]
53. Huang, D.C.; Zheng, H.R. Scale-extending method for constructing judgment matrix in the analytic hierarchy process. Syst. Eng. 2003, 21, 105–109. [Google Scholar]
  54. Dong, Y.C.; Zhang, H.J. Multiperson decision making with different preference representation structures: A direct consensus framework and its properties. Knowl.-Based Syst. 2014, 58, 45–57. [Google Scholar] [CrossRef]
55. Dong, Y.C.; Luo, N.; Liang, H.M. Consensus building in multiperson decision making with heterogeneous preference representation structures: A perspective based on prospect theory. Appl. Soft Comput. 2015, 35, 898–910. [Google Scholar] [CrossRef]
  56. Archer, J.; Norcini, J.; Southgate, L.; Heard, S.; Davies, H. mini-PAT (Peer Assessment Tool): A valid component of a national assessment programme in the UK? Adv. Health Sci. Educ. 2008, 13, 181–192. [Google Scholar] [CrossRef] [PubMed]
57. Luo, S.Z.; Xing, L.N.; Ren, T. Performance evaluation of human resources based on linguistic neutrosophic Maclaurin symmetric mean operators. Cogn. Comput. 2022, 14, 547–562. [Google Scholar] [CrossRef]
58. de Andrés, R.; García-Lapresta, J.L.; Martínez, L. A multi-granular linguistic model for management decision-making in performance appraisal. Soft Comput. 2010, 14, 21–34. [Google Scholar] [CrossRef]
  59. Zhang, Z.; Yu, W.Y.; Martínez, L.; Gao, Y. Managing multigranular unbalanced hesitant fuzzy linguistic information in multiattribute large-scale group decision making: A linguistic distribution-based approach. IEEE Trans. Fuzzy Syst. 2019, 28, 2875–2889. [Google Scholar] [CrossRef]
  60. Joshi, R.; Ling, F.W.; Jaeger, J. Assessment of a 360-degree instrument to evaluate residents’ competency in interpersonal and communication skills. Acad. Med. 2004, 79, 458–463. [Google Scholar] [CrossRef]
  61. Wu, J.; Wang, S.; Chiclana, F.; Herrera-Viedma, E. Two-Fold personalized feedback mechanism for social network consensus by uninorm interval trust propagation. IEEE Trans. Cybern. 2022, 52, 11081–11092. [Google Scholar] [CrossRef]
62. Marichal, J.L. On Sugeno integral as an aggregation function. Fuzzy Sets Syst. 2000, 114, 347–365. [Google Scholar] [CrossRef]
  63. Sanders, K.; Nguyen, P.T.; Bouckenooghe, D.; Rafferty, A.E.; Schwarz, G. Human resource management system strength in times of crisis. J. Bus. Res. 2024, 171, 114365. [Google Scholar] [CrossRef]
Figure 1. The proposed consensus-based 360 degree feedback evaluation method.
Figure 2. Flowchart of consensus-based 360 degree feedback evaluation method.
Figure 3. The 360 degree feedback evaluation indicators system.
Table 1. Evaluation information matrix for employee X 1 .
D_1 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 4 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 2 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 1 ) }
M 2 { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 4 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) }
M 3 { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 3 , 1 ) } { ( s 4 , 1 ) }
M 4 { ( s 3 , 1 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) }
M 5 { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 4 , 1 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 2 , 1 ) } { ( s 3 , 1 ) }
M 6 { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 3 , 1 ) } { ( s 3 , 0.1 ) ( s 4 , 0.9 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.1 ) ( s 4 , 0.9 ) }
Table 2. Evaluation information matrix for employee X 2 .
D_2 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 4 , 1 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 2 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) }
M 2 { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 4 , 1 ) } { ( s 3 , 0.1 ) ( s 4 , 0.9 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 4 , 1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) }
M 3 { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 1 ) } { ( s 4 , 1 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) }
M 4 { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 1 ) } { ( s 4 , 1 ) } { ( s 4 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 1 ) }
M 5 { ( s 3 , 1 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 4 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 4 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) }
M 6 { ( s 2 , 1 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) }
Table 3. Evaluation information matrix for employee X 3 .
D_3 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 4 , 1 ) } { ( s 2 , 1 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 1 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) }
M 2 { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 1 ) }
M 3 { ( s 3 , 0.1 ) ( s 4 , 0.9 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 0.1 ) ( s 4 , 0.9 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 3 , 0.1 ) ( s 4 , 0.9 ) } { ( s 3 , 1 ) }
M 4 { ( s 4 , 1 ) } { ( s 2 , 1 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.9 ) ( s 3 , 0.1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 4 , 1 ) } { ( s 4 , 1 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) }
M 5 { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 1 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) }
M 6 { ( s 3 , 1 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 1 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) }
Table 4. Evaluation information matrix for employee X 4 .
D_4 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 2 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) } { ( s 3 , 0.1 ) ( s 4 , 0.9 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) }
M 2 { ( s 2 , 1 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 3 , 1 ) } { ( s 4 , 1 ) } { ( s 2 , 0.9 ) ( s 3 , 0.1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) }
M 3 { ( s 1 , 0.1 ) ( s 2 , 0.9 ) } { ( s 3 , 1 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 1 ) } { ( s 1 , 0.1 ) ( s 2 , 0.9 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.9 ) ( s 3 , 0.1 ) }
M 4 { ( s 1 , 0.2 ) ( s 2 , 0.8 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 1 ) } { ( s 4 , 1 ) } { ( s 2 , 1 ) } { ( s 4 , 1 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) }
M 5 { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 1 , 0.1 ) ( s 2 , 0.9 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 4 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) }
M 6 { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 1 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) }
Table 5. Comprehensive evaluation matrix.
Z_i | X_1 | X_2 | X_3 | X_4
Z_1 | (s_3, 0.15) | (s_3, 0.20) | (s_3, 0.06) | (s_3, 0.12)
Z_2 | (s_3, 0.06) | (s_3, 0.41) | (s_3, 0.15) | (s_3, 0.16)
Z_3 | (s_3, 0.18) | (s_3, 0.03) | (s_3, 0.31) | (s_3, 0.08)
Z_4 | (s_3, 0.16) | (s_3, 0.28) | (s_3, 0.15) | (s_3, 0.12)
Z_5 | (s_3, 0.06) | (s_3, 0.43) | (s_3, 0.05) | (s_3, 0.01)
Z_6 | (s_3, 0.05) | (s_3, 0.04) | (s_3, 0.14) | (s_3, 0.17)
Table 6. Evaluation information matrix for employee X 1 .
D_1 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) }
M 2 { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 4 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 1 ) } { ( s 2 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) }
M 3 { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) }
M 4 { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) }
M 5 { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) }
M 6 { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) }
Table 7. Evaluation information matrix for employee X 2 .
D_2 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.6 ) ( s 3 , 0.4 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) }
M 2 { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 4 , 1 ) } { ( s 3 , 0.1 ) ( s 4 , 0.9 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 4 , 1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) }
M 3 { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) }
M 4 { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 2 , 0.6 ) ( s 3 , 0.4 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) }
M 5 { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) }
M 6 { ( s 2 , 0.6 ) ( s 3 , 0.4 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) }
Table 8. Evaluation information matrix for employee X 3 .
D_3 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) }
M 2 { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 3 , 1 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 1 ) }
M 3 { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 3 , 1 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) }
M 4 { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 2 , 0.6 ) ( s 3 , 0.4 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 2 , 0.3 ) ( s 3 , 0.7 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) }
M 5 { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) } { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 1 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) }
M 6 { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 1 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.1 ) ( s 3 , 0.9 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 1 ) }
Table 9. Evaluation information matrix for employee X 4 .
D_4 | I_1^1 | I_1^2 | I_1^3 | I_2^1 | I_2^2 | I_2^3 | I_3^1 | I_3^2 | I_3^3
M 1 { ( s 2 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 1 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) }
M 2 { ( s 2 , 1 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) } { ( s 3 , 1 ) } { ( s 4 , 1 ) } { ( s 2 , 0.9 ) ( s 3 , 0.1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) }
M 3 { ( s 2 , 0.9 ) ( s 3 , 0.1 ) } { ( s 2 , 0.2 ) ( s 3 , 0.8 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) }
M 4 { ( s 1 , 0.1 ) ( s 2 , 0.9 ) } { ( s 2 , 0.4 ) ( s 3 , 0.6 ) } { ( s 3 , 0.9 ) ( s 4 , 0.1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 2 , 1 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 2 , 0.7 ) ( s 3 , 0.3 ) }
M 5 { ( s 3 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 3 , 0.8 ) ( s 4 , 0.2 ) } { ( s 2 , 1 ) } { ( s 3 , 0.3 ) ( s 4 , 0.7 ) } { ( s 3 , 0.4 ) ( s 4 , 0.6 ) } { ( s 3 , 0.6 ) ( s 4 , 0.4 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) }
M 6 { ( s 2 , 1 ) } { ( s 2 , 0.5 ) ( s 3 , 0.5 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 2 , 1 ) } { ( s 3 , 0.2 ) ( s 4 , 0.8 ) } { ( s 3 , 0.5 ) ( s 4 , 0.5 ) } { ( s 3 , 0.7 ) ( s 4 , 0.3 ) } { ( s 2 , 0.8 ) ( s 3 , 0.2 ) }
Table 10. Comprehensive evaluation matrix.
Z_i | X_1 | X_2 | X_3 | X_4
Z_1 | (s_3, 0.11) | (s_3, 0.18) | (s_3, 0.08) | (s_3, 0.13)
Z_2 | (s_3, 0.08) | (s_3, 0.40) | (s_3, 0.15) | (s_3, 0.13)
Z_3 | (s_3, 0.16) | (s_3, 0.19) | (s_3, 0.14) | (s_3, 0.09)
Z_4 | (s_3, 0.17) | (s_3, 0.24) | (s_3, 0.10) | (s_3, 0.16)
Z_5 | (s_3, 0.06) | (s_3, 0.27) | (s_3, 0.01) | (s_3, 0.01)
Z_6 | (s_3, 0.12) | (s_3, 0.11) | (s_3, 0.12) | (s_3, 0.14)
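Remark. The comprehensive values in Tables 5 and 10 are reported in the form of 2-tuple linguistic values (s_k, α) [48], whereas the cells of Tables 1–4 and 6–9 are linguistic distribution assessments. As a reading aid only, the following minimal Python sketch illustrates the standard expectation-based reading of a distribution cell and its translation into a 2-tuple [48,50]; the function names are illustrative assumptions, and the sketch does not reproduce the aggregation or consensus computations used to obtain the tables above.

```python
# Illustrative sketch only: expectation-based reading of a linguistic
# distribution assessment and its translation into a 2-tuple (s_k, alpha).
# Function names are assumptions for this example, not the paper's code.

def distribution_expectation(distribution):
    """Expected scale index of a linguistic distribution {(s_k, p_k)},
    given as a list of (index k, probability p_k) pairs."""
    return sum(index * prob for index, prob in distribution)

def to_two_tuple(beta):
    """Translate a value beta on the scale-index axis into the 2-tuple
    (s_k, alpha), where k = round(beta) and alpha = beta - k."""
    k = round(beta)
    return k, beta - k

if __name__ == "__main__":
    # A cell in the style of Tables 6-9: {(s_2, 0.4), (s_3, 0.6)}
    cell = [(2, 0.4), (3, 0.6)]
    beta = distribution_expectation(cell)  # 2.6
    k, alpha = to_two_tuple(beta)          # (3, -0.4)
    print(f"expected index = {beta:.2f}, 2-tuple = (s_{k}, {alpha:+.2f})")
```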
Table 11. The comparison between the proposal and some existing 360 degree feedback evaluation methods.
360 Degree Feedback Evaluation Method | Evaluation Information Representation | Evaluator Bias | Consensus Issue
Saberzadeh-Ardestani et al. [43] | Numerical | Not considered | Not considered
Baker et al. [46] | Numerical | Not considered | Not considered
Archer et al. [56] | Numerical | Not considered | Not considered
Joshi et al. [60] | Numerical | Not considered | Not considered
Bing-You et al. [23] | Numerical | Not considered | Not considered
de Andrés et al. [58] | Linguistic | Not considered | Not considered
Anisseh et al. [40] | Linguistic | Not considered | Not considered
Espinilla et al. [41] | Linguistic | Not considered | Not considered
The proposed 360 degree feedback evaluation method | Linguistic | Considered in information aggregation | Considered in consensus model