Next Article in Journal
Optimization and H Performance Analysis for Load Frequency Control of Power System with Transmission Delay Under DoS Attacks
Previous Article in Journal
Finite-Series Solutions of Hybrid PDE Systems for Conditional Moments of Regime-Switching Extended CEV Processes with Applications in Finance
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

An Assessment Method for Ability Increment of Scientific Researchers Based on Interval Evaluation and Cloud Model Theory

1
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2
Beijing Institute of Tracking and Telecommunications Technology, Beijing 100096, China
3
Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
*
Authors to whom correspondence should be addressed.
Mathematics 2026, 14(5), 823; https://doi.org/10.3390/math14050823
Submission received: 14 January 2026 / Revised: 18 February 2026 / Accepted: 23 February 2026 / Published: 28 February 2026

Abstract

The evaluation of the ability increment of scientific researchers is the most direct method to test the effect of scientific research training. Considering the variety and high professionalism of scientific research positions, and in order to reduce the difficulty of expert evaluation and improve the operability and reliability of the evaluation methods, this paper presents an evaluation method that combines interval evaluation and cloud model theory based on the characteristics of ability increment in scientific researchers. Firstly, the interval weights of each evaluation index are determined based on the interval analytic hierarchy process (IAHP) method without depending on evaluation data. Secondly, the interval weights are converted into value weights based on the constant deviation ratio criterion to avoid cross-hierarchical uncertainty in the evaluation results due to the direct involvement of interval weights in subsequent evaluation. To reversely convert the interval evaluation results of indexes into a cloud model, the probability density function applicable to the interval indexes is analyzed, and the conversion formula from interval-valued evaluation indexes to a cloud model is directly provided based on probability theory. On this basis, this paper describes in detail the method for reversely analyzing the effectiveness of index evaluation based on the characteristics of the cloud model, thus achieving an effective combination of interval evaluation and the cloud model. Finally, effective evaluation of ability increment indexes of scientific researchers at all levels is accomplished by adopting a comprehensive cloud model and a gray cloud clustering method. The effectiveness of the proposed method in this paper is validated through simulation examples.

1. Introduction

Scientific research training is the main way to quickly improve the ability of scientific researchers, and ability increment evaluation is the most direct way to test the training effect. It is helpful for further clarifying the deficiencies in the training of scientific researchers, identifying the core problems hindering their ability increment, and determining their future training needs. Therefore, the evaluation of the ability increment of scientific researchers is the key to improving their ability level and has great practical significance in promoting the overall development of scientific research units. However, there are many types of scientific research positions, and different positions have different requirements for the abilities of scientific researchers. In order to achieve an objective evaluation of the ability increment of different scientific researchers, it is necessary to design a unified evaluation method applicable to such problems.
Ability increment evaluation should analyze not only the overall ability increment of scientific researchers so as to compare the relative incremental level of abilities among scientific researchers in the same position, but also the increment of each ability so as to identify the core problems hindering the ability increment of each scientific researcher. It can be seen that evaluation is different from sorting, and the evaluation of the ability increment of different scientific researchers should have cardinal significance. In this regard, we divide the whole evaluation method into two steps: (1) Determine the evaluation index weights of the ability increment of scientific researchers. Since the evaluation index system of ability increment of scientific researchers is a hierarchical structure, it is necessary to determine the index weights to measure the relative importance of the indexes. (2) Design an upward aggregation method for indexes at the same hierarchical level. After determining the weights of indexes at each level, the evaluation values of indexes at the same lower hierarchical level should be aggregated into the evaluation value of an index at a higher hierarchical level, starting from the lowest hierarchical level and finally obtaining the overall evaluation value of the ability increment of scientific researchers.
At present, weight determination methods mainly include subjective weighting methods, objective weighting methods and combination weighting methods [1]. Subjective weighting methods include the expert consultation method (Delphi method), set-valued iteration method and analytic hierarchy process (AHP) method. These methods determine index weights based on the knowledge and experience of experts and have cardinal significance due to their independence from objective data. Objective weighting methods include the entropy weighting method, projection pursuit method, and principal component analysis [2,3,4]. These methods determine index weights by analyzing objectively recorded data and can show differences among index data, demonstrating strong objectivity. The combination weighting method integrates the weights obtained by subjective and objective weighting methods to further reduce deviations between the two methods. Commonly used combination weighting methods include multiplicative normalization, linear weighting and deviation minimization [5,6,7]. As for the evaluation of the ability increment of scientific researchers, to facilitate horizontal comparison of the ability increment among scientific researchers in the same position, it is necessary to analyze changes in ability increment against unified criteria. That is, increment values should be calculated using a unified weighting coefficient. Therefore, index weights must be determined using a subjective weighting method to ensure that the index weights remain constant (i.e., with cardinal significance) when the number of scientific researchers in the same position changes.
In addition, given that the evaluation of relative importance of indexes is highly subjective, the expert group should be allowed to carry out evaluations based on interval weights so as to reduce the difficulty and increase the reliability of expert evaluation. However, the evaluation model based on direct interval weights will produce interval evaluation results [8], thus resulting in cross-hierarchical uncertainty for the evaluation of the ability increment of scientific researchers. In this regard, the interval weights obtained should be converted to value weights. Currently, there are many methods to convert interval numbers to point value, such as the conversion methods based on set pair analysis [9] and possibility degree [10]. However, such types of methods are mainly used to intuitively compare the interval numbers by mapping the conversion from interval numbers to real numbers. It can be seen that these methods are more suitable for sorting solutions containing interval numbers, but not for solving criterion-based weights. In theory, equivalent mapping means bidirectional conversion between interval numbers and real numbers, but interval numbers cannot be directly and equivalently mapped to real numbers, so certain specific criteria should be provided to convert the interval weights to value weights. For example, the criteria of minimum projection deviation in the literature [11] and foreground-based theory in the literature [12] all enable the conversion from interval weights to value weights. However, these conversion methods, like the objective weighting method, are dependent on the objectively recorded index data and are therefore not suitable for the evaluation of the ability increment of scientific researchers. Therefore, we adopt the IAHP method, which combines interval analysis and subjective weighting, to determine the weights of indexes at each hierarchical level of the ability increment of scientific researchers. On this basis, we set a constant deviation ratio criterion to convert the interval weights to value weights without depending on the evaluation data of the indexes.
Several widely applied aggregation methods are the weighted average method, fuzzy comprehensive evaluation method as well as cloud model method [13,14,15]. Among them, the cloud model skillfully combines randomness and fuzziness, and achieves the interconversion between qualitative and quantitative index values, based on which, the cloud model solves the problem that uncertain indexes are difficult to quantify in the field of evaluation, and it has been widely used in many engineering fields, such as valuable medical equipment evaluation [16], tunnel seismic resilience assessment [17], and urban water cycle health status evaluation [18]. Considering that the evaluation indexes of the ability increment of scientific researchers are generally qualitative indexes, the cloud model is also considered an applicable evaluation method in this paper. In addition, the evaluation method based on cloud model has two other advantages: (1) The inherent characteristic hyper-entropy of the cloud model can be used to analyze the stability of the index evaluation by the expert group and identify the indexes with weakly consistent evaluation results given by experts. (2) The cloud model has three characteristic parameters, and can produce more comprehensive evaluation results compared with the traditional weighted average and fuzzy comprehensive evaluation methods. In view of the above advantages, we adopt the cloud model as the evaluation method for the ability increment of scientific researchers.
However, the backward cloud generator of the conventional cloud model is applicable only to point value samples, which means the expert group should first determine the point value scores of the indexes before effectively converting them to the corresponding cloud model of the indexes. Considering that qualitative evaluation by experts is required for most of the evaluation indexes of the ability increment of scientific researchers, and is affected by factors such as experts’ professional fields and experience level, it is usually difficult for experts to accurately determine the point value scores of indexes. As a result, if the interval evaluation is directly combined with the cloud model according to the interval algorithm, the interval cloud model obtained will produce results in the form of a range. In the meantime, the interval expansion effect will further lead to remarkable inaccuracy of the evaluation results [19]. Consequently, it is required to effectively combine interval evaluation with the cloud model. In recent years, there have been numerous studies on the application of interval clouds. Reference [20] proposes the linguistic interval-valued polytopic fuzzy set (LIVPOFS), which combines the trapezoidal cloud model and TOPSIS to construct a multi-criteria group decision-making method, and applies it to the evaluation of taxi services in Delhi, India. Reference [21] proposes a cloud model-based interval evidence fusion method, which generates interval evidence through cloud matching to achieve multi-feature fusion decision-making and achieve higher accuracy in motor rotor fault diagnosis. Reference [22] adopts the probability interval hesitant fuzzy aggregation cloud model and regret theory to construct a decision-making method that can fully utilize decision information and determine stage and attribute weights. Reference [23] combines the dual interval rough ensemble cloud model with the optimal and inferior methods for third-party logistics service provider selection decision-making. However, the above literature mainly uses interval clouds for decision-making, which belongs to the sorting problem [24]. Regarding the evaluation issue, reference [25] provides a reverse interval cloud generator. However, the approach of decomposing the upper and lower bounds of the score interval and subsequently deriving the comprehensive cloud based on the cloud aggregation formula is clearly irrational. Therefore, in order to achieve effective conversion of interval-valued evaluation indexes to the cloud model, it is necessary to analyze the conversion mechanism and provide a completely different conversion approach.
In summary, this paper provides an effective evaluation method combining the interval evaluation and cloud model theory, which is applicable to the ability increment of scientific researchers. With this method, the interval weight calculation of evaluation indexes is achieved based on IAHP method by combining strong consistency criterion and interval eigenvalue method, and on this basis, the effective conversion of interval weights to value weights is realized through the constant deviation ratio criterion. At the same time, the interval indexes are considered interval variables following general distribution, and the cloud model generation approach for interval-valued evaluation indexes is provided on the basis of the theoretical calculation formula of cloud model characteristics. On these bases, we provide the index score effectiveness evaluation method, the comprehensive cloud model generation method, and the grey cloud clustering method, and finally achieve effective evaluation of the ability increment of scientific researchers.

2. Theoretical Basis and Problem Analysis

2.1. Concept, Advantages and Implementation Approach of Cloud Model

Cloud model theory is founded on probability, statistics and fuzzy sets; the interconversion between qualitative indexes and quantitative values can be achieved by constructing a cloud model. Consequently, the cloud model that incorporates the probability of random occurrence and fuzzy bounds can quantify fuzzy indexes. The cloud is defined as follows:
Definition 1.
Let Y be a quantitative domain described by an accurate value, i.e., Y = y . If a random number with stable distribution tendency, ξ ( y ) [ 0 , 1 ] , can be found for any value representing the qualitative index, y Y , the random number ξ is called the membership degree of accurate value y in set of qualitative index, i.e., y Y , y ξ . The set of all membership degrees is called the cloud [26].
Cloud model employs three numerical characteristics expectation (Ex), entropy (En), and hyper-entropy (He). The cloud model with characteristic parameters (Ex,En,He) = (0.8,0.2,0.02) as an example is shown in Figure 1. In the figure, Ex is the mean of the qualitative concept converted to accurate values; En represents the fuzziness measure of qualitative indexes; and He represents the uncertainty measure of En.
The cloud model achieves the conversion between qualitative concept and quantitative value through the forward cloud generator or the backward cloud generator. The forward cloud generator implements the conversion from qualitative data to quantitative data. The specific steps for generating ħ cloud drops and their membership degree using the characteristics ( E x , E n , H e ) of the cloud model are as follows:
(1)
Generate n normal random numbers with ( E n , H e ) as expectation and standard deviation, E n i , i = 1 , 2 , , ħ = E n + H e · randn ( ħ ) ;
(2)
Generate n cloud drops with ( E x , E n i ) as expectation and standard deviation, x i , i = 1 , 2 , , ħ = E x + E n i · randn ( ħ ) ;
(3)
Calculate the membership degree of cloud drops, y i = e ( x i E x ) 2 2 ( E n i ) 2 , i = 1 , 2 , , ħ .
The backward cloud generator implements the conversion from quantitative data to qualitative data. The characteristics ( E x , E n , H e ) of the cloud model are output with ħ accurate sample values, as calculated below:
E x = X ¯ = 1 ħ i = 1 ħ x i , i = 1 , 2 , , ħ E n = π / 2 D , D = 1 ħ i = 1 ħ x i E x H e = S 2 ( E n ) 2 , S 2 = 1 ħ 1 i = 1 ħ ( x i E x ) 2
where X ¯ denotes the sample mean; D denotes the first-order absolute central moment of the sample; S2 denotes the sample variance.

2.2. Problem Difficulties and Transformation

To scientifically assess the dynamic development and growth of researchers’ capabilities, it is necessary to establish a hierarchical competency evaluation model. This paper adopts a tree-based assessment system, whose core advantage lies in its ability to clearly and structurally represent the hierarchical and affiliative relationships among competency elements.
This system takes the increment dimension of general core competencies for scientific researchers as the root node, covering key directions such as increment of professional competence, increment of problem-solving ability, increment of achievement generation ability, and increment of talent cultivation ability. Each dimension serves as a primary branch, and researchers of different job types can further refine different competency items based on their respective job characteristics, further decomposing them into several specific and measurable secondary or tertiary competency indicators, forming a complete “root branch leaf” tree structure. This structure not only ensures the systematicity and comprehensiveness of the evaluation, but its good scalability also facilitates personalized adaptation and refinement for different evaluation scenarios. See Section 6 for specific examples.
As can be observed, in the evaluation of the incremental capabilities of scientific researchers, the determination of index weights at each hierarchical level is a primary prerequisite. Currently, combined weighting methods are predominantly utilized for this purpose. Initially, subjective weights for indicators within the same hierarchy are elicited from experts via the Analytic Hierarchy Process (AHP). Subsequently, objective weights are computed using techniques such as entropy weighting and projection pursuit. These weights are then optimized by formulating an optimization objective based on the index scores provided by the expert panel. Ultimately, the optimal combined weights, which integrate both subjective and objective components, are derived through linear weighting and deviation minimization.
Fundamentally, this methodological framework presupposes that the subjective weights obtained through AHP are susceptible to inaccuracies. Conversely, objective weighting methods can amplify the weights assigned to indicators that exhibit substantial deviations in the expert-assigned scores. By analyzing the indicator weights within the same hierarchy, the integration of multiple weighting methodologies can maximize the contribution of subjective weights. In other words, the objective weights serve to refine and correct the subjective weights. In fact, this type of method is more suitable for the weight determination of point values, but not for the evaluation of the ability increment of scientific researchers. The main reasons are as follows:
(1)
The index weights determined for evaluation of the ability increment of scientific researchers should have cardinal significance. The main purpose of ability increment evaluation is to analyze the change in ability increment of scientific researchers against unified criteria. Therefore, regardless of the changes in on-the-job researchers or participants in the evaluation, the index weights will not change. In contrast, the index weights by the objective weighting method will change in each evaluation, so it is more suitable for ability increment sorting among fixed researchers.
(2)
It is difficult to determine the relative importance of indexes of the same hierarchy in the evaluation of the ability increment of scientific researchers. The effect of indexes on the ability increment of scientific researchers are complex, and the weight evaluation among the indexes of the same hierarchy is highly subjective, so it is difficult for experts to give accurate point values when evaluating the relative importance of the evaluation indexes of the same hierarchy. Therefore, the expert group should be allowed to carry out evaluations based on interval weights so as to reduce the difficulty and increase the reliability of expert evaluation.
Thus, the interval weighting method based on subjective weighting is more suitable for the evaluation of the ability increment of scientific researchers. In view of the fact that the interval-valued subjective weighting method relaxes the evaluation requirements, enabling experts to give more certain judgment results, the obtained weight intervals of indexes can be considered highly reliable. However, if the interval weights are directly combined with the cloud model, the interval cloud model obtained will produce evaluation results in the form of a range, with cross-hierarchical uncertainty. Meanwhile, the interval expansion effect can also lead to inaccurate evaluation outcomes, making it unsuitable for assessing the ability increment of scientific researchers.
In addition, considering that qualitative evaluation by experts is required for most of the evaluation indexes of the ability increment of scientific researchers, interval evaluation of indexes by experts should be allowed so as to reduce the difficulty of expert evaluation. However, the traditional backward cloud generator is intended for point value samples, i.e., the method shown in Formula (1) is not suitable for interval evaluation. Therefore, the expert group should analyze the conversion mechanism and provide completely different conversion approaches for the interval evaluation of indexes.
In summary, it can be concluded that the interval evaluation methodology can effectively mitigate the complexity of the evaluation process and improve the precision of the evaluation outcomes. The cloud model facilitates the transformation of qualitative expert judgments into quantitative index assessments, enabling the analysis of the stability of evaluation results and the identification of indicators with suboptimal consistency, thereby yielding more comprehensive evaluation findings. Therefore, the synergistic integration of interval evaluation and cloud model theory can achieve a more robust evaluation of the incremental capabilities of scientific researchers. To effectively combine these two approaches, it is essential to address two critical challenges: the determination of indicator weights and the conversion between interval scores.
(1)
The interval weights of indexes at each hierarchical level of the ability increment of scientific researchers are determined by the subjective interval weighting method, and converted to value weights without depending on the evaluation data of the indexes, so that the obtained weights are suitable for the evaluation with cardinal significance.
(2)
The theoretical meaning of the interval values of indexes given by each expert is analyzed, and the conversion from interval-valued evaluation indexes to cloud model is achieved by mechanism analysis.

3. Establishment of Evaluation Index Weights Based on IAHP and Interval Weight Conversion

To evaluate the ability increment of scientific researchers, the priority is to evaluate the weights of indexes at each hierarchical level. IAHP adopts an interval-based evaluation method, which is a type of fuzzy analytic hierarchy process (FAHP). As a special case of fuzzy numbers, interval numbers are more suitable for practical use compared to triangular fuzzy numbers, trapezoidal fuzzy numbers, intuitionistic fuzzy numbers, and so on. However, considering that objective weight determination methods such as grey relational analysis and entropy weighting require the use of indicator evaluation information, the determined indicator weights lack cardinal significance. Consequently, we employ the subjective weighting method IAHP to derive the interval weights of each evaluation index, as detailed in Section 3.1. Furthermore, to guarantee the stability of the evaluation outcomes, we adopt an interval weight conversion method grounded in the constant deviation ratio criterion. This method transforms the derived interval weights into crisp numerical weights in Section 3.2, yielding a definitive constant weight for each index. This, in turn, establishes a solid foundation for evaluating the ability increment of scientific researchers.

3.1. Calculation of Interval Weights of Evaluation Indexes Based on IAHP

There are multiple indexes of the same hierarchy at different hierarchical levels for the evaluation of the ability increment of scientific researchers, so the weights of indexes should be determined first. Considering that the effect of evaluation indexes on the ability increment of researchers is complex and the weight evaluation is highly subjective, it is difficult for the evaluation expert group to give accurate point values on the weights of evaluation indexes. Therefore, we employ the IAHP method as the weight evaluation method for the evaluation of the ability increment of scientific researchers.
In contrast to objective weighting techniques like entropy weighting and dispersion maximization, the AHP directly assesses index weights, independent of the evaluation data related to the incremental capabilities of scientific researchers. This characteristic renders it more appropriate for comparing the incremental abilities of researchers under a unified set of criteria. IAHP shares similarities with the AHP, with the key distinction being that the judgment matrix employed in IAHP is an interval matrix. This design guarantees that when comparing the relative importance of evaluation indices, even if expert judgments are characterized by fuzziness and uncertainty, the evaluation outcomes can still be represented in the form of interval numbers. The resulting interval evaluation results improve the reliability of the evaluation conclusions, and the index weight intervals derived through this approach are deemed highly credible.
For the IAHP method, we combine strong consistency criteria with the interval eigenvalue method to calculate interval weights. The strong consistency criterion ensures that the interval judgment matrix meets the consistency requirements. The interval eigenvalue rule adopts structured decomposition, which decomposes the interval judgment matrix into upper and lower extreme real number matrices for solving, and ensures the inclusiveness of the weight interval through scaling factors [27]. However, it should be noted that the interval eigenvalue method still uses interval operations when calculating scaling factors, which may lead to the expansion of interval weights to a certain extent, and cannot guarantee that the obtained weight interval is the closest interval. But as can be seen from Section 2.2, the method in this section is only for obtaining the possible intervals of indicator weights. In practical use, we also need to convert interval weights into value weights (taking values in interval weights), so it does not affect the actual use in the evaluation of researchers’ ability increments.
(1)
Constructing interval judgment matrix
In the same way as AHP, the expert group makes pairwise comparison on the intensity of importance among evaluation indexes of the same hierarchy by using the 1–9 scale. When comparing, if the expert group feels uncertain, the comparison result can be given in the form of interval numbers. For an index system at a certain level with n evaluation indexes, the interval matrix is as follows:
A   =   ( [ a i j L , a i j U ] ) n × n = [ 1 , 1 ] [ a 12 L , a 12 U ] [ a 1 n L , a 1 n U ] [ a 21 L , a 21 U ] [ 1 , 1 ] [ a 2 n L , a 2 n U ] [ a n 1 L , a n 1 U ] [ a n 2 L , a n 2 U ] [ 1 , 1 ]
where a i j L denotes the lower bound of the importance of indicator i relative to indicator j; a i j U denotes the upper bound of the importance of indicator i relative to indicator j.
(2)
Judging matrix consistency
For the IAHP method, the expert judgment matrix is an interval matrix. Therefore, the traditional matrix consistency evaluation method depending on whether the calculated CR is greater than 0.1 in the AHP method is no longer applicable. As can be seen from the paper [28]:
The interval judgment matrix A is strongly consistent if and only if k = 1 n ( a i k a k j ) = a i j , i , j = 1 , 2 , , n , for any i < j.
Where a i k a k j = [ a i k L a k j L , a i k U a k j U ] exists for interval numbers a i k = [ a i k L , a i k U ] and a k j = [ a k j L , a k j U ] .
When the interval eigenvalue method is used to calculate interval weights, if the interval judgment matrix A is weakly consistent, the lower bound of the calculated index weights will be greater than the upper bound. At the same time, considering that strong consistency should also be achieved when experts compare the relative importance of evaluation indexes at the same hierarchy, we take the strong consistency criterion as the benchmark for judging.
(3)
Calculating the weight intervals of indexes of the same hierarchy
  A = ( [ a i j L , a i j U ] ) n × n is decomposed to matrices A L = [ a i j L ] n × n and A U = [ a i j U ] n × n . The two decomposition matrices are then used to calculate the normalized eigenvectors W L = ( w 1 L , w 2 L , , w n L ) and W U = ( w 1 U , w 2 U , , w n U ) corresponding to their maximum eigenvalues and two positive real numbers, respectively:
c   =   j = 1 n ( 1 i = 1 n a i j U ) 0.5 , d   =   j = 1 n ( 1 i = 1 n a i j L ) 0.5
The interval weight of indexes obtained using matrix A is W = ( w 1 , w 2 , , w n ) = ( [ c w 1 L , d w 1 U ] , [ c w 2 L , d w 2 U ] , , [ c w n L , d w n U ] ) .

3.2. Conversion of Interval Weights Based on Constant Deviation Ratio Criterion

The index weights used for the evaluation of the ability increment of researchers are preferably deterministic values rather than intervals with uncertainty, so as to avoid cross-hierarchical uncertainty of the evaluation results of the ability increment of researchers and inaccurate evaluation results due to interval expansion effect resulting from interval weight-based calculation. Therefore, we should convert the interval weights to value weights so as to produce more certain evaluation results.
Due to the relaxation of the requirements for expert evaluation, the interval weights of indexes obtained by IAHP are considered highly reliable, and therefore it is more reasonable to assign weights to indexes in this interval. At present, the commonly used interval weight processing methods include the interval weight conversion methods based on foreground-based theory and dispersion maximization. Such types of methods can reflect the relative importance of indexes, and are suitable for solutions sorting but not for the evaluation with cardinal significance due to their dependence on the evaluation data of indexes. As for the evaluation of the ability increment of scientific researchers, the purpose is to use a unified standard to analyze the changes in personnel’s incremental capabilities, rather than just for personnel ranking. We adopt an interval weight conversion method based on a constant deviation ratio for this. Next, we will explain the theory:
In practice, experts first form an anchor point (implicit target weight) for the indicator weight values when determining weights, and then make asymmetric adjustments to cover uncertainty. The theory of bounded rationality suggests that experts are not entirely rational when estimating complex weight intervals [29]. When experts are more sensitive to underestimating weights than overestimating them, it is manifested as experts believing that the “target weight” is closer to the upper limit of the weight interval, that is, the deviation ratio ϖ > 1 ; When experts are more sensitive to overestimating weights than underestimating them, it manifests as the “target weight” being closer to the lower limit of the weight interval, i.e., the deviation ratio ϖ < 1 . The traditional midpoint rule in intervals is a special case when the deviation ratio ϖ = 1 , which forces the assumption of complete symmetry of the deviation. However, behavioral science evidence generally indicates that judgment bias often exhibits asymmetry [30]. When the same group of experts evaluates the same level indicator system, their shared evaluation scale and risk perception are relatively stable. Therefore, it can be assumed that they maintain an overall consistent bias tendency in the same evaluation task. The constant deviation ratio can be used as a personalized and stable “adjustment style” parameter for the expert group. Therefore, we directly calculate the deviation ratio by keeping the proportional deviation constant, relaxing the overly strong and often unrealistic assumption of “symmetry”, and more accurately reflecting the optimistic, conservative, or risk-averse tendencies that may exist in expert collective judgment.
This method assumes that the deviation ratio of the upper and lower bounds of the interval weight of each index to its target weight is constant. i.e.,
d p / d p + = ϖ , p [ 1 , 2 , , n ]
where d p is the lower deviation from the target weight with respect to the interval weight; d p + is the upper deviation from the target weight with respect to the interval weight; ϖ is a constant.
Let the target weight be W ˜ = ( w ˜ 1 , w ˜ 2 , , w ˜ n ) , then d p = w ~ p c w p L , d p + = d w p U w ~ p . Obtain Formula (5) from Formula (4):
ϖ = d p d p + = p = 1 n d p p = 1 n d p + = 1 p = 1 n c w p L p = 1 n d w p U 1
Then
w ˜ p = c w p L + ϖ ϖ + 1 ( d w p U c w p L ) = c w p L + 1 p = 1 n c w p L p = 1 n d w p U p = 1 n c w p L ( d w p U c w p L )
The interval weight vectors of indexes at the same hierarchical level can be converted to the target weight vector W ˜ in numerical form according to Formula (6). Combining Section 3.1 and Section 3.2, the method of determining index weights for evaluation of the ability increment of scientific researchers is shown in the Algorithm 1:
Algorithm 1: Method of determining index weights for evaluation of the ability increment of scientific researchers
Input: Meanings of scales 1 to 9, the total number of hierarchies of evaluation indexes Nmax.
Process: For N = 1 : 1 : N max do
     
1. The expert group makes pairwise comparison on the intensity of importance among evaluation indexes of the Nth hierarchy by using the 1–9 scale to construct an interval judgment matrix as shown in Formula (2);
2. Make judgment on the consistency of the matrix against k = 1 n ( a i k a k j ) = a i j , i , j = 1 , 2 , , n . If there is no consistency, return to step 1 to reconstruct the interval judgment matrix;
3. Decompose the interval judgment matrix to calculate the normalized eigenvectors W L , W U corresponding to the maximum eigenvalue, and the upper and lower coefficient bounds shown in Formula (3);
4. Convert the interval weight vector of indexes of the Nth hierarchy to the target weight vector in numerical form, W ˜ ( N ) , according to Formula (6).
End
Output: Set of target weight vectors of indexes at all hierarchical levels, W ˜ = [ W ˜ ( 1 ) , W ˜ ( 2 ) , , W ˜ ( N max ) ] .

4. Evaluation of the Ability Increment of Scientific Researchers Based on Comprehensive Cloud Model

For the evaluation of complex systems, experts often find it difficult to provide precise point values to describe each evaluation index. To reduce the difficulty of expert evaluation, we allow experts to assign interval scores to the evaluation indexes of scientific researchers’ ability increment. On this basis, we propose a cloud generation method for interval-valued evaluation indexes in Section 4.1 through mechanism analysis, which achieves the conversion of interval-valued evaluation indexes into a cloud model. To evaluate the stability of the evaluation index scoring provided by the expert group, we present a cloud model-based effectiveness evaluation method for index scoring in Section 4.2. This ensures the reliability of the evaluation results by judging the stability of the index evaluation. Finally, we introduce a bottom-up cloud aggregation method and a grey clustering-based calculation method for cloud membership degree in Section 4.2.2, thus achieving a comprehensive evaluation of indexes based on the cloud model. For ease of understanding, we provide the following explanation:
In Section 3, we can establish the target weight vector set W ˜ = [ W ˜ ( 1 ) , W ˜ ( 2 ) , , W ˜ ( N max ) ] for each indicator in all layers of the incremental evaluation of scientific researchers’ abilities based on Algorithm 1, thus achieving the determination of indicator weights. In this section, we will elaborate on the aggregation method of indicator evaluation, using expert groups’ group evaluation results for each indicator to achieve a cloud model representation of indicator evaluation. Then, the weights of each indicator are combined with the cloud model of each indicator evaluation and aggregated upwards to obtain the overall evaluation result of the incremental capabilities of scientific researchers. The combination of indicator weights and indicator evaluation models is presented in Formula (17) of Section 4.2.2. Section 4.1 primarily analyzes how cloud models can be employed to characterize the group evaluation outcomes of expert groups across various use cases. In contrast, Section 4.2.1 focuses on using cloud features to assess the stability of expert group scoring on evaluation indicators, a process that has no direct correspondence with the establishment of indicator weights detailed in Section 3.

4.1. Cloud Generation Method for Interval-Valued Evaluation Indexes

We employ the expert scoring method—inviting several experts to evaluate the index values of the ability increment of scientific researchers. In view of the uncertainty in the evaluation of the ability increment, it would be difficult for experts to give accurate point values for each index due to cognitive ambiguity. In this regard, the experts are allowed to give the most probable interval scores for each index depending on their respective cognitive ability.
To evaluate the ability increment of scientific researchers based on cloud model, the priority is to generate the corresponding index cloud with the values assigned by each expert to the evaluation indexes. Considering that the cloud model is mainly characterized by expectation Ex, entropy En and hyper-entropy He, we provide the concrete method of converting the interval values of indexes given by each expert to the model expectation Ex, entropy En, and hyper-entropy He.
Unlike the conversion method of point values given in Formula (1), the expert group should analyze the conversion mechanism and provide completely different approaches for conversion of interval scores. Taking the calculation of expectation as an example, the conversion of point values is done by directly calculating the mean of scores given by experts. According to the interval algorithm, the mean of interval values is still an interval number. If we follow the conversion method of point values, the expectation of the cloud model will be given in the form of interval numbers, which cannot be directly used to generate cloud model, and has no practical meaning of conversion. In this regard, we first analyze the theoretical meaning of the interval values of indexes given by experts.
Definition 2.
The random variable X with arbitrary distribution in the interval [e, g] is called an interval variable subject to general distribution.
For evaluation indexes of the same hierarchy of the ability increment of scientific researchers, let k be the expert sequence and Xp be the interval variable corresponding to the pth index, and e p ( k ) , g p ( k ) denotes the interval value of the pth evaluation index given by the kth expert. Then, for the interval variable Xp of the pth index, the interval score e p ( k ) , g p ( k ) given by the kth expert is an interval number with general distribution, and its corresponding interval variable is denoted as X p ( k ) . In view of the equal weight of the scores given by each expert, the index scores may fall in the scoring intervals given by all n experts (if some experts have greater voice, the scores given by them should be multiplied by a corresponding weighting coefficient).
On this basis, the distribution function of the interval variable Xp is defined as a set of variables { X p ( k ) , k = 1 , 2 , , m } with general distribution. The distribution function F X p ( x ) is:
F X p ( x ) = P { X p x } = 1 m { P ( X p ( 1 ) x ) + P ( X p ( 2 ) x ) + + P ( X p ( n ) x ) } = 1 m k = 1 m F X p ( k ) ( x )
Then the density function f X p ( x ) is:
f X p ( x ) = F X p ( x ) = 1 m k = 1 m f X p ( k ) ( x )
Based on the above analysis, we obtain the probability density function of interval-valued evaluation indexes, then we can calculate the mean, first-order absolute central moment and variance of evaluation indexes directly based on the probability theory, thus achieving the conversion of the interval values of evaluation indexes to expectation Ex, entropy En, and hyper-entropy He.
From the definition of mean and Formula (8), obtain the mean of the pth index by:
X ¯ p = + x f X p ( x ) d x = + x 1 m k = 1 m f X p ( k ) ( x ) d x = 1 m k = 1 m + x f X p ( k ) ( x ) d x = 1 m k = 1 m μ p ( k )
where μ p ( k ) is the mean. It follows that the mean of the evaluation index can be calculated from the mean of the interval scores given by n experts.
The first-order absolute central moment of the pth index is:
D p = + | x X ¯ p | f X p ( x ) d x = + | x X ¯ p | 1 m k = 1 m f X p ( k ) ( x ) d x = 1 m k = 1 m + | x X ¯ p | f X p ( k ) ( x ) d x
From the definition of variance and Formula (8), obtain the variance of the pth index by:
S p 2 = + ( x X ¯ p ) 2 f X p ( x ) d x = + ( x X ¯ p ) 2 1 m k = 1 m f X p ( k ) ( x ) d x = 1 m k = 1 m + ( x X ¯ p ) 2 f X p ( k ) ( x ) d x
According to + f X p ( x ) d x = 1 , + x f X p ( k ) ( x ) d x = μ p ( k ) and + ( x μ p ( k ) ) 2 f X p ( k ) ( x ) d x = σ p ( k ) , it can be obtained:
S p 2 = 1 m k = 1 m + ( x μ p ( k ) + μ p ( k ) X ¯ p ) 2 f X p ( k ) ( x ) d x = 1 m k = 1 m [ ( σ p ( k ) ) 2 + ( X ¯ p μ p ( k ) ) 2 ]
where σ p ( k ) is the standard deviation. It follows that the variance of the evaluation index can be calculated from the mean and standard deviation of the interval scores given by m experts.
This enables experts to not only provide the most likely interval values of indicators when scoring, but also to give the most likely distribution of indicator values within the score interval based on their own cognitive experience. If an expert gives a score of [82, 85] to an index with normal distribution, it means that the expert thinks that the most probable value of the index is 83.5, with a probability of 99.73% in the interval [82, 85]. In general, experts will ignore small probability events when giving interval scores, and consider that the values of the index follow uniform distribution in the interval, i.e., the index scores show equal probability in the interval.
For X p ( k ) ~ ( e p ( k ) , g p ( k ) ) , obtain the probability density function, mean μ p ( k ) , and standard deviation σ p ( k ) by:
f X p ( k ) ( x ) = 1 g p ( k ) e p ( k ) , x [ e p ( k ) , g p ( k ) ] 0 , x ( , e p ( k ) ) ( g p ( k ) , + ) , μ p ( k ) = e p ( k ) + g p ( k ) 2 , σ p ( k ) = ( e p ( k ) g p ( k ) ) 2 12
Then, when all the interval scores given by experts follow uniform distribution, from Formulas (9), (10) and (12), obtain:
X ¯ p = 1 2 m k = 1 m ( e p ( k ) + g p ( k ) ) D p = 1 m k = 1 m X ¯ p ( X ¯ p x ) f X p ( k ) ( x ) d x + X ¯ p + ( x X ¯ p ) f X p ( k ) ( x ) d x = 1 m k = 1 n ( k ) ( k ) = 1 2 ( g p ( k ) X ¯ p ) 2 ( e p ( k ) X ¯ p ) 2 g p ( k ) e p ( k ) , e p ( k ) X ¯ p 1 2 ( e p ( k ) X ¯ p ) 2 ( g p ( k ) X ¯ p ) 2 g p ( k ) e p ( k ) , g p ( k ) X ¯ p 1 2 ( g p ( k ) X ¯ p ) 2 + ( e p ( k ) X ¯ p ) 2 g p ( k ) e p ( k ) , e p ( k ) < X ¯ p < g p ( k ) S p 2 = 1 3 m k = 1 m ( e p ( k ) ) 2 + e p ( k ) g p ( k ) + ( g p ( k ) ) 2 1 4 m 2 k = 1 m ( e p ( k ) + g p ( k ) ) 2
Based on the above analysis, we can achieve the conversion of interval-valued evaluation indexes to cloud model through mechanism analysis.
E x ( p ) = X ¯ p , E n ( p ) = π / 2 D p , H e ( p ) = S p 2 ( E n ( p ) ) 2

4.2. Effectiveness and Comprehensive Evaluation of Index Scoring Based on Cloud Model

The major advantage of the cloud model is its hyper-entropy, which can be used to judge the stability of evaluation index scoring by the expert group. By evading unstable evaluation, relatively stable evaluation results can be obtained. In this regard, we provide a cloud model-based effectiveness evaluation method of index scoring in Section 4.2.1. The bottom-up cloud aggregation method enables the evaluation of indexes at each hierarchical level. In this regard, we provide a grey clustering-based calculation method of cloud membership degree in Section 4.2.2, thus achieving comprehensive evaluation of indexes based on cloud model by judging the similarity between clouds.

4.2.1. Cloud Model-Based Effectiveness Evaluation of Index Scoring

The hyper-entropy He is a measure of the dispersion degree of entropy En, namely the entropy of entropy. The greater the value of hyper-entropy, the greater the dispersion degree of cloud drops in the cloud model. The fogging phenomenon makes the cloud map blurred. From the three numerical characteristics ( E x , E n , H e ) of the cloud model, obtain the upper bound function f u p , the lower bound function f d o w n and the hyper-entropy function f H e of the cloud model:
f u p = e ( x E x ) 2 2 ( E n + 3 H e ) 2 ,   f d o w n = e ( x E x ) 2 2 ( E n 3 H e ) 2 ,   f H e = e ( x E x ) 2 2 ( E n + 3 H e ) 2 e ( x E x ) 2 2 ( E n 3 H e ) 2
The upper bound function, lower bound function, and hyper-entropy function of the cloud model corresponding to different values of hyper-entropy are shown in Figure 2. It can be seen that the greater the value of hyper-entropy, the greater the randomness of cloud drops, and the more serious the fogging degree of the cloud. From the three-sigma rule of the cloud model:
99.73% of the random entropy E n falls within the [ E n 3 H e , E n + 3 H e ] interval. When En′ = En + 3He, the distribution of cloud droplets is widest, forming the upper boundary f u p of the cloud. When En′ = En−3He, the distribution of cloud droplets is narrowest, forming the lower boundary f d o w n of the cloud. Therefore, the membership degree y = e ( x E x ) 2 2 ( E n ) 2 of the cloud droplets generated by 99.73% of E n must fall within the envelope band defined by f d o w n and f u p . When the cloud model hyper-entropic He < En/3, 99.73% of cloud droplets are between the lower boundary f d o w n and the upper boundary f u p . At this time, the atomization degree of the cloud map is low, and the normality is good. Therefore, comparing the size of He and En/3 can be used as a benchmark to evaluate whether the cloud model is severely atomized.
I.e., after the expert group scores the evaluation indexes, the interval values are converted into a cloud model using the method described in Section 4.1. Then, the magnitudes of He and En/3 are compared: He < En/3 indicates consistent index scores given by the expert group and stable evaluation cloud mode of this index, and He > En/3 indicates weakly consistent index scores given by the expert group and poorly stable evaluation cloud model of this index. In the latter case, the expert group should rescore the index after discussion, so as to avoid unstable evaluation of indexes by the expert group and obtain more stable evaluation results.

4.2.2. Cloud Model-Based Comprehensive Evaluation of Index Scoring

Finally, when conducting a comprehensive evaluation of the growth of researchers’ abilities, we adopt a bottom-up aggregated evaluation approach. This approach allows for a detailed evaluation of the various indicators that contribute to the incremental capabilities of researchers. According to the algorithm in cloud computing in Table 1, the comprehensive cloud model C l o u d ( E x , E n , H e ) can be computed by combining the weights with the characteristics of cloud of indexes at the same hierarchical level as follows:
E x = p = 1 n w ˜ p E x ( p ) , E n = p = 1 n ( w ˜ p ( E n ( p ) ) 2 ) , H e = p = 1 n ( w ˜ p ( H e ( p ) ) 2 )
where n represents the number of indexes included in the same evaluation hierarchy; and w ˜ p represents the weight of the pth index.
In view of the uncertainty in the evaluation of the ability increment of scientific researchers, we adopt the qualitative language to reflect the fuzziness in the evaluation, and take the four-level (“excellent”, “good”, “medium” and “poor”) as the benchmark for evaluation of the ability increment of researchers. Qualitative language and intervals are usually not equally divided. In this regard, the golden section method is employed to determine the intervals for many problems. As far as the assessment of scientific researchers’ capacity increment is concerned, the conventional interval partitioning approach is more consistent with public cognition in practical applications. Therefore, by consulting experts, we employ the distribution intervals shown in Table 2 to correspond to the four-level evaluation criteria.
According to the three-sigma rule of cloud, the bilateral constraint interval [e, g] can be converted to a cloud model by Formula (18). As a result, a stand cloud map corresponding to multiple qualitative evaluation criteria is established.
E x = e + g 2 , E n = g e 6 , H e = τ
where τ is a given constant to reflect the randomness of evaluation values. Considering the strongest determinacy of the left and right bounds of the evaluation criteria, the semi-normal cloud model is taken, i.e., when e = 0, Ex = 0 and En = g/3 exist; and when g = 100, Ex = 100 and En= (ge)/3 exist.
Next, the similarity between the cloud models is calculated. The standard cloud corresponding to the highest similarity is taken as the evaluation result of the index, which is referred to as grey cloud clustering. The membership degree of the cloud model is computed using this grey cloud clustering method. The concrete approach for calculating the similarity between clouds is presented in Algorithm 2.
Definition 3.
Generate ħ cloud drops by using the forward cloud generator for C l o u d 1 ( E x 1 , E n 1 , H e 1 ) . Let the membership degree of the ith cloud drop ( x ( i ) , η ( i ) ) in C l o u d 2 ( E x 2 , E n 2 , H e 2 ) be η ˜ ( i ) . Then the similarity between C l o u d 1 and C l o u d 2 is 1 ħ i = 1 ħ η ˜ ( i ) .
Algorithm 2: Specific calculation of similarity between cloud models
Input: Cloud models C l o u d 1 ( E x 1 , E n 1 , H e 1 ) and C l o u d 1 ( E x 1 , E n 1 , H e 1 ) .
Process: For i = 1:1: ħ
     
1. Generate normal random numbers E n i = E n 1 + H e 1 · randn ( 1 ) in C l o u d 1 ( E x 1 , E n 1 , H e 1 ) with ( E n 1 , H e 1 ) as expectation and standard deviation;
2. Generate cloud drops x i = E x 1 + E n i · randn ( 1 ) in C l o u d 1 ( E x 1 , E n 1 , H e 1 ) with ( E x 1 , E n i ) as expectation and standard deviation;
3. Substitute cloud drop x i into C l o u d 2 ( E x 2 , E n 2 , H e 2 ) to calculate its membership degree η ˜ ( i ) = e ( x i E x 2 ) 2 2 ( E n 2 ) 2 .
End
Output: Similarity between C l o u d 1 and C l o u d 2 ,   κ = 1 ħ i = 1 ħ η ˜ ( i ) .
The characteristics of the cloud model are taken into account when sorting the index scores: Ex represents the mean of the index score; En represents the reliability of the index score; and He represents the stability of the index score. Therefore, like the classical weighted average method, we take the expectation of the comprehensive cloud model as the index score of the ability increment of scientific researchers. When sorting the indexes, the priority is given to the evaluation level, followed by the evaluation score.
There are four situations of sorting index scores: (1) If an evaluation index belongs to the criterion at the same hierarchical level of its expectation, it will be rated by the expectation size; (2) If an evaluation index belongs to the criterion at a higher hierarchical level than that of its expectation, it will be rated lower than the indexes at a higher hierarchical level than that of its expectation, but rated higher than the indexes at the same hierarchical level as that of its expectation; (3) If an evaluation index belongs to the criterion at a lower hierarchical level than that of its expectation, it will be rated higher than the indexes at a lower hierarchical level than that of its expectation, but rated lower than the indexes at the same hierarchical level than that of its expectation; and (4) For similar evaluation indexes not belonging to the criterion at the same hierarchical level of their expectations, they will be rated by the evaluation scores. The sorting of ability increment among different scientific researchers is the same as above.
In summary, the complete algorithm flow for evaluating the ability increment of scientific researchers combing Section 4.1 and Section 4.2 is shown in Algorithm 3:
Algorithm 3: Evaluation method for ability increment of scientific researchers combining interval evaluation and cloud model theory
Input: Indicator system structure, relative importance intervals between indicators at the same level (determined by the expert group), interval scores for the lowest level indicators (determined by the expert group), Standard cloud models C l o u d 1 ( E x 1 , E n 1 , H e 1 ) ~ C l o u d 4 ( E x 4 , E n 4 , H e 4 ) and the number of cloud drops ħ .
Process: 
1. Establish the set of target weight vectors of indexes at all hierarchical levels of the evaluation system for the ability increment of scientific researchers as per Algorithm 1, W ˜ = [ W ˜ ( 1 ) , W ˜ ( 2 ) , , W ˜ ( N max ) ]
2. Experts give the interval scores of the indexes of each hierarchy and calculate the mean, first-order absolute central moment and variance of each index according to Formula (14).
3. Convert the interval evaluation results by the expert group to the cloud model of the corresponding index according to Formula (15).
4. Compare He and En/3. If He < En/3, the evaluation cloud model of the index is relatively stable, and continue to step 5; On the contrary, the evaluation cloud model of the indicators is unstable, and the expert group needs to reevaluate the indicators.
5. Obtain the comprehensive cloud model C l o u d ( E x , E n , H e ) by combining the weights and the cloud characteristics of indexes at the same hierarchical level according to Formula (17).
6. Calculate the similarity κ ( i ) , i = 1 , 2 , 3 , 4 between the comprehensive cloud model C l o u d ( E x , E n , H e ) and each standard cloud model according to Algorithm 2.
7. Determine the criteria of the standard cloud corresponding to max ( κ ( i ) ) .
8. Take the expectation Ex of the comprehensive cloud model as the index score of the ability increment of scientific researchers.
Output: Comprehensive cloud model C l o u d ( E x , E n , H e ) and its evaluation level and score.

5. Numerical Cases

In order to support a better understanding of the method proposed in this article, we briefly demonstrate the complete calculation process of the method in Figure 3. Below, we will use a simple numerical example to demonstrate steps 1–3 in Figure 3. To avoid confusion among readers, it should be noted that consistency in Section 3.1 refers to the degree of consistency in the expert group’s judgment of the relative importance between indicators, while consistency in Section 4.2.2 refers to the degree of consistency in the expert group’s evaluation of indicators.
Figure 4 shows a simple tree structure representing the architecture of indicators. Formula (19) is the interval number judgment matrix for the importance between indicators Z1Z3 given by the expert group. Table 3 shows the interval evaluations given by the expert group to a certain researcher for indicators Z1Z3.
A Z 1 ~ Z 3 = [ 1 , 1 ] 1 5 , 1 3 1 , 2 3 , 5 [ 1 , 1 ] 3 , 6 1 2 , 1 1 6 , 1 3 [ 1 , 1 ]
Next, we calculate the weights of the indicators and represent them in a cloud model. Finally, we combine the weights of the indicators with the cloud model to obtain a comprehensive cloud model that represents the incremental capabilities of the researcher.
(1)
Weight calculation
  • Consistency check. Determine the consistency of the interval matrix shown in Formula (19) based on k = 1 n ( a i k a k j ) = a i j , i , j = 1 , 2 , , n . From k = 1 3 ( a 1 k a k 2 ) = 1 5 , 1 3 = a 12 , k = 1 3 ( a 1 k a k 3 ) = 1 , 2 = a 13 , and k = 1 3 ( a 2 k a k 3 ) = 3 , 6 = a 23 , it can be seen that the interval matrices are consistent.
  • Calculation of indicator interval weights. Decompose the interval matrix A Z 1 ~ Z 3 into matrices A Z 1 ~ Z 3 L and A Z 1 ~ Z 3 U . Calculate the standardized eigenvectors W Z 1 ~ Z 3 L = ( 0.1876 , 0.6719 , 0.1405 ) and W Z 1 ~ Z 3 U = ( 0.1879 , 0.6637 , 0.1484 ) ; The corresponding two coefficients are c Z 1 ~ Z 3 = 0.9241 and d Z 1 ~ Z 3 = 1.0742 ; The interval weights of indicators Z1, Z2, and Z3 are W Z 1 ~ Z 3 = ( [ 0.1734 , 0.2018 ] , [ 0.6209 , 0.7130 ] , [ 0.1298 , 0.1594 ] ) .
  • Interval weight conversion. According to Formula (5), the constant deviation ratio is 1.0229. According to Formula (6), the weight vector of indicators Z1, Z2, and Z3 is W ˜ Z 1 ~ Z 3 = ( 0.1878 , 0.6675 , 0.1448 ) .
(2)
Indicator cloud model calculation
  • Conversion of indicator cloud models. Calculate the parameters of each indicator using Formula (14), and convert the interval into the corresponding indicator cloud model using Formula (15). The obtained results are shown in Table 4.
  • Analysis of the effectiveness of indicator evaluation. According to the hyper-entropy and entropy values of the indicator cloud model, it can be seen that indicators Z1, Z2, and Z3 all satisfy He < En/3, indicating that the expert group has good consistency in scoring the three indicators.
(3)
Comprehensive cloud model computing
Based on the weight vector W ˜ Z 1 ~ Z 3 of the indicators and the results of the indicator cloud model in Table 4, calculate the comprehensive cloud model C l o u d Z ( 56.8247 , 1.8117 , 0.5438 ) that represents the incremental ability of the researcher according to Formula (17).

6. Experimental Study and Effect Analysis

This section takes researchers engaged in system requirement research and argumentation design positions as an example for analysis. The scientific researchers in this position are primarily engaged in the research on requirements analysis, top-level planning, development strategy, system construction and development, etc. The professionalism skills can be further divided into basic technical skills, overall design skills, and analysis and evaluation skills. Among them, basic technical skills refer to the ability to provide basic support for requirements demonstration, including the theoretical technical level in requirements development and the level of utilizing requirements development tools; Overall design skills refer to the top-level planning and design ability to carry out system construction planning and overall scheme design; Analysis and evaluation skills refer to the ability to sort out and summarize requirements, and comprehensive analysis and evaluation ability based on modeling and simulation and multivariate data fusion; Problem-solving skills refer to the ability to find and propose problems, to analyze problem mechanisms and design solutions, and to coordinate and solve problems; Achievement production skills refer to the ability to produce theoretical results; And talent-cultivating skills refer to the ability to guide professional theories, problem research and result achievement. On these bases, we establish an evaluation index system of the ability increment of scientific researchers, as shown in Figure 5.
In general, the hyper-entropy τ in Formula (18) is usually between 0.001 and 0.1 [31]. Set to τ = 0.1 is only used for visualizing standard clouds in this paper. Then, according to Formula (18), the intervals of qualitative evaluation criteria given in Table 1 are converted to the standard cloud models: C l o u d p o o r ( 0 , 20 , 0.1 ) , C l o u d m e d i u m ( 70 , 3.33 , 0.1 ) , C l o u d g o o d ( 85 , 1.67 , 0.1 ) , C l o u d e x c e l l e n t ( 100 , 3.33 , 0.1 ) . The characteristics are shown in Table 5. In addition, the cloud map display in this section selects a cloud droplet count of 10,000.
The qualitative evaluation criteria in Table 5 are converted to the cloud map by using the forward cloud generator given in Section 2.1. The cloud corresponding to each criterion contains 10,000 cloud drops, as shown in Figure 6.
It can be seen that the expectation in the cloud model is the most representative value of the qualitative evaluation criteria. Taking the evaluation criterion “poor” as an example, when the ability increment of a scientific researcher is 0, the incremental ability is 100% rated as “poor”. With the increase of the evaluation value of the ability increment, the probability of the ability increment of the scientific researcher being rated as “poor” gradually decreases. When the evaluation score is above 60, there is only a 0.135% probability that the scientific researcher is rated as “poor”.

6.1. Experiment on Establishment of Evaluation Index Weights Based on Iahp and Interval Weight Conversion

In this section, we will calculate the weights of each index in the evaluation index system for the ability increment of scientific researchers as per Algorithm 1. The scale of relative importance among the indexes is obtained by using the 1–9 scale, as shown in Table 6.
To ensure the diversity of evaluation perspectives and the professionalism of evaluation results, we have invited 10 domain experts from the research institute based on the principles of multidisciplinary, multi professional title, and multi experience level. Among them, there are 3 urban planning and design experts, 3 public management experts, and 4 transportation engineering experts. The average working experience of the expert group is 14 years, including 3 researchers, 2 senior engineers, 3 associate professors, and 2 senior engineers.
Expert group make pairwise comparison among indexes of the same hierarchy in Figure 5 as per Table 6:
A U 1 ~ U 4 = [ 1 , 1 ] 1 3 , 1 2 1 3 , 1 [ 1 , 2 ] [ 2 , 3 ] [ 1 , 1 ] [ 1 , 2 ] [ 3 , 4 ] [ 1 , 3 ] 1 2 , 1 [ 1 , 1 ] [ 2 , 4 ] 1 2 , 1 1 4 , 1 3 1 4 , 1 2 [ 1 , 1 ] , A U 111 ~ U 112 = [ 1 , 1 ] [ 1 , 3 ] [ 1 3 , 1 ] [ 1 , 1 ] , A U 131 ~ U 132 = [ 1 , 1 ] [ 2 , 4 ] [ 1 4 , 1 2 ] [ 1 , 1 ] A U 11 ~ U 13 = [ 1 , 1 ] 1 5 , 1 3 1 , 2 3 , 5 [ 1 , 1 ] 3 , 6 1 2 , 1 1 6 , 1 3 [ 1 , 1 ] , A U 21 ~ U 23 = [ 1 , 1 ] 1 , 3 2 , 4 1 3 , 1 [ 1 , 1 ] 1 , 3 1 4 , 1 2 1 3 , 1 [ 1 , 1 ] , A U 41 ~ U 43 = [ 1 , 1 ] 1 3 , 1 1 2 , 1 1 , 3 [ 1 , 1 ] 1 , 2 1 , 2 1 2 , 1 [ 1 , 1 ]
The consistency test result of interval numbers A U 1 ~ U 4 is given in Table 7, and the consistency test results of other interval numbers A U 11 ~ U 13 , A U 21 ~ U 23 , A U 41 ~ U 43 , A U 111 ~ U 112 and A U 131 ~ U 132 are given in Table 8. From Table 7 and Table 8, it can be seen that k = 1 n ( a i k a k j ) = a i j exists for all interval judgment matrices. Therefore, the evaluations of all index weights of the same hierarchy by 10 experts all pass the consistency test.
The weight intervals of indexes of the same hierarchy are calculated. The judgment matrix of interval numbers A U 1 ~ U 4 is decomposed into matrices A U 1 ~ U 4 and A U 1 ~ U 4 U . From the decomposition matrices, we can obtain the W U 1 ~ U 4 L = ( 0.1628 , 0.4374 , 0.2812 , 0.1186 ) , and W U 1 ~ U 4 U = ( 0.1766 , 0.3872 , 0.3247 , 0.1115 ) , and the two corresponding coefficients, c U 1 ~ U 4 = 0.8894 , and d U 1 ~ U 4 = 1.1100 . The interval weights of indexes obtained from the judgment matrix of interval numbers A U 1 ~ U 4 are W U 1 ~ U 4 = ( [ 0.1448 , 0.1960 ] , [ 0.3891 , 0.4298 ] , [ 0.2501 , 0.3604 ] , [ 0.1055 , 0.1238 ] ) .
Similarly, we can obtain the interval weights of indexes of the same hierarchy from the judgment matrices of interval numbers A U 11 ~ U 13 , A U 21 ~ U 23 , A U 41 ~ U 43 , A U 111 ~ U 112 and A U 131 ~ U 132 , as shown in Table 9.
Finally, the interval weights are converted based on the constant deviation ratio criterion. According to Formula (6), the interval weight vectors of indexes of the same hierarchy are converted into the target weight vectors in numerical form W ˜ , i.e., W U 1 ~ U 4 W ˜ U 1 ~ U 4 = ( 0.1705 , 0.4095 , 0.3054 , 0.1147 ) ; W U 11 ~ U 13 W ˜ U 11 ~ U 13 = ( 0.1878 , 0.6675 , 0.1448 ) ; W U 21 ~ U 23 W ˜ U 21 ~ U 23 = ( 0.5159 , 0.3054 , 0.1787 ) ; W U 41 ~ U 43 W ˜ U 41 ~ U 43 = ( 0.2414 , 0.4382 , 0.3204 ) ; W U 111 ~ U 112 W ˜ U 111 ~ U 112 = ( 0.6340 , 0.3660 ) ; W U 131 ~ U 132 W ˜ U 131 ~ U 132 = ( 0.7388 , 0.2612 ) . The specific weight results are shown in Figure 7:

6.2. Experiment on Evaluation of the Ability Increment of Scientific Researchers Based on Comprehensive Cloud Model

We divide the simulation into three steps in this section: (1) Converting the interval evaluation of indexes to cloud model as per the method in Section 4.1 (see Section 6.2.1); (2) Analyzing the effectiveness of index scores as per the method in Section 4.2.1 to avoid weakly consistent evaluation results (see Section 6.2.2); and (3) Generating the comprehensive cloud model of indexes at the higher hierarchical level as per the method in Section 4.2.2, and determining the corresponding level and scores of the ability increment (see Section 6.2.3).

6.2.1. Experiment on Cloud Generation of Interval-Valued Evaluation Indexes

Taking the evaluation of the ability increment of a scientific researcher as an example, Expert group score the basic indexes with no subnodes shown in Figure 7. As per the method in Section 3.1, the experts are allowed to give interval scores for the evaluation indexes, as shown in Table 10, Table 11, Table 12 and Table 13.
By default, the experts have ignored small probability events when giving interval scores, i.e., the index scores given by the experts follow uniform distribution in the interval. Then, the interval evaluation made by the expert group is converted to a cloud model of the corresponding index according to Formula (15). The results obtained are shown in Table 14. It is critical to highlight that the mean and variance of each index in the table calculated according to Formula (14) are the same as those calculated according to Formulas (9) and (12). Therefore, for score intervals with general distribution, the statistical parameters can be calculated according to Formulas (9), (10) and (12). For score intervals of indexes given by experts with uniform distribution, the calculation using Formula (14) is simpler.

6.2.2. Experiment on Cloud Model-Based Effectiveness Evaluation of Index Scoring

As shown in Table 14, He < En/3 exists for the evaluation results of all ability increment indexes, except for the ability increment in the theoretical technical level in requirements development U111. It is apparent that the scores given are weakly consistent for the ability increment in the theoretical technical level in requirements development U111. The evaluation cloud model of this index is formed by using the forward cloud generator, as shown in Figure 8.
As shown in the figure, this cloud map shows obvious fogging, indicating that the evaluation cloud model of this index is unstable. Therefore, the expert group should re-score this index after discussion. The new scores and the characteristics of the new converted cloud model are shown in Table 15. As a result, the unstable evaluation of the index by the expert group is avoided and relatively stable evaluation results of the index are obtained.
In the following, based on the characteristic parameters of cloud models of indexes given in Table 14 and Table 15, the cloud maps of all 12 basic indexes in Figure 7 are generated by using the forward cloud generator, as shown in Figure 9. It can be seen that the evaluation results by the expert group of the five basic indexes included in the increment of professionalism skills of this scientific researcher are weakly consistent, and the evaluation results of the basic indexes included in the ability increment in conducting problem research, ability increment in producing achievements and ability increment in cultivating talents are relatively consistent. In this regard, the evaluation criteria of the five basic indexes included in the increment of professionalism skills can be further refined, so that the experts can provide definite scores when evaluating the indexes, so as to achieve relatively consistent evaluation.
In addition, although there are some difference in the evaluation of the basic indexes U111, U112, U121, U131 and U132 of this scientific researcher by the expert group (the scores of the 5 basic indexes are closer to the consistency bound of He = En/3), the scores of these 5 basic indexes still satisfy the consistency requirement of He < En/3 and do not affect the subsequent evaluation of the indexes. It can be seen that the method in this paper allows a certain deviation in the evaluation of indexes by the expert group, which is also in line with the actual situation.

6.2.3. Experiment on Cloud Model-Based Comprehensive Evaluation of Indexes

We divide the simulation into two steps in this section: (1) Generating the cloud models of all indexes by the bottom-up aggregation method in Section Generating Comprehensive Evaluation Cloud; (2) Determining the level of each index as per Algorithm 3 and analyze the scores of each index in Section Determining the Level and Score of Each Index.
Generating Comprehensive Evaluation Cloud
In order to conduct a comprehensive evaluation on the ability increment of scientific researchers, we first generated a comprehensive evaluation cloud by combining multiple cloud models of indexes, and then in combination with the weights of indexes of the same hierarchy given in Figure 7 and the cloud characteristics of each basic index given in Table 14 and Table 15, calculated the comprehensive cloud model of indexes at higher hierarchical level according to Formula (17).
(1)
The cloud models of Level 3 indexes are aggregated from Level 4 indexes: The cloud model of Level 3 index of the increment of basic technical skills is obtained from C l o u d U 111 and C l o u d U 112 , as C l o u d U 11 ( 77.3855 , 1.7059 , 0.4843 ) ; the cloud model of Level 3 index of the increment of overall design skills is obtained from C l o u d U 121 , as C l o u d U 12 ( 86.2500 , 1.4024 , 0.4426 ) ; and the cloud model of Level 3 index of the increment of analysis and evaluation skills is obtained from C l o u d U 131 and C l o u d U 132 , as C l o u d U 13 ( 80.6012 , 1.5553 , 0.4655 ) .
(2)
The cloud models of Level 2 indexes are aggregated from Level 3 indexes: The cloud model of Level 2 index of the increment of professionalism skills is obtained from C l o u d U 11 , C l o u d U 12 and C l o u d U 13 , as C l o u d U 1 ( 83.7759 , 1.4865 , 0.4541 ) ; the cloud model of Level 2 index of the increment of problem research skills is obtained from C l o u d U 21 , C l o u d U 22 and C l o u d U 23 , as C l o u d U 2 ( 82.4366 , 1.3699 , 0.1838 ) ; and the cloud model of Level 2 index of the increment of achievement-production skills is obtained from C l o u d U 31 , as C l o u d U 3 ( 70.6000 , 1.4945 , 0.2451 ) .
(3)
The cloud model of Level 2 index of the increment of talent-cultivating skills is obtained from Level 3 indexes C l o u d U 41 , C l o u d U 42 and C l o u d U 43 , as C l o u d U 4 ( 79.5179 , 1.5554 , 0.2389 ) . The cloud models of Level 1 indexes are aggregated from Level 2 indexes: The comprehensive cloud model of Level 1 index of the ability increment of scientific researchers is obtained from C l o u d U 1 , C l o u d U 2 , C l o u d U 3 and C l o u d U 4 , as C l o u d t o t a l ( 78.7235 , 1.4508 , 0.2718 ) .
The comprehensive cloud models of indexes at higher hierarchical level after aggregation are shown in Figure 10, and the complete evaluation models of the ability increment of scientific researchers are shown in Figure 11. As shown in Figure 10, the comprehensive cloud model shows low fogging, indicating that the evaluation results of the ability increment of scientific researchers are relatively stable.
Determining the Level and Score of Each Index
As per Algorithm 2, we first calculate the similarity between the comprehensive cloud model C l o u d t o t a l corresponding to the comprehensive evaluation value of the ability increment of scientific researchers and the C l o u d p o o r , C l o u d m e d i u m , C l o u d g o o d and C l o u d e x c e l l e n t . Taking the number of cloud drops as 10000, the calculation results are shown in Table 16.
As shown in Table 16, the similarity between the comprehensive cloud model C l o u d t o t a l and the standard cloud model C l o u d m e d i u m corresponding to the evaluation criterion “medium” is the highest, so the comprehensive evaluation result of the ability increment of scientific researchers is determined as “medium”. The comparison of similarity between the comprehensive cloud model and each standard cloud model is shown in Figure 12.
It should be noted that, the criteria are only roughly divided into four levels “excellent, good, medium and poor” in Table 5, so the similarity value calculated in Table 16 is low. However, the level of indexes is determined only based on the relative degree of similarity, and the value of similarity itself does not affect the evaluation of indexes. If more fine-grained evaluation of indexes is needed, the evaluation criteria can be further split. For example, the criterion “medium” can be further divided into “medium-upper, medium-middle, and medium-lower”. As a result, we can obtain evaluation criteria with higher similarity to the comprehensive cloud model C l o u d t o t a l .
Likewise, the similarity of indicator cloud models is computed using Algorithm 2. With 10,000 cloud droplets adopted in the calculation, the corresponding results are summarized in Table 17, Table 18 and Table 19. The ultimate evaluation grade for each capability increment indicator of the researcher is presented in Table 20.
As shown in Figure 11 and Table 20, the results obtained by the method in this paper are basically consistent with those obtained by the classical weighted average method (calculated based on the expectation). The difference lies in the evaluation of Level 2 index of the increment of talent-cultivating skills. Since the classical weighted average method takes the expectation of the index as the only evaluation criterion, the index is rated as “medium” by the expectation of the index 79.5179 [ 60 , 80 ) . Although the expectation reflects the average level of the index evaluation, it still needs to consider the probability of distribution in index evaluation. In this regard, the grey cloud clustering method in this paper considers the probability and fuzziness of distribution in index evaluation. This index is rated as “good” by comparing the similarity between this index cloud C l o u d U 4 ( 79.5179 , 1.5554 , 0.2389 ) and the standard cloud. Evidently, the evaluation outcomes derived from the proposed method are more scientific, reasonable, and reliable. The similarity between the cloud models of the Level 2 indicators and each standard cloud model is visually illustrated in Figure 13. It can also be seen from Figure 13d that the similarity between C l o u d U 4 and C l o u d g o o d is higher.
Taking the expectation of the cloud model as the score of each ability increment index of the scientific researcher, the overall score of the ability increment of this scientific researcher is 78.7235. The distribution of the scores of indexes at each hierarchical level is shown in Figure 14. According to Figure 14, we sorted the index scores of the ability increment of this scientific researcher by using the method given in Section 4.2.2, as follows: U3 < U4 < U2 < U1 for Level 2 indexes; U43 < U31 < U41 < U21 < U11 < U13 < U12 < U22 < U42 < U23 for Level 3 indexes; and U111 < U131 < U112 < U121 < U132 for Level 4 indexes.
From the comparison in Figure 14, the score of Level 2 index of the increment of achievement-production skills U3 is lower than the overall score, indicating that this ability increment lags behind the overall ability increment of this scientific researcher, which restricts the overall improvement of the ability increment of this researcher. Similarly, the scores of Level 3 indexes of the increment of basic technical skills and analysis and evaluation skills are lower than the score of Level 2 index of the increment of professionalism skills to which they belong; the score of Level 3 index of the ability increment in finding and proposing problems is lower than the score of Level 2 index of the increment of problem-solving skills to which it belongs; the scores of Level 3 indexes of the ability increment in guiding professional theories and in guiding achievement production are lower than the score of Level 2 index of the increment of talent-cultivating skills to which they belong; the score of Level 4 index of the ability increment in theoretical technical level in requirements development is lower than the score of Level 3 index of the increment of basic technical skills to which it belongs; and the score of Level 4 index of the ability increment in requirements sorting and summarization is lower than the score of Level 3 index of the ability increment in analysis and evaluation to which it belongs, indicating that the increment of these basic abilities lags behind the ability increment of their higher-level indexes, which restricts the incremental improvement of their higher-level abilities and needs to be further strengthened. It can be seen that the method in this paper can not only analyze each ability increment of scientific researchers, but also achieve effective evaluation of their overall ability increment, with cardinal significance.

7. Sensitivity Analysis

As demonstrated by the numerical examples in Section 5, the proposed approach relies solely on the expert group’s interval judgment matrix for indicator importance and their interval evaluations of each bottom-level indicator to derive the comprehensive cloud model characterizing the incremental capabilities of scientific researchers. The entire calculation process relies on the initial indicator system structure and indicator evaluators. In addition, there are no variable parameters, and the algorithm has strong stability. As shown in Section 6, the entire process also includes setting the number of cloud droplets ħ and the hyper-entropy τ of the standard cloud. According to Algorithms 2 and 3, when calculating the similarity between clouds, only the number of cloud droplets ħ is relied upon. The hyper-entropy τ of the standard cloud is only used to visually display the shape of the standard cloud and is not related to the final result. Therefore, among the cloud droplet number ħ and the standard cloud hyper-entropy τ , only the cloud droplet number is a variable parameter for calculating the similarity between clouds. In view of this, this section will first discuss the impact of cloud droplet number ħ on the final evaluation level of researchers’ ability increment. Secondly, analyze the impact of changes in the indicator system and the number of experts on the entire evaluation.

7.1. The Impact of Cloud Droplet Number on Evaluation Results

We first determine the range of cloud droplets. The larger the number of cloud droplets, the closer the cloud distribution is to the theoretical normal cloud and the more stable the membership statistics are, but the computation time and memory usage increase; Generally, ħ ≥ 1000 can meet most evaluation requirements, while ħ ≥ 5000 can be considered as “high precision”. Balancing efficiency and accuracy, select a range of cloud droplet counts ħ [ 1000 , 10000 ] .
In addition to ħ = 10000 , we will uniformly select an additional 9 cloud droplet numbers. Based on this, calculate the similarity between the overall cloud model and the C l o u d p o o r , C l o u d m e d i u m , C l o u d g o o d , C l o u d e x c e l l e n t under different cloud droplet numbers, and the calculation results are shown in Table 21.
We can see from Table 21 that, under different cloud droplet numbers, the overall cloud model C l o u d t o t a l still maintains the highest similarity with the standard cloud model C l o u d m e d i u m corresponding to the evaluation criterion “medium”. Therefore, the change in cloud droplet count does not affect the comprehensive evaluation result of the researcher’s ability increment, which is recognized as “medium”. It can be seen that the cloud similarity calculation in this article’s algorithm is not sensitive to changes in cloud droplet count, and the algorithm has strong stability.

7.2. The Impact of Changes in the Number of Indicators on Indicator Weights

In response to the possible need for modification of the evaluation index system for the incremental capabilities of scientific researchers in practice, we take the top-level indicators U1~U4 as an example and add a new indicator U5—the incremental team collaboration capability—to this layer to demonstrate the weight expansion calculation method and analyze the changes in indicator weights.
Keeping the interval judgment matrix A U 1 ~ U 4 in Formula (19) unchanged, select the indicator from U1 to U4 that is most easily comparable to indicator U5. Assuming that the expert group believes that the increment of talent cultivation ability U4 is most easily compared with the increment of team collaboration ability U5, the importance of U5 is between equally important and slightly important compared to U4, i.e., a 45 = [ 1 / 3 , 1 ] . Directly using the strong consistency relationship a i 5 = a i 4 · a 45 , the new interval judgment matrix is obtained as follows:
A U 1 ~ U 5 = [ 1 , 1 ] 1 3 , 1 2 1 3 , 1 [ 1 , 2 ] 1 3 , 2 [ 2 , 3 ] [ 1 , 1 ] [ 1 , 2 ] [ 3 , 4 ] 1 , 4 [ 1 , 3 ] 1 2 , 1 [ 1 , 1 ] [ 2 , 4 ] 2 3 , 4 1 2 , 1 1 4 , 1 3 1 4 , 1 2 [ 1 , 1 ] 1 3 , 1 1 2 , 3 1 4 , 1 1 4 , 3 2 1 , 3 [ 1 , 1 ]
We can see that the above matrix meets the consistency requirement k = 1 n ( a i k a k j ) = a i j , i , j = 1 , 2 , , n . According to the interval eigenvalue method, the interval weight of the indicator is W U 1 ~ U 5 = ( [ 0.1146 , 0.1694 ] , [ 0.3116 , 0.3601 ] , [ 0.2013 , 0.3198 ] , [ 0.0865 , 0.0992 ] , [ 0.1064 , 0.2375 ] ) . According to Formula (6), the weight vector of the indicator is W ˜ U 1 ~ U 5 = ( 0.1415 , 0.3354 , 0.2595 , 0.0927 , 0.1708 ) . Compared with the original indicator weight W ˜ U 1 ~ U 4 = ( 0.1705 , 0.4095 , 0.3054 , 0.1147 ) , when the indicator system structure changes, the indicator weight changes significantly. Therefore, in practice, if the indicator system structure changes, Algorithm 1 needs to be used to recalculate the indicator weights. But the relative importance relationship between the original indicators can be kept unchanged, and new indicator weights can be directly obtained based on the relative importance relationship between the new indicator and any of the original indicators. However, it should be noted that although this method is simple, it uses a strong consistency relationship a i 5 = a i 4 · a 45 , which may lead to a broader judgment of the importance relationship between indicators than in reality. Therefore, in practice, the expert group can also judge the relationship between the new indicators and the original indicators one by one to obtain a new interval judgment matrix.

7.3. Impact of Changes in the Number of Experts on Evaluation Results

In response to the possible changes in the number of experts during indicator evaluation in practice, we take the same level indicators U41~U43 as an example to demonstrate the indicator cloud representation method when the number of experts changes by increasing or decreasing the number of experts, and analyze the changes in indicator evaluation.
Keeping the original indicator evaluation in Table 13 unchanged, two new experts were added to evaluate indicators U41 and U43, and the evaluation of indicator U42 by expert 9 and expert 10 was removed. The new evaluation situation for the incremental talent cultivation ability indicator of this scientific researcher is shown in Table 22.
Convert the interval into the corresponding indicator cloud model using Formula (15). The obtained results are shown in Table 23. According to Algorithm 2, calculate the similarity between clouds, and the changes are shown in Table 24.
According to Table 23, the cloud model of indicator U43 no longer satisfies He < En/3 after evaluation changes, indicating poor consistency in the expert group’s scoring of this indicator. Mainly due to a significant deviation between the ratings of expert 12 and other experts. However, according to Table 24, after calculating the similarity between clouds, the indicator evaluation level of the researcher’s achievement generation guidance ability U41 is still “medium”. So the consistency criterion for indicator evaluation is mainly used to avoid serious differences in expert group evaluations, but it does not necessarily affect the final rating result of the indicator.
Based on Table 23 and Table 24, we can see that in practice, if there are changes in the members of the expert group, it is highly likely to have an impact on the cloud model representation of the evaluation indicators, and Algorithm 3 needs to be used to re implement the calculation. Combining with Section 7.2, it can be found that regardless of changes in the indicator system structure or indicator evaluators, it is necessary to reuse the algorithm proposed in this article to evaluate researchers. However, both the indicator system structure and the indicator evaluators are the initial inputs of the algorithm in this article, and will not affect the stability of the algorithm in this article. The newly added cases in Section 7.2 and Section 7.3 further demonstrate the generality of the algorithm proposed in this paper.

8. Comparative Analysis

8.1. Comparative Analysis of Indicator Representation Methods

One of the key issues in evaluation is how to effectively integrate the group evaluation results of multiple experts on the same indicator. Below, we will conduct a comparative analysis using the expert evaluation results shown in Table 10 as an example. Firstly, we randomly select 30,000 points with uniform distribution within the expert’s scoring interval, and then calculate the actual representation of the indicator cloud using the point value type inverse cloud generator given in Formula (17). Second, the scores of each indicator are decomposed into the upper and lower bounds of the interval. The cloud model corresponding to the upper and lower limits of each indicator is calculated using Formula (1), and then the cloud models for these bounds are synthesized in accordance with Formula (17). Finally, compare the above calculation results with the results obtained by our method in Table 25.
It can be observed that the approach of decomposing the upper and lower bounds of the scoring interval and deriving cloud-based indicator values through cloud aggregation formulas yields results that are clearly inconsistent with those obtained using the Monte Carlo method. This is because the method splits the complete interval into upper and lower limits for separate aggregation, which destroys the integrity and distribution consistency of expert evaluation. The upper and lower limit clouds obtained separately are two edge distribution clouds. Among them, the lower bound cloud represents the lowest score evaluation of the indicator by the expert group, while the upper bound cloud represents the highest score evaluation of the indicator by the expert group. Hard synthesis of data with different meanings and distributions can disrupt the sample structure, leading to group evaluations no longer following the basic premise of the same distribution.
In practice, the evaluation of the same indicator by an expert group is essentially a subjective judgment of the same objective thing/attribute, and therefore has three core homogeneity characteristics: (1) the evaluation object is unified. All experts focus on the ‘same indicator’ rather than different indicators, and there is no essential difference in the core evaluation criteria; (2) The evaluation rules are consistent. The expert group follows the same evaluation criteria, scale, and definition, and there is no situation where “some experts evaluate according to rule A and some evaluate according to rule B”; (3) Experts have homogeneous abilities. The members selected for the expert group have been screened based on their professional background, cognitive level, and evaluation ability, and their understanding of the indicator is at the same level (not a mix of “layman + expert”). The sources and patterns of their judgment errors are consistent. The above three homogeneity characteristics determine that the expert group’s group evaluation of indicators should belong to the same probability density function distribution form. Therefore, the method proposed in this article can objectively characterize the group opinion results of the expert group on the indicators. The comparison results between our method and Monte Carlo method in Table 25 also confirm the above conclusion.

8.2. Comparative Analysis of Indicator Evaluation Methods

The grey clustering method for clouds takes into account all three feature parameters of the cloud model, and compared to the classical weighted average method that only uses expected values, it can characterize the indicators in more detail. Taking Section 6 as an example, in the case, the cloud model for the incremental indicator of talent cultivation ability of the researcher obtained by aggregating the evaluation results of the expert group is C l o u d U 4 ( 79.5179 , 1.5554 , 0.2389 ) . Compared with the set standard clouds C l o u d p o o r ( 0 , 20 , 0.1 ) , C l o u d m e d i u m ( 70 , 3.33 , 0.1 ) , C l o u d g o o d ( 85 , 1.67 , 0.1 ) , C l o u d e x c e l l e n t ( 100 , 3.33 , 0.1 ) , we can intuitively see from Figure 13d that the similarity between C l o u d U 4 and C l o u d g o o d is higher. According to the similarity calculation between clouds in Table 17, the evaluation result of the incremental indicator of talent cultivation ability for this researcher is “good”. The rationality of this result lies in the fact that the evaluation logic of cloud models is based on the comprehensive similarity of three-dimensional features such as expected value, entropy, and hyper-entropy, rather than a single expected point position.
The expected value, as a single point value, compresses the scoring results of the expert group into a single number, completely ignoring its degree of dispersion. In group evaluation, the consistency (entropy) and stability (hyper-entropy) of expert group opinions are often more valuable for evaluation than simply average scores. Below, we will analyze a typical case.
For the virtual indicator Q, Table 26 provides two sets of evaluations with the same average score, one with high consensus among experts and the other with serious differences in opinions. Assuming that the weights of experts in each group are consistent, regardless of Q1 and Q2, the weighted average method can obtain a score of 80 for indicator Q. According to the rules in Table 5, this indicator is evaluated as “good” simultaneously. In addition, according to Formulas (14) and (15), the cloud models representing Q1 and Q2 are calculated as C l o u d Q 1 ( 80 , 1.128 , 0.0754 ) and C l o u d Q 2 ( 80 , 9.5252 , 5.7912 ) , respectively. Then, according to Algorithm 2, the evaluation levels of indicator Q can be calculated as “good” and “medium”, respectively. The specific results are shown in Table 27.
The entropy of C l o u d Q 1 is smaller, indicating that the evaluation distribution of the expert group is more concentrated, that is, the expert group has a unified understanding and higher consensus on the indicator. This highly consistent evaluation result has much higher credibility and operability than evaluations with scattered opinions. In practice, a “good” evaluation with high consensus is more instructive than one with significant differences in opinion. In addition, hyper-entropy also reflects the stability of the evaluation process itself. In complex group evaluations, the stability of the evaluation process directly affects the reliability of the results, which cannot be reflected by the “average score”. This also corresponds to the core purpose of group evaluation in practice, which is to gather the wisdom of multiple experts to form a reliable, stable, and highly consensus evaluation, rather than simply obtaining an “average score”. The entropy and hyper-entropy in cloud models are key indicators for measuring the degree of achievement of this goal, and together they constitute a comprehensive evaluation of the quality of the evaluation results, rather than a single numerical reference. From this, we can see that the evaluation conclusions provided by the method in this article are more scientific, reasonable, and reliable.

9. Conclusions

This paper conducts a scientific evaluation of the incremental capabilities of scientific researchers, accurately identifying key issues that constrain their ability to improve, and providing important references for the construction of a talent team and overall high-quality development in scientific research institutions. In this regard, this paper proposes an ability increment evaluation method for researchers by combining interval evaluation and cloud theory, which objectively evaluates the capability increment of researchers in a cardinal sense. In the process of evaluation implementation, a multidisciplinary, multi-tier, and multi-experience expert group is established to ensure the diversity of evaluation perspectives and the professionalism of evaluation results. At the same time, the intersection type strong consistency criterion is introduced; a group opinion consistency discrimination criterion is established by combining the cloud model entropy and hyper-entropy relationship, effectively weakening the evaluation bias caused by subjective differences among experts. The sensitivity analysis results indicate that the method only involves 3 adjustable parameters, cloud droplet number, and the value of this parameter has no significant impact on the final evaluation results. The method has good stability and robustness.
The main advantages of the method proposed in this paper are: (1) Adopting an interval-based evaluation method, which reduces the difficulty of experts conducting evaluation practices. (2) The combination of interval analytic hierarchy process and proportional deviation constraint method is used to determine the weights of each indicator, avoiding dependence on the evaluation information of each indicator and ensuring that the weight values of the indicators have cardinal significance. (3) Based on the premise that the evaluation function of indicator groups follows the same distribution, the probability density distribution function of interval type evaluation indicators is used to directly integrate the group evaluation into a cloud model, achieving the objective representation of indicator group evaluation. (4) The entire calculation process of the indicator cloud model relies only on the initial indicator system structure and indicator evaluators, with no variable parameters and strong stability.
In addition, the method proposed in this article is a universal comprehensive evaluation framework, which is based on interval evaluation and cloud theory. Its core framework has strong universality and does not rely on specific evaluation objects or domain knowledge. By simply replacing the corresponding indicator system structure, it can be widely applied to various evaluation problems in fields such as nature, engineering, and art.

Author Contributions

Conceptualization, H.Y. and W.D.; Methodology, H.Y. and W.D.; Validation, H.Y. and W.D.; Investigation, Z.W.; Resources, H.Y. and W.D.; Writing—original draft, H.Y. and W.D.; Writing—review and editing, H.Y., W.D., S.C. and Z.W.; Supervision, S.C.; Funding acquisition, H.Y. and W.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shao, M.; Han, Z.; Sun, J.; Xiao, C.; Zhang, S.; Zhao, Y. A review of multi-criteria decision making applications for renewable energy site selection. Renew. Energy 2020, 157, 377–403. [Google Scholar] [CrossRef]
  2. Li, Z.; Luo, Z.; Wang, Y.; Fan, G.; Zhang, J. Suitability evaluation system for the shallow geothermal energy implementation in region by Entropy Weight Method and TOPSIS method. Renew. Energy 2022, 184, 564–576. [Google Scholar] [CrossRef]
  3. Wu, S.; Tian, C.; Li, B.; Wang, J.; Wang, Z. Ecological environment health assessment of lake water ecosystem system based on simulated annealing-projection pursuit: A case study of plateau lake. Sustain. Cities Soc. 2022, 86, 104131. [Google Scholar] [CrossRef]
  4. Gao, Y.; Qian, H.; Ren, W.; Wang, H.; Liu, F.; Yang, F. Hydrogeochemical characterization and quality assessment of groundwater based on integrated-weight water quality index in a concentrated urban area. J. Clean. Prod. 2020, 260, 121006. [Google Scholar] [CrossRef]
  5. Ardil, C. Military Fighter Aircraft Selection Using Multiplicative Multiple Criteria Decision Making Analysis Method. Int. J. Math. Comput. Sci. 2021, 13, 184–193. [Google Scholar]
  6. Zou, Q.; Zhang, T.; Cheng, Z.; Jiang, Z.; Tian, S. A method for selection rationality evaluation of the first-mining seam in multi-seam mining. Geomech. Geophys. Geo-Energy Geo-Resour. 2022, 8, 17. [Google Scholar] [CrossRef]
  7. Shang, Z.; Yang, X.; Barnes, D.; Wu, C. Supplier selection in sustainable supply chains: Using the integrated BWM, fuzzy Shannon entropy, and fuzzy MULTIMOORA methods. Expert Syst. Appl. 2022, 195, 116567. [Google Scholar] [CrossRef]
  8. Tan, F.; Wang, J.; Jiao, Y.Y.; Ma, B.; He, L. Suitability evaluation of underground space based on finite interval cloud model and genetic algorithm combination weighting. Tunn. Undergr. Space Technol. 2021, 108, 103743. [Google Scholar] [CrossRef]
  9. Garg, H.; Kumar, K. A novel exponential distance and its based TOPSIS method for interval-valued intuitionistic fuzzy sets using connection number of SPA theory. Artif. Intell. Rev. 2020, 53, 595–624. [Google Scholar] [CrossRef]
  10. Wan, S.; Dong, J. A possibility degree method for interval-valued intuitionistic fuzzy multi-attribute group decision making. In Decision Making Theories and Methods Based on Interval-Valued Intuitionistic Fuzzy Sets; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–35. [Google Scholar]
  11. Ding, Q.; Wang, Y.M.; Goh, M. An extended TODIM approach for group emergency decision making based on bidirectional projection with hesitant triangular fuzzy sets. Comput. Ind. Eng. 2021, 151, 106959. [Google Scholar] [CrossRef] [PubMed]
  12. Chai, N.; Zhou, W.; Jiang, Z. Sustainable supplier selection using an intuitionistic and interval-valued fuzzy MCDM approach based on cumulative prospect theory. Inf. Sci. 2023, 626, 710–737. [Google Scholar] [CrossRef]
  13. Relich, M.; Pawlewski, P. A fuzzy weighted average approach for selecting portfolio of new product development projects. Neurocomputing 2017, 231, 19–27. [Google Scholar] [CrossRef]
  14. Wu, X.; Hu, F. Analysis of ecological carrying capacity using a fuzzy comprehensive evaluation method. Ecol. Indic. 2020, 113, 106243. [Google Scholar] [CrossRef]
  15. Yang, S.-S.; Yu, X.-L.; Cui, C.-H.; Ding, J.; He, L.; Dai, W.; Sun, H.-J.; Bai, S.-W.; Tao, Y.; Pang, J.-W.; et al. Cloud-model-based feature engineering to analyze the energy–water nexus of a full-scale wastewater treatment plant. Engineering 2022, 36, 63–75. [Google Scholar] [CrossRef]
  16. Zhang, X.; Fang, S.; Jin, Y.; Huang, Y.; Wang, S.; Wang, J.; Xu, Y. Comprehensive performance evaluation of valuable medical equipment based on cloud modelling and combined weighting methodologies. Eng. Appl. Artif. Intell. 2025, 154, 110969. [Google Scholar] [CrossRef]
  17. Shen, J.; Bao, X.; Chen, X.; Wu, X.; Qiu, T.; Cui, H. Seismic resilience assessment method for tunnels based on cloud model considering multiple damage evaluation indices. Tunn. Undergr. Space Technol. 2025, 157, 106360. [Google Scholar] [CrossRef]
  18. Hu, J.; Wang, F.; Liu, P.; Lü, S.; Lü, C.; Yu, F. Evaluation of urban water cycle health status based on DPSIRM framework and AHP-FCE-Cloud model. Ecol. Indic. 2025, 170, 112935. [Google Scholar] [CrossRef]
  19. Wang, L.; Yang, G. An interval uncertainty propagation method using polynomial chaos expansion and its ap-plication in complicated multibody dynamic systems. Nonlinear Dyn. 2021, 105, 837–858. [Google Scholar] [CrossRef]
  20. Rajput, L.; Kumar, S. Linguistic interval-valued polytopic fuzzy set and cloud model based technique for order of preference by similarity to ideal solution. Eng. Appl. Artif. Intell. 2025, 142, 109872. [Google Scholar] [CrossRef]
  21. Xu, X.; Guo, H.; Zhang, Z.; Yu, S.; Chang, L.; Steyskal, F.; Brunauer, G. A cloud model-based interval-valued evidence fusion method and its application in fault diagnosis. Inf. Sci. 2024, 658, 119995. [Google Scholar] [CrossRef]
  22. Zhu, C.; Liu, X.; Ding, W.; Zhang, S. Cloud model-based multi-stage multi-attribute decision-making method under probabilistic interval-valued hesitant fuzzy environment. Expert Syst. Appl. 2024, 255, 124595. [Google Scholar] [CrossRef]
  23. Sarwar, M.; Akram, M.; Gulzar, W.; Deveci, M. Group decision making method for third-party logistics management: An interval rough cloud optimization model. J. Ind. Inf. Integr. 2024, 41, 100658. [Google Scholar] [CrossRef]
  24. Ding, W.; Bai, X.; Wang, Q.; Yao, H.; Liu, J.; Yang, H. Research on the spacecraft ground equivalence test assessment problem: A comprehensive assessment method combining interval-type evaluation and prospect-two-dimensional cloud. Appl. Soft Comput. 2024, 166, 111882. [Google Scholar] [CrossRef]
  25. Wu, T.; Gao, Y.; Zhou, Y.; Sun, H. A novel comprehensive quantitative method for various geological disaster evaluations in underground engineering: Multidimensional finite interval cloud model (MFICM). Environ. Earth Sci. 2021, 80, 696. [Google Scholar] [CrossRef]
  26. Fang, H.; Li, J.; Song, W. A new method for quality function deployment based on rough cloud model theory. IEEE Trans. Eng. Manag. 2020, 69, 2842–2856. [Google Scholar] [CrossRef]
  27. Yang, Y.; Chen, G.; Wang, D. A Security Risk Assessment Method Based on Improved FTA-IAHP for Train Position System. Electronics 2022, 11, 2863. [Google Scholar] [CrossRef]
  28. Feng, X.; Wei, C.; Hu, G.; Li, Z. Consistency of interval judgment matrix. Control Decis. 2008, 23, 5. [Google Scholar]
  29. Botzen, W.J.W.; Thepaut, L.D.; Banerjee, S. Kahneman’s insights for climate risks: Lessons from bounded rationality, heuristics and biases. Environ. Resour. Econ. 2025, 88, 2663–2688. [Google Scholar] [CrossRef]
  30. Van Den Brink, M.; Dawson, M.; Zglinski, J. Revisiting the asymmetry thesis: Negative and positive integration in the EU. J. Eur. Public Policy 2025, 32, 209–234. [Google Scholar] [CrossRef]
  31. Jianxing, Y.; Haicheng, C.; Shibo, W.; Haizhao, F. A novel risk matrix approach based on cloud model for risk assessment under uncertainty. IEEE Access 2021, 9, 27884–27896. [Google Scholar] [CrossRef]
Figure 1. Example of cloud model.
Figure 1. Example of cloud model.
Mathematics 14 00823 g001
Figure 2. Examples of cloud models with different values of hyper-entropy: (a) ( E x , E n , H e ) = ( 85 , 0.2 , 0.01 ) ; (b) ( E x , E n , H e ) = ( 85 , 0.2 , 0.09 ) .
Figure 2. Examples of cloud models with different values of hyper-entropy: (a) ( E x , E n , H e ) = ( 85 , 0.2 , 0.01 ) ; (b) ( E x , E n , H e ) = ( 85 , 0.2 , 0.09 ) .
Mathematics 14 00823 g002
Figure 3. Calculation Flow Chart.
Figure 3. Calculation Flow Chart.
Mathematics 14 00823 g003
Figure 4. Schematic diagram of capability increment evaluation.
Figure 4. Schematic diagram of capability increment evaluation.
Mathematics 14 00823 g004
Figure 5. Evaluation index system of the ability increment of scientific researchers.
Figure 5. Evaluation index system of the ability increment of scientific researchers.
Mathematics 14 00823 g005
Figure 6. Standard cloud model corresponding to each qualitative evaluation criterion.
Figure 6. Standard cloud model corresponding to each qualitative evaluation criterion.
Mathematics 14 00823 g006
Figure 7. Value weights corresponding to the indexes for evaluation of the ability increment of scientific researchers.
Figure 7. Value weights corresponding to the indexes for evaluation of the ability increment of scientific researchers.
Mathematics 14 00823 g007
Figure 8. Evaluation cloud model of the ability increment in the theoretical technical level in requirements development U111.
Figure 8. Evaluation cloud model of the ability increment in the theoretical technical level in requirements development U111.
Mathematics 14 00823 g008
Figure 9. Evaluation cloud models of all 12 basic indexes: (a) C l o u d U 111 ; (b) C l o u d U 112 ; (c) C l o u d U 121 ; (d) C l o u d U 131 ; (e) C l o u d U 132 ; (f) C l o u d U 21 ; (g) C l o u d U 22 ; (h) C l o u d U 23 ; (i) C l o u d U 31 ; (j) C l o u d U 41 ; (k) C l o u d U 42 ; (l) C l o u d U 43 .
Figure 9. Evaluation cloud models of all 12 basic indexes: (a) C l o u d U 111 ; (b) C l o u d U 112 ; (c) C l o u d U 121 ; (d) C l o u d U 131 ; (e) C l o u d U 132 ; (f) C l o u d U 21 ; (g) C l o u d U 22 ; (h) C l o u d U 23 ; (i) C l o u d U 31 ; (j) C l o u d U 41 ; (k) C l o u d U 42 ; (l) C l o u d U 43 .
Mathematics 14 00823 g009aMathematics 14 00823 g009b
Figure 10. Comprehensive cloud models of indexes at higher hierarchical level: (a) C l o u d U 11 ; (b) C l o u d U 12 ; (c) C l o u d U 13 ; (d) C l o u d U 1 ; (e) C l o u d U 2 ; (f) C l o u d U 3 ; (g) C l o u d U 4 ; (h) C l o u d U t o t a l .
Figure 10. Comprehensive cloud models of indexes at higher hierarchical level: (a) C l o u d U 11 ; (b) C l o u d U 12 ; (c) C l o u d U 13 ; (d) C l o u d U 1 ; (e) C l o u d U 2 ; (f) C l o u d U 3 ; (g) C l o u d U 4 ; (h) C l o u d U t o t a l .
Mathematics 14 00823 g010
Figure 11. Cloud models of evaluation indexes of the ability increment of scientific researchers.
Figure 11. Cloud models of evaluation indexes of the ability increment of scientific researchers.
Mathematics 14 00823 g011
Figure 12. Comparison of similarity.
Figure 12. Comparison of similarity.
Mathematics 14 00823 g012
Figure 13. Similarity between cloud models of Level 2 indexes and each standard cloud model: (a) C l o u d U 1 ; (b) C l o u d U 2 ; (c) C l o u d U 3 ; (d) C l o u d U 4 .
Figure 13. Similarity between cloud models of Level 2 indexes and each standard cloud model: (a) C l o u d U 1 ; (b) C l o u d U 2 ; (c) C l o u d U 3 ; (d) C l o u d U 4 .
Mathematics 14 00823 g013
Figure 14. Radar chart of the scores of evaluation indexes at each hierarchical level: (a) Score of Level 2 indexes; (b) Score of Level 3 indexes; (c) Score of Level 4 indexes; (d) Comparison of index scores.
Figure 14. Radar chart of the scores of evaluation indexes at each hierarchical level: (a) Score of Level 2 indexes; (b) Score of Level 3 indexes; (c) Score of Level 4 indexes; (d) Comparison of index scores.
Mathematics 14 00823 g014
Table 1. Algorithm in cloud computing.
Table 1. Algorithm in cloud computing.
AlgorithmExEnHe
AdditionEx1 + Ex2 E n 1 2 + E n 2 2 H e 1 2 + H e 2 2
Subtraction E x 1 E x 2 E n 1 2 + E n 2 2 H e 1 2 + H e 2 2
Multiplication E x 1 × E x 2 E x 1 E x 2 ( E x 2 E n 1 ) 2 + ( E x 1 E n 2 ) 2 E x 1 E x 2 ( E x 2 H e 1 ) 2 + ( E x 1 H e 2 ) 2
Number multiplication ( λ > 0 ) λ E x 1 λ E n 1 λ H e 1
Table 2. Quantitative intervals corresponding to the qualitative evaluation criteria.
Table 2. Quantitative intervals corresponding to the qualitative evaluation criteria.
Qualitative CriteriaPoorMediumGoodExcellent
Quantitative interval[0,60)[60,80)[80,90)[90,100]
Table 3. Evaluation of indicators for the incremental ability of a scientific researcher.
Table 3. Evaluation of indicators for the incremental ability of a scientific researcher.
Expert 1Expert 2Expert 3Expert 4Expert 5Expert 6
Z1[60,62][62,64][59,62][61,64][58,61][63,65]
Z2[51,53][53,56][49,52][52,55][49,52][52,54]
Z3[72,76][71,74][70,73][72,75][69,71][71,73]
Table 4. Indicator cloud model results.
Table 4. Indicator cloud model results.
StatisticsCharacteristics of Cloud Model
Mean X ¯ First-Order Absolute Central Moment D Variance S 2 Expectation E x Entropy E n Hyper-Entropy H e
Z161.301.24772.243361.301.56370.4493
Z252.201.50673.226752.201.88830.5823
Z372.301.39582.843372.301.74940.4660
Table 5. Division of evaluation criteria and cloud characteristics thereof.
Table 5. Division of evaluation criteria and cloud characteristics thereof.
Evaluation CriteriaPoorMediumGoodExcellent
Quantitative interval[0,60)[60,80)[80,90)[90,100]
Expectation07085100
Entropy203.331.673.33
Hyper-entropy0.10.10.10.1
Table 6. Scale of relative importance.
Table 6. Scale of relative importance.
Scale of Relative Importance a i j Definition
1Ui is equally important as Uj
3Ui is slightly more important than Uj
5Ui is significantly more important than Uj
7Ui is strongly more important than Uj
9Ui is extremely more important than Uj
2,4,6,8The intensity of importance of index Ui compared to Uj is between the above two corresponding numbers
Reciprocal 1 / a i j The reciprocal of the scale of Ui compared to Uj is equal to the scale of Uj compared to Ui, i.e., a j i = 1 / a i j
Table 7. Consistency test of the judgment matrices A U 1 ~ U 4 of interval numbers.
Table 7. Consistency test of the judgment matrices A U 1 ~ U 4 of interval numbers.
k = 1 4 ( a i k a k j ) Line Number i
123
Column number j2 k = 1 4 ( a 1 k a k 2 ) = 1 3 , 1 2 = a 12 \\
3 k = 1 4 ( a 1 k a k 3 ) = 1 3 , 1 = a 13 k = 1 4 ( a 2 k a k 3 ) = 1 , 2 = a 23 \
4 k = 1 4 ( a 1 k a k 4 ) = 1 , 2 = a 14 k = 1 4 ( a 2 k a k 4 ) = 3 , 4 = a 24 k = 1 4 ( a 3 k a k 4 ) = 2 , 4 = a 34
Table 8. Consistency test of other judgment matrices of interval numbers.
Table 8. Consistency test of other judgment matrices of interval numbers.
Judgment Matrices of Interval Numbers
A U 11 ~ U 13 A U 21 ~ U 23 A U 41 ~ U 43 A U 111 ~ U 112 A U 131 ~ U 132
i = 1 j = 2 k = 1 3 ( a 1 k a k 2 ) = 1 5 , 1 3 k = 1 3 ( a 1 k a k 2 ) = 1 , 3 k = 1 3 ( a 1 k a k 2 ) = 1 3 , 1 k = 1 3 ( a 1 k a k 2 ) = 1 , 3 k = 1 3 ( a 1 k a k 2 ) = 2 , 4
i = 1 j = 3 k = 1 3 ( a 1 k a k 3 ) = 1 , 2 k = 1 3 ( a 1 k a k 3 ) = 2 , 4 k = 1 3 ( a 1 k a k 3 ) = 1 2 , 1 \\
i = 2 j = 3 k = 1 3 ( a 2 k a k 3 ) = 3 , 6 k = 1 3 ( a 2 k a k 3 ) = 1 , 3 k = 1 3 ( a 2 k a k 3 ) = 1 , 2 \\
Table 9. Interval weights of other indexes of the same hierarchy.
Table 9. Interval weights of other indexes of the same hierarchy.
Eigenvector Corresponding to the Maximum EigenvalueCoefficientInterval Weight
A U 11 ~ U 13 W U 11 ~ U 13 L = ( 0.1876 , 0.6719 , 0.1405 ) c U 11 ~ U 13 = 0.9241 W U 11 ~ U 13 = ( [ 0.1734 , 0.2018 ] , [ 0.6209 , 0.7130 ] , [ 0.1298 , 0.1594 ] )
W U 11 ~ U 13 U = ( 0.1879 , 0.6637 , 0.1484 ) d U 11 ~ U 13 = 1.0742
A U 21 ~ U 23 W U 21 ~ U 23 L = ( 0.5275 , 0.2890 , 0.1835 ) c U 21 ~ U 23 = 0.8515 W U 21 ~ U 23 = ( [ 0.4492 , 0.5808 ] , [ 0.2461 , 0.3631 ] , [ 0.1563 , 0.2006 ] )
W U 21 ~ U 23 U = ( 0.5075 , 0.3172 , 0.1753 ) d U 21 ~ U 23 = 1.1446
A U 41 ~ U 43 W U 41 ~ U 43 L = ( 0.2340 , 0.4290 , 0.3370 ) c U 41 ~ U 43 = 0.8660 W U 41 ~ U 43 = ( [ 0.2027 , 0.2793 ] , [ 0.3715 , 0.5033 ] , [ 0.2918 , 0.3483 ] )
W U 41 ~ U 43 U = ( 0.2470 , 0.4450 , 0.3080 ) d U 41 ~ U 43 = 1.1308
A U 111 ~ U 112 W U 111 ~ U 112 L = ( 0.6340 , 0.3660 ) c U 111 ~ U 112 = 0.8660 W U 111 ~ U 112 = ( [ 0.5490 , 0.7088 ] , [ 0.3170 , 0.4092 ] )
W U 111 ~ U 112 U = ( 0.6340 , 0.3660 ) d U 111 ~ U 112 = 1.1180
A U 131 ~ U 132 W U 131 ~ U 132 L = ( 0.7388 , 0.2612 ) c U 131 ~ U 132 = 0.9309 W U 131 ~ U 132 = ( [ 0.6878 , 0.7865 ] , [ 0.2432 , 0.2781 ] )
W U 131 ~ U 132 U = ( 0.7388 , 0.2612 ) d U 131 ~ U 132 = 1.0646
Table 10. Increment evaluation results of professionalism skills.
Table 10. Increment evaluation results of professionalism skills.
Ability Increment in the Theoretical Technical Level in Requirements Development, U111Ability Increment in the Level of Utilizing Requirements Development Tools, U112Ability Increment in System Construction Planning and Overall Scheme Design, U121Ability Increment in Requirements Sorting and Summarization, U131Ability Increment in Comprehensive Analysis and Evaluation Based on Modeling and Simulation and Multivariate Data Fusion, U132
Expert 1[71,76][80,83][85,87][74,79][91,96]
Expert 2[73,77][81,84][84,86][75,78][93,95]
Expert 3[69,72][83,86][86,88][76,79][90,92]
Expert 4[74,76][81,85][84,87][74,77][91,94]
Expert 5[71,73][79,83][84,88][75,78][89,93]
Expert 6[68,72][82,86][86,89][74,78][94,96]
Expert 7[75,77][80,84][83,86][77,79][95,97]
Expert 8[72,75][82,87][85,89][75,77][92,94]
Expert 9[74,78][83,85][87,88][74,76][91,95]
Expert 10[70,75][84,87][84,89][73,76][90,93]
Table 11. Increment evaluation results of problem-solving skills.
Table 11. Increment evaluation results of problem-solving skills.
Ability Increment in Finding and Proposing Problems, U21Ability Increment in Analyzing Problem Mechanism and Designing Solutions, U22Ability Increment in Coordinating and Solving Problems, U23
Expert 1[75,78][87,88][89,92]
Expert 2[76,79][87,89][88,91]
Expert 3[77,78][86,90][87,91]
Expert 4[74,78][85,89][88,93]
Expert 5[76,77][86,89][89,94]
Expert 6[75,79][85,87][87,90]
Expert 7[78,80][84,88][88,90]
Expert 8[75,77][85,90][89,91]
Expert 9[74,79][84,87][87,92]
Expert 10[77,81][86,88][89,93]
Table 12. Increment evaluation results of achievement-production skills.
Table 12. Increment evaluation results of achievement-production skills.
Expert 1Expert 2Expert 3Expert 4Expert 5Expert 6Expert 7Expert 8Expert 9Expert 10
Ability increment in producing theoretical achievements, U31[68,71][69,73][70,74][67,72][68,74][69,73][68,72][67,72][70,73][70,72]
Table 13. Increment evaluation results of talent-cultivating skills.
Table 13. Increment evaluation results of talent-cultivating skills.
Ability Increment in Guiding Professional Theories, U41Ability Increment in Guiding Problem Research, U42Ability Increment in Guiding Achievement Production, U43
Expert 1[73,75][87,89][67,72]
Expert 2[74,77][87,91][68,73]
Expert 3[74,76][86,92][70,72]
Expert 4[73,78][86,90][67,73]
Expert 5[72,76][85,88][66,72]
Expert 6[73,77][87,92][69,74]
Expert 7[74,78][88,93][67,71]
Expert 8[75,77][87,90][68,74]
Expert 9[74,79][88,91][70,73]
Expert 10[73,76][86,89][69,72]
Table 14. Cloud model conversion results of interval-valued evaluation indexes.
Table 14. Cloud model conversion results of interval-valued evaluation indexes.
StatisticCharacteristics of Cloud Model
Mean X ¯ First-Order Absolute Central Moment D Variance S 2 Expectation   E x Entropy E n Hyper-Entropy H e
Ability increment in the theoretical technical level in requirements development, U11173.401.99575.206773.402.45110.8952
Ability increment in the level of utilizing requirements development tools, U11283.251.43493.037583.251.79840.4435
Ability increment in system construction planning and overall scheme design, U12186.251.11901.770886.251.40240.4426
Ability increment in requirements sorting and summarization, U13176.201.12851.826776.201.41430.4167
Ability increment in comprehensive analysis and evaluation based on modeling and simulation and multivariate data fusion, U13293.051.51443.264293.051.89810.5818
Ability increment in finding and proposing problems, U2177.151.12311.944277.151.40750.1924
Ability increment in analyzing problem mechanism and designing solutions, U2287.001.01331.566787.001.27000.2152
Ability increment in coordinating and solving problems, U2389.901.13582.023389.901.42350.0542
Ability increment in producing theoretical achievements, U3170.601.19242.173370.601.49450.2451
Ability increment in guiding professional theories, U4175.201.06191.726775.201.33090.2115
Ability increment in guiding problem research, U4288.601.27472.506788.601.59760.2140
Ability increment in guiding achievement production, U4370.351.31732.644270.351.65100.2859
Table 15. New scores of the ability increment in the theoretical technical level in requirements development U111.
Table 15. New scores of the ability increment in the theoretical technical level in requirements development U111.
Expert 1Expert 2Expert 3Expert 4Expert 5Expert 6Expert 7Expert 8Expert 9Expert 10
Ability increment in the theoretical technical level in requirements development, U111[72,76][73,76][72,74][74,76][71,73][71,75][75,77][72,75][74,78][71,75]
StatisticCharacteristics of cloud model
Mean X ¯ First-order absolute central moment D Variance S 2 Expectation E x Entropy E n Hyper-entropy H e
74.001.31672.466774.001.65020.5064
Table 16. Similarity of the comprehensive cloud model.
Table 16. Similarity of the comprehensive cloud model.
Standard Cloud P o o r C l o u d p o o r M e d i u m C l o u d m e d i u m G o o d C l o u d G o o d E x c e l l e n t C l o u d e x c e l l e n t
Similarity 4.4975 × 10 4 0.05210.0149 4.5096 × 10 8
Table 17. Similarity between cloud models of Level 2 indexes and each standard cloud model.
Table 17. Similarity between cloud models of Level 2 indexes and each standard cloud model.
Similarity
C l o u d U 1 C l o u d U 2 C l o u d U 3 C l o u d U 4
C l o u d p o o r 1.63 × 10 4 2.13 × 10 4 0.0020 3.85 × 10 4
C l o u d m e d i u m 9.44 × 10 4 0.00240.89890.0322
C l o u d g o o d 0.63850.3791 4.37 × 10 8 0.0444
C l o u d e x c e l l e n t 6.81 × 10 5 6.49 × 10 6 8.66 × 10 14 3.77 × 10 7
Table 18. Similarity between cloud models of Level 3 indexes and each standard cloud model.
Table 18. Similarity between cloud models of Level 3 indexes and each standard cloud model.
Similarity
C l o u d U 11 C l o u d U 12 C l o u d U 13 C l o u d U 21 C l o u d U 22 C l o u d U 23 C l o u d U 31 C l o u d U 41 C l o u d U 42 C l o u d U 43
C l o u d p o o r 5.92 × 10 4 9.62 × 10 5 3.13 × 10 4 6.05 × 10 4 8.06 × 10 5 4.28 × 10 5 0.0020 8.75 × 10 4 5.81 × 10 5 0.0021
C l o u d m e d i u m 0.1281 8.57 × 10 5 0.01520.1286 1.20 × 10 5 2.29 × 10 7 0.89890.3235 3.10 × 10 6 0.8898
C l o u d g o o d 0.00670.65130.11100.00150.50450.0615 4.37 × 10 8 3.78 × 10 5 0.2143 7.52 × 10 9
C l o u d e x c e l l e n t 4.03 × 10 8 8.10 × 10 4 2.15 × 10 6 2.91 × 10 9 0.00130.0191 8.66 × 10 14 6.36 × 10 11 0.0078 3.83 × 10 14
Table 19. Similarity between cloud models of Level 4 indexes and each standard cloud model.
Table 19. Similarity between cloud models of Level 4 indexes and each standard cloud model.
Similarity
C l o u d U 111 C l o u d U 112 C l o u d U 121 C l o u d U 131 C l o u d U 132
C l o u d p o o r 0.0011 1.86 × 10 4 9.62 × 10 5 7.28 × 10 4 2.22 × 10 5
C l o u d m e d i u m 0.49500.0025 8.57 × 10 5 0.2114 1.26 × 10 7
C l o u d g o o d 2.23 × 10 4 0.51910.6513 7.81 × 10 4 0.0081
C l o u d e x c e l l e n t 1.34 × 10 9 8.79 × 10 5 8.10 × 10 4 3.30 × 10 9 0.1696
Table 20. Final level of each ability increment index of this scientific researcher.
Table 20. Final level of each ability increment index of this scientific researcher.
Level 1 C l o u d t o t a l
Evaluation levelMedium
Level 2 C l o u d U 1 C l o u d U 2 C l o u d U 3 C l o u d U 4
Evaluation levelGoodGoodMediumGood
Level 3 C l o u d U 11 C l o u d U 12 C l o u d U 13 C l o u d U 21 C l o u d U 22 C l o u d U 23 C l o u d U 31 C l o u d U 41 C l o u d U 42 C l o u d U 43
Evaluation levelMediumGoodGoodMediumGoodGoodMediumMediumGoodMedium
Level 4 C l o u d U 111 C l o u d U 112 C l o u d U 121 C l o u d U 131 C l o u d U 132
Evaluation levelMediumGoodGoodMediumExcellent
Table 21. Similarity between cloud models under different cloud droplet numbers.
Table 21. Similarity between cloud models under different cloud droplet numbers.
Standard Cloud Model C l o u d p o o r C l o u d m e d i u m C l o u d g o o d C l o u d e x c e l l e n t
10,000Similarity 4.4975 × 10 4 0.05210.0149 4.5096 × 10 8
9000Similarity 4.4776 × 10 4 0.05120.0148 4.5333 × 10 8
8000Similarity 4.4877 × 10 4 0.05130.0149 4.9539 × 10 8
7000Similarity 4.5251 × 10 4 0.05320.0136 4.3702 × 10 8
6000Similarity 4.4921 × 10 4 0.05180.0144 4.9090 × 10 8
5000Similarity 4.4576 × 10 4 0.05030.0151 5.3471 × 10 8
4000Similarity 4.5042 × 10 4 0.05210.0160 5.5806 × 10 8
3000Similarity 4.4412 × 10 4 0.04920.0160 5.7882 × 10 8
2000Similarity 4.5229 × 10 4 0.05410.0164 4.8565 × 10 8
1000Similarity 4.4770 × 10 4 0.05060.0158 4.3912 × 10 8
Table 22. The new evaluation situation for the incremental talent cultivation ability indicator.
Table 22. The new evaluation situation for the incremental talent cultivation ability indicator.
Ability Increment in Guiding Professional Theories, U41Ability Increment in Guiding Problem Research, U42Ability Increment in Guiding Achievement Production, U43
Expert 1[73,75][87,89][67,72]
Expert 2[74,77][87,91][68,73]
Expert 3[74,76][86,92][70,72]
Expert 4[73,78][86,90][67,73]
Expert 5[72,76][85,88][66,72]
Expert 6[73,77][87,92][69,74]
Expert 7[74,78][88,93][67,71]
Expert 8[75,77][87,90][68,74]
Expert 9[74,79] [70,73]
Expert 10[73,76] [69,72]
Expert 11[74,76] [70,71]
Expert 12[72,75] [60,62]
Table 23. Comparison results of indicator cloud model before and after.
Table 23. Comparison results of indicator cloud model before and after.
Indicator Evaluation
Change Before
Indicator Evaluation
Change After
Expectation E x Entropy E n Hyper-Entropy H e Expectation E x Entropy E n Hyper-Entropy H e
U4175.201.33090.211575.04171.32440.0762
U4288.601.59760.214088.62501.65880.2425
U4370.351.65100.285969.58332.56121.5419
Table 24. Similarity between the cloud models of each three-level indicator and the standard cloud model.
Table 24. Similarity between the cloud models of each three-level indicator and the standard cloud model.
Similarity (Indicator Evaluation
Change Before)
Similarity (Indicator Evaluation
Change After)
C l o u d U 41 C l o u d U 42 C l o u d U 43 C l o u d U 41 C l o u d U 42 C l o u d U 43
C l o u d p o o r 8.75 × 10 4 5.81 × 10 5 0.0021 9.0271 × 10 4 5.7598 × 10 5 0.0027
C l o u d m e d i u m 0.3235 3.10 × 10 6 0.88980.3458 3.9025 × 10 6 0.7801
C l o u d g o o d 3.78 × 10 5 0.2143 7.52 × 10 9 1.214 × 10 5 0.2088 4.6075 × 10 4
C l o u d e x c e l l e n t 6.36 × 10 11 0.0078 3.83 × 10 14 2.4484 × 10 11 0.0087 4.5008 × 10 8
Table 25. Integrated cloud of incremental indicators for professionalism skills.
Table 25. Integrated cloud of incremental indicators for professionalism skills.
Ability Increment in the Theoretical Technical Level in Requirements Development, U111Ability Increment in the Level of Utilizing Requirements Development Tools, U112Ability Increment in System Construction Planning and Overall Scheme Design, U121Ability Increment in Requirements Sorting and Summarization, U131Ability Increment in Comprehensive Analysis and Evaluation Based on Modeling and Simulation and Multivariate Data Fusion, U132
Lower limit cloud(71.7,2.3813,0.5711)(81.5,1.6293,0.3933)(84.8,1.2533,0.2443)(74.7,1.1280,0.2685)(91.6,1.9050,0.1708)
Upper limit cloud(75.1,2.1306,0.0696)(85,1.5040,0.1993)(87.7,1.2032,0.3212)(77.7,1.2032,0.3212)(94.5,1.6293,0.3933)
Integrated Cloud(73.4,2.2594,0.4068)(83.25,1.5679,0.3118)(86.25,1.2285,0.2854)(76.2,1.1662,0.2961)(93.05,1.7725,0.3032)
Proposed Method(73.4,2.4511,0.8952)(83.25,1.7984,0.4435)(86.25,1.4024,0.4426)(76.2,1.4143,0.4167)(93.05,1.8981,0.5818)
Monte Carlo method(73.399,2.4557,0.8960)(83.25,1.7963,0.4407)(86.249,1.4036,0.4431)(76.199,1.4161,0.4178)(93.051,1.8969,0.5792)
Table 26. Comparative example of indicator evaluation.
Table 26. Comparative example of indicator evaluation.
Expert 1Expert 2Expert 3Expert 4Expert 5
Q1[78,82][79,81][77,83][78,82][79,81]
Q2[78,82][79,81][77,83][60,65][95,100]
Table 27. Similarity between the cloud models of each indicator and the standard cloud model.
Table 27. Similarity between the cloud models of each indicator and the standard cloud model.
C l o u d p o o r C l o u d m e d i u m C l o u d g o o d C l o u d e x c e l l e n t Level
C l o u d Q 1 3.4264 × 10 4 0.01650.0393 9.3645 × 10 8 Good
C l o u d Q 2 0.00240.15480.13560.0480Medium
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yang, H.; Chen, S.; Wu, Z.; Ding, W. An Assessment Method for Ability Increment of Scientific Researchers Based on Interval Evaluation and Cloud Model Theory. Mathematics 2026, 14, 823. https://doi.org/10.3390/math14050823

AMA Style

Yang H, Chen S, Wu Z, Ding W. An Assessment Method for Ability Increment of Scientific Researchers Based on Interval Evaluation and Cloud Model Theory. Mathematics. 2026; 14(5):823. https://doi.org/10.3390/math14050823

Chicago/Turabian Style

Yang, Hong, Si Chen, Zhengrong Wu, and Wenzhe Ding. 2026. "An Assessment Method for Ability Increment of Scientific Researchers Based on Interval Evaluation and Cloud Model Theory" Mathematics 14, no. 5: 823. https://doi.org/10.3390/math14050823

APA Style

Yang, H., Chen, S., Wu, Z., & Ding, W. (2026). An Assessment Method for Ability Increment of Scientific Researchers Based on Interval Evaluation and Cloud Model Theory. Mathematics, 14(5), 823. https://doi.org/10.3390/math14050823

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop