Article

New Nonlinear Metrics Model for Information of Individual Research Output and Its Applications

School of Mathematics and Statistics, Central South University, Changsha 410083, China
*
Author to whom correspondence should be addressed.
Math. Comput. Appl. 2016, 21(3), 26; https://doi.org/10.3390/mca21030026
Submission received: 6 May 2016 / Revised: 17 June 2016 / Accepted: 21 June 2016 / Published: 30 June 2016

Abstract

The evaluation of scientists' achievements plays an important role in efficiently mining human-resource information. A metrics model used to count academic papers, research awards and scientific research projects often significantly affects the fairness of comparisons among scientists. In particular, it is difficult to quantify each scientist's achievement when many participants share the same research output. In this paper, a new nonlinear metrics model, called a credit function, is established to mine the information of individual research outputs (IRO). An example is constructed to show that different credit functions may generate distinct rankings of the scientists. With the nonlinear methods proposed in this paper, the unequal contributions to the same IRO can be quantified, and the resulting ranking of the scientists is more acceptable than that of the existing linear method in the literature. Finally, the proposed metrics model is applied to three practical problems, in particular in combination with the technique for order preference by similarity to an ideal solution (TOPSIS).

1. Introduction

In human resource management of universities and scientific research institutes, it is essential to mine useful information from the individual research outputs (IRO) of professional faculties. Particularly, for the competitors in promotion, tenure and faculty positions, a fair metrics model is required to evaluate the quality and quantity of IRO, such as the academic papers, research awards and scientific research projects. As pointed out in [1], owing to complexity and uncertainty in many decision situations, effective and quantitative models are helpful to the managers’ decision-making and planning.
With regard to the quality of the IRO, two main approaches, the so-called bibliometric measure (objective) and peer review (subjective), have been proposed for IRO evaluation in the literature. Because bibliometric analysis cannot be applied across the board to all departments in a large number of universities and scientific research institutes [2], peer review has become the principal method of assessment [3]. Although the objective approach represented by citation-based models and bibliometric indicators cannot replace subjective evaluation based on an in-depth peer review of scientific products, it helps to process large quantities of data when peer review becomes difficult to implement [4]. Actually, the so-called h-index (Hirsch index) has been widely accepted as a measure of individual research achievement, and it is advantageous compared with other bibliometric indicators such as the total number of citations or the number of papers published in journals of high impact factor [3,5]. Similar to the h-index in [6], the A-index (average index), R-index (root index) and AR-index (age-dependent R-index) presented in [7] are also useful tools for measuring IRO quality. In addition, the g-index was defined by Leo Egghe in [8] to measure the quality of published articles. Specifically, given a set of articles ranked in decreasing order of the number of citations received, the g-index is the (unique) largest number such that the top g articles together received at least g² citations.
However, to the best of our knowledge, there exist few efficient metrics models to calculate the quantity of the IRO. In particular, if there is more than one participant in the same IRO, it becomes difficult to measure the outputs of the competitors exactly so that the unequal contributions to the same IRO can be quantified. For example, if a scientist has p published journal papers, all completed with multiple authors, then the scientist's effective number of papers is obviously less than p, since the other authors' contributions should be subtracted. The difficulty lies in calculating the contribution of each author to the same paper unless the paper states: "All authors contributed equally to all aspects of this work". For the case of multiple authors of the same journal paper, a linear measurement method was presented in [9,10] to calculate the number of papers credited to each author.
In contrast to the method in [9], this article formulates the nonlinearity in the distribution of the participants' contributions to the same IRO, including journal papers. We will first define some nonlinear functions, called credit functions, to calculate the credit of each participant (author, winner or proposer) in the same academic paper, research award or scientific research project. Then, we will calculate the number of IRO credited to each participant by the participant's share of the total credit. It will be shown that, by virtue of different credit functions, the corresponding quantification methods and ranking results may differ for the scientists. In particular, with the nonlinear methods proposed in this paper, the obtained ranking of the scientists appears more acceptable than those of the existing linear methods in the literature.
Since it is common that there are multiple criteria (papers, awards and projects) for evaluating the IRO, the final ranking of all the participants involves a multi-criteria decision-making problem. Specifically, the weight of each criterion must be determined so that an integrated evaluation is obtained for all the participants. Therefore, this article will also develop a new ranking method based on the credit functions.
The rest of this paper is organized as follows. In the next section, new nonlinear credit functions are proposed to quantify the IRO. Section 3 is devoted to the differences among the credit functions in evaluating the number of IRO. In Section 4, an extended TOPSIS method is developed. In Section 5, the proposed method is applied to solving some practical problems, especially in combination with the technique for order preference by similarity to an ideal solution (TOPSIS).

2. New Nonlinear Measure on IRO

In this section, a new nonlinear approach is proposed to quantify the individual research output.
It is noted that, in [9], for an author who has p papers and is ranked $k_j$ among the $m_j$ authors of the j-th paper ($1 \le j \le p$), the contribution of the author to the j-th paper is quantified by

$$ n_j = \frac{2(m_j - k_j + 1)}{m_j^2 + m_j}, \tag{1} $$

and the total number of papers is calculated by

$$ \bar n_p = \sum_{j=1}^{p} n_j. \tag{2} $$

From (1), it is clear that the author's contribution $n_j$ is decreasing in the rank $k_j$, and that the (unnormalized) credits of all the authors form an arithmetic progression, $(m_j, m_j - 1, m_j - 2, \ldots, 1)$, from the first to the last author. In other words, the authors' credits are computed by the following linear function

$$ y = -x + m_j + 1, \qquad x = 1, 2, \ldots, m_j, \tag{3} $$

where x is the rank of the author in the paper, and y is the author's credit.
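As a quick sanity check, the linear method (1)-(2) can be sketched in a few lines of Python (the function names are ours, not from [9]):

```python
# Sketch of the linear method of [9]; names are illustrative only.
def linear_contribution(k, m):
    """Contribution of the author ranked k among m authors, per Eq. (1)."""
    return 2 * (m - k + 1) / (m**2 + m)

def total_papers(ranks_and_sizes):
    """Total standardized paper count, per Eq. (2);
    ranks_and_sizes is a list of (k_j, m_j) pairs, one per paper."""
    return sum(linear_contribution(k, m) for k, m in ranks_and_sizes)

# A sole-authored paper counts fully; the first of three authors gets 3/6 = 0.5.
print(linear_contribution(1, 1))  # 1.0
print(linear_contribution(1, 3))  # 0.5
```

Note that for a single paper, the contributions of all $m_j$ authors sum to 1, which is what makes (2) a meaningful paper count.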
To present new nonlinear measures on the quantity of IRO, we first present a general definition of a credit function to evaluate the share of each participant to the total contribution of the same IRO.
Definition 1. 
A function $f : [1, +\infty) \to [0, 1]$ is called a credit function if the following conditions are satisfied: (1) f is continuous and decreasing on the interval $[1, +\infty)$; (2) $\lim_{x \to +\infty} f(x) = 0$.
Definition 2. 
Let $f : [1, +\infty) \to [0, 1]$ be a credit function. If an IRO involves m participants, we call the share of the j-th participant,

$$ c_j = \frac{f(j)}{\sum_{i=1}^{m} f(i)}, \tag{4} $$

in the total credit of all the participants in the IRO a measure of the IRO.
By definition, we know that the following functions $f_1$, $f_2$ and $f_3$ are examples of credit functions:

$$ f_1(x) = \begin{cases} b - kx, & 1 \le x \le b/k, \\ 0, & x > b/k, \end{cases} \tag{5} $$

where $k > 0$ and b are given constants such that $b - k \ge 1$;

$$ f_2(x) = \frac{1}{(px)^{\alpha}}, \qquad x \in [1, +\infty), \tag{6} $$

where $\alpha > 0$ and $p > 0$ are given constants; and

$$ f_3(x) = \frac{1}{a^{\lambda x - \mu}}, \qquad x \in [1, +\infty), \tag{7} $$

where $a > 1$, $\lambda > 0$ and $\mu > 0$ are fixed constants.
By virtue of a certain credit function, we can quantify the contribution of each professional faculty member even if an article, research award or scientific research project is completed by many participants. For example, if k = 1 and b = n + 1 in (5), the credit function is specified by

$$ f_1(x) = \begin{cases} (n+1) - x, & 1 \le x \le n + 1, \\ 0, & x > n + 1. \end{cases} \tag{8} $$
Suppose that the number of authors of a published paper is m. Then we can calculate the number of papers credited to each author by (8). Let $c_j^{\mathrm{lin}}$ represent the credit of the j-th author, calculated by

$$ c_j^{\mathrm{lin}} = \frac{f_1(j)}{\sum_{i=1}^{m} f_1(i)} = \frac{2(n - j + 1)}{m(2n + 1 - m)}, \qquad j = 1, 2, \ldots, m. \tag{9} $$
Actually, in the case that m = n, the credit of each author defined by (9) reduces to that given by method (1) proposed in [9]. In other words, the method in [9] is just the special case associated with the linear credit function (8).
With the other credit functions, we obtain a series of formulae to quantify the contribution of all the authors in the same paper, of the winners in the same research awards, or of the proposers in the same research project.
In (6), set α = p = 1. Then the credit function $f_2$ is specified by

$$ f_2(x) = \frac{1}{x}, \qquad x \in [1, +\infty). \tag{10} $$

If there are m winners of a research award, then the credit of the j-th winner is measured by $f_2(j) = 1/j$, and the total credit of all the winners of the same award is $1 + 1/2 + 1/3 + \cdots + 1/m$. Thus, with the help of (10), the contribution of the j-th winner is quantified by the winner's share of the total credit:

$$ c_j^{\mathrm{inv}} = \frac{1/j}{1 + 1/2 + 1/3 + \cdots + 1/m}. \tag{11} $$
Similarly, set a = 2 and λ = μ = 1 in (7). Then the credit function $f_3$ is defined by

$$ f_3(x) = \left(\tfrac{1}{2}\right)^{x-1}, \qquad x \in [1, +\infty). \tag{12} $$

If there are m participants in a research project, then by (12), the contribution of the j-th participant is quantified by

$$ c_j^{\mathrm{exp}} = \frac{(1/2)^{j-1}}{2 - (1/2)^{m-1}}. \tag{13} $$

Actually, if we regard the credit of the j-th participant as $f_3(j) = (1/2)^{j-1}$, then the credits from the first participant to the last form a geometric progression, $(1, 1/2, \ldots, (1/2)^{m-1})$. The total credit of all the participants is

$$ 1 + \tfrac{1}{2} + \left(\tfrac{1}{2}\right)^2 + \cdots + \left(\tfrac{1}{2}\right)^{m-1} = 2 - \left(\tfrac{1}{2}\right)^{m-1}. $$

The contribution of the j-th participant, $c_j^{\mathrm{exp}}$, is defined as the share of the total credit.
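The shares (9), (11) and (13) all follow directly from Definition 2. A short Python sketch (our own naming; the cap n in $f_1$ is a parameter) reproduces, for two authors, the common split of roughly 2/3 and 1/3:

```python
def share(f, j, m):
    """Share c_j of the j-th participant among m, per Definition 2."""
    return f(j) / sum(f(i) for i in range(1, m + 1))

# The three example credit functions; n is the cap in (8) (assume n >= m).
def f_lin(x, n=10):      # (8): (n+1) - x, truncated at zero
    return max((n + 1) - x, 0.0)

def f_inv(x):            # (10): 1/x
    return 1.0 / x

def f_exp(x):            # (12): (1/2)^(x-1)
    return 0.5 ** (x - 1)

# With two authors all three measures agree on shares (0.667, 0.333).
m = 2
for f in (lambda x: f_lin(x, n=m), f_inv, f_exp):
    print(round(share(f, 1, m), 3), round(share(f, 2, m), 3))
```

For m > 2 the three functions split the total credit differently, which is exactly the difference examined in Section 3.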
On the basis of the above preparation, we finally present a new concept, called the standardized IRO number of the participant (SNP).
Definition 3. 
Let n be the original IRO number of a participant, let $m_i$ be the total number of participants in the i-th IRO, $i = 1, 2, \ldots, n$, and let $o_i$ be the order of the participant among the $m_i$ participants, $1 \le o_i \le m_i$. We call

$$ s = \sum_{i=1}^{n} \frac{f(o_i)}{\sum_{j=1}^{m_i} f(j)} \tag{14} $$

the standardized IRO number of the participant.
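Definition 3 simply sums the participant's share over all of his or her outputs. A minimal sketch (our own naming), using the inverse credit function (10):

```python
def snp(f, outputs):
    """Standardized IRO number s, per Definition 3 / Eq. (14).
    outputs is a list of (o_i, m_i): the participant's order and the
    total number of participants in the i-th output."""
    return sum(f(o) / sum(f(j) for j in range(1, m + 1)) for o, m in outputs)

f_inv = lambda x: 1.0 / x  # credit function (10)

# Two outputs: sole participant once, second of two participants once.
print(snp(f_inv, [(1, 1), (2, 2)]))  # 1 + (1/2)/(3/2) = 1.333...
```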

3. Difference of Credit Functions in Evaluation of IRO

In this section, we intend to show the differences of the proposed credit functions in quantifying the IRO.
For simplification, we only focus on computing the number of a published paper with one or multiple authors. The method can be easily extended to quantify the other kinds of individual outputs.
If m = 1 in (9), (11) and (13), then it is easy to see that

$$ c_1^{\mathrm{lin}} = c_1^{\mathrm{inv}} = c_1^{\mathrm{exp}} = 1. \tag{15} $$

Obviously, for all three methods, the result (15) accords with the practical observation for a paper with a sole author.

If there are two authors of a paper, i.e., m = 2 in (9), (11) and (13), then it is obtained that

$$ c_1^{\mathrm{lin}} = c_1^{\mathrm{inv}} = c_1^{\mathrm{exp}} = 0.667, \qquad c_2^{\mathrm{lin}} = c_2^{\mathrm{inv}} = c_2^{\mathrm{exp}} = 0.333. \tag{16} $$

Equations (15) and (16) indicate that there are no differences among the three credit functions if the number of authors is one or two. However, if m > 2, their differences appear.
In Table 1, Table 2, Table 3 and Table 4, we report the distribution of contribution to the same paper for all the authors in the cases m = 3 , 4 , 5 , 6 , respectively.
From the results in Table 1, Table 2, Table 3 and Table 4, it is clear that:
(1) By the credit functions (11) and (13), the contribution of the first author to the paper is emphasized more than by the linear method proposed in [9]. This demonstrates that it is more reasonable and fair in practice if nonlinear measures are designed to quantify the IRO.

(2) Among the shares of the last author in Table 1, Table 2, Table 3 and Table 4, the largest is obtained by the credit function (11) in all the cases m = 3, 4, 5, 6. This shows that (11) can reflect the significance of all the signatures on the same paper, especially the contribution of the last author. In other words, by choosing a suitable nonlinear measure, we can pay attention to the contributions of all the participants while still emphasizing the role of the first participant in the same IRO.
In the end of this section, we give an example to show that the ranking result may be different if we use the different linear or nonlinear measurement methods. In Table 5, we list two teachers’ information (Teachers A and B) about their published academic papers.
It is clear that the paper number of Teacher B (6) is larger than that of Teacher A (2) if we do not take into account the number of the authors and their order. However, by virtue of the three different credit functions (8), (10) and (12), we can quantify the contribution of the author in the same paper such that the two teachers are ranked based on a more precise quantification method. The example we construct in Table 5 demonstrates that different quantification methods may generate distinct ranking results (see Table 6).
From Table 6, it follows that:
  • By (9), Teacher A is as good as Teacher B.
  • By (13), Teacher A is better than Teacher B.
  • By (11), Teacher B is more excellent than Teacher A.
Since the nonlinear credit function (11) pays sufficient respect to the contribution of the last author of a paper, the obtained result appears more acceptable in practice (6 > 2).

4. Combination of Nonlinear Measures with TOPSIS

In this section, we will present a ranking method by combining the new measurement method on the IRO with the TOPSIS method in the multi-criteria decision-making (see, for example, [11,12,13]).
In quantifying the IRO of the scientists, papers, awards and research projects are regarded as three criteria for evaluating the final achievement of each scientist in this paper. Thus, the ranking problem for the scientists' IRO is a multi-criteria decision-making problem. For any multi-criteria decision-making problem, determining a valid and acceptable original evaluation matrix is a critical step. In particular, for the ranking of the scientists' IRO, how the research output of each scientist is quantified directly affects the ranking result.
In the following, we shall present an extended TOPSIS algorithm to obtain the rank of the scientists’ IRO based on the new measurement methods in Section 3.
Algorithm 1. 
(Extended TOPSIS Algorithm) 
Step 1 (Calculation of the evaluation matrix): Calculate the original evaluation matrix by virtue of a credit function such as (8), (11) or (13), where $x_{ij}$ represents the modified output number of the i-th scientist under the j-th criterion. Denote $X = (x_{ij})_{n \times 3}$.
Step 2 (Normalization): Each component of the evaluation matrix is normalized so that its value lies in the interval $[0, 1]$. For example, $x_{ij}$ can be normalized by

$$ y_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{n} x_{ij}^2}}. \tag{17} $$
Step 3 (Weights of the criteria): For the j-th criterion, we first calculate the entropy (see, for example, [14]) by

$$ En_j = -\lambda \sum_{i=1}^{n} y_{ij} \log(y_{ij}), \tag{18} $$

where

$$ \lambda = 1/\log(n). \tag{19} $$

Then, the weight vector of the criteria, $w = (w_1, w_2, w_3)$, is determined by

$$ w_j = \frac{1 - En_j}{\sum_{k=1}^{3} (1 - En_k)}. \tag{20} $$
Step 4 (Weighted evaluation matrix): With (20), we obtain the weighted, normalized evaluation matrix

$$ \hat y_{ij} = w_j y_{ij}. \tag{21} $$
Step 5 (Final score of IRO): On the basis of the weighted matrix in (21), we calculate the final score of each scientist:

$$ C_i^+ = \frac{S_i^-}{S_i^+ + S_i^-}, \tag{22} $$

where

$$ S_i^+ = \sqrt{\sum_{j=1}^{3} \left( \hat y_{ij} - \max_{i=1,2,\ldots,n} \hat y_{ij} \right)^2}, \tag{23} $$

and

$$ S_i^- = \sqrt{\sum_{j=1}^{3} \left( \hat y_{ij} - \min_{i=1,2,\ldots,n} \hat y_{ij} \right)^2}. \tag{24} $$

Finally, the values of $C_i^+$, $i = 1, 2, \ldots, n$, sorted in descending order, form an IRO ranking of all the scientists.
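The steps of Algorithm 1 can be sketched as follows. This is our own illustrative implementation, which computes the entropy in Step 3 on column-normalized probabilities (a common variant) rather than on $y_{ij}$ directly; the matrix X below is made-up data, not from the paper:

```python
import numpy as np

def extended_topsis(X):
    """Sketch of Algorithm 1 on an n-by-3 standardized matrix X:
    normalization (Step 2), entropy weights (Step 3), weighting (Step 4),
    and the TOPSIS closeness score (Step 5)."""
    n, _ = X.shape
    Y = X / np.sqrt((X**2).sum(axis=0))                # Step 2: column norms
    P = Y / Y.sum(axis=0)                              # probabilities for entropy
    En = -(P * np.log(np.where(P > 0, P, 1))).sum(axis=0) / np.log(n)  # Step 3
    w = (1 - En) / (1 - En).sum()
    Yh = w * Y                                         # Step 4: weighted matrix
    d_plus = np.sqrt(((Yh - Yh.max(axis=0))**2).sum(axis=1))   # to ideal point
    d_minus = np.sqrt(((Yh - Yh.min(axis=0))**2).sum(axis=1))  # to anti-ideal
    return d_minus / (d_plus + d_minus)                # Step 5: scores C_i^+

# Hypothetical standardized outputs of three scientists under three criteria.
X = np.array([[3.0, 1.0, 2.0],
              [1.0, 2.0, 1.0],
              [2.0, 3.0, 3.0]])
scores = extended_topsis(X)
print(np.argsort(-scores) + 1)  # ranking of the three scientists
```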
Remark 1. 
In Step 1 of Algorithm 1, the credit functions (8), (11) and (13) are just three examples for the implementation of Algorithm 1. Since different credit functions may affect the fairness of the contribution distribution, new suitable credit functions are worthy of further investigation to obtain an acceptable final ranking of the IRO.
Remark 2. 
In Algorithm 1, as well as the original evaluation matrix X, the weights of the criteria in Step 3 also play critical roles in computing the final scores of the scientists (see, for example, [15,16,17]). Thus, if the weights of the criteria can be optimized by other new models, the above extended TOPSIS can be further improved in ranking the IRO of the scientists.

5. Applications

In this section, we apply the metrics model proposed in this paper to solve some practical problems from management science and finance.

5.1. Forecasting Based on Suitable Credit Function

Markowitz's mean-variance (M-V) model of investment portfolios in [18] provides a classic method for risk investment. We attempt to improve the applicability of the model based on suitable credit functions.
The simplest investment portfolio model can be written as

$$ \min_x \; x^T H x - r^T x, \quad \text{s.t.} \; \sum_{i=1}^{n} x_i = 1, \; 0 \le x_i \le 1, \tag{25} $$

where $r = (r_1, r_2, \ldots, r_n)^T$ is the random revenue rate of the assets, H is the covariance matrix of r, and $x = (x_1, x_2, \ldots, x_n)^T$ is the investment portfolio, which represents the investment proportion for each asset. Since the decision must be made before the random revenue rates are realized, an "optimal" solution of Model (25) is often obtained by solving the following approximate optimization model, called Markowitz's mean-variance model (see, for example, [18,19,20,21]):

$$ \min_x \; x^T \bar H x - \bar r^T x, \quad \text{s.t.} \; \sum_{i=1}^{n} x_i = 1, \; 0 \le x_i \le 1, \tag{26} $$

where

$$ \bar r_i = \frac{1}{m} \sum_{j=1}^{m} r_{ij}, \qquad \bar H_{ik} = \frac{1}{m} \sum_{j=1}^{m} (r_{ij} - \bar r_i)(r_{kj} - \bar r_k), \qquad i, k = 1, 2, \ldots, n, \tag{27} $$

and $r_{ij}$ represents the given revenue rate of Asset i in Period j ($i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, m$).
However, the applicability of the above method depends heavily on the accuracy of estimating the random revenue rate r. In (26), $\bar r$ is the mean of the sampling data, which treats every data point in the sample as equally important. Inspired by the fact that the latest data are often regarded as more valuable than earlier data in practical decision-making, we determine the weights of the sampling data by virtue of the proposed credit functions, so that the latest data have a greater impact on the investment decision.
For the given m sample values of Asset i, based on a suitable credit function such as (8), (10) or (12), we can calculate the weights of the sample data at different periods. For the given sequence of sampling data, its inverse serial number plays the role of the order of an author on a paper. Thus, the latest data point gets the maximal weight, and the weights of the other data decrease as the serial number becomes smaller. Let $w_j \in \mathbb{R}$ be the obtained weight of the j-th sample revenue rate. Then, in contrast to (27), the random revenue rate and covariance of Asset i can be estimated by

$$ \hat r_i = \sum_{j=1}^{m} w_j r_{ij}, \qquad \hat H_{ik} = \frac{1}{m} \sum_{j=1}^{m} (r_{ij} - \hat r_i)(r_{kj} - \hat r_k), \qquad i, k = 1, 2, \ldots, n. \tag{28} $$
For example, suppose that we have collected revenue samples of three assets (Asset $A_1$, Asset $A_2$, Asset $A_3$) over the past four periods (see Table 7). Then, with the credit function $f_2$, we obtain the weight vector $w = (0.120, 0.160, 0.240, 0.480)$. Thus, the random revenue rate and covariance of Asset i can be estimated by

$$ \hat r_i = \sum_{j=1}^{4} w_j r_{ij}, \qquad \hat H_{ik} = \frac{1}{4} \sum_{j=1}^{4} (r_{ij} - \hat r_i)(r_{kj} - \hat r_k), \qquad i, k = 1, 2, 3. $$
Consequently,

$$ \hat r = (0.518, 0.4572, 0.5242)^T, \qquad \hat H = \begin{pmatrix} 0.0805 & 0.0190 & 0.0171 \\ 0.0190 & 0.0681 & 0.0142 \\ 0.0171 & 0.0142 & 0.0813 \end{pmatrix}. $$
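As a cross-check, the weight vector above and the weighted estimates (28) can be reproduced with a short script (the function names are ours; the revenue matrix R in the test is illustrative, since Table 7 is not reproduced here):

```python
import numpy as np

def period_weights(m, f=lambda x: 1.0 / x):
    """Weights for m periods from a credit function; the latest period
    (j = m) plays the role of the first author, so it gets credit f(1)."""
    credits = np.array([f(m - j) for j in range(m)])   # f(m), ..., f(1)
    return credits / credits.sum()

w = period_weights(4)
print(np.round(w, 3))  # [0.12 0.16 0.24 0.48]

def weighted_estimates(R, w):
    """Weighted mean and covariance per Eq. (28); R is n assets x m periods."""
    r_hat = R @ w
    D = R - r_hat[:, None]
    H_hat = (D @ D.T) / R.shape[1]
    return r_hat, H_hat
```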
The corresponding optimal investment portfolio $x^*$, maximal revenue $P^*$ and minimal risk $R^*$ are given by

$$ x^* = (0.4257, 0.1692, 0.4051), \quad P^* = 0.4929, \quad R^* = 0.0255, \quad P^* - R^* = 0.4674, $$

respectively. In contrast, the optimal investment portfolio given by the mean-variance model is

$$ \bar x^* = (0.2974, 0.2652, 0.4374), \quad \bar P^* = 0.3569, \quad \bar R^* = 0.0157, \quad \bar P^* - \bar R^* = 0.3412. $$
It is easy to see that the different estimates have a serious impact on the final investment decision.
Our next interest is to compare the forecast accuracy of (27) and (28) in estimating the revenue rates of all the assets. To this end, we randomly collected the realized daily revenue rates of 30 stocks on the Shanghai Stock Exchange from 9 to 13 May 2016. To test the forecast accuracy of (27) and (28), we estimate the revenue rate on Friday based on the earlier four data points by (27) and (28), respectively. The forecast errors are shown in Figure 1.
In Figure 1, “mean” is the error between the realized revenue rate and the estimated value by the mean method (27), and “weight” is the corresponding error by (28). The results show that for 23 stocks out of 30 securities, the prediction accuracy of the method (28) is better than that of (27). It is concluded that construction of suitable credit function can provide an efficient forecast method.

5.2. Evaluation on IRO of Teachers

From the department of human resources at Central South University, we first collected the individual research outputs of three teachers in one school, comprising their academic journal papers, research awards and scientific research projects of the past five years (see the Appendix).
In the implementation of Algorithm 1, we choose the three different measurement methods to obtain the original evaluation matrix in Step 1. In Table 8, we report the standard quantified matrix of papers, awards and projects on the basis of the credit function (9). In addition, the rank of the three teachers is also given according to papers, awards and projects, respectively. With this original evaluation, Algorithm 1 offers us the final contribution scores of the three teachers:
S C 1 = ( 0.4537 , 0.9559 , 0.3308 ) .
Thus, the ranking of Teachers $A_1$, $A_2$ and $A_3$ is (2, 1, 3). This result is often different from a ranking obtained by only one evaluation criterion. Actually, from the standardized number of papers in Table 8, the ranking of the three teachers is (1, 3, 2), and the ranking by the research awards is (1, 2, 3).
On the other hand, the results in Table 8 further indicate that the standardized quantity of IRO plays an important role in measuring the contributions of the teachers. Actually, from the original numbers of published papers, it is clear that Teacher $A_1$ has the most (32 papers), nearly three times as many as $A_3$ and twice as many as $A_2$. Thus, the ranking based on the original numbers is (1, 2, 3). However, by the measurement method (8), the obtained standardized numbers indicate that $A_3$ is better than $A_2$, while $A_1$ remains number one.
In Table 9 and Table 10, we further implement Algorithm 1 to obtain the standard quantified matrix and the ranking with respect to the papers, awards and projects by the credit functions (11) and (13), respectively.
By virtue of the two standardized quantity matrices of IRO, Algorithm 1 gives the final contribution scores of the three teachers:
S C 2 = ( 0.5221 , 0.9709 , 0.2543 ) , S C 3 = ( 0.6162 , 0.9594 , 0.2316 ) ,
respectively. Thus, the obtained ranking is the same as that from the measurement function (8). It shows consistency of the three different credit functions in some scenarios.

5.3. Evaluation on Enterprises of Engineering Projects

At the end of this section, we apply Algorithm 1 to evaluating engineering enterprises on the basis of their finished projects.
Suppose that there are three engineering enterprises A, B and C. The evaluation on the enterprises is associated with the quality level and the total number of their finished projects (experience of enterprises). In practice, owing to complexity of projects, some projects were done by a number of cooperative enterprises. Thus, how to standardize the number of the projects finished by each enterprise is important.
To see the important role of credit functions in ranking, we first suppose that the quality levels of the finished projects are the same. In Table 11, we report the quality level and the original and standardized number of the finished projects by all the three enterprises. The second column represents the quality level (QL) of the finished projects, the third column shows the total number of the finished projects by the enterprises, and the last three columns are the standardized number of projects by the different credit functions, defined by (8), (10) and (12), respectively.
Based on the standardized data in Table 11, Algorithm 1 provides us the final scores of each enterprise and the rank (see Table 12).
In Table 12, the rank of Enterprises A, B and C is ( 3 , 2 , 1 ) with the credit function f 1 . In contrast, with the other two credit functions f 2 and f 3 , we get the same rank of the three enterprises, which is (2, 3, 1).
The above ranking results demonstrate that the different credit functions may generate distinct ranking results. However, all of them can improve the rough ranking result ( 1 , 2 , 3 ) directly from the original data without standardization.
Next, we take into consideration the quality difference of the finished projects. Suppose that the quality levels of the projects are 4, 5 and 3, finished by Enterprises A, B and C, respectively. Then, by Algorithm 1, we obtain a ranking result, shown in Table 13.
From Table 13, it is concluded that quality level plays a critical role in evaluating the engineering enterprises by the proposed method in this paper.

6. Conclusions

In this paper, we have proposed new nonlinear credit functions and nonlinear measurement methods to quantify the IRO, such as the numbers of academic papers, research awards and scientific research projects. An example has been constructed to show that there exist differences among these credit functions in ranking the IRO of the scientists. In virtue of the standardized evaluation matrix obtained from the proposed methods, an extended TOPSIS algorithm has been developed to determine the rank even if the achievement of a scientist is associated with the numbers of the papers, awards and projects in the scientific research. The above research results highlight the following managerial implications for building efficient management systems for human resources:
(1) By suitable choice of a linear or nonlinear credit function, we can obtain a fair contribution distribution for all the participants in the same IRO, as well as put emphasis on the role of the first participant.
(2) By virtue of the extended TOPSIS algorithm, an acceptable fair ranking on the IRO can be obtained, even if it is associated with a multi-criteria decision-making problem.
(3) Construction of suitable credit function can provide an efficient forecast method.
(4) The TOPSIS method can be further improved if the weights of the criteria are optimized by new models.

Acknowledgments

The research work of the authors is supported by the Natural Science Foundation of Guangdong, China (Grant No. 2016A030310105) and the Major Program of the National Social Science Foundation of China (14ZDB136).

Author Contributions

Ming Chen carried out the construction of new credit functions and the numerical computation. Zhong Wan participated in the design of the study, drafted the manuscript and performed the analysis on the numerical results. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare that they have no competing interests.

Appendix Data for the IRO of the Three Teachers

In Table A1, Table A2, Table A3, Table A4 and Table A5, we list the data of the teachers. Among the three teachers, A1 denotes a professor, A2 an associate professor, and A3 a lecturer. In Table A1, Table A2 and Table A3, the numbers of the academic papers are shown, where "NA" is the number of authors of each paper and "Order" is the order of the teacher among the authors.
Table A1. Published papers of A1.

NA Order | NA Order | NA Order | NA Order
4 2 | 3 1 | 4 2 | 4 2
3 3 | 6 2 | 3 2 | 4 2
2 2 | 5 3 | 3 2 | 5 2
3 1 | 4 2 | 4 1 | 4 2
6 5 | 3 2 | 6 2 | 5 3
4 4 | 4 3 | 4 2 | 4 3
4 4 | 3 3 | 4 3 | 5 2
3 2 | 3 3 | 4 2 | 6 1
Table A2. Published papers of A2.

NA Order | NA Order | NA Order | NA Order
4 2 | 5 2 | 3 3 | 4 4
3 3 | 3 1 | 6 1 | 3 2
2 2 | 3 1 | 4 2 | 6 2
3 1 | 6 2 | 3 2 |
6 5 | 5 3 | 3 2 |
4 4 | 4 2 | 4 1 |
Table A3. Published papers of A3.

NA Order | NA Order | NA Order | NA Order
1 1 | 3 1 | 4 2 | 6 5
3 3 | 6 2 | 3 2 | 3 2
2 2 | 5 3 | 3 2 |
3 1 | 4 2 | |
Table A4. Research awards.

A1 | A2 | A3
NW Order | NW Order | NW Order | NW Order
8 1 | 5 1 | 3 1 | 1 1
6 2 | 4 1 | 3 2 | 2 2
4 2 | 6 1 | 5 2 | 7 3
3 1 | 7 2 | 8 4 |
4 2 | 2 1 | 6 4 |
7 3 | 8 1 | 8 7 |
In Table A4, the numbers of the research awards are shown, where “NW” is the number of the winners in each award, “order” is the order of the teacher in a research award.
Table A5. Research projects.

A1 | A2 | A3
NP Order | NP Order | NP Order | NP Order
8 1 | 8 1 | 9 2 | 8 1
9 1 | 9 1 | 7 3 | 8 4
9 2 | 9 2 | 5 1 | 9 5
9 1 | 9 1 | 5 1 | 9 6
In Table A5, the numbers of the research projects are listed, where “NP” is the number of the participants in each project, “order” is the order of the teacher in a research project.

References

  1. Power, D.J.; Sharda, R. Model-driven decision support systems: Concepts and research directions. Decis. Support Syst. 2007, 43, 1044–1061. [Google Scholar] [CrossRef]
  2. Musselin, C. How peer review empowers the academic profession and university managers: Changes in relationships between the state, universities and the professoriate. Res. Policy 2013, 42, 1165–1173. [Google Scholar] [CrossRef]
  3. Nederhof, A.J.; Van Raan, A.F.J. A bibliometric analysis of six economics research groups: A comparison with peer review. Res. Policy 1993, 22, 353–368. [Google Scholar] [CrossRef]
  4. Bini, D.A.; Del Corso, G.M.; Romani, F. A combined approach for evaluating papers. J. Comput. Appl. Math. 2010, 234, 3104–3121. [Google Scholar] [CrossRef]
  5. Fersht, A. The most influential journals: Impact Factor and Eigenfactor. PNAS 2009, 106, 6883–6884. [Google Scholar] [CrossRef] [PubMed]
  6. Bornmann, L.; Mutz, R.; Daniel, H.D. Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. J. Am. Soc. Inf. Sci. Technol. 2008, 59, 830–837. [Google Scholar] [CrossRef]
  7. Jin, B.; Liang, L.; Rousseau, R.; Egghe, L. The R-and AR-indices: Complementing the h-index. Chin. Sci. Bull. 2007, 52, 855–863. [Google Scholar] [CrossRef]
  8. Egghe, L. Theory and practise of the g-index. Scientometrics 2006, 69, 131–152. [Google Scholar] [CrossRef]
  9. Xu, J.P.; Li, Z.M.; Shen, W.J.; Lev, B. Multi-attribute comprehensive evaluation of individual research output based on published research papers. Knowl.-Based Syst. 2013, 43, 135–142. [Google Scholar] [CrossRef]
  10. Li, Z.M.; Liechty, M.; Xu, J.P.; Lev, B. A fuzzy multi-criteria group decision making method for individual research output evaluation with maximum consensus. Knowl.-Based Syst. 2014, 56, 253–263. [Google Scholar] [CrossRef]
  11. Chen, M.; Wan, Z.; Chen, X. New min-max approach to optimal choice of the weights in multi-criteria group decision-making problems. Appl. Sci. 2015, 5, 998–1015. [Google Scholar] [CrossRef]
  12. Hwang, C.L.; Yoon, K. Multiple Attributes Decision Making Methods and Applications; Springer: Berlin/Heiddberg, Germany, 1981. [Google Scholar]
  13. Yu, L.P.; Chen, Y.Q.; Pan, Y.T.; Wu, Y.S. Research on the evaluation of academic journals based on structural equation modeling. J. Informetr. 2009, 3, 304–311. [Google Scholar]
  14. Shannon, C.E. A mathematical theory of communication. ACM SIGMOBILE Mob. Comput. Commun. Rev. 2001, 5, 3–55. [Google Scholar] [CrossRef]
  15. Carazo, A.F.; Contreras, I.; Gómez, T.; Pérez, F. A project portfolio selection problem in a group decision-making context. J. Ind. Manag. Optim. 2012, 8, 243–261. [Google Scholar] [CrossRef]
  16. Xu, Y.J.; Da, Q.L. Standard and mean deviation methods for linguistic group decision making and their applications. Expert Syst. Appl. 2010, 37, 5905–5912. [Google Scholar] [CrossRef]
  17. Liu, S.; Chan, F.T.S.; Ran, W. Multi-attribute group decision-making with multi-granularity linguistic assessment information: An improved approach based on deviation and TOPSIS. Appl. Math. Model. 2013, 37, 10129–10140. [Google Scholar] [CrossRef]
  18. Markowitz, H. Portfolio selection. J. Financ. 1952, 7, 77–91. [Google Scholar] [CrossRef]
  19. Merton, R.C. Optimum consumption portfolio rules in a continuous time model. J. Econ. Theory 1971, 3, 373–413. [Google Scholar] [CrossRef]
  20. Morey, M.R.; Morey, R.C. Mutual fund performance appraisals: A multihorizon perspective with endogenous benchmarking. Omega 1999, 27, 241–258. [Google Scholar] [CrossRef]
  21. Chen, S.; Li, X.; Zhou, X.Y. Stochastic linear quadratic regulators with indefinite control weight costs. SIAM J. Control Optim. 1998, 36, 1685–1702. [Google Scholar] [CrossRef]
Figure 1. Prediction error of the revenue rates.
Table 1. Distribution of contribution for m = 3.

| Credit Functions | 1st | 2nd | 3rd |
| f1(x) | 0.500 | 0.333 | 0.167 |
| f2(x) | 0.545 | 0.273 | 0.182 |
| f3(x) | 0.571 | 0.286 | 0.143 |

Table 2. Distribution of contribution for m = 4.

| Credit Functions | 1st | 2nd | 3rd | 4th |
| f1(x) | 0.400 | 0.300 | 0.200 | 0.100 |
| f2(x) | 0.480 | 0.240 | 0.160 | 0.120 |
| f3(x) | 0.533 | 0.267 | 0.133 | 0.067 |

Table 3. Distribution of contribution for m = 5.

| Credit Functions | 1st | 2nd | 3rd | 4th | 5th |
| f1(x) | 0.333 | 0.267 | 0.200 | 0.133 | 0.067 |
| f2(x) | 0.438 | 0.219 | 0.146 | 0.109 | 0.088 |
| f3(x) | 0.516 | 0.258 | 0.129 | 0.065 | 0.032 |

Table 4. Distribution of contribution for m = 6.

| Credit Functions | 1st | 2nd | 3rd | 4th | 5th | 6th |
| f1(x) | 0.286 | 0.238 | 0.190 | 0.143 | 0.095 | 0.048 |
| f2(x) | 0.408 | 0.204 | 0.136 | 0.102 | 0.082 | 0.068 |
| f3(x) | 0.508 | 0.254 | 0.127 | 0.064 | 0.032 | 0.015 |
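The definitions of the three credit functions are not reproduced in this excerpt, but closed forms consistent with Tables 1–4 can be reconstructed: f1 decreases linearly with the author’s position, f2 is proportional to 1/k, and f3 halves with each position. A minimal sketch, where the closed forms below are assumptions inferred from the tabulated values rather than the authors’ stated definitions:

```python
from fractions import Fraction

def f1(k, m):
    # Linear credit: proportional to (m + 1 - k), normalized to sum to 1.
    return Fraction(2 * (m + 1 - k), m * (m + 1))

def f2(k, m):
    # Harmonic credit: proportional to 1/k, normalized by the harmonic number H_m.
    h_m = sum(Fraction(1, j) for j in range(1, m + 1))
    return Fraction(1, k) / h_m

def f3(k, m):
    # Geometric credit: halves with each position, normalized by 2^m - 1.
    return Fraction(2 ** (m - k), 2 ** m - 1)

# Reproduce the f1 row of Table 1 (m = 3).
print([round(float(f1(k, 3)), 3) for k in (1, 2, 3)])  # → [0.5, 0.333, 0.167]
```

Each function distributes a total credit of 1 over the m participants, so the credits of all co-authors of one output always sum to one.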
Table 5. Published papers of A and B.

| Teacher | Papers (NA, Order) |
| A | (3, 3); (1, 1) |
| B | (4, 4); (4, 4); (4, 4); (4, 4); (4, 4); (1, 1) |
In Table 5, the published papers are listed, where “NA” is the number of authors of each paper and “Order” is the teacher’s position in the paper’s author list.
Table 6. Ranking by published papers.

| Teacher | Number | f1 MNP | f1 Rank | f2 MNP | f2 Rank | f3 MNP | f3 Rank |
| A | 2 | 1.5 | 1 | 1.545 | 2 | 1.571 | 1 |
| B | 6 | 1.5 | 1 | 1.6 | 1 | 1.335 | 2 |
In Table 6, “Number” is the raw count of papers and “MNP” is the modified number of papers for each teacher under the corresponding credit function.
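An MNP value is obtained by summing, over a teacher’s papers, the credit that the chosen function assigns to his or her author position. A minimal sketch using teacher B’s papers from Table 5 (the linear form of f1 below is a reconstruction consistent with Tables 1–4, not the authors’ stated definition):

```python
def f1(k, m):
    # Linear credit for author k of m, reconstructed from the Table 1-4 values.
    return 2 * (m + 1 - k) / (m * (m + 1))

def mnp(papers, credit):
    # Modified number of papers: sum of the credits over all (NA, Order) pairs.
    return sum(credit(order, na) for na, order in papers)

# Teacher B in Table 5: five papers as last of four authors, plus one solo paper.
papers_b = [(4, 4)] * 5 + [(1, 1)]
print(round(mnp(papers_b, f1), 3))  # → 1.5, matching B's f1 entry in Table 6
```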
Table 7. Sampling on random revenue rate.

| Assets | Period 1 | Period 2 | Period 3 | Period 4 |
| A1 | 0.4300 | 0.4698 | 0.4164 | 0.5896 |
| A2 | 0.4702 | 0.4838 | 0.4888 | 0.4376 |
| A3 | 0.4357 | 0.4828 | 0.5703 | 0.4625 |
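The sampled revenue rates of Table 7 are the input to the portfolio application, and basic per-asset statistics can be read off directly. A minimal sketch of the plain sample mean and population variance per asset; the paper’s own predictor for the revenue rates (whose error Figure 1 reports) is not reproduced in this excerpt:

```python
from statistics import mean, pvariance

# Revenue-rate samples from Table 7, one row per asset over four periods.
rates = {
    "A1": [0.4300, 0.4698, 0.4164, 0.5896],
    "A2": [0.4702, 0.4838, 0.4888, 0.4376],
    "A3": [0.4357, 0.4828, 0.5703, 0.4625],
}

for asset, r in rates.items():
    # Mean revenue rate and its dispersion across the four periods.
    print(asset, round(mean(r), 4), round(pvariance(r), 6))
```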
Table 8. Standardized IRO by f1.

| Teachers | Papers/Rank | Awards/Rank | Projects/Rank |
| A1 | 8.557/1 | 2.758/1 | 0.844/2 |
| A2 | 5.357/3 | 2.203/2 | 1.911/1 |
| A3 | 7.746/2 | 1.877/3 | 0.605/3 |

Table 9. Standardized IRO by f2.

| Teachers | Papers/Rank | Awards/Rank | Projects/Rank |
| A1 | 7.889/1 | 2.840/1 | 1.251/2 |
| A2 | 5.469/3 | 2.265/2 | 2.433/1 |
| A3 | 7.362/2 | 1.709/3 | 0.590/3 |

Table 10. Standardized IRO by f3.

| Teachers | Papers/Rank | Awards/Rank | Projects/Rank |
| A1 | 8.191/1 | 3.140/1 | 1.754/2 |
| A2 | 5.655/3 | 2.536/2 | 2.911/1 |
| A3 | 7.506/2 | 1.587/3 | 0.612/3 |
Table 11. Quality level (QL) and number of the projects.

| Enterprises | QL | f1 | f2 | f3 |
| A | 58 | 3.122 | 3.738 | 3.524 |
| B | 56 | 3.283 | 3.642 | 3.213 |
| C | 54 | 4 | 4 | 4 |
Table 12. Final rank with the same quality level.

| Enterprises | f1 Score | f1 Rank | f2 Score | f2 Rank | f3 Score | f3 Rank |
| A | 0.7805 | 3 | 0.9345 | 2 | 0.8810 | 2 |
| B | 0.8207 | 2 | 0.9105 | 3 | 0.8032 | 3 |
| C | 1.001 | 1 | 1.011 | 1 | 1.212 | 1 |
Table 13. Final rank with the different quality levels.

| Enterprises | f1 Score | f1 Rank | f2 Score | f2 Rank | f3 Score | f3 Rank |
| A | 0.7987 | 2 | 0.8004 | 2 | 0.8021 | 2 |
| B | 0.9558 | 1 | 0.9911 | 1 | 0.9669 | 1 |
| C | 0.6210 | 3 | 0.6034 | 3 | 0.6098 | 3 |
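The scores in Tables 12 and 13 come from a TOPSIS-type procedure: each alternative is scored by its relative closeness to an ideal solution. A generic sketch of the standard TOPSIS closeness coefficient follows; the decision matrix and equal weights below are illustrative placeholders rather than the paper’s data, and the paper evidently uses a modified variant, since some of its scores exceed 1:

```python
import math

def topsis_scores(matrix, weights):
    """Standard TOPSIS: vector-normalize, weight, and score each alternative
    by its relative closeness to the ideal solution (all criteria treated
    as benefit criteria)."""
    n_crit = len(matrix[0])
    # Vector normalization per criterion.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # positive ideal solution
    anti = [min(col) for col in zip(*v)]    # negative ideal solution
    scores = []
    for row in v:
        d_plus = math.dist(row, ideal)
        d_minus = math.dist(row, anti)
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Illustrative 3 enterprises x 3 criteria matrix (placeholder data).
matrix = [[3.122, 3.738, 3.524],
          [3.283, 3.642, 3.213],
          [4.0, 4.0, 4.0]]
scores = topsis_scores(matrix, [1 / 3] * 3)
```

In this placeholder matrix the third alternative dominates every criterion, so it coincides with the ideal solution and receives the maximum closeness score of 1.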

Chen, M.; Wan, Z. New Nonlinear Metrics Model for Information of Individual Research Output and Its Applications. Math. Comput. Appl. 2016, 21, 26. https://doi.org/10.3390/mca21030026