Article

A Cross-Efficiency Evaluation Method Based on Evaluation Criteria Balanced on Interval Weights

1 School of Electronic Information Science, Fujian Jiangxia University, Fuzhou 350108, China
2 College of Business, Wuchang University of Technology, Wuhan 350116, China
3 Department of Mathematics and Physics, Fujian Jiangxia University, Fuzhou 350108, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(12), 1503; https://doi.org/10.3390/sym11121503
Submission received: 21 November 2019 / Revised: 6 December 2019 / Accepted: 9 December 2019 / Published: 11 December 2019

Abstract:
Cross-efficiency evaluation approaches and common set of weights (CSW) approaches have long been suggested as two of the more important and effective methods for the ranking of decision making units (DMUs) in data envelopment analysis (DEA). The former emphasizes the flexibility of evaluation and its weights are asymmetric, while the latter focuses on the standardization of evaluation and its weights are symmetrical. As a compromise between these two approaches, this paper proposes a cross-efficiency evaluation method that is based on two types of flexible evaluation criteria balanced on interval weights. The evaluation criteria can be regarded as macro policy—or means of regulation—according to the industry’s current situation. Unlike current cross-efficiency evaluation methods, which tend to choose the set of weights for peer evaluation based on certain preferences, the cross-efficiency evaluation method based on evaluation criterion determines one set of input and output weights for each DMU. This is done by minimizing the difference between the weights of the DMU and the evaluation criteria, thus ensuring that the cross-evaluation of all DMUs for evaluating peers is as consistent as possible. This method also eliminates prejudice and arbitrariness from peer evaluations. As a result, the proposed cross-efficiency evaluation method not only looks for non-zero weights, but also ranks efficient DMUs completely. The proposed DEA model can be further extended to seek a common set of weights for all DMUs. Numerical examples are provided to illustrate the applications of the cross-efficiency evaluation method based on evaluation criterion in DEA ranking.

1. Introduction

Data envelopment analysis (DEA) is a practical methodology originally proposed by Charnes et al. [1]. Since then, DEA has been widely studied and applied all over the world, and the method has been further developed and expanded by many scholars; recent outstanding studies can be found in the literature [2,3,4,5,6,7,8]. DEA is used to evaluate the performance of a group of decision making units (DMUs) that use multiple inputs to produce multiple outputs. The DEA method allows each DMU to evaluate its own efficiency and to assign the weights that are most favorable to itself. The resulting efficiency is the optimistic efficiency of the DMU and cannot be greater than 1. If the efficiency value of a DMU equals 1, the DMU is called DEA-efficient; otherwise, it is considered non-DEA-efficient.
The traditional DEA method has two major drawbacks. The first is a lack of discrimination, and the second is the existence of unrealistic weights. The DEA method allows each DMU to evaluate its efficiency with the most favorable weights. In this way, more than one DMU is often evaluated as DEA efficient, and these DMUs cannot be further distinguished. Therefore, the lack of discrimination is one of the main defects of the DEA method. This also leads to another important problem. The input and output that is beneficial to a particular DMU will be weighted heavily, while the input and output that is unfavorable to the DMU will be weighted lightly, or even ignored. As a result, weighting for self-assessment can sometimes be unrealistic.
Studies to overcome the weakness of DEA’s discrimination power fall into two streams. One remedy is the cross-efficiency method suggested by Sexton et al. [9], which introduces a secondary goal. The most commonly used methods include the benevolent and aggressive cross-efficiency assessments proposed by Doyle and Green [10], both of which are calculated using weights that are benevolent or aggressive towards peers. Wang & Chin [11] proposed a neutral cross-efficiency evaluation method, in which the attitude of the decision makers is neutral and there is no need to choose between the benevolent and aggressive formulations. Liang et al. [12] put forward the game cross-efficiency evaluation method: using the idea of game theory, each DMU is regarded as an independent player that bargains over the optimistic efficiencies. In addition, Jahanshahloo et al. [13] proposed a symmetric weight assignment technique, which rewards DMUs that select symmetric weights. Wu et al. [6] proposed a DEA model with balanced weights, whose secondary goal is to reduce the number of zero weights and the large differences in weighted data. Ruiz [14] proposed a cross-efficiency evaluation based on the directional distance function, formulated as fractional programming. Cook and Zhu [15] proposed a units-invariant multiplicative DEA model, which directly obtains the maximum and unique cross-efficiency score of each DMU. Wu [7] proposed the use of a target identification model as a means of obtaining the reachable targets of all DMUs, together with several secondary-objective models for weight selection that consider the desirable and undesirable cross-efficiency targets of all DMUs. Other cross-efficiency evaluation methods are discussed in Wu and Chu [8], Oral et al. [16], Oukil [17], Carrillo [18], and Shi et al. [19].
Another remedy is the common set of weights (CSW) approach in DEA, first suggested by Cook et al. [20], which utilizes the idea of common weights to measure the relative efficiency of highway maintenance patrols. Years later, this line of study was further developed by Jahanshahloo et al. [21], Kao & Hung [22], and Liu & Peng [23]. In more recent studies, Amir et al. [24] propose a novel TCO-based model in which a common set of weights imprecise DEA (CSW-IDEA) is used to address the managerial and technical issues of handling weighting schemes and imprecise data. Hossein et al. [25] suggest a novel method for determining the CSWs in a multi-period DEA: the CSW problem is formulated as a multi-objective fractional programming problem, and a multi-period form of the problem is then formulated, in which the mean efficiency of the DMUs is maximized while their efficiency variances are minimized. The CSW approaches have been developed to find a common set of weights for all DMUs, in order to overcome the shortcomings of the weight-flexibility method, in which each DMU can take its own most desirable weights.
From the literature review above, all the cross-efficiency evaluation methods are formulated so that each DMU chooses one set of weights, determined by the CCR model (the self-evaluation model proposed by Charnes, Cooper and Rhodes), which has alternative optimal solutions. When a DMU evaluates its peers, it selects one ideal set from these alternative weights by optimizing a secondary objective function from various angles. In other words, cross-evaluation is an evaluation method that determines a set of weights for each DMU, in order to rate itself and its peers in consideration of the diversity of DMUs. The common weight evaluation method, which determines a common set of weights as the common evaluation criterion, is used to evaluate each DMU without considering the flexibility of the DMUs, and is thus a non-differentiating evaluation method.
In terms of practical applications, differences between homogeneous DMUs still exist, such as scale, history, culture, and region. Common weight evaluation is obviously unfair, because this method does not take into account the differences of the DMUs. However, there are also shortcomings in cross-evaluation, such as the fact that each DMU excessively enlarges the weights of its own superiority indicators and ignores the importance of input–output indicators, thus forming unrealistic and subjective evaluation conclusions. The combination of the two methods discussed above is more meaningful. That is, each DMU can take its own desirable weight, which is obtained under the constraints of an objective criterion. Therefore, we propose the cross-evaluation method based on evaluation criteria in this paper.
In this paper, we propose a series of DEA models for cross-efficiency evaluation based on evaluation criteria. The evaluation criteria may be formed based on the overall situation of the industry, or on the performance of some representative enterprises. Accordingly, two evaluation criteria are proposed, which are balanced on the interval weights of the input–output variables. One is based on the eclectic decision-making method, which aggregates the minimum of the upper limits and the maximum of the lower limits of the interval weights. The harmonic coefficient $\alpha$ is introduced into the eclectic decision-making method to increase the flexibility of the evaluation criteria. The other is based on weighted mathematical expectation. Because the importance of each DMU differs in cross-evaluation, we introduce the parameter $p_j$ $(j=1,2,\dots,n)$ into the evaluation criteria as a weight that reflects the position of a DMU; the mathematical expectations are weighted and summed with $p_j$ to form the evaluation criteria. The proposed method based on an evaluation criterion then determines one set of input and output weights for each DMU by minimizing the deviations of the input and output weights for peer evaluation from a standard criterion. In this way, aside from reducing zero weights, the weights for peer evaluation (which are closer and more concentrated) are more realistic to peers.
The rest of the paper is organized as follows. Section 2 describes the cross-efficiency evaluations, mainly including aggressive and benevolent formulations. The evaluation criterion balanced on interval weights is developed in Section 3. The DEA models for cross-efficiency evaluation based on evaluation criteria are extended in Section 4. Numerical examples are demonstrated in Section 5. Conclusions are offered in Section 6.

2. The Efficiency Evaluation

Suppose there are $n$ DMUs to be evaluated against $m$ inputs and $s$ outputs. Denote by $x_{ij}$ $(i=1,\dots,m)$ and $y_{rj}$ $(r=1,\dots,s)$ the input and output values of DMU$_j$ $(j=1,\dots,n)$, whose efficiencies are defined as follows. Consider a DMU, say DMU$_k$, $k\in\{1,\dots,n\}$, whose efficiency relative to the other DMUs can be measured by the following CCR model (Charnes et al. [1]):
\[
\begin{aligned}
\text{Maximize}\quad & \theta_{kk}=\frac{\sum_{r=1}^{s}u_{rk}y_{rk}}{\sum_{i=1}^{m}v_{ik}x_{ik}},\\
\text{subject to}\quad & \theta_{jk}=\frac{\sum_{r=1}^{s}u_{rk}y_{rj}}{\sum_{i=1}^{m}v_{ik}x_{ij}}\le 1,\quad j=1,\dots,n,\\
& u_{rk}\ge 0,\quad r=1,\dots,s,\\
& v_{ik}\ge 0,\quad i=1,\dots,m,
\end{aligned}\tag{1}
\]
which aims to find a set of input and output weights that are most favourable to DMU$_k$. By the Charnes and Cooper transformation, it can be equivalently transformed into the linear program (LP) below:
\[
\begin{aligned}
\text{Maximize}\quad & \theta_{kk}=\sum_{r=1}^{s}u_{rk}y_{rk},\\
\text{subject to}\quad & \sum_{i=1}^{m}v_{ik}x_{ik}=1,\\
& \sum_{r=1}^{s}u_{rk}y_{rj}-\sum_{i=1}^{m}v_{ik}x_{ij}\le 0,\quad j=1,\dots,n,\\
& u_{rk}\ge 0,\quad r=1,\dots,s,\\
& v_{ik}\ge 0,\quad i=1,\dots,m.
\end{aligned}\tag{2}
\]
Let $u_{rk}^{*}$ $(r=1,\dots,s)$ and $v_{ik}^{*}$ $(i=1,\dots,m)$ be the optimal solution to the above model. Then, $\theta_{kk}=\sum_{r=1}^{s}u_{rk}^{*}y_{rk}$ is referred to as the CCR-efficiency of DMU$_k$, which is the best relative efficiency that DMU$_k$ can achieve and reflects the self-evaluated efficiency of DMU$_k$. Likewise, $\theta_{jk}=\sum_{r=1}^{s}u_{rk}^{*}y_{rj}\big/\sum_{i=1}^{m}v_{ik}^{*}x_{ij}$ is referred to as a cross-efficiency of DMU$_j$ and reflects the peer evaluation of DMU$_k$ to DMU$_j$ $(j=1,\dots,n;\ j\ne k)$.
Model (2) is solved $n$ times, once for each DMU. As a result, the $n$ DMUs yield $n$ sets of input and output weights, and each DMU receives $(n-1)$ cross-efficiencies plus one CCR-efficiency, which together form a cross-efficiency matrix. The average cross-efficiency of DMU$_j$ is $\frac{1}{n}\sum_{k=1}^{n}\theta_{jk}$ $(j=1,\dots,n)$, where the diagonal entries $\theta_{jk}$ with $j=k$ are the CCR-efficiencies of the $n$ DMUs, that is, $\theta_{jk}=\theta_{kk}$ for $j=k$.
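As an illustrative sketch only (not part of the original study), the LP form (2) can be solved numerically with SciPy's `linprog`; the function name `ccr_efficiency` and the `(m, n)`/`(s, n)` array layout are our own conventions:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Solve the CCR model (2) for DMU k.

    X: (m, n) input matrix, Y: (s, n) output matrix.
    Returns (theta_kk, u, v): the CCR-efficiency and the optimal weights.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [u_1..u_s, v_1..v_m]; linprog minimises, so negate outputs.
    c = np.concatenate([-Y[:, k], np.zeros(m)])
    # Normalisation constraint: sum_i v_i x_ik = 1.
    A_eq = np.concatenate([np.zeros(s), X[:, k]]).reshape(1, -1)
    b_eq = [1.0]
    # Efficiency constraints: sum_r u_r y_rj - sum_i v_i x_ij <= 0 for all j.
    A_ub = np.hstack([Y.T, -X.T])
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return -res.fun, res.x[:s], res.x[s:]
```

Solving this once per DMU yields the diagonal of the cross-efficiency matrix; evaluating each DMU's ratio under another DMU's weights fills in the off-diagonal entries.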
Note that model (2) may have multiple optimal solutions. If the input and output weights are not unique, the usefulness of cross-efficiency evaluation is undermined. To solve this problem, a remedy proposed by Sexton et al. [9] is to introduce a secondary goal, which optimizes the input and output weights while maintaining the CCR-efficiency determined by model (2). Doyle and Green [10] proposed the most commonly used secondary goals, as follows:
\[
\begin{aligned}
\text{Minimize}\quad & \sum_{r=1}^{s}u_{rk}\Big(\sum_{j=1,j\ne k}^{n}y_{rj}\Big),\\
\text{subject to}\quad & \sum_{i=1}^{m}v_{ik}\Big(\sum_{j=1,j\ne k}^{n}x_{ij}\Big)=1,\\
& \sum_{r=1}^{s}u_{rk}y_{rk}-\theta_{kk}\sum_{i=1}^{m}v_{ik}x_{ik}=0,\\
& \sum_{r=1}^{s}u_{rk}y_{rj}-\sum_{i=1}^{m}v_{ik}x_{ij}\le 0,\quad j=1,\dots,n;\ j\ne k,\\
& u_{rk}\ge 0,\ r=1,\dots,s,\qquad v_{ik}\ge 0,\ i=1,\dots,m,
\end{aligned}\tag{3}
\]
and
\[
\begin{aligned}
\text{Maximize}\quad & \sum_{r=1}^{s}u_{rk}\Big(\sum_{j=1,j\ne k}^{n}y_{rj}\Big),\\
\text{subject to}\quad & \sum_{i=1}^{m}v_{ik}\Big(\sum_{j=1,j\ne k}^{n}x_{ij}\Big)=1,\\
& \sum_{r=1}^{s}u_{rk}y_{rk}-\theta_{kk}\sum_{i=1}^{m}v_{ik}x_{ik}=0,\\
& \sum_{r=1}^{s}u_{rk}y_{rj}-\sum_{i=1}^{m}v_{ik}x_{ij}\le 0,\quad j=1,\dots,n;\ j\ne k,\\
& u_{rk}\ge 0,\ r=1,\dots,s,\qquad v_{ik}\ge 0,\ i=1,\dots,m.
\end{aligned}\tag{4}
\]
Model (3) is called the aggressive formulation of cross-efficiency evaluation, which aims to minimize the cross-efficiencies of peers in some way. Model (4), by contrast, is called the benevolent formulation; to some extent, it improves the cross-efficiencies of the other DMUs. These two models optimize the input and output weights in two different ways, so there is no guarantee that they lead to the same efficiency ranking or conclusion for the $n$ DMUs.
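Under the same assumed data layout as before, the aggressive formulation (3) can be sketched as a second LP; the helper name `aggressive_weights` is hypothetical, and `theta_kk` is the CCR-efficiency obtained beforehand from model (2):

```python
import numpy as np
from scipy.optimize import linprog

def aggressive_weights(X, Y, k, theta_kk):
    """Model (3): weights for DMU k that minimise aggregate peer output
    while keeping DMU k's own CCR-efficiency theta_kk fixed."""
    m, n = X.shape
    s = Y.shape[0]
    others = [j for j in range(n) if j != k]
    # Objective: minimise sum_r u_r * (aggregate output of the peers).
    c = np.concatenate([Y[:, others].sum(axis=1), np.zeros(m)])
    A_eq = np.vstack([
        np.concatenate([np.zeros(s), X[:, others].sum(axis=1)]),  # aggregate input = 1
        np.concatenate([Y[:, k], -theta_kk * X[:, k]]),           # keep CCR-efficiency
    ])
    b_eq = [1.0, 0.0]
    # Peer constraints: sum_r u_r y_rj - sum_i v_i x_ij <= 0, j != k.
    A_ub = np.hstack([Y[:, others].T, -X[:, others].T])
    b_ub = np.zeros(n - 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x[:s], res.x[s:]
```

The benevolent formulation (4) only flips the sign of the objective vector `c`.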
In addition, other secondary goals or models are mentioned in the DEA literature. Interested readers may refer to Sexton et al. [9], Liang et al. [12], Wang et al. [26,27], and Jahanshahloo et al. [13]. These secondary goals focus on how to uniquely determine the input and output weights. In the next section, we focus on the diversity of input and output weights and develop some alternative DEA models to minimize the weight differences used to evaluate peers. This enables cross-efficiency to be evaluated with more reasonable input and output weights.

3. Evaluation Criteria Balanced on the Interval Weights of n DMUs

From our perspective, when a DMU is given the opportunity to unilaterally decide upon a set of input and output weights for evaluating peers, in addition to being as favourable as possible to itself, the DMU tends to have specific preferences when choosing the set of weights. This preferential choice of weights leads to an unfair and arbitrary situation for peers. Therefore, we need to establish evaluation criteria to ensure the relative consistency of cross-evaluation for peers by eliminating prejudice. That is, we not only look for non-zero weights, but also seek to ensure that the weights for evaluating peers are as close as possible, by taking a certain evaluation criterion as a reference point. In other words, we seek to minimize the difference between the weights of each DMU and the evaluation criteria. Two evaluation criteria are proposed as follows:

3.1. The DEA Modes of Interval Weights

For a DMU, we obtain a set of maximum weights from among the alternative optima of the CCR model by maximizing the weight of each variable (including input and output variables). By contrast, a set of minimum weights is obtained by minimizing the weight of each variable. Consider an efficient DMU, say DMU$_k$. The maximum attainable value of $u_{rk}$ or $v_{ik}$ of DMU$_k$ can be obtained by solving the following model:
\[
\begin{aligned}
\text{Maximize}\quad & \delta_{1}u_{rk}+\delta_{2}v_{ik},\\
\text{subject to}\quad & \sum_{i=1}^{m}v_{ik}x_{ik}=1,\\
& \sum_{r=1}^{s}u_{rk}y_{rk}-\theta_{kk}\sum_{i=1}^{m}v_{ik}x_{ik}=0,\\
& \sum_{r=1}^{s}u_{rk}y_{rj}-\sum_{i=1}^{m}v_{ik}x_{ij}\le 0,\quad j=1,\dots,n;\ j\ne k,\\
& u_{rk}\ge 0,\ r=1,\dots,s,\qquad v_{ik}\ge 0,\ i=1,\dots,m,\\
& \delta_{1}+\delta_{2}=1,\qquad \delta_{1},\delta_{2}\in\{0,1\},
\end{aligned}\tag{5}
\]
where $\delta_1,\delta_2$ are Boolean variables with $\delta_1+\delta_2=1$. When $\delta_1=1$, the optimization goal is $u_{rk}$; when $\delta_2=1$, it is $v_{ik}$. Let $u_{rk}^{+}$ $(r=1,\dots,s)$ and $v_{ik}^{+}$ $(i=1,\dots,m)$ be the maximum attainable values (upper bounds) obtained by the above model. To obtain the minimum attainable values (lower bounds) of $u_{rk}$ and $v_{ik}$, one simply changes the objective function to minimizing $u_{rk}$ or $v_{ik}$, which yields $u_{rk}^{-}$ $(r=1,\dots,s)$ and $v_{ik}^{-}$ $(i=1,\dots,m)$. For $u_{rk}$ (and $v_{ik}$) of DMU$_k$, this method leads to an interval weight $[u_{rk}^{-},u_{rk}^{+}]$ (or $[v_{ik}^{-},v_{ik}^{+}]$); the $n$ DMUs then lead to $n$ interval weights $[u_{r1}^{-},u_{r1}^{+}],\dots,[u_{rj}^{-},u_{rj}^{+}],\dots,[u_{rn}^{-},u_{rn}^{+}]$ for $u_r$ $(r=1,2,\dots,s)$, and likewise to interval weights for $v_i$ $(i=1,2,\dots,m)$.
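Rather than toggling the Boolean pair $(\delta_1,\delta_2)$, an equivalent sketch loops over the weights and solves an LP twice per weight (minimise, then maximise). This is our own illustrative implementation, with `theta_kk` assumed to come from model (2); for simplicity the peer constraint is imposed for all $j$, which is harmless because the constraint for $j=k$ is already implied by the equalities:

```python
import numpy as np
from scipy.optimize import linprog

def interval_weights(X, Y, k, theta_kk):
    """Model (5): lower/upper attainable values of each weight of DMU k.

    Returns (lo, hi), each of length s + m, ordered [u_1..u_s, v_1..v_m].
    """
    m, n = X.shape
    s = Y.shape[0]
    A_eq = np.vstack([
        np.concatenate([np.zeros(s), X[:, k]]),          # sum_i v_i x_ik = 1
        np.concatenate([Y[:, k], -theta_kk * X[:, k]]),  # keep CCR-efficiency
    ])
    b_eq = [1.0, 0.0]
    A_ub = np.hstack([Y.T, -X.T])                        # peer constraints
    b_ub = np.zeros(n)
    lo, hi = np.zeros(s + m), np.zeros(s + m)
    for t in range(s + m):                               # one weight at a time
        c = np.zeros(s + m)
        c[t] = 1.0
        for sign, out in ((1.0, lo), (-1.0, hi)):        # minimise, then maximise
            res = linprog(sign * c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                          b_eq=b_eq, bounds=(0, None), method="highs")
            out[t] = sign * res.fun
    return lo, hi
```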

3.2. Evaluation Criteria Based on the Eclectic Decision-Making Method

There are multiple ways to build criteria for evaluating peers from the interval weights. The optimistic decision method would take the upper limits of the interval weights as the peer evaluation criterion, while the pessimistic decision method would take the lower limits. However, decision makers cannot be absolutely optimistic or pessimistic; they are more likely to be somewhere in between. Therefore, the eclectic decision-making method is introduced to form the peer evaluation criterion. We use $u_r^{+\min}$ to express the minimum of the upper limits, and $u_r^{-\max}$ to denote the maximum of the lower limits. Both $u_r^{+\min}$ and $u_r^{-\max}$ reflect the idea of the eclectic decision-making method, and their formulas are as follows:
\[
u_r^{+\min}=\min_{j}\big(u_{rj}^{+}\big),\qquad u_r^{-\max}=\max_{j}\big(u_{rj}^{-}\big),\qquad j=1,2,\dots,n.
\]
To make the evaluation criteria more realistic and flexible in terms of weights, a harmonic coefficient $\alpha$ is introduced to combine $u_r^{+\min}$ and $u_r^{-\max}$. The parameter $\alpha$ expresses a preference for $u_r^{+\min}$, and $1-\alpha$ acts as a damping coefficient reflecting the preference for $u_r^{-\max}$. We then use $\bar u_r$ to express the evaluation criterion of the $n$ DMUs for $u_{rj}$ $(j=1,2,\dots,n)$. The same procedure is applied to $v_{ij}$ $(j=1,2,\dots,n)$, where $\bar v_i$ denotes the evaluation criterion. In this way, one set of weights $(\bar u_r,\bar v_i)$, as shown in Formulas (6a) and (6b), is obtained as an evaluation criterion that balances the maximum and minimum attainable values of the $n$ DMUs in $u_{rj}$ and $v_{ij}$. This ensures that the evaluation criterion meets a variety of application requirements. This is the first evaluation criterion, based on the eclectic decision-making method (ECED for short):
\[
\bar u_r = u_r^{+\min}\,\alpha + u_r^{-\max}(1-\alpha),\qquad 0\le\alpha\le 1;\ r=1,2,\dots,s,\tag{6a}
\]
\[
\bar v_i = v_i^{+\min}\,\alpha + v_i^{-\max}(1-\alpha),\qquad 0\le\alpha\le 1;\ i=1,2,\dots,m.\tag{6b}
\]
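Formulas (6a) and (6b) reduce to elementwise minima, maxima, and a convex mix. A minimal sketch, assuming the $n$ DMUs' interval bounds are stacked as rows of two `(n, s+m)` arrays:

```python
import numpy as np

def eced_criterion(lo_w, up_w, alpha=0.5):
    """Formulas (6a)/(6b): eclectic criterion from n DMUs' interval weights.

    lo_w, up_w: (n, s+m) lower/upper interval bounds, one row per DMU.
    """
    u_plus_min = up_w.min(axis=0)    # minimum of the upper limits over the DMUs
    u_minus_max = lo_w.max(axis=0)   # maximum of the lower limits over the DMUs
    return alpha * u_plus_min + (1.0 - alpha) * u_minus_max
```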

3.3. Evaluation Criterion Based on Weighted Mathematical Expectation

If we denote the optimal solution of the above model (5) by $[u_{1j}^{-},u_{1j}^{+}],\dots,[u_{sj}^{-},u_{sj}^{+}],[v_{1j}^{-},v_{1j}^{+}],\dots,[v_{mj}^{-},v_{mj}^{+}]$ for the corresponding DMU$_j$, $j=1,2,\dots,n$, then solving model (5) $n$ times leads to $n$ sets of optimal solutions for the $n$ DMUs, which in turn form an interval weight matrix (IWM), shown as follows:
\[
IWM=\begin{bmatrix}
[u_{11}^{-},u_{11}^{+}] & \cdots & [u_{s1}^{-},u_{s1}^{+}] & [v_{11}^{-},v_{11}^{+}] & \cdots & [v_{m1}^{-},v_{m1}^{+}]\\
[u_{12}^{-},u_{12}^{+}] & \cdots & [u_{s2}^{-},u_{s2}^{+}] & [v_{12}^{-},v_{12}^{+}] & \cdots & [v_{m2}^{-},v_{m2}^{+}]\\
\vdots & & \vdots & \vdots & & \vdots\\
[u_{1n}^{-},u_{1n}^{+}] & \cdots & [u_{sn}^{-},u_{sn}^{+}] & [v_{1n}^{-},v_{1n}^{+}] & \cdots & [v_{mn}^{-},v_{mn}^{+}]
\end{bmatrix}.
\]
In practical applications, it is unfair and subjective to use the upper or lower bounds of the interval weights as the criterion for peer evaluation. Rather, an application should consider the level that most DMUs can reach. However, we cannot obtain sufficient knowledge of the weight distribution within the interval. The term probability distribution refers to the probability rule used to express the value of random variables. We may assume that the weights satisfy one or another form of probability distribution (such as the normal distribution or the uniform distribution). When a weight obeys a probability distribution, the mathematical expectation of that distribution best represents the interval weight used as a criterion for peer evaluation. According to the central limit theorem, random variables approximately obey the normal distribution when the sample size is large enough. Supposing that the weights satisfy the normal distribution in this paper, the mathematical-expectation matrix based on the interval weight matrix (IWM) is as follows:
\[
\begin{pmatrix}
\bar\mu_{11} & \cdots & \bar\mu_{s1} & \bar\eta_{11} & \cdots & \bar\eta_{m1}\\
\bar\mu_{12} & \cdots & \bar\mu_{s2} & \bar\eta_{12} & \cdots & \bar\eta_{m2}\\
\vdots & & \vdots & \vdots & & \vdots\\
\bar\mu_{1n} & \cdots & \bar\mu_{sn} & \bar\eta_{1n} & \cdots & \bar\eta_{mn}
\end{pmatrix},
\]
where $\bar\mu_{rj}$ $(r=1,2,\dots,s;\ j=1,2,\dots,n)$ is the mathematical expectation of the interval weight $[u_{rj}^{-},u_{rj}^{+}]$, and $\bar\eta_{ij}$ $(i=1,2,\dots,m;\ j=1,2,\dots,n)$ is the mathematical expectation of the interval weight $[v_{ij}^{-},v_{ij}^{+}]$, where
\[
\bar\mu_{rj}=\frac{u_{rj}^{-}+u_{rj}^{+}}{2},\qquad \bar\eta_{ij}=\frac{v_{ij}^{-}+v_{ij}^{+}}{2}.
\]
For DMU$_j$, $\bar\mu_{rj}$ and $\bar\eta_{ij}$ may not be real weights; they represent a compromise decision for the decision-makers, serving as an objective evaluation criterion. There are $n$ sets of mathematical expectations representing the evaluation criteria for the $n$ DMUs. As the importance of each DMU differs in cross-evaluation, let $p_j$ be the weight of DMU$_j$, which embodies the position of DMU$_j$. The evaluation criterion based on weighted mathematical expectation (ECWME) is then calculated as follows:
\[
\bar u_r = \bar\mu_{r1}p_1+\bar\mu_{r2}p_2+\dots+\bar\mu_{rn}p_n=\sum_{j=1}^{n}\bar\mu_{rj}p_j,\quad r=1,2,\dots,s;\tag{7a}
\]
\[
\bar v_i = \bar\eta_{i1}p_1+\bar\eta_{i2}p_2+\dots+\bar\eta_{in}p_n=\sum_{j=1}^{n}\bar\eta_{ij}p_j,\quad i=1,2,\dots,m,\tag{7b}
\]
where
\[
p_1+p_2+\dots+p_n=1.
\]
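Computationally, the ECWME criterion is just the $p$-weighted average of the interval midpoints. A sketch under the same stacked-array assumption used earlier (the function name is our own):

```python
import numpy as np

def ecwme_criterion(lo_w, up_w, p=None):
    """Formulas (7a)/(7b): criterion based on weighted mathematical expectation.

    lo_w, up_w: (n, s+m) interval bounds; p: DMU weights summing to 1
    (defaults to equal weights, p_j = 1/n).
    """
    n = lo_w.shape[0]
    p = np.full(n, 1.0 / n) if p is None else np.asarray(p, dtype=float)
    mid = (lo_w + up_w) / 2.0        # expectation of each interval weight
    return p @ mid                   # weighted sum over the n DMUs
```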

4. DEA Models for Cross-Efficiency Evaluation Based on Evaluation Criteria

It is well known that each DMU personally chooses the profile of weights to be used in the cross-efficiency evaluation, so the DMU’s choice is often prejudiced. One DMU’s attitude towards its peers may be aggressive, benevolent, indifferent, or something else, and such prejudicial attitudes need to be avoided in many applications. This is why the weights chosen by each DMU for peer evaluation should be based on an evaluation criterion, as stated in Section 3. We propose a method that makes a selection among the alternative optima of the CCR model by reducing, to the greatest extent possible, the deviation of the weights for peer evaluation from the evaluation criterion. In other words, our purpose is to look for the profile of weights for each DMU that is closest to the evaluation criterion. To do this, a nonlinear programming model is proposed as follows:
\[
\begin{aligned}
\text{Minimize}\quad & \sum_{r=1}^{s}\big|u_{rk}-\bar u_r\big|\,y_{rk}+\sum_{i=1}^{m}\big|v_{ik}-\bar v_i\big|\,x_{ik},\\
\text{subject to}\quad & \sum_{i=1}^{m}v_{ik}x_{ik}=1,\\
& \sum_{r=1}^{s}u_{rk}y_{rk}-\theta_{kk}\sum_{i=1}^{m}v_{ik}x_{ik}=0,\\
& \sum_{r=1}^{s}u_{rk}y_{rj}-\sum_{i=1}^{m}v_{ik}x_{ij}\le 0,\quad j=1,\dots,n;\ j\ne k,\\
& u_{rk}\ge 0,\ r=1,\dots,s,\qquad v_{ik}\ge 0,\ i=1,\dots,m,
\end{aligned}\tag{8}
\]
where $(\bar u_1,\bar u_2,\dots,\bar u_s,\bar v_1,\bar v_2,\dots,\bar v_m)$, obtained from Formulas (6a) and (6b) or Formulas (7a) and (7b), is the evaluation criterion for evaluating peers, and $(u_{1k},u_{2k},\dots,u_{sk},v_{1k},v_{2k},\dots,v_{mk})$ are the variables to be solved. The above model needs to be solved $n$ times, once for each DMU. The purpose of the model is to minimize the deviation of the input and output weights from the evaluation criterion. In other words, each DMU obtains one set of weights that is favorable to that DMU and is also as close as possible to the evaluation criterion for evaluating peers.
Model (8) is a form of nonlinear programming. To turn the nonlinear model (8) into a linear program, we introduce the new decision variables $\phi_r,\delta_i\ge 0$ $(r=1,2,\dots,s;\ i=1,2,\dots,m)$ and add the constraints $v_{ik}x_{ik}-\delta_i\le \bar v_i x_{ik}$, $v_{ik}x_{ik}+\delta_i\ge \bar v_i x_{ik}$, $u_{rk}y_{rk}-\phi_r\le \bar u_r y_{rk}$, and $u_{rk}y_{rk}+\phi_r\ge \bar u_r y_{rk}$ to the set of constraints. We then minimize the linear objective function $\sum_{r=1}^{s}\phi_r+\sum_{i=1}^{m}\delta_i$:
\[
\begin{aligned}
\text{Minimize}\quad & \sum_{r=1}^{s}\phi_{r}+\sum_{i=1}^{m}\delta_{i},\\
\text{subject to}\quad & \sum_{i=1}^{m}v_{ik}x_{ik}=1,\\
& \sum_{r=1}^{s}u_{rk}y_{rk}-\theta_{kk}\sum_{i=1}^{m}v_{ik}x_{ik}=0,\\
& \sum_{r=1}^{s}u_{rk}y_{rj}-\sum_{i=1}^{m}v_{ik}x_{ij}\le 0,\quad j=1,\dots,n;\ j\ne k,\\
& v_{ik}x_{ik}-\delta_{i}\le \bar v_i x_{ik},\qquad v_{ik}x_{ik}+\delta_{i}\ge \bar v_i x_{ik},\\
& u_{rk}y_{rk}-\phi_{r}\le \bar u_r y_{rk},\qquad u_{rk}y_{rk}+\phi_{r}\ge \bar u_r y_{rk},\\
& \phi_{r},\delta_{i}\ge 0,\\
& u_{rk}\ge 0,\ r=1,\dots,s,\qquad v_{ik}\ge 0,\ i=1,\dots,m.
\end{aligned}\tag{9}
\]
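Model (9) is an ordinary LP in the variables $(u,v,\phi,\delta)$ and can be sketched as follows. This is our own illustrative code: `u_bar` and `v_bar` are the criterion from Formulas (6) or (7), `theta_kk` comes from model (2), and for simplicity the peer constraint is written for all $j$ (the case $j=k$ is implied by the equalities):

```python
import numpy as np
from scipy.optimize import linprog

def criterion_weights(X, Y, k, theta_kk, u_bar, v_bar):
    """Model (9): weights for DMU k closest to the criterion (u_bar, v_bar)."""
    m, n = X.shape
    s = Y.shape[0]
    nv = 2 * (s + m)                                        # z = [u, v, phi, delta]
    c = np.concatenate([np.zeros(s + m), np.ones(s + m)])   # min sum(phi) + sum(delta)
    A_eq = np.zeros((2, nv))
    A_eq[0, s:s + m] = X[:, k]                              # sum_i v_i x_ik = 1
    A_eq[1, :s] = Y[:, k]
    A_eq[1, s:s + m] = -theta_kk * X[:, k]                  # keep CCR-efficiency
    b_eq = [1.0, 0.0]
    rows, rhs = [], []
    for j in range(n):                                      # peer constraints
        row = np.zeros(nv)
        row[:s], row[s:s + m] = Y[:, j], -X[:, j]
        rows.append(row); rhs.append(0.0)
    for r in range(s):                                      # phi_r >= |u_r - u_bar_r| y_rk
        for sign in (1.0, -1.0):
            row = np.zeros(nv)
            row[r] = sign * Y[r, k]
            row[s + m + r] = -1.0
            rows.append(row); rhs.append(sign * u_bar[r] * Y[r, k])
    for i in range(m):                                      # delta_i >= |v_i - v_bar_i| x_ik
        for sign in (1.0, -1.0):
            row = np.zeros(nv)
            row[s + i] = sign * X[i, k]
            row[s + m + s + i] = -1.0
            rows.append(row); rhs.append(sign * v_bar[i] * X[i, k])
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x[:s], res.x[s:s + m]
```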
If we denote the optimal solution of model (9) by $(u_{1k}^{*},u_{2k}^{*},\dots,u_{sk}^{*},v_{1k}^{*},v_{2k}^{*},\dots,v_{mk}^{*})$ for the corresponding DMU$_k$, then the cross-efficiency of a given DMU$_j$, with the profile of weights provided by DMU$_k$, is obtained as follows:
\[
E_{kj}=\frac{\sum_{r=1}^{s}u_{rk}^{*}y_{rj}}{\sum_{i=1}^{m}v_{ik}^{*}x_{ij}}.\tag{10}
\]
Therefore, the cross-efficiency score of DMUj is the average of these cross-efficiencies:
\[
\bar E_j=\frac{1}{n}\sum_{k=1}^{n}E_{kj}.\tag{11}
\]
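With the $n$ weight profiles from model (9) stacked as rows, Formulas (10) and (11) vectorise directly (an illustrative sketch under the same data-layout assumptions as the earlier snippets):

```python
import numpy as np

def cross_efficiency_scores(X, Y, U, V):
    """Formulas (10)/(11): cross-efficiency matrix and average scores.

    U: (n, s), V: (n, m) -- row k holds DMU k's weight profile.
    E[k, j] = sum_r u_rk y_rj / sum_i v_ik x_ij; the column mean of E
    is the cross-efficiency score of DMU j.
    """
    E = (U @ Y) / (V @ X)
    return E, E.mean(axis=0)
```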
Besides enabling cross-evaluation, the weights solved by model (9) are similar to the evaluation criteria derived from Formulas (6) and (7). The weights are relatively concentrated and their coefficient of variation is small, so they can be further extended to a common set of weights (CSW) based on the evaluation criteria for all DMUs. The extended CSW deviates little from the cross-evaluation weights solved by model (9), and is therefore easily accepted by each DMU.
Let $(u_{1k}^{*},u_{2k}^{*},\dots,u_{sk}^{*},v_{1k}^{*},v_{2k}^{*},\dots,v_{mk}^{*})$, solved by model (9), be the optimal weights of an efficient DMU$_k$, and let $(u_{1t}^{CCR},u_{2t}^{CCR},\dots,u_{st}^{CCR},v_{1t}^{CCR},v_{2t}^{CCR},\dots,v_{mt}^{CCR})$, solved by model (2), be the optimal weights of an inefficient DMU$_t$. Suppose there are $n_1$ efficient DMUs and $n_2$ inefficient DMUs. The CSW based on the evaluation criteria is obtained as follows:
\[
u_r^{CSW}=\frac{1}{n_1+n_2}\Big(\sum_{j=1}^{n_1}u_{rj}^{*}+\sum_{j=1}^{n_2}u_{rj}^{CCR}\Big),\tag{12a}
\]
\[
v_i^{CSW}=\frac{1}{n_1+n_2}\Big(\sum_{j=1}^{n_1}v_{ij}^{*}+\sum_{j=1}^{n_2}v_{ij}^{CCR}\Big).\tag{12b}
\]
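Formulas (12a) and (12b) average two stacked weight sets; a minimal sketch, assuming the $n_1$ criterion-based profiles and the $n_2$ CCR profiles are given as rows of two arrays:

```python
import numpy as np

def csw_weights(W_eff, W_ccr):
    """Formulas (12a)/(12b): common set of weights, averaging the
    criterion-based weights of the n1 efficient DMUs (rows of W_eff)
    and the CCR weights of the n2 inefficient DMUs (rows of W_ccr)."""
    return np.vstack([W_eff, W_ccr]).mean(axis=0)
```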
We treat inefficient and efficient DMUs separately because, for an inefficient DMU, the weights provided by the CCR model are the unique optimal solution. A similar train of thought can be seen in Ramón et al. (2011). This paper, however, differs from the idea proposed there. The cross-evaluation proposed in this paper lets the inefficient DMUs keep their weights from the CCR model, while for the efficient DMUs the weights for evaluating peers are obtained by minimizing the deviation from the evaluation criterion, rather than by reducing the differences between the weights of every pair of DMUs. In addition, the model proposed by Ramón et al. needs to be solved on the order of $n^2$ times, which is not suitable for a large number of DMUs. The measure of our approach is different: we focus on the deviation of the input and output weights from the evaluation criterion, and the model in this paper only needs to be solved $n$ times, which is more practical.

5. Numerical Examples

Example 1. 
In this section, we provide a numerical example to illustrate the proposed methods detailed above. We consider the data presented in Table 1, which describes the case of seven academic departments in a university, with three inputs and three outputs.
  • Input 1: Total number of academic staff (x1)
  • Input 2: Academic staff salaries in thousands of pounds (x2)
  • Input 3: Support staff salaries in thousands of pounds (x3)
  • Output 1: Total number of undergraduate students (y1)
  • Output 2: Total number of postgraduate students (y2)
  • Output 3: Total number of research papers (y3)
For a DMU, the maximum output and input weights can be obtained by solving model (5); these are taken as the upper bounds of the interval weights (UBIW). The minimum output and input weights can also be obtained by solving model (5); these are taken as the lower bounds of the interval weights (LBIW). Therefore, each DMU obtains a set of interval weights, as shown in Table 2.
Firstly, the cross-evaluation efficiency is discussed based on the eclectic decision-making evaluation criterion (ECED). The UBIW and LBIW aggregates are obtained by Formulas (6a) and (6b), as shown in the first and second rows of Table 3. To make sure the evaluation criteria fall into the interval weights of all DMUs (as much as possible), the ECED is calculated with α = 0.5, as shown in the last row of Table 3. Each DMU attempts to obtain a set of weights that is as close as possible to the cross-evaluation criterion, by minimizing its deviation from that criterion. When the evaluation criterion is the ECED, the cross-evaluation weights of each DMU solved by model (9) are shown in Table 4.
Secondly, another criterion is proposed, which is based on weighted mathematical expectation. There is a mathematical expectation regarding the arbitrary weight interval. Then, n sets of mathematical expectations are computed from Table 2 for n DMUs. These are seen in the second to seventh lines in Table 5. In order to be comparable to the evaluation criterion based on eclectic decision-making (ECED), we assume that each DMU has the same status, that is, p 1 = p 2 = = p 7 . Then, the evaluation criterion based on weighted mathematical expectation is shown in the last line of Table 5. In Table 6, we show the weights for peer evaluation as solved by model (9) for the seven academic departments, which are based on the ECWME.
When comparing Table 4 and Table 6, we can see that the weights of DMU4 have not changed. This is because DMU4, an inefficient DMU, retains its weights from the CCR model in the cross-evaluation of this paper. Next, we illustrate how the proposed approach outperforms the classic cross-efficiency evaluation methods in reducing zero weights. In Table 7, we show the weights solved by DEA model (2), that is, the CCR model. The cross-evaluation weights under the proposed method are limited by the constraints of the criteria, so the weights in Table 6 fluctuate around the evaluation criteria. The weights of the CCR model have no such restriction: each DMU only considers whether the weights are favourable, not whether they are realistic. Consequently, a large number of zero weights appear in Table 7, as is easy to see by comparing Table 6 and Table 7. In Table 8 and Table 9, we show the weights solved by the aggressive and benevolent cross-evaluations. It is particularly noticeable, when comparing Table 4, Table 6, Table 7, Table 8 and Table 9, that the number of zero weights is sharply reduced under the proposed ECED- and ECWME-based approach.
The cross-efficiency scores and the rankings based on all the methods mentioned in this paper are provided in Table 10. We find that the proposed ECED- and ECWME-based methods have more discrimination power than CCR-efficiency, and provide rankings that generally differ from those of the benevolent and aggressive cross-evaluations. This indicates that the proposed approaches represent a new method that can achieve an effective ranking of DMUs. In addition, the cross-efficiency scores and rankings of the proposed approach based on ECED are shown in Table 11 for harmonic coefficients α = 0.2, α = 0.5, and α = 0.8. The economic meaning of the proposed approach is that each DMU has its own inputs and outputs, which differ from those of the others, leading to different DMU rankings under different evaluation criteria. The criterion can thus serve as a means of macro regulation, set high or low according to the evaluation need, by adjusting α to 0.2, 0.5, 0.8, or any value between 0 and 1.
Evaluation criteria are often formulated according to the current state of an industry; they embody not the integration of all enterprises but that of certain representative ones. That is, each DMU plays a different role in the formation of the evaluation criteria. Therefore, we consider two ECWME situations in this example: p1 = p2 = p3 = p4 = p5 = p6 = p7 and 3p1 = 2p2 = 6p3 = 6p4 = 6p5 = 6p6 = 6p7. The cross-efficiency scores and rankings based on the ECWME under these two combinations of p are shown in Table 12.
The coefficient of variation (CV) of each DMU's weights is computed for the proposed ECED- and ECWME-based approaches and compared with the CCR model; the results are shown in Table 13. As can be seen, the mean CV of the weights provided by the proposed approaches is smaller than that of the weights provided by the CCR model, and the mean CV under the ECWME-based approach is smaller than that under the ECED-based approach. The CV of the weights based on ECED and ECWME changes with the coefficient α and the combination of p, respectively.
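The CV in Table 13 is the standard deviation of a set of weights divided by their mean; the paper does not state whether the sample or population standard deviation is used, so the sketch below assumes the population version:

```python
from statistics import mean, pstdev

def coefficient_of_variation(weights):
    """CV = population standard deviation / mean.  A small CV means the
    cross-evaluation weights cluster tightly (here, around the criterion)."""
    m = mean(weights)
    return pstdev(weights) / m

print(coefficient_of_variation([2, 4, 4, 4, 5, 5, 7, 9]))  # 0.4
```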
Obviously, the CV of the weights under the two methods is very small in every case, which means that the weights of the DMUs are very similar. These similar weights are further extended to a CSW, under which the scores and ranks based on ECED and ECWME are shown in Table 14. The results in Table 14 show that, in every case, the CSW achieves an effective ranking of the DMUs, although the rankings differ from one another.
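The paper derives the CSW from its own model extension; purely as an illustration of the idea, when the per-DMU weight vectors are nearly identical (small CV), a common set of weights can be approximated by component-wise averaging and then used to score every DMU with the same weights:

```python
def average_csw(V, U):
    """Illustrative stand-in for the paper's CSW extension: average the
    near-identical per-DMU input weights V and output weights U."""
    n = len(V)
    v_csw = [sum(v[k] for v in V) / n for k in range(len(V[0]))]
    u_csw = [sum(u[k] for u in U) / n for k in range(len(U[0]))]
    return v_csw, u_csw

def csw_scores(X, Y, v_csw, u_csw):
    """Score every DMU with the same (common) weights."""
    return [sum(u * y for u, y in zip(u_csw, Y[j])) /
            sum(v * x for v, x in zip(v_csw, X[j])) for j in range(len(X))]
```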
Example 2: 
In order to illustrate the rationality and feasibility of the method, data pertaining to 27 innovative machinery manufacturing enterprises in Fujian Province in 2015 are collected and evaluated in terms of four inputs and four outputs, which are defined below:
  • x1: R&D (research and development) personnel (ratio of R&D personnel to total personnel);
  • x2: Total expenditure on scientific and technological activities in the current year (in units of 10,000 yuan);
  • x3: Total expenditure on R&D of enterprises (in units of 10,000 yuan);
  • x4: Number of senior technicians and technicians at the end of the year (persons);
  • y1: Sales revenue of new products (services or processes) of enterprises in this year (in units of 10,000 yuan);
  • y2: Added value of enterprises in this year (in units of 10,000 yuan);
  • y3: Total profits realized by enterprises in this year (in units of 10,000 yuan);
  • y4: Total labor productivity of enterprises in this year (in units of 10,000 yuan/person).
The letters in each enterprise number are the initials of the name of the area in which the enterprise is located. For example, "PT" in "PT1" stands for Putian City of Fujian Province.
The application example involves 27 innovative machinery manufacturing enterprises located in different cities of Fujian Province. The CCR-efficiency of each enterprise is shown in the last column of Table 15. According to CCR-efficiency, 12 of the 27 sampled enterprises are DEA-efficient. The interval weights of these 12 DEA-efficient enterprises are obtained by solving model (5), as shown in Table A1 of the Appendix A. On the basis of these interval weights, 10 ECEDs, denoted by A, B, C, D, E, F, G, H, I, and J, are constructed for α = 1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, and 0.1, respectively. As α decreases, the value of the ECED increases gradually; that is, the ECED is the lowest of the 10 evaluation criteria when α = 1 and the highest when α = 0.1. The 10 ECEDs are shown in Table A2 of the Appendix A. If the 27 enterprises are evaluated in a non-differentiated way (for example, with a common-weight evaluation method), enterprises in different regions are not encouraged to carry out innovative activities according to local conditions. However, if they are evaluated in a fully differentiated way (for example, with the traditional cross-efficiency evaluation method), each enterprise can hide its own weaknesses and emphasize its own advantages, and the evaluation results will inevitably be unfair. Therefore, differentiated evaluation under evaluation criteria is carried out for the 27 enterprises, that is, cross-efficiency evaluation under the 10 criteria of Table A2 of the Appendix A. The comparison of the evaluation results is shown in Figure 1 and Figure 2.
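The criteria in Table A2 follow the same construction as the ECED of Example 1: per weight component, blend the smallest upper bound with the largest lower bound across the efficient DMUs. A short sketch of this construction, checked against the Example 1 figures of Table 2 and Table 3 (where the α = 0.5 criterion is the midpoint of the two bound rows); the code and names are ours:

```python
def eced(interval_weights, alpha):
    """ECED for one value of the harmonic coefficient alpha: for each weight
    component, alpha * (minimum upper bound over DMUs)
    + (1 - alpha) * (maximum lower bound over DMUs)."""
    n_comp = len(interval_weights[0])
    crit = []
    for k in range(n_comp):
        min_upper = min(w[k][1] for w in interval_weights)
        max_lower = max(w[k][0] for w in interval_weights)
        crit.append(alpha * min_upper + (1 - alpha) * max_lower)
    return crit

# Interval weights (v1, v2, v3, u1, u2, u3) of the six efficient DMUs (Table 2).
TABLE2 = [
    [(0, 79.59), (0, 2.50), (0, 50.00), (0, 15.58), (1.84, 28.57), (0, 39.71)],  # DMU1
    [(0, 52.33), (0, 1.33), (0, 2.52), (5.24, 7.19), (0, 6.21), (0, 6.79)],      # DMU2
    [(0, 11.60), (0, 0.37), (6.27, 14.28), (0, 4.44), (0, 3.01), (0, 13.33)],    # DMU3
    [(0, 22.22), (0, 0.50), (0, 1.69), (0, 1.63), (0, 6.89), (0, 7.69)],         # DMU5
    [(0, 52.22), (0, 1.37), (0, 14.27), (0, 7.57), (0, 10.99), (0, 22.22)],      # DMU6
    [(7.32, 24.39), (0, 0.29), (0, 0.53), (0, 3.28), (0, 6.29), (0, 5.41)],      # DMU7
]
criterion = eced(TABLE2, 0.5)
```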
We can see from Figure 1 that the cross-efficiency of the 27 enterprises varies with the criterion: it is generally higher under the lowest evaluation criterion (α = 1) and lower under the highest evaluation criterion (α = 0.1). This result shows that the evaluation criteria have a supervisory and regulatory effect on the cross-efficiency of enterprises. When the evaluation criterion is lower, the weight space available to the enterprises is larger, and each enterprise can select weights more favorable to itself; the cross-efficiencies are therefore generally higher. Conversely, when the weight space is smaller, the cross-efficiencies are lower. This conclusion is consistent with practice: the lower the evaluation criterion, the higher the measured performance of the enterprises.
According to Figure 2, the evaluation criteria also affect the rankings of the 27 enterprises: the rankings differ under almost every criterion. At the same time, no ranking changes to a significant degree across the criteria; in particular, the rankings of the top and bottom enterprises are relatively stable. This shows that the rankings obtained under the proposed method basically reflect the underlying strength of the enterprises.
By introducing the parameter p, the ECWME takes into account the importance of each DMU in the criterion formulation. In order to highlight how cross-efficiency changes under different ECWMEs, we simplify the evaluation criteria so that each criterion considers the importance of only one DEA-efficient enterprise. The weighted mathematical expectations of the 12 DEA-efficient enterprises thus form 12 evaluation criteria, obtained by adjusting the parameter p (see Table A3 of the Appendix A); the 12 ECWMEs are shown in Table A4 of the Appendix A. The cross-efficiency comparison chart of the 27 sampled machinery manufacturing enterprises based on the 12 ECWMEs is shown in Figure 3, and the ranking comparison chart is shown in Figure 4. In Figure 3, the cross-efficiency of each DMU changes to different degrees under the different criteria, but the trend is basically the same. This shows that the evaluation criteria have a regulatory effect on cross-efficiency, although the decisive factor remains each enterprise's own performance. In addition, in Figure 3, the cross-efficiency of each DMU is higher when DMU-14 is used as the evaluation criterion. Examining the criterion DMU-14 (see Table A4 of the Appendix A), its values are lower than those of the other evaluation criteria; again, the lower the evaluation criterion, the higher the cross-efficiency of the evaluated DMUs. In Figure 4, red dots mark the rankings of the 12 DEA-efficient enterprises under the 12 ECWMEs (see Table A4 of the Appendix A). For example, a small red dot marks the ranking of the DEA-efficient enterprise DMU5 under the ECWME denoted DMU-5, which considers only the importance of DMU5. From the distribution of red dots in Figure 4, it is not difficult to see that the ranking of DMU5 under ECWME DMU-5 is higher than its ranking under the other criteria.
The same conclusion holds for the other DEA-efficient enterprises, such as DMU6, DMU7, and DMU11. The reason is that, in cross-evaluation, the weights available to an enterprise (DMU5, DMU6, DMU7, DMU11, and so on) are subject to fewer constraints under the corresponding ECWME (DMU-5, DMU-6, DMU-7, DMU-11, and so on). That is, there is more room in the weight space, and each enterprise can obtain weights more favorable to itself. This result is also consistent with practice: the enterprise used as the evaluation criterion has an advantage in the comprehensive evaluation. Of course, in practice, few evaluation criteria consider the importance of only a single enterprise; more often, several mainstream or representative enterprises are taken as the evaluation criteria, which enhances the objectivity of the evaluation results and avoids evaluation subjectivity.
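The ECWME construction used throughout this example can be sketched in a few lines: each DMU contributes the midpoint (expectation) of its interval weights, and the criterion is the p-weighted average of these expectations. The sketch below is ours; it is checked against Example 1, where the equal-p criterion of Table 5 is recovered from the intervals of Table 2:

```python
def ecwme(interval_weights, p):
    """ECWME: p-weighted average of the interval-weight midpoints
    (expectations), one value per weight component."""
    total = sum(p)
    n_comp = len(interval_weights[0])
    return [sum(pd * (w[k][0] + w[k][1]) / 2
                for pd, w in zip(p, interval_weights)) / total
            for k in range(n_comp)]

# Interval weights (v1, v2, v3, u1, u2, u3) of the six efficient DMUs (Table 2).
TABLE2 = [
    [(0, 79.59), (0, 2.50), (0, 50.00), (0, 15.58), (1.84, 28.57), (0, 39.71)],  # DMU1
    [(0, 52.33), (0, 1.33), (0, 2.52), (5.24, 7.19), (0, 6.21), (0, 6.79)],      # DMU2
    [(0, 11.60), (0, 0.37), (6.27, 14.28), (0, 4.44), (0, 3.01), (0, 13.33)],    # DMU3
    [(0, 22.22), (0, 0.50), (0, 1.69), (0, 1.63), (0, 6.89), (0, 7.69)],         # DMU5
    [(0, 52.22), (0, 1.37), (0, 14.27), (0, 7.57), (0, 10.99), (0, 22.22)],      # DMU6
    [(7.32, 24.39), (0, 0.29), (0, 0.53), (0, 3.28), (0, 6.29), (0, 5.41)],      # DMU7
]
criterion = ecwme(TABLE2, [1] * 6)  # equal importance, as in Table 5
```

A point mass on one DMU (as in Table A3) simply returns that DMU's own expectations, which is why each single-enterprise ECWME favors the enterprise it is built from.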
From the analysis of these examples, we can see that, firstly, the level of the evaluation criteria affects the ranking of an enterprise's performance: low criteria help to implement enterprise incentive policies, while high criteria help with the macro-control of enterprises. Secondly, an enterprise ranks higher when it is taken into account in the formulation of the evaluation criteria. Taking multiple enterprises as the evaluation criteria is conducive to setting up market benchmarks, and taking the vast majority of enterprises as the evaluation criteria is conducive to creating a fair and freely competitive market environment.

6. Conclusions

Cross-efficiency evaluation is an important method for ranking DMUs. Existing DEA models for cross-efficiency evaluation tend to choose the sets of weights for peer evaluation according to subjective attitudes, without an objective evaluation criterion as a reference point. This makes the resulting DEA ranking subjective and difficult for the decision maker (DM) to justify.
To resolve these problems, this paper has proposed a cross-efficiency evaluation method based on evaluation criteria balanced on interval weights. The DEA model determines one set of weights for each DMU to evaluate its peers, by minimizing the distance from the DMU's CCR weights to the evaluation criterion, which is balanced on the interval weights. Different criteria embody different evaluation intentions, and the corresponding cross-efficiency rankings differ accordingly. On the basis of the interval weights, this paper proposes two types of flexible evaluation criteria. The first is the evaluation criterion based on the eclectic decision-making method (ECED), which can be adjusted by changing the harmonic coefficient α. The second is the evaluation criterion based on weighted mathematical expectation (ECWME), which takes into account the importance of each DMU in the criterion formulation through the parameter p. As a result, the cross-efficiencies computed with this method are more objective and flexible, meeting the requirements of macro regulation. We have also extended the DEA model and proposed a cross-weight evaluation, which seeks a common set of weights for all DMUs. The usefulness of the method has been illustrated with numerical examples.
From the results of the illustrative examples, it is particularly noticeable that the number of zero weights is sharply reduced and that the cross-evaluation weights are more objective. The method thus avoids situations in which decision makers choose overly subjective weights for some purpose. In addition, the proposed approach can lead to different DMU rankings under different evaluation criteria and has more discrimination power than the CCR-efficiency method. The cross-evaluation criteria in this paper can be regarded as a means of macro-control: they are derived from the real market situation and, at the same time, can be applied to macro-control market trends. In a market environment, any industry or chain enterprise should have established industry evaluation criteria or enterprise management objectives, and performance evaluation should be carried out with reference to them. This paper addresses both requirements: the two proposed evaluation criteria provide feasible means of formulating industry criteria or enterprise management objectives, and the cross-evaluation method based on these criteria provides methodological and theoretical support for performance evaluation that takes such criteria or objectives as references. The proposed approach can therefore be applied to a variety of evaluation problems, such as enterprise performance evaluation, school management, and the macro-control of banks. This work also has several limitations that should be addressed in future research. Firstly, the proposed approach assumes that DMUs are homogeneous, which limits its scope of application. Secondly, criteria for macro-control should be based on a large sample, whereas this paper uses a small one.
Readers interested in this research can therefore extend the approach by combining DEA with statistical methods that account for the heterogeneity of decision-making units.

Author Contributions

Conceptualization, H.S. and Y.W.; methodology, H.S.; software, H.S.; validation, H.S., Y.W. and X.Z.; formal analysis, H.S.; investigation, H.S.; resources, H.S.; data curation, H.S.; writing—original draft preparation, H.S.; writing—review and editing, H.S.; visualization, H.S.; supervision, H.S.; project administration, H.S.; funding acquisition, Y.W.

Funding

This work was supported by the National Natural Science Foundation of China (No. 71801048, 71801050, and 71371053), Natural Science Foundation of Fujian Province (No. 2018J01650), and Youth Foundation of Education Department of Fujian Province (JAT170634).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Weights of 12 efficient machinery manufacturing enterprises.
DMU    v1              v2              v3              v4              u1                u2                u3                 u4
DMU4   (0, 0.0824)     (0, 0.0058)     (0.029, 0.082)  (0, 0.55915)    (0, 0.0003)       (0, 0.0032)       (0, 0.00185)       (0, 0.4348)
DMU5   (0, 0.4453)     (0, 12.676)     (0, 0.0157)     (0, 0.55335)    (0, 0.0016)       (0, 0.0083)       (0, 0.04655)       (0, 1.35135)
DMU6   (0, 0.2795)     (0.013, 0.027)  (0, 0.0125)     (0, 0.01635)    (0, 0.0003)       (0, 0.0018)       (0.0048, 0.00665)  (0, 0.3408)
DMU7   (1.116, 1.40)   (0, 0.0016)     (0.008, 0.016)  (0, 0.0995)     (0.0005, 0.0008)  (0, 0.00115)      (0, 0.0008)        (0.982, 1.263)
DMU11  (0, 0.38425)    (0, 0.0086)     (0, 0.0084)     (0.076, 2.424)  (0, 0.00075)      (0.0005, 0.0012)  (0, 0.0092)        (0, 0.5782)
DMU13  (0, 0.5)        (0, 0.04005)    (0, 0.0094)     (0, 0.30675)    (0, 0.0004)       (0, 0.00295)      (0, 0.01035)       (0, 0.5804)
DMU14  (0.332, 0.507)  (0, 0.0023)     (0, 0.0025)     (0, 0.00485)    (0, 0.0002)       (0, 0.0004)       (0, 0.00135)       (0.276, 0.404)
DMU15  (0, 0.30715)    (0, 0.0053)     (0, 0.0054)     (0, 0.56935)    (0, 0.00035)      (0, 0.0006)       (0, 0.0022)        (0, 0.2597)
DMU16  (0, 0.4337)     (0, 0.025)      (0, 0.0248)     (0, 1.04525)    (0, 0.0009)       (0, 0.003)        (0, 0.00775)       (0.072, 0.439)
DMU19  (0, 0.27505)    (0, 0.0182)     (0, 0.0182)     (19.93, 20.03)  (0, 0.00175)      (0, 0.0138)       (0, 0.0641)        (0, 1.515)
DMU22  (0, 0.9381)     (0, 0.0149)     (0, 0.12195)    (0, 1.235)      (0, 0.0009)       (0, 0.01005)      (0, 0.00505)       (0, 1.465)
DMU26  (0.444, 0.687)  (0, 0.003)      (0, 0.003)      (0, 0.1071)     (0, 0.0003)       (0, 0.0009)       (0, 0.002)         (0, 0.418)
Table A2. Eclectic decision-making evaluation criterion (ECED) under different values of parameter α.
ECED         v1      v2       v3       v4        u1       u2       u3       u4
A (α = 1)    0.1648  0.0032   0.0054   0.0097    0.0004   0.0008   0.0016   0.5194
B (α = 0.9)  0.2599  0.01288  0.00779  2.01145   0.00041  0.00077  0.00192  0.56565
C (α = 0.8)  0.355   0.02256  0.01018  4.0132    0.00042  0.00074  0.00224  0.6119
D (α = 0.7)  0.4501  0.03224  0.01257  6.01495   0.00043  0.00071  0.00256  0.65815
E (α = 0.6)  0.5452  0.04192  0.01496  8.0167    0.00044  0.00068  0.00288  0.7044
F (α = 0.5)  0.6403  0.0516   0.01735  10.01845  0.00045  0.00065  0.0032   0.75065
G (α = 0.4)  0.7354  0.06128  0.01974  12.0202   0.00046  0.00062  0.00352  0.7969
H (α = 0.3)  0.8305  0.07096  0.02213  14.02195  0.00047  0.00059  0.00384  0.84315
I (α = 0.2)  0.9256  0.08064  0.02452  16.0237   0.00048  0.00056  0.00416  0.8894
J (α = 0.1)  1.0207  0.09032  0.02691  18.02545  0.00049  0.00053  0.00448  0.93565
Table A3. Combinations of p-value.
Evaluation Criterion  p4  p5  p6  p7  p11  p13  p14  p15  p16  p19  p22  p26
DMU-4                 1   0   0   0   0    0    0    0    0    0    0    0
DMU-5                 0   1   0   0   0    0    0    0    0    0    0    0
DMU-6                 0   0   1   0   0    0    0    0    0    0    0    0
DMU-7                 0   0   0   1   0    0    0    0    0    0    0    0
DMU-11                0   0   0   0   1    0    0    0    0    0    0    0
DMU-13                0   0   0   0   0    1    0    0    0    0    0    0
DMU-14                0   0   0   0   0    0    1    0    0    0    0    0
DMU-15                0   0   0   0   0    0    0    1    0    0    0    0
DMU-16                0   0   0   0   0    0    0    0    1    0    0    0
DMU-19                0   0   0   0   0    0    0    0    0    1    0    0
DMU-22                0   0   0   0   0    0    0    0    0    0    1    0
DMU-26                0   0   0   0   0    0    0    0    0    0    0    1
Table A4. ECWME under the different combinations of p in Table A3.
ECWME   v1       v2       v3       v4       u1       u2       u3       u4
DMU-4   0.0824   0.0058   0.08205  0.55915  0.0003   0.0032   0.00185  0.4348
DMU-5   0.4453   12.6763  0.0157   0.55335  0.0016   0.0083   0.04655  1.35135
DMU-6   0.2795   0.0274   0.0125   0.01635  0.0003   0.0018   0.00665  0.3408
DMU-7   1.39955  0.0016   0.01605  0.0995   0.0008   0.00115  0.0008   1.26305
DMU-11  0.38425  0.0086   0.0084   2.42355  0.00075  0.0012   0.0092   0.5782
DMU-13  0.5      0.04005  0.0094   0.30675  0.0004   0.00295  0.01035  0.58035
DMU-14  0.50725  0.0023   0.0025   0.00485  0.0002   0.0004   0.00135  0.4041
DMU-15  0.30715  0.0053   0.0054   0.56935  0.00035  0.0006   0.0022   0.2597
DMU-16  0.4337   0.025    0.0248   1.04525  0.0009   0.003    0.00775  0.4391
DMU-19  0.27505  0.0182   0.0182   20.0272  0.00175  0.01375  0.0641   1.51515
DMU-22  0.9381   0.01485  0.12195  1.23525  0.0009   0.01005  0.00505  1.4647
DMU-26  0.6865   0.003    0.00305  0.1071   0.0003   0.0009   0.002    0.41845

References

  1. Charnes, A.; Cooper, W.W.; Rhodes, E. Measuring the efficiency of decision making units. Eur. J. Oper. Res. 1978, 2, 429–444. [Google Scholar] [CrossRef]
  2. Banker, R.D.; Podinovski, V.V. Novel theory and methodology developments in data envelopment analysis. Ann. Oper. Res. 2017, 250, 1–3. [Google Scholar] [CrossRef] [Green Version]
  3. Hatami-Marbini, A.; Agrell, P.J.; Fukuyama, H.; Gholami, K.; Khoshnevis, P. The role of multiplier bounds in fuzzy data envelopment analysis. Ann. Oper. Res. 2017, 250, 249–276. [Google Scholar] [CrossRef]
  4. Wei, G.; Chen, J.; Wang, J. Stochastic efficiency analysis with a reliability consideration. Omega 2014, 48, 1–9. [Google Scholar] [CrossRef] [Green Version]
  5. Wei, G.; Wang, J. A comparative study of robust efficiency analysis and data envelopment analysis with imprecise data. Expert Syst. Appl. 2017, 81, 28–38. [Google Scholar] [CrossRef]
  6. Wu, J.; Sun, J.; Liang, L. Cross efficiency evaluation method based on weight-balanced data envelopment analysis model. Comput. Ind. Eng. 2012, 63, 513–519. [Google Scholar] [CrossRef]
  7. Wu, J.; Chu, J.; Sun, J.; Zhu, Q.; Liang, L. Extended secondary goal models for weights selection in DEA cross-efficiency evaluation. Comput. Ind. Eng. 2016, 93, 143–151. [Google Scholar] [CrossRef]
  8. Wu, J.; Chu, J.; Sun, J.; Zhu, Q. DEA cross-efficiency evaluation based on Pareto improvement. Eur. J. Oper. Res. 2016, 248, 571–579. [Google Scholar] [CrossRef]
  9. Sexton, T.R.; Silkman, R.H.; Hogan, A.J. Data envelopment analysis: Critique and extensions. New Dir. Program Eval. 1986, 32, 73–105. [Google Scholar] [CrossRef]
  10. Doyle, J.; Green, R. Efficiency and cross-efficiency in DEA: Derivations, meanings and uses. J. Oper. Res. Soc. 1994, 45, 567–578. [Google Scholar] [CrossRef]
  11. Wang, Y.M.; Chin, K.S. A neutral DEA model for cross-efficiency evaluation and its extension. Expert Syst. Appl. 2010, 37, 3666–3675. [Google Scholar] [CrossRef]
  12. Liang, L.; Wu, J.; Zhu, C.J. The DEA game cross-efficiency model and its Nash equilibrium. Oper. Res. 2008, 56, 1278–1288. [Google Scholar] [CrossRef] [Green Version]
  13. Jahanshahloo, G.R.; Lotfi, F.H.; Jafari, Y.; Maddahi, R. Selecting symmetric weights as a secondary goal in DEA cross-efficiency evaluation. Appl. Math. Model. 2011, 35, 544–549. [Google Scholar] [CrossRef]
  14. Ruiz, J.L. Cross-efficiency evaluation with directional distance functions. Eur. J. Oper. Res. 2013, 228, 181–189. [Google Scholar] [CrossRef]
  15. Cook, W.D.; Zhu, J. DEA Cobb–Douglas frontier and cross-efficiency. J. Oper. Res. Soc. 2014, 65, 265–268. [Google Scholar] [CrossRef]
  16. Oral, M.; Amin, G.R.; Oukil, A. Cross-efficiency in DEA: A maximum resonated appreciative model. Measurement 2015, 63, 159–167. [Google Scholar] [CrossRef]
  17. Oukil, A. Ranking via composite weighting schemes under a DEA cross-evaluation framework. Comput. Ind. Eng. 2018, 117, 217–224. [Google Scholar] [CrossRef]
  18. Carrillo, M.; Jorge, J.M. An alternative neutral approach for cross-efficiency evaluation. Comput. Ind. Eng. 2018, 120, 137–245. [Google Scholar] [CrossRef]
  19. Shi, H.; Wang, Y.; Chen, L. Neutral cross-efficiency evaluation regarding an ideal frontier and anti-ideal frontier as evaluation criteria. Comput. Ind. Eng. 2019, 132, 385–394. [Google Scholar] [CrossRef]
  20. Cook, W.D.; Roll, Y.; Kazakov, A. A DEA model for measuring the relative efficiencies of highway maintenance patrols. Inf. Syst. Oper. Res. 1990, 28, 113–124. [Google Scholar]
  21. Jahanshahloo, G.R.; Memariani, A.; Hosseinzadeh Lotfi, F.; Rezai, H.Z. A note on some of DEA models and finding efficiency and complete ranking using common set of weights. Appl. Math. Comput. 2005, 166, 265–281. [Google Scholar] [CrossRef]
  22. Kao, C.; Hung, H.T. Data envelopment analysis with common weights: The compromise solution approach. J. Oper. Res. Soc. 2005, 56, 1196–1203. [Google Scholar] [CrossRef]
  23. Liu, F.H.F.; Peng, H.H. Ranking of units on the DEA frontier with common weights. Comput. Oper. Res. 2008, 35, 1624–1637. [Google Scholar] [CrossRef]
  24. Shabani, A.; Visani, F.; Barbieri, P.; Dullaert, W.; Vigo, D. Reliable estimation of suppliers’ total cost of ownership: An imprecise data envelopment analysis model with common weights. Omega 2019, in press. [Google Scholar]
  25. Razavi Hajiagha, S.H.; Amoozad Mahdiraji, H.; Tavana, M.; Hashemi, S.S. A novel common set of weights method for multi-period efficiency measurement using mean-variance criteria. Measurement 2018, 129, 569–581. [Google Scholar]
  26. Wang, Y.M.; Chin, K.S.; Jiang, P. Weight determination in the cross-efficiency evaluation. Comput. Ind. Eng. 2011, 61, 497–502. [Google Scholar] [CrossRef]
  27. Wang, Y.M.; Chin, K.S.; Luo, Y. Cross-efficiency evaluation based on ideal and anti-ideal decision making units. Expert Syst. Appl. 2011, 38, 10312–10319. [Google Scholar] [CrossRef]
Figure 1. Comparison of cross-efficiency scores of 27 machinery manufacturing enterprises based on 10 evaluation criteria based on eclectic decision-making (ECED). DMU, decision-making unit.
Figure 2. Comparison of rankings of 27 machinery manufacturing enterprises based on 10 ECED.
Figure 3. Comparison of cross-efficiency scores of 27 machinery manufacturing enterprises based on 12 ECWMEs.
Figure 4. Comparison of rankings of 27 machinery manufacturing enterprises, based on 12 ECWMEs.
Table 1. Data and efficiency of Example 1.
DMU   x1  x2    x3   y1   y2   y3   CCR-Efficiency
DMU1  12  400   20   60   35   17   1
DMU2  19  750   70   139  41   40   1
DMU3  42  1500  70   225  68   75   1
DMU4  15  600   100  90   12   17   0.819
DMU5  45  2000  250  253  145  130  1
DMU6  19  730   50   132  45   45   1
DMU7  41  2350  600  305  159  97   1
Table 2. Weights of decision-making units (DMUs), solved by model (5).
DMU   v1             v2         v3             u1            u2             u3
DMU1  (0, 79.59)     (0, 2.50)  (0, 50.00)     (0, 15.58)    (1.84, 28.57)  (0, 39.71)
DMU2  (0, 52.33)     (0, 1.33)  (0, 2.52)      (5.24, 7.19)  (0, 6.21)      (0, 6.79)
DMU3  (0, 11.60)     (0, 0.37)  (6.27, 14.28)  (0, 4.44)     (0, 3.01)      (0, 13.33)
DMU5  (0, 22.22)     (0, 0.50)  (0, 1.69)      (0, 1.63)     (0, 6.89)      (0, 7.69)
DMU6  (0, 52.22)     (0, 1.37)  (0, 14.27)     (0, 7.57)     (0, 10.99)     (0, 22.22)
DMU7  (7.32, 24.39)  (0, 0.29)  (0, 0.53)      (0, 3.28)     (0, 6.29)      (0, 5.41)
Table 3. Eclectic decision-making evaluation criterion (ECED) of all DMUs. UBIW, upper bound of the interval weights; LBIW, lower bound of the interval weights.
                            v1      v2    v3    u1    u2    u3
Minimum of UBIW             11.602  0.3   0.53  1.63  3.01  5.41
Maximum of LBIW             7.32    0     6.28  5.24  1.85  0
Cross-evaluation criterion  9.46    0.15  3.40  3.43  2.43  2.71
Table 4. Cross evaluation weights of each DMU, solved by model (9), based on the ECED.
      v1       v2       v3       u1       u2       u3
DMU1  0.94603  0.20587  0.31493  0.34347  2.13693  0.27055
DMU2  4.79055  0.00002  0.12812  0.61358  0.09490  0.27055
DMU3  0.94603  0.00557  0.74154  0.34347  0.03571  0.27055
DMU4  6.4150   0.0062   0        0.910    0        0
DMU5  2.06488  0.00001  0.02828  0.15625  0.17446  0.27055
DMU6  4.43379  0.00002  0.31493  0.34347  0.09490  1.11982
DMU7  2.12799  0.00001  0.02124  0.22803  0.09490  0.15837
Table 5. Expectation of each DMU and the evaluation criterion based on weighted mathematical expectation (ECWME) of all DMUs.
                      v1      v2     v3      u1     u2      u3
DMU1                  39.795  1.25   25      7.79   15.205  19.855
DMU2                  26.165  0.665  1.26    6.215  3.105   3.395
DMU3                  5.8     0.185  10.275  2.22   1.505   6.665
DMU5                  11.11   0.25   0.845   0.815  3.445   3.845
DMU6                  26.11   0.685  7.135   3.785  5.495   11.11
DMU7                  15.855  0.145  0.265   1.64   3.145   2.705
Evaluation criterion  20.81   0.53   7.46    3.74   5.32    7.93
Table 6. Evaluation weights of each DMU solved by model (9), based on the ECWME.
      v1       v2       v3       u1       u2       u3
DMU1  2.08100  0.15027  0.74600  0.37400  1.83083  0.79300
DMU2  4.93131  0.00002  0.08991  0.56205  0.53200  0.00157
DMU3  1.13735  0.00001  0.74600  0.36162  0.00002  0.24848
DMU4  6.4150   0.0062   0        0.910    0        0
DMU5  1.62016  0.00001  0.10837  0        0.53200  0.17585
DMU6  2.08100  0.03173  0.74600  0.30587  0.53200  0.79300
DMU7  2.08100  0.00001  0.02445  0        0.53200  0.15889
Table 7. Weights of each DMU solved by CCR model.
      v1       v2      v3      u1      u2       u3
DMU1  0        2.5000  0       0       28.5714  0
DMU2  0        1.3333  0       7.1942  0        0
DMU3  0        0.3738  6.2762  4.4444  0        0
DMU4  64.1504  0.0629  0       9.1082  0        0
DMU5  0        0.5000  0       0       4.3165   2.8777
DMU6  0        1.2756  1.3759  7.5758  0        0
DMU7  9.9403   0.2521  0       0       6.2893   0
Table 8. Weights of each DMU solved by benevolent cross evaluation.
      v1    v2    v3    u1    u2    u3
DMU1  1.99  0.08  0     0.33  0.91  0.28
DMU2  2.87  0.07  0     0.56  0.65  0
DMU3  0     0.03  0.74  0     0.24  1.04
DMU4  5.38  0     0     0.76  0     0
DMU5  2.47  0.10  0     0.41  1.12  0.35
DMU6  2.07  0.08  0     0.34  0.94  0.30
DMU7  2.53  0.10  0     0.42  1.15  0.36
Table 9. Weights of each DMU solved by aggressive cross evaluation.
      v1    v2    v3    u1    u2    u3
DMU1  0     0     0.88  0     0.50  0
DMU2  0     0.11  0.12  0.68  0     0
DMU3  0     0     0.92  0.29  0     0
DMU4  4.98  0.01  0     0.76  0     0
DMU5  4.29  0     0.40  0     0     2.26
DMU6  0.97  0     0.75  0     0     1.24
DMU7  6.58  0     0     0     1.70  0
Table 10. Cross-efficiency scores and rankings based on all the methods mentioned in this paper.
      ECWME  Rank  ECED (α = 0.5)  Rank  CCR   Rank  Benevolent  Rank  Aggressive  Rank
DMU1  0.91   3     0.85            3     1     1     0.97        2     0.97        1
DMU2  0.94   2     0.91            2     1     1     0.93        3     0.72        4
DMU3  0.81   5     0.79            5     1     1     0.80        6     0.77        3
DMU4  0.59   7     0.57            7     0.89  7     0.58        7     0.39        7
DMU5  0.84   4     0.82            4     1     1     0.91        4     0.66        5
DMU6  0.99   1     0.97            1     1     1     0.99        1     0.84        2
DMU7  0.79   6     0.75            6     1     1     0.90        5     0.52        6
Table 11. Scores and rankings based on ECED when α = 0.2, α = 0.5, and α = 0.8.
      Score (α = 0.2)  Rank  Score (α = 0.5)  Rank  Score (α = 0.8)  Rank
DMU1  0.88             3     0.81             4     0.77             6
DMU2  0.93             2     0.92             2     0.94             2
DMU3  0.80             5     0.79             5     0.80             5
DMU4  0.58             7     0.59             7     0.62             7
DMU5  0.82             4     0.82             3     0.87             3
DMU6  0.99             1     0.97             1     0.99             1
DMU7  0.77             6     0.77             6     0.83             4
Table 12. Scores and rankings based on ECWME using two combinations of p.
      p1 = p2 = … = p7    p1 = 0.2, p2 = 0.3, p3 = … = p7 = 0.1
      Score  Rank         Score  Rank
DMU1  0.89   2            0.92   3
DMU2  0.84   3            0.93   2
DMU3  0.76   5            0.80   5
DMU4  0.45   7            0.56   7
DMU5  0.82   4            0.83   4
DMU6  0.93   1            0.99   1
DMU7  0.72   6            0.79   6
Table 13. Coefficient of variation (CV) of multiple methods.
            CV based on ECED           CV based on ECWME                                    CCR
            α = 0.8  α = 0.5  α = 0.2  p1 = … = p7  p1 = 0.2, p2 = 0.3, p3 = … = p7 = 0.1
DMU1        0.86     1.11     1.70     0.76         1.74                                    2.26
DMU2        0.07     0.20     0.001    0.12         0.0004                                  0.96
DMU3        0.43     0.28     0.34     0.33         0.31                                    2.14
DMU5        0.10     0.07     0.08     0.19         0.09                                    1
DMU6        1.43     1.58     0.76     0.57         0.63                                    1.87
DMU7        0.36     0.33     0.23     0.33         0.57                                    2.65
Mean of CV  0.54     0.60     0.52     0.38         0.56                                    1.81
Sum of CV   3.25     3.57     3.12     2.3          3.34                                    10.87
Table 14. Score and rank under the common set of weights (CSW) based on ECED and ECWME.
      CSW based on ECED                          CSW based on ECWME
      α = 0.8      α = 0.5      α = 0.2          p1 = … = p7  p1 = 0.2, p2 = 0.3, p3 = … = p7 = 0.1
      Score  Rank  Score  Rank  Score  Rank      Score  Rank  Score  Rank
DMU1  0.75   5     0.78   5     0.78   5         0.88   2     0.80   5
DMU2  0.94   2     0.96   2     0.96   2         0.86   3     0.94   2
DMU3  0.74   6     0.76   6     0.76   6         0.76   5     0.81   4
DMU4  0.67   7     0.67   7     0.67   7         0.47   7     0.61   7
DMU5  0.78   4     0.79   3     0.79   3         0.78   4     0.82   3
DMU6  0.95   1     0.97   1     0.97   1         0.94   1     0.99   1
DMU7  0.79   3     0.78   4     0.78   4         0.58   6     0.77   6
Table 15. Application example.
Enterprise NumberInputsOutputsCCR Efficiency
x1x2x3x4y1y2y3y4
PT1 (DMU1)151361222273012926.51894.20.44
SM4 (DMU2)12.2520435205214411463185.60.46
QZ15 (DMU3)16.54226.31226.3172799.761118.97158.638.40.67
LY1 (DMU4)4059574.1671981554.96220.3511.51
SM1 (DMU5)11.180.396317.393100603.6107.373.71
ND1 (DMU6)12.61224.6224.81193436.8581.611774.91
NP1 (DMU7)4.913491876380114041934.31
PT6 (DMU8)100273285198253317161678.70.69
QZ1 (DMU9)12.22398.89398.8985192.851955.78125.798.80.78
QZ8 (DMU10)11532.86532.863913472.983550.89528.229.10.90
FZ14(DMU11)11.875665662293751974000.21
QZ22 (DMU12)15.72788.73488.731513,287.542517.071682.58110.98
ND1 (DMU13)101004851512,93216884835.61
PT7 (DMU14)14.661044.5961.6223211,1424362827.1418.81
FZ7 (DMU15)16.28886886815,3448327144210.21
LY1 (DMU16)11.48173.71173.7419751508.9643.812.41
SM2 (DMU17)24.56346.04223.3112605.94771173.6713.50.80
LY4 (DMU18)21.18179.39179.3921858.3504705.90.88
LY3 (DMU19)18.1827527502841364783.31
ZZ4 (DMU20)10.32454.71454.71313300.331382.91532.364.90.60
QZ9 (DMU21)14.62275.7252.6574435.231301111.557.60.73
ND94 (DMU22)5.333284145549.023689872.51
ND14 (DMU23)24.6446246118661017481296.30.56
QZ35 (DMU24)10.78547.99518.681158372619.39285.456.40.69
QZ30 (DMU25)15.48613.72573.81134820.712621.4404.9410.40.67
QZ36 (DMU26)10.76825.62825.6216105025389.64717.3810.51
QZ12 (DMU27)12.55615.88316.79119018.71384.2264.991.50.66

Share and Cite

Shi, H.; Wang, Y.; Zhang, X. A Cross-Efficiency Evaluation Method Based on Evaluation Criteria Balanced on Interval Weights. Symmetry 2019, 11, 1503. https://doi.org/10.3390/sym11121503