Article

Performance Evaluation of Research and Business Development: A Case Study of Korean Public Organizations

1 Division of Cosmetic Science and Technology, Daegu Haany University, Gyeongsan, Gyeongsangbuk-do 38610, Korea
2 Defense Agency for Technology and Quality, Jinju, Gyeongsangnam-do 52851, Korea
3 Department of Industrial and Management Engineering, Inje University, Gimhae, Gyeongsangnam-do 50834, Korea
* Author to whom correspondence should be addressed.
Sustainability 2017, 9(12), 2297; https://doi.org/10.3390/su9122297
Submission received: 17 October 2017 / Revised: 26 November 2017 / Accepted: 8 December 2017 / Published: 11 December 2017

Abstract:
This study presents data envelopment analysis (DEA)-based systematic and simultaneous performance measures for research and business development (R&BD) and internal processes. In particular, we focus on the relationship between the research and development (R&D) process and business development (BD) process in evaluating R&BD overall performance, and provide effective guidelines for relatively underperforming decision-making units to help them establish an optimal strategy to increase their performance. To that end, we develop a two-stage network DEA model based on performance measures by defining variables and transforming R&D and BD performances into a serial network structure. In addition, based on the proposed method, we present a practical application of R&BD performance measurement for Korean public organizations, specifically public research institutes and universities.

1. Introduction

Technological innovation is an essential element of an organization’s national and business competitiveness in modern society, in which new products and services are widely developed through technological innovation activities such as the development and procurement of new technologies (Mojaveri et al. [1] and Roxas et al. [2]). In this environment, domestic and overseas research and development (R&D) investments have been gradually increasing in pursuit of leading technological innovations. Companies’ R&D strategies maximize the return on R&D investment by managing individual projects as a portfolio. However, due to the rapid development of technology and evolving customer needs, product requirements are becoming more advanced, and product life cycles are rapidly shortening. As a result, research and business development (R&BD) has emerged to overcome the limitations of conventional R&D. R&BD is a system that examines the feasibility of a project from the initial stage of R&D and sets up and adjusts the research direction at each stage until a product can finally be commercialized. Before the introduction of the R&BD concept, there were the concepts of the early market and the mass market. Early adopters who emphasized innovation led the early market, and businesses developed advanced technologies and products based on the customer’s perspective. On the other hand, general customers who valued practicality led the mass market. Before entering a market, companies first look at whether there is an unfulfilled demand. In the research market, for example, even the countless existing technologies cannot help companies fulfill the demand. Therefore, it is necessary to establish an R&BD strategy that reflects customer-oriented needs from a company’s technology center.
The development of R&D performance evaluation is one of many important areas for effective and efficient R&D investments, and a variety of performance evaluation indexes and techniques have been developed and utilized (Kerssens-van Drongelen et al. [3]). Researchers have widely studied R&D performance evaluation based on multiple R&D inputs and outputs, and to this end, widely utilized Charnes et al.’s concept of data envelopment analysis (DEA) [4]. DEA is a linear programming methodology that evaluates the relative performances of decision-making units (DMUs) using a set of inputs to produce a set of outputs. With DEA, each DMU is evaluated by comparing its performance with those of other DMUs in its peer group. Several DEA-based R&D performance evaluations have been studied. Anderson et al. [5] proposed a DEA-based relative performance assessment model and benchmarking methodology for UK university technology transfer offices. Wang and Huang [6] used the production framework associated with DEA to measure the relative efficiency of R&D, with R&D capital stocks and manpower as input variables, and patents and publications as output variables. Sharma and Thomas [7] provided a DEA-based relative R&D performance model using input variables such as domestic expenditure and the number of researchers, and output variables such as patents granted to residents. In their study, Japan, Korea, and China were selected as the most efficient countries under the constant return to scale (CRS) model, and Japan, Korea, China, India, Slovenia, and Hungary were selected as the most efficient countries under the variable return to scale (VRS) model. Hsu and Hsueh [8] provided a three-stage DEA model that measures R&D productivity at the national level to provide R&D policy implications that reflect the operational level of DMUs. 
In order to measure R&D productivity for the R&D policies of Asian countries, the research categorized the 27 countries into four groups: inventors, merchandisers, academicians, and duds, and identified the characteristics with respect to R&D performance. Lee et al. [9] measured and compared the performances of national R&D programs utilizing a network DEA model. They provided policy implications for national R&D programs by removing the bias caused by organization scale to identify the strengths and weaknesses of each organization. For discrimination, the centrality value for each frontier organization provided the base. Kim [10] analyzed the productivity of universities in technology transfer using multiple input–output combinations and DEA, and found that the universities’ technology transfer rate was relatively efficient in terms of input-to-output quantities. The data indicated that an increasing number of commercial outputs had made a positive shift in their productivity, and therefore, universities may stimulate commercial activities.
However, the above-mentioned studies on DEA-based R&D and R&BD performance evaluation have a limitation in that they did not consider the economic value that the technology transfer created and business development (BD) (which refers to commercialization), which is the ultimate goal of R&D. Rather, the studies focused primarily on visible outputs such as patents and papers in evaluating R&D performance (Jeon and Lee [11]). Sexton et al. [12] mentioned that the entire period’s evaluation, from the R&D to the BD phase, is very much required for R&D performance evaluation, and Bozeman et al. [13] emphasized that R&D performance measurement may be extended to include technology transfer and BD owing to their significance and the demand for them. In particular, Min [14] mentioned that the stabilization of a BD system is quite important due to limited R&D resources, and therefore, BD performance evaluation, including R&D results, is essential when considering the overall R&D process from research to commercialization. Commercialization in R&D is defined as the activities that develop and procure new technologies and launch a new business to make products and/or charge engineering fees; this entire process is the so-called R&BD.
Kim and Lim [15] defined R&BD as a process that monitors the diffusion speed of research results, determines whether additional R&D is required, and encompasses the R&D- and BD-related activities that focus on business improvement. To evaluate R&BD, the indexes must include the general indexes of R&D, such as papers and patents, as well as evaluation indexes from the BD point of view. The literature on technology transfer and the performance evaluation of BD has attempted to analyze efficiency according to economic performance from the R&BD point of view, which includes technical fees and new business launches. Jeon and Lee [11] proposed a DEA-based three-stage model of R&BD performance that captures commercialization outcomes and conventional R&D performance and covers the following three stages: the efficiency stage, the effectiveness stage, and the productivity stage. Although Jeon and Lee [11] attempted to analyze R&BD performance by considering technological transfers and commercialization, they did not use a systematic approach that considered technological transfers and commercialization simultaneously when evaluating R&BD performance; instead, they simply considered them individually.
As Kim and Lim [15] mentioned, R&BD generally consists of two internal processes (i.e., an R&D process as technology securement, and a BD process as commercialization), and thus, a systematic approach that considers the relationship between the internal processes simultaneously can facilitate a more practical R&BD performance evaluation. To address this issue, this study provides DEA-based systematic and simultaneous performance measures for R&BD and internal processes. In particular, it considers the relationship between the R&D and BD processes in evaluating overall R&BD performance, and provides effective performance improvement guidelines for relatively underperforming DMUs to help them establish an optimal strategy in order to increase their performance. Additionally, it shows how efficiently R&D inputs contribute to R&BD performance in terms of the R&D process, and how efficiently R&D outputs contribute to R&BD performance in terms of the BD process. To this end, this study develops a two-stage network DEA-based performance measure by defining variables and transforming R&D and BD performance into a serial network structure. Note that the aim of this research is to suggest the application of the network DEA model for simultaneous performance measures for R&BD and internal processes, rather than to study DEA methodology. Using data from Korean public organizations, specifically public research institutes and universities, we show how to measure R&BD performance in a way that considers the relationship between the R&D and BD processes simultaneously. Although Jeon and Lee [11] dealt with DEA-based R&BD performance evaluation, they did not address the relationship between R&D and BD performance.
In particular, although Guan and Chen [16] provided a model that considered both R&D and commercialization processes simultaneously when evaluating China’s high-tech innovations, they did not measure the contributions of R&D inputs and outputs in terms of the R&D and BD processes, respectively, and did not provide effective performance improvement guidelines for relatively underperforming DMUs to help them establish an optimal strategy to increase their performance. The rest of this paper is organized as follows. Section 2 presents the R&BD performance measurement model, which has input, output, and intermediate variables based on the two-stage network DEA. Section 3 shows a practical application of the R&BD performance measurement for Korean public organizations, specifically public research institutes and universities, using the proposed method. Section 4 summarizes our work.

2. R&BD Performance Measurement Model

2.1. Two-Stage Structure of R&BD Performance Measure

As Guan and Wang [17], Hsu and Hsueh [8], and Liu and Lu [18] noted, R&D performance evaluation focuses on how efficiently R&D inputs (researchers, budget, etc.) are allocated to secure R&D outputs (patents, papers, etc.). In addition, as Jeon and Lee [11] noted, R&BD focuses on how well the R&D outputs are commercialized, such as for profit or royalty generation (hereafter referred to as BD outcome). Accordingly, there are correlations among the R&D inputs, R&D outputs, and BD outcomes. Moreover, from the performance evaluation perspective, fewer R&D inputs and higher R&D outputs are the preferred means of improving R&D performance, while fewer R&D outputs and higher BD outcomes are the preferred means of improving BD performance. In this respect, although allocating more R&D inputs can result in more R&D outputs, it will worsen R&D performance. Further, more R&D outputs can result in higher BD outcomes, but will worsen BD performance.
This study constructed an R&BD diagram, as shown in Figure 1, as a conceptual framework for interpreting the R&BD performance evaluation. As Figure 1 describes, R&BD focuses on the upstream technology securement and downstream commercialization sub-processes from the performance perspective. Specifically, the upstream technology securement sub-process from the R&D inputs to the R&D outputs (i.e., intermediate outputs in terms of the whole R&BD process) is the first stage (stage one). The stage one operation is related to such activities as researching, developing, learning by doing, or importing, and it is linked with the second stage (stage two), that is, the downstream commercialization sub-process from the R&D outputs to the BD outcomes. The stage two operation supports such economic activities as marketing and manufacturing. Note that the two sub-processes are related, rather than independent, since they are connected by the R&D outputs. This means that the intermediate R&D outputs have a double identity in their role in R&BD as the outputs in the first sub-process and the inputs in the second sub-process (Guan and Chen [19]). In this sense, the integrated conceptual framework not only leads us to assign primary importance to the upstream indigenous R&D performance, but also reminds us to attach particular importance to the downstream commercialization performance in the economic sense.
Based on Figure 1, we argue that R&BD performance aims to increase R&D outputs at lower R&D inputs, and at the same time, increase BD outcomes for the given R&D outputs. Thus, in terms of performance, the performance in stage one is evaluated by how efficiently the researchers or budgets are utilized to secure new technologies such as papers and patents, and the performance in stage two is evaluated by how well the new technologies from the technology securement are commercialized. In other words, stage one deals with the performance issue of R&D inputs and intermediate R&D outputs for the securement of technology, and stage two deals with the performance issue of intermediate R&D outputs and BD outcomes for commercialization.
R&BD concentrates on the transformation performance from the R&D inputs directly to the BD outcomes. Note that technology securement is only an upstream sub-process, so a better performance in technology securement alone cannot guarantee a better performance in R&BD. The R&BD performance can improve only when both sub-processes work well. Thus, a performance measure model must simultaneously describe the inclusive relationship between the R&BD process and the two sub-processes, as well as the relationship between the two sub-processes.
The term performance is used in a variety of ways. In order to evaluate the R&BD performance, this study utilized DEA, which was initially developed by Charnes, Cooper, and Rhodes [4]. DEA and the stochastic frontier are two popular approaches to the evaluation of technical efficiency. DEA is a linear programming methodology that evaluates the relative efficiencies of DMUs using a set of inputs to produce a set of outputs. The technique envelopes observed production possibilities to obtain an empirical frontier, and measures efficiency as the distance to the frontier. The primary advantage of this approach is its non-parametric nature; therefore, it is not necessary to impose strict normality assumptions on the data and errors, as is required in parametric analysis. DEA can handle multiple inputs and outputs, and does not need to predefine the functional relationship between inputs and outputs. Nevertheless, econometricians have argued that the approach produces biased estimates in the presence of measurement error and other statistical noise [20]. The principal reasons for economists’ skepticism regarding traditional standard DEA are as follows. Traditional standard DEA is a non-parametric method; no production, cost, or profit function is estimated from the data. This precludes evaluating marginal products, partial elasticities, marginal costs, or elasticities of substitution from a fitted model. Consequently, one cannot derive the usual conclusions about the technology that are possible from a parametric functional form. Most importantly, being non-statistical in nature, the linear programming (LP) solution of a DEA problem produces no standard errors and leaves no room for hypothesis testing. In traditional standard DEA, any deviation from the frontier is treated as inefficiency, and there is no provision for random shocks.
By contrast, the far more popular stochastic frontier model explicitly allows the frontier to move up or down because of random shocks. Additionally, a parametric frontier yields elasticities and other measures of the technology that are useful for marginal analysis. As mentioned in Ray [21], the lack of standard errors for the DEA efficiency measures stems from the stochastic properties of inefficiency-constrained estimators not being well established in the econometric literature. However, there are several research studies underway to address the weakness of the DEA model. Banker [22] conceptualized a convex and monotonic non-parametric frontier with a one-sided disturbance term, and showed that the DEA estimator converges in distribution with the maximum likelihood estimators. Simar [23] and Simar and Wilson [24,25] combined bootstrapping with DEA to generate empirical distributions of the efficiency measures of individual firms. This has generated a lot of interest in the profession, and one may expect the DEA model to incorporate the bootstrapping option in the near future. Simar and Wilson [26] described a data-generating process (DGP) that is logically consistent with the regression of non-parametric DEA efficiency estimates on some covariates in a second stage. They then proposed single and double bootstrap procedures; both permit valid inference, and the double bootstrap procedure improves statistical efficiency in the second-stage regression. The use of traditional standard DEA in evaluating a more realistic and accurate efficiency may be controversial among researchers of parametric and non-parametric methods due to the characteristics of DEA. Nevertheless, due to its advantages, DEA has been theoretically extended and very widely applied in various fields, including: education (Ahn [27]; Beasley [28]), aviation (Schefezyk [29]), health care (Pina and Torres [30]), inventory and manufacturing (Park et al. [31]; Park et al. [32]; Park et al. [33]), etc. 
In addition, as mentioned earlier, there are quite a number of studies that utilize the DEA model in evaluating R&D and R&BD performance. For more details on the DEA model, refer to Charnes et al. [4].
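As a brief, non-authoritative illustration of how such a DEA score is obtained, the sketch below solves the input-oriented CCR multiplier model with SciPy for three made-up DMUs; the function name, toy data, and ε bound are our own illustrative choices and do not come from the studies cited above.

```python
# Minimal sketch of the input-oriented CCR (multiplier) model.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k, eps=1e-6):
    """Relative efficiency of DMU k. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s entries), then input weights v (m).
    c = np.concatenate([-Y[k], np.zeros(m)])                   # maximize u·y_k
    A_ub = np.hstack([Y, -X])                                  # u·y_j - v·x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[k]]).reshape(1, -1)  # v·x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m))
    return -res.fun

# Three DMUs, two inputs (e.g., researchers, R&D cost), one output (patents).
X = np.array([[10.0, 5.0], [8.0, 4.0], [12.0, 7.0]])
Y = np.array([[20.0], [20.0], [18.0]])
scores = [round(ccr_efficiency(X, Y, k), 3) for k in range(3)]
print(scores)  # DMU 1 dominates its peers, so it scores 1.0
```

Here DMU 1 produces the same or more output from fewer inputs than DMU 0, so the LP assigns it a score of 1, while the other two DMUs receive scores strictly below 1.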
However, the traditional standard DEA model treats DMUs as a black box, and cannot incorporate multiple sub-processes into an integrated measurement framework. The divisional and independent measurement of process performance using traditional standard DEA destroys the integrity and linkage between sub-process performances. Thus, this study utilized the network DEA, which takes intermediate factors into account when evaluating performance. The network DEA model is an extension of the traditional standard DEA model in which each DMU is comprised of two or more sub-DMUs (or internal processes) connected in series, as shown in Figure 1. The objective of the network DEA model is to evaluate the relative performance of each DMU and each of its sub-DMUs. In particular, this study applied the two-stage relational network DEA model introduced by Kao and Hwang [34] to evaluate R&BD performance. The main reason for applying the two-stage relational network DEA model is that this model can help measure the technical performance of the R&BD and the two sub-processes in an integrated analytical framework, in the sense that the overall and sub-process performances are estimated simultaneously. In Figure 1, the R&D input and BD outcome are regarded as the input and output, respectively, and the R&D output is treated as an intermediate factor. For the performance evaluation, we assumed that all of the R&D inputs are positively related to R&D output production, and that all of the R&D outputs are positively related to the BD outcome generation.
We show that the R&BD performance of DMU k, which has m R&D inputs, $x_i$ (i = 1, 2, …, m), t R&D outputs, $l_q$ (q = 1, 2, …, t), and s BD outcomes, $y_r$ (r = 1, 2, …, s), is measured based on how efficiently the R&D inputs are used to produce the BD outcomes, and, at the same time, how efficiently the R&D inputs are used to provide a high quality and quantity of R&D outputs, and how efficiently the R&D outputs contribute to the commercialization. Here, $u_r$ is the weight given to the r-th BD outcome, $v_i$ is the weight given to the i-th R&D input, $w_q$ is the weight given to the q-th R&D output, and n is the number of DMUs.
Based on the two-stage relational network DEA model, the R&BD performance evaluation is calculated as
$$
\begin{aligned}
\theta_k^{*} = \max \; & \sum_{r=1}^{s} u_r y_{rk} \\
\text{s.t.} \; & \sum_{i=1}^{m} v_i x_{ik} = 1, \\
& \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} + s_j = 0, \quad j = 1, \ldots, n, \\
& \sum_{q=1}^{t} w_q l_{qj} - \sum_{i=1}^{m} v_i x_{ij} + s_j^{(C1)} = 0, \quad j = 1, \ldots, n, \\
& \sum_{r=1}^{s} u_r y_{rj} - \sum_{q=1}^{t} w_q l_{qj} + s_j^{(C2)} = 0, \quad j = 1, \ldots, n, \\
& v_i, u_r, w_q \geq \varepsilon
\end{aligned}
\quad (1)
$$
where $s_j^{*}$, $s_j^{(C1)*}$, and $s_j^{(C2)*}$ are the slack variables associated with the second, third, and fourth constraints, respectively. Model (1) is based on the input-oriented CCR (Charnes, Cooper, and Rhodes) model in DEA. As in Kao and Hwang’s [34] two-stage relational network DEA model, the same weights were assigned to the R&D output variables, $l_q$, in both stages. The second constraint represents the R&BD performance of the j-th DMU. The third and fourth constraints represent the R&BD performance in the technology securement and commercialization stages, respectively. As in the two-stage relational network DEA model, we can validate the performance decomposition principle, according to which the product of the performances associated with the third and fourth constraints equals the performance associated with the second constraint; equivalently, $s_k^{*} = s_k^{(C1)*} + s_k^{(C2)*}$. Note that we utilized this performance decomposition principle to analyze the influences of technology securement and commercialization on R&BD performance. Let $v_i^{*}$, $u_r^{*}$, and $w_q^{*}$ be the optimal weights assessed by Model (1). The performance of the k-th DMU in R&BD, the technology securement stage, and the commercialization stage can be expressed as Models (2)–(4), respectively.
$$ E_k^{*} = \sum_{r=1}^{s} u_r^{*} y_{rk} \Big/ \sum_{i=1}^{m} v_i^{*} x_{ik} = 1 - s_k^{*} \quad (2) $$

$$ E_k^{(C1)*} = \sum_{q=1}^{t} w_q^{*} l_{qk} \Big/ \sum_{i=1}^{m} v_i^{*} x_{ik} = 1 - \Big( s_k^{(C1)*} \Big/ \sum_{i=1}^{m} v_i^{*} x_{ik} \Big) \quad (3) $$

$$ E_k^{(C2)*} = \sum_{r=1}^{s} u_r^{*} y_{rk} \Big/ \sum_{q=1}^{t} w_q^{*} l_{qk} = 1 - \Big( s_k^{(C2)*} \Big/ \sum_{q=1}^{t} w_q^{*} l_{qk} \Big) \quad (4) $$
If the performance score $E_k^{(C1)*}$ is 1, the DMU is considered to have achieved best-practice status in the technology securement stage. If the performance score $E_k^{(C2)*}$ is 1, the DMU is said to have achieved best-practice status in the commercialization stage. By contrast, $\big( s_k^{(C1)*} / \sum_{i=1}^{m} v_i^{*} x_{ik} \big)$ and $\big( s_k^{(C2)*} / \sum_{q=1}^{t} w_q^{*} l_{qk} \big)$ indicate underperformance in the technology securement and commercialization stages, respectively. Based on the performance decomposition principle in the two-stage relational network DEA model, we obtained two equations: $E_k^{*} = E_k^{(C1)*} \times E_k^{(C2)*}$ and $s_k^{*} = s_k^{(C1)*} + s_k^{(C2)*}$. The first signifies that the product of Models (3) and (4) equals Model (2), and the second signifies that the sum of the slacks of the third and fourth constraints equals the slack of the second constraint. From Models (2)–(4), we substituted $(1 - E_k^{*})$, $(1 - E_k^{(C1)*}) \sum_{i=1}^{m} v_i^{*} x_{ik}$, and $(1 - E_k^{(C2)*}) \sum_{q=1}^{t} w_q^{*} l_{qk}$ for $s_k^{*}$, $s_k^{(C1)*}$, and $s_k^{(C2)*}$, respectively. Since $s_k^{*} = s_k^{(C1)*} + s_k^{(C2)*}$, the underperformance of the k-th DMU can be distributed to the technology securement and commercialization stages in the proportions $(s_k^{(C1)*}/s_k^{*})$ and $(s_k^{(C2)*}/s_k^{*})$, respectively. Denoting these proportions by $IA^{(C1)}$ and $IA^{(C2)}$, respectively, they can be obtained as
$$ IA_k^{(C1)} = \Big( (1 - E_k^{(C1)*}) \sum_{i=1}^{m} v_i^{*} x_{ik} \Big) \Big/ (1 - E_k^{*}) \quad (5) $$

$$ IA_k^{(C2)} = \Big( (1 - E_k^{(C2)*}) \sum_{q=1}^{t} w_q^{*} l_{qk} \Big) \Big/ (1 - E_k^{*}) \quad (6) $$
The performance proportions signify relative values, which can be utilized as the evaluation ratings for the underperformance of R&D inputs in technology securement and the underperformance of R&D outputs in commercialization. For example, using the proportion (1 − IA(C1)), we can determine how well the R&D inputs contribute to R&BD performance in terms of technology securement, and likewise, using the relative proportion (1 − IA(C2)), we can determine how well the R&D outputs contribute to R&BD performance in terms of commercialization. A proportion that is close to 1 indicates a significant contribution to R&BD performance. In other words, since R&BD performance is calculated as the product of the performances in the technology securement and commercialization stages, it is more effective to improve the R&BD performance by focusing on improving the process that has the lower contribution.
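To make the decomposition concrete, the following sketch solves a small instance of Model (1) with SciPy (one R&D input, one intermediate R&D output, one BD outcome; the helper name, toy data, and ε value are illustrative assumptions, not taken from the paper) and then derives the stage performances of Models (2)–(4) and the proportions of Models (5) and (6):

```python
# Illustrative two-stage relational network DEA (Kao-Hwang style) sketch.
import numpy as np
from scipy.optimize import linprog

def two_stage_dea(X, L, Y, k, eps=1e-6):
    """Return (E, E1, E2, ia1, ia2) for DMU k.
    X: (n, m) R&D inputs, L: (n, t) R&D outputs, Y: (n, s) BD outcomes."""
    n, m = X.shape
    t, s = L.shape[1], Y.shape[1]
    # Decision variables: u (s), w (t), v (m); linprog minimizes -u·y_k.
    c = np.concatenate([-Y[k], np.zeros(t + m)])
    A_ub = np.vstack([
        np.hstack([Y, np.zeros((n, t)), -X]),   # u·y_j - v·x_j <= 0
        np.hstack([np.zeros((n, s)), L, -X]),   # w·l_j - v·x_j <= 0
        np.hstack([Y, -L, np.zeros((n, m))]),   # u·y_j - w·l_j <= 0
    ])
    A_eq = np.concatenate([np.zeros(s + t), X[k]]).reshape(1, -1)  # v·x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(3 * n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + t + m))
    u, w, v = res.x[:s], res.x[s:s + t], res.x[s + t:]
    E = u @ Y[k] / (v @ X[k])               # overall R&BD performance, Model (2)
    E1 = w @ L[k] / (v @ X[k])              # technology securement, Model (3)
    E2 = u @ Y[k] / (w @ L[k])              # commercialization, Model (4)
    ia1 = (1 - E1) * (v @ X[k]) / (1 - E)   # IA(C1), Model (5)
    ia2 = (1 - E2) * (w @ L[k]) / (1 - E)   # IA(C2), Model (6)
    return E, E1, E2, ia1, ia2

X = np.array([[10.0], [8.0], [12.0]])   # R&D input
L = np.array([[5.0], [6.0], [4.0]])     # intermediate R&D output
Y = np.array([[3.0], [4.0], [2.0]])     # BD outcome
E, E1, E2, ia1, ia2 = two_stage_dea(X, L, Y, 0)
print(round(E, 3), round(E1 * E2, 3))   # E = E1 * E2 by the decomposition
print(round(ia1 + ia2, 3))              # IA(C1) + IA(C2) = 1
```

Because the intermediate variable carries the same weight in both stages, the decomposition E = E1 × E2 holds by construction, and the two IA proportions always sum to one for an underperforming DMU.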

2.2. R&BD Performance Improvement

Since the approach of this study is based on DEA, we can provide information on how underperforming DMUs can improve their performance in each stage. For a simple illustration of this approach, we calculated the R&BD performance and performance improvement using the sample data shown in Table 1. The sample data consist of data for seven DMUs, and each DMU consumes two R&D inputs and yields one R&D output and one BD outcome. The R&BD performance results, which include the R&BD performance calculated by the proposed method, as well as Ej(C1), Ej(C2), IA(C1), and IA(C2), are summarized starting from the sixth column of Table 1. We find that the R&BD performance is calculated by multiplying the performances of the technology securement stage (Ej(C1)) and the commercialization stage (Ej(C2)). Only two DMUs, A and B, are best-practice DMUs in the technology securement stage, and two others, A and C, are best-practice DMUs in the commercialization stage. We see that DMU A, which has a relatively high performance in both stages, also achieved the highest R&BD performance. Although DMUs B and C are best-practice DMUs in the technology securement and commercialization stages, respectively, they cannot be best-practice DMUs in R&BD performance, because they have a relatively low performance in the commercialization and technology securement stages, respectively.
For example, DMU C has a relatively lower performance in the technology securement stage. However, it can achieve best-practice status by either (a) increasing its R&D output or (b) decreasing its R&D inputs. If DMU C chooses to improve its performance by (a), it might no longer be a best-practice DMU in the commercialization stage, because the R&D output, which is the input of the commercialization stage, has increased. Note that the R&D inputs and BD outcome are linked by the R&D output. Thus, in order to improve the performance in both stages, we focused on reducing the R&D inputs and increasing the BD outcome while maintaining the R&D output. In other words, we can calculate how much the R&D inputs and BD outcome need to improve for the given R&D output by using Model (7), which is the envelopment-type version of Model (1). Here, $\tilde{l}_{qj}$ is obtained by $w_q^{*} l_{qj}$, where $w_q^{*}$ is the q-th optimal weight for the R&D outputs calculated in Model (1), $\tau_k$ is the efficiency score, and $\lambda_j$, $\omega_j$, and $\pi_j$ are the dual variables.
$$
\begin{aligned}
\tau_k^{*} = \min \; & \tau_k \\
\text{s.t.} \; & \sum_{j=1}^{n} (\lambda_j + \omega_j) y_{rj} - SR_r^{+} = y_{rk}, \quad r = 1, \ldots, s, \\
& \sum_{j=1}^{n} (\lambda_j + \pi_j) x_{ij} + SR_i^{-} = \tau_k x_{ik}, \quad i = 1, \ldots, m, \\
& \sum_{j=1}^{n} (\pi_j - \omega_j) l_{qj} = 0, \quad q = 1, \ldots, t, \\
& \pi_j, \omega_j, \lambda_j \geq 0, \quad j = 1, \ldots, n
\end{aligned}
\quad (7)
$$
Through Model (7), we can obtain guidelines on how much the R&D inputs have to decrease or the BD outcomes have to increase in order to improve the performances of underperforming DMUs, specifically by setting $\Delta^{-} x_{ik} = x_{ik}(1 - \tau_k^{*}) + SR_i^{-}$ and $\Delta^{+} y_{rk} = SR_r^{+}$, where $\Delta^{-}$ and $\Delta^{+}$ indicate the amounts by which the inputs need to decrease and the outputs need to increase, respectively. If the inefficient DMU G reduces each of its R&D inputs (X1 and X2) by 40 and increases its BD outcome (Q2) by 20, it can achieve best-practice status in R&BD performance with a score of 1. As mentioned in Section 2.1, (1 − IA(C1)) and (1 − IA(C2)) are the relative contributions to R&BD performance, and it is more effective to improve R&BD performance by preferentially improving the process with the smaller value. Consider DMU G, where (1 − IA(C1)) is relatively smaller than (1 − IA(C2)) in Table 1. Table 2 shows the sensitivity analysis for DMU G improving its performance with the BD outcome fixed and an R&D input decrease of 10, or with the R&D inputs fixed and a BD outcome increase of 10. We found no further performance improvement beyond the point where the R&D inputs decrease by 40 from their existing values and the BD outcome increases by 20 from its existing value. This matches the above-mentioned result that DMU G must reduce its R&D inputs by 40 and increase its BD outcome by 20 to achieve best-practice status in R&BD performance. In addition, we saw that the R&BD performance improvement is greater when the R&D inputs improve than when the BD outcomes improve, which suggests that it is more effective to improve R&BD performance by preferentially improving R&D input performance.
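The improvement targets derived from Model (7) are simple arithmetic once the efficiency score and slacks are known. The sketch below does not solve Model (7) itself; the score τ* and slack values are assumed for illustration (they are chosen so the result mirrors the reduce-by-40 / increase-by-20 pattern described for DMU G, whose actual Table 1 data are not reproduced here):

```python
# Hypothetical illustration of improvement targets from Model (7).
def improvement_targets(x, y, tau, sr_minus, sr_plus):
    """x, y: current R&D inputs / BD outcomes; tau: efficiency score;
    sr_minus, sr_plus: input and outcome slacks from Model (7)."""
    dx = [xi * (1 - tau) + s for xi, s in zip(x, sr_minus)]  # input reductions
    dy = list(sr_plus)                                       # outcome increases
    new_x = [xi - d for xi, d in zip(x, dx)]                 # improved inputs
    new_y = [yi + d for yi, d in zip(y, dy)]                 # improved outcomes
    return dx, dy, new_x, new_y

# Assumed example: two inputs of 100 each, one outcome of 50,
# tau* = 0.6, no extra input slack, outcome slack of 20.
dx, dy, new_x, new_y = improvement_targets(
    x=[100.0, 100.0], y=[50.0], tau=0.6, sr_minus=[0.0, 0.0], sr_plus=[20.0])
print(dx, dy)  # each input drops by 40; the outcome rises by 20
```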

3. Empirical Study

3.1. Objective and Variables

For the case study, we applied our proposed method to 49 research organizations, specifically public research institutes and universities in the Republic of Korea (we used the public technology transfer and commercialization data collected by the Ministry of Trade, Industry and Energy in 2013 as inputs, intermediates, and outputs). In the case study, we considered 34 universities and 15 research institutes. Of the 34 universities included in the case study, 13 were national universities and 21 were private universities. Of the 15 research institutes, 10 were managed by the Ministry of Science, ICT, and Future Planning, three were managed by the Ministry of Trade, Industry, and Energy, and two were managed by the Ministry of Oceans and Fisheries. Each research institute was regarded as a DMU. Generally, researchers and research expenses are invested for R&D, and technologies are secured in the form of patents and papers; the secured technologies create economic values, such as royalty income, technology transfer—which is called transfer of technology (TOT)—and commercialization. Referring to the studies of Bonaccorsi et al. [35], Meng et al. [36] and Liu and Lu [18], we selected the number of researchers and R&D costs as R&BD inputs, the number of secured technologies as intermediates, and the number of technology transfers, royalties, and foundations as R&BD outputs. The data reflect the following selected performance measures.
  • Inputs:
    (a) Number of Researchers (NR): Total number of researchers involved in R&D
    (b) R&D Costs (RC): Total cost of R&D, including labor costs, material purchases, and activity costs (₩)
  • Outputs:
    (a) Number of Technology Transfers (NTT): Total number of technology transfers or contracts for the transfer of secured technology
    (b) Royalties (RT): Total amount of royalty income
    (c) Number of Foundations (NF): Total number of start-ups
  • Intermediate:
    (a) Number of Secured Technologies (NST): Total number of secured technologies, such as patents, utility models, and papers
Since DEA analysis is a relative evaluation among DMUs, a sufficient number of DMUs compared to the number of input and output factors has to be considered for a more accurate evaluation. In the study of Bonaccorsi and Daraio [35], the minimum number of DMUs recommended for a reasonable efficiency evaluation is n ≥ m × s and n ≥ max{3 × (m + s), m × s}, where n is the number of DMUs, m is the number of input factors, and s is the number of output factors. This research takes into account two inputs, three outputs, and one intermediate; thus, at least 15 DMUs should be considered to meet the recommended number. With 49 DMUs, this requirement is comfortably satisfied.
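The rule of thumb above is easy to encode; the function below is only a restatement of the cited condition, with this study's dimensions plugged in.

```python
def min_dmus(m: int, s: int) -> int:
    """Minimum recommended number of DMUs for a DEA model with
    m input factors and s output factors: n >= max(3*(m+s), m*s)."""
    return max(3 * (m + s), m * s)

# This study: m = 2 inputs (NR, RC), s = 3 outputs (NTT, RT, NF)
threshold = min_dmus(2, 3)
print(threshold)  # 15, comfortably below the 49 DMUs considered
```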
The input, intermediate, and output data are presented in Table A1, and their descriptive statistics are listed in Table A2 of Appendix A.

3.2. Analysis Result

We first compared the R&BD performance scores from the conventional DEA model and the proposed model; the results are shown in Table 3. Performances A and B signify the performance scores from the conventional DEA model (in which NR and RC were used as inputs and NTT, RT, and NF as outputs, without intermediates) and the proposed model, respectively. In Performance A, DMUs 20, 27, 28, 31, and 32 were identified as the relatively best-performing research organizations (with a performance score of 1), and the remaining 44 DMUs were identified as relatively underperforming. On the other hand, in Performance B, there were no best-practice DMUs (with a performance score of 1), but DMUs 27 and 31 had relatively high performance scores (0.900 and above) and can be classified as a relatively overperforming group. In particular, DMUs 31 and 32 were both classified as best-practice DMUs in Performance A; however, although DMU 31 also had a relatively high R&BD performance in Performance B, with a score of 0.945, DMU 32 had a relatively low R&BD performance in Performance B, with a score of 0.091, due to inefficient internal processes.
Figure 2 shows the difference between Performances A and B. Although the two had similar patterns, the scores for Performance B were generally lower than those for Performance A (the average scores for Performances A and B were 0.375 and 0.166, respectively). In addition, five DMUs achieved best-practice status (a performance score of 1) in Performance A, but no DMUs were classified as best-practice in Performance B. The reason for the results shown in Figure 2 is that the proposed model measures performance more strictly than the conventional DEA model by simultaneously considering both the technology securement and commercialization stages. More specifically, Performance B is obtained by multiplying the performance of the technology securement stage by the performance of the commercialization stage, so Performance B can be 1 only if both stage scores are 1. Although Performances A and B had similar patterns overall, several DMUs showed significant changes in their scores. For example, although DMUs 28 and 32 were regarded as best-practice DMUs in Performance A, with performance scores of 1, they had relatively low scores in Performance B, of 0.360 and 0.091, respectively.
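The multiplicative relationship described above can be checked directly against the reported scores (stage scores from Table 4, Performance B from Table 3):

```python
# Performance B = Ej(C1) x Ej(C2) for the DMUs discussed in the text.
stage_scores = {  # DMU: (technology securement Ej(C1), commercialization Ej(C2))
    27: (1.000, 0.900), 28: (0.464, 0.776),
    31: (1.000, 0.945), 32: (0.091, 1.000),
}
performance_b = {dmu: round(e1 * e2, 3) for dmu, (e1, e2) in stage_scores.items()}
print(performance_b)  # {27: 0.9, 28: 0.36, 31: 0.945, 32: 0.091}
```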
Table 4 shows the performance scores of the technology securement and commercialization stages and their contributions to the R&BD performance. For example, for DMU 4, which had a relatively low R&BD performance (a score of 0.115), Ej(C1) is greater than Ej(C2); consequently, although performance improvement in technology securement remains important, a strategy for improving performance in the commercialization stage should be prioritized. We then looked at the R&BD performance scores of DMUs 31 and 32 again. DMU 31 had a relatively high R&BD performance score of 0.945, and DMU 32 a relatively low score of 0.091; nonetheless, both were classified as best-practice DMUs in Performance A, as shown in Table 3. The reason for this difference is that DMU 31 had relatively high performance scores in both the technology securement and commercialization stages, whereas DMU 32 had a relatively low score in the technology securement stage, of 0.091, even though it had the highest score in the commercialization stage, of 1. That is, DMU 32 had a low R&BD performance due to its relatively low performance in the technology securement stage; to improve its R&BD performance, it should focus on developing an effective strategy to increase its R&D performance. The average performance scores of the technology securement and commercialization stages were 0.385 and 0.391, respectively. From the performance proportions (1 − IA(C1) and 1 − IA(C2)) in Table 4, we found that, on average, the commercialization stage contributed slightly more to the R&BD performance than the technology securement stage.
Figure 3 shows the DMUs located on a two-dimensional plane whose axes are the technology securement and commercialization performances. DMUs were classified as high or low relative to the average performances of the two stages (the average scores on the x axis and y axis were 0.385 and 0.391, respectively). The A area contains DMUs with relatively high commercialization performance and relatively low technology securement performance. The B area contains DMUs with high performance in both stages. The C area contains DMUs with low performance in both stages. The D area contains DMUs with relatively high technology securement performance and relatively low commercialization performance. Only eight DMUs (16.3% of the total) had performance scores above the average in both stages, while 25 DMUs (51% of the total) scored below the average in both stages. In other words, more than 50% of the DMUs fall into the underperforming group, with relatively low performances in both technology securement and commercialization. DMUs in the A area can improve their R&BD performance most effectively by improving their technology securement performance first; in contrast, DMUs in the D area can do so by focusing on improving their commercialization performance.
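The quadrant assignment in Figure 3 is a simple threshold rule against the two stage averages. A minimal sketch follows; the cutoffs 0.385 and 0.391 are the averages reported above, while the handling of scores exactly on the boundary is an assumption.

```python
def quadrant(e1: float, e2: float, avg1: float = 0.385, avg2: float = 0.391) -> str:
    """Classify a DMU by its technology securement (e1) and
    commercialization (e2) scores relative to the stage averages."""
    if e1 < avg1 and e2 >= avg2:
        return "A"  # low securement, high commercialization
    if e1 >= avg1 and e2 >= avg2:
        return "B"  # high in both stages
    if e1 < avg1 and e2 < avg2:
        return "C"  # low in both stages
    return "D"      # high securement, low commercialization

print(quadrant(0.091, 1.000))  # DMU 32's stage scores from Table 4 -> A
```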
Underperforming DMUs are able to improve their performance in each stage. Through Model (7), we can obtain guidelines on how much the inputs have to decrease or the outputs have to increase in order to improve the performance of underperforming DMUs. Table 5 provides such guidelines for the 10 most underperforming DMUs. As indicated in the table, if DMU 39 reduces its inputs (NR and RC) by 2 and 10, respectively, and increases its outputs (NTT and RT) by 2 and 273, respectively, it will achieve best-practice status in R&BD performance.
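Under the notation above, turning a radial score and slacks into Table 5-style guideline numbers can be sketched as follows. The values of tau and the slacks here are illustrative assumptions, not numbers from the paper.

```python
def guideline(x, y, tau, s_minus, s_plus):
    """Improvement guideline for one DMU:
    input decrease  d-x_i = x_i * (1 - tau*) + s_i^-,
    output target   d+y_r = y_r + s_r^+  (so the required increase is s_r^+)."""
    input_decreases = [xi * (1 - tau) + si for xi, si in zip(x, s_minus)]
    output_targets = [yr + sr for yr, sr in zip(y, s_plus)]
    return input_decreases, output_targets

# Illustrative DMU with 2 inputs and 3 outputs (hypothetical values):
dec, tgt = guideline(x=[100, 500], y=[10, 200, 1],
                     tau=0.9, s_minus=[0, 2], s_plus=[3, 40, 0])
print(dec, tgt)
```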

4. Conclusions

R&BD generally consists of two internal processes: the R&D process (technology securement) and the BD process (commercialization). For a practical R&BD performance evaluation, a systematic approach that considers the relationship between the R&D and BD processes is crucial. To this end, in this study, we proposed a two-stage network DEA-based systematic and simultaneous performance evaluation for R&BD that considers the relationship between its internal processes, i.e., the R&D and BD processes, and provides effective performance improvement guidelines to help relatively underperforming DMUs establish an optimal strategy for increasing their performance. We selected the number of researchers and R&D costs as R&BD inputs; the numbers of technology transfers, royalties, and foundations as R&BD outputs; and the number of secured technologies as the intermediate (output of the R&D process and input of the BD process). We then compared the R&BD performance scores from the conventional DEA model and the proposed model, and showed that although both sets of scores follow similar patterns, the scores from the proposed model were generally lower, as the proposed model measures performance more strictly by simultaneously considering the technology securement and commercialization stages. In addition, we showed that, on average, the commercialization stage of the BD process contributes slightly more to the overall R&BD performance than the technology securement stage of the R&D process, and we provided effective performance improvement guidelines for underperforming DMUs. Consequently, we showed that the proposed method can be used as a basic approach for establishing public strategies for fostering R&BD.
Despite its valuable contributions, this study also has several limitations, which can be addressed in future research. First, we analyzed public research institutes and universities through an integrated evaluation of R&BD performance; however, this study lacks a specific and strategic plan for achieving improved performance. This study focused on the applicability of the two-stage network DEA, and we hope to present specific benchmark objectives in future research. Second, we did not perform a comparative analysis of performance differences among groups based on the characteristics of the individual public research institutions and universities. Finally, we were unable to take into account the probabilistic volatility of the resource data or to analyze the factors affecting R&BD performance for a more robust performance evaluation and benchmarking; future research can address these issues by securing additional data.

Author Contributions

All of the authors discussed and proposed the statistical method, and designed the test plan. Jaehun Park and Joonyoung Kim accomplished the computation and wrote the paper; Si-Il Sung established the mathematical model and performed the validation of the analysis.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DEA: Data Envelopment Analysis
R&D: Research and Development
BD: Business Development
R&BD: Research and Business Development

Appendix A

Table A1. The input, intermediate, and output data for 49 research organizations. NR: Number of researchers; RC: R&D costs; NST: Number of secured technologies; NTT: Number of technology transfers; RT: Royalties; NF: Number of foundations.
DMU  NR  RC  NST  NTT  RT  NF
185357,458329161422
2144558,178497285651
385049,844283401291
4116971,33765281255310
53615268,472276218430552
623488755810231
72949120,537611109102220
8532156,40769811732661
937688,0641726239411
1045043,453130342801
116419312011041
12164587,39369414510968
13547123,9452864619681
142742175,19572611916992
1570419,35917115686
16223258,658356523611
17345472,719478766201
1812310,260227463901
19639106,092141023540221
202256541,1834984165436,3645
21152865,69448710914214
221331131,787438464331
231180143,14177876117910
2485367,5435135411629
254713467,6571675187391142
267566128,58173611614522
2789558168153
28276102,32940910272642
29613283,4714815414172
306746795481251
31417179,700145071671001
329114,0531810285
33563116,0999091251,5461
344467182,47686116847874
353718104,900664718435
3644431,60529814102
3754450,921392752892
38233836470141092
3973062,62331016994
402662114,10284511311062
4124143,285236872102
4221937,697160564951
4331115,46516659441
4429415,742111232182
4525942,877302789222
46491246,05489612547881
47705211,9985184519881
481698179,77493116035634
493538237,000105019028982
Table A2. The descriptive statistics of input, intermediate and output data for 49 research organizations.
Resource             Max.     Min.  Ave.     Std. Dev.
Inputs        NR     7566     64    1374     1539.93
              RC     541,183  558   110,442  110,654.27
Intermediate  NST    4984     16    639      806.40
Outputs       NTT    1654     0     122      247.86
              RT     36,364   5     2265     5290.35
              NF     42       1     4        6.54

References

  1. Mojaveri, H.S.; Nosratabadi, H.E.; Farzad, H. A new model for overcoming technology transfer barriers in the Iranian health system. Int. J. Trade Econ. Financ. 2011, 2, 280–284.
  2. Roxas, S.A.; Piroli, G.; Sorrentino, M. Efficiency & evaluation analysis of a network of technology transfer brokers. Technol. Anal. Strateg. Manag. 2011, 23, 7–24.
  3. Kerssens-van Drongelen, I.; Nixon, B.; Pearson, A. Performance measurement in industrial R&D. Int. J. Manag. Rev. 2000, 2, 111–143.
  4. Charnes, A.; Cooper, W.; Rhodes, E. Measuring the efficiency of decision making units. Eur. J. Oper. Res. 1978, 2, 429–444.
  5. Anderson, T.R.; Daim, T.U.; Lavoie, F.F. Measuring the efficiency of university technology transfer. Technovation 2007, 27, 306–318.
  6. Wang, E.C.; Huang, W. Relative efficiency of R&D activities: A cross-country study accounting for environmental factors in the DEA approach. Res. Policy 2007, 36, 260–273.
  7. Sharma, S.; Thomas, V. Inter-country R&D efficiency analysis: An application of data envelopment analysis. Scientometrics 2008, 76, 483–501.
  8. Hsu, F.M.; Hsueh, C.C. Measuring relative efficiency of government-sponsored R&D projects: A three-stage approach. Eval. Program Plann. 2009, 32, 178–186.
  9. Lee, H.; Park, Y.; Choi, H. Comparative evaluation of performance of national R&D programs with heterogeneous objectives: A DEA approach. Eur. J. Oper. Res. 2009, 196, 847–855.
  10. Kim, Y.H. The ivory tower approach to entrepreneurial linkage: Productivity changes in university technology transfer. J. Technol. Transf. 2013, 38, 180–197.
  11. Jeon, I.; Lee, H. Performance evaluation of R&D commercialization: A DEA-based three-stage model of R&BD performance. J. Korean Inst. Ind. Eng. 2015, 41, 425–438. (In Korean)
  12. Sexton, M.G.; Barrett, P.; Aouad, G. Diffusion mechanisms for construction research & innovation into small to medium sized construction firms. In CRISP-99/7; CRISP: London, UK, 1999.
  13. Bozeman, B.; Rimes, H.; Youtie, J. The evolving state-of-the-art in technology transfer research: Revisiting the contingent effectiveness model. Res. Policy 2015, 44, 34–49.
  14. Min, J.W. Analysis of Success Factors About Technology Transfer and Commercialization of Public Institutes and University. Ph.D. Thesis, Korea University, Seoul, Korea, 2015.
  15. Kim, Y.H.; Lim, H.J. A study on the creative economy and diffusion of R&D. Korea Product. Assoc. 2013, 27, 285–307. (In Korean)
  16. Guan, J.; Chen, K. Measuring the innovation production process: A cross-region empirical study of China’s high-tech innovations. Technovation 2010, 3, 348–358.
  17. Guan, J.; Wang, J. Evaluation and interpretation of knowledge production efficiency. Scientometrics 2004, 59, 131–155.
  18. Liu, J.S.; Lu, W.-M. DEA and ranking with the network-based approach: A case of R&D performance. Omega 2010, 38, 453–464.
  19. Guan, J.; Chen, K. Modeling the relative efficiency of national innovation systems. Res. Policy 2012, 41, 102–115.
  20. Schmidt, P. Production frontier functions. Econom. Rev. 1985, 4, 289–328.
  21. Ray, S.C. Data Envelopment Analysis: Theory and Techniques for Economics and Operations Research; Cambridge University Press: New York, NY, USA, 2004.
  22. Banker, R.D. Maximum likelihood, consistency, and data envelopment analysis: A statistical foundation. Manag. Sci. 1993, 39, 1265–1273.
  23. Simar, L. Estimating efficiencies from frontier models with panel data: A comparison of parametric, non-parametric and semi-parametric methods with bootstrapping. In International Applications of Productivity and Efficiency Analysis; Springer: Dordrecht, The Netherlands, 1992; pp. 167–199.
  24. Simar, L.; Wilson, P.W. Sensitivity analysis of efficiency scores: How to bootstrap in nonparametric frontier models. Manag. Sci. 1998, 44, 49–61.
  25. Simar, L.; Wilson, P.W. A general methodology for bootstrapping in non-parametric frontier models. J. Appl. Stat. 2000, 27, 779–802.
  26. Simar, L.; Wilson, P.W. Estimation and inference in two-stage, semi-parametric models of production processes. J. Econom. 2007, 136, 31–64.
  27. Ahn, T.S. Efficiency and Related Issues in Higher Education: A Data Envelopment Analysis Approach. Ph.D. Thesis, University of Texas at Austin, Austin, TX, USA, 1987.
  28. Beasley, J.E. Comparing university departments. Omega Int. J. Manag. Sci. 1990, 18, 171–183.
  29. Schefczyk, M. Operational performance of airlines: An extension of traditional measurement paradigms. Strateg. Manag. J. 1993, 14, 301–317.
  30. Pina, V.; Torres, L. Evaluating the efficiency of nonprofit organizations: An application of data envelopment analysis to the public health service. Financ. Account. Manag. 1992, 8, 213–224.
  31. Park, J.; Lee, D.; Zhu, J. An integrated approach for ship block manufacturing process performance evaluation: Case from a Korean shipbuilding industry. Int. J. Prod. Econ. 2014, 156, 214–222.
  32. Park, J.; Bae, H.; Dinh, T.-C.; Ryu, K. Operator allocation in cellular manufacturing system by integrated genetic algorithm and fuzzy-data envelopment analysis. Int. J. Adv. Manuf. Technol. 2014, 75, 464–477.
  33. Park, J.; Bae, H.; Bae, J. Cross-evaluation based multi-criteria ABC inventory classification. Comput. Ind. Eng. 2014, 76, 40–48.
  34. Kao, C.; Hwang, S.N. Efficiency decomposition in two-stage data envelopment analysis: An application to non-life insurance companies in Taiwan. Eur. J. Oper. Res. 2008, 196, 1107–1112.
  35. Bonaccorsi, A.; Daraio, C. A robust nonparametric approach to the analysis of scientific productivity. Res. Eval. 2003, 12, 47–69.
  36. Meng, W.; Zhang, D.; Qi, L.; Liu, W. Two-level DEA approaches in research evaluation. Omega 2008, 36, 950–957.
Figure 1. Two-stage structure of research and business development (R&BD) performance. R&D: research and development; BD: business development.
Figure 2. Comparison of the average performance scores of A and B.
Figure 3. Technology securement and commercialization performances on a two-dimensional plane.
Table 1. Sample data. DMU: decision-making units.
     R&D Inputs   R&D Output  BD Outcome  Results
DMU  X1   X2      Y1          Q2          R&BD Performance  Ej(C1)  Ej(C2)  1 − IA(C1)  1 − IA(C2)
A    20   40      100         170         1.00              1.00    1.00    1.00        1.00
B    40   20      100         120         0.71              1.00    0.71    1.00        0.00
C    30   60      100         170         0.67              0.67    1.00    0.00        1.00
D    50   60      100         150         0.48              0.55    0.88    0.12        0.88
E    60   30      100         100         0.39              0.67    0.59    0.45        0.55
F    60   40      100         130         0.46              0.60    0.76    0.26        0.74
G    70   70      100         150         0.38              0.43    0.88    0.08        0.92
Table 2. Sensitivity analysis for DMU G to improve performance.
X1  X2  R&BD Performance    Q2   R&BD Performance
70  70  0.378               150  0.378
60  60  0.441               160  0.403
50  50  0.529               170  0.429
40  40  0.662               180  0.429
30  30  0.882               190  0.429
20  20  0.882               200  0.429
10  10  0.882               210  0.429
Table 3. Comparison of the performance scores of A and B.
DMU            1      2      3      4      5      6      7      8      9      10
Performance A  0.073  0.146  0.162  0.527  0.180  0.202  0.210  0.325  0.632  0.173
Performance B  0.024  0.044  0.065  0.115  0.062  0.089  0.076  0.171  0.179  0.075
DMU            11     12     13     14     15     16     17     18     19     20
Performance A  0.817  0.327  0.228  0.166  0.236  0.145  0.164  0.973  0.639  1.000
Performance B  0.175  0.139  0.082  0.060  0.061  0.071  0.083  0.383  0.334  0.599
DMU            21     22     23     24     25     26     27     28     29     30
Performance A  0.364  0.079  0.243  0.339  0.244  0.186  1.000  1.000  0.149  0.170
Performance B  0.144  0.034  0.064  0.076  0.043  0.073  0.900  0.360  0.055  0.122
DMU            31     32     33     34     35     36     37     38     39     40
Performance A  1.000  1.000  0.280  0.384  0.134  0.130  0.325  0.306  0.133  0.200
Performance B  0.945  0.091  0.179  0.090  0.057  0.036  0.132  0.138  0.022  0.084
DMU            41     42     43     44     45     46     47     48     49
Performance A  0.479  0.352  0.740  0.305  0.461  0.399  0.139  0.296  0.202
Performance B  0.283  0.214  0.305  0.124  0.262  0.173  0.055  0.102  0.072
Table 4. Performance scores in the technology securement and commercialization stages.
DMU         1      2      3      4      5      6      7      8      9      10     11     12     13
Ej(C1)      0.257  0.376  0.254  0.409  0.464  0.287  0.223  0.463  0.179  0.151  0.449  0.354  0.208
Ej(C2)      0.092  0.117  0.255  0.281  0.135  0.310  0.340  0.369  1.000  0.494  0.391  0.392  0.393
1 − IA(C1)  0.544  0.575  0.483  0.519  0.602  0.468  0.440  0.493  0.000  0.356  0.477  0.448  0.413
1 − IA(C2)  0.456  0.425  0.517  0.481  0.398  0.532  0.560  0.507  1.000  0.644  0.523  0.552  0.587
DMU         14     15     16     17     18     19     20     21     22     23     24     25     26
Ej(C1)      0.186  0.381  0.261  0.277  1.000  0.988  0.856  0.328  0.172  0.326  0.343  0.185  0.236
Ej(C2)      0.325  0.160  0.273  0.300  0.383  0.338  0.699  0.440  0.200  0.196  0.221  0.233  0.308
1 − IA(C1)  0.438  0.560  0.478  0.470  1.000  0.973  0.456  0.416  0.482  0.528  0.523  0.474  0.457
1 − IA(C2)  0.562  0.440  0.522  0.530  0.000  0.027  0.544  0.584  0.518  0.472  0.477  0.526  0.543
DMU         27     28     29     30     31     32     33     34     35     36     37     38     39
Ej(C1)      1.000  0.464  0.226  0.272  1.000  0.091  0.667  0.208  0.273  0.424  0.381  0.367  0.229
Ej(C2)      0.900  0.776  0.244  0.450  0.945  1.000  0.269  0.432  0.208  0.085  0.348  0.376  0.096
1 − IA(C1)  1.000  0.211  0.480  0.399  1.000  0.000  0.643  0.395  0.507  0.605  0.478  0.459  0.534
1 − IA(C2)  0.000  0.789  0.520  0.601  0.000  1.000  0.357  0.605  0.493  0.395  0.522  0.541  0.466
DMU         40     41     42     43     44     45     46     47     48     49     AVG.
Ej(C1)      0.327  0.426  0.323  0.477  0.314  0.523  0.525  0.256  0.281  0.199  0.385
Ej(C2)      0.256  0.664  0.664  0.640  0.395  0.502  0.331  0.214  0.363  0.362  0.391
1 − IA(C1)  0.503  0.296  0.281  0.324  0.436  0.435  0.538  0.500  0.443  0.425  0.489
1 − IA(C2)  0.497  0.704  0.719  0.676  0.564  0.565  0.462  0.500  0.557  0.575  0.511
Table 5. Guidelines for performance improvement of the 10 most underperforming DMUs.
DMU  Δ−NR  Δ−RC  Δ+NTT  Δ+RT  Δ+NF
392102273-
117-200-
22214-574-
3616-288-
2516103-0-
219-46-
29321110-
47427460-
35426-695-
14636-909-
