Article

Science Evaluation in the Czech Republic: The Case of Universities

University of West Bohemia, Department of Computer Science and Engineering, Univerzitní 8, 30614 Plzeň, Czech Republic
Societies 2013, 3(3), 266-279; https://doi.org/10.3390/soc3030266
Submission received: 30 May 2013 / Revised: 19 June 2013 / Accepted: 21 June 2013 / Published: 26 June 2013

Abstract

In this paper, we review the current official methodology for evaluating scientific research output in the Czech Republic and present a case study of twenty-one Czech public universities. We analyze the results of four successive official research assessment reports from 2008 to 2011 and draw the following main conclusions: (a) the overall research production of the universities more than doubled in the period under investigation, with virtually all universities increasing their absolute research output each year; (b) the growth of total research production is slowing down; and (c) Charles University in Prague is still the top research university in the Czech Republic in both absolute and relative terms, but its relative share of total research performance is decreasing in favor of some smaller universities. We also show that university rankings based on the current methodology are quite strongly correlated with established indicators of scientific productivity. This is the first time that the current official Czech science policy and evaluation methodology, along with its results for the Czech university system, has been communicated to an international audience.

1. Introduction and Related Work

The evaluation of scientific research output has become crucial in recent years: the budgets of science funding bodies (governments, foundations, etc.) have tightened, while the need for research and innovation has remained constant or even grown. It has therefore become necessary to identify high-quality research that should be prioritized for funding, as well as poor-quality research whose funding is no longer effective. The key idea is to promote the advancement of science as efficiently as possible, i.e., to maximize the return on effort from the point of view of financing science. This is why many countries have introduced research performance evaluation systems (especially for institutions), such as the well-known Research Assessment Exercise (RAE) in the United Kingdom or Excellence in Research for Australia (ERA). Science evaluation has also been a hot topic in the Czech Republic in recent years. The Czech government (or, more precisely, the Research, Development and Innovation Council, an advisory body to the government) published an official methodology for research output evaluation that has since been revised several times within a few years. We review the current methodology (from May 2011) in the following sections and present the results of the last four official research evaluation reports based on this methodology for twenty-one Czech public universities. Although the official methodology is only meant to serve as an input into the process of research budget creation, its application inevitably yields university rankings, which form part of this paper’s results section. (There are no official university rankings in the Czech Republic.)
The Czech Republic is little covered in science and technology literature. Some of the few studies devoted exclusively to the Czech Republic include bibliometric analyses of Czech research publications [1], patents [2] or European framework program results [3]. Other scientometric studies usually observe the Czech Republic in the context of a larger group of (Central) European countries, e.g., [4] or [5]. As far as the official evaluation of scientific research output in the Czech Republic is concerned, it seems that the Czech research evaluation system is (almost) unknown to the rest of the world: neither [6] nor, more recently, [7] make an explicit mention of the Czech Republic in their comprehensive overviews of university research evaluation and funding systems in different countries. Country-specific research evaluation at the university level is currently a lively topic for scientometricians, as is well documented by the recent studies for Colombian [8], Spanish [9], Chinese [10], South African [11] or Taiwanese [12] universities. Many papers (e.g., [13,14,15,16]) are also concerned with the use of peer review and bibliometric indicators in national university research evaluation and funding systems and argue why the former or latter approach is better, but this is not the intent of this article. We solely present the currently used research evaluation methodology and a case study for universities.

2. Data and Methods

In this study, we concentrated on a set of twenty-one public universities (see Table 1) run by the Ministry of Education, Youth and Sports of the Czech Republic and, in one case, by the Ministry of Defense of the Czech Republic (University of Defense). These universities are also the most highly ranked in the 2011 Research Evaluation Report (the most recent evaluation). Other public universities in the Czech Republic (such as colleges of arts or police academies) do not conduct research in the fields of science and technology and were therefore excluded from this study.
Table 1. List of universities and their acronyms.
University name in English | Acronym
University of South Bohemia in České Budějovice | Budějovice
Czech Technical University in Prague | ČVUT
Czech University of Life Sciences Prague | ČZU
University of Hradec Králové | Hradec
Technical University of Liberec | Liberec
Masaryk University | MU
Mendel University in Brno | MZLU
Palacký University, Olomouc | Olomouc
Silesian University in Opava | Opava
University of Ostrava | Ostrava
University of Pardubice | Pardubice
University of West Bohemia | Plzeň
Charles University in Prague | UK
University of Defense | UO
Jan Evangelista Purkyně University in Ústí nad Labem | Ústí
University of Veterinary and Pharmaceutical Sciences Brno | VFU
VŠB-Technical University of Ostrava | VŠB-TUO
University of Economics, Prague | VŠE
Institute of Chemical Technology, Prague | VŠCHT
Brno University of Technology | VUT
Tomas Bata University in Zlín | Zlín

2.1. Scores

The official methodology for the evaluation of research output has been slightly modified a few times since 2008, the first year in a series of successive, comparable research evaluation reports. (There were research evaluation methodologies and reports before 2008, but they differed from the current methodology to such an extent that it would make no sense to compare those evaluations to the current ones. For instance, those reports considered only research results related to completed grant projects, whereas the current methodology considers all results.) In the following sections, we present a short summary of the current methodology (available in Czech [17]) defined by the Czech government in May 2011. In general, the methodology is based on assessing scientific production, i.e., it counts publications and other research results produced, and only indirectly (in some cases) on assessing the quality of research output. No citations are counted, but in the case of journal articles, the journal impact factor is taken into account, which serves as a cheap proxy for expected citation counts. Under this methodology, all research results produced in the five years preceding the evaluation year are assigned the scores shown in Table 2. For instance, all journal articles indexed in the Web of Science (WoS) database by Thomson Reuters that were published from 2006 to 2010 in journals with a nonzero impact factor in the Journal Citation Reports (JCR, edition of the publication year) are assigned a score between ten and 305 in the 2011 Evaluation. The score is computed according to the following formula:
Jimp = 10 + 295 × (1 − N) / (1 + N/0.057),
where N is the normalized journal rank obtained from the JCR when the journals in its category are sorted by impact factor (IF) in descending order: N = (P − 1)/(Pmax − 1), where P is the journal rank and Pmax is the number of journals in the category. If the journal belongs to two or more categories, N is the average normalized rank over all categories. There are, however, two cases in which this formula is not needed: an article published in one of the prestigious multidisciplinary journals Nature or Science is assigned a score of 500 without any computation. Articles published in refereed journals without an IF (Jnoimp) can also receive scores, provided they are indexed in the well-known databases Scopus and/or ERIH (European Reference Index for the Humanities, categories A, B, C). For Scopus, there is a single score of twelve, whereas for ERIH, there is a distinct score for each journal category; in addition, articles in journals on “nation-specific” topics, such as history or linguistics, carry more weight than articles in other journals. There is also a category for articles in Czech refereed journals (Jref), whose list is pre-defined and which is likewise divided into “national” fields and other fields. If a journal article happens to belong to two or more categories (or subcategories), the highest possible score is assigned to it. Books (B) are rewarded with scores of forty or twenty, depending on the publication language (English, Chinese, French, German, Russian and Spanish are considered “world” languages) and scientific field. Book chapters receive scores proportional to the score of the entire book, based on the chapter’s extent within the book. The last basic research result category is conference proceedings papers (D) indexed in WoS, which score eight points each. In addition, any of the above results whose presence in WoS is required must be of one of the following document types: article, review, proceedings paper or letter.
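For illustration, the Jimp scoring rule above can be written as a short function. The following is a minimal sketch in Python under the definitions just given; the function names are ours, averaging over multiple JCR categories is omitted for brevity, and journal ranks and category sizes would in practice come from the JCR.

```python
# Minimal sketch of the Jimp scoring rule described above (illustrative only).

def jimp_score(rank: int, journals_in_category: int) -> float:
    """Score of a WoS article based on the journal's IF rank within its JCR category."""
    # Normalized rank: N = (P - 1) / (Pmax - 1); 0 for the top journal, 1 for the last one.
    n = (rank - 1) / (journals_in_category - 1)
    return 10 + 295 * (1 - n) / (1 + n / 0.057)

def article_score(journal_title: str, rank: int, journals_in_category: int) -> float:
    """Nature and Science articles receive a flat 500 points; others use the Jimp formula."""
    if journal_title in ("Nature", "Science"):
        return 500.0
    return jimp_score(rank, journals_in_category)

# The best-ranked journal in a 100-journal category scores 305 points, the worst-ranked 10.
print(article_score("Some Journal", 1, 100))    # 305.0
print(article_score("Some Journal", 100, 100))  # 10.0
print(article_score("Nature", 1, 50))           # 500.0
```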
Table 2. Research result categories and their scores. ERIH, European Reference Index for the Humanities. EPO, European Patent Office.
Result category | Description | Score (“national” fields / other fields)
Jimp | impacted journal article | 10–305
Jimp | Nature or Science article | 500
Jnoimp | refereed journal article, Scopus | 12
Jnoimp | refereed journal article, ERIH A | 30 / 12
Jnoimp | refereed journal article, ERIH B | 20 / 11
Jnoimp | refereed journal article, ERIH C | 10 / 10
Jref | Czech refereed journal article (list of refereed journals) | 10 / 4
B | book or book chapter, world language | 40 / 40
B | book or book chapter, other languages | 20
D | conference proceedings paper | 8
P | patent, EPO, USA, Japan | 500
P | patent, license-exploited Czech or national patent | 200
P | patent, other patents | 40
Z | pilot plant, certified technology, variety, breed | 100
F | utility model | 40
F | industrial design | 40
G | prototype, functional sample | 40
H | results implemented by funding body | 40
N | certified methodologies and procedures, specialized maps | 40
R | software | 40
V | research report with confidential information | 50
The other result categories in Table 2 comprise applied research results: patents (P); pilot plants, certified technologies, varieties and breeds (Z); utility models and industrial designs (F); prototypes and functional samples (G); results implemented by the funding body (H, e.g., results implemented in legal documents); certified methodologies and procedures and specialized maps (N); software (R); and research reports with confidential information (V). The highest score here (500) can be assigned to a patent granted by the European Patent Office or by the US or Japanese patent offices. The second highest score (200) is achieved by a national patent (granted by a patent office other than the three above), provided the patent is commercially exploited under a valid license. All other patents receive a uniform score of forty. Each of the other applied research results also obtains forty points, except for categories Z (100) and V (fifty). The result categories H and N are further split into subcategories (with the same scores) whose descriptions are not shown in Table 2.
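As an illustration, the patent scoring logic just described can be captured in a few lines. This is a minimal sketch; the office codes and the license flag are hypothetical names introduced here, not fields of the official methodology or its information system.

```python
# Illustrative sketch of the patent (P) scoring rule described above.

def patent_score(office: str, license_exploited: bool = False) -> int:
    if office in ("EPO", "USPTO", "JPO"):  # European, US or Japanese patent office
        return 500
    if license_exploited:                  # national patent exploited under a valid license
        return 200
    return 40                              # all other patents

print(patent_score("EPO"))            # 500
print(patent_score("CZ", True))       # 200 (hypothetical national office code)
print(patent_score("CZ"))             # 40
```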

2.2. Renormalization

The scores in Table 2 are given for a full research result; they are further distributed to individual universities (or, more generally, to research institutions) according to their share in the result. In principle, outputs are fractionally allocated to universities based on their share of authors; however, domestic and foreign affiliations are weighted differently. Finally, the current methodology employs a score renormalization process with the following goals: (a) to prevent excessive growth of results whose existence and quality are difficult to verify, (b) to retain the funding proportion between basic and applied research and (c) to retain the funding proportions among the various disciplinary research areas. The renormalization steps must be taken in exactly the following order (a code sketch of these steps is given after the list):
(a)
Reduction of excessive (more than 15%) growth of results of a certain type. Let X2009 be the total score of results of type X produced in 2009 and X2010 the total score of results of type X produced in 2010. If X2010/X2009 > 1.15, then the scores of all results of type X from 2010 are multiplied by the factor cX = 1.15 · (X2009/X2010). This step does not apply to Jimp results.
(b)
Correction of the proportion between basic and applied research results to 85:15. Let SB = J + B + D be the total score of basic research results and SA = P + Z + F + G + H + N + R + V the total score of applied research results. (Previous methodologies also included the result categories C for basic research and L, S and T for applied research.) Let a85 = 0.85 · (SB + SA)/SB be the correction factor for basic research results and a15 = 0.15 · (SB + SA)/SA the correction factor for applied research results. Then, all results of categories J, B and D are multiplied by the factor a85, and all results of categories P, Z, F, G, H, N, R and V are multiplied by the factor a15.
(c)
Setting of the proportions among the various disciplinary research areas. Let aX = pX · (SB + SA)/X be the correction factor of research area X, where SB and SA are defined above, X is the total score of results in research area X after the corrections described in the two previous steps, and pX is the (desired) research area share from Table 3. The results in each research area are multiplied by the corresponding correction factor.
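For clarity, the three renormalization steps can be summarized in code. The following is a minimal sketch that assumes the total scores per result type and per disciplinary area have already been aggregated; the official procedure applies the resulting factors to the individual results, and the desired shares from Table 3 are used here as fractions (e.g., 0.0785 for social sciences).

```python
# Minimal sketch of the renormalization factors; aggregated totals are assumed as inputs.

BASIC = {"Jimp", "Jnoimp", "Jref", "B", "D"}        # basic research categories (J + B + D)
APPLIED = {"P", "Z", "F", "G", "H", "N", "R", "V"}  # applied research categories

def growth_cap_factor(x_prev: float, x_curr: float) -> float:
    """(a) Cap excessive year-over-year growth of a result type at 15% (not applied to Jimp)."""
    return 1.15 * x_prev / x_curr if x_curr / x_prev > 1.15 else 1.0

def basic_applied_factors(category_totals: dict) -> tuple:
    """(b) Correction factors a85 and a15 enforcing the 85:15 basic/applied proportion."""
    sb = sum(v for k, v in category_totals.items() if k in BASIC)
    sa = sum(v for k, v in category_totals.items() if k in APPLIED)
    return 0.85 * (sb + sa) / sb, 0.15 * (sb + sa) / sa

def area_factor(desired_share: float, total_score: float, area_score: float) -> float:
    """(c) Correction factor aX = pX * (SB + SA) / X for one disciplinary area."""
    return desired_share * total_score / area_score
```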
The final scores achieved by universities after renormalization are used by the Czech government in the creation of the budget for the support of research institutions. Officially, the scores are not used to rank research institutions in any way.
Table 3. Disciplinary areas and their desired shares.
Disciplinary area | pX
1. social sciences | 7.85
2. engineering | 15.60
3. mathematics and computer science | 5.16
4. physics | 15.08
5. chemical sciences | 15.80
6. Earth sciences | 5.06
7. biological sciences | 12.00
8. agriculture | 4.96
9. medicine | 10.74
10. arts and humanities | 7.75
Total | 100.00

3. Results and Discussion

From 2008 to 2011, the universities under investigation more than doubled their overall research output, achieving total scores of 0.73, 1.20, 1.56 and 1.75 million points in the respective years (see Table 4). This amounts to an increase of 140% in scientific productivity between 2008 and 2011, with year-by-year growth of 65%, 30% and 12% in 2009, 2010 and 2011, respectively. Research productivity is thus still growing, but the growth is slowing down. As far as the absolute scores of individual universities are concerned, all but two of the universities managed to increase their research output compared to the previous year, sometimes quite remarkably (e.g., Hradec by 131% in 2009 and by 114% in 2010, or Ostrava by 101% in 2009), other times only modestly (e.g., MU by 3% in 2011, VŠCHT by 5% in 2010 or Charles University (UK) by 5% in 2011). The only exceptions to the otherwise ever-growing research productivity are VŠE, which dropped by 6% in 2011, and the University of Defense (UO), which declined by 2% in 2011. Note, however, that because of some methodological changes in the research assessment between 2008 and 2011, a 100% score growth does not necessarily mean a twofold increase in productivity.
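These growth figures follow directly from the total scores in Table 4, as the following snippet illustrates:

```python
# Year-over-year and overall growth of the total scores from Table 4.
totals = {2008: 726_658, 2009: 1_200_220, 2010: 1_557_490, 2011: 1_745_720}

for year in (2009, 2010, 2011):
    print(year, f"{totals[year] / totals[year - 1] - 1:.0%}")   # 65%, 30%, 12%

print("2008-2011", f"{totals[2011] / totals[2008] - 1:.0%}")    # 140%
```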
Now, let us have a look at how the relative shares of the universities in the overall research output (produced by the twenty-one public science and technology universities) changed between 2008 and 2011. In Figure 1, we can see that Charles University (UK) was the leading institution, with a 34% share in 2008, followed by ČVUT and MU (the other “big” universities), with 12% and 11%, respectively. In 2011, the top three universities remained the same, but UK’s share dropped by five percentage points (see the bottom chart in Figure 1). On the other hand, some “small” universities managed to raise their shares, e.g., Olomouc, Budějovice or Plzeň. The pie charts in Figure 2 are quite similar, even though they are based on the numbers of publications indexed in the Web of Science in 2003–2007 (for 2008) and in 2006–2010 (for 2011) that were affiliated with the Czech universities under study. (The publication counts were retrieved in April 2013 using the “Organization-Enhanced” advanced search feature, including all document types from the five main citation databases of the Web of Science by Thomson Reuters.)
Table 4. Absolute and relative university scores in 2008–2011.
University | 2008 | % | 2009 | % | 2010 | % | 2011 | % | Δ09 | Δ10 | Δ11
Budějovice | 21,440 | 2.95 | 39,082 | 3.26 | 55,586 | 3.55 | 65,244 | 3.74 | 82% | 42% | 17%
ČVUT | 87,631 | 12.06 | 155,587 | 12.96 | 194,547 | 13.20 | 211,796 | 12.13 | 78% | 25% | 9%
ČZU | 11,561 | 1.59 | 19,023 | 1.58 | 30,097 | 1.86 | 39,261 | 2.25 | 65% | 58% | 30%
Hradec | 1,567 | 0.22 | 3,623 | 0.30 | 7,739 | 0.42 | 10,506 | 0.60 | 131% | 114% | 36%
Liberec | 10,200 | 1.40 | 14,149 | 1.18 | 21,218 | 1.45 | 25,653 | 1.47 | 39% | 50% | 21%
MU | 78,608 | 10.82 | 122,392 | 10.20 | 191,667 | 11.76 | 197,256 | 11.30 | 56% | 57% | 3%
MZLU | 17,024 | 2.34 | 23,058 | 1.92 | 30,722 | 1.85 | 37,076 | 2.12 | 35% | 33% | 21%
Olomouc | 40,332 | 5.55 | 72,485 | 6.04 | 101,708 | 6.44 | 122,835 | 7.04 | 80% | 40% | 21%
Opava | 4,065 | 0.56 | 7,062 | 0.59 | 11,649 | 0.65 | 12,796 | 0.73 | 74% | 65% | 10%
Ostrava | 5,135 | 0.71 | 10,318 | 0.86 | 18,683 | 1.08 | 23,417 | 1.34 | 101% | 81% | 25%
Pardubice | 21,670 | 2.98 | 39,524 | 3.29 | 49,098 | 3.04 | 56,925 | 3.26 | 82% | 24% | 16%
Plzeň | 20,956 | 2.88 | 29,495 | 2.46 | 49,036 | 3.30 | 62,430 | 3.58 | 41% | 66% | 27%
UK | 246,366 | 33.90 | 429,261 | 35.77 | 487,227 | 31.25 | 513,338 | 29.41 | 74% | 14% | 5%
UO | 11,870 | 1.63 | 18,033 | 1.50 | 21,426 | 1.43 | 20,993 | 1.20 | 52% | 19% | −2%
Ústí | 5,113 | 0.70 | 7,753 | 0.65 | 10,794 | 0.65 | 13,999 | 0.80 | 52% | 39% | 30%
VFU | 8,080 | 1.11 | 13,423 | 1.12 | 16,599 | 1.09 | 18,838 | 1.08 | 66% | 24% | 13%
VŠB-TUO | 12,912 | 1.78 | 20,670 | 1.72 | 35,287 | 2.27 | 52,308 | 3.00 | 60% | 71% | 48%
VŠE | 12,126 | 1.67 | 14,750 | 1.23 | 25,529 | 1.31 | 24,030 | 1.38 | 22% | 73% | −6%
VŠCHT | 41,734 | 5.74 | 62,164 | 5.18 | 65,174 | 4.22 | 79,556 | 4.56 | 49% | 5% | 22%
VUT | 62,100 | 8.55 | 88,667 | 7.39 | 115,882 | 8.10 | 134,934 | 7.73 | 43% | 31% | 16%
Zlín | 6,169 | 0.85 | 9,701 | 0.81 | 17,823 | 1.06 | 22,529 | 1.29 | 57% | 84% | 26%
Total | 726,658 | 100 | 1,200,220 | 100 | 1,557,490 | 100 | 1,745,720 | 100 | 65% | 30% | 12%
Figure 1. Relative university scores in 2008 and 2011.
Figure 2. Relative university publication output in 2008 and 2011 by Web of Science (WoS).
The difference between the absolute and relative research output can be seen by comparing the two charts in Figure 3. In the top chart, all universities improve their absolute research performance (except VŠE and UO in 2011), but in the bottom chart, only some of them increase their relative research output, while the shares of others decline. In relative terms, Charles University (UK) is still the top research university, but its lead is diminishing, the other big universities (ČVUT and MU) stagnate and small universities are catching up (the trend is definitely positive for Olomouc and Budějovice). As for the rankings themselves, they are very highly correlated, with Spearman’s rho varying from 0.961 (between 2008 and 2011) to 0.992 (between 2008 and 2009), both statistically significant at the 0.01 level (two-tailed). However, let us underline again that the scores we are comparing here are not officially meant to be used to create university rankings; they are merely an input into the process of research budget creation in the Czech Republic. As for the scientific production of Czech universities measured by their publication counts in the Web of Science in the five years preceding the census years, consider Figure 4. The growth of absolute publication output is still quite evident (see top chart), and so is, to a smaller extent, the relative production increase of some smaller universities (see bottom chart). Furthermore, the relative decline of Charles University (UK) is less steep. Nevertheless, the rankings of universities based on the methodology described in this paper and those grounded in the productivity indicators from the Web of Science in a particular year are very highly positively correlated, with Spearman’s correlation coefficients between 0.884 in 2008 and 0.935 in 2011 (always significant at the 0.01 level, two-tailed). For complete information on WoS-indexed publication output, see Table 5, which shows that productivity increased by about 49% between 2008 and 2011 and grew by only 13% in the last year.
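The reported rank correlations can be recomputed from the score and publication-count columns. The following is a minimal sketch using SciPy, shown here with a four-university excerpt from Tables 4 and 5 rather than the full twenty-one-university lists that yield the coefficients quoted above.

```python
# Sketch of the rank-correlation computation; excerpt data only, so the output will not
# exactly match the rho values reported for the full set of twenty-one universities.
from scipy.stats import spearmanr

official_2011 = [65_244, 211_796, 39_261, 513_338]   # Budějovice, ČVUT, ČZU, UK (Table 4)
wos_2011 = [1_929, 4_111, 1_186, 14_909]             # the same universities (Table 5)

rho, p_value = spearmanr(official_2011, wos_2011)
print(rho)   # 1.0 for this excerpt: the two rankings agree on these four universities
```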
Figure 3. Absolute and relative university scores in 2008–2011.
Figure 4. Absolute and relative university publication output in 2008–2011 by WoS.
Table 5. University publication output in 2008–2011 by WoS.
University | 2008 | % | 2009 | % | 2010 | % | 2011 | % | Δ09 | Δ10 | Δ11
Budějovice | 1,216 | 4.17 | 1,417 | 4.21 | 1,715 | 4.47 | 1,929 | 4.44 | 17% | 21% | 12%
ČVUT | 2,846 | 9.76 | 3,299 | 9.81 | 3,697 | 9.63 | 4,111 | 9.45 | 16% | 12% | 11%
ČZU | 600 | 2.06 | 756 | 2.25 | 932 | 2.43 | 1,186 | 2.73 | 26% | 23% | 27%
Hradec | 157 | 0.54 | 172 | 0.51 | 229 | 0.60 | 306 | 0.70 | 10% | 33% | 34%
Liberec | 340 | 1.17 | 431 | 1.28 | 507 | 1.32 | 572 | 1.32 | 27% | 18% | 13%
MU | 3,457 | 11.85 | 3,902 | 11.61 | 4,347 | 11.32 | 4,883 | 11.23 | 13% | 11% | 12%
MZLU | 613 | 2.10 | 697 | 2.07 | 816 | 2.13 | 968 | 2.23 | 14% | 17% | 19%
Olomouc | 1,642 | 5.63 | 1,926 | 5.73 | 2,227 | 5.80 | 2,643 | 6.08 | 17% | 16% | 19%
Opava | 183 | 0.63 | 180 | 0.54 | 206 | 0.54 | 223 | 0.51 | −2% | 14% | 8%
Ostrava | 363 | 1.24 | 422 | 1.26 | 501 | 1.31 | 608 | 1.40 | 16% | 19% | 21%
Pardubice | 865 | 2.97 | 983 | 2.92 | 1,136 | 2.96 | 1,252 | 2.88 | 14% | 16% | 10%
Plzeň | 663 | 2.27 | 782 | 2.33 | 962 | 2.51 | 1,152 | 2.65 | 18% | 23% | 20%
UK | 10,787 | 36.98 | 12,242 | 36.41 | 13,571 | 35.35 | 14,909 | 34.29 | 13% | 11% | 10%
UO | 7 | 0.02 | 7 | 0.02 | 7 | 0.02 | 6 | 0.01 | 0% | 0% | −14%
Ústí | 129 | 0.44 | 150 | 0.45 | 191 | 0.50 | 252 | 0.58 | 16% | 27% | 32%
VFU | 681 | 2.33 | 780 | 2.32 | 895 | 2.33 | 994 | 2.29 | 15% | 15% | 11%
VŠB-TUO | 714 | 2.45 | 891 | 2.65 | 1,149 | 2.99 | 1,498 | 3.45 | 25% | 29% | 30%
VŠE | 275 | 0.94 | 343 | 1.02 | 373 | 0.97 | 450 | 1.03 | 25% | 9% | 21%
VŠCHT | 1,805 | 6.19 | 1,945 | 5.78 | 2,063 | 5.37 | 2,220 | 5.11 | 8% | 6% | 8%
VUT | 1,476 | 5.06 | 1,855 | 5.52 | 2,316 | 6.03 | 2,649 | 6.09 | 26% | 25% | 14%
Zlín | 353 | 1.21 | 442 | 1.31 | 546 | 1.42 | 671 | 1.54 | 25% | 24% | 23%
Total | 29,172 | 100 | 33,622 | 100 | 38,386 | 100 | 43,482 | 100 | 15% | 14% | 13%

4. Conclusions and Future Work

The evaluation of scientific research output at the level of institutions has become extremely important in recent years, due to the increasing effort of national governments (and other research funding bodies) to support research, development and innovation as efficiently as possible. In this study, we concentrate on the science evaluation policy of the Czech Republic (which is little known in the science and technology literature) and present the results of the most recent official assessments (2008–2011) of the research output of twenty-one Czech public universities. The key findings are the following:
The overall research output of the universities under study more than doubled from 2008 to 2011, with virtually all universities increasing their absolute research production each year.
The production growth seems to be slowing down.
Charles University in Prague is still the leading research university in both absolute and relative terms, but its relative share in the total research production is decreasing in favor of smaller universities.
In addition, we have shown that although the current evaluation methodology places some emphasis on applied research, the rankings of universities that can be generated from these assessment reports are very strongly correlated with rankings based on publication counts from the Web of Science. Even though the total production increase between 2008 and 2011 was 140% based on the official methodology and only 49% based on Web of Science publication data, the trends in university research output remained similar. The difference in overall production growth may be caused by the official methodology also taking into account non-WoS publications and applied research results, such as patents or prototypes, as well as by the way the points for research results are normalized and distributed to individual institutions in the national assessment. In spite of this, university rankings grounded in Web of Science publication data seem to be a good approximation of the national assessment results. However, there are no official university rankings in the Czech Republic, and even the results of the annual research evaluations are only used to help allocate research funds. Therefore, the rankings presented in this article should be considered “unofficial”, even though they are based on an analysis of official and publicly available data. In our future work, we would like to focus on updates and modifications of the official science assessment methodology, on other types of research institutions, such as the institutes of the Academy of Sciences of the Czech Republic, and on a comparison of research evaluation systems and university performance in Central European countries.

Acknowledgments

This work was supported by the European Regional Development Fund (ERDF), project “NTIS—New Technologies for Information Society”, European Centre of Excellence, CZ.1.05/1.1.00/02.0090. Many thanks are due to the anonymous reviewers for their useful comments.

Conflict of Interest

The author declares no conflict of interest.

References

  1. Vaněček, J. Bibliometric analysis of the Czech research publications from 1994 to 2005. Scientometrics 2008, 77, 345–360. [Google Scholar] [CrossRef]
  2. Vaněček, J. Patenting propensity in the Czech Republic. Scientometrics 2008, 75, 381–394. [Google Scholar] [CrossRef]
  3. Vaněček, J.; Fatun, M.; Albrecht, V. Bibliometric evaluation of the FP-5 and FP-6 results in the Czech Republic. Scientometrics 2010, 83, 103–114. [Google Scholar] [CrossRef]
  4. Gorraiz, J.; Reimann, R.; Gumpenberger, C. Key factors and considerations in the assessment of international collaboration: A case study for Austria and six countries. Scientometrics 2011, 91, 417–433. [Google Scholar]
  5. Radosevic, S.; Auriol, L. Patterns of restructuring in research, development and innovation activities in Central and Eastern European countries: An analysis based on S&T indicators. Res. Pol. 1999, 28, 351–376. [Google Scholar] [CrossRef]
  6. Geuna, A.; Martin, B.R. University research evaluation and funding: An international comparison. Minerva 2003, 41, 277–304. [Google Scholar] [CrossRef]
  7. Hicks, D. Performance-based university research funding systems. Res. Pol. 2012, 41, 251–261. [Google Scholar] [CrossRef]
  8. Bucheli, V.; Díaz, A.; Calderón, J.P.; Lemoine, P.; Valdivia, J.A.; Villaveces, J.L.; Zarama, R. Growth of scientific production in Colombian universities: An intellectual capital-based approach. Scientometrics 2012, 91, 369–382. [Google Scholar] [CrossRef]
  9. Buela-Casal, G.; Paz Bermúdez, M.; Sierra, J.C.; Quevedo-Blasco, R.; Castro, A.; Guillén-Riquelme, A. Ranking 2010 in production and research productivity in Spanish public universities. Psicothema 2011, 23, 527–536. [Google Scholar]
  10. Li, F.; Yi, Y.; Guo, X.; Qi, W. Performance evaluation of research universities in mainland China, Hong Kong and Taiwan: Based on a two-dimensional approach. Scientometrics 2012, 90, 531–542. [Google Scholar] [CrossRef]
  11. Matthews, A.P. South African universities in world rankings. Scientometrics 2012, 92, 675–695. [Google Scholar] [CrossRef]
  12. Wu, H.Y.; Chen, J.K.; Chen, I.S.; Zhuo, H.H. Ranking universities based on performance evaluation by a hybrid MCDM model. Measurement 2012, 45, 856–880. [Google Scholar] [CrossRef]
  13. Abramo, G.; Cicero, T.; D'Angelo, C.A. A sensitivity analysis of research institutions’ productivity rankings to the time of citation observation. J. Informetrics 2012, 6, 298–306. [Google Scholar] [CrossRef]
  14. Abramo, G.; D'Angelo, C.A.; Costa, F.D. National research assessment exercises: A comparison of peer review and bibliometrics rankings. Scientometrics 2011, 89, 929–941. [Google Scholar] [CrossRef]
  15. Franceschet, M.; Costantini, A. The first Italian research assessment exercise: A bibliometric perspective. J. Informetrics 2011, 5, 275–291. [Google Scholar] [CrossRef]
  16. Vanclay, J.K.; Bornmann, L. Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative. Scientometrics 2012, 91, 751–771. [Google Scholar] [CrossRef]
  17. Research and Development in the Czech Republic. Available online: http://www.vyzkum.cz/ (accessed on 1 April 2013).

Cite as: Fiala, D. Science Evaluation in the Czech Republic: The Case of Universities. Societies 2013, 3, 266-279. https://doi.org/10.3390/soc3030266