Why an Unbiased External R&D Evaluation System is Important for the Progress of Social Sciences—the Case of a Small Social Science Community
2. The Use of External R&D Evaluations by Slovenian Policy Decision-Makers
3. The Reliability and Validity of the Three Metrics in External R&D Evaluation at the Slovenian Research Agency
| Dimension of scientific performance | Bibliometric tool for measuring it |
|---|---|
| Scientific productivity | Number of publications within the last 5 years |
| Scientific impact | Number of citations in Web of Science within the last 10 years |
| Efficiency in obtaining financial means | Third-party funding within the last 5 years |
3.1. Reliability of the Three Metrics Used in External R&D Evaluations
3.2. Validity of the Three Metrics Used in External R&D Evaluations
| Type of publication | Engineering Sciences | Social Sciences and Humanities | Natural Sciences | Life Sciences |
|---|---|---|---|---|
Conflicts of Interest
- Warren Hagstrom. The Scientific Community. New York: Basic Books, 1965. [Google Scholar]
- Norman Storer. The Social System of Science. New York: Holt, Rinehart and Winston, 1966. [Google Scholar]
- Robert Merton. The Sociology of Science. Chicago: University of Chicago Press, 1973. [Google Scholar]
- Richard Whitley. “Changing Governance of the Public Sciences.” In The Changing Governance of the Sciences. Edited by Jochen Gläser and Richard Whitley. Dordrecht, Heidelberg, London, New York: Springer, 2007, pp. 3–27. [Google Scholar]
- Ed Noyons, and Clara Calero-Medina. “Applying bibliometric mapping in a high level science policy context.” Scientometrics 79, no. 2 (2009): 261–75. [Google Scholar] [CrossRef]
- Stefan Hornbostel. Wissenschaftsindikatoren. Bewertungen in der Wissenschaft [Science Indicators: Evaluations in Science]. Opladen: Westdeutscher Verlag GmbH, 1997. [Google Scholar]
- Susan Cozzens. “What do citations count? The rhetoric-first model.” Scientometrics 15, no. 5 (1989): 437–47. [Google Scholar] [CrossRef]
- Erik Ernø-Kjølhede, and Finn Hansson. “Measuring research performance during a changing relationship between science and society.” Research Evaluation 20, no. 2 (2011): 131–43. [Google Scholar] [CrossRef]
- Giovanni Abramo, and Ciriaco Andrea D’Angelo. “Evaluating research: From informed peer review to bibliometrics.” Scientometrics 87, no. 3 (2011): 499–514. [Google Scholar] [CrossRef]
- Remi Barre. “Towards socially robust S&T indicators: Indicators as debatable devices, enabling collective learning.” Research Evaluation 19, no. 3 (2010): 227–31. [Google Scholar]
- Susan Cozzens. “Death by Peer Review? The Impact of Results-Oriented Management in U.S. Research.” In The Changing Governance of the Sciences. Edited by Jochen Gläser and Richard Whitley. Dordrecht, Heidelberg, London, New York: Springer, 2007, pp. 225–42. [Google Scholar]
- David Guston. “The Expanding Role of Peer Review Processes in the United States.” In Learning from Science and Technology Policy Evaluation. Edited by Philip Shapira and Stefan Kuhlmann. Proceedings from the 2000 US-EU Workshop on Learning from Science and Technology Policy Evaluation; Karlsruhe: Fraunhofer Institute for Systems and Innovations Research, 2001, pp. 4–31. [Google Scholar]
- Ross Cagan. “San Francisco Declaration on Research Assessment.” Disease Models & Mechanisms 6, no. 4 (2013): 869–70. Available online: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3701204/ (accessed on 24 November 2013). [Google Scholar]
- Halla Thorsteinsdóttir. “Public sector research in small countries: Does size matter?” Science and Public Policy 27, no. 6 (2000): 433–42. [Google Scholar] [CrossRef]
- Deborah Bräutigam, and Michael Woolcock. “Small States in a Global Economy. The Role of Institutions in Managing Vulnerability and Opportunity in Small Developing Countries.” World Institute for Development Economic Research, Discussion Paper No. 2001/37. Available online: http://www.wider.unu.edu/publications/working-papers/discussion-papers/2001/en_GB/dp2001-37/ (accessed on 15 November 2013).
- David Hess. “Social reporting and new governance regulation: The prospects of achieving corporate accountability through transparency.” Business Ethics Quarterly 17, no. 3 (2007): 453–76. [Google Scholar] [CrossRef]
- Thomas Schott. “World Science: Globalization of Institutions and Participation.” Science, Technology & Human Values 18, no. 3 (1993): 196–208. [Google Scholar]
- Raimo Vayrynen. “Small States in the New European Context.” In Small States and Security Challenge in New Europe. Edited by Werner Bauwens and Olav Knudsen. London: Brassey’s, 1996, pp. 1–41. [Google Scholar]
- Franci Demsar. “Ni prave demokracije brez zadostne transparentnosti” [“There is no real democracy without sufficient transparency”]. Delo. Available online: http://www.delo.si/zgodbe/sobotnapriloga/franci-demsar-ni-prave-demokracije-brez-zadostne-transparentnosti.html (accessed on 22 November 2013).
- Franci Demsar, and Tomaz Boh. “Uvajanje načel transparentnosti v delo javne uprave: primer Javne agencije za raziskovalno dejavnost Republike Slovenije” [“Introducing principles of transparency into the work of public administration: The case of the Slovenian Research Agency”]. Druzboslovne razprave 24, no. 2 (2008): 89–105. [Google Scholar]
- Dietmar Braun. Die politische Steuerung der Wissenschaft [The Political Steering of Science]. Frankfurt, New York: Campus Verlag, 1997. [Google Scholar]
- Eugene Garfield. “The history and meaning of the journal impact factor.” Journal of the American Medical Association 295, no. 1 (2006): 90–93. [Google Scholar] [CrossRef]
- Marta Seljak, and Tomaz Seljak. “COBISS: National union catalogue, online bibliography and gateway to other resources.” New Library World 101, no. 1153 (2000): 12–20. [Google Scholar]
- Franc Mali. “Policy issues of the international productivity and visibility of the social sciences in Central and Eastern European countries.” Sociology and Space 48, no. 3 (2011): 415–35. [Google Scholar]
- Michael Gibbons, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott, and Martin Trow. The New Production of Knowledge. The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications, 1994. [Google Scholar]
- Loet Leydesdorff, and Henry Etzkowitz. A Triple Helix of University-Industry-Government Relations: The Future Location of Research. New York: Science Policy Institute of State University of New York, 1998. [Google Scholar]
- Benedetto Lepori, and Carole Probst. “Using curricula vitae for mapping scientific fields: A small-scale experience for Swiss communication sciences.” Research Evaluation 18, no. 2 (2009): 125–34. [Google Scholar] [CrossRef]
- Daniel Torres-Salinas, and Henk Moed. “Library catalog. Analysis as a tool in studies of social sciences and humanities: An exploratory study of published book titles in economics.” Journal of Informetrics 3, no. 1 (2009): 9–26. [Google Scholar] [CrossRef]
- Mu-hsuan Huang, and Yu-wei Chang. “Characteristics of research output in social sciences and humanities: From a research evaluation perspective.” Journal of the American Society for Information Science and Technology 59, no. 11 (2008): 1819–28. [Google Scholar] [CrossRef]
- Anton Nederhof. “Bibliometric monitoring of research performance in the Social Sciences and the Humanities: A Review.” Scientometrics 66, no. 1 (2006): 81–100. [Google Scholar] [CrossRef]
- Penelope Earle, and Brian Vickery. “Social science literature use in the UK as indicated by citations.” Journal of Documentation 25, no. 2 (1969): 123–41. [Google Scholar] [CrossRef]
- Diana Hicks. “The Four Literatures of Social Sciences.” In Handbook of Quantitative Science and Technology Research. The Use of Publication and Patent Statistics in Studies of S&T Systems. Edited by Henk F. Moed, Wolfgang Glaenzel and Ulrich Schmoch. Dordrecht, Boston, London: Kluwer Academic Publishers, 2004, pp. 473–97. [Google Scholar]
- Ulrich Schmoch, Torben Schubert, Dorothea Jansen, Richard Heidler, and Regina von Görtz. “How to use indicators to measure scientific performance: A balanced approach?” Research Evaluation 19, no. 1 (2010): 2–18. [Google Scholar] [CrossRef]
- David Pontille, and Didier Torny. “The controversial policies of journal ratings: Evaluating social sciences and humanities.” Research Evaluation 19, no. 5 (2010): 347–60. [Google Scholar] [CrossRef]
- Kayvan Kousha, and Mike Thelwall. “Google Book Search: Citation analysis for social science and the humanities.” Journal of the American Society for Information Science and Technology 60, no. 8 (2009): 1537–49. [Google Scholar] [CrossRef]
- Jochen Gläser. “Why are the most influential books in Australian sociology not necessarily the most highly cited ones?” Journal of Sociology 40, no. 3 (2004): 261–82. [Google Scholar] [CrossRef]
- Jorge Hirsch. “An Index to Quantify an Individual’s Scientific Research Output.” Proceedings of the National Academy of Sciences of the United States of America 102, no. 46 (2005): 16569–72. [Google Scholar] [CrossRef][Green Version]
- Thed Van Leeuwen, Martijn Visser, Henk Moed, Ton Nederhof, and Anthony Van Raan. “The Holy Grail of science policy: Exploring and combining bibliometric tools in search of scientific excellence.” Scientometrics 57, no. 2 (2003): 257–80. [Google Scholar] [CrossRef]
- Steve Montague, and Rodolfo Valentim. “Evaluation of RT&D: From ‘prescriptions for justifying’ to ‘user-oriented guidance for learning’.” Research Evaluation 19, no. 4 (2010): 251–61. [Google Scholar]
- Stefan Hornbostel. “Third party funding of German universities: An indicator of research activity?” Scientometrics 50, no. 3 (2001): 523–37. [Google Scholar] [CrossRef]
- Robert Tijssen. “Measuring and Evaluating Science-Technology Connections and Interactions.” In Handbook of Quantitative Science and Technology Research. The Use of Publication and Patent Statistics in Studies of S&T Systems. Edited by Henk F. Moed, Wolfgang Glaenzel and Ulrich Schmoch. Dordrecht, Boston, London: Kluwer Academic Publishers, 2004, pp. 695–717. [Google Scholar]
- Franc Mali, Luka Kronegger, and Anuska Ferligoj. “Co-authorship trends and collaboration patterns in the Slovenian sociological community.” Corvinus Journal for Sociology and Social Policy 1, no. 2 (2010): 29–50. [Google Scholar]
- Luka Kronegger, Franc Mali, Anuska Ferligoj, and Patrick Doreian. “Collaboration structures in Slovenian scientific communities.” Scientometrics 90, no. 2 (2012): 631–47. [Google Scholar] [CrossRef]
- Franc Mali, Luka Kronegger, Patrick Doreian, and Anuska Ferligoj. “Dynamic scientific co-authorship networks.” In Models of Science Dynamics: Encounters between Complexity Theory and Information Sciences. Edited by Andrea Scharnhorst, Katy Börner and Peter van den Besselaar. Dordrecht, Heidelberg, London, New York: Springer, 2012, pp. 195–232. [Google Scholar]
- 1Whitley defined external research evaluation systems as “…organized sets of procedures for assessing the merit of research undertaken in publicly-funded organizations that are implemented on a regular basis, usually by state or state-delegated agencies” (Whitley, p. 6).
- 2In Slovenian R&D funding agencies, the evaluation and selection of R&D proposals proceeds in two phases. In the first phase, applicants only have to prepare short descriptions of the proposed projects; in the second phase, the agencies’ administration requires very detailed documentation with all relevant information. In both phases, quantitative assessments of applicants’ past achievements play a much more important role than peer review of the R&D proposal.
- 3Senior bureaucrats from the Slovenian Research Agency argue that the long duration (7 to 10 years) of research programmes is important for the stability of research groups working at universities and governmental institutes.
- 4All researchers (research teams) in Slovenia may submit a project proposal and apply for a grant; these (quasi-expert) bodies then evaluate the submitted project proposals in the framework of public tenders.
- 5During the evaluation procedures, relative weight factors are attached to each type of publication. For example, articles published in journals with an impact factor are considered as substantial contributions and for that reason are assigned a greater weight than contributions in national (Slovenian) conference proceedings, etc.
- 6The citations are collected from Web of Science for the period of the last 10 years. Self-citations are excluded. The main goal of excluding self-citations is to avoid the short-term effect of an author citing their own work in subsequent articles. In the procedure of “normalisation”, third-party citations received by authors from Slovenia are further divided by the average impact factor of ISI for the particular scientific field in which the article was published.
© 2013 by the author; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
Mali, F. Why an Unbiased External R&D Evaluation System is Important for the Progress of Social Sciences—the Case of a Small Social Science Community. Soc. Sci. 2013, 2, 284-297. https://doi.org/10.3390/socsci2040284