The Normalization of Citation Counts Based on Classification Systems
Abstract
1. Introduction
2. The Use of Reference Sets
- (1)
- Indicators create incentives for academics to shape their publication behaviour in particular ways [5]. Academics are therefore guided by the indicators used in research evaluations. Normalization based on individual journals rewards publication in journals of little reputation: in such journals it is easier for an individual publication to exceed the reference value [6]. An indicator normalized on the basis of individual journals therefore encourages academics to publish in journals of lesser reputation.
- (2)
- In general, reference values should be used to account for (or disregard) factors in a citation analysis which may affect citations but are unrelated to research quality. The year of publication, for example, affects the citation impact of a publication even though it has no bearing on quality: we can assume that a publication from 2000 is not of higher quality per se than a publication from 2005, even if the older publication is usually cited more often than the more recent one. We also know (see above) that the research field influences citations; however, the different citation rates across research fields do not reflect quality differences between the papers in those fields. Whereas mean citation impact values for different subject categories reflect only the different citation behaviours within different research fields, the values for different individual journals reflect not only the different behaviours but also the different journal qualities. Certain journals publish (on average) higher-quality papers than others [7]; the citation impact score of a journal is therefore partly quality-driven, whereas the citation impact score of a research field is not. As a consequence, results based on reference standards derived from individual journals are not meaningful without accompanying indicators.
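The normalization described above can be sketched in a few lines: a paper's normalized score is its citation count divided by the mean citation count of a reference set, where the reference set is either the papers of the same field and year or the papers of the same journal and year. This is an illustrative sketch, not code from the paper; all numbers are hypothetical.

```python
# Minimal sketch of citation normalization (hypothetical numbers throughout).

def normalized_score(citations: int, reference_citations: list[int]) -> float:
    """Citation count of one paper divided by the mean of its reference set."""
    expected = sum(reference_citations) / len(reference_citations)
    return citations / expected

# The same paper measured against two reference sets:
field_reference = [2, 4, 6, 8, 10]  # papers from the same field and year
journal_reference = [1, 2, 3]       # papers from the same (low-impact) journal and year

print(normalized_score(6, field_reference))    # 1.0: exactly average for the field
print(normalized_score(6, journal_reference))  # 3.0: far above the journal's low baseline
```

The example makes the incentive problem concrete: the same paper looks merely average against its field but outstanding against a low-impact journal, which is why journal-based normalization rewards publishing in journals of little reputation.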
- (3)
- Indicators based on a normalization to individual journals must therefore always be accompanied by indicators which provide information on the quality of the journals in which the research under evaluation has been published. The normalized score on its own is not meaningful: if two institutions A and B have the same above-average score, it is not clear whether the score results from normalization to journals with high or with low citation impact. Institution A, normalized to a high citation impact, would have published in reputable journals and at the same time achieved more citations; it would therefore be successful in two respects. Institution B, normalized to a low citation impact, would have published in unimportant journals and exceeded only this low standard. Institution B is in fact in a worse position (in two respects), a fact which is not expressed by the normalized score [8]. Only the quality of the journals allows an assessment of whether an institution has truly achieved a high impact with its publications when its normalized citation impact is comparably low (because it has published mainly in reputable journals with a high citation impact), or whether it has truly achieved a low impact (because it has published mainly in journals of little reputation with a low citation impact).
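The two-institution scenario can be made concrete with a short sketch (hypothetical figures, not data from the paper): both institutions reach the same journal-normalized score, so the score alone cannot separate them; only the journal baseline reveals the difference in absolute impact.

```python
# Hypothetical example: identical journal-normalized scores, very different impact.

def journal_normalized(mean_citations: float, journal_baseline: float) -> float:
    """Mean citations of an institution's papers divided by the expected
    (mean) citations of the journals they were published in."""
    return mean_citations / journal_baseline

# Institution A publishes in high-impact journals, B in low-impact ones.
a_score = journal_normalized(mean_citations=12.0, journal_baseline=10.0)
b_score = journal_normalized(mean_citations=2.4, journal_baseline=2.0)

print(a_score, b_score)        # both 1.2
print(a_score == b_score)      # True: the normalized score alone cannot tell them apart
```

Reporting the baseline (10.0 vs. 2.0 expected citations) alongside the normalized score is exactly the accompanying information the text calls for.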
- (4)
- In bibliometric evaluations the mean normalized citation impact (of institutions, for example) is often shown as a function of the individual publication years. Since individual journals usually enter into the calculation of normalized impact scores with significantly smaller publication sets than entire research fields do, normalized scores based on individual journals exhibit greater variation over the publication years than normalized scores based on the publications of a research field. This variation often makes it almost impossible to recognize a true trend over the publication years for scores normalized to individual journals.
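The effect of reference-set size on year-to-year stability can be illustrated with a small simulation (an assumed setup, not an analysis from the paper): yearly mean scores computed from small per-journal sets scatter much more around the true value than those computed from large field-wide sets.

```python
# Simulation sketch: small reference sets produce noisier yearly mean scores.
import random
import statistics

random.seed(42)

def yearly_scores(papers_per_year: int, years: int = 10) -> list[float]:
    """Mean normalized score per year, with each paper's score drawn
    around a true value of 1.0 (assumed spread of 0.8)."""
    return [
        statistics.mean(random.gauss(1.0, 0.8) for _ in range(papers_per_year))
        for _ in range(years)
    ]

journal_based = yearly_scores(papers_per_year=5)   # small per-journal sets
field_based = yearly_scores(papers_per_year=200)   # large field-wide sets

print(statistics.stdev(journal_based))  # larger year-to-year variation
print(statistics.stdev(field_based))    # smaller variation; a trend stays visible
```

With only a handful of papers per journal and year, random fluctuations of this size can easily swamp a genuine trend, which is the point made in the paragraph above.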
3. Determination of Research Fields
- “Each CA section covers only one broad area of scientific inquiry
- Each abstract in CA appears in only one section
- Abstracts are assigned to a section according to the novelty of the process or substance that is being reported in the literature
- If abstract information pertains to a section(s) in addition to the one assigned, a cross-reference is established”
| Section Heading | Number of Sections |
|---|---|
| BIOCHEMISTRY (BIO/SC) | 20 |
| ORGANIC (ORG/SC) | 14 |
| 21. General Organic Chemistry | |
| 22. Physical Organic Chemistry | |
| 23. Aliphatic Compounds | |
| 24. Alicyclic Compounds | |
| 25. Benzene, Its Derivatives, and Condensed Benzenoid Compounds | |
| 26. Biomolecules and Their Synthetic Analogs | |
| 27. Heterocyclic Compounds (One Hetero Atom) | |
| 28. Heterocyclic Compounds (More Than One Hetero Atom) | |
| 29. Organometallic and Organometalloidal Compounds | |
| 30. Terpenes and Terpenoids | |
| 31. Alkaloids | |
| 32. Steroids | |
| 33. Carbohydrates | |
| 34. Amino Acids, Peptides, and Proteins | |
| MACROMOLECULAR (MAC/SC) | 12 |
| APPLIED (APP/SC) | 18 |
| PHYSICAL, INORGANIC, AND ANALYTICAL (PIA/SC) | 16 |
4. Percentiles of Citation Counts
5. Conclusions
Conflict of Interest
References
- Waltman, L.; van Eck, N.J. A systematic empirical comparison of different approaches for normalizing citation impact indicators. Available online: http://arxiv.org/abs/1301.4941 (accessed on 6 February 2013).
- Aksnes, D.W. Citation rates and perceptions of scientific contribution. J. Am. Soc. Inf. Sci. Technol. 2006, 57, 169–185. [Google Scholar] [CrossRef]
- Schubert, A.; Braun, T. Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics 1986, 9, 281–291. [Google Scholar] [CrossRef]
- Schubert, A.; Braun, T. Cross-field normalization of scientometric indicators. Scientometrics 1996, 36, 311–324. [Google Scholar] [CrossRef]
- Bornmann, L. Mimicry in science? Scientometrics 2010, 86, 173–177. [Google Scholar] [CrossRef]
- Vinkler, P. The case of scientometricians with the “absolute relative” impact indicator. J. Informetr. 2012, 6, 254–264. [Google Scholar] [CrossRef]
- Bornmann, L.; Mutz, R.; Marx, W.; Schier, H.; Daniel, H.-D. A multilevel modelling approach to investigating the predictive validity of editorial decisions: Do the editors of a high-profile journal select manuscripts that are highly cited after publication? J. R. Stat. Soc. Ser. A 2011, 174, 857–879. [Google Scholar] [CrossRef]
- Van Raan, A.F.J. Measurement of central aspects of scientific research: Performance, interdisciplinarity, structure. Measurement 2005, 3, 1–19. [Google Scholar]
- Council of Canadian Academies. Informing Research Choices: Indicators and Judgment: The Expert Panel on Science Performance and Research Funding; Council of Canadian Academies: Ottawa, Canada, 2012.
- Neuhaus, C.; Daniel, H.-D. A new reference standard for citation analysis in chemistry and related fields based on the sections of chemical abstracts. Scientometrics 2009, 78, 219–229. [Google Scholar] [CrossRef]
- Bornmann, L.; Mutz, R.; Neuhaus, C.; Daniel, H.-D. Use of citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics Sci. Environ. Polit. 2008, 8, 93–102. [Google Scholar] [CrossRef]
- Van Leeuwen, T.N.; Calero Medina, C. Redefining the field of economics: Improving field normalization for the application of bibliometric techniques in the field of economics. Res. Evaluat. 2012, 21, 61–70. [Google Scholar] [CrossRef]
- Chemical Abstracts Service. Subject Coverage and Arrangement of Abstracts by Sections in Chemical Abstracts; Chemical Abstracts Service (CAS): Columbus, OH, USA, 1997.
- Bornmann, L.; Schier, H.; Marx, W.; Daniel, H.-D. Is interactive open access publishing able to identify high-impact submissions? A study on the predictive validity of Atmospheric Chemistry and Physics by using percentile rank classes. J. Am. Soc. Inf. Sci. Technol. 2011, 62, 61–71. [Google Scholar] [CrossRef]
- Braam, R.R.; Bruil, J. Quality of indexing information: Authors views on indexing of their articles in chemical abstracts online ca-file. J. Inf. Sci. 1992, 18, 399–408. [Google Scholar] [CrossRef]
- Bornmann, L.; Daniel, H.-D. Selecting manuscripts for a high impact journal through peer review: A citation analysis of communications that were accepted by Angewandte Chemie—International Edition, or rejected but published elsewhere. J. Am. Soc. Inf. Sci. Technol. 2008, 59, 1841–1852. [Google Scholar] [CrossRef]
- Waltman, L.; Calero-Medina, C.; Kosten, J.; Noyons, E.C.M.; Tijssen, R.J.W.; van Eck, N.J.; van Leeuwen, T.N.; van Raan, A.F.J.; Visser, M.S.; Wouters, P. The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. J. Am. Soc. Inf. Sci. Technol. 2012, 63, 2419–2432. [Google Scholar] [CrossRef]
- Leydesdorff, L.; Bornmann, L.; Mutz, R.; Opthof, T. Turning the tables in citation analysis one more time: Principles for comparing sets of documents. J. Am. Soc. Inf. Sci. Technol. 2011, 62, 1370–1381. [Google Scholar] [CrossRef]
- Bornmann, L.; Leydesdorff, L.; Mutz, R. The use of percentiles and percentile rank classes in the analysis of bibliometric data: Opportunities and limits. J. Informetr. 2013, 7, 158–165. [Google Scholar] [CrossRef]
- Rousseau, R. Basic properties of both percentile rank scores and the I3 indicator. J. Am. Soc. Inf. Sci. Technol. 2012, 63, 416–420. [Google Scholar] [CrossRef]
- Schreiber, M. Uncertainties and ambiguities in percentiles and how to avoid them. J. Am. Soc. Inf. Sci. Technol. 2013, 64, 640–643. [Google Scholar] [CrossRef]
- Waltman, L.; Schreiber, M. On the calculation of percentile-based bibliometric indicators. J. Am. Soc. Inf. Sci. Technol. 2013, 64, 372–379. [Google Scholar] [CrossRef]
- Schubert, T.; Michels, C. Placing articles in the large publisher nations: Is there a “free lunch” in terms of higher impact? J. Am. Soc. Inf. Sci. Technol. 2013, 63, 596–611. [Google Scholar] [CrossRef]
- Leydesdorff, L.; Radicchi, F.; Bornmann, L.; Castellano, C.; de Nooy, W. Field-normalized impact factors: A comparison of rescaling versus fractionally counted IFs. J. Am. Soc. Inf. Sci. Technol. 2013, in press. [Google Scholar]
- Leydesdorff, L.; Bornmann, L. How fractional counting of citations affects the Impact Factor: Normalization in terms of differences in citation potentials among fields of science. J. Am. Soc. Inf. Sci. Technol. 2011, 62, 217–229. [Google Scholar] [CrossRef]
- Radicchi, F.; Fortunato, S.; Castellano, C. Universality of citation distributions: Toward an objective measure of scientific impact. Proc. Natl. Acad. Sci. USA 2008, 105, 17268–17272. [Google Scholar] [CrossRef]
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
Share and Cite
Bornmann, L.; Marx, W.; Barth, A. The Normalization of Citation Counts Based on Classification Systems. Publications 2013, 1, 78-86. https://doi.org/10.3390/publications1020078