Introduction from the Guest Editor of Special Issue “Modern Problems of Scientometric Assessment of Publication Activity”

In any branch of intellectual activity that claims the name "science", there are two approaches to describing its phenomena and objects: qualitative analysis and quantitative analysis. In such an interdisciplinary science as modern analytical chemistry, these concepts are fundamental to the systematics of methods for establishing the composition and structure of matter. A similar state of affairs should hold in the branch of knowledge that studies the functioning of science itself, Sciencelogy (the science of science), where one of the key problems is an adequate assessment of scientific activity and of the contribution that an individual scientist or a scientific team makes to a particular branch of science. Consequently, two components must be present in the general methodology of such an assessment: a qualitative assessment, based mainly on the opinions of other people about this activity (mainly those who have the moral right to consider themselves representatives of the scientific community), and a quantitative assessment, which rests not on public opinion but on objective indicators of scientific activity that do not depend on that opinion.
The problem of assessing the quality of the scientific activity of both an individual scientist and of scientific collectives dates back almost to the birth of science itself, and at all times has been one of its most urgent, and at the same time most difficult, problems. For a long time, in any branch of science, only the qualitative component was used to evaluate the scientific activity and achievements of any scientist or researcher, and the mechanism of this evaluation was, in fact, unknown. The dominance of the qualitative assessment of scientific activity and scientific achievements lasted for centuries. However, it could not be eternal, because ignoring the quantitative component inevitably made the assessment, firstly, one-sided and, secondly (and this is the main thing), subjective, regardless of who carried it out. For the assessment to acquire an objective character, it was necessary to find quantitative indicators on the basis of which scientific activity could be evaluated, above all such an important factor in it as publication activity, since any scientist leaves a memory of themselves primarily through their published scientific works. (Thus, we would have known nothing about the outstanding Roman astronomer Claudius Ptolemy were it not for his epoch-making work "Almagest" (from the Arabic al-kitabu-l-mijisti) in 13 books, culminating in the creation of a geocentric system of the world, because neither the lifetime appearance of this scientist nor even the years of his birth and death are reliably known.)
Nevertheless, quantitative criteria for assessing scientific activity began to be developed by the scientific community only in the second half of the 20th century, when the pursuit of science had become a fairly widespread phenomenon and an objective assessment based on quantitative parameters independent of any subjective factors had become an urgent requirement of the time. Such an assessment is of particular importance when it comes to the various "marks of distinction" of an individual scientist or research team, whether it is the funding of scientific research in the form of grant support or the awarding of scientific prizes, medals, academic degrees and titles, etc. Without it, science is threatened in the near future by the prospect that those researchers who are creating it now and are able to create it in the future, but whose talent and achievements do not receive due recognition from their colleagues and the scientific community in general, will begin to leave science and stop entering it. In principle, various options for assessing any type of creative activity are possible; however, in all its spheres, the most objective assessment is based on the final result, not on the procedure for achieving it or the effort expended on it. Ideally, it seems to us, an objective assessment of the quality of scientific activity should resemble the procedure for identifying winners in sports competitions, where the best are determined by specific objective quantitative indicators achieved by the athletes (the running time over a given distance, the range of a javelin or hammer throw, the height of a pole vault, etc.). In such cases, as a rule, there are no problems (for example, the advantage of one runner over another at a 100-m distance will be objectively recorded by a stopwatch).
Let us emphasize: precisely objective, because when quantitative but subjective indicators are used to determine the winners in a sport, arbitrariness is inevitable to one degree or another, as has been the case to this day in, for example, figure skating and artistic and rhythmic gymnastics. Of course, science is not a sport and, whether we like it or not, it was based, is based and, apparently, always will be based on the authority of scientists; but this is no reason to exclude the quantitative component of the evaluation of scientific activity. However, before putting such a quantitative assessment into effect, it is first necessary to solve the fundamental question of which numerical indicator (or set of indicators) should serve as its basis, so that the assessment is really objective and not some kind of "numbers game". When choosing such an indicator, one should remember the maxims of two very different scientists: the great physicist and Nobel laureate Albert Einstein, "Everything should be made as simple as possible, but not simpler", and one of the major economists of the 20th-21st centuries, Charles Goodhart, Professor Emeritus of the London School of Economics and Political Science, "When a measure becomes a target, it ceases to be a good measure" (which is nothing more than a direct consequence of the so-called Goodhart's law: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes"). Both maxims, in our opinion, are directly related to the problem of the quantitative assessment of scientific activity.
The most objective criterion for evaluating the activity of any scientist should probably be the worth, for the development of both the corresponding branch of science and of science as a whole, of the scientific works created with that scientist's personal participation. However, it is extremely difficult (if possible at all) to determine this worth quantitatively. To a large extent (although not always), worth is related to the demand for the scientist's works, which is determined by the degree of interest in them on the part of other colleagues (and not only them). However, demand is also difficult (if not impossible) to characterize with any quantitative parameter; moreover, more than a few cases are known in which the demand for a scientist's works and their worth for the development of the corresponding branch of science ultimately turned out to lie at nearly opposite "poles" (such as, for example, the phlogiston theory of the German chemist G. Stahl, which for more than a hundred years was very much in demand by chemists for the interpretation of many experimental data, but which by the 19th century had no scientific value). The next step in the search for a common criterion for the quantitative assessment of scientific activity was the turn of creative thought to the phenomenon of citation, which in one way or another characterizes the degree to which the works of the corresponding scientist are mentioned (cited) in the media and, above all, in the scientific press. The correlation between the worth, demand and citation of scientific papers can be shown schematically as follows:

WORTH
  ↓  correlation: rather yes than no
DEMAND
  ↓  correlation: rather no than yes
CITATION
To some extent, the correlation between demand and citation is akin to the statement: "The taller a person is, the stronger he is." On average this is probably true, but comparing the strength of two randomly chosen people solely by their height will often give the wrong answer. Be that as it may, in the second half of the 20th century, citation and related parameters (known as "bibliometric indices") began to attract increased attention, and not only from specialists in the science of science and the sociology of science, but even among scientists and researchers whose activities would seem to be very distant from these branches of knowledge. In this regard, it is interesting to note that the term "bibliometric index" (often also called "scientometric index"), despite its wide distribution and use in the scientific community, has not yet received a generally accepted interpretation in the scientific literature; moreover (and this is also quite remarkable), there is essentially no discussion of its definition either. However, at least two interpretations may be offered: a "narrow" one and a "wide" one. In the first, a bibliometric index is understood as any quantitative parameter that in one way or another characterizes the publication activity of an author or a group of authors and is at the same time necessarily (directly or indirectly) related to the citation of their works; in the second, it is a quantitative parameter that characterizes the publication activity of an author or a group of authors but in whose determination citation is no longer taken into account explicitly.
In both variants, some bibliometric parameters can be integers (such as, for example, the Hirsch index or the total number of publications over a certain period of time), while others can be non-integer (in particular, the sum of the share citations over all of an author's publications, or the average number of co-authors in their publications). At present, the number of bibliometric indices proposed by various authors already runs to several dozen; at the same time, and very symptomatically, the overwhelming majority of them appeared after the publication of the work of J.E. Hirsch [1], which served as a kind of "catalyst" for the process of both creating new indices and improving existing ones (the author of this article has also made a modest contribution to this process).
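As an illustration only (not taken from any cited work), the distinction between an integer index and a non-integer one can be sketched in a few lines of code. The publication record below is entirely hypothetical, represented as (citations, number of co-authors) pairs; the "share citation" convention of dividing each paper's citations equally among its co-authors is one common assumption, not the only possible one.

```python
# Illustrative sketch: two of the bibliometric parameters mentioned above,
# computed from a hypothetical publication record.

def h_index(citations):
    """Hirsch index (integer): the largest h such that h of the papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def share_citation_sum(record):
    """Non-integer index: sum over all papers of citations divided by
    the number of co-authors (equal-share convention)."""
    return sum(cites / n_authors for cites, n_authors in record)

# Hypothetical record: (citations, co-authors) for each paper
record = [(10, 2), (8, 1), (5, 5), (4, 4), (3, 1)]
print(h_index([c for c, _ in record]))  # 4
print(share_citation_sum(record))       # 18.0
```

Note how the two indices diverge: the h-index ignores co-authorship entirely, while the share-citation sum is sensitive to it, which is precisely the kind of difference the "narrow" versus "wide" discussion above turns on.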
In modern sciencelogy, it is customary to distinguish the following two categories of fundamental citation-related indicators:
• the personal citation of the researcher, which is usually characterized by three parameters:
- the total number of the researcher's works (articles) in the databases of the various scientometric systems and institutions (Web of Science, Scopus, etc.);
- the number of references to specific articles in the relevant database and the total number of references to these articles;
- the Hirsch index (h-index), also known as the "hirsch".
These, by the way, are the key parameters in each of the two most authoritative international citation databases, Web of Science and Scopus;
• the citation of the scientific publications (mainly periodicals, and primarily scientific journals) in which the researcher's works appear, which is likewise characterized by three parameters:
- the impact factor of the publication (IF in Web of Science, CiteScore in Scopus);
- the cited half-life of the publication;
- the total number of references to articles in the publication.
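To make the first of the journal-level parameters concrete, the classical two-year impact factor can be sketched as follows. The sketch assumes the standard Journal Citation Reports definition (citations received in a given year to items published in the two preceding years, divided by the number of citable items in those years); all of the journal data below are hypothetical.

```python
# Illustrative sketch: the classical two-year journal impact factor.

def impact_factor(cites_in_year, items_published, year):
    """IF(year) = (citations received in `year` to items from the two
    preceding years) / (number of citable items in those two years)."""
    cites = cites_in_year[year - 1] + cites_in_year[year - 2]
    items = items_published[year - 1] + items_published[year - 2]
    return cites / items

# Hypothetical journal: citations received in 2023, broken down by the
# publication year of the cited articles
cites_in_2023 = {2021: 120, 2022: 90}
# citable items the journal published in each of those years
items = {2021: 60, 2022: 45}
print(impact_factor(cites_in_2023, items, 2023))  # 2.0
```

The denominator counts only "citable" items (a classification made by the database, not the journal), which is one of the reasons, noted below, why such journal-level parameters need not adequately reflect the citation rate of any individual author.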
The basic parameter on which the assessment of a researcher's personal citation rests is the total number of references to the works (publications) of the given person in scientific publications of all kinds (both periodical and non-periodical). Being the earliest of these parameters to enter "scientific circulation", it remained the only one for quite a long time (in fact, until the beginning of the 21st century). However, the personal citation of the works of a particular researcher or research team, no matter how great it may be, cannot by itself serve as evidence of the significance and worth of their scientific works, or even of their demand by any part of the scientific community. The only obvious merit of a researcher's personal citation index is the following: if their works are cited very little or not at all, then they are most likely of little interest, or even of no use, to anyone. Along with this, several other shortcomings of the personal citation index can be noted:
- it does not take into account the personal contribution of a particular author (i.e., whether the cited publication has one author or ten co-authors), so that for the many researchers who usually publish with numerous co-authors it gives an inadequate assessment of their actual scientific activity;
- when it is calculated, even those references are counted in which the author's articles are subjected to serious criticism and their results are judged erroneous or unreliable;
- some pioneering works are undeservedly forgotten, while secondary works, sometimes published much later, are cited instead;
- several important but rather difficult works often begin to be cited only many years after their publication.
The second category of citation parameters is associated with the citation of the publications (journals, books, collections of scientific papers and other publication sources) in which the works of a given author appeared. The situation here is more complicated, because contributions to this kind of citation are made not only, and not so much, by the author of an article in a particular journal as by the other authors whose articles are published in the same journal. Moreover, in many cases a much larger contribution to the citation of a publication is made by the authors of articles in other scientific journals who cite articles from the given journal. The citation parameters here can also differ, and it remains to be understood how adequately they can assess the citation rate of both the individual authors of articles and the scientific publications in which those articles appeared.
Despite the very significant advances in this specific area of sciencelogy, the problem of an adequate and objective assessment of scientific and publication activity is still rather far from being solved. It should be noted that the totality of bibliometric indicators is, of course, far from an ideal means of assessing the scientific activity of an individual, but in the general case it is still better than a purely subjective assessment, whoever carries it out. That is why the key task of this Special Issue is to familiarize its readers with the latest achievements both in the search for new, more advanced bibliometric indicators and in the improvement of existing ones.
In view of the above, this Special Issue includes mainly original articles and communications devoted to improving the quantitative assessment of the scientific and publication activity of researchers and research teams by means of various bibliometric indices (both new, original ones and those proposed earlier). Along with these, it contains articles proposing the development and improvement of indices characterizing the authority of scientific periodicals (journals), as well as articles of a critical character concerning the application of these indices in practice.