Peer-Review Record

Scholarly Journals’ Publication Frequency and Number of Articles in 2018–2019: A Study of SCI, SSCI, CSCD, and CSSCI Journals

Publications 2019, 7(3), 58; https://doi.org/10.3390/publications7030058
by Xiaotian Chen
Reviewer 1: Anonymous
Reviewer 3: Anonymous
Submission received: 12 August 2019 / Revised: 29 August 2019 / Accepted: 4 September 2019 / Published: 8 September 2019

Round 1

Reviewer 1 Report

The conclusions could be expanded and clarified: in particular, the issue of "mega journals" is only touched upon but it would deserve a wider discussion.

Also, the last sentence - "The author fees in China for non-OA journals might play a role in elevated publishing frequency and number of articles in China, as APC (article processing charges) might for some OA journals in the world" - requires further development.

Author Response

Reviewer 1: The conclusions could be expanded and clarified: in particular, the issue of "mega journals" is only touched upon but it would deserve a wider discussion.

Author’s response: Added a new study on mega journals to the Literature Review; mega journals are now discussed mainly in that section.

Reviewer 1: Also, the last sentence - "The author fees in China for non-OA journals might play a role in elevated publishing frequency and number of articles in China, as APC (article processing charges) might for some OA journals in the world" - requires further development.

Author’s response: This is addressed in Results instead of Conclusions: “There may be various reasons why CSCD/CSSCI journals publish more than their international counterparts. Feng and Yuan [6], Ji [8], and Li [18] have pointed out that author fees charged by non-OA journals in China lead to higher publication frequency and more articles for some journals.” Also added new sentences right after the above: “Other possible reasons may include: Many Chinese universities require that PhD students publish before they can graduate [19], and many Chinese hospitals require that medical doctors publish in order to get promoted [20].” The Literature Review section also includes more details on author fees for non-OA journals in China.

Reviewer 2 Report

Comments for authors:

The abstract does not correctly express what is going to be offered in the article.
The objectives, or what the study aims to achieve, are stated twice in the abstract.
The abstract does not indicate that a sample of 5% of the journal list of each Clarivate database is used, and 10% for the Chinese journal databases.

Methodology
The authors sort the list of journals in an Excel file, but what is the order? Alphabetical? By country? By region? This information is important because the authors do not use the whole sample.
It is not known why different sample percentages are used (5% and 10%).
It is not known why the English-language journals on the Chinese lists are excluded, or why only Chinese journals published in China are used.
I believe that the percentage of error found during the methodology should be reported: for example, erroneous periodicities, journals that do not publish what they announce, or journals that by January 2019 had not yet published the last issue of 2018 (in this case they were assumed to have published it because they had published the previous ones). All of this should be reported as an error percentage and extrapolated to the rest of the list (the remaining 95% or 90%).
In the case of SCI and SSCI, roughly 2% of the journals on the Clarivate lists are either marked “Irregular” or have no frequency information. These data could be compared with the errors found by the author.
This information would enrich the paper.

Some of the information offered in the methodology could be included in the Results section: for example, the finding that three samples are not regular journals, and that these three samples were each replaced by the journal listed below them on the journal lists.
This specific information is a result of the investigation. There are more such cases.
I consider the information related to author fees underdeveloped. It is true that information is offered in the Introduction, but the specific methodology followed to arrive at a correct result is not expressed.

Results
The results are interesting, but they show the total number of journals in each list, not the percentages used in the research (5% and 10%).
The Results section does not offer tables for the Chinese lists CSCD and CSSCI.

The author offers the trimmed average in the Results section, but the study does not indicate in the objectives or in the methodology that this would be done.

The bibliographic references are correct, but it seems that their numbering does not follow the journal’s publication norms.

Author Response

Reviewer 2: The abstract does not correctly express what is going to be offered in the article.
The objectives, or what the study aims to achieve, are stated twice in the abstract.
The abstract does not indicate that a sample of 5% of the journal list of each Clarivate database is used, and 10% for the Chinese journal databases.

Author’s response: Added sampling info to Abstract.

Reviewer 2: Methodology
The authors sort the list of journals in an Excel file, but what is the order? Alphabetical? By country? By region? This information is important because the authors do not use the whole sample.


Author’s response: Added: the journals are in the order of the original lists: the SCI and SSCI lists are in English alphabetical order, the CSCD list’s Chinese journals are in Pinyin alphabetical order, and the CSSCI journals are in subject order.

Reviewer 2: It is not known why different sample percentages are used (5% and 10%).

Author’s response: This is explained in the Methods section: “since these two lists are much smaller than SCI and SSCI, 10% samples were taken.”
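
For illustration only, here is a minimal sketch in Python of the kind of systematic sampling described above and in the Methods section. The toy journal list, fractions, and seed are placeholders, not the actual spreadsheets or tools used in the study.

```python
import random

def systematic_sample(journals, fraction, seed=None):
    """Systematic random sample: a random start, then every k-th entry of a sorted list."""
    k = round(1 / fraction)                    # every 20th journal for a 5% sample, every 10th for 10%
    start = random.Random(seed).randrange(k)   # random starting point within the first interval
    return journals[start::k]

# Toy list standing in for a sorted journal list (SCI/SSCI alphabetical,
# CSCD in Pinyin order, CSSCI in subject order); not the actual data.
journal_list = [f"Journal {i:04d}" for i in range(1, 9001)]
print(len(systematic_sample(journal_list, 0.05, seed=1)))   # ~5% sample, 450 titles
print(len(systematic_sample(journal_list, 0.10, seed=1)))   # ~10% sample, 900 titles
```

With a sorted list, a random start plus a fixed step yields an evenly spread 5% or 10% sample without hand-picking titles.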

Reviewer 2: It is not known why the English-language journals on the Chinese lists are excluded, or why only Chinese journals published in China are used.

Author’s response: The Methods section explains that the CSCD English-language journals are published by international publishers such as Wiley, so only CSCD journals published by Chinese publishers are studied, for comparison with international (SCI/SSCI) journals.

Reviewer 2: I believe that the percentage of error found during the methodology should be reported: for example, erroneous periodicities, journals that do not publish what they announce, or journals that by January 2019 had not yet published the last issue of 2018 (in this case they were assumed to have published it because they had published the previous ones). All of this should be reported as an error percentage and extrapolated to the rest of the list (the remaining 95% or 90%).

Author’s response: The only sample with a real “erroneous periodicity” is Zeitschrift für Psychologie (0323-8342), which is listed by SSCI as monthly but has always published four issues per year. This is pointed out in Methods. Journals that had not yet published the last issue of 2018 as of January 2019 should not be considered errors.

Reviewer 2: In the case of SCI and SSCI, roughly 2% of the journals on the Clarivate lists are either marked “Irregular” or have no frequency information. These data could be compared with the errors found by the author.
This information would enrich the paper.

Author’s response: Each of the six SCI/SSCI “irregular” samples, which are among the 2% of SCI and SSCI journals marked “Irregular” or lacking frequency information on the lists, has been introduced one by one in a bullet list in Results, with its own features described. The one real error, on Zeitschrift für Psychologie (0323-8342), has been pointed out in Methods; it is different from the irregular journals. Zeitschrift für Psychologie does have a regular frequency (quarterly); it is just different from what SSCI indicates (monthly), i.e., the SSCI information is incorrect for this sample.

Reviewer 2: Some of the information offered in the methodology could be included in the Results section: for example, the finding that three samples are not regular journals, and that these three samples were each replaced by the journal listed below them on the journal lists.
This specific information is a result of the investigation. There are more such cases.
I consider the information related to author fees underdeveloped. It is true that information is offered in the Introduction, but the specific methodology followed to arrive at a correct result is not expressed.

Author’s response: Moved (although reluctantly, because I thought it fit better in Methods) the three-samples information from Methods to Results (above Table 3). The author-fee information for non-OA journals in China comes from the existing literature. Because those Chinese journals do not post fee information on their websites, surveys and investigations (sometimes done by journalists posing as potential authors) are the best sources for this kind of information. More details on author fees for non-OA Chinese journals are in the Literature Review.

 

Reviewer 2: Results. The results are interesting, but they show the total number of journals in each list, not the percentages used in the research (5% and 10%).
The Results section does not offer tables for the Chinese lists CSCD and CSSCI.
The author offers the trimmed average in the Results section, but the study does not indicate in the objectives or in the methodology that this would be done.
The bibliographic references are correct, but it seems that their numbering does not follow the journal’s publication norms.

Author’s response: The numbering of the references (Roman numerals) is not what I submitted; it became this way through MDPI editing. I manually re-did the numbering and hopefully it won’t further mess up the format. Added the objective of the trimmed average to Methods.

CSCD and CSSCI data are in Table 3. Tables 1 and 2 contain all-journal (non-sampled) data, which are only available from the SCI and SSCI lists. Table 3 includes sample data from all four lists. The Methods section describes the sampling method.

Reviewer 3 Report

The paper compares 2018/19 publication frequencies and numbers of published articles of international and Chinese journals. For this, large journal data collections are utilized (SCI and SSCI for the international journals, and CSCD and CSSCI for the Chinese journals). The main contribution of the paper is the quite cumbersome manual data collection. However, it does not become quite clear why the data are collected manually instead of automatically and why only random subsamples are used in the analysis (see my comments below). The analysis per se is very simple (for example, the number of articles in the latest issue of each journal is multiplied by the number of issues per year to get the total number of articles per year). The main finding of the study (Chinese journals publish more articles, possibly because it enables them to collect more author fees) is definitely significant and could be emphasized more (for example, the title could reveal more). The effect of OA versus non-OA on the number of articles in a journal should be backed up by numbers. In sum, I have some major and several minor comments, listed below in the order of the sections.

 

Title:

The title should inform the reader about the comparison of international versus Chinese journal frequencies.

Introduction:

The observations about the OA journals are interesting. It would be nice if the analysis would pick up on that (for example, highlight publication frequencies and number of articles by taking into account whether the journal is a traditional or OA journal).

Literature Review:

The author points out that he did not find any prior study of SSCI journal publication frequency. However, it is not said how this search was conducted. Please indicate the databases that were searched and the used search terms/keywords.

Methods:

I believe there are better/easier (automatic) ways to obtain the number of articles journals published in a year. The number of citable documents/articles in a journal within a certain time period is the denominator used for computing a journal’s impact factor. Thus, the major journal databases should include this information. For example, this information can be directly obtained from https://www.scimagojr.com/journalrank.php (see column “Total Docs. (2018)”). As a consequence, it does not become clear why this information is collected manually (which naturally is more cumbersome and error-prone) and why random sampling is applied instead of utilizing the whole datasets. Already from the publication frequency of the samples compared to the publication frequency obtained from the whole data (for example, SCI 10.95 for the whole data and 9.98 for the sample), it becomes clear that the sampled data seem to be the ones with fewer issues and articles. Tables 1 and 2: Please also indicate the total/average if the irregular and no-info journals are not included (otherwise confusing, because they are not included in the calculation of the average issues per year).

Results:

Interesting that the median and mean number of articles per issue seem to have decreased from 2001 to 2018/19 (especially the mean decreased a lot, i.e., from 22 to 15). Does the author have an explanation for that? The average number of articles per issue for the Chinese database CSCD is a lot closer to the SCI in 2001 found by Moed (23.18 compared to 22) than to the SCI found by the author (15.34). Could it be that the sample is not representative and that, in fact, the difference between Chinese and international journals is not as large as presented here? Line 254: What is the possible mega journal mentioned here? The finding of a significantly higher trimmed average number of articles in Chinese journals is indeed suspicious. The author points out that the author fees for non-OA journals might be a reason. It would be nice to see some numbers on how many journals in the different data samples are OA and which are non-OA. This comparison would be especially interesting for the Chinese journals. Moreover, it could be discussed whether there are other possible explanations for the higher average number of articles in Chinese journals.

 

Conclusions:

Line 322: Please explicitly name the one or two journals that could be characterized as mega-journals. Line 335: Please mark explicitly that, starting from here, limitations of the study are discussed. It is indeed a severe limitation of this study that some journals with large numbers of articles were possibly not included. Future work should be mentioned. The author might consider discussing policy implications of the finding that Chinese CSSCI journals publish considerably more articles than SSCI journals.

References:

8 out of the 17 references are in the Chinese language. Since I (as probably most of the international readership of this journal) do not speak Chinese, I cannot check whether these references are appropriate. More international and recent references would certainly increase the quality of the article. Some newer references are needed. For example, there is a newer article on mega-journals, which also discusses the large number of Chinese authors who now publish in these mega-journals:

Björk B. 2018. Evolution of the scholarly mega-journal, 2006–2017. PeerJ 6:e4357 https://doi.org/10.7717/peerj.4357

The references should be adjusted according to the journal style (numbers instead of Roman numerals, font size the same as the text).

Author Response

Reviewer 3: Title:
The title should inform the reader about the comparison of international versus Chinese journal frequencies.

Author’s response: Added a subtitle to reflect that.

Reviewer 3: Introduction:
The observations about the OA journals are interesting. It would be nice if the analysis would pick up on that (for example, highlight publication frequencies and number of articles by taking into account whether the journal is a traditional or OA journal).

Author’s response: The lists offer no information on whether their journals are OA or not.

Reviewer 3: Literature Review:
The author points out that he did not find any prior study of SSCI journal publication frequency. However, it is not said how this search was conducted. Please indicate the databases that were searched and the used search terms/keywords.

Author’s response: Added “through searching Scopus and Google Scholar with these keywords in early 2019: journal frequency, journal publication frequency, journal issues per year, SSCI and publication frequency, SSCI and frequency, and SSCI issues per year, as well as through reading the references of related literature”

Reviewer 3: Methods:
I believe there are better/easier (automatic) ways to obtain the number of articles journals published in a year. The number of citable documents/articles in a journal within a certain time period is the denominator used for computing a journal’s impact factor. Thus, the major journal databases should include this information. For example, this information can be directly obtained from https://www.scimagojr.com/journalrank.php (see column “Total Docs. (2018)”). As a consequence, it does not become clear why this information is collected manually (which naturally is more cumbersome and error-prone) and why random sampling is applied instead of utilizing the whole datasets. Already from the publication frequency of the samples compared to the publication frequency obtained from the whole data (for example, SCI 10.95 for the whole data and 9.98 for the sample), it becomes clear that the sampled data seem to be the ones with fewer issues and articles. Tables 1 and 2: Please also indicate the total/average if the irregular and no-info journals are not included (otherwise confusing, because they are not included in the calculation of the average issues per year).

Author’s response: When the study was done, I was aware of another manual approach that has been used by Björk and others: searching Scopus to obtain the total number of articles. Björk’s sample sizes are a lot smaller (14 journals in 2015 and 19 in 2018).

Many thanks to Reviewer 3 for the SJR “Total Docs” info at https://www.scimagojr.com/journalrank.php. That may make it easier to count articles of SCI/SSCI journals. However, the SJR site is good for viewing (a human reading the HTML pages) but not good for collecting Total Docs data automatically. For example, when I used the SJR “Download data” link on August 29, 2019, choosing all journals of 2018 and sorting by total docs in 2018, I obtained data for 24,702 journals, with PLOS One at the top of the list with 18,871 docs in 2018. But many journals have the Total Docs field either empty or 0: 9,950 journals have the field empty and 2,742 journals have the value 0 in the Total Docs field. That could be why Björk and others also resorted to manual methods.
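
As a rough illustration of the check described above, here is a minimal Python sketch of how a downloaded SJR file could be scanned for empty or zero “Total Docs” values. The file name, the semicolon delimiter, and the exact column label are assumptions that would need to match the actual export.

```python
import csv

# Assumed file name and column label; check them against the real SJR export.
FILENAME = "scimagojr_2018.csv"
TOTAL_DOCS_COLUMN = "Total Docs. (2018)"

empty = zero = usable = 0
with open(FILENAME, newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter=";")   # the SJR export appears to be semicolon-separated
    for row in reader:
        value = (row.get(TOTAL_DOCS_COLUMN) or "").strip()
        if value == "":
            empty += 1          # field missing or blank
        elif value == "0":
            zero += 1           # field present but zero
        else:
            usable += 1         # field holds a usable article count

print(f"empty: {empty}, zero: {zero}, usable: {usable}")
```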

Yes, notes have been added to Tables 1 and 2 stating that irregular and no-info journals are not included when counting total and average values.

Reviewer 3: Results:
Interesting that the median and mean number of articles per issue seem to have decreased from 2001 to 2018/19 (especially the mean decreased a lot, i.e., from 22 to 15). Does the author have an explanation for that? The average number of articles per issue for the Chinese database CSCD is a lot closer to the SCI in 2001 found by Moed (23.18 compared to 22) than to the SCI found by the author (15.34). Could it be that the sample is not representative and that, in fact, the difference between Chinese and international journals is not as large as presented here? Line 254: What is the possible mega journal mentioned here? The finding of a significantly higher trimmed average number of articles in Chinese journals is indeed suspicious. The author points out that the author fees for non-OA journals might be a reason. It would be nice to see some numbers on how many journals in the different data samples are OA and which are non-OA. This comparison would be especially interesting for the Chinese journals. Moreover, it could be discussed whether there are other possible explanations for the higher average number of articles in Chinese journals.

Author’s response: I did not try to explain why the SCI mean changed from 2001 to 2018-19. As for the CSCD question, yes, it is possible that the samples are not representative, but the sampling method is “systematic random sampling”, as stated in Abstract and described in Methods.

Yes, the trimmed average aims to show a more realistic average by removing the single extreme value at each end, and it indeed shows that the difference is even more pronounced.
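
For clarity, a minimal sketch of this kind of trimmed average, using made-up numbers rather than the study’s data:

```python
def trimmed_average(values, trim=1):
    """Mean after dropping the `trim` lowest and `trim` highest values."""
    kept = sorted(values)[trim:len(values) - trim]
    return sum(kept) / len(kept)

# Made-up article counts, with one extreme value at the top end.
articles_per_journal = [80, 95, 100, 110, 120, 130, 2400]
print(sum(articles_per_journal) / len(articles_per_journal))  # plain mean, about 433.6
print(trimmed_average(articles_per_journal))                  # trimmed mean, 111.0
```

Dropping the single extreme value at each end keeps one outlier from dominating the average, which is the effect described above.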

The CSCD and CSSCI lists do not indicate whether their journals are OA or not. Chinese-language OA journals are new and not common. Because CSCD and CSSCI list established journals, there should be few OA journals on the lists, if any.

Added other possible reasons for the higher number of articles in Chinese journals to the end of Results: “Other possible reasons may include: Chinese universities require that PhD students publish before they can graduate [19], and Chinese hospitals require that physicians publish in order to get promoted [20].”

Reviewer 3: Conclusions:
Line 322: Please explicitly name the one or two journals that could be characterized as mega-journals.

Author’s response: Yes, info added (twice here).

Line 335: Please mark explicitly that, starting from here, limitations of the study are discussed. It is indeed a severe limitation of this study that some journals with large numbers of articles were possibly not included. Future work should be mentioned. The author might consider discussing policy implications of the finding that Chinese CSSCI journals publish considerably more articles than SSCI journals.

Author’s response: Created a new section, Limitations and Future Studies, at the very end: “The samples of this study are from SCI, SSCI, CSCD, and CSSCI, the selective indexes of more established journals. It is possible that a higher proportion of “mega journals” or journals with unusually large numbers of articles are not included in these indexes. Other limitations of this study include: larger sample sizes might reveal new information and developments of certain individual journals; an automatic tally of the total articles per year for all the journals on both the international and Chinese lists would produce more accurate data. Future studies may want to find an automatic way to collect data from the SJR site (https://www.scimagojr.com/). The author of this study found that, as of August 2019, the “Download data” link of the SJR site was incomplete for its “Total Docs” data, even though the site data for viewing look complete. The harder part for future studies would be collecting Chinese data with some automation.”

Reviewer 3: References:
8 out of the 17 references are in the Chinese language. Since I (as probably most of the international readership of this journal) do not speak Chinese, I cannot check whether these references are appropriate. More international and recent references would certainly increase the quality of the article. Some newer references are needed. For example, there is a newer article on mega-journals, which also discusses the large number of Chinese authors who now publish in these mega-journals:

Björk B. 2018. Evolution of the scholarly mega-journal, 2006–2017. PeerJ 6:e4357 https://doi.org/10.7717/peerj.4357

Author’s response: Many thanks to Reviewer 3 for providing the citation of a newer article on mega journals. It has very interesting and valuable data and has been added to the manuscript. Two additional English-language articles on other matters have also been added to the citations/references.

Reviewer 3: The references should be adjusted according to the journal style (numbers instead of Roman numerals, font size same as the text).

Author’s response: The numbering of the references (Roman numerals) is not what I submitted; it became this way through MDPI editing. I manually re-did the numbering and hopefully it won’t further mess up the format. Font sizes have been changed.

Round 2

Reviewer 3 Report

The author addressed most of the issues adequately. My recommendation is a conditional accept based on two conditions further detailed below.

1) The main problem is still that the data (number of articles per year/issue) are highly skewed and the collected sample might not be representative. That means that including or excluding certain samples might paint very different pictures (this can be observed very well in the difference between the sample and the trimmed sample, where the statistical estimates changed a lot after just one sample on each end was removed). In order to report everything in a scientifically correct way, the author should either

(i) add standard deviations for the means and median absolute deviations for the medians to the summary tables, so that the distributions are summarized (a brief sketch of these statistics follows this report), or

(ii) provide the collected data. This could be done either as appendix to the current article or using a data management tool such as figshare (https://figshare.com/). The latter would allow the author to collect additional citations for his data.

2) In the literature review, it should also be mentioned that some countries utilize the size/number of articles in a journal as a feature when ranking journals. For example, in Finland, journals of a higher rank can account for only a certain percentage of the world’s publications in a discipline; cite

Saarela, Mirka, et al. "Expert-based versus citation-based ranking of scholarly and scientific publication channels." Journal of Informetrics 10.3 (2016): 693-718.
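
A minimal Python sketch (with made-up counts, not the study’s data) of the dispersion statistics requested in point (i) above:

```python
import statistics

def median_absolute_deviation(values):
    """Median of the absolute deviations from the median."""
    med = statistics.median(values)
    return statistics.median(abs(v - med) for v in values)

# Made-up article counts, heavily skewed by one large value.
articles = [80, 95, 100, 110, 120, 130, 2400]
print(statistics.mean(articles), statistics.stdev(articles))              # mean with sample SD
print(statistics.median(articles), median_absolute_deviation(articles))   # median with MAD
```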

 
