Article

All Roads Lead to Excellence: A Comparative Scientometric Assessment of French and Dutch European Research Council Grant Winners’ Academic Performance in the Domain of Social Sciences and Humanities

by Gergely Ferenc Lendvai 1, Petra Aczél 2 and Péter Sasvári 1,3,4,*

1 Doctoral School of Public Administration Sciences, Ludovika University of Public Service, 1083 Budapest, Hungary
2 Management Campus, Széchenyi István University, 9026 Győr, Hungary
3 Faculty of Mechanical Engineering and Informatics, Institute of Informatics, University of Miskolc, 3515 Miskolc, Hungary
4 Faculty of Public Governance and International Studies, Ludovika University of Public Service, 1083 Budapest, Hungary
* Author to whom correspondence should be addressed.
Publications 2025, 13(3), 34; https://doi.org/10.3390/publications13030034
Submission received: 23 June 2025 / Revised: 18 July 2025 / Accepted: 21 July 2025 / Published: 24 July 2025

Abstract

This study investigates how differing national research governance models impact academic performance by comparing European Research Council (ERC) grant winners in the social sciences and humanities from France and the Netherlands. Situated within the broader context of centralized versus decentralized research systems, the analysis aims to understand how these structures shape publication trends, thematic diversity, and collaboration patterns. Drawing on Scopus and SciVal data covering 9996 publications by 305 ERC winners between 2019 and 2023, we employed a multi-method approach, including semantic topic modeling, compound annual growth rate analysis, and co-authorship network analysis. The results show that neuroscience, climate change, and psychology are dominant domains, with language and linguistics particularly prevalent in France and law and political science in the Netherlands. French ERC winners are more likely to be affiliated with national or sectoral institutions, whereas in the Netherlands, elite universities dominate. Collaboration emerged as a key success factor, with an average of four co-authors per publication and network analyses revealing central figures who bridge topical clusters. International collaborations were consistently linked with higher visibility, while single-authored publications showed limited impact. These findings suggest that institutional context and collaborative practices significantly shape research performance in both countries.

1. Introduction

Perceived by researchers as the “Champions League” for research (Griffiths, 2007), the European Research Council (hereinafter: ERC) was established in 2006 to fund science for Europe’s top scientists. The focus was, and still is, clear: supporting excellence and fostering groundbreaking research and innovation across various scientific disciplines throughout the continent (Heldin, 2008; Antonoyiannakis & Kafatos, 2009; Groß & Karaalp, 2014). This initiative aimed to boost European science, which has faced challenges in competing globally due to lower funding levels and suboptimal research evaluation procedures in many countries (Heldin, 2008).
The ERC’s goals are diverse and ambitious. It aims to support fundamental and cutting-edge research in all fields of science and technology, including the social sciences and humanities, by introducing a pan-European competition for grants and by boosting investment in research and development (Heldin, 2008). As Griffiths (2007) underlines, the ERC’s novelty lies in its focus on individuals rather than large teams and collaborations, and in its emphasis on fundamental and exploratory endeavors over applied research. According to the latest statistics from the ERC dashboard (at the time of drafting the present paper), 16,029 projects have been funded thus far, with a total of EUR 27,685 million in funding. As a premise, it is important to outline the funding structure, since the ERC has radically changed the European research funding framework and has had a global effect, as it allows technically anyone to apply as long as they work at an institution based in the European Union (Griffiths, 2007; Luukkonen, 2013). After the revision of the panel structure in 2019, to better emphasize the representation of the social sciences, three main domains were established for project funding: social sciences and humanities (SH), physical sciences and engineering (PE), and life sciences (LS) (Raiola, 2020). These domains were then divided into separate panels, further diversifying the disciplines to best reflect the multitude of different subject areas and categories. The ERC offers four core grant schemes: Starting Grants, Consolidator Grants, Advanced Grants, and Synergy Grants (ERC, 2024). Starting Grants are aimed at researchers who completed their PhD between 2 and 7 years ago, and they offer early-career researchers the chance to initiate and build their own research teams (Beerkens, 2018; Van Den Besselaar et al., 2018; Urbanovics et al., 2024). Consolidator Grants are open to researchers of any nationality with 7–12 years of experience since completing their PhD; applicants need a strong scientific track record and an excellent research proposal (ERC, 2024), making the scheme a favorable option for early-career researchers (Pizzolato et al., 2023). Advanced Grants are designed for more established researchers: these grants provide opportunities to initiate new collaborations and expand co-authorship networks (Urbanovics et al., 2024; ERC, 2024). Lastly, Synergy Grants are aimed at supporting collaborative research projects (Zelmanovitch & Castellanos, 2019). They support groups of two to four principal investigators working together on ambitious research projects that cannot be tackled by individual researchers alone and provide up to EUR 10 million for six years, with additional funding available for start-up costs and major equipment (ERC, 2024).
Circling back to what makes the ERC unique, we must underline that its impact is wide-ranging. Beerkens (2018) notes that the ERC system has significantly influenced the academic profession in Europe by enforcing norms about promising research careers and enhancing the autonomy of top scholars. Hoenig (2017) accentuates that it has reshaped the social organization of science, influenced national public science systems, and affected problem-choice in research. It has also enhanced the cultural legitimacy of scientific endeavors and contributed to the establishment of new research councils at national, European, and global levels. However, despite the undoubted prominence of the ERC as a funding framework, the “shift towards European competition and excellence” (Nedeva & Stampfer, 2012) has also raised concerns. For instance, research indicated a negative impact on cross-border collaboration in the early years of the program (Arrieta et al., 2017). Rodríguez-Navarro and Brito (2018) also underlined that technical research, despite the ERC grants, is still suboptimal, as results show that its development in the EU is below the global average. Moreover, both Groß and Karaalp (2014) and Hoenig (2018) highlight that the organizational structure of the ERC raises potential problems regarding its autonomy and efficiency, underlining significant changes in the social organization of science. Hoenig (2018) also questions whether focusing on the scientific excellence of individual researchers may hinder the promotion of interdisciplinary research, as it prioritizes the excellence of individual projects over collaborative efforts. Groß and Karaalp (2014) propose that addressing concerns related to the ERC’s funding allocation and organizational structure is crucial for ensuring its autonomy and efficiency, while policy implications should focus on balancing the promotion of scientific excellence with fostering interdisciplinary research within the ERC’s framework.
It should also be mentioned that the ERC faces a significant challenge in evaluating large volumes of proposals, and despite a multitude of evaluation devices and methods, the grant is still exceedingly competitive, with many high-quality research proposals not receiving any funding (Brunet & Müller, 2022). In this context, Cruz-Castro et al. (2016) argued that, due to the high number of proposals and the limited grants, the ERC does not achieve its ultimate goal; instead, it increasingly functions as an organizational measure of excellence, much like university rankings are used to assess quality, rather than an individual assessment of the best researchers. To address the high number of grant applications, Hörlesberger et al. (2013) developed a method that combines peer review with a quantitative bibliometric evaluation using scientometric indicators such as novelty, interdisciplinarity, and risk to evaluate the attributes of “frontier research” in grant proposals; however, issues persist as the number of proposals grows every year. On a more technical note, Goh et al. (2020) argued that machine learning is and will continue to be useful in tackling the aforementioned challenge, as such models are highly accurate in filtering proposal abstracts; however, the ERC has taken no official stance on this question.

2. France and The Netherlands: Setting Out the Scope and a Brief Literature Review

In the present study, two of the most successful countries in terms of research, France and the Netherlands, will be examined with regard to their funded projects and individual academic performance between 2019 and 2023. In the examined period, France ranked as the second (N = 666, NSH = 104) and the Netherlands as the third (N = 614, NSH = 201) most successful country by number of ERC grants awarded, behind Germany (Figure 1).
The choice of these two countries, however, is not solely based on the quantitative measures demonstrated in our findings. It should also be noted that a body of literature exists on individual countries’ performance. Albert et al. (2007), for instance, conducted an early study on the performance of the CSIC (Spanish Council for Scientific Research) in relation to the ERC, with a particular focus on biotechnology. Ruggieri et al. (2021) have recently analyzed the Italian National Research Council’s performance, with particular attention to open access issues and gender disparities (cf. Bautista-Puig et al., 2019). Győrffy et al. (2020) investigated Hungarian researchers and their publication performance. Cavallaro and Lepori’s (2021) contribution should also be highlighted, as it was among the first studies to investigate the implications of the institutional barriers that British and Swiss researchers face when applying for an ERC grant. Dzieżyc and Kazienko (2021) also compared national and international (ERC) funding opportunities for peripheral countries such as Poland. Nonetheless, the “top” countries have not been analyzed either individually or comparatively; therefore, our research aims to bring a novel perspective to the already dynamic discourse.
When selecting French and Dutch grantees, one may also identify two vastly different yet equally successful models: the centralized French research model, in which research is mostly carried out in governmental institutions, and the decentralized Dutch model, which, in contrast to its French counterpart, primarily pursues excellent, ERC-winning research at elite universities rather than public research institutions. Though it is well established that national research structures and systems differ (Sandström & Van Den Besselaar, 2018), in the present case we can perceive a stark contrast between research philosophies.
For context, it is critical to outline these two models from a historical standpoint. France’s research strategy has traditionally been characterized by a network of regional clusters that provide scientific expertise, technical support, and business networks, serving as hubs for innovation (Knivett, 2007). Over time, the French research and innovation system has evolved from centralized state control to a more flexible structure, increasingly relying on bottom-up initiatives and market-driven mechanisms, indicating a shift toward a multi-level governance model (Héraud & Lachmann, 2015). Pinson (2016) also underlines that the French system of research and higher education was marked by strong, recurring intervention by the central state, a feature that underscores its centralized nature even in modern reforms. The “crown jewel” of this centralization is the Centre National de la Recherche Scientifique (CNRS). Established in 1939, the CNRS was created to unify fragmented research efforts across France, providing a national framework for scientific inquiry. It operates under the Ministry of Higher Education, Research, and Innovation, emphasizing state control in directing scientific endeavors, as accentuated by earlier research (Aust & Crespy, 2009). One of the core characteristics of this centralization is the predominance of public research organizations (PROs) such as the CNRS, which manages a significant portion of national research activities. The CNRS oversees a network of laboratories, ensuring that resources, personnel, and research agendas are aligned with national priorities. In 2005, France also launched a funding system that, similarly to the ERC, funds individual research; however, its budget is significantly lower, and the number of projects funded is also marginal compared to the ERC (Corsini & Pezzoni, 2023). The centralized research system in France faces several challenges, including financial inefficiencies in technology transfer, a lack of continuity in policies, and the “over-bureaucratization” of institutional procedures (Flesia, 2006). However, centralization also offers advantages, such as the potential for streamlining procedures through the creation of a national holding, fostering multidisciplinary collaboration, and enhancing clinical research excellence through dedicated centers (Boitard et al., 2024).
The Netherlands’ decentralized model, on the other hand, opts for research being conducted at universities. The reason behind this philosophy is complex. Firstly, the Netherlands has a long-standing tradition of scientific excellence, dating back to the 17th century, with elite universities like Amsterdam, Leiden, Groningen, and Utrecht playing a critical role in the country’s research development. The Dutch Research Council (NWO), established in 1950, became central in funding and coordinating research, focusing on scientific excellence and progressive policies that support societal well-being and innovation, primarily making the NWO a funding institution (Koens et al., 2018; Simsek et al., 2024). The country’s research system is highly professionalized, emphasizing collaboration between universities and economic sectors, while aligning science and technology with societal concerns through the Responsible Research and Innovation (RRI) framework (Van Zoelen & Kanters, 2023). Secondly, Dutch universities enjoy high independence and apply a “cooperation-over-competition”-based research attitude, which is a unique approach in European higher education (Law, 2016; Kolman, 2024). Dutch universities are also highly internationalized, with central policies encouraging global partnerships, student exchanges, and collaborative research (Popescu & Helsen, 2018; Kolman, 2024). This process, along with the institutionalization of research in universities of applied sciences, strengthens the Netherlands’ innovative capacity. Dutch research has significant global influence, particularly in areas like sustainability, and is frequently cited in patents and policy documents, demonstrating its impact on innovation and policy.
In view of the ERC’s prevalence in research and the aforementioned two models, in the present research we seek to investigate how these nations, or rather, models, perform, and how they become “excellent”. We propose three research questions:
  • RQ1: What are the key characteristics of French and Dutch ERC Grant winning projects?
  • RQ2: How do the centralized and decentralized models perform in terms of individual ERC winners’ publication performance?
  • RQ3: How prevalent are scholarly networks in winning ERC Grants in the different models?
Through answering these RQs, our research highlights the distinctive approaches taken by France and the Netherlands. The analyses of publication trends, collaboration networks, and the overall impact of these two models are intended to offer valuable insights into how different national structures influence research excellence and global scientific contributions.

3. Materials and Methods

Our research is based on data from Scopus, SciVal, and the European Research Council’s database. Our research concerns a period of 5 years (between 2019 and 2023) with regard to projects (“project dataset”) and 10 years (between 2014 and 2023) concerning individual publication performance review (“publications dataset”).
We specifically focused on the domain of social sciences and humanities (SH). The SH domain was structured into 7 panels in the examined period (Table 1).
The choice to analyze the SH domain was informed by several considerations. SH fields are particularly relevant for examining how national governance models influence research behavior, as they tend to be more sensitive to institutional, cultural, and linguistic contexts. Unlike the natural sciences, which are often oriented toward globally standardized problems, SH disciplines are deeply embedded in national academic traditions and policy frameworks. This makes SH especially suitable for exploring the effects of centralized versus decentralized governance models on academic performance, collaboration practices, and thematic priorities. Moreover, SH disciplines have historically been marginalized in quantitative research assessment (cf. Hicks, 1999; Kulczycki et al., 2018) but also in the research on ERC grantees (cf. Munari et al., 2024, who specifically focused on the life sciences and physical science and engineering domains but not the SH). By focusing on this domain, our study aims to contribute to this underexplored area by offering empirically grounded insights into the performance of SH ERC grantees. Finally, concentrating on SH allows us to test the limitations and potentials of bibliometric approaches in fields that are less represented in major databases. In this way, our study also contributes to a broader methodological dialog on the evaluation of research in the humanities and social sciences—an ongoing conversation rooted in a rich body of literature in both science and technology studies and scientometrics.
For the Netherlands, we retrieved 201 ERC winning projects from the ERC database, and after filtering, 196 remained, representing 97.51% of the original data. For France, the original dataset contained 104 entries, and 98 remained after filtering, making up 94.23% of the original data. The combined dataset contained the entries retained after filtering from both countries. Entries were removed during filtering when the record of the winning project lacked one or more of the following pieces of information: researcher(s)’ name, host institution’s name, country, title, abstract, or domain. After identifying the winning projects and the associated researchers, we retrieved all publication data for the respective researchers from Scopus using SciVal, Python, and the Scopus API, taking particular care to limit the research output to the years up to winning the project. We identified 2786 publications for French researchers and 7228 for Dutch researchers in the period examined (Ntotal = 9996). All publications were in English per Scopus data. We screened the two datasets manually and found no duplicates. The data was collected on 30 September 2024 (Figure 2).
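To illustrate the retrieval step, the following minimal sketch queries the Scopus Search API for one author’s publications in a given year window. The author ID, API key, and date bounds are placeholders, and the exact response fields should be checked against the current Elsevier API documentation; this is not the authors’ original script.

```python
import requests

SCOPUS_SEARCH_URL = "https://api.elsevier.com/content/search/scopus"
API_KEY = "YOUR_API_KEY"        # placeholder
AUTHOR_ID = "57000000000"       # hypothetical Scopus author ID

params = {
    "query": f"AU-ID({AUTHOR_ID}) AND PUBYEAR > 2013 AND PUBYEAR < 2024",
    "count": 25,   # results per page
    "start": 0,    # pagination offset
}
headers = {"X-ELS-APIKey": API_KEY, "Accept": "application/json"}

records = []
while True:
    resp = requests.get(SCOPUS_SEARCH_URL, params=params, headers=headers, timeout=30)
    resp.raise_for_status()
    entries = resp.json().get("search-results", {}).get("entry", [])
    if not entries:
        break
    records.extend(entries)          # accumulate publication metadata records
    params["start"] += params["count"]

print(f"Retrieved {len(records)} publication records for author {AUTHOR_ID}")
```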
To investigate RQ1, in particular the thematic structure of ERC-funded SH research projects, we employed a semantic topic modeling approach using BERTopic. BERTopic is a transformer-based topic modeling technique that leverages pre-trained sentence embeddings to group documents by meaning, rather than by surface-level word frequency (see Egger & Yu, 2022; Khodeir & Elghannam, 2024). We conducted this analysis by extracting all project abstracts and metadata (country affiliation, panel assignment) from our database. Each abstract was then converted into a high-dimensional embedding using the MiniLM-L6-v2 model from the Sentence-Transformers framework, which captures contextual semantic similarity between documents. These embeddings were then clustered using BERTopic, which combines density-based clustering with a class-based TF-IDF (c-TF-IDF) representation to identify and label coherent semantic topics. Unlike traditional clustering methods such as KMeans, BERTopic does not require the number of clusters to be specified in advance and does not rely on internal validation metrics such as the silhouette score or the Davies–Bouldin Index. Instead, topics are discovered based on natural groupings in the semantic space and are labeled automatically using the most representative terms per group. To enhance interpretability, we manually assigned human-readable names to the resulting topics based on the top five TF-IDF keywords. This analysis was also inspired by the work of Bonaccorsi et al. (2022), who conducted similar topic modeling on the interdisciplinarity of ERC projects; our method therefore complements their earlier results.
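A minimal sketch of this pipeline is shown below. The variable names and the `all-MiniLM-L6-v2` checkpoint are assumptions (the text refers to the MiniLM-L6-v2 family), and parameters such as the minimum topic size would need tuning to reproduce the exact clusters reported in the Results.

```python
from sentence_transformers import SentenceTransformer
from bertopic import BERTopic

# abstracts: list of ERC project abstract strings loaded from the project dataset (assumed)
embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(abstracts, show_progress_bar=True)

# BERTopic clusters the embeddings (HDBSCAN by default) and labels topics via c-TF-IDF;
# the number of topics is not fixed in advance, and topic -1 marks outlier/unassigned documents.
topic_model = BERTopic(min_topic_size=10, verbose=True)
topics, _ = topic_model.fit_transform(abstracts, embeddings)

print(topic_model.get_topic_info())   # topic sizes and representative terms
print(topic_model.get_topic(0)[:5])   # top five c-TF-IDF keywords of the largest topic
```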
We also considered the “All Science Journal Classification” (ASJC) system, which is used to categorize academic journals into specific subject areas, allowing journals to be grouped by discipline for bibliometric analysis. It facilitates the comparison of publication and citation patterns across different fields by assigning journals to relevant research categories. Concerning RQ2 on publication trends and individual publishing performance, we used compound annual growth rate (CAGR) analysis to assess growth trends (Murphy, 2005) and examined trends in quantity with trendlines. To investigate RQ3, we conducted a co-authorship analysis based on the publications, a commonly used method in academic research for analyzing collaboration patterns and identifying prominent scientists and institutions (Katz & Martin, 1997; Ponomariov & Boardman, 2016). For this RQ, we also considered the Field-Weighted Citation Impact (FWCI) metric, which measures the citation performance of a publication relative to the global average in its specific field, accounting for variations in citation practices across disciplines. An FWCI of 1.0 indicates that the publication has been cited at the global average for its field, while values above or below 1.0 represent higher or lower citation impact, respectively.
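For clarity, the CAGR used for RQ2 can be expressed as a one-line helper; the counts in the usage comment are illustrative, not the exact figures behind Table 2.

```python
def cagr(first_value: float, last_value: float, n_periods: int) -> float:
    """Compound annual growth rate over n_periods year-to-year steps."""
    return (last_value / first_value) ** (1.0 / n_periods) - 1.0

# Example with hypothetical annual publication counts over a 10-year window (9 growth steps):
# cagr(232, 295, 9) -> roughly 0.027, i.e., about 2.7% annual growth
```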
For data visualization, we used matplotlib in Python and Gephi (version 0.10.1). All figures and tables were created by the authors.

4. Results

4.1. Characteristics of French and Dutch ERC Winner Projects (RQ1)

The overall distribution of projects shows that the Netherlands has a more balanced representation across SH panels, while France has a more concentrated representation in a few SH categories. SH1 is the only panel where France leads (with 23 projects), and it is also one of France’s most represented panels (the other one being SH4). The Netherlands, however, only has four projects in this category, indicating a significantly lower focus on topics related to individuals and markets. In SH2, SH3, SH5, and SH7, the Netherlands has a significant lead (SH2: 34 vs. 7, SH3: 43 vs. 7, SH5: 34 vs. 12, SH7: 17 vs. 7). SH4 is unique in the sense that both countries seem to prioritize research related to the human mind and its complexities, but the Netherlands has nearly twice as many projects as France (49 vs. 27), reflecting a stronger focus on psychological and cognitive research. Research on history, though not garnering as much attention as the other six panels, is in parity, with both countries represented by 15 projects each. To assess whether the distribution of funded ERC projects across SH panels differs significantly between France and the Netherlands, we performed a chi-square test of independence. The result was statistically significant (χ2 = 51.14, df = 6, p < 0.001), confirming that the disciplinary emphases of the two countries diverge in a systematic manner and align with the structural differences between the centralized French and decentralized Dutch research models discussed earlier.
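The chi-square test can be reproduced from the panel counts given above (SH1–SH7, France vs. the Netherlands). A minimal sketch using SciPy, with the counts transcribed from this paragraph, is shown below.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: France, Netherlands; columns: SH1..SH7 project counts as reported in the text.
observed = np.array([
    [23,  7,  7, 27, 12, 15,  7],   # France, n = 98
    [ 4, 34, 43, 49, 34, 15, 17],   # Netherlands, n = 196
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3g}")  # approx. chi2 = 51.1, df = 6, p < 0.001
```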
From a general standpoint, the Netherlands exhibits a broader and more balanced distribution of projects across SH panels, particularly in areas like governance, social diversity, and cultural production. France, on the other hand, shows stronger representation in specific areas like individuals and markets and focuses more heavily on psychological research (Figure 3).
Project-wise, the University of Amsterdam was the most successful in the examined period, with 37 grant-winning projects, the highest number among the top 10 institutions. The two governance models are clearly visible in this analysis. The Dutch institutions are exclusively universities, while the French grant winners are hosted by research institutions in nearly every instance. The sole counterexample is the Jean Jacques Laffont Foundation, which is operated by the Toulouse School of Economics; however, it can be argued that the Foundation is not within the university’s structured system (Figure 4).
Based on the semantic topic-modeling approach using BERTopic, we identified four dominant “thematic zones”, alongside a residual category of unassigned or thematically diffuse abstracts. The most prevalent topic, assigned to 133 abstracts, was labeled “Global Research and Cultural Systems”. This semantic theme encompassed work on globalization, cultural transformation, historical systems, and cross-border dynamics in human societies. The second-largest topic, “Cognitive Science and Neural Research,” contained 65 abstracts and focused on neural networks, brain function, cognition, perception, and related psychological mechanisms. This prevalence also aligned closely with SH4, one of the most prominent panels in the SH domain. The third theme, “Political Institutions and Data Analysis,” included 26 abstracts that addressed the interaction between governance, quantitative policy analysis, and data-driven social modeling. These projects frequently referenced voting behavior, regulatory structures, and political decision-making through the lens of empirical data. The fourth topic, “Labor and Economic Structures,” was the smallest in size, with only 11 abstracts. Despite its rather modest scope, this theme was conceptually coherent, with keywords including labor, inequality, job markets, and economic justice. A fifth group of 59 abstracts was classified as outliers or unassigned, as these texts lacked sufficient thematic concentration or were too linguistically diverse to be consistently assigned to one of the four dominant topics.
Thematic distributions across countries revealed distinct research orientations. The Netherlands dominated the “Global Research and Cultural Systems” theme, contributing to over 60% of the total projects in this cluster. France, by contrast, was more evenly spread across the identified themes but had a relatively higher share in “Political Institutions and Data Analysis” and “Labor and Economic Structures.” Both countries contributed almost equally to the “Cognitive Science and Neural Research” theme, which confirms its strong co-alignment with SH4 panel winners’ count. Diving into a more micro-level analysis, we also examined panel-level distribution, which, again, further confirmed the semantic integrity of the derived themes. “Global Research and Cultural Systems” was most strongly associated with SH3 (“The Social World and its Diversity”) and SH5 (“Cultures and Cultural Production”). As mentioned before, the “Cognitive Science and Neural Research” theme aligned almost exclusively with SH4 (“The Human Mind and its Complexity”), as expected, given its domain-specific content. Projects in “Political Institutions and Data Analysis” were concentrated in SH2 (“Institutions, Governance and Legal Systems”) and SH1 (“Individuals, Markets and Organizations”), while the smaller “Labor and Economic Structures” cluster was split between SH2 and SH3. The presence of cross-panel associations within certain themes points to the transdisciplinary nature of the ERC’s funding orientation, especially in the domain of data-driven institutional analysis (Figure 5).

4.2. French and Dutch Grantees’ Publication Trends (RQ2)

Through descriptive analysis, we first examined the publication trends for French and Dutch ERC winners. Generally, over the examined period, both countries showed an upward trend in the number of publications. Dutch ERC winners consistently published more than their French counterparts, with noticeable peaks in publication output around 2018. For France, the number of publications ranged from 232 in 2014 to 282 in 2018, while Dutch winners showed a broader range, from 531 in 2014 to 684 in 2018.
The French ERC winners displayed a more gradual and steady increase in publication output, with slight annual variations. In contrast, Dutch winners demonstrated sharper fluctuations, particularly between 2015 and 2016, where their publication count increased by over 90. In terms of variance, the Netherlands shows a greater spread in its annual publication numbers compared to France. The average number of publications per year is higher for the Netherlands, underscoring its dominant position in terms of research output among ERC winners during this period (Figure 6).
The compound annual growth rate (CAGR) reveals significant differences in the publication growth of French and Dutch ERC winners. France shows a modest CAGR of approximately 2.67%, indicating a steady, gradual increase in the number of publications over time. On the other hand, the Netherlands exhibits a much higher CAGR (about 5.42%), demonstrating a more rapid acceleration in research output. This suggests that Dutch ERC winners are experiencing a faster increase in productivity, which could be driven by factors like greater collaboration, funding efficiency, or institutional support. Over the observed period, Dutch winners consistently produced more publications than their French counterparts. In terms of absolute numbers, the Netherlands recorded 1784 publications in the early years compared to France’s 738, and this gap widened significantly in the later years, with the Netherlands producing 5442 publications versus France’s 2028. The substantial growth in the Netherlands points to a potentially more aggressive research strategy, where winners capitalize on their ERC grants to drive publication output. France, while increasing in productivity, shows a slower and more stable trajectory. The faster growth in the Netherlands suggests greater research scalability and responsiveness to ERC funding compared to France (Table 2).
We also examined the average co-authorship to gain a better perspective on how collaboration contributes to better research results and, eventually, to winning ERC grants. To do this, we first excluded papers from the publications database that had over 15 co-authors. This resulted in the exclusion of 159 papers in the French segment and 309 in the Dutch segment, i.e., the removal of 4.7% of publications from the dataset.
It is clearly shown that both French and Dutch winners have an extremely strong inclination toward collaboration. In the case of France, the number of co-authors shows a steady growth, from four to five over 10 years. The Dutch trend is interesting too: collaborative publications declined between 2014 and 2016, but have since shown steady growth, approaching France’s 2019 levels over the last 7–8 years. The trendlines also exhibit a strong fit, verifying the above statements on the growing trends in collaborative research (Figure 7).
In terms of publication volume, both France and the Netherlands demonstrate high output from certain top-tier journals. The French dataset shows that NeuroImage, Scientific Reports, and PLoS ONE are among the most frequently published journals, each exceeding 70 publications. These journals also have relatively high average citations, particularly NeuroImage, which averages over 100 citations. However, there is a noticeable drop-off in citation averages among some of the lower-ranked journals, such as Cortex and Neuropsychologia, which have lower citation counts but still maintain respectable publication volumes.
In the Dutch dataset, journals such as PLoS ONE and Scientific Reports similarly dominate in terms of publication count, each exceeding 100 publications. However, the citation distribution in the Netherlands is more varied. While journals like the Journal of Cleaner Production have an average over 250 citations, indicating significant impact, many of the other top journals have lower average citation counts. This suggests that while the Dutch dataset reflects a high output in terms of sheer volume, the impact (as measured by citations) is more concentrated in a few journals, unlike the relatively more uniform citation spread seen in the French dataset.
These patterns, along with the results outlined in the analysis of Figure 6, must also be contextualized by considering the known biases of Scopus in its coverage of SH fields. Both the ASJC category distribution (Table 3) and the topic cluster outputs (Figure 5) are shaped by Scopus’s indexing. As SH research in France is frequently published in French-language journals (many of which are not covered here), the results likely underrepresent the full thematic range and volume of French scholarship. Consequently, the apparent emphasis on STM-related fields within the SH domain, such as neuroscience or psychology, may be an “artifact” of Scopus’s indexing practices rather than an authentic representation of disciplinary focus (for this, see De La Laurencie & Maddi, 2019; Maddi et al., 2025) (Figure 8).
Since publication counts alone in the journals outlined in the analysis related to Figure 7 may not fully convey the impact of journal choices, we also conducted a statistical analysis of the SJR (SCImago Journal Rank) scores and percentiles of the journals preferred by both French and Dutch grantees. The SJR is frequently used in scientometric analyses, since it is a composite metric that is arguably more representative than impact factors or other impact metrics, as it weights citations by the journal’s prominence (i.e., it accounts for both the number of citations received by a journal and the importance or prestige of the citing journals) (Olmeda-Gómez & De Moya-Anegón, 2015; Jain et al., 2021; Schöpfel & Prost, 2009). This analysis covered all publications.
The mean SJR (the higher the better) for Dutch publications was 1.94, while the French average was significantly higher at 3.23. The standard deviation of SJR scores was also larger in France (4.12) than in the Netherlands (2.10), and the median SJR scores were 1.41 for the Netherlands and 1.79 for France. For SJR percentiles (the lower the better), Dutch publications had a slightly higher mean (14.61) than French ones (12.98). However, both countries had similar percentile medians (eight for the Netherlands and six for France). These results underline that French researchers tend to publish in slightly more prestigious journals; however, both produce extremely high-quality research (Figure 8). Normality tests using the Shapiro–Wilk method revealed that none of the distributions were normally distributed (all p-values = 0.00). As a result, non-parametric Mann–Whitney U tests were applied. These tests confirmed statistically significant differences in both SJR scores and SJR percentiles between the two countries (p-values = 0.00 for both).
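A minimal sketch of the normality and group-comparison tests follows; `sjr_fr` and `sjr_nl` stand for the arrays of SJR scores of French and Dutch publications and are assumed to be loaded beforehand.

```python
from scipy.stats import shapiro, mannwhitneyu

# sjr_fr, sjr_nl: SJR scores of French and Dutch ERC winners' publications (assumed arrays).
# Note: for very large samples, Shapiro-Wilk p-values should be read cautiously.
for label, scores in (("France", sjr_fr), ("Netherlands", sjr_nl)):
    w_stat, p_norm = shapiro(scores)
    print(f"{label}: Shapiro-Wilk W = {w_stat:.3f}, p = {p_norm:.3g}")

# Distributions are non-normal, so the two groups are compared with a Mann-Whitney U test.
u_stat, p_val = mannwhitneyu(sjr_fr, sjr_nl, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_val:.3g}")
```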
For the ASJC categories, we extracted the top 10 most frequent categories for both countries. It is important to underline that a single publication can have more than one ASJC category. Our research showed that most publications had one to three ASJC categories, but there were also a few publications with nine. ASJCs were analyzed according to their occurrence. A total of 230 ASJC categories were identified.
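The occurrence counting can be sketched with pandas as below; the DataFrame layout and the `asjc` column holding a list of category labels per publication are assumed names, not the actual schema used.

```python
import pandas as pd

# pubs: one row per publication; the "asjc" column holds a list of ASJC labels (assumed schema)
pubs = pd.DataFrame({
    "eid":  ["2-s2.0-0001", "2-s2.0-0002"],
    "asjc": [["Linguistics and Language", "Neuroscience (all)"],
             ["Sociology and Political Science"]],
})

# Explode so each category occurrence becomes its own row, then count occurrences per category.
asjc_counts = pubs.explode("asjc")["asjc"].value_counts()
top10 = asjc_counts.head(10)
print(top10)
```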
As can be seen in Table 3, there is a very strong presence in both countries of research related to psychology, neuroscience, and mental health. In the case of French research, the top ASJC categories are dominated by linguistics (143–140 mentions), while in the Netherlands, the social sciences, in general, dominate (N = 554). In addition to the above categories, sociology, political science, geography, and other related disciplines are also in the top four ASJC categories. The multidisciplinary category is in the top 10 ASJCs for both countries, which suggests that there is a significant spread of inter- and multidisciplinary research (Table 4).
In the analysis of the topic clusters, 918 topic clusters were identified in the publications database, from which the top 10 were selected by occurrence. Overall, it can be concluded that there is an overlap between the most popular French and Dutch topic clusters, but there are also important differences. In the case of France, a particular finding is that the thematic clusters do not fully cover the ASJC categorization in terms of their themes; for example, language and linguistics do not appear in the top clusters, while archeology does. The prevalence of functional magnetic resonance imaging was particularly high (N = 411), and this cluster was also prevalent in the Netherlands (N = 307). This topic is perhaps less familiar to those interested in the social sciences; commonly abbreviated as fMRI, it covers neuroimaging techniques used to measure and map brain activity, i.e., it is very closely related to the neurosciences, which also appear frequently in the ASJC classification. It is also worth mentioning that climate change shows an extraordinary frequency in both the French and the Dutch databases; this clearly suggests that these topics are not only popular but may also contribute to the success of ERC proposals if the applicant researches such topics. In the case of the Netherlands, it is also important to underline the prevalence of legal and political science topics not found in France, such as democracy and justice, which indicates that, as shown above, the study of regulation and democratic processes is a very prominent theme for Dutch ERC-winning researchers (Table 5).

4.3. Co-Authorship and Collaboration (RQ3)

The co-authorship network highlights the collaborative relationships among researchers, with each node representing an author and each edge representing a co-authored publication. The central figure in the French ERC winners’ network is Margulies, D.S., whose node is the largest and most connected, indicating their central role in linking various groups of researchers. The Dutch co-authorship analysis shows a more interconnected co-authorship network with numerous overlapping edges between the various authors, suggesting a more collaborative research environment where co-authorships span multiple subfields or research teams. The presence of several large clusters, each potentially representing a distinct research group or specialization, is also apparent. It can also be argued that these clusters are connected by key authors, who act as hubs, facilitating communication and collaboration between different groups. Prominent authors such as Cardoso, P., Hahn, T., Han, H., and van der Ent, R.J., among others, represent a more evenly composed network with a relatively similar distribution of larger nodes (Figure 9).
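The co-authorship network underlying Figure 9 can be built as sketched below; `papers_authors` (a list of author-name lists, one per publication, with the >15-author papers already excluded) is an assumed input, and the resulting graph can be exported for layout and visualization in Gephi.

```python
from itertools import combinations
import networkx as nx

# papers_authors: list of author-name lists, one per publication (assumed input)
G = nx.Graph()
for authors in papers_authors:
    for a, b in combinations(sorted(set(authors)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # repeated co-authorships accumulate as edge weight
        else:
            G.add_edge(a, b, weight=1)

# The most connected authors act as hubs bridging clusters (cf. the central nodes in Figure 9).
hubs = sorted(nx.degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)[:10]
print(hubs)

nx.write_gexf(G, "coauthorship_network.gexf")  # file name is illustrative; open in Gephi
```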
The centralized French model shows the unarguable dominance of CNRS with 1519 co-authored publications. CNRS’s central role in these collaborations is further supported by its solid FWCI rate of 2.24, reflecting the high impact of its research. Université PSL follows with 746 co-authored publications and a higher citation impact, with 38.2 citations per publication and an FWCI of 2.34, signaling both productivity and influence in ERC-related research. Among the institutions, the Institut national de la santé et de la recherche médicale (INSERM) and Sorbonne Université are notable for their high citation-per-publication figures, with 45.6 and 45.3, respectively.
It is worth underlining that four research institutions made up the top collaborating list, further proving that centralized, research institution-based research can be and is successful in producing and fostering ERC-winning researchers (Table 6).
The top 10 collaborating institutions with Dutch ERC winners show a strong concentration of academic partnerships within the Netherlands, with only one medical institution, Amsterdam UMC, making the list. The University of Amsterdam leads the chart with 1892 co-authored publications, highlighting its central role in Dutch ERC collaborations. Utrecht University and Vrije Universiteit Amsterdam follow, each contributing over 1200 co-authored publications, underscoring their significant involvement in ERC-related research. Wageningen University & Research stands out with the highest Field-Weighted Citation Impact (FWCI) of 4.39, suggesting that its collaborations produce highly impactful research, despite having fewer co-authored publications than the top institutions. Delft University of Technology also demonstrates high productivity, with an impressive FWCI of 2.71, reflecting the strong impact of its collaborative output. In contrast, Radboud University Nijmegen and Tilburg University have relatively lower FWCI scores of 1.95 and 2.39, respectively, despite their moderate numbers of publications. This suggests that their research may not be cited as frequently in comparison to other institutions in the top 10. The medical sector is represented by Amsterdam UMC, which ranks eighth, with 386 co-authored publications and an FWCI of 2.03, reflecting the increasing interdisciplinary nature of ERC research. Erasmus University Rotterdam, though lower in the number of co-authored publications (416), has a respectable FWCI of 2.44, indicating a balance between quantity and research impact. Table 7 demonstrates how several Dutch academic institutions play critical roles in ERC collaborations, with varying levels of research impact.
When examining collaborations based on publication information, several similarities have been identified. Firstly, international collaborations play a critical, arguably instrumental, part in the ERC winners’ research dissemination. The rather low national and institutional collaboration rates in both France and the Netherlands also confirm this statement. Single-authorship rates are extremely low. In France, only 11.7% of all publications published by ERC winners are single-authored papers, which is highly similar to the rate in the Netherlands (12.8%).
Collaborative papers also achieve substantially higher citation counts, especially in the case of international collaboration. It should also be underlined that internationally collaborative research is not only successful in terms of citations; such papers are also disseminated in top journals, as indicated by their exceedingly high FWCI values (Table 8).

5. Discussion

This study aimed to highlight significant differences in the characteristics, publication performance, and collaborative networks of ERC grant winners from two of the most successful countries in this regard, France and the Netherlands.

5.1. Diverse, Interdisciplinary, and Multifaceted: Key Characteristics of French and Dutch ERC Grant-Winning Projects (RQ1)

The analysis of ERC-funded projects in France and the Netherlands shows significant differences in thematic focus and research priorities. The Netherlands leads in almost all panels, except for the SH1 panel on markets, where France dominates. For the Dutch projects, there is clear evidence of a more even distribution between SH panels, i.e., while the Dutch model results in a uniform disciplinary distribution, the French model, which focuses on institutional work, favors specific research areas, which may put some disciplines at a disadvantage.
The BERTopic-based thematic modeling further reinforces this difference in the thematic focus of the models; while Dutch projects deal with a wide range of topics, including social diversity and cognitive science, French projects often focus on topics such as heritage and cultural dynamics. This discrepancy suggests that the Dutch model promotes a more interdisciplinary approach, facilitating exploration across different fields, while the French model promotes specific research according to a centralized model.
The implications of these findings suggest that national research strategies significantly influence the characteristics of funded projects. The decentralized Dutch model appears to cultivate an environment conducive to diverse research outputs, while the centralized French model may constrain the breadth of exploration. It is important to underline, however, that there is no “better” model. As derived from the data, each model has significant advantages for researchers wishing to disseminate their research to a wide readership.

5.2. Publication Performance Comparison of the French Centralized and the Dutch De-Centralized Model (RQ2)

The data show that Dutch ERC winners consistently have a higher publication output compared to their French counterparts, with 7228 publications in the Netherlands compared to 2786 in France. An analysis of the compound annual growth rate showed that the CAGR of Dutch researchers was approximately 5.42%, while that of French researchers was 2.67%. This indicates that Dutch ERC winners are not only publishing more, but their research output is also growing faster. These results also suggest that the decentralized model allows for faster progression and scaling-up of research activities among high-performing researchers. French researchers, on the other hand, show a more gradual and steady increase in their publication record. To summarize the analyses, the decentralized Dutch model promotes higher publication rates and faster growth, while the centralized French model provides stability but appears somewhat more conservative in engaging with contemporary research trends. This is an interesting result, as the population of the Netherlands is about 3.8 times smaller than that of France, showing that the decentralized model has been able to achieve extraordinary results for a country with a relatively small population.
The All Science Journal Classification analysis and the clustering of topics have shown that researchers in both models are very active in the perhaps “non-classical” social sciences and humanities, such as neuroscience, psychology, or even various topics closer to the life sciences, such as brain research and brain mapping. It is important to underline, however, that the traditional social sciences and humanities remain well represented. Linguistics, sociology, law, political science, and archeology all appeared in the top 10 most popular items in both subject classification analyses. Finally, it is worth mentioning that climate change featured prominently in all the analyses, which demonstrates that French and Dutch researchers are very active and determined in their research on this very important and timely topic. The ERC, moreover, explicitly supports such topics, which makes the climate change publications of these two “success stories” worth further study.

5.3. Prevalence of Networks Among ERC Winning French and Dutch Researchers (RQ3)

Our findings on the prevalence of networks indicate that Dutch ERC winners exhibit a more interconnected co-authorship network, highlighting the collaborative nature of the research environment in the Netherlands. As seen from the network analysis, this interconnectedness is evident in the greater number of overlapping co-authorships and the presence of multiple large research clusters, suggesting that Dutch researchers are more actively engaging in interdisciplinary and international collaborations. In contrast, the French research landscape shows a more concentrated co-authorship network. Although French researchers participate in collaborative efforts, their partnerships tend to be less extensive, with a smaller number of interconnected authors. This pattern may reflect the characteristics of the centralized French model, where collaboration is often organized around specific institutions or projects rather than fostering a broader, more inclusive network of scholars. The evident dominance of the CNRS and other public research organizations in French research may contribute to this phenomenon, as researchers often align their efforts within these institutional frameworks.
The co-authorship analysis reveals that a significant proportion of publications in both countries are produced collaboratively, accentuating the importance of partnerships in enhancing research output and impact. However, the data indicates that international collaborations, particularly for Dutch researchers, result in higher citation counts and a greater Field-Weighted Citation Impact, suggesting that the collaborative nature of Dutch research not only increases publication volume but also enhances the visibility and influence of their work in the global academic community. When measuring co-authorship from a quantitative standpoint, our examination showed that, in France, only 11.7% of publications are single-authored, and the Netherlands shows a similar rate (12.8%). This low rate highlights the growing trend of teamwork in research, which is crucial for addressing contemporary scientific challenges that require diverse expertise.
Thus, it can be concluded that the decentralized system in the Netherlands promotes a “culture of collaboration”, which enhances both productivity and research impact. Conversely, the centralized system in France, while effective in certain contexts, may limit the extent and diversity of collaborative opportunities available to researchers. These findings underscore the importance of fostering collaborative networks to drive research excellence and innovation, highlighting the need for strategic efforts to enhance interdisciplinary engagement in both national contexts.

5.4. Implications and Contribution

This study builds upon a growing body of literature investigating the European Research Council’s role in shaping scientific excellence across the continent (Chowdhary et al., 2023; Urbanovics et al., 2024; Nagar et al., 2024; Munari et al., 2024). While prior research has analyzed ERC funding outcomes at national levels, such as in Spain (Albert et al., 2007), Italy (Ruggieri et al., 2021), and Hungary (Győrffy et al., 2020), as well as on micro-levels (Urbanovics et al., 2024 who focused on an extremely narrow scope, namely, the SH2 panel in the SH domain), there remains a notable gap in comparative studies that focus on countries at the forefront of ERC performance. In the present research, we not only aimed to fill this gap through the comparative analysis of two countries’ grantees but also juxtaposed the centralized French and decentralized Dutch research governance models in relation to ERC grants in order to contribute to a new discourse that is inclusive of both scientometric evaluations and research system specifications. Our findings echo earlier concerns about the structural biases in European science funding (for a comprehensive analysis on institutional and career trajectory-related influences see Chowdhary et al., 2023), particularly the tension between collaboration and individual excellence. Whereas the ERC has traditionally emphasized individual scientific merit, our results reinforce that collaborative structures, especially international ones, are crucial for visibility and citation impact (Urbanovics et al., 2024). This confirms the observations made by Urbanovics et al. (2024), who showed how co-authorship networks act as engines of productivity in the SH domain, although we widened the scope to all SH panels. However, unlike previous studies that examined collaboration in aggregate, our co-authorship network analysis reveals nuanced national differences: while the Dutch system cultivates horizontally distributed, interdisciplinary ties, the French model produces more institutionally anchored hubs primarily centered around CNRS. Additionally, our use of topic modeling to examine the thematic orientations of funded projects extends the work of Bonaccorsi et al. (2022), who explored interdisciplinarity within ERC projects. While their study focused on mapping interdisciplinary structures across domains, our research focuses on the SH domain, identifying national-level thematic divergences. For instance, the Dutch emphasis on global cultural systems and political science complements the country’s historically renowned reputation for high-impact social science research (Jolles, 1962; Van Der Heijden & Sijtsma, 1996; Nijkamp, 2019), whereas France’s concentration in cognitive neuroscience and language aligns with its historic strength in psychology and linguistic theory (Poirier et al., 2012; Chamak, 2011; Joanette et al., 2008; Combettes, 2018). Furthermore, we provide empirical validation to theoretical arguments about the impact of governance structures on academic productivity. In sum, our study offers a novel empirical contribution by bridging governance models, thematic specializations, and collaboration metrics in a comparative framework. It advances ongoing debates about the ERC’s structural impact on European science and provides a template for assessing the efficacy of national strategies in global research competition.

6. Conclusions

Our study aimed to comparatively examine two highly successful models of research governance: the centralized French model and the decentralized Dutch model. Our comparative analysis on ERC grant winners from these two countries set the scope of our research on the social sciences and humanities. The paper highlights how different research models shape academic outcomes. Dutch researchers, operating in a decentralized system, tend to explore a broader range of topics and show higher publication output and growth compared to their French counterparts. The decentralized Dutch model encourages thematic diversity and fosters interdisciplinary and international collaborations, as seen in their extensive co-authorship networks and higher prominence in international partnerships. In contrast, the French system, with its more centralized approach, focuses research efforts on specific areas, particularly cultural studies. While this centralized model provides a strong foundation for in-depth research in key fields, it can limit the variety of topics covered and slow down the rate of research output. Nevertheless, French researchers benefit from the institutional support provided by entities like the CNRS, which plays a pivotal role in collaborative efforts. Both systems rely heavily on collaboration, with single-author papers being uncommon in both countries. However, the Dutch model tends to promote broader, more interdisciplinary networks, while the French model is more focused on institution-based collaborations.
Our findings offer valuable insights for policymakers and institutions looking to enhance research outcomes by striking a balance between thematic diversity, collaboration, and specialization. We suggest that future research and policy efforts should focus on how both models can be optimized to meet the evolving demands of global research, how these models adapt to emerging global challenges such as sustainability or digital innovation, and how shifts in funding strategies might further enhance research productivity and impact. Also, following up on the study conducted by Perianes-Rodríguez and Olmeda-Gómez (2021), it would be valuable to examine how open access practices differ between the successful models and whether open access publishing has substantial effects on research prevalence and prominence. Lastly, it would also be important to examine how ERC winners’ performance changes after receiving the grant. In this regard, Tóth et al.’s (2024) model, used for examining the publication performance of Marie Skłodowska-Curie Actions’ individual fellows in the social sciences and humanities, would fit this research excellently.

7. Limitations

Although our study provides new insights into the comparative academic performance of ERC grant winners in SH, we must acknowledge some limitations in order to offer a transparent and comprehensive reflection on the results. First, as with all scientometric studies that rely on data from a single repository, we must address these databases’ inherent biases. The use of Scopus and SciVal as the primary data sources introduces well-documented biases (Tennant, 2020) that affect the reliability of bibliometric assessments in all domains, including the social sciences and humanities (cf. Mongeon & Paul-Hus, 2015; Van Leeuwen, 2006; Engels et al., 2012; Kulczycki et al., 2018; see also Hicks, 1999). These tools are optimized for journal-based, English-language outputs and tend to underrepresent non-English publications, monographs, book chapters, and nationally oriented journals, formats that are especially prevalent in French SH scholarship (for more on classification and indexation issues, see Liu & Wang, 2025; Thelwall & Pinfield, 2024). Furthermore, although we applied disciplinary filters and limited the analysis to ERC-defined SH panels, the skew in data coverage remains a structural constraint that cannot be fully corrected through sampling alone. As a result, our findings may understate the full scope and diversity of French and Dutch SH research output, particularly, in the French case, work published in French-language journals not indexed by Scopus. Future research would benefit from triangulation with national databases, institutional repositories, and qualitative data to better capture the multilingual and multimodal nature of SH scholarship across Europe.

Author Contributions

Conceptualization, G.F.L., P.A., and P.S.; methodology, G.F.L. and P.S.; software, G.F.L. and P.S.; validation, G.F.L. and P.S.; formal analysis, G.F.L., P.A., and P.S.; investigation, G.F.L., P.A., and P.S.; resources, P.S.; data curation, P.S.; writing—original draft preparation, G.F.L. and P.A.; writing—review and editing, G.F.L. and P.A.; visualization, G.F.L. and P.S.; supervision, P.A. and P.S.; project administration, P.S.; funding acquisition, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

TKP2021-NKTA-51 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NKTA funding scheme.

Data Availability Statement

The data will be made available on request.

Acknowledgments

We note that parts of this research were first communicated in our earlier work on French and Dutch ERC grantees (Sasvári & Lendvai, 2025). The results of that paper have been substantially updated in the present publication, and we have added several new analyses. Since the earlier paper was published in Hungarian, publishing part of the results in English mainly serves to disseminate our research to a wider audience in the spirit of inclusivity and broader reach of our findings. A new co-author also contributed to this version, redrafting the article and proposing a more sophisticated semantic modeling approach.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Albert, A., Granadino, B., & Plaza, L. M. (2007). Scientific and technological performance evaluation of the Spanish Council for Scientific Research (CSIC) in the field of Biotechnology. Scientometrics, 70(1), 41–51. [Google Scholar] [CrossRef]
  2. Antonoyiannakis, M., & Kafatos, F. C. (2009). The European Research Council: A revolutionary excellence initiative for Europe. European Review, 17(3–4), 511–516. [Google Scholar] [CrossRef]
  3. Arrieta, O. A. D., Pammolli, F., & Petersen, A. M. (2017). Quantifying the negative impact of brain drain on the integration of European science. Science Advances, 3(4), e1602232. [Google Scholar] [CrossRef] [PubMed]
  4. Aust, J., & Crespy, C. (2009). Napoléon renversé? Revue Française de Science Politique, 59(5), 915–938. [Google Scholar] [CrossRef]
  5. Bautista-Puig, N., García-Zorita, C., & Mauleón, E. (2019). European Research Council: Excellence and leadership over time from a gender perspective. Research Evaluation, 28(4), 370–382. [Google Scholar] [CrossRef]
  6. Beerkens, M. (2018). The European Research Council and the academic profession: Insights from studying starting grant holders. European Political Science, 18(2), 267–274. [Google Scholar] [CrossRef]
  7. Boitard, C., Clément, B., Migus, A., & Netter, P. (2024). Vers une organisation intégrée du financement de la recherche dans les universités: Un « UKRI à la française ». Bulletin de l’Académie Nationale de Médecine, 208(1), 52–58. [Google Scholar] [CrossRef]
  8. Bonaccorsi, A., Melluso, N., & Massucci, F. A. (2022). Exploring the antecedents of interdisciplinarity at the European Research Council: A topic modeling approach. Scientometrics, 127(12), 6961–6991. [Google Scholar] [CrossRef]
  9. Brunet, L., & Müller, R. (2022). Making the cut: How panel reviewers use evaluation devices to select applications at the European Research Council. Research Evaluation, 31(4), 486–497. [Google Scholar] [CrossRef]
  10. Cavallaro, M., & Lepori, B. (2021). Institutional barriers to participation in EU framework programs: Contrasting the Swiss and UK cases. Scientometrics, 126(2), 1311–1328. [Google Scholar] [CrossRef]
  11. Chamak, B. (2011). Dynamique d’un mouvement scientifique et intellectuel aux contours flous: Les sciences cognitives (États-Unis, France). Revue d’Histoire des Sciences Humaines, 25(2), 13. [Google Scholar] [CrossRef]
  12. Chowdhary, S., Defenu, N., Musciotto, F., & Battiston, F. (2023). Dependency of ERC-funded research on US collaborations. Nature Physics, 19(12), 1746–1749. [Google Scholar] [CrossRef]
  13. Combettes, B. (2018). L’opposition langue parlée/langue écrite dans la linguistique historique de tradition française (1860–1930). Langages, 208(4), 69–82. [Google Scholar] [CrossRef]
  14. Corsini, A., & Pezzoni, M. (2023). Does grant funding foster research impact? Evidence from France. Journal of Informetrics, 17(4), 101448. [Google Scholar] [CrossRef]
  15. Cruz-Castro, L., Benitez-Amado, A., & Sanz-Menéndez, L. (2016). The proof of the pudding: University responses to the European Research Council. Research Evaluation, 25(4), 358–370. [Google Scholar] [CrossRef]
  16. De La Laurencie, A., & Maddi, A. (2019). The dynamics of French publications in social sciences and humanities: A European comparison. In 17th International Conference on Scientometrics & Informetrics (ISSI 2019) with a Special STI Indicators Conference Track (pp. 1–13). Available online: https://hal.science/hal-04935911 (accessed on 18 July 2025).
  17. Dzieżyc, M., & Kazienko, P. (2021). Effectiveness of research grants funded by European Research Council and Polish National Science Centre. Journal of Informetrics, 16(1), 101243. [Google Scholar] [CrossRef]
  18. Egger, R., & Yu, J. (2022). A topic modeling comparison between LDA, NMF, Top2VEC, and BERTopic to demystify Twitter posts. Frontiers in Sociology, 7, 1–16. [Google Scholar] [CrossRef] [PubMed]
  19. Engels, T. C. E., Ossenblok, T. L. B., & Spruyt, E. H. J. (2012). Changing publication patterns in the Social Sciences and Humanities, 2000–2009. Scientometrics, 93(2), 373–390. [Google Scholar] [CrossRef]
  20. ERC. (2024). ERC. Available online: https://erc.europa.eu/about-erc/erc-glance (accessed on 18 July 2025).
  21. Flesia, E. (2006). Valorisation de la recherche, innovation et création d’entreprises. Géographie Économie Société, 8(1), 149–158. [Google Scholar] [CrossRef]
  22. Goh, Y. C., Cai, X. Q., Theseira, W., Ko, G., & Khor, K. A. (2020). Evaluating human versus machine learning performance in classifying research abstracts. Scientometrics, 125(2), 1197–1212. [Google Scholar] [CrossRef] [PubMed]
  23. Griffiths, M. (2007). How to become a European champion. Physics World, 20(5), 45–46. [Google Scholar] [CrossRef]
  24. Groß, T., & Karaalp, R. N. (2014). The European Research Council: A legal evaluation of research funding structures. In Higher education dynamics (pp. 179–187). Springer International Publishing. [Google Scholar]
  25. Győrffy, B., Csuka, G., Herman, P., & Török, Á. (2020). Is there a golden age in publication activity?-an analysis of age-related scholarly performance across all scientific disciplines. Scientometrics, 124(2), 1081–1097. [Google Scholar] [CrossRef]
  26. Heldin, C. H. (2008). The European Research Council—A new opportunity for European science. Nature Reviews Molecular Cell Biology, 9(5), 417–420. [Google Scholar] [CrossRef] [PubMed]
  27. Héraud, J., & Lachmann, J. (2015). L’évolution du système de recherche et d’innovation: Ce que révèle la problématique du financement dans le cas français. Innovations, 46(1), 9–32. [Google Scholar] [CrossRef]
  28. Hicks, D. (1999). The difficulty of achieving full coverage of international social science literature and the bibliometric consequences. Scientometrics, 44(2), 193–215. [Google Scholar] [CrossRef]
  29. Hoenig, B. (2017). Europe’s new scientific elite: Social mechanisms of science in the European Research Area. Routledge. [Google Scholar]
  30. Hoenig, B. (2018). Structures, mechanisms and consequences of Europeanization in research: How European funding affects universities. Innovation the European Journal of Social Science Research, 31(4), 504–522. [Google Scholar] [CrossRef]
  31. Hörlesberger, M., Roche, I., Besagni, D., Scherngell, T., François, C., Cuxac, P., Schiebel, E., Zitt, M., & Holste, D. (2013). A concept for inferring ‘frontier research’ in grant proposals. Scientometrics, 97(2), 129–148. [Google Scholar] [CrossRef]
  32. Jain, A., Khor, K. S., Beard, D., Smith, T. O., & Hing, C. B. (2021). Do journals raise their impact factor or SCImago ranking by self-citing in editorials? A bibliometric analysis of trauma and orthopaedic journals. ANZ Journal of Surgery, 91(5), 975–979. [Google Scholar] [CrossRef] [PubMed]
  33. Joanette, Y., Ansaldo, A., Carbonnel, S., Ska, B., Kahlaoui, K., & Nespoulous, J. (2008). Communication, langage et cerveau: Du passé antérieur au futur proche. Revue Neurologique, 164, S83–S90. [Google Scholar] [CrossRef] [PubMed]
  34. Jolles, H. (1962). Social research and social sciences in the Netherlands. Some introductory remarks on research organization. Social Science Information, 1(4), 50–73. [Google Scholar] [CrossRef]
  35. Katz, J., & Martin, B. R. (1997). What is research collaboration? Research Policy, 26(1), 1–18. [Google Scholar] [CrossRef]
  36. Khodeir, N., & Elghannam, F. (2024). Efficient topic identification for urgent MOOC Forum posts using BERTopic and traditional topic modeling techniques. Education and Information Technologies. [Google Scholar] [CrossRef]
  37. Knivett, V. (2007, May 18). Investing in the future—Research Europe. New Electronics. [Google Scholar]
  38. Koens, L., Harkema, B., & Faasse, P. (2018). Making knowledge work: The function of public knowledge organizations in the Netherlands. In Springer eBooks (pp. 71–87). Springer. [Google Scholar]
  39. Kolman, M. (2024, March 14). The impact of internationalization on research performance. Available online: https://www.elsevier.com/connect/the-impact-of-internationalization-on-research-performance (accessed on 18 July 2025).
  40. Kulczycki, E., Engels, T. C. E., Pölönen, J., Bruun, K., Dušková, M., Guns, R., Nowotniak, R., Petr, M., Sivertsen, G., Starčič, A. I., & Zuccala, A. (2018). Publication patterns in the social sciences and humanities: Evidence from eight European countries. Scientometrics, 116(1), 463–486. [Google Scholar] [CrossRef]
  41. Law, D. (2016). Going Dutch: Higher education in the Netherlands. Perspectives Policy and Practice in Higher Education, 20(2–3), 99–109. [Google Scholar] [CrossRef]
  42. Liu, W., & Wang, H. (2025). Red alert: Millions of “homeless” publications in Scopus should be resettled. Journal of the Association for Information Science and Technology, 1–9. [Google Scholar] [CrossRef]
  43. Luukkonen, T. (2013). The European Research Council and the European research funding landscape. Science and Public Policy, 41(1), 29–43. [Google Scholar] [CrossRef]
  44. Maddi, A., Maisonobe, M., & Boukacem-Zeghmouri, C. (2025). Geographical and disciplinary coverage of open access journals: OpenAlex, Scopus, and WoS. PLoS ONE, 20(4), e0320347. [Google Scholar] [CrossRef] [PubMed]
  45. Mongeon, P., & Paul-Hus, A. (2015). The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics, 106(1), 213–228. [Google Scholar] [CrossRef]
  46. Munari, F., Leonardelli, E., Menini, S., Righi, H. M., Sobrero, M., Tonelli, S., & Toschi, L. (2024). Public research funding and science-based innovation: An analysis of ERC research grants, publications and patents. Research Evaluation, 2024, rvae012. [Google Scholar] [CrossRef]
  47. Murphy, C. (2005). Business statistics without tears. Business Information Review, 22(3), 189–198. [Google Scholar] [CrossRef]
  48. Nagar, J. P., Breschi, S., & Fosfuri, A. (2024). ERC science and invention: Does ERC break free from the EU Paradox? Research Policy, 53(8), 105038. [Google Scholar] [CrossRef]
  49. Nedeva, M., & Stampfer, M. (2012). From “Science in Europe” to “European Science”. Science, 336(6084), 982–983. [Google Scholar] [CrossRef] [PubMed]
  50. Nijkamp, P. (2019). Regional science in the Netherlands: A contextual interpretation of the “power of smallness”. Papers of the Regional Science Association, 99(2), 275–293. [Google Scholar] [CrossRef]
  51. Olmeda-Gómez, C., & De Moya-Anegón, F. (2015). Publishing trends in library and information sciences across European countries and institutions. The Journal of Academic Librarianship, 42(1), 27–37. [Google Scholar] [CrossRef]
  52. Perianes-Rodríguez, A., & Olmeda-Gómez, C. (2021). Effect of policies promoting open access in the scientific ecosystem: Case study of ERC grantee publication practice. Scientometrics, 126(8), 6825–6836. [Google Scholar] [CrossRef]
  53. Pinson, G. (2016). The knowledge business and the neo-managerialisation of research and academia in France. In Routledge eBooks (pp. 197–218). Routledge. [Google Scholar]
  54. Pizzolato, D., Elizondo, A. R., Bonn, N. A., Taraj, B., Roje, R., & Konach, T. (2023). Bridging the gap–How to walk the talk on supporting early career researchers. Open Research Europe, 3, 75. [Google Scholar] [CrossRef] [PubMed]
  55. Poirier, J., Clarac, F., Barbara, J., & Broussolle, E. (2012). Figures and institutions of the neurological sciences in Paris from 1800 to 1950. Part IV: Psychiatry and psychology. Revue Neurologique, 168(5), 389–402. [Google Scholar] [CrossRef] [PubMed]
  56. Ponomariov, B., & Boardman, C. (2016). What is co-authorship? Scientometrics, 109(3), 1939–1963. [Google Scholar] [CrossRef]
  57. Popescu, F., & Helsen, E. (2018). Comprehensive internationalization at HAN university of applied sciences. curriculum, co-curriculum, and learning outcomes. In Advances in intelligent systems and computing (pp. 36–45). Springer International Publishing. [Google Scholar]
  58. Raiola, G. (2020). The movement and sport science in Italy towards the European Research Council. Physical Culture and Sport Studies and Research, 86(1), 37–48. [Google Scholar] [CrossRef]
  59. Rodríguez-Navarro, A., & Brito, R. (2018). Technological research in the EU is less efficient than the world average. EU research policy risks Europeans’ future. Journal of Informetrics, 12(3), 718–731. [Google Scholar] [CrossRef]
  60. Ruggieri, R., Pecoraro, F., & Luzi, D. (2021). An intersectional approach to analyse gender productivity and open access: A bibliometric analysis of the Italian National Research Council. Scientometrics, 126(2), 1647–1673. [Google Scholar] [CrossRef]
  61. Sandström, U., & Van Den Besselaar, P. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12(1), 365–384. [Google Scholar] [CrossRef]
  62. Sasvári, P., & Lendvai, G. F. (2025). Tudománymetriai összehasonlítás az Európai Kutatási Tanács pályázatainak franciaországi és hollandiai nyerteseiről. Területi Statisztika, 65(3), 317–344. [Google Scholar] [CrossRef]
  63. Schöpfel, J., & Prost, H. (2009). Le JCR facteur d’impact (IF) et le SCImago Journal Rank Indicator (SJR) des revues françaises: Une étude comparative. Psychologie Française, 54(4), 287–305. [Google Scholar] [CrossRef]
  64. Simsek, M., De Vaan, M., & Van De Rijt, A. (2024). Do grant proposal texts matter for funding decisions? A field experiment. Scientometrics, 129(5), 2521–2532. [Google Scholar] [CrossRef]
  65. Tennant, J. P. (2020). Web of Science and Scopus are not global databases of knowledge. European Science Editing, 46, e51987. [Google Scholar] [CrossRef]
  66. Thelwall, M., & Pinfield, S. (2024). The accuracy of field classifications for journals in Scopus. Scientometrics, 129(2), 1097–1117. [Google Scholar] [CrossRef]
  67. Tóth, T., Demeter, M., Csuhai, S., & Major, Z. B. (2024). When career-boosting is on the line: Equity and inequality in grant evaluation, productivity, and the educational backgrounds of Marie Skłodowska-Curie Actions individual fellows in social sciences and humanities. Journal of Informetrics, 18(2), 101516. [Google Scholar] [CrossRef]
  68. Urbanovics, A., Márkusz, I., Palla, G., Pollner, P., & Sasvári, P. (2024). Path of excellence: A co-authorship network analysis of European Research Council grant winners in social sciences. Heliyon, 10(12), e32403. [Google Scholar] [CrossRef] [PubMed]
  69. Van Den Besselaar, P., Sandström, U., & Schiffbaenker, H. (2018). Studying grant decision-making: A linguistic analysis of review reports. Scientometrics, 117(1), 313–329. [Google Scholar] [CrossRef] [PubMed]
  70. Van Der Heijden, P. G. M., & Sijtsma, K. (1996). Fifty years of measurement and scaling in the Dutch social sciences. Statistica Neerlandica, 50(1), 111–135. [Google Scholar] [CrossRef]
  71. Van Leeuwen, T. (2006). The application of bibliometric analyses in the evaluation of social science research. Who benefits from it, and why it is still feasible. Scientometrics, 66(1), 133–154. [Google Scholar] [CrossRef]
  72. Van Zoelen, A. G., & Kanters, E. (2023). The profession of research management and administration in the Netherlands. In Emerald publishing limited ebooks (pp. 715–721). Emerald Publishing Limited. [Google Scholar]
  73. Zelmanovitch, N. R., & Castellanos, M. (2019, July 16–20). When astrochemistry is a journey. Highlights on Spanish Astrophysics X, Proceedings of the XIII Scientific Meeting of the Spanish Astronomical Society (pp. 654–659), Salamanca, Spain. [Google Scholar]
Figure 1. Distribution of ERC grant winners by country between 2019 and 2023, with particular attention to France and the Netherlands. Source: Own edit based on ERC data.
Figure 2. Identification, screening, and selection of projects and publications. Source: Own edit.
Figure 3. Winning project distribution in terms of SH type and countries. Source: Own edit based on ERC data.
Figure 4. Best-performing French and Dutch universities by number of ERC grants awarded. Source: Own edit based on ERC data.
Figure 5. Thematic groups of abstracts based on BERTopic analysis. Source: Own edit based on ERC data.
Figure 6. Publication trends of French and Dutch ERC winners. Source: Own edit based on Scopus and SciVal data.
Figure 7. Co-authorship trends of French and Dutch ERC winners. Dashed lines represent trendlines. Source: Own edit based on Scopus and SciVal data.
Figure 8. Distribution of the top journals accompanied by the average citation count of the respective journals. Source: Own edit based on Scopus data.
Figure 9. Co-authorship analysis networks of French (top) and Dutch (bottom) ERC winners. Source: Own edit based on Scopus data with Gephi.
Table 1. ERC panels by code and name.
ERC Panel Code | Panel Name
SH1 | Individuals, Markets and Organisations
SH2 | Institutions, Governance and Legal Systems
SH3 | The Social World and Its Interactions
SH4 | The Human Mind and Its Complexity
SH5 | Cultures and Cultural Production
SH6 | The Study of the Human Past
SH7 | Human Mobility, Environment, and Space
As SH8 (“Studies of Cultures and Art”) was introduced after the examined period, we excluded it and focused on SH1–SH7.
Table 2. Compound annual growth rate of publications of French and Dutch ERC winners.
Country | CAGR | Early Years Publication Count | Late Years Publication Count
France | 0.0267 | 738 | 2028
Netherlands | 0.0542 | 1784 | 5442
Source: Own edit based on Scopus and SciVal data.
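For reference, the compound annual growth rate in Table 2 follows the conventional definition shown below. The notation is generic: the precise year windows behind the early and late publication counts, and the level of aggregation used in the study, are not restated here, so the formula is a general reminder rather than a restatement of the exact computation.

```latex
% Conventional CAGR definition (generic notation, assumed for illustration):
% P_early and P_late are publication counts in the early and late windows,
% and n is the number of years separating the two windows.
\mathrm{CAGR} = \left( \frac{P_{\mathrm{late}}}{P_{\mathrm{early}}} \right)^{1/n} - 1
```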
Table 3. SJR distribution (score and percentile) of publications. The values are rounded to two decimals for better interpretability.
Statistic | SJR Score (The Netherlands) | SJR Percentile (The Netherlands) | SJR Score (France) | SJR Percentile (France)
mean | 1.94 | 14.61 | 3.23 | 12.98
std | 2.1 | 16.8 | 4.12 | 17.87
min | 0.1 | 1 | 0.1 | 1
25% | 0.83 | 3 | 0.97 | 2
50% | 1.41 | 8 | 1.79 | 6
75% | 2.27 | 20 | 3.69 | 15
max | 27.11 | 99 | 36.22 | 100
Table 4. Distribution of ERC grant winners’ publications by ASJC.
Rank | ASJC Field Name (France) | N | ASJC Field Name (The Netherlands) | N
1 | Cognitive Neuroscience | 307 | General Social Sciences | 554
2 | Multidisciplinary | 282 | Cognitive Neuroscience | 501
3 | General Neuroscience | 249 | Sociology and Political Science | 498
4 | Economics and Econometrics | 242 | Geography, Planning and Development | 447
5 | Experimental and Cognitive Psychology | 204 | Experimental and Cognitive Psychology | 425
6 | General Biochemistry, Genetics and Molecular Biology | 189 | Psychiatry and Mental Health | 403
7 | Ecology, Evolution, Behavior and Systematics | 146 | Developmental and Educational Psychology | 379
8 | Language and Linguistics | 143 | General Psychology | 362
9 | Linguistics and Language | 140 | Multidisciplinary | 358
10 | Neurology | 129 | General Neuroscience | 330
Source: Own edit based on Scopus and ERC data.
Table 5. Distribution of ERC grant winners’ publications by topic clusters in Scopus.
Rank | Top Topic Clusters (France) | N | Top Topic Clusters (The Netherlands) | N
1 | Functional Magnetic Resonance Imaging | 411 | Climate Change | 671
2 | Electroencephalography | 279 | Social Media | 551
3 | Brain Mapping | 270 | Behavior (Neuroscience) | 489
4 | Visual Perception | 256 | Visual Perception | 418
5 | Climate Change | 237 | Democracy | 398
6 | Holocene | 162 | Transport | 390
7 | Empathy | 145 | Mental Health | 385
8 | Behavior (Neuroscience) | 142 | Quality of Life | 344
9 | Archeology | 141 | Functional Magnetic Resonance Imaging | 307
10 | Transport | 130 | Justice | 251
Source: Own edit based on Scopus and ERC data.
Table 6. Top 10 collaborating institutions of French ERC winners based on their publications.
Institution | Co-Authored Publications | Citations per Publication | Field-Weighted Citation Impact
CNRS | 1519 | 32.9 | 2.24
Université PSL | 746 | 38.2 | 2.34
Institut national de la santé et de la recherche médicale | 452 | 45.6 | 2.22
Université Paris-Saclay | 375 | 44.9 | 2.05
Institut de recherche pour le développement | 301 | 40.6 | 2.48
Sorbonne Université | 300 | 45.3 | 2.34
Commissariat à l’énergie atomique et aux énergies alternatives | 296 | 51.0 | 2.22
Paris-Est Sup | 254 | 25.1 | 1.99
Université de Montpellier | 249 | 40.5 | 2.61
Université Fédérale Toulouse Midi-Pyrénées | 243 | 38.1 | 2.09
Source: Own edit based on Scopus data.
Table 7. Top 10 collaborating institutions of Dutch ERC winners based on their publications.
Institution | Co-Authored Publications | Citations per Publication | Field-Weighted Citation Impact
University of Amsterdam | 1892 | 36.7 | 2.8
Utrecht University | 1302 | 40.3 | 2.7
Vrije Universiteit Amsterdam | 1238 | 34.3 | 2.28
Radboud University Nijmegen | 918 | 29.0 | 1.95
Leiden University | 869 | 25.5 | 2.05
Delft University of Technology | 618 | 44.5 | 2.71
Erasmus University Rotterdam | 416 | 31.9 | 2.44
Amsterdam UMC | 386 | 33.9 | 2.03
Wageningen University & Research | 379 | 59.6 | 4.39
Tilburg University | 372 | 24.4 | 2.39
Source: Own edit based on Scopus data.
Table 8. Collaboration rates of French and Dutch ERC winners based on their publications.
France
Metric | % | Scholarly Output | Citations | Citations per Publication | Field-Weighted Citation Impact
International collaboration | 61.0% | 1686 | 73,204 | 43.4 | 2.69
Only national collaboration | 21.9% | 605 | 15,204 | 25.1 | 1.52
Only institutional collaboration | 5.5% | 152 | 3883 | 25.5 | 1.74
Single authorship (no collaboration) | 11.7% | 323 | 3908 | 12.1 | 1.16
The Netherlands
Metric | % | Scholarly Output | Citations | Citations per Publication | Field-Weighted Citation Impact
International collaboration | 49.1% | 3551 | 149,690 | 42.2 | 2.9
Only national collaboration | 19.6% | 1417 | 39,564 | 27.9 | 1.94
Only institutional collaboration | 18.5% | 1333 | 40,638 | 30.5 | 2.25
Single authorship (no collaboration) | 12.8% | 925 | 13,723 | 14.8 | 1.91
Source: Own edit based on Scopus data.
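To illustrate the logic behind the collaboration categories reported in Table 8, the sketch below classifies a single publication by the countries and institutions of its author affiliations. The record format and country codes are hypothetical, and the figures in Table 8 come from SciVal’s own collaboration metrics, so this is only a simplified reconstruction of the underlying idea.

```python
# Illustrative classification of a publication's collaboration type from author
# affiliations. The (institution, country) record format is a hypothetical
# simplification; SciVal reports these categories directly.
def collaboration_type(affiliations):
    """affiliations: list of (institution, country) pairs, one per author."""
    if len(affiliations) == 1:
        return "single authorship (no collaboration)"
    countries = {country for _, country in affiliations}
    institutions = {institution for institution, _ in affiliations}
    if len(countries) > 1:
        return "international collaboration"
    if len(institutions) > 1:
        return "only national collaboration"
    return "only institutional collaboration"

# Example: a Dutch-French co-publication counts as an international collaboration.
print(collaboration_type([("Utrecht University", "NL"), ("CNRS", "FR")]))
```

Aggregating such labels over a country’s publication set and attaching citation counts and field-weighted citation impact would yield a breakdown of the same shape as Table 8.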
