Systematic Review

Scientific Evidence in Public Health Decision-Making: A Systematic Literature Review of the Past 50 Years

by Emmanuel Kabengele Mpinga 1, Sara Chebbaa 2, Anne-Laure Pittet 1,* and Gabin Kayumbi 3

1 Institute of Global Health, Faculty of Medicine, University of Geneva, 1202 Geneva, Switzerland
2 Faculty of Biology and Medicine, University of Lausanne, 1011 Lausanne, Switzerland
3 The Alan Turing Institute, London NW1 2DB, UK
* Author to whom correspondence should be addressed.
Int. J. Environ. Res. Public Health 2025, 22(9), 1343; https://doi.org/10.3390/ijerph22091343
Submission received: 4 July 2025 / Revised: 2 August 2025 / Accepted: 20 August 2025 / Published: 28 August 2025

Abstract

Background: Scientific evidence plays a critical role in informing public health decision-making processes. However, the extent, nature, and effectiveness of its use remain uneven across contexts. Despite the increasing volume of literature on the subject, previous syntheses have often suffered from narrow thematic, temporal, or geographic scopes. Objectives: This study undertook a comprehensive systematic literature review spanning 50 years to (i) synthesise current knowledge on the use of scientific evidence in public health decisions, (ii) identify key determinants, barriers, and enablers, (iii) evaluate implementation patterns, and (iv) propose future directions for research and practice. Methods: We followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines and searched three large databases (Web of Science, Embase, and PubMed) for articles published in English or French between January 1974 and December 2024. Studies were analysed thematically and descriptively to identify trends, patterns, and knowledge gaps. Results: This review reveals a growing corpus of scholarship with a predominance of qualitative studies, mainly published in public health journals. Evidence use is most frequently analysed at the national policy level. Analyses of the evolution of scientific production over time revealed significant shifts beginning as early as 2005. Critical impediments included limited access to reliable and timely data, a lack of institutional capacity, and insufficient training among policy-makers. In contrast, enablers encompassed cross-sector collaboration, data transparency, and alignment between researchers and decision-makers. Conclusions: Addressing persistent gaps necessitates a more nuanced appreciation of interdisciplinary and contextual factors. Our findings call for proactive policies aimed at promoting the use of scientific evidence by improving the availability, reliability, timeliness, and accessibility of health data, and by training decision-makers in the use of scientific evidence for decision making. Furthermore, our findings advocate for better alignment between the agendas of healthcare professionals (e.g., data collection), researchers (e.g., the selection of research topics), and decision-makers (e.g., expectations and needs) in order to develop and implement public health policies that are grounded in and informed by scientific evidence.

1. Introduction

Perhaps more than any other discipline, the science of public health is a discipline of deliberations and, above all, of decision making [1,2,3]. Whether it concerns prevention programmes, hospital planning, training initiatives, epidemic and pandemic control measures, resource allocation, project and activity evaluation, or interventions during natural disasters and humanitarian crises, the practice of public health requires decision making processes.
The success and effectiveness of such decisions depend not only on the means deployed to achieve the objectives but also on the consideration of scientific evidence that may justify the choice of a particular decision, priority, or option over others [4,5].
Beyond the ineffective policies they may produce, ignorance, lack of awareness, and the non-use of evidence can lead to high rates of morbidity and mortality, restrictions in access to care, infringements of patients’ rights, significant economic and social costs associated with diseases, the non-participation of affected communities, and even the loss of credibility and trust in health systems [6,7,8,9].
It is undoubtedly due to this critical importance that the Evidence-Based Public Health movement emerged and developed, which Jenicek [10] defined as the conscientious, explicit, and judicious use of the best available evidence in decisions concerning the health of communities and populations, namely, in the areas of health protection, disease prevention, and the maintenance and improvement of health.
From its origins sometime in the 1960s and 1970s, this movement has experienced various developments in response to the challenges faced by global public health, such as the emergence of new epidemics and the resurgence of old ones (HIV and tuberculosis), the introduction of new technologies in prevention and care (telemedicine, eHealth, artificial intelligence, etc.), social determinants and lack of access to care, climate change and the epidemiological transition, mental health crises, and health workforce shortages, as well as violence and armed conflicts.
Several examples help illustrate the importance of using scientific evidence in public health decision making. One of the most emblematic cases concerns tobacco control policies, particularly in the United States. This case has become widely recognised due to the Reports of the Surgeon General, which played a pivotal role in raising national awareness about the health risks of smoking. These reports helped shift the issue from being perceived primarily as a matter of individual or consumer choice to being understood as a matter of epidemiology, public health, and population-level risk, affecting both smokers and non-smokers alike [11].
Equally emblematic is the second case, which concerns the study of health inequalities among British civil servants. The findings from this research had a significant impact on public policy, influencing measures aimed at improving working conditions, promoting mental health in the workplace, and supporting broader social policies designed to reduce socio-economic disparities [12]. More recently, scientific evidence has been widely used in shaping responses to the COVID-19 pandemic [13].
Overall, these developments have led to substantial scientific production concerning the use of evidence in public health decision making, making the synthesis of these findings highly important in several respects. Indeed, our study synthesises fifty years of scholarship on the use of scientific evidence in public health decision making.
First, without such syntheses, decisions and interventions are likely to be inappropriate, ineffective, or even harmful to individuals and communities. Second, decision-making processes can be hampered by prolonged periods of deliberation. Finally, it is worth noting that policies underpinned by robustly synthesised evidence enhance public trust in health systems.
When viewed from a temporal perspective, the use of scientific evidence in public health decision making has already been the subject of several knowledge syntheses, most of which are limited. Their limitations stem either from narrow datasets, specific populations, short study periods, particular types of review, restricted thematic scopes, or limited geographic coverage.
For instance, the literature review conducted by Kneale et al. (2017) on the use of evidence in local public health decision making is narrow in scope, and its findings cannot be readily applied to other contexts [14]. Similarly, the review by Goyet et al. two years earlier focused solely on Cambodia [15].
The reviews conducted by the teams of Campbell et al. (2018) on increasing the use of research in public health policy and programmes, as well as by Innvaer et al. (2002) on policy-makers’ perceptions of their use of evidence, also illustrate the narrowness of databases and a focus on a specific subpopulation [16,17].
Regarding the thematic content of the reviews, limitations also exist. Norton et al. (2011) systematically examined research showing how public health decision-makers use evidence, drawing on materials from 15 qualitative studies and 3 surveys [18]. Meanwhile, Oliver et al. (2014) focused solely on barriers and enablers to evidence use by policy-makers [19].
The short time frames of prior reviews also restrict their conclusions. Kneale et al. (2017), for example, limited their scoping review to publications from 2010 only [14], and similarly, Masood et al. (2018) reviewed the use of research in public health policy over a brief period of 2010–2016 [20].
Additionally, the scopes of reviews vary considerably in terms of thematic focus. Some syntheses concentrate mostly on decision-makers’ perceptions of their use of evidence [16], while others look exclusively at barriers and obstacles to evidence use [17].
Finally, the knowledge syntheses produced also differ greatly according to their design. Alongside systematic reviews conducted according to rigorous guidelines [14,16], there are scoping reviews, narrative reviews, rapid reviews, and predictive studies [14,16,21,22], whose strength and generalisability require cautious interpretation.
It is precisely these limitations that motivated the present study, which differs from previous work in three key respects: a long historical outlook of 50 years; the use of the broadest available databases (PubMed, Embase, and, especially, Web of Science); and a robust, tested systematic review methodology with the following objectives:
(i)
To synthesise knowledge on the use of scientific evidence in public health decisions;
(ii)
To identify the determinants, barriers, and facilitators;
(iii)
To contribute to the improvement of evidence-based public health decisions;
(iv)
To evaluate the implementation of evidence;
(v)
To define perspectives for future research on this topic.
These objectives stem from three research questions highlighted in this introduction and which can be formulated as follows:
  • What is the current state of knowledge on scientific production concerning the use of evidence in public health decision making? Specifically, what are the main characteristics of this output in terms of volume, research themes, main geographic centres, leading contributors, publication channels, types of study design, etc., over the period under review?
  • What are the obstacles, barriers, and/or facilitators to such use, or, more precisely, what are the organisational, structural, or contextual factors that influence, in one way or another, the use of evidence?
  • Finally, what actions can be undertaken by actors and institutions to enhance the use of evidence in the formulation, implementation, and evaluation of public health policies, i.e., in public health practice?
Answers to these questions will be important for policy-makers, health professionals, research institutions, and other stakeholders, including international funding bodies engaged in global health. For the latter, the findings will help guide and direct their investments towards programmes and activities that are robustly evidence-informed.
Health professionals will gain insight into the determinants of successful or unsuccessful interactions with policy-makers, which are also central to the credibility and effectiveness of their interventions. With these findings, research institutions and investigators will also obtain guidance on knowledge gaps and future research directions in this domain. As for decision-makers, the recommendations arising from this study, as well as the future research prompted by it, will be of direct relevance to them.
Before addressing these questions, it is important to present the theoretical framework underpinning this study, as well as its overall structure.

2. Theoretical Framework and Study Structure

This study falls within the field of research on the use of scientific evidence in public health. It is grounded in a theoretical framework that encompasses the foundational paradigm of research in this domain; the nature of the problem concerning the gap between scientific evidence and its use in public policy; the explanatory theoretical models addressing this gap and their critiques; and the major challenges currently facing research on the use of scientific evidence in public policy.
The fundamental paradigm of research on the use of scientific evidence posits that public decisions, interventions, and actions must be grounded in scientific data to be legitimate [23,24].
Without going into detailed exposition, this paradigm has faced several critiques, including the difficulty of producing robust scientific evidence in certain contexts (e.g., pandemics, humanitarian crises, widespread violence, and complex emergencies); the failure to incorporate local and community knowledge; and the predominance of Western models and methodologies [25,26,27].
For some scholars, the core issue lies in the non-use of scientific evidence in public health decision making due to a variety of factors. This problem is characterised by the gap between the existence of evidence and its lack of use, underuse, or disregard. Understanding this gap entails identifying its explanatory causes as well as potential solutions [20,28].
With regard to theory, numerous explanatory models have been proposed over the years to account for the use or non-use of scientific evidence, each with its own limitations. One of the most cited is Weiss’s model of research utilisation, which argues that evidence use is a cumulative process of knowledge building rather than a direct or linear translation of evidence into action. However, this model has been criticised for not sufficiently accounting for contextual factors, timeframes (short- vs. medium-term use), and the complexity of decision-making processes, which are also shaped by political interests, institutional memory, and unique local contexts. In contrast, other scholars have focused on realist evaluation approaches, which aim to explain how and why evidence may or may not be used by policy-makers [29,30,31,32,33].
It should also be noted that the research on the use of scientific evidence in public policy continues to face several major challenges, such as
(i)
Epistemological challenges, related to divergent understandings of foundational concepts such as “evidence” and what constitutes “research production” and “research use” [23,29,34].
(ii)
Methodological challenges, linked to the complexity and heterogeneity of data, difficulties in contextualising or generalising results, and the misalignment between the production of evidence and political agendas [35].
(iii)
Finally, the use of scientific evidence can face practical and ethical challenges. Among the practical barriers are financial constraints, a lack of or poor-quality data, fragmented data sources, and weak transmission mechanisms between evidence producers and decision-makers. Additionally, certain public health decisions based on evidence may raise risks of human rights violations if not handled ethically [36,37,38].
Figure 1 shows a schematic diagram of the theoretical framework.
The application of this theoretical framework has generated a body of knowledge on the use of evidence in public health decision making, which we analysed through a descriptive systematic literature review.

3. Materials and Methods

This literature review adopted a systematic approach to establish the state of knowledge on the use of scientific evidence in public health decision making. The different stages of this study follow the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) [39].

3.1. Protocol

The protocol was drafted and validated with the support of the library team at the University of Geneva and subsequently registered on PROSPERO (CRD420251025428).

3.2. Search Strategy

The research question was translated into a search strategy using keywords. The search strategies were pre-tested, adapted to each database, and then validated with the assistance of the University of Geneva library team. The literature search was carried out in November 2024. It was applied to three databases: PubMed, Web of Science, and Embase. The keywords were designed to capture the concept of evidence use (“Use of evidence” OR "Utilization of evidence” OR “Application of evidence” OR “Evidence-based public policy” OR “Evidence-based policy” OR “Evidence-based public health policy” OR “Evidence-based public policies” OR “Evidence-based policies” OR “Evidence-based public health policies” OR “Evidence-informed policies” OR “Evidence-informed policy” OR “Use of research evidence” OR “Utilization of research evidence”), the concept of policy making (“Policy making” OR “Politics” OR “Policy” OR “policymaker*” OR “policy-maker*” OR “policymaking” OR “policy-making” OR “policy” OR “policies” OR “public health decision*”), as well as the concept of public health (“Public Health” OR “Public Health Administration” OR “Health Policy” OR “Public health” OR “national health” OR “governmental health” OR “health field*” OR “health policy” OR “health policies”), and, finally, the concept of positive, negative, or neutral determinants (“determinant*” OR “factor*” OR “barrier*” OR “facilitator*” OR “limitation*” OR “strengthen*” OR “support*” OR “hinder*” OR “facilitate*” OR “promote*”) (File S1: Details of Boolean search string for each database).
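For illustration only, the sketch below shows how the four concept blocks above can be assembled into a single Boolean query string. The term lists are abridged and the helper function is hypothetical; the exact, database-specific strings are provided in File S1.

```python
# Illustrative sketch (not the exact syntax submitted to PubMed, Web of Science, or Embase):
# each concept block is an OR-group, and the blocks are combined with AND.
# Term lists are abridged from the strategy described above; see File S1 for the full strings.

evidence_use = ['"Use of evidence"', '"Evidence-based policy"',
                '"Evidence-informed policy"', '"Use of research evidence"']
policy_making = ['"Policy making"', '"policymaker*"', '"policy-maker*"',
                 '"public health decision*"']
public_health = ['"Public Health"', '"Health Policy"', '"national health"',
                 '"health policies"']
determinants = ['"determinant*"', '"factor*"', '"barrier*"', '"facilitator*"',
                '"promote*"']

def or_block(terms: list[str]) -> str:
    """Join a list of search terms into a parenthesised OR block."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_block(block) for block in
                     (evidence_use, policy_making, public_health, determinants))
print(query)
```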

3.2.1. Data Extraction

We identified eligible articles using the PRISMA 2020 flow diagram. The first two authors independently screened all titles and abstracts retrieved by the search; articles assessed as not relevant were discarded, as were duplicates.

3.2.2. Exclusion Criteria

We then applied the following exclusion criteria: articles published before 1974, articles without an abstract, articles written in a language other than French or English, press releases, editorials, technical reports, symposium/conference abstracts or articles, magazine or newspaper articles, glossaries, commentaries, books and book chapters, project descriptions, descriptive reports, technical notes, viewpoints, position papers, tutorials, and methodology articles.
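As a rough sketch of how such criteria can be operationalised during screening, the snippet below applies the exclusion rules as record-level filters. The field names (year, abstract, language, pub_type, doi, title) are hypothetical; in this study, two reviewers screened titles and abstracts manually.

```python
# Illustrative sketch of deduplication and exclusion-criteria filtering.
# Field names are hypothetical; the actual screening was performed manually by two reviewers.

EXCLUDED_TYPES = {
    "press release", "editorial", "technical report", "conference abstract",
    "magazine article", "glossary", "commentary", "book", "book chapter",
    "project description", "descriptive report", "technical note", "viewpoint",
    "position paper", "tutorial", "methodology article",
}

def deduplicate(records: list[dict]) -> list[dict]:
    """Drop duplicates, keyed on DOI when present, otherwise on a normalised title."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec.get("title", "").strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def is_eligible(record: dict) -> bool:
    """Return True if a record passes the exclusion criteria listed above."""
    return (
        record.get("year", 0) >= 1974
        and bool(record.get("abstract"))
        and record.get("language") in {"en", "fr"}
        and record.get("pub_type", "").lower() not in EXCLUDED_TYPES
    )

# Example: screened = [r for r in deduplicate(raw_records) if is_eligible(r)]
```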

3.2.3. Data Analysis

We developed a data extraction categorisation to apply to the final inventory of articles (File S2). First, we extracted the article’s year of publication, the first author, and their institutional affiliation. The latter was determined based on information from the database (PubMed), article content, and the corresponding author’s email address. Affiliations were categorised as follows: “university”, “government”, “foundation”/“think tank” or “private company”, “international organisation”, or “other”. This enabled the identification of the first author’s institutional affiliation country.
Two authors (E.K. and S.C.) independently assigned the categories of journals, topics, domains, and study designs before cross-checking the data. Rare disagreements on categorisation were resolved through discussion until consensus was achieved.
Regarding journal types, they were categorised as “biomedical”, “care-focused”, “public health”, “natural sciences”, “human and social sciences”, or “mixed”.
Themes were assigned to articles according to their title and abstract. The available themes were “evidence production”, “use”, “implementation”, “evaluation”, “translation”, “collaboration”, “resource allocation”, and “capacity building”.
In our study, the terms are used as follows:
- “Evidence” does not reside only in the world where science is produced; it emerges in the political world of policy making, where it is interpreted, made sense of, and used, perhaps persuasively, in policy arguments [40].
- “Evidence production” means the production of scientific evidence.
- “Use” refers to the utilisation of scientific evidence.
- “Implementation” is the carrying out of planned, intentional activities that aim to turn evidence and ideas into policies and practices that work for people in the real world [41,42].
- “Evaluation” is the systematic process of determining merit, worth, value, or significance [43].
- “Translation” is associated with knowledge utilisation and refers to the use of knowledge in practice and decision making by the public, patients, health care professionals, managers, and policy-makers [44].
- “Collaboration” refers to the sharing of resources and capabilities that enables participants to work closely together to create mutually beneficial outcomes [45].
- “Resource allocation” is an aggregation of the functions required to track and manage all resources related to production, including labour, machines, tools, fixtures, materials, and other entities, such as documents, that must be available for work to start at the operation [46].
- “Capacity building” is the development of knowledge, skills, commitment, structures, systems, and leadership to enable effective health policies and build capacity for public health [47].
To determine the domains of the studies, several combined criteria were used, especially elements from the title, abstract, and journal of publication. Two broad domains emerged: on the one hand, “public health” (“preventive”) and, on the other hand, “medicine” (“curative”).
As for the study design, the article categorisation was based on information derived from the title, abstract, and journal of publication. These categories were: “qualitative study”, “case study”, “focus group”, “Delphi”, “descriptive quantitative”, “cohort”, “case–control”, “randomised controlled trial”, “socio-political analysis”, “ethical/moral”, “legal/judicial”/“medico-legal”, “historical”, “philosophical”, “economic”, “theological”, “narrative review”, “rapid review”, “scoping review”, “realist review”, “systematic review”, “meta-analysis”, or “mixed-methods study”.
Next, four variables enabled us to establish the geographical context of the studies. These included the number of study sites (“single-site”, “multi-site”, or “global”), the scope of the study (“local”/“community”, “provincial”, “national”, “international”, or “regional”), the continent where the study took place, and the level of development of the study country (“developed” or “developing”).
Finally, we extracted from the abstracts the determinants presented by the authors as barriers and/or facilitators to the use of evidence, when they were explicitly described as such. The frequency of occurrence of these determinants across the analysed articles was weighted by the number of times each determinant was mentioned so as to highlight the relative importance of each one.
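To make the extraction and weighting steps concrete, the sketch below shows a simplified record structure mirroring the categories above, together with the weighted tally of determinants used for the word cloud in Figure 4. The dataclass fields are a simplification of the extraction table in File S2, not the table itself.

```python
# Simplified sketch of an extraction record and of the weighted determinant tally (Figure 4).
# Fields mirror the categories described above but abbreviate the full extraction table (File S2).

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ExtractedArticle:
    year: int
    first_author: str
    affiliation_type: str                      # "university", "government", ...
    affiliation_country: str
    journal_type: str                          # "public health", "biomedical", ...
    study_design: str
    domain: str                                # "public health" or "medicine"
    themes: list[str] = field(default_factory=list)              # an article may carry several themes
    determinants: dict[str, int] = field(default_factory=dict)   # determinant -> times mentioned

def weighted_determinants(articles: list[ExtractedArticle]) -> Counter:
    """Sum mention counts per determinant across articles, giving the word-cloud weights."""
    totals = Counter()
    for article in articles:
        totals.update(article.determinants)
    return totals

# Example: weighted_determinants(corpus).most_common(10)
```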

4. Results

We present the results of our study in Figures 2–4 and Tables 1–6.

4.1. Application of PRISMA Model (PRISMA 2020)

This search yielded 4442 articles, among which 2208 duplicates were removed. A further 1710 articles were excluded as they did not match the thematic scope of our research question. We then applied the exclusion criteria to the remaining 940 articles. Subsequently, we applied the data extraction categorisation to the final 741 articles (File S2). Figure 2 depicts a flow diagram of the study selection process, as per the PRISMA guidelines (PRISMA 2020) (see also the PRISMA checklist (File S3)).
Figure 2. PRISMA 2020 flow chart diagram of study selection process.

4.2. Characteristics of Scientific Production

4.2.1. Annual Volume of Scientific Production

The evolution of annual scientific production is shown in Figure 3. The 50-year window between 1974 and 2024 includes 741 published articles, sorted by year of publication. We observed an exponential increase in published articles, from 1 article in 1993 to 49 in 2024, with a peak in 2017.
Figure 3. Evolution of annual scientific production from 1974 to 2024.
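A minimal sketch of how the annual series behind Figure 3 can be derived from the extracted records is given below; it assumes only that each included article carries a publication year.

```python
# Illustrative sketch: annual publication counts over the 1974-2024 window (Figure 3).

from collections import Counter

def annual_counts(years: list[int], start: int = 1974, end: int = 2024) -> dict[int, int]:
    """Count publications per year over the review window, filling empty years with 0."""
    counts = Counter(y for y in years if start <= y <= end)
    return {year: counts.get(year, 0) for year in range(start, end + 1)}

def peak_year(series: dict[int, int]) -> int:
    """Return the year with the largest number of publications (2017 in this review)."""
    return max(series, key=series.get)

# Example: series = annual_counts(publication_years); print(peak_year(series))
```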

4.2.2. Types of Journals and Study Designs, Study Settings and Scopes, and Author Affiliations

The majority of these articles were published in English (99.5%). Most were published in public health journals (63.8%), followed by biomedical sciences (17.8%) and humanities and social sciences (10.5%) journals (Table 1).
In terms of study designs (Table 1), the literature was predominantly composed of qualitative studies (32.4%), followed by case studies (18.9%), descriptive quantitative studies (11.2%), and socio-political analyses (8.4%). The authors were mainly affiliated with universities (78.1%), compared with governments (10.1%), private foundations and organisations (6.1%), and international organisations (3.5%).
Apart from studies labelled as “international” (27.9%), the continents that contributed most of the articles analysed were Europe (18.2%), North America (17.3%), Africa (14.3%), Oceania (10.7%), and Asia (9.2%) (Table 2). The lead authors were most frequently based in the United Kingdom (19.2%), the United States (19%), Australia (14.3%), Canada (11.9%), and Nigeria (3.8%) (Table 3).
Table 1. Types of journals and study designs.

Type of publication journal (frequency, percentage)
    Public health: 473 (63.8%)
    Biomedical: 132 (17.8%)
    Humanities and social sciences: 78 (10.5%)
    Mixed: 54 (7.3%)
    Natural sciences: 3 (0.4%)
    Care-focused journals: 1 (0.1%)
Study design (frequency, percentage)
    Qualitative studies: 240 (32.4%)
    Case studies: 140 (18.9%)
    Focus groups: 11 (1.5%)
    Delphi studies: 4 (0.5%)
    Descriptive quantitative studies: 83 (11.2%)
    Cohort studies: 1 (0.1%)
    Case–control studies: 0 (0.0%)
    Randomised controlled trials: 7 (0.9%)
    Socio-political analyses: 62 (8.4%)
    Ethical/moral analyses: 1 (0.1%)
    Legal, medico-legal, and juridical analyses: 3 (0.4%)
    Historical analyses: 4 (0.5%)
    Philosophical analyses: 2 (0.3%)
    Economic analyses: 1 (0.1%)
    Psychological analyses: 1 (0.1%)
    Narrative reviews: 61 (8.2%)
    Rapid reviews: 5 (0.7%)
    Scoping reviews: 19 (2.6%)
    Realist reviews: 3 (0.4%)
    Systematic reviews: 38 (5.1%)
    Meta-analyses: 1 (0.1%)
    Mixed-methods studies: 54 (7.3%)
Table 2. Study settings and scopes.

Continent of the study (frequency, percentage)
    South America: 18 (2.4%)
    Asia: 68 (9.2%)
    Oceania: 79 (10.7%)
    Africa: 106 (14.3%)
    North America: 128 (17.3%)
    Europe: 135 (18.2%)
    International: 207 (27.9%)
Study scope (frequency, percentage)
    Local/community: 11 (1.5%)
    Regional (EU, AU, WHO, Africa, etc.): 49 (6.6%)
    Provincial: 62 (8.4%)
    International: 218 (29.4%)
    National: 401 (54.1%)
Table 3. Authors’ affiliations.

Institutional affiliation of the first author (frequency, percentage)
    University: 579 (78.1%)
    Government: 75 (10.1%)
    Foundation, think tank, or private organisation: 45 (6.1%)
    International organisation: 26 (3.5%)
    Other: 16 (2.2%)
Country of the first author’s institutional affiliation (top ten; frequency, percentage)
    United Kingdom: 142 (19.2%)
    USA: 141 (19.0%)
    Australia: 106 (14.3%)
    Canada: 88 (11.9%)
    Nigeria: 28 (3.8%)
    Switzerland: 24 (3.2%)
    Iran: 21 (2.8%)
    South Africa: 12 (1.6%)
    Lebanon: 11 (1.5%)
    The Netherlands: 10 (1.3%)

4.2.3. Main Contributors

Table 4 presents the main authors who contributed to the publications.
Table 4. Main authors.

Main authors (number of publications)
    Uneke, C. J. et al.: 14
    El-Jardali, F. et al.: 8
    Nabyonga Orem, J. et al.: 6
    Lavis, J. N. et al.: 5
    Smith, K. E. et al.: 5
    Zardo, P. et al.: 5
    Armstrong, R. et al.: 4
    Khalid, A. F. et al.: 4
    Oliver, K. et al.: 4
    Onwujekwe, O. et al.: 4
    Waqa, G. et al.: 4

4.3. Content

4.3.1. Domains and Themes of Study

The themes addressed by these studies, in order of magnitude, pertained to the use of evidence (42.5%), implementation (33.9%), evidence production (25.9%), and translation (15.5%) (Table 5).
Table 5. Domains and themes of study.

Theme (frequency, percentage)
    Production: 192 (25.9%)
    Use: 315 (42.5%)
    Implementation: 251 (33.9%)
    Evaluation: 65 (8.8%)
    Translation: 115 (15.5%)
    Collaboration: 13 (1.8%)
    Resource allocation: 37 (5.0%)
    Capacity building: 32 (4.3%)
Domain (frequency, percentage)
    Medicine: 35 (4.7%)
    Public health: 706 (95.3%)

4.3.2. Determinants (Barriers and/or Facilitators)

Finally, the fundamental determinants of the use (or non-use) of evidence in public health were the context; the internal organisation and structure of health services, institutions, and/or health programmes; the competencies of human resources; access to the necessary resources, including financial ones; personal commitment; and appropriate timing between the availability of evidence and the problems to be addressed (Figure 4).
Figure 4. Determinants, facilitators, and barriers to the use of evidence. In this word cloud, the occurrence of determinants across the analysed articles was weighted by the number of times they were mentioned.

4.4. Analysis of Trends in Scientific Production over Time

Over the study period, scientific production has undergone a number of changes. First, although authors affiliated with universities consistently dominated throughout the entire period under review, we observed that starting from 2010 (the inflexion point), the proportion of publications authored by individuals affiliated with governmental institutions and those working in foundations, think tanks, or private companies increased by 871% for the former and 550% for the latter.
Similarly, in terms of study scope, a turning point appeared around 2010, marked by a shift from a national and international focus to a growing number of studies centred at the provincial and regional levels, with their proportions rising, respectively, by 277% (provincial) and 4700% (regional).
Along the same lines, while single-site studies continued to dominate the landscape of scientific production throughout the study period, the number of multi-site and global studies began to increase significantly after 2010.
Regarding publication channels, a shift can be identified slightly earlier, around 2005. Despite the continued predominance of public health journals as the primary outlets, we observed a notable increase in the number of articles published in biomedical journals starting in 2005.
Finally, when analysing thematic domains, no major changes were observed in the volume of publications across the two main areas. In other words, articles related to public health remained largely dominant throughout the entire study period. The ratio of the difference between the two domains to their sum remained at roughly 75% between 1993 and 2009 and 92% between 2010 and 2024.
These trends are summarised in Table 6.
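For readers who wish to trace the reported shifts, the short worked check below (a sketch, not the analysis code used in this study) reproduces the percentage increases and the domain difference-to-sum ratio directly from the counts in Table 6.

```python
# Worked check of the trend figures reported above, using the counts in Table 6.

def pct_increase(before: int, after: int) -> float:
    """Percentage increase from the 1993-2009 count to the 2010-2024 count."""
    return 100 * (after - before) / before

def diff_to_sum_ratio(public_health: int, medicine: int) -> float:
    """(public health - medicine) / (public health + medicine), as a percentage."""
    return 100 * (public_health - medicine) / (public_health + medicine)

print(f"{pct_increase(7, 68):.0f}%")         # government-affiliated authors: 871%
print(f"{pct_increase(6, 39):.0f}%")         # foundations, think tanks, companies: 550%
print(f"{pct_increase(13, 49):.0f}%")        # provincial scope: 277%
print(f"{pct_increase(1, 48):.0f}%")         # regional scope: 4700%
print(f"{diff_to_sum_ratio(78, 11):.1f}%")   # 1993-2009 domains: 75.3%, reported as 75%
print(f"{diff_to_sum_ratio(628, 24):.1f}%")  # 2010-2024 domains: 92.6%, reported as 92%
```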
Table 6. The trends in scientific production between 1993 and 2024. The first column indicates the relevant variables, while the second and third report the numbers of articles over the two intervals marked by the inflexion point around 2010.

Number of articles (1993–2009 / 2010–2024)
Author affiliation
    Universities: 72 / 507
    Government institutions: 7 / 68
    Foundations, think tanks, and private companies: 6 / 39
Study scope
    Provincial: 13 / 49
    Regional: 1 / 48
Study setting
    Single site: 54 / 397
    Multi-site: 16 / 151
    Global: 19 / 104
Type of journal
    Biomedical: 29 / 48
    Public health: 102 / 425
Study domain
    Medicine: 11 / 24
    Public health: 78 / 628

5. Discussion

In this study, we set out to investigate the use of scientific evidence in public health decision making. The findings of this review indicate that evidence-based decision making in public health continues to attract growing interest. They also revealed an absence of scientific production during the first 20 years of our study period, across the languages and databases examined. This is not unique to our study: other authors have reported similar observations [48,49]. The phenomenon could be explained by the relatively recent emergence of interest in the subject.
Furthermore, while the results indicated a temporary negative impact of the COVID-19 pandemic on the volume of scientific production on this topic, the overall trend in production over time follows a steady increase. The COVID-19 pandemic has had various effects on the use of scientific evidence in public health decision making [50].
First, the pandemic revealed a politicisation and selective use of scientific evidence, which led to intense polarisation and tensions between knowledge producers and decision-makers. The rise and activities of groups labelled as conspiracy theorists illustrate this extreme politicisation [51].
Second, there was a noticeable erosion of public trust in institutions and policy directives. This loss of trust was accompanied by widespread misinformation, including the rapid dissemination of unverified data, notably in the form of fake news on social media platforms [52].
The speed of scientific publishing during this same period is another notable feature that may have contributed to the rapid uptake of evidence by decision-makers. As demonstrated in the study by Schonhaut, “There were 31,319 research articles on COVID-19 and 4287 on human influenza published during 2020. The median time to acceptance for COVID-19 was significantly shorter than that for human influenza (8 vs. 92 days). The median time to publication for COVID-19 articles was also shorter (12 vs. 16 days). Notably, 47.0% of COVID-19 research articles were accepted within the first week of submission, and 19.5% within one day” [53].
Beyond this dynamic, it is important to recall that key decisions in the management of the pandemic—such as vaccination strategies, mask mandates, the use of hand sanitiser, and lockdowns—were based on Western epidemiological models. These measures were often replicated and implemented as-is in contexts with markedly different demographic, epidemiological, political, and economic realities, without sufficient understanding, analysis, or testing by local decision-makers [54].
Finally, it is worth noting the increased visibility of public health professionals and researchers in the media and within pandemic response structures, which undoubtedly influenced decision-making processes during the course of the pandemic [55].
By and large, the observed increase in production is not specific to public health or to the issue of scientific evidence use in this field. Literature syntheses in engineering research, public health, biomedical sciences, and accounting demonstrate the same pattern [49,56,57].
Several factors may explain this trend: the emergence and proliferation of new scientific publication channels, changes in publication processes, the “commercialisation” of academic publishing, technological advancements, and the expansion of research institutions (universities). It is reasonable to anticipate that the integration of artificial intelligence tools in scientific production will further reinforce this trend [58].
Regarding journal types and research domains, the majority of publications are found in public health journals, with a smaller proportion appearing in biomedical journals. These publications predominantly focus on preventive health (public health) and only marginally on curative health (medical). This distribution can be explained by the fact that biomedical journals primarily concentrate on clinical issues, experimental studies, and laboratory research, with limited attention paid to public health concerns. In contrast, public health decisions pertain to implementation science, health systems research, and policy evaluation.
Our findings indicate a predominance of qualitative study designs. This poses challenges for policy-makers, who tend to favour quantitative studies, a preference that is not exclusive to health sector decision-makers [59,60]. The prevalence of qualitative studies over quantitative research complicates both the comprehension and utilisation of evidence by policy-makers, who are often required to communicate their decisions using numerical data. This reliance on quantitative metrics may hinder the effective use of scientific evidence in public policy [61,62]. Moreover, it should be recalled that for some, systematic reviews have become central in certain domains as a mechanism for translating evidence into policy, particularly in sectors beyond public health, such as public administration and transport policy [63,64].
The prevailing international/global scope of studies highlights that many qualitative investigations approach the subject from a global perspective. Given that health policies are often defined at the national level, the predominance of this research scope, as demonstrated in our study, is understandable. The underlying centralised administrative model raises concerns about the consideration of regional and community-specific particularities. It is regrettable that few studies have focused on the use of scientific evidence in decision making at the community level.
Our literature review indicates the existence of global disparities in knowledge production and its relevance to policy.
From a geographical perspective, the distribution of scientific production within our corpus supports the reliability of our methodology. The primary countries of affiliation for first authors include the United Kingdom, the United States, Australia, and Canada. While Anglo-Saxon countries dominate scientific research, questions arise regarding the applicability of such evidence to diverse cultural, social, and political contexts, particularly in the implementation of local policies in non-Anglo-Saxon political systems. A notable finding was the prominent contribution of Nigerian authors to research on this topic. This result stems from the work of Chigozie J. Uneke’s team at the African Institute for Health Policy and Health Systems, facilitated by substantial investment in the establishment and operation of their research infrastructure [65].
The issue of inequalities in knowledge production is reflected in the dominance of research, funding, and publication structures by high-income, so-called “developed” countries, in contrast with those in the Global South. Salager-Meyer points out that this disparity is also highlighted by the fact that 90% of important scientific research is published in 10% of journals and that, while developing countries comprise 80% of the world’s population, only 2% of indexed scientific publications come from these parts of the world [66].
A deeper analysis of this asymmetry also reveals significant epistemological gaps. The predominance of Western paradigms and methodologies tends to marginalise local knowledge systems. Indigenous, experiential, or non-Western epistemologies are often dismissed as “anecdotal” or “unscientific”. Akena (2012) and Chanza (2013) examined the production of Western knowledge and its validation, imposition, and effects on indigenous people and their knowledge. These authors argue that there is a relationship between knowledge producers and their motives and the society in which they live. This relationship influences what is considered “legitimate knowledge” in society, politics, and the economy in non-Western contexts [67,68].
Similarly, the legacy of Eurocentrism continues to affect knowledge production in the social sciences. Evidence produced in and about the Global North is assumed to be more “universal”, whereas evidence from or produced in the Global South is considered valid only for specific contexts (i.e., “localised”) [69].
Another major disparity relates to the influence of Western donors in setting priorities, allocating resources, and shaping the strategic directions of global health. The study by Nugent et al. highlights this imbalance by examining donor spending on noncommunicable diseases (NCDs) in developing countries between 2001 and 2008. The analysis reveals that less than 3 per cent of total development assistance for health (DAH)—that is, USD 503 million out of USD 22 billion—was allocated to NCDs in 2007–2008. Donor assistance for health targeting NCDs reached USD 686 million in 2008.
When compared in terms of disease burden, donors provided approximately USD 0.78 per disability-adjusted life year (DALY) attributable to NCDs in developing countries in 2007, in contrast with USD 29.90 per DALY for HIV/AIDS, tuberculosis, and malaria. The authors estimate that if donors were to provide even half the level of support per DALY for NCDs that they currently allocate to the three major infectious diseases, this would amount to nearly USD 4 billion in DAH for NCDs [70,71].
The other major disparity previously mentioned concerns cultural bias. For example, it is well established that over 85% of the global population resides in low- and middle-income countries where English proficiency is less common than in high-income countries. Such disparities may exacerbate the discrepancies in both producing and accessing scientific literature. This issue is particularly pertinent in critical care. For example, among the top four countries with the highest numbers of intensive care unit beds worldwide (the United States, Brazil, China, and Germany), only one has English as its native language [72].
Structural problems affecting scientific research in countries of the Global South must also be taken into account in the analysis of these disparities. As noted by Ciocca and Delgado, the major factors contributing to low scientific productivity are limited access to grant opportunities, inadequate budgets, substandard laboratory infrastructure and equipment, the high cost and limited supply of reagents, and the inadequate salaries and personal insecurity of scientists. The political and economic instability in several Latin American countries results in a lack of the long-term goals that are essential to the development of science. In Latin America, science is not an engine of the economy. Most equipment and supplies are imported, and national industries are not given incentives to produce these goods at home [73].
Overall, the production and use of scientific evidence at the global level remain shaped by scientific colonialism and neo-colonialism, the dominance and constraints imposed by funding structures, imbalances in data hierarchies, ineffective knowledge transfer, and the limited applicability of so-called universal solutions to health challenges in non-Western contexts.
Regarding the institutional affiliations of first authors, our results indicate that the majority of studies originate from universities. Indeed, universities remain the primary hubs for knowledge production worldwide. This could serve as a lever for evidence utilisation by policy-makers, as decision-makers place significant importance on the source of data as opposed to its quality [74,75,76].
The analysis of research themes reveals that few studies have focused on evaluation, collaboration, or competencies. Most research has concentrated on evidence utilisation and the application of knowledge and practices. This trend may be attributed to the lack of an evaluation culture [77,78]. It is important to note that, given the frequencies observed, a single article could address multiple themes.
This study highlights that only approximately one-third of the reviewed works provide information on the determinants (factors and/or barriers) influencing the use of scientific evidence in public health decision making. The results align with those of a systematic review on barriers to evidence utilisation in public policy more broadly [17,19,79]. A more detailed analysis of how evidence is marginalised, ignored, or used reveals that
(i)
Evidence is often instrumentalised or disregarded depending on the political or institutional context. Centralised political systems, for example, are less conducive to research uptake, as power concentration limits pluralistic debate and reduces demand for evidence. In contrast, decentralised or federal systems foster greater use of research to legitimise and defend policy decisions [37,80].
(ii)
Evidence may be used in ways that are more strategic than scientific, with policy-makers tending to rely more heavily on technical reports from international agencies than on scientific data generated at the community or local level [80].
Indeed, the authors of those studies identify the most frequently cited obstacles as limited access to research, a lack of relevant studies, timing constraints/lack of opportunities for result application, and insufficient research literacy among policy-makers and other users.
(iii)
Certain forms of scientific evidence are either adopted or dismissed depending on the influence of lobbyists within national decision-making bodies, particularly in policy areas such as drug regulation, tobacco control, and the food and pharmaceutical industries. International organisations can exert direct power through conditionality attached to aid or loans, or indirect power by setting norms and standards that national governments adopt [81,82].
(iv)
The ability—or inability—to adapt and interpret knowledge in relation to local contexts can either facilitate or hinder the use of evidence from a technical standpoint [83].
(v)
Finally, it is important to recall that the prevailing culture of evidence hierarchies—particularly the privileging of quantitative over qualitative data—can lead to the marginalisation of qualitative evidence in certain public health decisions. According to some authors, specific collaborative environments between researchers and policy-makers can help facilitate the use of evidence in decision-making processes [5].
With regard to the evolution of scientific production over time, a deeper understanding requires placing these developments within the historical trajectory of the evidence-informed decision-making movement. This history has evolved across several key phases. Prior to 1950, the dominant paradigm was that of rational planning in public health decision making. Between 1950 and 1970, there was increased reliance on the technological model and the use of scientific research, notably influenced by Carol Weiss’s model of research utilisation. The third phase, spanning from 1980 to 1990, was marked by the institutionalisation of evidence-based medicine (EBM), particularly through the foundational work of the Cochrane Collaboration. The period from 2000 to 2010 witnessed the rise of realistic evaluation and a growing emphasis on stakeholder engagement in decision-making processes. From 2010 to the present, new tools and approaches have emerged, including Big Data, artificial intelligence (AI), and a heightened focus on critical reflection and epistemological vigilance in public health decision making. The year 2010 represents a turning point, reflecting a broader shift towards inclusive decision-making frameworks. This period is characterised by greater involvement of foundations and think tanks and a stronger emphasis on integrating the data, needs, and perspectives of diverse stakeholders involved in shaping public health policies. These changes also reflect a growing epistemological awareness regarding the nature, quality, and relevance of evidence in contemporary public health governance [19,84,85].
It should be noted that these developments did not occur in a linear or sequential manner, but rather as part of a continuum marked by the accumulation and layering of knowledge.

Strengths and Limitations

Beyond its internal coherence and external validity, our study presents several strengths. It is based on a broader corpus of materials than previous research on the subject.
Specifically, over 4000 references were initially collected, with approximately 800 articles analysed. The data collection was conducted using databases that are not only extensive but also multidisciplinary. Web of Science, for instance, encompasses more than 20,000 scientific journals and over one billion cited references across the domains of science, social sciences, arts, and humanities. Additionally, the long observation period (50 years, from 1974 to 2024) provides this review with a strong historical perspective for analysis.
However, this study also has certain limitations. A clear cultural bias exists, as the analysed works have been predominantly shaped by Anglo-Saxon culture, its healthcare system model, and its social and governance structures. The generalisability of the findings is, therefore, constrained when applied to other contexts, such as the majority of African healthcare systems (additional explanatory factors include low investment, etc.) [19,86,87,88].
In light of the specific historical, political, cultural, and economic contexts of Africa—as well as certain countries in Asia and Latin America—evidence-informed decision making must be understood and operationalised differently. This would require the following:
(i)
Conceptual clarification and contextual adaptation of nosologies, that is, the definitions and classifications of diseases that are culturally and epidemiologically relevant to local health realities.
(ii)
Methodologically, while randomised controlled trials (RCTs) are often considered the gold standard, incorporating experiential knowledge, including community narratives, traditional knowledge systems, and local health practices, may usefully complement formal scientific evidence.
(iii)
In contexts where health information systems are weak or non-operational, efforts should be made to leverage routine data generated by community-based structures, and not to rely solely on academic or institutional data sources.
(iv)
In cases where conflicts arise between the research agendas of donors, national or local governments, and community needs, the establishment of mediation and negotiation structures could help broker consensus and lead to politically and socially acceptable decisions, even in the absence—or in the presence of limitations—of conventional scientific evidence.
Finally, it would be beneficial to examine the funding sources and mechanisms behind the research leading to the analysed publications. Understanding these financial influences would help identify the interests of the funding bodies supporting these studies.

6. Future Research

This research allows for the identification of several avenues for future studies. The actions required to address the need to incorporate scientific evidence into public health decisions call for the design of national, and even regional, stakeholder partnership strategies founded upon (i) the establishment of frameworks, structures, infrastructures or research institutions, and mechanisms for evidence-based political decision making; (ii) the development of standards, laws, or regulations governing scientific research and their harmonisation; and (iii) basic and continuing training to prepare researchers and policy-makers to work collaboratively. Regarding funders, research-funding institutions should reconsider their approaches to priority setting by aligning funding decisions with the pressing needs identified by national and local governments, as well as by the communities directly concerned.
Future research agendas might focus on the following objectives: (i) analysing the political, social, and cultural influences that explain the use or non-use of evidence; (ii) investigating the determinants that shape the thematic choices of international research-funding bodies and their impact on research priorities in different countries; (iii) improving the quality, accessibility, and timeliness of health data, especially in underserved areas, as well as ensuring interoperability across health information systems; (iv) identifying and testing effective strategies for translating evidence into concrete policies, overcoming implementation barriers, and assessing the impact of these strategies on health outcomes; (v) evaluating the role of universities in generating evidence and, especially, in facilitating its use in public policy; (vi) exploring the training needs of policy-makers to enable them to make better use of scientific evidence; (vii) developing robust data collection methods, fostering local research initiatives, and creating databases that support informed policy making; (viii) examining the alignment between research topics and findings, on the one hand, and policy-makers’ expectations and requirements, on the other; and, finally, (ix) bridging the existing gap between centres of knowledge (universities) and centres of decision making.

7. Conclusions

Our research highlights the existence of a science concerning the use of scientific evidence in public health decision making, for which the volume of scientific production is continuously increasing.
Interest in this topic originates predominantly from the Anglophone academic world and is reflected mostly in the production of qualitative studies, published largely in public health journals. Although public health decisions can be taken at various levels, most studies focus on the use of evidence at the national level. The synthesis we conducted on the themes covered by these studies indicates that this interest is primarily directed towards the use, implementation, and generation of evidence for public policy.
The analysis of trends in scientific production over time revealed significant shifts beginning in 2010, highlighting the importance of considering the historical contexts in which public health challenges emerge, as well as the necessity of evidence-based decision making to effectively address them.
Our findings underscore existing gaps that need to be addressed in future research, particularly the lack of coverage of evidence use at local, community, and regional levels. It is also crucial to keep in mind that the inherently interdisciplinary nature of the issue of evidence use in public health necessitates a deeper understanding of policy-makers’ needs with regard to (i) health data (absence or lack of data, data reliability, timeliness, and accessibility) and (ii) training on the use of scientific evidence for decision making. Aligning the agendas of health professionals (i.e., data collection), researchers (i.e., research priorities), and decision-makers (i.e., expectations and requirements) would enable the development of informed public health policies that meet the expectations and needs of both nations and communities. These avenues for research are essential to the formulation of recommendations, especially for certain continents facing fragile health systems, such as those in Africa.
The use of scientific evidence in public health decision making must be understood within the historical trajectory of the evidence-based movement (EBM) and the diverse contexts in which it is applied; our systematic review highlights its dynamic and evolving nature and underscores the pressing need for continued research in this field.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/ijerph22091343/s1. File S1: Details of Boolean search string for each database; File S2: Main version of table used to extract data from included studies; File S3: The PRISMA checklist.

Author Contributions

Conceptualisation, all authors; methodology, S.C., A.-L.P. and E.K.M.; validation, G.K., E.K.M. and A.-L.P.; formal analysis, S.C. and E.K.M.; investigation, S.C. and E.K.M.; data curation, S.C. and E.K.M.; writing—original draft preparation, all authors; writing—review and editing, all authors; visualisation, S.C. and A.-L.P.; supervision, E.K.M. and G.K.; project administration, S.C. and A.-L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This review utilised published studies that are available in the public domain.

Acknowledgments

We thank the library at the University of Geneva.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of this study; in the collection, analyses, or interpretation of the data; in the writing of this manuscript; or in the decision to publish the results.

Figure 1. A schematic diagram of the theoretical framework.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
