Abstract
Recent decades have seen a rise in anti-science rhetoric, fueled by scientific scandals, failures of peer review, and the spread of misinformation by trainable generative AI. We argue, moreover, that the continued erosion of scientific authority also arises from inherent features of science and academia, including a reliance on publication as a method for gaining professional credibility and success. Addressing this multifaceted challenge necessitates a concerted effort across several key areas: strengthening scientific messaging, combating misinformation, rebuilding trust in scientific authority, and fundamentally rethinking academic professional norms. Taking these steps will require widespread effort, but if we want to rebuild trust with the public, we must make significant and structural changes to the production and dissemination of science.
1. Introduction
Inundated by a persistent stream of damning headlines in U.S. media, we are regularly confronted with the very public failures of American science (Hilliard et al., 2025). From glaring exposés on the incestuous nature of peer review in Alzheimer’s disease research (Piller, 2023) to challenges regarding the replicability of scientific findings (National Academies of Sciences, Engineering, and Medicine, 2019), a veritable flood of failures and challenges now faces the scientific community (Hsiao & Schneider, 2021). This flood has further eroded the already tenuous trust that the public has placed in the scientific community. Elsewhere, we have argued that a shift in scientific discourse has been a compounding factor in the distrust of scientists and the erosion of scientific authority. Furthermore, we suggested that the IMRaD structure of scientific literature has created additional barriers to approachable scientific discourse and has complicated the communication of scientific ideas and concepts. These issues persist and must be addressed. In addition to these concerns regarding grammatical structure, here we argue that the continued erosion of scientific authority also arises from inherent features of science and academia as a whole. These concerns are wide-sweeping but can be summarized as the reliance on publication as a method for gaining professional credibility and success, the continued spread of misinformation and disinformation within social media, the rise of trainable generative AI that often creates its own sources of information, and the increase and persistence of bias within scientific discourse. With anti-scientific and anti-intellectual ideals serving as campaign talking points, the American political milieu has broadly adopted anti-scientific rhetoric as a basis for budgetary and policy actions.
As one example, the anti-scientific stance of the Trump administration has led to the curtailment of budgets, the reduction in scientific grants, and the cessation of long-standing research focused on climate change (Ordner, 2025). Addressing this multifaceted challenge necessitates a concerted effort across several key areas: strengthening scientific messaging, combating misinformation, rebuilding trust in scientific authority, and fundamentally rethinking academic professional norms.
2. Science in Crisis and the Decline of Scientific Rigor
Academic publishing is partially responsible for the ongoing crisis in scientific authority. The publish-or-perish model does not reward quality; rather, it rewards production regardless of the quality or replicability of a study. Despite calls to move away from various metrics for judging paper quality (DORA, 2012), productivity remains the most common metric of academic success (Pontika et al., 2022). When the modern university model was implemented in the United States in the early 20th century, access to higher education and graduate study was incredibly limited (D. Kim & Rury, 2007). Expectations for research and publication were modest, and the peer-review model worked (Niles et al., 2020). A small, homogeneous group produced and reviewed each other’s work, providing a source of confirmation within the generalized scientific community (Roy, 2021). This reinforced the traditional mechanism for scientists to gain credibility within their fields: a focus on narrow research questions deemed of interest, answered through experimental design, data collection, and interpretation, culminating in a peer-reviewed publication. This traditional model rests on the assumptions that scientists are authorities in their fields and that they maintain a level of objectivity within their research—the ideal figure is the ethical, earnest academic seeking truth for the public weal (Niles et al., 2020). However, this model began to crack decades ago (Hagiopol & Leru, 2024). As more people from more diverse backgrounds began entering higher education, they began to question the assumptions on which the traditional model was based, pointing out how implicit biases frequently compromise objectivity and dictate research questions, study design, and data interpretation. Moreover, as universities moved toward a more business-structured model, they simultaneously viewed these diverse students as revenue streams without much regard for their future employment.
As a consequence, faculty became expensive fat to trim, shrinking the number of tenure-track jobs available (Shermer, 2021; Wright et al., 2004). One result has been increased pressure to publish in order to bulk up CVs and justify an expensive position within academia. Even undergraduates today feel the pressure to have a publication in order to get into graduate school, whereas even 20 years ago a publication before a PhD program was considered exceptional. Recent scandals illustrate the stakes. In 2024, unreplicable data from a California lab earned an unscrupulous postdoc a large grant and a tenure-track job (McMurray, 2024). In a similar case, a PhD student at MIT has been accused of faking data, and MIT’s own investigation has led to a retraction (Lumb, 2025). No doubt the pressure to produce positive results in order to land a tenure-track job was a contributing factor in these scandals.
This immense pressure upon scientists to publish their findings does not guarantee quality (Richardson et al., 2025); in fact, the pressure to publish decreases quality as harried researchers rush articles through the publication process in order to keep their jobs (Romero, 2017). Furthermore, there is a strong bias toward positive results, which prevents the publication of research that could be revelatory for its field (Ophir & Jamieson, 2021). Negative results are not “sexy,” so they are ignored by journals, editors, and the scientific community at large (West & Bergstrom, 2021). The combination of these factors leads to rushed and poorly designed studies and an increased temptation to fake results (Piller, 2023). Science has recently been rocked by numerous scandals in which top researchers have been ousted for faking data in the social sciences, the humanities, and the hard sciences (Fanelli, 2009). Since the 19th century, the perception of objectivity has underwritten the authority of and general respect for scientists and their messaging; each such scandal chips away at that perception.
For example, Alzheimer’s scholarship has been set back decades with the revelations that the amyloid plaque hypothesis was largely based on shoddy or manufactured research going back decades. Journalist Charles Piller worked with academic whistle-blower Matthew Schrag to demonstrate how fraudulent data practices, nepotism, and the incestuous nature of academia, industry, the FDA, and granting organizations enabled billions of dollars to be wasted on research that produced over-hyped, subpar results while researchers outside of the amyloid plaque hypothesis were “sidelined, starved for resources, and even bullied” (Piller, 2023). In Illinois, the convictions of countless individuals have now come under scrutiny after the results and expert testimony of a forensics lab under the aegis of the University of Illinois were revealed to be of questionable scientific accuracy (Dukmasova, 2025). In these and many other instances of misconduct and fabrication in science publishing, the perpetrators cite the need to publish for career advancement or maintenance of funding as the reason for their actions (Paruzel-Czachura et al., 2021; Teixeira da Silva & Nazarovets, 2025). One group of commentators concludes that:
“Paper mills flourish because of research systems that evaluate scientists using publication metrics, thereby inadvertently providing an incentive for misconduct. People with paper-mill publications might be promoted over those who have more modest—but honest—publication records…40% of research-intensive institutions in the United States and Canada consider the impact factor of the journals in which an individual has had work published when making decisions about their promotion and tenure. Institutions seldom seem to punish researchers for using paper mills, perhaps owing to a lack of awareness, or concern about reputational or legal risks.” (Abalkina et al., 2025)
These authors offer several interventions to combat this phenomenon, including educating universities and granting agencies about the presence of fake-research paper mills, holding bad actors accountable for their work, and working with publishers to avoid publishing fake research. We have seen many efforts in this realm, including suggestions to call out falsified papers and retractions when seeking funding (Xu & Hu, 2025). Still, they do not suggest reducing the pressure to publish as a potential solution. Yet academia faces pressure to produce more and more positive results, which has led to a replication crisis. The replicability of scientific results has been in question for decades, with many fields struggling to reproduce the results of published papers (Tang, 2024). According to a survey of scientists published in 2016, more than 70% of researchers had tried and failed to reproduce another scientist’s experiments (Baker, 2025). Even if fraud accounts for only a small fraction of these irreplicable studies, that statistic still reveals that something is amiss. Scientific reporting, at least in the U.S., often presents scientific findings as fact grounded in quantitative data. However, if each study is truly unique or nonreplicable, then these findings bear more resemblance to the “squishy” qualitative data used in the social sciences and the humanities that is often dismissed as anecdotal. Moreover, this reporting style contributes to the public’s state of mistrust. If scientific knowledge is built on a house of cards, then when it tumbles, it not only sets back a field of research but also undermines public confidence in science in general. As more tenuous studies are published, the scientist is no longer seen as a seeker and arbiter of truth or an expert in a field, but rather, at best, as just another googler; at worst, as a self-interested partisan hack or, in some dark corners of the internet, part of a global cabal plotting a new world order (Fuchs & Westervelt, 1996).
3. The Rise of Anti-Science Rhetoric and Scientific Misinformation
These challenges are amplified by social media and its ability to focus on and amplify the ideas of fringe groups with a proverbial ax to grind (Ruths, 2019; Scheufele & Krause, 2019). The public sees only these handfuls of bad actors and knows only that they have been lied to by scientists or by science as a whole (West & Bergstrom, 2021). The next logical step concludes that if scientists will lie about one thing, why wouldn’t they lie about another? If they have been lying about Grandma’s Alzheimer’s treatment and power-posing, why wouldn’t they also be lying about climate science and vaccines (Lewandowsky et al., 2017)? Social media amplifies these concerns by prizing likes and clicks over truth (McLoughlin et al., 2024). To exacerbate this problem, in the United States those swayed by these falsifications of truth now hold many of the highest positions of power to create science policy. Since February 2025, there has been a notable increase in personnel turnover within the Department of Health and Human Services and the Centers for Disease Control and Prevention, including the departure of numerous experienced scientists. Simultaneously, there has been a rise in the appointment of individuals who advocate for treatments outside the established medical consensus and who have been vocally critical of broader scientific trends. While primarily centered in America, this anti-science movement has begun to spread to other countries, adapting to local contexts (Rudroff, 2025).
Since the COVID-19 pandemic, anti-scientific rhetoric has blossomed, resulting in a generalized anti-science backlash and a reliance on ineffective or even detrimental healthcare practices, like essential oils or ivermectin (West & Bergstrom, 2021). Largely attributed to the increased politicization of science and continued ideological polarization in the USA and other countries, anti-science rhetoric has been fueled by media shrouded under the guise of “health freedom” (Szabados, 2019). These anti-intellectual movements have been reinforced and amplified by both social media and traditional news media, resulting in a dismantling of trust in science and the scientific process (Szabados, 2022). Unchecked claims in media and social media alike produce a generalized mistrust of scientific ideas and processes, leaving many confused as to the consensus of truth and perceiving a need to defend scientific ideas (Colebrook, 2023). The general backlash against anti-science rhetoric that spreads on social media through slogans like “science is real” or “science is true whether you believe it or not,” while meaningful, can also be harmful. Truth is not necessarily hypothesizable, provable, or replicable. It is not what science creates. Science looks to establish facts that are verifiable and that often change as new information and technologies are discovered. A population that thinks science creates truth does not understand the role of change in the creation of scientific knowledge (Hagiopol & Leru, 2024).
False and misleading information gets amplified in social media echo chambers. Being inundated with repeated misinformation and falsities can lead to an increased sense of belief in the information (Pillai & Fazio, 2021). To showcase this idea, the main findings of the fraudulent Wakefield et al. paper (Wakefield, 1999) have been repeatedly refuted, yet the idea that autism and vaccination are linked persists (Eggertson, 2010; Motta & Stecula, 2021). For example, the current Secretary of the Department of Health and Human Services, Robert Kennedy Jr (RFK Jr), has recently fired all the members of the U.S. vaccine advisory panel and replaced them with high-profile anti-vax conspiracy theorists who insist there must be a link between autism and vaccines (Stone, 2025). In an unprecedented move, he has tried to pressure a medical journal to retract a large, well-structured Danish study that found no connection between aluminum in vaccines and children’s health risks because he did not like the results (Fieldhouse, 2025). Just recently, the partisan firing of the head of the CDC and the mass resignations that followed have led many top scientists, including former CDC heads, to warn that Americans’ health is being endangered by these moves (Besser et al., 2025). The halls of American biomedical science are being emptied of scientists.
Part of the reason people like RFK Jr. have gained credibility in the eyes of the public is that they can point to scientific misconduct scandals as evidence that science and scientific publishing are corrupt. They are not wrong about the problems driven by the academic publish-or-perish model and its focus on positive results, even if their own science and scientific claims are questionable. Thus, when RFK Jr recently floated the idea that the United States would create its own scientific journals and bar federal grantees from publishing in journals like the New England Journal of Medicine because of its editorial and publishing practices (Dyer, 2025), he was correctly identifying a problem in academic publishing, but his solution would not fix it. Junior faculty are expected both to establish a publication record and to peer-review the work of others in order to advance their careers (Guarino & Borden, 2017; Wright et al., 2004). The temptation to cheat exists because, although tenure is still usually earned based on research publications, the reality for most faculty members is that research becomes secondary to more immediate professional pressures, like the expectation of free service to the discipline or their university. Many faculty joke that the traditional expectations of 40% research, 40% teaching, and 20% service are really 40-100-100, with research shunted to the summer months.
4. Erosion of Scientific Authority
Traditional scientific discourse attempts to present knowledge in an objective, neutral, bias-free manner, and for much of the 20th century, the general public largely viewed science as a reliable source of knowledge and scientists as authoritative, ethical actors. However, as Thomas Kuhn pointed out over 60 years ago, science itself is structured and understood through historical periodicity and paradigm shifts (Kuhn, 2012). Indeed, perhaps the crisis in authority is itself evidence of such a paradigm shift. In any case, these shifts are marked discursively, and the nature of language itself makes objectivity an unattainable ideal within scientific discourse. If language is never a neutral medium, then scientific discourse can never be neutral nor truly objective. According to Russian philosopher Mikhail Bakhtin, “there are no ‘neutral’ words and forms–words and forms that can belong to ‘no one’; language has been completely taken over, shot through with intentions and accents” (Bakhtin, 2010, p. 293). Bakhtin recognized language as inherently centrifugal and dialogic, shot through with intentions and histories that always exceed apparent denotative meanings. Furthermore, he argued that authoritative discourses like those of religion or science attempt to control this excess of meaning through a centripetal, monologizing discursive style. In particular, scientific discourse attempts to construct an objective stance through the stratification of its language and its generic conventions (e.g., IMRaD). As modern science has developed and specialized, its language has become more specialized (i.e., stratified) and its generic conventions standardized across venues in an attempt to present itself neutrally and objectively. While this has facilitated communication among specialists, this centripetal movement has caused scientific discourse to shift away from language and description readily understood by an educated populace.
Often, commentators on this situation lament the lack of scientific literacy among those who fall for anti-science conspiracy theories. They point to the failures of the American education system to inculcate a grasp of basic scientific concepts. Much can be debated about the problems within the American education system that have contributed to this crisis in scientific authority. In fact, embracing a Bakhtinian pedagogy within the science classroom could be one way to ameliorate the situation. Scholars and educators in the humanities often try to encourage a “heteroglossic” classroom in which students’ “home dialects” are welcomed, in the belief that true learning happens when students can “translate” academic knowledge into their own words. However, scientists’ own (in)ability to communicate to varied audiences also shares the blame. Most scientists are taught only one way to communicate, following the very prescriptive IMRaD style of publications and presentations. Furthermore, they are often required to adopt a single register: the disciplinary jargon of their field. While scientific language and structure facilitate understanding among initiates, they present an opaque barrier to others, even academics from different fields. This monologizing tendency contributes to the current crisis. Lay audiences often perceive this opacity with suspicion. Rather than opening the black box of science, however, most scientists are unable to break down complex concepts into registers more accessible to the lay public. This is especially unfortunate since many of these debates occur on social media, an arena dominated by short snippets of distilled information (Hoes et al., 2024).
The rise of social media as the primary source of information for many has resulted in the unchecked spread of information, misinformation, and disinformation in lieu of the peer-reviewed dissemination of results and more verifiable sources of scientific information (Garg & Fetzer, 2025; Iyengar & Massey, 2019). Because the general public looks to more approachable and readily available sources of information on social media, a small, vocal subset of “influencers” has an outsized impact (Baribi-Bartov et al., 2024). These “academic” voices tend to be underqualified in their respective fields, valued for their wide followings rather than their academic prowess (Garg & Fetzer, 2025).
Not only do many Americans lack the basic scientific literacy to judge the reliability of information on social media (Zhai & Pellegrino, 2023; Lederman et al., 2025), but many scientists lack the ability to translate scientific concepts into language accessible to lay audiences (Baran et al., 2025). Insisting that “science is true or real whether you believe it or not” does little to help the situation (Potochnik, 2024, p. 36). Calls to unquestioned authority rarely sway skeptical minds. Such statements elevate science into a monologic authority tantamount to religion rather than exposing science for the messy, contingent quest for knowledge that it is. If you are just supposed to “believe” science, then every new scientific breakthrough could cause a crisis of faith. But science is not grounded in faith. It is grounded in skepticism and the need to subject new discoveries to rigorous scrutiny. Cultivating the common ground of skepticism could be a better strategy for defusing the situation, as would acknowledging that objectivity is always aspirational rather than achieved. Both strategies would require scientists to discursively meet people where they are, to communicate complex ideas in simpler ways, and to welcome lay voices into the dialogue of science.
5. The Rise of AI in Scientific Literature
Compounding the issues presented by misinformation and the falsification of scientific information is the rise of artificial intelligence in scientific literature. From the use of artificial intelligence (AI) to develop heavily modified research figures to the training of AI on retracted papers (Guo et al., 2024; Nguyen & Vuong, 2025), AI presents a challenge that the academic publishing industry is ill-equipped to face. The industry has come out in full force in an attempt to regulate and discourage authors from using generative AI engines like ChatGPT (Brainard, 2025; Frangou et al., 2025; Huff, 2025; Kwon, 2025; MDPI, 2023; Wiley Publishing, 2025). Despite these position statements, there has been an influx of completely falsified papers that seem “good enough,” including papers that have been heavily modified or supplemented by AI (J. J. H. Kim et al., 2024a). As a whole, the ease of falsifying data through the use of AI presents a significant challenge to the integrity of scientific publishing (Kasani et al., 2024). AI models, particularly generative AI, can hallucinate datasets and sources in a way that can seem passable to a peer reviewer (J. J. H. Kim et al., 2024b; Emsley, 2023). This capability allows bad-faith actors to create convincing-looking research from scratch or to manipulate real data to support a predetermined outcome (Hosseini et al., 2024). Some of these fabrications can be caught by systems that use AI tools themselves as a checking mechanism, but such AI-detection systems are inexact and flawed in their own right, potentially opening the door for additional falsification and fabrication of data (Chaka, 2024). Furthermore, experts in the field often miss the presence of AI-generated data (Hartung et al., 2024).
To combat these concerns, many scientific journals have adopted differing policies regarding AI use in the writing, editing, and formatting of their articles, ranging from carte blanche for AI use, to a required affidavit stating that the author has not used AI at all, to a complete lack of clear policy (Ganjavi et al., 2024). These policies can provide some peace of mind for the scientific community but are often “paper tigers”—fearsome in writing but lacking the teeth necessary to discourage bad-faith actors from falsifying or manipulating information within their articles (Elali & Rachid, 2023). Rather than relying only on AI-detection services and a policy surrounding the use of AI, it may be beneficial to adopt stronger peer-review incentives alongside discouraging AI use and requiring transparency whenever AI is used in any part of the writing or editing process. As the world continues to struggle with how to adapt to AI and its impact on every aspect of life, the scientific community will need to follow suit. While there may be some legitimate uses for AI in the academic setting, such as editing or some aspects of data analysis, there is at present no widespread consensus on its use. Developing clear policies and guidelines for the use of generative AI within science communication provides opportunities for reducing and discouraging bad-faith actors from continuing to taint the scientific communication landscape.
6. Conclusions and Solutions
As science has attempted to adopt a neutral lens and present itself as primarily objective and free of bias, it has obfuscated its messaging, leaving a tainted perception of fact and truth in its wake. The attempt to maintain a monologic discourse has failed, resulting in a scientific voice riddled with ineffectual messaging and bias in presentation, and an erosion of trust in scientific authority. The persistent erosion of scientific authority in the eyes of the general public poses a significant threat to societal progress.
Strengthening scientific messaging is paramount. Historically, scientific communication has often been dense, jargon-laden, and primarily directed at fellow experts. If scientists communicated their findings in a narrative style rather than an IMRaD structure, this could foster a deeper understanding among the general public (Nichols & Petzold, 2021). Scientists should avoid the “professional stratification of language” that is persistent in scientific communication and adopt a communication style that gives the broader public opportunities to better examine scientific findings. Moreover, scientists should go beyond simply presenting scientific findings as rote fact and describe their processes and mechanisms as a means of reinforcing scientific methodology. Highlighting the importance of the iterative process in scientific research and being transparent about both positive and negative results in communications about science can give the general public opportunities to examine the scientific process (Voci & Karmasin, 2023). Furthermore, scientists should proactively engage with traditional and social media, serving as reliable sources of information and participating in public discourse rather than merely reacting to crises. By taking a proactive approach to combating misinformation, scientists can better position themselves and science as a whole as sources of trustworthy information. Ensuring that scientists do not overstate the impacts of their research is key to assuaging some doubts about scientific discoveries (Weingart, 2017). Finally, embedding science communication training into academic programs and encouraging scientists to engage in public discourse are crucial to equip researchers with these essential skills.
Aside from encouraging more scientific acumen within social media, misinformation needs to be actively combated, and preemptively so, through carefully curated messaging surrounding new scientific findings. Simply debunking falsehoods can be seen as a curation of messaging and even an affront to the needs of the public (Aruguete et al., 2025). If not done carefully, debunking and “fact-checking” can inadvertently amplify the very misinformation they attempt to stop (Lewandowsky et al., 2017). Prebunking can be done by thoughtfully curating messaging and explaining common misconceptions while inoculating the public against the manipulative tactics commonly used by disinformation campaigns (Ophir and Jamieson, 2021). Media organizations, both social media platforms and traditional news media, need to engage more with the scientific community to bring scientific findings to a broader public.
Additionally, the culture and expectations of academia need to evolve. Globally, academic professionals are challenged to be ever more competitive for funding and for publications. This has ultimately created the “publish-or-perish” landscape in which we now live. One method for addressing these concerns is to reduce the emphasis placed upon publication as a mechanism for promotion, tenure, and other rewards within academia. Faculty report that the number of publications per year is the metric most valued during review, promotion, and tenure (Niles et al., 2020). Uncoupling promotion and career advancement from the number or “impact” of these publications would require broad systemic change but could result in increased quality of research publications over sheer quantity (Carlotto & Gondim, 2025). Increasing funding for academia and the sciences and moving away from a results-driven approach to funding, while focusing on higher-quality, holistic results, could usher in a second golden age of science. By following the suggestions described in numerous journal articles and position statements, we can ensure more rigorous standards when examining the quality of research (Mousa et al., 2024; Torrance, 2019). These declarations of standards often suggest that researchers can produce higher-quality products with more impact by following these guidelines. Initiatives like the 2013 San Francisco Declaration on Research Assessment (DORA) have provided specific guidelines for removing some of these pressures while stressing research integrity (DORA, 2012). However, over a decade after the introduction of DORA, many of its initiatives lag behind current trends, and adoption of the guidelines has been inconsistent, with many publications focusing on changing metrics rather than the publication practices outlined in the declaration (Morgan-Thomas et al., 2024).
The existence of these guidelines provides opportunities for growth but they cannot stand alone and must be constantly updated for the changing landscape of academic publishing.
Sensationalistic news headlines that highlight the failings of science may sell subscriptions, but they generally do little to instill trust in science or the scientific process. While it is important to showcase some of the challenges that science faces, it is equally important to showcase the advancements and discoveries that have been made. Furthermore, journalists need to rely upon scientists themselves as sources rather than celebrity talking heads, media darlings, or social media influencers (Swire-Thompson et al., 2024; Swire-Thompson & Lazer, 2022). Simply because a celebrity has a voice on social media or calls themselves “doctor” does not mean that they are qualified to comment upon anything science-related (Bahar & Hasan, 2016).
Through the commoditization of science under a publish-or-perish model, the sensationalization of scientific journalism, reliance on social media darlings as bastions of information, and the wildcard of generative AI, the authority once held by the scientific voice has diminished (Rein, 2023). Bakhtin would argue that this progression toward diminished authority was itself predictable, given science’s attempt to be monologic. By presenting a unified front, science has inadvertently positioned itself as the arbiter of truth. Thomas Kuhn anticipated this dynamic in The Structure of Scientific Revolutions: when scientific understanding evolves, new concepts are discovered, or failures are unearthed, this position as arbiter of truth is questioned and a paradigm shift is required (Kuhn, 2012). What may once have seemed the establishment of authority has become an unmoving monolith that is being destroyed by the very structure that brought it power. Exacerbating this problem is the zombification of retracted papers: many papers that have been retracted can no longer be removed from the generalized knowledge base without substantial electronic effort (Ioannidis et al., 2025). The persistence of these citations creates a mire of false or misleading information within the scientific landscape (Hsiao & Schneider, 2021).
As we seek answers to the conundrum faced by the scientific community writ large, we must be mindful of the need for sweeping change to the many practices and industries empowered by current scientific mechanisms. As academics, we must move away from the publish-or-perish model of success and focus on the quality of work, not the quantity thereof. Furthermore, quality must be judged not by the artificially manufactured impact factor but by the applicability and use of the scientific finding. Finally, we must give voice to scientists who go beyond the scientific literature and engage with more readily accessible forms of communication, responding unabashedly to false narratives. Taking these steps will require widespread effort, but if we want to rebuild trust with the public, we must make significant and structural changes to the production and dissemination of science.
Author Contributions
A.M.P. and M.D.N. worked equally when conceptualizing, researching and writing this article. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Abalkina, A., Aquarius, R., Bik, E., Bimler, D., Bishop, D., Byrne, J., Cabanac, G., Day, A., Labbé, C., & Wise, N. (2025). ‘Stamp out paper mills’—Science sleuths on how to fight fake research. Nature, 637(8048), 1047–1050. [Google Scholar] [CrossRef]
- Aruguete, N., Calvo, E., & Ventura, T. (2025). The fact-checking dilemma: Fact-checking increases the reputation of the fact-checker but creates perceptions of ideological bias. Research & Politics, 12, 20531680251323120. [Google Scholar] [CrossRef]
- Bahar, V. S., & Hasan, M. (2016). #Fakefamous: How do influencers use disinformation to establish long-term credibility on social media? Information Technology & People, 38(6), 2441–2476. [Google Scholar] [CrossRef]
- Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533, 452–454. [Google Scholar] [CrossRef]
- Bakhtin, M. M. (2010). The dialogic imagination: Four essays (Vol. 1). University of Texas Press. ISBN 0-292-78286-1. [Google Scholar]
- Baran, R. V., Fazari, M., Lightfoot, D., & Cusimano, M. D. (2025). Social media strategies used to translate knowledge and disseminate clinical neuroscience information to healthcare users: A systematic review. PLoS Digit Health, 4, e0000778. [Google Scholar] [CrossRef]
- Baribi-Bartov, S., Swire-Thompson, B., & Grinberg, N. (2024). Supersharers of fake news on Twitter. Science, 384, 979–982. [Google Scholar] [CrossRef] [PubMed]
- Besser, R., Cohen, M. K., Foege, W., Frieden, T., Koplan, J., Roper, W., Satcher, D., Schuchat, A., & Walensky, R. P. (2025, September 1). Opinion | We ran the C.D.C.: Kennedy is endangering every American’s health. The New York Times. [Google Scholar]
- Brainard, J. (2025). Far more authors use AI to write science papers than admit it, publisher reports. Available online: https://www.science.org/content/article/far-more-authors-use-ai-write-science-papers-admit-it-publisher-reports (accessed on 7 November 2025).
- Carlotto, M. S., & Gondim, S. M. G. (2025). Mental health and dignity in higher education: The academic career in decline. Academia Mental Health and Well-Being, 2(3). [Google Scholar] [CrossRef]
- Chaka, C. (2024). Reviewing the performance of AI detection tools in differentiating between AI-generated and human-written texts: A literature and integrative hybrid review. Journal of Applied Learning and Teaching, 7(1), 115–126. [Google Scholar] [CrossRef]
- Colebrook, C. (2023). Science is real. Symploke, 31, 423–430. [Google Scholar] [CrossRef]
- DORA. (2012). About DORA. Available online: https://sfdora.org/ (accessed on 16 August 2025).
- Dukmasova, M. (2025, August 14). Fake science, faulty methods, misleading testimony. Injustice Watch. [Google Scholar]
- Dyer, O. (2025). RFK Jr threatens to stop US scientists from publishing in major medical journals. BMJ, 389, r1110. [Google Scholar] [CrossRef]
- Eggertson, L. (2010). Lancet retracts 12-year-old article linking autism to MMR vaccines. CMAJ, 182, E199–E200. [Google Scholar] [CrossRef]
- Elali, F. R., & Rachid, L. N. (2023). AI-generated research paper fabrication and plagiarism in the scientific community. Patterns, 4, 100706. [Google Scholar] [CrossRef]
- Emsley, R. (2023). ChatGPT: These are not hallucinations—They’re fabrications and falsifications. Schizophrenia, 9, 52. [Google Scholar] [CrossRef]
- Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5), e5738. [Google Scholar] [CrossRef]
- Fieldhouse, R. (2025). RFK Jr demanded a vaccine study be retracted—The journal said no. Nature, 645, 13–14. [Google Scholar] [CrossRef] [PubMed]
- Frangou, S., Volpe, U., & Fiorillo, A. (2025). AI in scientific writing and publishing: A call for critical engagement. European Psychiatry, 68, e98. [Google Scholar] [CrossRef] [PubMed]
- Fuchs, S., & Westervelt, S. D. (1996). Fraud and trust in science. Perspectives in Biology and Medicine, 39(2), 248–269. [Google Scholar] [CrossRef]
- Ganjavi, C., Eppler, M. B., Pekcan, A., Biedermann, B., Abreu, A., Collins, G. S., Gill, I. S., & Cacciamani, G. E. (2024). Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: Bibliometric analysis. BMJ, 384, e077192. [Google Scholar] [CrossRef]
- Garg, P., & Fetzer, T. (2025). Political expression of academics on Twitter. Nature Human Behaviour, 9, 1815–1832. [Google Scholar] [CrossRef] [PubMed]
- Guarino, C. M., & Borden, V. M. H. (2017). Faculty service loads and gender: Are women taking care of the academic family? Research in Higher Education, 58, 672–694. [Google Scholar] [CrossRef]
- Guo, X., Dong, L., & Hao, D. (2024). RETRACTED: Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway. Frontiers in Cell and Developmental Biology, 11, 1339390. [Google Scholar] [CrossRef]
- Hagiopol, C., & Leru, P. M. (2024). Scientific truth in a post-truth era: A review*. Science & Education, 34, 2923–2956. [Google Scholar] [CrossRef]
- Hartung, J., Reuter, S., Kulow, V. A., Fähling, M., Spreckelsen, C., & Mrowka, R. (2024). Experts fail to reliably detect AI-generated histological data. Scientific Reports, 14(1), 28677. [Google Scholar] [CrossRef]
- Hilliard, A., Sugden, N., Bass, K., & Gunter, C. (2025). Survey-based analysis of a science of science communication scientific interest group: Member feedback and perspectives on science communication. JCOM, 24, N03. [Google Scholar] [CrossRef]
- Hoes, E., Aitken, B., Zhang, J., Gackowski, T., & Wojcieszak, M. (2024). Prominent misinformation interventions reduce misperceptions but increase scepticism. Nature Human Behaviour, 8, 1545–1553. [Google Scholar] [CrossRef]
- Hosseini, M., Rasmussen, L. M., & Resnik, D. B. (2024). Using AI to write scholarly publications. Accountability in Research, 31, 715–723. [Google Scholar] [CrossRef]
- Hsiao, T.-K., & Schneider, J. (2021). Continued use of retracted papers: Temporal trends in citations and (lack of) awareness of retractions shown in citation contexts in biomedicine. Quantitative Science Studies, 2, 1144–1169. [Google Scholar] [CrossRef]
- Huff, C. (2025). The promise and perils of using AI for research and writing. Available online: https://www.apa.org/topics/artificial-intelligence-machine-learning/ai-research-writing (accessed on 7 November 2025).
- Ioannidis, J. P. A., Pezzullo, A. M., Cristiano, A., Boccia, S., & Baas, J. (2025). Linking citation and retraction data reveals the demographics of scientific retractions among highly cited authors. PLoS Biology, 23, e3002999. [Google Scholar] [CrossRef] [PubMed]
- Iyengar, S., & Massey, D. S. (2019). Scientific communication in a post-truth society. Proceedings of the National Academy of Sciences of the United States of America, 116, 7656–7661. [Google Scholar] [CrossRef] [PubMed]
- Kasani, P. H., Cho, K. H., Jang, J.-W., & Yun, C.-H. (2024). Influence of artificial intelligence and chatbots on research integrity and publication ethics. Science Editing, 11, 12–25. [Google Scholar] [CrossRef]
- Kim, D., & Rury, J. L. (2007). The changing profile of college access: The truman commission and enrollment patterns in the postwar era. History of Education Quarterly, 47, 302–327. [Google Scholar] [CrossRef]
- Kim, J. J. H., Srivatsa, A. V., Nahass, G. R., Rusanov, T., Hwang, S., Kim, S., Solomon, I., Lee, T. H., Kadkol, S., Ajilore, O., & Dai, Y. (2024a). Generative AI can effectively manipulate data. AI Ethics, 5, 4515–4529. [Google Scholar] [CrossRef]
- Kim, J. J. H., Um, R. S., Lee, J. W. Y., & Ajilore, O. (2024b). Generative AI can fabricate advanced scientific visualizations: Ethical implications and strategic mitigation framework. AI Ethics, 5, 4481–4493. [Google Scholar] [CrossRef]
- Kuhn, T. (2012). The structure of scientific revolutions: 50th anniversary edition. University of Chicago Press. ISBN 978-0226458120. [Google Scholar]
- Kwon, D. (2025). Is it OK for AI to write science papers? Nature survey shows researchers are split. Nature, 641, 574–578. [Google Scholar] [CrossRef]
- Lederman, J. S., Akerson, V., Bartels, S., & Schwartz, R. (2025). Attention science educators, we have a problem: Lack of global functional scientific literacy. International Journal of Science Education, 47, 1275–1279. [Google Scholar] [CrossRef]
- Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6, 353–369. [Google Scholar] [CrossRef]
- Lumb, E. (2025, May 19). Research fraud at MIT: High-profile study was too good to be true. Signals. [Google Scholar]
- McLoughlin, K. L., Brady, W. J., Goolsbee, A., Kaiser, B., Klonick, K., & Crockett, M. J. (2024). Misinformation exploits outrage to spread online. Science, 386, 991–996. [Google Scholar] [CrossRef] [PubMed]
- McMurray, C. (2024, October 4). A scientific fraud. An investigation. A lab in recovery. The Transmitter: Neuroscience News and Perspectives. [Google Scholar]
- MDPI. (2023). MDPI’s updated guidelines on artificial intelligence and authorship. Available online: https://www.mdpi.com/about/announcements/5687 (accessed on 7 November 2025).
- Morgan-Thomas, A., Tsoukas, S., Dudau, A., & Paweł, G. (2024). Beyond declarations: Metrics, rankings and responsible assessment. Research Policy, 53, 105093. [Google Scholar] [CrossRef]
- Motta, M., & Stecula, D. (2021). Quantifying the effect of Wakefield et al. (1998) on skepticism about MMR vaccine safety in the U.S. PLoS ONE, 16, e0256395. [Google Scholar] [CrossRef]
- Mousa, A., Flanagan, M., Tay, C. T., Norman, R. J., Costello, M., Li, W., Wang, R., Teede, H., & Mol, B. W. (2024). Research integrity in guidelines and evIDence synthesis (RIGID): A framework for assessing research integrity in guideline development and evidence synthesis. EClinicalMedicine, 74, 102717. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- National Academies of Sciences, Engineering, and Medicine. (2019). Reproducibility and replicability in science. National Academies Press. ISBN 978-0-309-48619-4. [Google Scholar]
- Nguyen, M.-H., & Vuong, Q.-H. (2025). Artificial intelligence and retracted science. AI & Society, 40, 2345–2346. [Google Scholar] [CrossRef]
- Nichols, M. D., & Petzold, A. M. (2021). A crisis of authority in scientific discourse. Cultural Studies of Science Education, 16, 643–650. [Google Scholar] [CrossRef]
- Niles, M. T., Schimanski, L. A., McKiernan, E. C., & Alperin, J. P. (2020). Why we publish where we do: Faculty publishing values and their relationship to review, promotion and tenure expectations. PLoS ONE, 15, e0228914. [Google Scholar] [CrossRef] [PubMed]
- Ophir, Y., & Jamieson, K. H. (2021). The effects of media narratives about failures and discoveries in science on beliefs about and support for science. Public Understanding of Science, 30, 1008–1023. [Google Scholar] [CrossRef] [PubMed]
- Ordner, N. (2025). NIH budget cuts threaten the future of biomedical research—And the young scientists behind it. Available online: https://www.latimes.com/science/story/2025-07-06/nih-budget-cuts-threaten-the-future-of-medical-research-and-young-scientists (accessed on 16 September 2025).
- Paruzel-Czachura, M., Baran, L., & Spendel, Z. (2021). Publish or be ethical? Publishing pressure and scientific misconduct in research. Research Ethics, 17(3), 375–397. [Google Scholar] [CrossRef]
- Pillai, R. M., & Fazio, L. K. (2021). The effects of repeating false and misleading information on belief. WIREs Cognitive Science, 12, e1573. [Google Scholar] [CrossRef]
- Piller, C. (2023). Probe of Alzheimer’s studies finds ‘egregious misconduct’. Science, 382(6668), 251–252. [Google Scholar] [CrossRef]
- Pontika, N., Klebel, T., Correia, A., Metzler, H., Knoth, P., & Ross-Hellauer, T. (2022). Indicators of research quality, quantity, openness, and responsibility in institutional review, promotion, and tenure policies across seven countries. Quantitative Science Studies, 3, 888–911. [Google Scholar] [CrossRef]
- Potochnik, A. (2024). Science and the public (1st ed.). Cambridge University Press. ISBN 978-1-009-04947-4. [Google Scholar]
- Rein, B. (2023). Making science education more accessible: A case study of TikTok’s utility as a science communication tool. Neuroscience, 530, 192–200. [Google Scholar] [CrossRef]
- Richardson, R. A. K., Hong, S. S., Byrne, J. A., Stoeger, T., & Amaral, L. A. N. (2025). The entities enabling scientific fraud at scale are large, resilient, and growing rapidly. Proceedings of the National Academy of Sciences, 122, e2420092122. [Google Scholar] [CrossRef] [PubMed]
- Romero, F. (2017). Novelty versus replicability: Virtues and vices in the reward system of science. Philosophy of Science, 84, 1031–1043. [Google Scholar] [CrossRef]
- Roy, S. C. (2021). Peer review process—Its history and evolution. Science and Culture, 87, 36–44. [Google Scholar] [CrossRef]
- Rudroff, T. (2025). The growing tide of antiscience sentiment: A global concern. World Medical & Health Policy, 17, 529–537. [Google Scholar] [CrossRef]
- Ruths, D. (2019). The misinformation machine. Science, 363, 348. [Google Scholar] [CrossRef]
- Scheufele, D. A., & Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences, 116, 7662–7669. [Google Scholar] [CrossRef]
- Shermer, E. T. (2021). Indentured students: How government-guaranteed loans left generations drowning in college debt. Harvard University Press. ISBN 978-0-674-26980-4. [Google Scholar]
- Stone, W. (2025, June 12). RFK Jr. names new slate of vaccine advisers after purging CDC panel. NPR. [Google Scholar]
- Swire-Thompson, B., Kilgallen, K., Dobbs, M., Bodenger, J., Wihbey, J., & Johnson, S. (2024). Discrediting health disinformation sources: Advantages of highlighting low expertise. Journal of Experimental Psychology: General, 153, 2299–2313. [Google Scholar] [CrossRef]
- Swire-Thompson, B., & Lazer, D. (2022). Reducing health misinformation in science: A call to arms. The ANNALS of the American Academy of Political and Social Science, 700, 124–135. [Google Scholar] [CrossRef] [PubMed]
- Szabados, K. (2019). Can we win the war on science? Understanding the link between political populism and anti-science politics. Populism, 2(2), 207–236. [Google Scholar] [CrossRef]
- Szabados, K. (2022). The disenchantment with science: Anti-science in the postmodern age. In The Routledge history of American science. Routledge. ISBN 978-1-003-11239-6. [Google Scholar]
- Tang, B. L. (2024). Publishing important work that lacks validity or reproducibility—Pushing frontiers or corrupting science? Accountability in Research, 32, 1159–1179. [Google Scholar] [CrossRef]
- Teixeira da Silva, J. A., & Nazarovets, S. (2025). The publish or perish, publish and perish, publish then perish, and now retract and perish cultures in academia. Naunyn-Schmiedeberg’s Archives of Pharmacology. [Google Scholar] [CrossRef]
- Torrance, H. (2019). The research excellence framework in the United Kingdom: Processes, consequences, and incentives to engage. Qualitative Inquiry, 26, 107780041987874. [Google Scholar] [CrossRef]
- Voci, D., & Karmasin, M. (2023). Sustainability communication: How to communicate an inconvenient truth in the era of scientific mistrust. Journal of Communication Management, 28, 15–40. [Google Scholar] [CrossRef]
- Wakefield, A. J. (1999). MMR vaccination and autism. The Lancet, 354, 949–950. [Google Scholar] [CrossRef]
- Weingart, P. (2017). Is there a hype problem in science? If so, how is it addressed? In K. H. Jamieson, D. M. Kahan, & D. A. Scheufele (Eds.), The oxford handbook of the science of science communication. Oxford University Press. ISBN 978-0-19-049762-0. [Google Scholar]
- West, J. D., & Bergstrom, C. T. (2021). Misinformation in and about science. Proceedings of the National Academy of Sciences, 118, e1912444117. [Google Scholar] [CrossRef] [PubMed]
- Wiley Publishing. (2025). AI guidelines for researchers. Available online: https://www.wiley.com/en-us/publish/article/ai-guidelines/ (accessed on 7 November 2025).
- Wright, M. C., Assar, N., Kain, E. L., Kramer, L., Howery, C. B., McKinney, K., Glass, B., & Atkinson, M. (2004). Greedy institutions: The importance of institutional context for teaching in higher education. Teaching Sociology, 32, 144–159. [Google Scholar] [CrossRef]
- Xu, S. B., & Hu, G. (2025). Combating China’s retraction crisis. Nature Human Behaviour, 9(4), 631–634. [Google Scholar] [CrossRef] [PubMed]
- Zhai, X., & Pellegrino, J. W. (2023). Large-scale assessment in science education. In Handbook of research on science education. Routledge. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).