Generative Artificial Intelligence and the Future of Public Knowledge
Abstract
1. Introduction
2. Trajectories of the Creation of Public Knowledge
2.1. The Pre-Digital Creation of Public Knowledge
2.2. The Creation of Public Knowledge in an Online World
3. The Transformative Power of Generative AI Language Models
4. The Creation of Public Knowledge by Generative AI Language Models
- Generative AI language models are suited to semi-automated, repetitive and routine tasks that are customized to a user’s needs, such as drafting e-mails, summarizing and extracting information from larger textual datasets, or selecting items based on semi-vague user input [140,141]; a minimal sketch of such a task follows this list. Increasing familiarity with such systems in daily work life will ‘bleed’ into daily practice in non-work settings, leading to widespread uptake.
- In an age of both instant gratification [137] and an attitude that ‘near enough is good enough,’ the bulk of the public will avail themselves of solutions that provide the most immediate and convenient answers with the least amount of effort (cognitive offloading), especially where confidence in the abilities of AI is high [142,143].
- Transformative technologies that satisfy this demand are poised to gain traction and dominance over alternative, ‘traditional’ and labour-intensive approaches.
- There is a worrying trend that sees critical thinking skills and information literacy in near-terminal decline among large swathes of the populace. Evidence for this can be found in the increasingly uncritical consumption of news and information, the growing reliance on and trust placed in the opinions of social media influencers [144,145], and the continued devaluation of academic subject-matter experts. At present, many researchers, drawing on years of experience and rigorous, peer-reviewed research, may well generate findings and insights into social or environmental phenomena, only to have those findings dismissed out of hand, without any evidence to the contrary, by ideologically or politically motivated commentators and social media influencers who have assumed positions of authority in online communities [146,147,148]. The past decade has seen an increased level of tribalism among the general public, in which the selective use of news sources, online communities that act as echo chambers, and the spruiking of alternative ‘truths’ that defy unequivocal evidence to the contrary have become increasingly normalized [149,150]. In many Western democracies there is no indication that this trend will abate anytime soon; rather, it is bound to continue, intensify and accelerate.
- Finally, there are multiple examples where, over time, information sources that were once derided as untrustworthy or shallow have come to be accepted by the general public not only as the norm but also as the primary source of information. A good example is Wikipedia, which has become one of the main ‘go-to’ sites on the internet even though its content is neither created by accredited experts nor reviewed by other experts and is thus of mixed quality, subject to the epistemology of the page authors, revisers and editors [151,152,153,154,155].
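To make the convenience of such routine delegation concrete, the following minimal sketch illustrates a summarization request of the kind described in the first point above. It assumes the OpenAI Python client (openai ≥ 1.0) with an API key configured in the environment; the model name, prompt and document text are illustrative placeholders rather than recommendations.

```python
# Minimal sketch of delegating a routine summarization task to a generative AI
# language model. Assumes the openai Python package (>=1.0) is installed and the
# OPENAI_API_KEY environment variable is set; model, prompt and text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document = (
    "Minutes of the heritage advisory committee meeting ... "
    "(a longer text would be pasted or loaded from a file here)"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any current chat-capable model would serve
    messages=[
        {"role": "system",
         "content": "Summarize the user's document in three dot points."},
        {"role": "user", "content": document},
    ],
)

# A single call replaces what would otherwise be manual reading and note-taking,
# illustrating the low-effort convenience that underpins cognitive offloading.
print(response.choices[0].message.content)
```

The point is not the specific interface but how little effort such a request demands compared with reading and condensing the document oneself.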
5. Is There an Off-Ramp or Are We Doomed to Be on the Road to Public Ignorance?
Funding
Data Availability Statement
Conflicts of Interest
References
- Weizenbaum, J. Computer Power and Human Reason: From Judgment to Calculation; W. H. Freeman and Company: San Francisco, CA, USA, 1976. [Google Scholar]
- Dreyfus, H.L. What Computers Still Can’t Do: A Critique of Artificial Reason; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
- Biswas, S. Importance of chat GPT in Agriculture: According to chat GPT. SSRN 2023. [Google Scholar] [CrossRef]
- Castro Nascimento, C.M.; Pimentel, A.S. Do Large Language Models Understand Chemistry? A Conversation with ChatGPT. J. Chem. Inf. Model. 2023, 63, 1649–1655. [Google Scholar] [CrossRef]
- Surameery, N.M.S.; Shakor, M.Y. Use chat gpt to solve programming bugs. Int. J. Inf. Technol. Comput. Eng. (IJITC) 2023, 3, 17–22. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. ChatGPT and the generation of digitally born “knowledge”: How does a generative AI language model interpret cultural heritage values? Knowledge 2023, 3, 480–512. [Google Scholar] [CrossRef]
- Sng, G.G.R.; Tung, J.Y.M.; Lim, D.Y.Z.; Bee, Y.M. Potential and pitfalls of ChatGPT and natural-language artificial intelligence models for diabetes education. Diabetes Care 2023, 46, e103–e105. [Google Scholar] [CrossRef]
- Bays, H.E.; Fitch, A.; Cuda, S.; Gonsahn-Bollie, S.; Rickey, E.; Hablutzel, J.; Coy, R.; Censani, M. Artificial intelligence and obesity management: An Obesity Medicine Association (OMA) Clinical Practice Statement (CPS) 2023. Obes. Pillars 2023, 6, 100065. [Google Scholar] [CrossRef]
- Grünebaum, A.; Chervenak, J.; Pollet, S.L.; Katz, A.; Chervenak, F.A. The exciting potential for ChatGPT in obstetrics and gynecology. Am. J. Obstet. Gynecol. 2023, 228, 696–705. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. Exhibiting the Heritage of COVID-19—A Conversation with ChatGPT. Heritage 2023, 6, 5732–5749. [Google Scholar] [CrossRef]
- Qi, X.; Zhu, Z.; Wu, B. The promise and peril of ChatGPT in geriatric nursing education: What We know and do not know. Aging Health Res. 2023, 3, 100136. [Google Scholar] [CrossRef]
- Currie, G.; Singh, C.; Nelson, T.; Nabasenja, C.; Al-Hayek, Y.; Spuur, K. ChatGPT in medical imaging higher education. Radiography 2023, 29, 792–799. [Google Scholar] [CrossRef] [PubMed]
- Agapiou, A.; Lysandrou, V. Interacting with the Artificial Intelligence (AI) Language Model ChatGPT: A Synopsis of Earth Observation and Remote Sensing in Archaeology. Heritage 2023, 6, 4072–4085. [Google Scholar] [CrossRef]
- Bolzan, M.; Scioni, M.; Marozzi, M. Futures Studies and Artificial Intelligence: First Results of an Experimental Collaborative Approach. In Proceedings of the Scientific Meeting of the Italian Statistical Society, Bari, Italy, 17–20 June 2024; pp. 299–303. [Google Scholar]
- Calleo, Y.; Giuffrida, N.; Pilla, F. Exploring hybrid models for identifying locations for active mobility pathways using real-time spatial Delphi and GANs. Eur. Transp. Res. Rev. 2024, 16, 61. [Google Scholar] [CrossRef] [PubMed]
- Calleo, Y.; Taylor, A.; Pilla, F.; Di Zio, S. AI-assisted Real-Time Spatial Delphi: Integrating artificial intelligence models for advancing future scenarios analysis. Qual. Quant. 2025, 59 (Suppl. S2), 1427–1459. [Google Scholar] [CrossRef]
- Di Zio, S.; Calleo, Y.; Bolzan, M. Delphi-based visual scenarios: An innovative use of generative adversarial networks. Futures 2023, 154, 103280. [Google Scholar] [CrossRef]
- Bryant, A. AI Chatbots: Threat or Opportunity? Informatics 2023, 10, 49. [Google Scholar] [CrossRef]
- De Angelis, L.; Baglivo, F.; Arzilli, G.; Privitera, G.P.; Ferragina, P.; Tozzi, A.E.; Rizzo, C. ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Front. Public Health 2023, 11, 1166120. [Google Scholar] [CrossRef] [PubMed]
- Singh, S. ChatGPT Statistics (2025): DAU & MAU Data Worldwide. 19 May 2025. Available online: https://www.demandsage.com/chatgpt-statistics/ (accessed on 25 May 2025).
- Li, A.; Sinnamon, L. Generative AI Search Engines as Arbiters of Public Knowledge: An Audit of Bias and Authority. Proc. Assoc. Inf. Sci. Technol. 2024, 61, 205–217. [Google Scholar] [CrossRef]
- Wihbey, J. AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge? SSRN 2024. [Google Scholar] [CrossRef]
- Brown, D.; Ellerton, P. Is AI Making Us Stupider? Maybe, According to One of the World’s Biggest AI Companies. 2025. Available online: https://theconversation.com/is-ai-making-us-stupider-maybe-according-to-one-of-the-worlds-biggest-ai-companies-249586 (accessed on 12 May 2025).
- Hines, A.; Bishop, P.J.; Slaughter, R.A. Thinking About the Future: Guidelines for Strategic Foresight; Social Technologies: Washington, DC, USA, 2006. [Google Scholar]
- van Duijne, F.; Bishop, P. Introduction to Strategic Foresight; Future Motions, Dutch Futures Society: Den Haag, The Netherlands, 2018; Volume 1, p. 67. [Google Scholar]
- Dunagan, J.F. Jim Dator: The Living Embodiment of Futures Studies. J. Futures Stud. 2013, 18, 131–138. [Google Scholar]
- Inayatullah, S. Learnings from futures studies: Learnings from Dator. J. Futures Stud. 2013, 18, 1–10. [Google Scholar]
- Kieser, A. Organizational, institutional, and societal evolution: Medieval craft guilds and the genesis of formal organizations. Adm. Sci. Q. 1989, 34, 540–564. [Google Scholar] [CrossRef]
- Belfanti, C. Guilds, patents, and the circulation of technical knowledge: Northern Italy during the early modern age. Technol. Cult. 2004, 45, 569–589. [Google Scholar] [CrossRef]
- Schubring, G. Analysing Historical Mathematics Textbooks; Springer: Cham, Switzerland, 2023. [Google Scholar]
- Demets, L. Bruges as a multilingual contact zone: Book production and multilingual literary networks in fifteenth-century Bruges. Urban Hist. 2024, 51, 313–332. [Google Scholar] [CrossRef]
- Nuovo, A. Book Privileges in the Early Modern Age: From Trade Protection and Promotion to Content Regulation. In Book Markets in Mediterranean Europe and Latin America: Institutions and Strategies (15th–18th Centuries); Cachero, M., Maillard-Álvarez, N., Eds.; Springer: Cham, Switzerland, 2023; pp. 21–33. [Google Scholar]
- Landau, D.; Parshall, P.W. The Renaissance Print, 1470–1550; Yale University Press: New Haven, CT, USA, 1994. [Google Scholar]
- Frey, W.; Raitz, W.; Seitz, D.; Frey, W.; Raitz, W.; Seitz, D. Flugschriften aus der Zeit der Reformation und des Bauernkriegs. In Einführung in die Deutsche Literatur des 12. bis 16. Jahrhunderts: Bürgertum und Fürstenstaat—15./16. Jahrhundert; Westdeutscher Verlag: Opladen, Germany, 1981; pp. 38–68. [Google Scholar]
- Peacey, J. Politicians and Pamphleteers: Propaganda During the English Civil Wars and Interregnum; Routledge: London, UK, 2017. [Google Scholar]
- Spennemann, D.H.R. Matthäus Merian’s crocodile in Japan. A biblio-forensic examination of the origins and longevity of an illustration of a Crocodylus niloticus in Jan Jonston’s Historiae naturalis de quadrupetibus. Scr. Print 2019, 43, 201–239. [Google Scholar]
- Boto, C. The Age of Enlightenment and Education. In Oxford Research Encyclopedia of Education; Noblit, G.W., Ed.; Oxford University Press: Oxford, UK, 2021. [Google Scholar]
- Sullivan, L.E. Circumscribing knowledge: Encyclopedias in historical perspective. J. Relig. 1990, 70, 315–339. [Google Scholar] [CrossRef]
- Hohoff, U. 200 Jahre Brockhaus: Geschichte und Gegenwart eines großen Lexikons. Forschung & Lehre 2009, 16, 118–120. [Google Scholar]
- Withers, C.W. Geography in its time: Geography and historical geography in Diderot and d’Alembert’s Encyclopédie. J. Hist. Geogr. 1993, 19, 255–264. [Google Scholar] [CrossRef]
- Simonsen, M. The Rise and Fall of Danish Encyclopedias, 1891–2017. In Stranded Encyclopedias, 1700–2000: Exploring Unfinished, Unpublished, Unsuccessful Encyclopedic Projects; Holmberg, L., Simonsen, M., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 287–322. [Google Scholar]
- Inkster, I. The Social Context of an Educational Movement: A Revisionist Approach to the English Mechanics’ Institutes, 1820–1850. Oxf. Rev. Educ. 1976, 2, 277–307. [Google Scholar] [CrossRef]
- Bruce, R.V. The Launching of Modern American Science, 1846–1876; Plunkett Lake Press: Lexington, MA, USA, 2022. [Google Scholar]
- Geiger, R. The rise and fall of useful knowledge: Higher education for science, agriculture & the mechanic arts, 1850–1875. In History of Higher Education Annual: 1998; Routledge: London, UK, 2020; pp. 47–65. [Google Scholar]
- True, A.C. A History of Agricultural Extension Work in the United States, 1785–1923; US Government Printing Office: Washington, DC, USA, 1928.
- Mettler, S. Soldiers to Citizens: The GI Bill and the Making of the Greatest Generation; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
- Croucher, G.; Woelert, P. Institutional isomorphism and the creation of the unified national system of higher education in Australia: An empirical analysis. High. Educ. 2016, 71, 439–453. [Google Scholar] [CrossRef]
- McClelland, C.E. The German Experience of Professionalization: Modern Learned Professions and Their Organizations from the Early Nineteenth Century to the Hitler Era; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
- Brezis, E.S.; Crouzet, F. The role of higher education institutions: Recruitment of elites and economic growth. Inst. Dev. Econ. Growth 2006, 13, 191. [Google Scholar]
- Milburn, L.-A.S.; Mulley, S.J.; Kline, C. The end of the beginning and the beginning of the end: The decline of public agricultural extension in Ontario. J. Ext. 2010, 48, 7. [Google Scholar] [CrossRef]
- Scotto di Carlo, G. The role of proximity in online popularizations: The case of TED talks. Discourse Stud. 2014, 16, 591–606. [Google Scholar] [CrossRef]
- Haider, J.; Sundin, O. The materiality of encyclopedic information: Remediating a loved one–Mourning Britannica. Proc. Am. Soc. Inf. Sci. Technol. 2014, 51, 1–10. [Google Scholar] [CrossRef]
- Berners-Lee, T.J. Information Management: A Proposal No. CERN-DD-89-001-OC. 1989. Available online: https://web.archive.org/web/20100401051011/https://www.w3.org/History/1989/proposal.html (accessed on 1 September 2023).
- Berners-Lee, T. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor; Harper San Francisco: San Francisco, CA, USA, 1999. [Google Scholar]
- Van Dijk, J.; Hacker, K. The digital divide as a complex and dynamic phenomenon. Inf. Soc. 2003, 19, 315–326. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. Digital Divides in the Pacific Islands. IT Soc. 2004, 1, 46–65. [Google Scholar]
- Spennemann, D.H.R.; Green, D.G. A special interest network for natural hazard mitigation for cultural heritage sites. In Disaster Management Programs for Historic Sites; Spennemann, D.H.R., Look, D.W., Eds.; Association for Preservation Technology, Western Chapter and Johnstone Centre, Charles Sturt University: San Francisco, CA, USA; Albury, Australia, 1998; pp. 165–172. [Google Scholar]
- Langville, A.N.; Meyer, C.D. Google’s PageRank and Beyond: The Science of Search Engine Rankings; Princeton University Press: Princeton, NJ, USA, 2006. [Google Scholar]
- Henzinger, M.; Lawrence, S. Extracting knowledge from the world wide web. Proc. Natl. Acad. Sci. USA 2004, 101, 5186–5191. [Google Scholar] [CrossRef]
- Choo, C.W.; Detlor, B.; Turnbull, D. Web Work: Information Seeking and Knowledge Work on the World Wide Web; Springer Science & Business Media: Dordrecht, The Netherlands, 2013; Volume 1. [Google Scholar]
- Wikipedia. History of Wikipedia. 2023. Available online: https://en.wikipedia.org/wiki/History_of_Wikipedia (accessed on 1 September 2023).
- Inayatullah, S. Future Avoiders, Migrants and Natives. J. Futures Stud. 2004, 9, 83–86. [Google Scholar]
- Merriam-Webster. Google [Verb]. 2023. Available online: https://www.merriam-webster.com/dictionary/google (accessed on 1 September 2023).
- Lee, P.M.; Foster, R.; McNulty, A.; McIver, R.; Patel, P. Ask Dr Google: What STI do I have? Sex. Transm. Infect. 2021, 97, 420–422. [Google Scholar] [CrossRef]
- Burzyńska, J.; Bartosiewicz, A.; Januszewicz, P. Dr. Google: Physicians—The Web—Patients Triangle: Digital Skills and Attitudes towards e-Health Solutions among Physicians in South Eastern Poland—A Cross-Sectional Study in a Pre-COVID-19 Era. Int. J. Environ. Res. Public Health 2023, 20, 978. [Google Scholar] [CrossRef]
- Subba Rao, S. Commercialization of the Internet. New Libr. World 1997, 98, 228–232. [Google Scholar] [CrossRef]
- Fabos, B. Wrong Turn on the Information Superhighway: Education and the Commercialization of the Internet; Teachers College Press: New York, NY, USA, 2004. [Google Scholar]
- Australian Competition and Consumer Commission. Digital Platform Services Inquiry. Interim Report 9: Revisiting General Search Services; Australian Competition and Consumer Commission: Canberra, Australia, 2024.
- Nielsen, R.K. News media, search engines and social networking sites as varieties of online gatekeepers. In Rethinking Journalism Again; Peters, C., Broersma, M., Eds.; Routledge: Abingdon, UK, 2016; pp. 93–108. [Google Scholar]
- Helberger, N.; Kleinen-von Königslöw, K.; Van Der Noll, R. Regulating the new information intermediaries as gatekeepers of information diversity. Info 2015, 17, 50–71. [Google Scholar] [CrossRef]
- Silverstein, C.; Marais, H.; Henzinger, M.; Moricz, M. Analysis of a very large web search engine query log. In Proceedings of the Acm Sigir Forum, Berkeley, CA, USA, 15–19 August 1999; pp. 6–12. [Google Scholar]
- McTavish, J.; Harris, R.; Wathen, N. Searching for health: The topography of the first page. Ethics Inf. Technol. 2011, 13, 227–240. [Google Scholar] [CrossRef]
- Khamis, S.; Ang, L.; Welling, R. Self-branding, ‘micro-celebrity’ and the rise of social media influencers. Celebr. Stud. 2017, 8, 191–208. [Google Scholar] [CrossRef]
- Smith, B.G.; Kendall, M.C.; Knighton, D.; Wright, T. Rise of the brand ambassador: Social stake, corporate social responsibility and influence among the social media influencers. Commun. Manag. Rev. 2018, 3, 6–29. [Google Scholar] [CrossRef]
- Haenlein, M.; Anadol, E.; Farnsworth, T.; Hugo, H.; Hunichen, J.; Welte, D. Navigating the new era of influencer marketing: How to be successful on Instagram, TikTok, & Co. Calif. Manag. Rev. 2020, 63, 5–25. [Google Scholar] [CrossRef]
- Barrera, O.; Guriev, S.; Henry, E.; Zhuravskaya, E. Facts, alternative facts, and fact checking in times of post-truth politics. J. Public Econ. 2020, 182, 104123. [Google Scholar] [CrossRef]
- Collins, H. Establishing veritocracy: Society, truth and science. Transcult. Psychiatry 2024, 61, 783–794. [Google Scholar] [CrossRef]
- Hibberd, F.J. Unfolding Social Constructionism; Springer Science & Business Media: Dordrecht, The Netherlands, 2006. [Google Scholar]
- Aïmeur, E.; Amri, S.; Brassard, G. Fake news, disinformation and misinformation in social media: A review. Soc. Netw. Anal. Min. 2023, 13, 30. [Google Scholar] [CrossRef]
- Muhammed, T.S.; Mathew, S.K. The disaster of misinformation: A review of research in social media. Int. J. Data Sci. Anal. 2022, 13, 271–285. [Google Scholar] [CrossRef]
- Amazeen, M.A. Journalistic interventions: The structural factors affecting the global emergence of fact-checking. Journalism 2020, 21, 95–111. [Google Scholar] [CrossRef]
- Robertson, C.T.; Mourão, R.R.; Thorson, E. Who uses fact-checking sites? The impact of demographics, political antecedents, and media use on fact-checking site awareness, attitudes, and behavior. Int. J. Press/Politics 2020, 25, 217–237. [Google Scholar] [CrossRef]
- Humprecht, E. How do they debunk “fake news”? A cross-national comparison of transparency in fact checks. Digit. J. 2020, 8, 310–327. [Google Scholar] [CrossRef]
- Patil, S.V. Penalized for expertise: Psychological proximity and the devaluation of polymathic experts. In Academy of Management Proceedings; Academy of Management: Valhalla, NY, USA, 2012; p. 14694. [Google Scholar]
- Lavazza, A.; Farina, M. The role of experts in the COVID-19 pandemic and the limits of their epistemic authority in democracy. Front. Public Health 2020, 8, 356. [Google Scholar] [CrossRef]
- Sinatra, G.M.; Lombardi, D. Evaluating sources of scientific evidence and claims in the post-truth era may require reappraising plausibility judgments. Educ. Psychol. 2020, 55, 120–131. [Google Scholar] [CrossRef]
- Garrett, R.K. Echo chambers online?: Politically motivated selective exposure among Internet news users. J. Comput. Mediat. Commun. 2009, 14, 265–285. [Google Scholar] [CrossRef]
- Kitchens, B.; Johnson, S.L.; Gray, P. Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption. MIS Q. 2020, 44, 1619–1649. [Google Scholar] [CrossRef]
- Weismueller, J.; Gruner, R.L.; Harrigan, P.; Coussement, K.; Wang, S. Information sharing and political polarisation on social media: The role of falsehood and partisanship. Inf. Syst. J. 2024, 34, 854–893. [Google Scholar] [CrossRef]
- Miller, S.; Menard, P.; Bourrie, D.; Sittig, S. Integrating truth bias and elaboration likelihood to understand how political polarisation impacts disinformation engagement on social media. Inf. Syst. J. 2024, 34, 642–679. [Google Scholar] [CrossRef]
- Zwanka, R.J.; Buff, C. COVID-19 generation: A conceptual framework of the consumer behavioral shifts to be caused by the COVID-19 pandemic. J. Int. Consum. Mark. 2021, 33, 58–67. [Google Scholar] [CrossRef]
- Carrion-Alvarez, D.; Tijerina-Salina, P.X. Fake news in COVID-19: A perspective. Health Promot. Perspect. 2020, 10, 290. [Google Scholar] [CrossRef] [PubMed]
- Bojic, L.; Nikolic, N.; Tucakovic, L. State vs. anti-vaxxers: Analysis of COVID-19 echo chambers in Serbia. Communications 2023, 48, 273–291. [Google Scholar] [CrossRef]
- Lee, C.S.; Merizalde, J.; Colautti, J.D.; An, J.; Kwak, H. Storm the capitol: Linking offline political speech and online Twitter extra-representational participation on QAnon and the January 6 insurrection. Front. Sociol. 2022, 7, 876070. [Google Scholar] [CrossRef]
- Anderson, J.; Coduto, K.D. Attitudinal and Emotional Reactions to the Insurrection at the US Capitol on January 6, 2021. Am. Behav. Sci. 2022, 68, 913–931. [Google Scholar] [CrossRef]
- Valenzuela, A.; Puntoni, S.; Hoffman, D.; Castelo, N.; De Freitas, J.; Dietvorst, B.; Hildebrand, C.; Huh, Y.E.; Meyer, R.; Sweeney, M.E. How artificial intelligence constrains the human experience. J. Assoc. Consum. Res. 2024, 9, 241–256. [Google Scholar] [CrossRef]
- Ciria, A.; Albarracin, M.; Miller, M.; Lara, B. Social media platforms: Trading with prediction error minimization for your attention. Preprints.
- Markov, T.; Zhang, C.; Agarwal, S.; Eloundou, T.; Lee, T.; Adler, S.; Jiang, A.; Weng, L. New and Improved Content Moderation Tooling. [via Wayback Machine]. 22 August 2023. Available online: https://web.archive.org/web/20230130233845mp_/https://openai.com/blog/new-and-improved-content-moderation-tooling/ (accessed on 28 June 2023).
- Collins, E.; Ghahramani, Z. LaMDA: Our Breakthrough Conversation Technology. 18 May 2021. Available online: https://blog.google/technology/ai/lamda/ (accessed on 1 September 2023).
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2023, arXiv:1706.03762. [Google Scholar] [CrossRef]
- Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys. Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
- OpenAI. ChatGPT 3.5 (August 3 version). 3 August 2023. Available online: https://chat.openai.com (accessed on 11 September 2023).
- OpenAI. GPT-4. 14 March 2023. Available online: https://web.archive.org/web/20230131024235/https://openai.com/blog/chatgpt/ (accessed on 1 October 2023).
- OpenAI. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774. [Google Scholar] [CrossRef]
- OpenAI. GPT-4 System Card; OpenAI: San Francisco, CA, USA, 2024. [Google Scholar]
- OpenAI. GPT-4o System Card; OpenAI: San Francisco, CA, USA, 2024. [Google Scholar]
- Conway, A. What is GPT-4o? Everything You Need to Know About the New OpenAI Model that Everyone Can Use for Free. 13 May 2024. Available online: https://www.xda-developers.com/gpt-4o/ (accessed on 12 August 2025).
- OpenAI. Models. 2025. Available online: https://platform.openai.com/docs/models (accessed on 4 February 2025).
- OpenAI. Introducing 4o Image Generation. 2025. Available online: https://openai.com/index/introducing-4o-image-generation/ (accessed on 30 March 2025).
- OpenAI. GPT-4 Vision System Card; OpenAI: San Francisco, CA, USA, 2024. [Google Scholar]
- OpenAI. GPT-4.5 System Card; OpenAI: San Francisco, CA, USA, 2025. [Google Scholar]
- Lehmann, J. On the Use of ChatGPT in Cultural Heritage Institutions. 3 March 2023. Available online: https://mmk.sbb.berlin/2023/03/03/on-the-use-of-chatgpt-in-cultural-heritage-institutions/?lang=en (accessed on 29 June 2023).
- Trichopoulos, G.; Konstantakis, M.; Caridakis, G.; Katifori, A.; Koukouli, M. Crafting a Museum Guide Using GPT4. Big Data Cogn. Comput. 2023, 7, 148. [Google Scholar] [CrossRef]
- Maas, C. Was Kann ChatGPT für Kultureinrichtungen Tun? 13 May 2023. Available online: https://web.archive.org/web/20230926102318/https://www.aureka.ai/2023/05/13/was-kann-chatgpt-fuer-kultureinrichtungen-tun// (accessed on 12 August 2025).
- Merritt, E. Chatting About Museums with ChatGPT. 25 January 2023. Available online: https://www.aam-us.org/2023/01/25/chatting-about-museums-with-chatgpt (accessed on 29 June 2023).
- Ciecko, B. 9 Ways ChatGPT Can Empower Museums & Cultural Organizations in the Digital Age. 13 April 2023. Available online: https://cuseum.com/blog/2023/4/13/9-ways-chatgpt-can-empower-museums-cultural-organizations-in-the-digital-age (accessed on 29 June 2023).
- Frąckiewicz, M. ChatGPT in the World of Museum Technology: Enhancing Visitor Experiences and Digital Engagement. 30 April 2023. Available online: https://ts2.space/en/chatgpt-in-the-world-of-museum-technology-enhancing-visitor-experiences-and-digital-engagement/ (accessed on 29 June 2023).
- Zimmerman, A.; Janhonen, J.; Beer, E. Human/AI relationships: Challenges, downsides, and impacts on human/human relationships. AI Ethics 2024, 4, 1555–1567. [Google Scholar] [CrossRef]
- Wu, J. Social and ethical impact of emotional AI advancement: The rise of pseudo-intimacy relationships and challenges in human interactions. Front. Psychol. 2024, 15, 1410462. [Google Scholar] [CrossRef] [PubMed]
- Spennemann, D.H.R.; Biles, J.; Brown, L.; Ireland, M.F.; Longmore, L.; Singh, C.J.; Wallis, A.; Ward, C. ChatGPT giving advice on how to cheat in university assignments: How workable are its suggestions? Interact. Technol. Smart Educ. 2024, 21, 690–707. [Google Scholar] [CrossRef]
- Jesson, A.; Beltran Velez, N.; Chu, Q.; Karlekar, S.; Kossen, J.; Gal, Y.; Cunningham, J.P.; Blei, D. Estimating the hallucination rate of generative AI. Adv. Neural Inf. Process. Syst. 2024, 37, 31154–31201. [Google Scholar]
- Siontis, K.C.; Attia, Z.I.; Asirvatham, S.J.; Friedman, P.A. ChatGPT hallucinating: Can it get any more humanlike? Eur. Heart J. 2024, 45, 321–323. [Google Scholar] [CrossRef]
- Kim, Y.; Jeong, H.; Chen, S.; Li, S.S.; Lu, M.; Alhamoud, K.; Mun, J.; Grau, C.; Jung, M.; Gameiro, R. Medical hallucinations in foundation models and their impact on healthcare. arXiv 2025, arXiv:2503.05777. [Google Scholar]
- Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
- French, R.M. The Turing Test: The first 50 years. Trends Cogn. Sci. 2000, 4, 115–122. [Google Scholar] [CrossRef] [PubMed]
- Pinar Saygin, A.; Cicekli, I.; Akman, V. Turing test: 50 years later. Minds Mach. 2000, 10, 463–518. [Google Scholar] [CrossRef]
- Jones, C.R.; Bergen, B.K. Large language models pass the turing test. arXiv 2025, arXiv:2503.23674. [Google Scholar] [CrossRef]
- Singh, A. Consequences of the Turing Test: OpenAI's GPT-4.5. SSRN 2025. [Google Scholar] [CrossRef]
- Mappouras, G. Turing Test 2.0: The General Intelligence Threshold. arXiv 2025, arXiv:2505.19550. [Google Scholar] [CrossRef]
- Mungoli, N. Exploring the synergy of prompt engineering and reinforcement learning for enhanced control and responsiveness in chat GPT. J. Electr. Electron. Eng. 2023, 2, 201–205. [Google Scholar] [CrossRef]
- Lee, U.; Jung, H.; Jeon, Y.; Sohn, Y.; Hwang, W.; Moon, J.; Kim, H. Few-shot is enough: Exploring ChatGPT prompt engineering method for automatic question generation in english education. Educ. Inf. Technol. 2023, 29, 11483–11515. [Google Scholar] [CrossRef]
- Jacobsen, L.J.; Weber, K.E. The promises and pitfalls of ChatGPT as a feedback provider in higher education: An exploratory study of prompt engineering and the quality of AI-driven feedback. Preprint 2023. [Google Scholar] [CrossRef]
- Kim, B.S. Acculturation and enculturation. Handb. Asian Am. Psychol. 2007, 2, 141–158. [Google Scholar]
- Alcántara-Pilar, J.M.; Armenski, T.; Blanco-Encomienda, F.J.; Del Barrio-García, S. Effects of cultural difference on users’ online experience with a destination website: A structural equation modelling approach. J. Destin. Mark. Manag. 2018, 8, 301–311. [Google Scholar] [CrossRef]
- Hekman, S. Truth and method: Feminist standpoint theory revisited. Signs J. Women Cult. Soc. 1997, 22, 341–365. [Google Scholar] [CrossRef]
- Bennett, M.J. A developmental model of intercultural sensitivity. In The International Encyclopedia of Intercultural Communication; Yun, K.Y., Ed.; John Wiley & Sons: New York, NY, USA, 2017; pp. 1–10. [Google Scholar]
- Mokry, N. Instant Gratification: A Decline in Our Attention and a Rise in Digital Disinformation. Ph.D. Thesis, University of Texas, Austin, TX, USA, 2024. [Google Scholar]
- Reeves, N.; Yin, W.; Simperl, E.; Redi, M. “The Death of Wikipedia?”—Exploring the Impact of ChatGPT on Wikipedia Engagement. arXiv 2024, arXiv:2405.10205. [Google Scholar]
- Barrett, B. ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw. Wired Magazine. 23 April 2025. Available online: https://www.wired.com/story/google-ai-overviews-meaning/ (accessed on 15 May 2025).
- Ritala, P.; Ruokonen, M.; Ramaul, L. Transforming boundaries: How does ChatGPT change knowledge work? J. Bus. Strategy, 2023; ahead-of-print. [Google Scholar] [CrossRef]
- Trichopoulos, G.; Konstantakis, M.; Alexandridis, G.; Caridakis, G. Large Language Models as Recommendation Systems in Museums. Electronics 2023, 12, 3829. [Google Scholar] [CrossRef]
- Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
- Lee, H.-P.; Sarkar, A.; Tankelevitch, L.; Drosos, I.; Rintel, S.; Banks, R.; Wilson, N. The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 26 April–1 May 2025; pp. 1–22. [Google Scholar]
- Kim, D.Y.; Kim, H.-Y. Trust me, trust me not: A nuanced view of influencer marketing on social media. J. Bus. Res. 2021, 134, 223–232. [Google Scholar] [CrossRef]
- Pop, R.-A.; Săplăcan, Z.; Dabija, D.-C.; Alt, M.-A. The impact of social media influencers on travel decisions: The role of trust in consumer decision journey. Curr. Issues Tour. 2022, 25, 823–843. [Google Scholar] [CrossRef]
- Mardon, R.; Cocker, H.; Daunt, K. How social media influencers impact consumer collectives: An embeddedness perspective. J. Consum. Res. 2023, 50, 617–644. [Google Scholar] [CrossRef]
- Baker, S.A.; Rojek, C. Lifestyle Gurus: Constructing Authority and Influence Online; John Wiley & Sons: New York, NY, USA, 2020. [Google Scholar]
- Arriagada, A.; Bishop, S. Between commerciality and authenticity: The imaginary of social media influencers in the platform economy. Commun. Cult. Crit. 2021, 14, 568–586. [Google Scholar] [CrossRef]
- Krasni, J. How to hijack a discourse? Reflections on the concepts of post-truth and fake news. Humanit. Soc. Sci. Commun. 2020, 7, 32. [Google Scholar] [CrossRef]
- van Dyk, S. Post-truth, postmodernism and the public sphere. Theory Cult. Soc. 2022, 39, 37–50. [Google Scholar] [CrossRef]
- Ruprechter, T.; Santos, T.; Helic, D. Relating Wikipedia article quality to edit behavior and link structure. Appl. Netw. Sci. 2020, 5, 61. [Google Scholar] [CrossRef]
- Ren, Y.; Zhang, H.; Kraut, R.E. How did they build the free encyclopedia? a literature review of collaboration and coordination among wikipedia editors. ACM Trans. Comput. Hum. Interact. 2023, 31, 1–48. [Google Scholar] [CrossRef]
- Borkakoty, H.; Espinosa-Anke, L. Hoaxpedia: A Unified Wikipedia Hoax Articles Dataset. arXiv 2024, arXiv:2405.02175. [Google Scholar] [CrossRef]
- Shenoy, K.; Ilievski, F.; Garijo, D.; Schwabe, D.; Szekely, P. A study of the quality of Wikidata. J. Web Semant. 2022, 72, 100679. [Google Scholar] [CrossRef]
- Amaral, G.; Piscopo, A.; Kaffee, L.-A.; Rodrigues, O.; Simperl, E. Assessing the quality of sources in Wikidata across languages: A hybrid approach. J. Data Inf. Qual. (JDIQ) 2021, 13, 1–35. [Google Scholar] [CrossRef]
- Rozado, D. The political biases of chatgpt. Soc. Sci. 2023, 12, 148. [Google Scholar] [CrossRef]
- Ferrara, E. Should chatgpt be biased? challenges and risks of bias in large language models. arXiv 2023, arXiv:2304.03738. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. What has ChatGPT read? References and referencing of archaeological literature by a generative artificial intelligence application. arXiv 2023, arXiv:2308.03301. [Google Scholar] [CrossRef]
- Chang, K.K.; Cramer, M.; Soni, S.; Bamman, D. Speak, memory: An archaeology of books known to chatgpt/gpt-4. arXiv 2023, arXiv:2305.00118. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. Non-responsiveness of DALL-E to exclusion prompts suggests underlying bias towards Bitcoin. SSRN 2025. [Google Scholar] [CrossRef]
- Park, P.; Schoenegger, P.; Zhu, C. “Correct answers” from the psychology of artificial intelligence. arXiv 2023, arXiv:2302.07267. [Google Scholar]
- Rutinowski, J.; Franke, S.; Endendyk, J.; Dormuth, I.; Pauly, M. The Self-Perception and Political Biases of ChatGPT. arXiv 2023, arXiv:2304.07333. [Google Scholar] [CrossRef]
- Motoki, F.; Pinho Neto, V.; Rodrigues, V. More human than human: Measuring chatgpt political bias. SSRN 2023, 198, 3–23. [Google Scholar] [CrossRef]
- McGee, R.W. Is chat gpt biased against conservatives? an empirical study (February 15, 2023). SSRN 2023. [Google Scholar] [CrossRef]
- Hartmann, J.; Schwenzow, J.; Witte, M. The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv 2023, arXiv:2301.01768. [Google Scholar] [CrossRef]
- Rutinowski, J.; Franke, S.; Endendyk, J.; Dormuth, I.; Roidl, M.; Pauly, M. The Self-Perception and Political Biases of ChatGPT. Hum. Behav. Emerg. Technol. 2024, 2024, 7115633. [Google Scholar] [CrossRef]
- Motoki, F.; Pinho Neto, V.; Rodrigues, V. More human than human: Measuring ChatGPT political bias. Public Choice 2024, 198, 3–23. [Google Scholar] [CrossRef]
- Cao, Y.; Zhou, L.; Lee, S.; Cabello, L.; Chen, M.; Hershcovich, D. Assessing cross-cultural alignment between chatgpt and human societies: An empirical study. arXiv 2023, arXiv:2303.17466. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. The layered injection model of algorithmic bias as a conceptual framework to understand biases impacting the output of text-to-image models. SSRN 2025. [Google Scholar] [CrossRef]
- Moayeri, M.; Basu, S.; Balasubramanian, S.; Kattakinda, P.; Chengini, A.; Brauneis, R.; Feizi, S. Rethinking artistic copyright infringements in the era of text-to-image generative models. arXiv 2024, arXiv:2404.08030. [Google Scholar]
- Kaplan, D.M.; Palitsky, R.; Arconada Alvarez, S.J.; Pozzo, N.S.; Greenleaf, M.N.; Atkinson, C.A.; Lam, W.A. What’s in a name? Experimental evidence of gender bias in recommendation letters generated by ChatGPT. J. Med. Internet Res. 2024, 26, e51837. [Google Scholar] [CrossRef]
- Duan, W.; McNeese, N.; Li, L. Gender Stereotypes toward Non-gendered Generative AI: The Role of Gendered Expertise and Gendered Linguistic Cues. Proc. ACM Hum. Comput. Interact. 2025, 9, 1–35. [Google Scholar] [CrossRef]
- Melero Lázaro, M.; García Ull, F.J. Gender stereotypes in AI-generated images. El Prof. Inf. 2023, 32, e320505. [Google Scholar] [CrossRef]
- Hosseini, D.D. Generative AI: A problematic illustration of the intersections of racialized gender, race, ethnicity. OSF Prepr. 2024. [Google Scholar] [CrossRef]
- Currie, G.; John, G.; Hewis, J. Gender and ethnicity bias in generative artificial intelligence text-to-image depiction of pharmacists. Int. J. Pharm. Pract. 2024, 32, 524–531. [Google Scholar] [CrossRef] [PubMed]
- Gisselbaek, M.; Suppan, M.; Minsart, L.; Köselerli, E.; Nainan Myatra, S.; Matot, I.; Barreto Chang, O.L.; Saxena, S.; Berger-Estilita, J. Representation of intensivists’ race/ethnicity, sex, and age by artificial intelligence: A cross-sectional study of two text-to-image models. Crit. Care 2024, 28, 363. [Google Scholar] [CrossRef]
- Rieder, B.; Sire, G. Conflicts of interest and incentives to bias: A microeconomic critique of Google’s tangled position on the Web. New Media Soc. 2014, 16, 195–211. [Google Scholar] [CrossRef]
- Ursu, R.M. The power of rankings: Quantifying the effect of rankings on online consumer search and purchase decisions. Mark. Sci. 2018, 37, 530–552. [Google Scholar] [CrossRef]
- Zannettou, S.; Caulfield, T.; De Cristofaro, E.; Sirivianos, M.; Stringhini, G.; Blackburn, J. Disinformation warfare: Understanding state-sponsored trolls on Twitter and their influence on the web. In Proceedings of the WWW ‘19: Companion Proceedings of The 2019 World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 218–226. [Google Scholar]
- Pavlíková, M.; Šenkýřová, B.; Drmola, J. Propaganda and disinformation go online. In Challenging Online Propaganda and Disinformation in the 21st Century; Gregor, M., Mlejnková, P., Eds.; Springer: Cham, Switzerland, 2021; pp. 43–74. [Google Scholar]
- Pan, C.A.; Yakhmi, S.; Iyer, T.P.; Strasnick, E.; Zhang, A.X.; Bernstein, M.S. Comparing the perceived legitimacy of content moderation processes: Contractors, algorithms, expert panels, and digital juries. Proc. ACM Hum. Comput. Interact. 2022, 6, 1–31. [Google Scholar] [CrossRef]
- Yaccarino, L. Why X Decided to Bring the Content Police in-House. 6 February 2024. Available online: https://fortune.com/2024/02/06/inside-elon-musk-x-twitter-austin-content-moderation (accessed on 1 August 2024).
- Coeckelbergh, M. LLMs, truth, and democracy: An overview of risks. Sci. Eng. Ethics 2025, 31, 4. [Google Scholar] [CrossRef]
- Lazar, S.; Manuali, L. Can LLMs advance democratic values? arXiv 2024, arXiv:2410.08418. [Google Scholar] [CrossRef]
- Spennemann, D.H.R. “Delving into”: The quantification of AI generated content on the internet (synthetic data). arXiv 2025, arXiv:2504.08755. [Google Scholar] [CrossRef]
- Brooks, C.; Eggert, S.; Peskoff, D. The Rise of AI-Generated Content in Wikipedia. arXiv 2024, arXiv:2410.08044. [Google Scholar] [CrossRef]
- Wagner, C.; Jiang, L. Death by AI: Will large language models diminish Wikipedia? J. Assoc. Inf. Sci. Technol. 2025, 76, 743–751. [Google Scholar] [CrossRef]
- McGee, R.W. Ethics committees can be unethical: The chatgpt response. SSRN 2023. Available online: https://ssrn.com/abstract=4392258 (accessed on 1 August 2024).
- McGee, R.W. Can Tax Evasion Ever Be Ethical? A ChatGPT Answer. SSRN 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4413428 (accessed on 1 August 2024).
- Hunt, C.; Rouse, S.M. Polarization and Place-Based Representation in US State Legislatures. Legis. Stud. Q. 2024, 49, 411–424. [Google Scholar] [CrossRef]
- Forster, C.M.; Dunlop, D.A. Divided We Advertise: A Comparative Analysis of Post-Citizens United Political Advertising in an Increasingly Polarised United States. Bristol Inst. Learn. Teach. (BILT) Stud. Res. J. 2024, 28, 1–12. [Google Scholar]
- Draca, M.; Schwarz, C. How polarised are citizens? Measuring ideology from the ground up. Econ. J. 2024, 134, 1950–1984. [Google Scholar] [CrossRef]
- Hughes, S.; Spennemann, D.H.R.; Harvey, R. Printing heritage of colonial newspapers in Victoria: The Ararat Advertiser and the Avoca Mail. Bull. Bibliogr. Soc. Aust. N. Z. 2004, 28, 41–61. [Google Scholar]
- Gerard, P.; Botzer, N.; Weninger, T. Truth Social Dataset. In Proceedings of the International AAAI Conference on Web and Social Media, Limassol, Cyprus, 5–8 June 2023; pp. 1034–1040. [Google Scholar]
- Roberts, J.; Wahl-Jorgensen, K. Strategies of alternative right-wing media: The case of Breitbart News. In The Routledge Companion to Political Journalism; Routledge: London, UK, 2021; pp. 164–173. [Google Scholar]
- MohanaSundaram, A.; Sathanantham, S.T.; Ivanov, A.; Mofatteh, M. DeepSeek’s Readiness for Medical Research and Practice: Prospects, Bottlenecks, and Global Regulatory Constraints. Ann. Biomed. Eng. 2025, 53, 1754–1756. [Google Scholar] [CrossRef]
- Girich, M.; Magomedova, O.; Levashenko, A.; Ermokhin, I.; Chernovol, K. Restricting DeepSeek operations, obligating platforms to pay tips, restricting the sale of personal data, protecting intellectual property rights in AI training, anti-competitive practices online. SSRN 2025. [Google Scholar] [CrossRef]
- Henry, C. They make press barons look good. Br. J. Rev. 2025, 36, 13–18. [Google Scholar] [CrossRef]
- Wilson, G.K. Business, Politics, and Trump. In The Changing Character of the American Right, Volume II: Ideology, Politics and Policy in the Era of Trump; Springer: Berlin/Heidelberg, Germany, 2025; pp. 53–74. [Google Scholar]
- Neundorf, A.; Nazrullaeva, E.; Northmore-Ball, K.; Tertytchnaya, K.; Kim, W. Varieties of Indoctrination: The Politicization of Education and the Media around the World. Perspect. Politics 2024, 22, 771–798. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).