Essay

Generative Artificial Intelligence and the Future of Public Knowledge

by Dirk H. R. Spennemann 1,2
1 Gulbali Institute, Charles Sturt University, P.O. Box 789, Albury, NSW 2640, Australia
2 Libraries Research Group, Charles Sturt University, Wagga Wagga, NSW 2678, Australia
Knowledge 2025, 5(3), 20; https://doi.org/10.3390/knowledge5030020
Submission received: 19 May 2025 / Revised: 26 August 2025 / Accepted: 12 September 2025 / Published: 17 September 2025

Abstract

Generative artificial intelligence (AI), in particular large language models such as ChatGPT, has reached public consciousness, with wide-ranging discussion of its capabilities and suitability for use in various professions. Following the printing press and the internet, generative AI language models are the third transformative technological invention with a truly cross-sectoral impact on knowledge transmission and knowledge generation. While the printing press allowed for the transmission of knowledge independent of the physical presence of the knowledge holder, with publishers emerging as gatekeepers, the internet added a level of democratization, allowing anyone to publish, along with global immediacy. The development of social media resulted in increased fragmentation and tribalization of online communities in their ways of knowing, resulting in the propagation of alternative truths that resonate in echo chambers. It is against this background that generative AI language models have entered public consciousness. Using strategic foresight methodology, this paper examines the proposition that the age of generative AI will emerge as an age of public ignorance.

1. Introduction

Even though generative artificial intelligence (AI) language models only reached wide-scale public consciousness with the public release of ChatGPT 3.5 in November 2022, much has been written on the potentially transformative nature of generative AI in various professions. Early concerns about the limitations of AI [1,2] appeared to have been overcome, if the media hype was to be believed. There was an initial flurry of work examining its potential in disciplines such as agriculture [3], chemistry [4], computer programming [5], cultural heritage management [6], diabetes education [7], medicine [8,9], museum exhibitions [10], nursing education [11], radiography [12] and remote sensing in archaeology [13]. Soon after, however, deeper explorations occurred, including the application of generative AI as a brainstorming tool for the development of strategic foresight scenarios [14,15,16,17].
Ever since the public launch of ChatGPT, there has been considerable public fascination with generative AI language models and the popularization of their capabilities and suitability for various professions, as well as technophobic scenarios aired in the popular press [18,19]. The uptake and use of generative AI applications have been phenomenal. At the time of writing (May 2025), ChatGPT, holding a market share of 59.2%, was handling 1 billion queries from 122 million active users a day, with a total of 4.5 billion site visits in March 2025 alone [20].
Yet little thought appears to have been given to the implications that generative AI language models may have for the formation of public knowledge in the medium- and long-term future [21,22,23]. In this paper I will argue that generative AI is the latest development in a series of technological inventions that are truly transformative in their cross-sectoral impact on knowledge transmission and public education. I will further posit that, unlike the earlier seismic shifts caused by the inventions of the printing press and the internet, both of which expanded public access to knowledge, this latest shift may not prove to be as beneficial as currently touted.
Using strategic foresight methodology [24,25] and drawing on Jim Dator’s dictum that “any useful statement about the future appears [at first] ridiculous” [26,27], this paper will examine the polemic proposition that the age of generative AI will emerge as an age of public ignorance. Given that this paper is a deliberation, it does not follow the standard IMRAD (Introduction, Methods, Results and Discussion) format.

2. Trajectories of the Creation of Public Knowledge

Before we consider the possible implications of generative AI for the creation of public knowledge, we need to consider the long- and short-term trajectories that define the present as we know it. For the purposes of this paper, public knowledge encompasses publicly available information on facts, practices, and social norms that is widely disseminated through communication, publications, or common experience; that is recognized as commonly known (as opposed to specialist or expert) knowledge; and that is accepted as by and large credible or true by a community.

2.1. The Pre-Digital Creation of Public Knowledge

Before the Age of Enlightenment and the subsequent Scientific Revolution, knowledge was concentrated in a few hands, essentially the clergy and later also the various guilds of professionals and artisans. As a manifestation of power and social control, both literacy and professional knowledge were carefully curated. People were generally excluded from access to the knowledge and technology held by a guild, as well as the economic opportunities this represented, unless they had been formally admitted and sworn to secrecy [28,29]. Johannes Gutenberg’s invention of the printing press (1445) allowed for the mass production of texts, ranging from Bibles to dictionaries [30]. In printed form, knowledge could be passed on without the physical presence of the knowledge holders and be rapidly disseminated by book traders to all those who could read [31,32]. Yet knowledge largely continued to be curated, with publishers emerging as the new gatekeepers and commercial or political interests influencing what was deemed publishable [32,33]. In addition to standard works such as Bibles, psalters and dictionaries, the printing press soon allowed for the broadcasting of political news in the form of pamphlets. Early examples are the pamphlet publication campaigns during the Bauernkrieg (Great Peasants’ Revolt) of 1524–1525 in Germany [34] or the English Civil Wars (1641–1651) [35]. Formal publication, and thus public dissemination, of parts of academic knowledge commenced during the mid-seventeenth century, such as Matthäus Merian’s Historiae naturalis de quadrupetibus (natural history of quadrupeds) in 1652 [36].
During the Age of Enlightenment, formal and later compulsory public education not only raised the literacy levels of the general public but also opened the doors for a broad range of knowledge to be systematically disseminated in printed form, such as Diderot’s Encyclopédie [37]. The societal change that this entailed led to well-educated generations of educators, civil servants and professionals, aspiring to improve their own and their children’s social position through education and knowledge. In addition to the ability to enter most professions on academic merit, a proliferation of multi-volume encyclopedias meant that everybody who had the economic means to acquire a set, or to access one in the emerging public libraries, had access to a broad range of carefully curated information [38]. While well-known examples are the Encyclopaedia Britannica (Edinburgh, from 1768 onwards), Diderot’s Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (Paris, 1751 to 1772) or the Brockhaus Conversations-Lexikon (Leipzig, 1808 onwards) [39,40], examples exist in many other languages [41]. The nineteenth century saw the development of Mechanics’ Institutes and similar venues of adult education [42], as well as the rise of university- and technical-college-trained professionals [43,44] who engaged in outreach, extension and public education and thereby transformed many professions that had maintained traditional practices, such as agriculture [45]. During the second half of the twentieth century, initiatives like the GI Bill in the USA [46] or the Dawkins reforms of the 1980s in Australia [47] saw an expansion of the tertiary education sector, with a concomitant dramatic increase in college- and university-educated professionals and civil servants [48,49]. In the closing years of the twentieth century, formal outreach (‘extension’) [50] and public education processes began to wither and gave way to TED Talks [51].

2.2. The Creation of Public Knowledge in an Online World

Even though multivolume encyclopedias existed and often were the hallmark of educated families, their prohibitive cost meant that they only graced the shelves of upper-class and aspiring upper-middle-class families [52]. The public release of the World Wide Web (WWW) in 1993 [53,54] spawned a transformative technology on a global scale, putting information at the fingertips of those who could afford a computer. The ubiquity of smartphones by the end of the first decade of the twenty-first century put to rest any fears of a digital divide in knowledge access [55,56]. While websites and the knowledge contained therein were initially managed via Special Interest Networks curated by academics and IT specialists [57], search engines based on web crawler algorithms soon democratized the process, not only by automatically indexing the content on the web but also by allocating ‘page ranks’ based on the connectivity of the individual pages and the number of links that pointed back to them [58].
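To illustrate the principle, a page’s rank can be computed iteratively from the ranks of the pages that link to it. The following is a minimal sketch of that idea only; the function name, damping factor and toy link graph are illustrative assumptions, not Google’s actual implementation.

```python
# Minimal sketch of the page-rank principle described above (illustrative
# only; not Google's actual code). A page's rank is the sum of the shares
# of rank it receives from the pages that link to it.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small base rank; the rest is redistributed
        # along outgoing links (dangling pages are ignored for simplicity).
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Three pages: A and C both link to B, so B accumulates the highest rank.
web = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
print(pagerank(web))
```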
Web publishing, as well as the associated free access to that information (subject to a computer and an internet connection), essentially democratized knowledge dissemination and knowledge acquisition [59,60]. This development revolutionized the public dissemination of knowledge: not only did it make content readily available on a global scale, but emerging online discussion groups also allowed for the development of highly specialized online communities whose members shared and pooled their knowledge. In addition, knowledge aggregators soon emerged, generated by their users in a distributed model. Examples of this are Wikipedia (since 2001) [61], Quora (since 2006) and various narrow-focus ‘wikis’ (e.g., camera-wiki.org, accessed on 1 September 2025).
Concurrent with the ever-increasing body of information, usage patterns on the WWW changed. A web initially populated and used by early adopters and ‘techno-geeks’ soon saw widespread cross-sectoral and cross-generational adoption. The emergent future-native generation (sensu Inayatullah [62]) began to rely on the WWW as a primary source of information, so much so that ‘to google’ has become an accepted verb [63]. Concomitantly, users came to expect that answers to almost any question could be obtained with a high degree of immediacy, from cooking recipes to medical advice (‘ask Dr. Google’) [64,65]. Given that much of this information is provided in a largely decontextualized form, users have few avenues to assess its veracity. Where information is contextualized, a casual user is likely to lack the skills and background knowledge to fully understand the implications.
The commercialization of the WWW soon saw page ranks no longer being purely defined by connectivity but influenced by commercial interests, ranging from promotional revenue to behind-the-scenes business interests of the search engine providers [58,66,67,68]. Today, even though other search engines exist, Google, and to a lesser degree Bing, dominate the search engine market [68], often integrated (as the default setting) with customized web browsers offered by the same companies (e.g., Google Chrome and Microsoft Edge). While the WWW allows, at least in theory, for an anarchic ‘free-for-all’ in publishing content, in practice access to that content occurs via a search engine which, with its page-ranking algorithms, effectively functions as a gatekeeper [69,70]. Whilst it is possible to find almost any content with persistence, aided by a complex set of keyword combinations and nested search logic, this requires techno-literacy. In consequence, the majority of web searches do not progress beyond the first page of links offered up by a search engine [71,72], and most users seem to be satisfied with the fragmented, snippet-like information they are presented with.
In a parallel development, segmented digital communities with special interests emerged: LinkedIn (2002), Flickr (2004), Reddit (2005), Twitter (now ‘X’, 2006), ResearchGate (2008), and Instagram (2010), as well as Facebook (2004, now Meta), which was to become a social media behemoth. While some virtual communities are highly specific to segments of society, such as Flickr (photographers) or ResearchGate (academia), others are cross-sectional. Within these online communities, increasingly specialized sub-communities emerged, catering for highly segmented needs. These online sub-communities facilitated three parallel developments: the generation of genuinely new knowledge, for example driven by the study and technical observation of collectible items (such as Camera-Wiki); the rise of social media ‘influencers’, primarily on YouTube, Instagram and, more recently, TikTok [73,74,75]; and the emergence of ‘alternative truths’ and ‘alternative facts’ [76].
The latter trends commenced with the popularization of post-modernist thought [77,78] but have been accelerated by the ease with which opinions and interpretations can be ‘published’ and disseminated via web pages and social media posts without any editorial control as to veracity. Misinformation and intentional disinformation are rife [79,80].
In the media world, cost cutting reduced, if not eliminated, the role of subeditors, which, coupled with a decline in critical and investigative journalism, resulted in a flow of unedited, and on occasion unfiltered, information [81]. While this has given rise to fact-checking sites such as Snopes, PolitiFact or AAP FactCheck [82,83] (even though their educative efficacy on public opinion may be marginal [76]), the role of subject matter experts has been devalued and the role of academics in the wider community has been diminished [84,85,86].
The social media ecosystems that developed from this became sources of knowledge and ‘truth’, with the emergence of narrow-cast ideological viewpoints bouncing inside echo chambers devoid of divergent views [87,88,89,90]. The conspiracy theories of the ‘anti-vaxxer’ movements during the COVID-19 pandemic [91,92,93], or the alternative narratives created around the 6 January insurrection in Washington, DC (USA), are both cases in point [94,95]. The echo chamber effect is further enabled by social media algorithms that offer users copious amounts of similar text, images and videos in response to site usage patterns that are recorded and analysed in the background [96,97].
It is in these contexts that the development of generative AI models and their uptake by the general public needs to be seen.

3. The Transformative Power of Generative AI Language Models

Generative AI language models, such as OpenAI’s ChatGPT, Google’s Gemini and the Chinese DeepSeek, are deep learning models that use a transformer architecture to detect the statistical connections and patterns in textual data in order to generate coherent and contextually relevant human-like responses based on the input they receive [98,99,100]. Generative AI language models are pre-trained on a large and diverse body of textual material, such as books (both fiction and non-fiction), government documents and articles, and webpages. Pre-training teaches such models to anticipate the following word in a text string by deriving statistical and linguistic patterns and semantic fields from that material; this is subsequently refined through human interaction and feedback. The depth and complexity of responses is correlated with the size of the training dataset and the nature of the textual resources incorporated into that dataset.
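The principle of ‘anticipating the following word’ can be illustrated with a deliberately simplistic sketch. The example below uses raw bigram counts rather than a transformer network with billions of parameters; it is a toy stand-in, assumed for illustration only, to show that the ‘most likely continuation’ is derived from frequency patterns in the training text, not from any understanding of meaning.

```python
# Drastically simplified illustration of statistical next-word prediction.
# Real models use transformer networks over billions of parameters; this
# bigram counter merely shows that the predicted continuation comes from
# patterns in the training text, not from comprehension.
from collections import Counter, defaultdict

training_text = (
    "the press printed books the press printed pamphlets "
    "the internet spread knowledge the internet spread opinions"
).split()

# Count which word follows which in the training data.
successors = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most frequent continuation of `word`."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("press"))     # -> 'printed'
print(predict_next("internet"))  # -> 'spread'
```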
Taking ChatGPT as an example, the underlying GPT language model has undergone several iterations and improvements since its formal release in 2018. GPT-2, released in 2019, was a model of 1.5 billion parameters and possessed the ability to provide longer segments of coherent text, incorporating human preferences and feedback. The next release, GPT-3 (June 2020), scaled to 175 billion parameters, allowing it to execute diverse natural language tasks, such as text classification and sentiment analysis, thereby facilitating the contextual answering of questions [101]. In addition to functioning as a chatbot, pre-training at this scale allowed ChatGPT to draft basic contextual texts such as e-mails and programming code. ChatGPT, running on GPT-3.5, was released to the general public in November 2022 as part of a free research preview to encourage experimentation [102]. GPT-4 (March 2023) exhibited responsiveness to user intentions as expressed in the questions/query tasks, a reduced probability of generating offensive or dangerous output and a greater factual accuracy [101,103,104]. The temporal cut-off for the addition of training data for both GPT-3.5 and GPT-4 was September 2021, which implies that ChatGPT cannot integrate or comment on events, discoveries and viewpoints later than that date, even though training and fine-tuning of the models is ongoing. GPT-4 underwent progressive improvements [105], with ChatGPT-4o released by OpenAI in May 2024 [106]. ChatGPT-4o is linked to OpenAI’s DALL-E text-to-image generation algorithm, which allows the user to request a visualization of the concepts included in ChatGPT-4o’s answers [107,108]. Since then, OpenAI has released an updated version of ChatGPT-4o that has its own integrated photorealistic image generator [109,110]. In February 2025, a new version, GPT-4.5, was rolled out [111].
Generative AI language models are not a monolith, however. Apart from competing public-use products, such as OpenAI’s ChatGPT, Google’s Gemini and DeepSeek, the underlying technology allows for customization. While the open-access models that captured the public imagination draw on a large dataset of public knowledge, industry-specific applications can rely on a customized and well-defined training dataset. Consider a museum setting, for example, where generative AI language models can be used to conceptualize and plan exhibitions based on museum holdings, to extract and summarize pertinent data from longer documents and collections inventory databases [112], to create texts for exhibition panels, object labels, catalogue information and museum guides [113,114,115], as well as to respond to user queries, track reactions to specific exhibitions or the museum overall, and track visitor satisfaction [116,117]. Consider a business setting, such as a housing developer, where a generative AI language model is coupled with generative visual AI design. A user could interactively design a home, with the prospective homeowner using their own language to express their desires and concepts. Generative AI could prompt where needed and offer aspects of home design that have not been considered. Once fully customized with choices such as bathroom fittings, the total design can not only be automatically costed but a broad delivery time frame can also be calculated. Consider also a governmental portal, where a generative AI language model can guide a user through the labyrinth of regulations, funding opportunities and general service delivery.
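How such an application can be confined to a curated knowledge base may be sketched, for the museum example above, as retrieval-augmented prompting. Everything in the sketch below, including the record names, the naive keyword retrieval and the prompt wording, is an illustrative assumption; production systems typically use vector embeddings and a hosted language model rather than this toy matching.

```python
# Minimal sketch of confining an assistant to a curated knowledge base via
# retrieval-augmented prompting (illustrative assumptions throughout).

CURATED_RECORDS = [
    "Object 1887.12: brass field camera, c. 1890, donated by the Smith family.",
    "Object 1901.03: glass plate negatives of Albury street scenes, c. 1900.",
    "Gallery 2 is closed for refurbishment until further notice.",
]

def retrieve(query, records, top_n=2):
    """Rank curated records by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(records,
                    key=lambda r: len(words & set(r.lower().split())),
                    reverse=True)
    return scored[:top_n]

def build_prompt(query):
    """Constrain the model to answer only from the curated records."""
    context = "\n".join(retrieve(query, CURATED_RECORDS))
    return (f"Answer using ONLY the records below.\n"
            f"Records:\n{context}\n\nQuestion: {query}")

print(build_prompt("Which cameras are in the collection?"))
```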
While such approaches allow for a maximum of highly personalized user input and user interaction, expressed in the user’s own manner, a major shortcoming exists: such approaches lack the capacity for genuine empathy, although models have been developed that mimic human relationships [118,119]. Another issue is that any human creativity is confined to the user interacting with the generative AI model, rather than being the combined creativity of the user and the person answering, as would be the case in inter-human communication.
Given that the outputs of generative AI language models are merely complex text predictions based on statistical connections and patterns in the textual data included in their training dataset, such language models, at least at this point in time, can suffer from inverted logic phenomena [6] and are incapable of independent creative thought. Earlier models could be manipulated to answer with inverted valence and thus provide unethical responses [120]. Generative AI models were prone to hallucinate by generating spurious text and reasoning [121,122], and this still occurs in some models [123].
One of the key criteria for artificial intelligence to be deemed intelligent was the Turing test, a benchmark devised by Alan Turing in 1950 [124], in which a human interrogator, while engaging in text-based conversations with a human and a machine, is unable to determine which is which because the machine convincingly demonstrates human-like intelligent behaviour [125,126]. While recent large language models seem to be able to pass the Turing test [127,128], this in itself does not signal general intelligence, reasoning or creativity [129]. Any apparent creativity displayed by generative AI language models, such as when providing a requested poem, rests solely in the perception of the person interacting with the language model. The reader, interpreting the output within their own experiences and expectations, will judge a generative AI-written poem as creative and ‘fit-for-purpose’ or will dismiss it as bad poetry.
Common to the examples presented above is that the ‘knowledge’ applied by the model is owned by the entity that deploys the generative AI model(s), and that the knowledge base contained in its training dataset is finite, well circumscribed and deemed to be authoritative. All answers provided will adhere to one truth only, and, given the design of the model and its training, that truth will be absolute. In industry-specific applications this may be applicable and apposite, but what about general, public settings, where truth is based on the presentation of evidence and its critical examination?

4. The Creation of Public Knowledge by Generative AI Language Models

At first sight, generative AI language models appear to democratize knowledge acquisition. Because they can seemingly interact with a user in the user’s natural speech pattern and provide a topical and seemingly appropriate response, they remove the need for complex and tedious research and the onus this imposes on the user. Seemingly informed answers are at the very fingertips of every user who has access to generative AI language models via a computer or a smartphone—which encompasses just about every youth and adult in the global north, and most adults in the global south.
Yet there are very fundamental differences between human knowledge and generative AI “knowledge”. Whereas human knowledge is grounded in experience, perception, understanding, and the ability to reflect, reason, and learn continuously, generative AI “knowledge” is derived from statistically learned patterns embedded in neural network weights, not explicit facts or understanding. While human knowledge involves emotions, beliefs, context awareness, and moral judgment, AI “knowledge” as expressed in its responses is based on probability, not comprehension, and thus lacks understanding, consciousness, or awareness. Generative AI cannot verify truth or meaning. The inherent problem, and the pitfall associated with this, is the human-like nature of the generative AI response and its seemingly comprehensive reply, delivered in an authoritative tone that mimics human answers. Unless digitally and AI literate, users may believe this to be a knowledgeable answer and, over time, may come to believe that the AI they are ‘conversing’ with ‘knows.’
Large language models, such as ChatGPT, allow the user to request a rerun of the same task/query if the response is deemed inadequate. The user is then prompted to comment on the adequacy of the second response, which allows the generative AI model to ‘learn’ [130] (see the sketch below). In addition, the tweaking or ‘engineering’ of prompts will result in ‘better’ responses [131,132]. Any iterative user engagement beyond the initial ‘naïve’ query will, by necessity, be shaped by the epistemological foundations of the user and the familial, communal, educational, sociopolitical and historical contexts in which they are enculturated [133,134,135,136].
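A minimal sketch of what such a regenerate-and-rate feedback mechanism could record is given below. The structure of the logged preference pair is an assumption made purely for illustration; how vendors actually fold these signals back into their models is proprietary.

```python
# Illustrative sketch of logging a regenerate-and-rate preference pair
# (assumed record structure; not any vendor's actual pipeline).
import json

feedback_log = []

def record_feedback(prompt, first_answer, regenerated_answer, preferred):
    """Store a chosen/rejected pair of the kind usable for later fine-tuning."""
    feedback_log.append({
        "prompt": prompt,
        "chosen": regenerated_answer if preferred == "new" else first_answer,
        "rejected": first_answer if preferred == "new" else regenerated_answer,
    })

record_feedback("Summarize the peasants' revolt of 1525.",
                "It was a tax dispute.",
                "It was a broad social uprising against feudal burdens.",
                preferred="new")
print(json.dumps(feedback_log, indent=2))
```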
It can be assumed that there will always be individuals who engage in critical enquiry and thus have the desire and the capacity to triangulate the validity and veracity of answers from multiple sources. Yet, based on the trajectory of current WWW usage, most users will be looking for a quick, one-shot answer without the need to engage in ‘laborious’ research that could be considered, ever so marginally, ‘in-depth.’ The allure of generative AI language models is that queries can be asked in the user’s natural way of expressing themselves rather than by entering a series of arcane keyword combinations that best summarize what the user is seeking to know. Depending on how the question is asked, the user is presented with a concise or a contextual answer—an answer that will be delivered in seconds. This immediacy is well suited to the preference for instant gratification observable in contemporary society [137]. Further elaboration, if required, occurs in the form of a ‘dialogue’ with the generative AI model which effectively mimics the user’s interpersonal communication patterns. A significant advantage of generative AI language models over standard web pages is that the response is tailored specifically to the question in the way it was asked, thereby obviating the need to screen a body of text such as a web page or a Wikipedia entry for the specific information sought [138].
Even though most web searches do not progress beyond the first page of links offered by a search engine, they still offer the user a choice of information source(s) to access. Furthermore, such webpages will contain contextual information as well as information beyond the scope of the initial question in the user’s mind. Often more than one webpage will be consulted, thus exposing the user to a wider set of information which, when ‘consumed’, may provide context, thereby broadening the user’s perceptions of the issue at hand, and may well result in a different interpretation of the context, leading to new lines of questions. Questions posed to generative AI language models will provide one targeted answer, the validity of which must be taken at face value. The downside of such narrow-casting of responses is that the user is not exposed to context and receives an answer in isolation. This is not a new phenomenon, however, as the development of online versions of hardcopy publications, or online-only versions, provides users with instant access to a resource but deprives them of the opportunity to vicariously browse other publications that were, traditionally, shelved under the same Dewey or Library of Congress codes.
While the response to a question posed to a generative AI language model can be regenerated, the result will be one answer that is broadly the same as the answer received before. The only question is whether that single answer satisfies the user’s needs and their expectations of veracity.
Given that people have ‘traditionally’ used the Google search engine as an answer tool by posing a general question (e.g., “What clothes should I wear when visiting Milan in January?”) and then perusing weblinks that looked promising, it was inevitable that Google would develop and deploy AI-generated summaries. These now reside at the top of every results page, providing a digest of information gleaned from the WWW. One of the implications is that users who consider themselves time-poor will only peruse that summary and will no longer look up individual pages. With this development, Google has morphed from a gatekeeper ranking information into a purveyor of knowledge. Critical in this regard is that while webpages are, by and large, authored by humans and provide information or opinions that can be scrutinized, this newly generated knowledge is provided without any curation or quality control. Moreover, the fluidity of the response, continually subject to regeneration, makes third-party quality control difficult—one cannot go back to a previous state.
The underlying assumption of the user is that the information provided is trustworthy—with the trust derived from the brand image and reputation of Google as a commercial entity. Setting aside that Gemini has been found to generate gibberish [139] (as opposed to hallucinating), Gemini provides weblinks to sources, although these links are small (indicated by a paper clip). It can be surmised that only very few users will follow these to check up on Gemini’s assertions. Furthermore, as Google functions as a gatekeeper, it has the power to rank some links above others—which it already does as part of revenue raising. Similar links are offered by ChatGPT.
Polemically, I posit that over time the critical thinking of the majority of public users will decline even further, that single-answer solutions by Gemini or ChatGPT, in particular when offered in an interactive, natural language mode of delivery, will suffice, and that these answers will be seen as a fount of knowledge which, given enough people being served similar information, will morph into public knowledge. I base this proposition on five trajectories:
  • Generative AI language models are suited to semi-automated repetitive and routine tasks (drafting e-mails, summarizing and extracting information from larger textual datasets, providing item selections based on semi-vague user input) that are customized to a user’s needs [140,141]. The increasing familiarity with such systems in daily work life will ‘bleed’ into daily practice in non-work settings, leading to widespread uptake.
  • In an age of both instant gratification [137] and an attitude that ‘near enough is good enough,’ the bulk of the public will avail themselves of solutions that provide the most immediate and convenient answers with the least amount of effort (cognitive offloading)—especially where confidence in the abilities of AI is high [142,143].
  • Transformative technologies that satisfy this demand are poised to gain traction and dominance over alternate ‘traditional’ and labour-intensive approaches.
  • There is a worrying trend that sees critical thinking skills and information literacy in near-terminal decline among large swathes of the populace. Evidence for this can be found in the increasingly uncritical consumption of news and information, the growing reliance on and trust placed in the opinions of social media influencers [144,145], and the continued devaluation of academic subject matter experts. At present, many researchers, relying on years of experience and rigorous, peer-reviewed research, may well generate findings and insights into social or environmental phenomena, only to have these dismissed out of hand, without any evidence to the contrary, by ideologically or politically motivated commentators and social media influencers who have assumed a position of authority in online communities [146,147,148]. The past decade has shown an increased level of tribalism in the general public, where the selective use of news sources, online communities that act as echo chambers, and the spruiking of alternative ‘truths’ that defy unequivocal evidence to the contrary have increasingly become normalized [149,150]. In many Western democracies there is no indication that this trend will abate anytime soon. Rather, it is bound to continue, intensify and accelerate.
  • Finally, there are multiple examples where, over time, information sources that once were derided as untrustworthy or shallow have become accepted by the general public not only as the norm but also as the primary source of information. A good example is Wikipedia, which has become one of the main ‘go-to’ sites on the internet, even though its content is neither created by accredited experts nor reviewed by other experts and thus is of mixed quality, subject to the epistemology of the page authors, revisers and editors [151,152,153,154,155].
Even though it is possibly of little concern to the average user, any reliance on generative AI language models has fundamental problems, as any such model can only be as good as its design. ChatGPT, for example, often purports merely to strive to provide factual and neutral information and not to hold political opinions [156]. Because model specifications, algorithmic constraints and policy decisions shape the final product [157], ChatGPT and any other generative AI language model cannot be without bias. This relates to the quality of the source material that comprises the dataset, such as whether primary, secondary or even tertiary sources, such as Wikipedia, have been used to train the model [158,159]. Additional biases derive from the selection of the source material, which would have been subconsciously, if not consciously, influenced and shaped by the ideologies of the people programming, ‘feeding’ and training the system. Further biases derive from the epistemological basis of the staff who train a model during the red-teaming phase [160]. Consequently, while some studies suggested right-leaning moral foundations in the generated answers [161], political orientation tests, for example, showed that ChatGPT exhibits a preference for libertarian, progressive, and left-leaning viewpoints [156,162,163,164,165,166,167], with a North American slant [168]. Similar problems have been identified for text-to-image generation models [169] that are increasingly being used to generate copyright-free stock imagery [170].
Gender as well as ethnic (‘racial’) stereotypes in relation to professions and occupations are damaging, as they harm a person’s self-esteem and career prospects and may lead to discrimination (from subconscious to overt). Generative AI models have been demonstrated to perpetuate stereotypes, in particular in terms of gender (e.g., mechanics are male, nurses are female) [171,172,173] and ethnicity (Caucasian characters in authority roles, African Americans in service roles) [174,175,176], and to favour the representation of younger individuals [175,176]. Stereotypes are lazy generalizations demonstrating a lack of critical thinking among their perpetuators. And herein lies some of the insidiousness of the current developments: it will become increasingly difficult to challenge and rectify harmful stereotypes, given the tendency of generative AI to perpetuate them and the instant gratification-fuelled uncritical acceptance of AI-generated information. Indeed, as argued elsewhere, the photorealistic representations generated by the most recent version of ChatGPT can lead to the erasure of First Nations peoples as authoritative knowledge holders.
While it can be surmised that the observed, present biases are unintentional and subconsciously reflective of the interest spheres and ideological outlook of the creators and trainers [169], this raises the uncomfortable spectre of a malevolent actor intentionally influencing the dataset to pursue an ideological, political or commercial agenda. While such control is more likely to occur in authoritarian regimes, particularly those that already exercise restrictive control and censorship over the internet and social media content accessible to their citizens (e.g., China), there is no guarantee that other countries or the commercial IT behemoths themselves (e.g., Google, Microsoft, Baidu) may not engage in a similar fashion. The hidden biases in page ranking, reputedly influenced by advertising revenue, highlight that such concerns are not without foundation [177,178].
Critical here is also the fact that such a dataset is unlikely to remain static. While a static dataset was initially the norm as the technologies were being refined, this is unlikely to continue in the future. Current iterations of generative AI language models possess the capability to dynamically acquire new sources and add them to the dataset. ChatGPT, Google Gemini and Microsoft Copilot can add web references and key links to their responses. While purportedly it is the most appropriate source that is linked, the user has no means of verifying that this is in fact the case, rather than being a link proffered not on merit but solely based on a hidden, advertising-fuelled ranking. Which sources are added to the base dataset and which sources are ‘overlooked’ will depend entirely on the algorithm deployed. Thus, it is readily conceivable that access to news sources can be confined to selected news channels, with the concomitant editorial and political reporting bias.
In an age where disinformation campaigns via online troll farms are commonplace [179,180] and where content moderation seems to be on the decline [181,182], a scenario has to be contemplated where politically motivated state actors may inject disinformation content into the dataset of a generative AI language model, thereby adjusting its responses. It has been posited that this may challenge the underlying tenets of how democracy works [183,184].
In addition, further manipulation of these responses appears possible by targeted external training of the language model. As noted above, at present users have the opportunity to regenerate a response if the initial response does not match their expectations. They are then asked to evaluate whether the regenerated version was better or worse than the initial answer. As this feedback mechanism adds a ‘learning’ element to the model, it is readily conceivable that a malevolent actor may engage an ‘army’ of users or bots to flood a generative AI language model with selected queries asked in different phrasings but with the same content, and then systematically nudge the responses, through feedback from seemingly ‘different users’, in a desired direction. A considerable volume of pages currently on the WWW have not been authored solely by humans but comprise generative AI-created content uploaded by people or by automated bots. It has been estimated that between 30% and 40% of web pages consist of AI-generated text, diluting the content [185]. More importantly, such AI-generated content can not only be manipulated to express ‘news’ and ‘knowledge’ with a specific political or ideological slant, but, through the sheer volume of these creations, can flood the space with intentionally biased or disinformative text. This can create a scenario where generative AI, while drawing on content on the web, may cannibalize its own content and eventually ‘dumb down’, as any new and genuine information will be drowned out by regurgitated and rehashed old material. But even where generative AI draws on tertiary, semi-authoritative sources, such as Wikipedia, the quality is poised to decline, as such content is also increasingly generated via AI tools [186,187].
While generative AI is meant to possess safety awareness mechanisms designed to prevent ‘unsafe’ and unethical responses to user prompts [188,189], applications such as ChatGPT (at least in its earlier versions) can be prompted to respond with inverted moral valence and thus provide unethical responses [120]. While the older approaches to enticing ChatGPT to engage in inverted ethical valence have been blocked, DeepSeek was, at the time of writing, still susceptible to some degree. This may well open the door to injecting malicious content into the ‘learning’ of a large language model.
Finally, in moves reminiscent of George Orwell’s 1984, it is of course also possible to alter the responses of generative AI language models by flooding the zone of accessible sources or by removing or flagging unsuitable material that had been included in the training dataset, but that for whatever reason has become undesirable. In consequence of the material no longer being accessible to the model or of it being drowned out by other material, responses will exhibit stronger biases in the opposite direction. There is a real risk of a future with a single truth presented to a progressively uncritical public.

5. Is There an Off-Ramp or Are We Doomed to Be on the Road to Public Ignorance?

Before we consider whether we are doomed to be on the road to public ignorance, it is apposite to briefly consider alternate futures, as these may indicate off-ramps that we can take to avoid the spectre painted above. Futures studies and strategic foresight methodology stipulate, of course, that there is not only one future that can be conceptualized, but that trajectories point to multiple futures that diverge the further we move forward from the present [24,25].
One of these scenarios entails the continuation and expansion of the tribalization of the public sphere, as exemplified by the increasing and deepening political polarization currently on display in United States politics [190,191]. This phenomenon also appears to be gaining traction in other Western democracies [192]. The appetite of the media, both broadcast and narrow-cast outlets, is for sensationalism, and thus circulation- and ratings-fuelling content plays a major role. This is hardly a new development, however. During the nineteenth century, newspaper proprietors blatantly advanced the political and economic interests of their constituency [193], a modus operandi that at the present time plays out in TV news channels and internet media. Where a standpoint is not being catered for, either in general or in the desired intensity, alternative news outlets and media systems are established (e.g., Breitbart News and ‘Truth Social’ [194,195]), attracting selected segments of society. What is different compared to the past, and what is of both particular interest and concern, is the increasing unwillingness of segments of the public to critically examine their own standpoint or to tolerate the standpoints of others. While this is at present largely confined to diverging opinions and interpretations of political, social and environmental/natural events, examples such as the ‘anti-vaxxer’ movements during the COVID-19 pandemic [91,92,93] show that this can extend to other aspects of public life where ideological standpoints rather than evidence dominate discussion. A future can thus be conceptualized where competing and tribalized generative AI language models will provide users with access to knowledge that conforms with their own ideological persuasion. Some people will flock to these, as they tend to reinforce stereotypes and substantiate their own construction of the world without being challenged to justify their standpoint. By controlling the training datasets, as well as any future additions and the algorithms that determine the inclusion of additional WWW sources, generative AI language models will become the ultimate echo chambers, perpetually reinforcing opinion and ‘knowledge’.
A second scenario expands the current trend of market dominance by OpenAI, Google and Microsoft. Google already dominates the search engine market and, through its Gemini AI application embedded in and presented at the top of the results page, acts not only as a gatekeeper but increasingly also as a one-stop shop. OpenAI dominates the public space in which generative AI is used, such as for paraphrasing, brainstorming and summarizing/synthesizing. Microsoft exercises undisputed dominance in the corporate applications space (Word, Excel, Teams) due to its integrated and closed architecture. Copilot, its own OpenAI-derived generative AI application, will dominate the corporate world due to data security concerns. OpenAI’s main Chinese rival, DeepSeek, while banned from government and educational computers in Australia, the United Kingdom and the USA [196,197], may gain influence. Common to all of these is that they are controlled by corporate entities which, by and large, can operate with little governmental oversight. It can be safely posited that corporate returns will eventually colour the responses that generative AI presents to the user. It is conceivable that major shareholders in the companies will, over time, exert editorial influence, if not control, as they themselves are exposed to political pressures (e.g., [198,199]).
Common to both scenarios is that the ‘common user’ has no genuine control over the functioning of the generative AI models, nor over the data sources that the models consult to develop their responses. While the development of encyclopedias, as well as the early versions of the WWW, placed genuine access to curated knowledge into users’ hands as long as the user showed agency to acquire such knowledge, the trend in generative AI-dominated knowledge dissemination systems increasingly erodes the ability to exercise that agency.
As both scenarios have a distinctly dystopian feel to them, one must ask whether there is an off-ramp or whether we are doomed to be on the road to tribalism and public ignorance. Three underlying trends seem to be propelling society on the trajectory to these dystopian futures: an increasing concentration of knowledge provision in a few hands, an increasingly uncritical population, and the devaluation of evidence-based research carried out by researchers and specialists.
The public education system plays a pivotal role in slowing down and reversing these trends. Educators, from primary school through to university, play a critical role in instilling in their students an understanding of the nature and value of evidence-based research, by showing that divergent interpretations of a finding may be possible, but that such divergent interpretations are not based on opinion and need to be firmly founded on informed critiques that are themselves evidence-based. This implies, however, that the teachers themselves are trained, empowered and motivated to engage in critical enquiry, which, sadly, may well not be the case given recent criticism of the quality of new education graduates. Information literacy, including AI literacy, is a cornerstone and should be a compulsory teaching unit at all levels of education. Fundamental, however, will be that educators actively instil in their students a desire for critical thinking and foster this at every step of the way, from primary school through to university. This also involves instilling an appreciation that reliance on easy and convenient solutions, even if correct, diminishes critical thinking skills and ability [142].
Unless they do so, an information-illiterate society populated by ignoramuses will be the inevitable outcome. The early signs of this are already apparent. To avoid this, present and future educators will need to be equipped with appropriate intellectual and curriculum tools. This requires political will: a will to make this a priority, a will to provide the required teaching resources and teacher training, and a will to appropriately fund education. Education is always political, but several recent examples in the USA have seen an increasing politicization of the education system along hardline ideological lines [200]. It has been posited that political ideologues are not interested in, and indeed are afraid of, a population capable of critical thinking. Unless we ensure an AI-literate public, George Orwell’s prescient novel 1984 is about to come true, albeit in a technologically different form.
The emergent generative AI revolution is forcing our hand and as a society we have arrived at the Rubicon. Quo vadis?

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Weizenbaum, J. Computer Power and Human Reason: From Judgment to Calculation; W. H. Freeman and Company: San Francisco, CA, USA, 1976. [Google Scholar]
  2. Dreyfus, H.L. What Computers Still Can’t Do: A Critique of Artificial Reason; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  3. Biswas, S. Importance of chat GPT in Agriculture: According to chat GPT. SSRN 2023. [Google Scholar] [CrossRef]
  4. Castro Nascimento, C.M.; Pimentel, A.S. Do Large Language Models Understand Chemistry? A Conversation with ChatGPT. J. Chem. Inf. Model. 2023, 63, 1649–1655. [Google Scholar] [CrossRef]
  5. Surameery, N.M.S.; Shakor, M.Y. Use chat gpt to solve programming bugs. Int. J. Inf. Technol. Comput. Eng. (IJITC) 2023, 3, 17–22. [Google Scholar] [CrossRef]
  6. Spennemann, D.H.R. ChatGPT and the generation of digitally born “knowledge”: How does a generative AI language model interpret cultural heritage values? Knowledge 2023, 3, 480–512. [Google Scholar] [CrossRef]
  7. Sng, G.G.R.; Tung, J.Y.M.; Lim, D.Y.Z.; Bee, Y.M. Potential and pitfalls of ChatGPT and natural-language artificial intelligence models for diabetes education. Diabetes Care 2023, 46, e103–e105. [Google Scholar] [CrossRef]
  8. Bays, H.E.; Fitch, A.; Cuda, S.; Gonsahn-Bollie, S.; Rickey, E.; Hablutzel, J.; Coy, R.; Censani, M. Artificial intelligence and obesity management: An Obesity Medicine Association (OMA) Clinical Practice Statement (CPS) 2023. Obes. Pillars 2023, 6, 100065. [Google Scholar] [CrossRef]
  9. Grünebaum, A.; Chervenak, J.; Pollet, S.L.; Katz, A.; Chervenak, F.A. The exciting potential for ChatGPT in obstetrics and gynecology. Am. J. Obstet. Gynecol. 2023, 228, 696–705. [Google Scholar] [CrossRef]
  10. Spennemann, D.H.R. Exhibiting the Heritage of COVID-19—A Conversation with ChatGPT. Heritage 2023, 6, 5732–5749. [Google Scholar] [CrossRef]
  11. Qi, X.; Zhu, Z.; Wu, B. The promise and peril of ChatGPT in geriatric nursing education: What We know and do not know. Aging Health Res. 2023, 3, 100136. [Google Scholar] [CrossRef]
  12. Currie, G.; Singh, C.; Nelson, T.; Nabasenja, C.; Al-Hayek, Y.; Spuur, K. ChatGPT in medical imaging higher education. Radiography 2023, 29, 792–799. [Google Scholar] [CrossRef] [PubMed]
  13. Agapiou, A.; Lysandrou, V. Interacting with the Artificial Intelligence (AI) Language Model ChatGPT: A Synopsis of Earth Observation and Remote Sensing in Archaeology. Heritage 2023, 6, 4072–4085. [Google Scholar] [CrossRef]
  14. Bolzan, M.; Scioni, M.; Marozzi, M. Futures Studies and Artificial Intelligence: First Results of an Experimental Collaborative Approach. In Proceedings of the Scientific Meeting of the Italian Statistical Society, Bari, Italy, 17–20 June 2024; pp. 299–303. [Google Scholar]
  15. Calleo, Y.; Giuffrida, N.; Pilla, F. Exploring hybrid models for identifying locations for active mobility pathways using real-time spatial Delphi and GANs. Eur. Transp. Res. Rev. 2024, 16, 61. [Google Scholar] [CrossRef] [PubMed]
  16. Calleo, Y.; Taylor, A.; Pilla, F.; Di Zio, S. AI-assisted Real-Time Spatial Delphi: Integrating artificial intelligence models for advancing future scenarios analysis. Qual. Quant. 2025, 59 (Suppl. S2), 1427–1459. [Google Scholar] [CrossRef]
  17. Di Zio, S.; Calleo, Y.; Bolzan, M. Delphi-based visual scenarios: An innovative use of generative adversarial networks. Futures 2023, 154, 103280. [Google Scholar] [CrossRef]
  18. Bryant, A. AI Chatbots: Threat or Opportunity? Informatics 2023, 10, 49. [Google Scholar] [CrossRef]
  19. De Angelis, L.; Baglivo, F.; Arzilli, G.; Privitera, G.P.; Ferragina, P.; Tozzi, A.E.; Rizzo, C. ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Front. Public Health 2023, 11, 1166120. [Google Scholar] [CrossRef] [PubMed]
  20. Singh, S. ChatGPT Statistics (2025): DAU & MAU Data Worldwide. 19 May 2025. Available online: https://www.demandsage.com/chatgpt-statistics/ (accessed on 25 May 2025).
  21. Li, A.; Sinnamon, L. Generative AI Search Engines as Arbiters of Public Knowledge: An Audit of Bias and Authority. Proc. Assoc. Inf. Sci. Technol. 2024, 61, 205–217. [Google Scholar] [CrossRef]
  22. Wihbey, J. AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge? SSRN 2024. [Google Scholar] [CrossRef]
  23. Brown, D.; Ellerton, P. Is AI Making Us Stupider? Maybe, According to One of the World’s Biggest AI Companies. 2025. Available online: https://theconversation.com/is-ai-making-us-stupider-maybe-according-to-one-of-the-worlds-biggest-ai-companies-249586 (accessed on 12 May 2025).
  24. Hines, A.; Bishop, P.J.; Slaughter, R.A. Thinking About the Future: Guidelines for Strategic Foresight; Social Technologies: Washington, DC, USA, 2006. [Google Scholar]
  25. van Duijne, F.; Bishop, P. Introduction to Strategic Foresight; Future Motions, Dutch Futures Society: Den Haag, The Netherlands, 2018; Volume 1, p. 67. [Google Scholar]
  26. Dunagan, J.F. Jim Dator: The Living Embodiment of Futures Studies. J. Futures Stud. 2013, 18, 131–138. [Google Scholar]
  27. Inayatullah, S. Learnings from futures studies: Learnings from Dator. J. Futures Stud. 2013, 18, 1–10. [Google Scholar]
  28. Kieser, A. Organizational, institutional, and societal evolution: Medieval craft guilds and the genesis of formal organizations. Adm. Sci. Q. 1989, 540–564. [Google Scholar] [CrossRef]
  29. Belfanti, C. Guilds, patents, and the circulation of technical knowledge: Northern Italy during the early modern age. Technol. Cult. 2004, 45, 569–589. [Google Scholar] [CrossRef]
  30. Schubring, G. Analysing Historical Mathematics Textbooks; Springer: Cham, Switzerland, 2023. [Google Scholar]
  31. Demets, L. Bruges as a multilingual contact zone: Book production and multilingual literary networks in fifteenth-century Bruges. Urban Hist. 2024, 51, 313–332. [Google Scholar] [CrossRef]
  32. Nuovo, A. Book Privileges in the Early Modern Age: From Trade Protection and Promotion to Content Regulation. In Book Markets in Mediterranean Europe and Latin America: Institutions and Strategies (15th–18th Centuries); Cachero, M., Maillard-Álvarez, N., Eds.; Springer: Cham, Switzerland, 2023; pp. 21–33. [Google Scholar]
  33. Landau, D.; Parshall, P.W. The Renaissance Print, 1470–1550; Yale University Press: New Haven, CT, USA, 1994. [Google Scholar]
  34. Frey, W.; Raitz, W.; Seitz, D. Flugschriften aus der Zeit der Reformation und des Bauernkriegs. In Einführung in die Deutsche Literatur des 12. bis 16. Jahrhunderts: Bürgertum und Fürstenstaat—15./16. Jahrhundert; Westdeutscher Verlag: Opladen, Germany, 1981; pp. 38–68. [Google Scholar]
  35. Peacey, J. Politicians and Pamphleteers: Propaganda During the English Civil Wars and Interregnum; Routledge: London, UK, 2017. [Google Scholar]
  36. Spennemann, D.H.R. Matthäus Merian’s crocodile in Japan. A biblio-forensic examination of the origins and longevity of an illustration of a Crocodylus niloticus in Jan Jonston’s Historiae naturalis de quadrupetibus. Scr. Print 2019, 43, 201–239. [Google Scholar]
  37. Boto, C. The Age of Enlightenment and Education. In Oxford Research Encyclopedia of Education; Noblit, G.W., Ed.; Oxford University Press: Oxford, UK, 2021. [Google Scholar]
  38. Sullivan, L.E. Circumscribing knowledge: Encyclopedias in historical perspective. J. Relig. 1990, 70, 315–339. [Google Scholar] [CrossRef]
  39. Hohoff, U. 200 Jahre Brockhaus: Geschichte und Gegenwart eines großen Lexikons. Forsching Lehre 2009, 16, 118–120. [Google Scholar]
  40. Withers, C.W. Geography in its time: Geography and historical geography in Diderot and d’Alembert’s Encyclopédie. J. Hist. Geogr. 1993, 19, 255–264. [Google Scholar] [CrossRef]
  41. Simonsen, M. The Rise and Fall of Danish Encyclopedias, 1891–2017. In Stranded Encyclopedias, 1700–2000: Exploring Unfinished, Unpublished, Unsuccessful Encyclopedic Projects; Holmberg, L., Simonsen, M., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 287–322. [Google Scholar]
  42. Inkster, I. The Social Context of an Educational Movement: A Revisionist Approach to the English Mechanics’ Institutes, 1820–1850. Oxf. Rev. Educ. 1976, 2, 277–307. [Google Scholar] [CrossRef]
  43. Bruce, R.V. The Launching of Modern American Science, 1846–1876; Plunkett Lake Press: Lexington, MA, USA, 2022. [Google Scholar]
  44. Geiger, R. The rise and fall of useful knowledge: Higher education for science, agriculture & the mechanics arts, 1850–1875. In History of Higher Education Annual: 1998; Routledge: London, UK, 2020; pp. 47–65. [Google Scholar]
45. True, A.C. A History of Agricultural Extension Work in the United States, 1785–1923; US Government Printing Office: Washington, DC, USA, 1928.
  46. Mettler, S. Soldiers to Citizens: The GI Bill and the Making of the Greatest Generation; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  47. Croucher, G.; Woelert, P. Institutional isomorphism and the creation of the unified national system of higher education in Australia: An empirical analysis. High. Educ. 2016, 71, 439–453. [Google Scholar] [CrossRef]
  48. McClelland, C.E. The German Experience of Professionalization: Modern Learned Professions and Their Organizations from the Early Nineteenth Century to the Hitler Era; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
  49. Brezis, E.S.; Crouzet, F. The role of higher education institutions: Recruitment of elites and economic growth. Inst. Dev. Econ. Growth 2006, 13, 191. [Google Scholar]
  50. Milburn, L.-A.S.; Mulley, S.J.; Kline, C. The end of the beginning and the beginning of the end: The decline of public agricultural extension in Ontario. J. Ext. 2010, 48, 7. [Google Scholar] [CrossRef]
  51. Scotto di Carlo, G. The role of proximity in online popularizations: The case of TED talks. Discourse Stud. 2014, 16, 591–606. [Google Scholar] [CrossRef]
  52. Haider, J.; Sundin, O. The materiality of encyclopedic information: Remediating a loved one–Mourning Britannica. Proc. Am. Soc. Inf. Sci. Technol. 2014, 51, 1–10. [Google Scholar] [CrossRef]
  53. Berners-Lee, T.J. Information Management: A Proposal No. CERN-DD-89-001-OC. 1989. Available online: https://web.archive.org/web/20100401051011/https://www.w3.org/History/1989/proposal.html (accessed on 1 September 2023).
  54. Berners-Lee, T. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor; Harper San Francisco: San Francisco, CA, USA, 1999. [Google Scholar]
  55. Van Dijk, J.; Hacker, K. The digital divide as a complex and dynamic phenomenon. Inf. Soc. 2003, 19, 315–326. [Google Scholar] [CrossRef]
  56. Spennemann, D.H.R. Digital Divides in the Pacific Islands. IT Soc. 2004, 1, 46–65. [Google Scholar]
  57. Spennemann, D.H.R.; Green, D.G. A special interest network for natural hazard mitigation for cultural heritage sites. In Disaster Management Programs for Historic Sites; Spennemann, D.H.R., Look, D.W., Eds.; Association for Preservation Technology, Western Chapter and Johnstone Centre, Charles Sturt University: San Francisco, CA, USA; Albury, Australia, 1998; pp. 165–172. [Google Scholar]
  58. Langville, A.N.; Meyer, C.D. Google’s PageRank and Beyond: The Science of Search Engine Rankings; Princeton University Press: Princeton, NJ, USA, 2006. [Google Scholar]
  59. Henzinger, M.; Lawrence, S. Extracting knowledge from the world wide web. Proc. Natl. Acad. Sci. USA 2004, 101, 5186–5191. [Google Scholar] [CrossRef]
  60. Choo, C.W.; Detlor, B.; Turnbull, D. Web Work: Information Seeking and Knowledge Work on the World Wide Web; Springer Science & Business Media: Dordrecht, The Netherlands, 2013; Volume 1. [Google Scholar]
  61. Wikipedia. History of Wikipedia. 2023. Available online: https://en.wikipedia.org/wiki/History_of_Wikipedia (accessed on 1 September 2023).
  62. Inayatullah, S. Future Avoiders, Migrants and Natives. J. Futures Stud. 2004, 9, 83–86. [Google Scholar]
  63. Merriam-Webster. Google [Verb]. 2023. Available online: https://www.merriam-webster.com/dictionary/google (accessed on 1 September 2023).
  64. Lee, P.M.; Foster, R.; McNulty, A.; McIver, R.; Patel, P. Ask Dr Google: What STI do I have? Sex. Transm. Infect. 2021, 97, 420–422. [Google Scholar] [CrossRef]
  65. Burzyńska, J.; Bartosiewicz, A.; Januszewicz, P. Dr. Google: Physicians—The Web—Patients Triangle: Digital Skills and Attitudes towards e-Health Solutions among Physicians in South Eastern Poland—A Cross-Sectional Study in a Pre-COVID-19 Era. Int. J. Environ. Res. Public Health 2023, 20, 978. [Google Scholar] [CrossRef]
  66. Subba Rao, S. Commercialization of the Internet. New Libr. World 1997, 98, 228–232. [Google Scholar] [CrossRef]
  67. Fabos, B. Wrong Turn on the Information Superhighway: Education and the Commercialization of the Internet; Teachers College Press: New York, NY, USA, 2004. [Google Scholar]
  68. Australian Competition and Consumer Commission. Digital Platform Services Inquiry. Interim Report 9: Revisiting General Search Services; Australian Competition and Consumer Commission: Canberra, Australia, 2024.
69. Nielsen, R.K. News media, search engines and social networking sites as varieties of online gatekeepers. In Rethinking Journalism Again; Peters, C., Broersma, M., Eds.; Routledge: Abingdon, UK, 2016; pp. 93–108. [Google Scholar]
  70. Helberger, N.; Kleinen-von Königslöw, K.; Van Der Noll, R. Regulating the new information intermediaries as gatekeepers of information diversity. Info 2015, 17, 50–71. [Google Scholar] [CrossRef]
71. Silverstein, C.; Marais, H.; Henzinger, M.; Moricz, M. Analysis of a very large web search engine query log. In Proceedings of the ACM SIGIR Forum, Berkeley, CA, USA, 15–19 August 1999; pp. 6–12. [Google Scholar]
  72. McTavish, J.; Harris, R.; Wathen, N. Searching for health: The topography of the first page. Ethics Inf. Technol. 2011, 13, 227–240. [Google Scholar] [CrossRef]
73. Khamis, S.; Ang, L.; Welling, R. Self-branding, ‘micro-celebrity’ and the rise of social media influencers. Celebr. Stud. 2017, 8, 191–208. [Google Scholar] [CrossRef]
  74. Smith, B.G.; Kendall, M.C.; Knighton, D.; Wright, T. Rise of the brand ambassador: Social stake, corporate social responsibility and influence among the social media influencers. Commun. Manag. Rev. 2018, 3, 6–29. [Google Scholar] [CrossRef]
  75. Haenlein, M.; Anadol, E.; Farnsworth, T.; Hugo, H.; Hunichen, J.; Welte, D. Navigating the new era of influencer marketing: How to be successful on Instagram, TikTok, & Co. Calif. Manag. Rev. 2020, 63, 5–25. [Google Scholar] [CrossRef]
  76. Barrera, O.; Guriev, S.; Henry, E.; Zhuravskaya, E. Facts, alternative facts, and fact checking in times of post-truth politics. J. Public Econ. 2020, 182, 104123. [Google Scholar] [CrossRef]
  77. Collins, H. Establishing veritocracy: Society, truth and science. Transcult. Psychiatry 2024, 61, 783–794. [Google Scholar] [CrossRef]
  78. Hibberd, F.J. Unfolding Social Constructionism; Springer Science & Business Media: Dordrecht, The Netherlands, 2006. [Google Scholar]
  79. Aïmeur, E.; Amri, S.; Brassard, G. Fake news, disinformation and misinformation in social media: A review. Soc. Netw. Anal. Min. 2023, 13, 30. [Google Scholar] [CrossRef]
  80. Muhammed, T.S.; Mathew, S.K. The disaster of misinformation: A review of research in social media. Int. J. Data Sci. Anal. 2022, 13, 271–285. [Google Scholar] [CrossRef]
  81. Amazeen, M.A. Journalistic interventions: The structural factors affecting the global emergence of fact-checking. Journalism 2020, 21, 95–111. [Google Scholar] [CrossRef]
  82. Robertson, C.T.; Mourão, R.R.; Thorson, E. Who uses fact-checking sites? The impact of demographics, political antecedents, and media use on fact-checking site awareness, attitudes, and behavior. Int. J. Press/Politics 2020, 25, 217–237. [Google Scholar] [CrossRef]
  83. Humprecht, E. How do they debunk “fake news”? A cross-national comparison of transparency in fact checks. Digit. J. 2020, 8, 310–327. [Google Scholar] [CrossRef]
  84. Patil, S.V. Penalized for expertise: Psychological proximity and the devaluation of polymathic experts. In Academy of Management Proceedings; Academy of Management: Valhalla, NY, USA, 2012; p. 14694. [Google Scholar]
  85. Lavazza, A.; Farina, M. The role of experts in the COVID-19 pandemic and the limits of their epistemic authority in democracy. Front. Public Health 2020, 8, 356. [Google Scholar] [CrossRef]
  86. Sinatra, G.M.; Lombardi, D. Evaluating sources of scientific evidence and claims in the post-truth era may require reappraising plausibility judgments. Educ. Psychol. 2020, 55, 120–131. [Google Scholar] [CrossRef]
  87. Garrett, R.K. Echo chambers online?: Politically motivated selective exposure among Internet news users. J. Comput. Mediat. Commun. 2009, 14, 265–285. [Google Scholar] [CrossRef]
  88. Kitchens, B.; Johnson, S.L.; Gray, P. Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption. MIS Q. 2020, 44, 1619–1649. [Google Scholar] [CrossRef]
  89. Weismueller, J.; Gruner, R.L.; Harrigan, P.; Coussement, K.; Wang, S. Information sharing and political polarisation on social media: The role of falsehood and partisanship. Inf. Syst. J. 2024, 34, 854–893. [Google Scholar] [CrossRef]
  90. Miller, S.; Menard, P.; Bourrie, D.; Sittig, S. Integrating truth bias and elaboration likelihood to understand how political polarisation impacts disinformation engagement on social media. Inf. Syst. J. 2024, 34, 642–679. [Google Scholar] [CrossRef]
  91. Zwanka, R.J.; Buff, C. COVID-19 generation: A conceptual framework of the consumer behavioral shifts to be caused by the COVID-19 pandemic. J. Int. Consum. Mark. 2021, 33, 58–67. [Google Scholar] [CrossRef]
  92. Carrion-Alvarez, D.; Tijerina-Salina, P.X. Fake news in COVID-19: A perspective. Health Promot. Perspect. 2020, 10, 290. [Google Scholar] [CrossRef] [PubMed]
  93. Bojic, L.; Nikolic, N.; Tucakovic, L. State vs. anti-vaxxers: Analysis of COVID-19 echo chambers in Serbia. Communications 2023, 48, 273–291. [Google Scholar] [CrossRef]
  94. Lee, C.S.; Merizalde, J.; Colautti, J.D.; An, J.; Kwak, H. Storm the capitol: Linking offline political speech and online Twitter extra-representational participation on QAnon and the January 6 insurrection. Front. Sociol. 2022, 7, 876070. [Google Scholar] [CrossRef]
  95. Anderson, J.; Coduto, K.D. Attitudinal and Emotional Reactions to the Insurrection at the US Capitol on January 6, 2021. Am. Behav. Sci. 2022, 68, 913–931. [Google Scholar] [CrossRef]
  96. Valenzuela, A.; Puntoni, S.; Hoffman, D.; Castelo, N.; De Freitas, J.; Dietvorst, B.; Hildebrand, C.; Huh, Y.E.; Meyer, R.; Sweeney, M.E. How artificial intelligence constrains the human experience. J. Assoc. Consum. Res. 2024, 9, 241–256. [Google Scholar] [CrossRef]
  97. Ciria, A.; Albarracin, M.; Miller, M.; Lara, B. Social media platforms: Trading with prediction error minimization for your attention. Preprints.
  98. Markov, T.; Zhang, C.; Agarwal, S.; Eloundou, T.; Lee, T.; Adler, S.; Jiang, A.; Weng, L. New and Improved Content Moderation Tooling. [via Wayback Machine]. 22 August 2023. Available online: https://web.archive.org/web/20230130233845mp_/https://openai.com/blog/new-and-improved-content-moderation-tooling/ (accessed on 28 June 2023).
  99. Collins, E.; Ghahramani, Z. LaMDA: Our Breakthrough Conversation Technology. 18 May 2021. Available online: https://blog.google/technology/ai/lamda/ (accessed on 1 September 2023).
  100. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2023, arXiv:1706.03762. [Google Scholar] [CrossRef]
  101. Ray, P.P. ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet Things Cyber-Phys. Syst. 2023, 3, 121–154. [Google Scholar] [CrossRef]
  102. OpenAI. ChatGPT 3.5 (August 3 version). 3 August 2023. Available online: https://chat.openai.com (accessed on 11 September 2023).
  103. OpenAI. GPT-4. 14 March 2023. Available online: https://web.archive.org/web/20230131024235/https://openai.com/blog/chatgpt/ (accessed on 1 October 2023).
  104. OpenAI. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774. [Google Scholar] [CrossRef]
105. OpenAI. GPT-4 System Card; OpenAI: San Francisco, CA, USA, 2024. [Google Scholar]
106. OpenAI. GPT-4o System Card; OpenAI: San Francisco, CA, USA, 2024. [Google Scholar]
107. Conway, A. What is GPT-4o? Everything You Need to Know About the New OpenAI Model that Everyone Can Use for Free. 13 May 2024. Available online: https://www.xda-developers.com/gpt-4o/ (accessed on 12 August 2025).
  108. OpenAI. Models. 2025. Available online: https://platform.openai.com/docs/models (accessed on 4 February 2025).
  109. OpenAI. Introducing 4o Image Generation. 2025. Available online: https://openai.com/index/introducing-4o-image-generation/ (accessed on 30 March 2025).
110. OpenAI. GPT-4 Vision System Card; OpenAI: San Francisco, CA, USA, 2024. [Google Scholar]
111. OpenAI. GPT-4.5 System Card; OpenAI: San Francisco, CA, USA, 2025. [Google Scholar]
  112. Lehmann, J. On the Use of ChatGPT in Cultural Heritage Institutions. 3 March 2023. Available online: https://mmk.sbb.berlin/2023/03/03/on-the-use-of-chatgpt-in-cultural-heritage-institutions/?lang=en (accessed on 29 June 2023).
113. Trichopoulos, G.; Konstantakis, M.; Caridakis, G.; Katifori, A.; Koukouli, M. Crafting a Museum Guide Using GPT4. Big Data Cogn. Comput. 2023, 7, 148. [Google Scholar] [CrossRef]
  114. Maas, C. Was Kann ChatGPT für Kultureinrichtungen Tun? 13 May 2023. Available online: https://web.archive.org/web/20230926102318/https://www.aureka.ai/2023/05/13/was-kann-chatgpt-fuer-kultureinrichtungen-tun// (accessed on 12 August 2025).
  115. Merritt, E. Chatting About Museums with ChatGPT. 25 January 2023. Available online: https://www.aam-us.org/2023/01/25/chatting-about-museums-with-chatgpt (accessed on 29 June 2023).
  116. Ciecko, B. 9 Ways ChatGPT Can Empower Museums & Cultural Organizations in the Digital Age. 13 April 2023. Available online: https://cuseum.com/blog/2023/4/13/9-ways-chatgpt-can-empower-museums-cultural-organizations-in-the-digital-age (accessed on 29 June 2023).
  117. Frąckiewicz, M. ChatGPT in the World of Museum Technology: Enhancing Visitor Experiences and Digital Engagement. 30 April 2023. Available online: https://ts2.space/en/chatgpt-in-the-world-of-museum-technology-enhancing-visitor-experiences-and-digital-engagement/ (accessed on 29 June 2023).
  118. Zimmerman, A.; Janhonen, J.; Beer, E. Human/AI relationships: Challenges, downsides, and impacts on human/human relationships. AI Ethics 2024, 4, 1555–1567. [Google Scholar] [CrossRef]
  119. Wu, J. Social and ethical impact of emotional AI advancement: The rise of pseudo-intimacy relationships and challenges in human interactions. Front. Psychol. 2024, 15, 1410462. [Google Scholar] [CrossRef] [PubMed]
  120. Spennemann, D.H.R.; Biles, J.; Brown, L.; Ireland, M.F.; Longmore, L.; Singh, C.J.; Wallis, A.; Ward, C. ChatGPT giving advice on how to cheat in university assignments: How workable are its suggestions? Interact. Technol. Smart Educ. 2024, 21, 690–707. [Google Scholar] [CrossRef]
121. Jesson, A.; Beltran Velez, N.; Chu, Q.; Karlekar, S.; Kossen, J.; Gal, Y.; Cunningham, J.P.; Blei, D. Estimating the hallucination rate of generative AI. Adv. Neural Inf. Process. Syst. 2024, 37, 31154–31201. [Google Scholar]
  122. Siontis, K.C.; Attia, Z.I.; Asirvatham, S.J.; Friedman, P.A. ChatGPT hallucinating: Can it get any more humanlike? Eur. Heart J. 2024, 45, 321–323. [Google Scholar] [CrossRef]
  123. Kim, Y.; Jeong, H.; Chen, S.; Li, S.S.; Lu, M.; Alhamoud, K.; Mun, J.; Grau, C.; Jung, M.; Gameiro, R. Medical hallucinations in foundation models and their impact on healthcare. arXiv 2025, arXiv:2503.05777. [Google Scholar]
124. Turing, A.M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  125. French, R.M. The Turing Test: The first 50 years. Trends Cogn. Sci. 2000, 4, 115–122. [Google Scholar] [CrossRef] [PubMed]
  126. Pinar Saygin, A.; Cicekli, I.; Akman, V. Turing test: 50 years later. Minds Mach. 2000, 10, 463–518. [Google Scholar] [CrossRef]
  127. Jones, C.R.; Bergen, B.K. Large language models pass the turing test. arXiv 2025, arXiv:2503.23674. [Google Scholar] [CrossRef]
  128. Singh, A. Consequences of the Turing Test: OpenAI's GPT-4.5. SSRN 2025. [Google Scholar] [CrossRef]
  129. Mappouras, G. Turing Test 2.0: The General Intelligence Threshold. arXiv 2025, arXiv:2505.19550. [Google Scholar] [CrossRef]
  130. Mungoli, N. Exploring the synergy of prompt engineering and reinforcement learning for enhanced control and responsiveness in chat GPT. J. Electr. Electron. Eng. 2023, 2, 201–205. [Google Scholar] [CrossRef]
  131. Lee, U.; Jung, H.; Jeon, Y.; Sohn, Y.; Hwang, W.; Moon, J.; Kim, H. Few-shot is enough: Exploring ChatGPT prompt engineering method for automatic question generation in english education. Educ. Inf. Technol. 2023, 29, 11483–11515. [Google Scholar] [CrossRef]
  132. Jacobsen, L.J.; Weber, K.E. The promises and pitfalls of ChatGPT as a feedback provider in higher education: An exploratory study of prompt engineering and the quality of AI-driven feedback. Preprint 2023. [Google Scholar] [CrossRef]
  133. Kim, B.S. Acculturation and enculturation. Handb. Asian Am. Psychol. 2007, 2, 141–158. [Google Scholar]
  134. Alcántara-Pilar, J.M.; Armenski, T.; Blanco-Encomienda, F.J.; Del Barrio-García, S. Effects of cultural difference on users’ online experience with a destination website: A structural equation modelling approach. J. Destin. Mark. Manag. 2018, 8, 301–311. [Google Scholar] [CrossRef]
  135. Hekman, S. Truth and method: Feminist standpoint theory revisited. Signs J. Women Cult. Soc. 1997, 22, 341–365. [Google Scholar] [CrossRef]
136. Bennett, M.J. A developmental model of intercultural sensitivity. In The International Encyclopedia of Intercultural Communication; Kim, Y.Y., Ed.; John Wiley & Sons: New York, NY, USA, 2017; pp. 1–10. [Google Scholar]
  137. Mokry, N. Instant Gratification: A Decline in Our Attention and a Rise in Digital Disinformation. Ph.D. Thesis, University of Texas, Austin, TX, USA, 2024. [Google Scholar]
  138. Reeves, N.; Yin, W.; Simperl, E.; Redi, M. “The Death of Wikipedia?”—Exploring the Impact of ChatGPT on Wikipedia Engagement. arXiv 2024, arXiv:2405.10205. [Google Scholar]
  139. Barrett, B. ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw. Wired Magazine. 23 April 2025. Available online: https://www.wired.com/story/google-ai-overviews-meaning/ (accessed on 15 May 2025).
  140. Ritala, P.; Ruokonen, M.; Ramaul, L. Transforming boundaries: How does ChatGPT change knowledge work? J. Bus. Strategy, 2023; ahead-of-print. [Google Scholar] [CrossRef]
  141. Trichopoulos, G.; Konstantakis, M.; Alexandridis, G.; Caridakis, G. Large Language Models as Recommendation Systems in Museums. Electronics 2023, 12, 3829. [Google Scholar] [CrossRef]
  142. Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
143. Lee, H.-P.; Sarkar, A.; Tankelevitch, L.; Drosos, I.; Rintel, S.; Banks, R.; Wilson, N. The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 26 April–1 May 2025; pp. 1–22. [Google Scholar]
  144. Kim, D.Y.; Kim, H.-Y. Trust me, trust me not: A nuanced view of influencer marketing on social media. J. Bus. Res. 2021, 134, 223–232. [Google Scholar] [CrossRef]
  145. Pop, R.-A.; Săplăcan, Z.; Dabija, D.-C.; Alt, M.-A. The impact of social media influencers on travel decisions: The role of trust in consumer decision journey. Curr. Issues Tour. 2022, 25, 823–843. [Google Scholar] [CrossRef]
  146. Mardon, R.; Cocker, H.; Daunt, K. How social media influencers impact consumer collectives: An embeddedness perspective. J. Consum. Res. 2023, 50, 617–644. [Google Scholar] [CrossRef]
  147. Baker, S.A.; Rojek, C. Lifestyle Gurus: Constructing Authority and Influence Online; John Wiley & Sons: New York, NY, USA, 2020. [Google Scholar]
  148. Arriagada, A.; Bishop, S. Between commerciality and authenticity: The imaginary of social media influencers in the platform economy. Commun. Cult. Crit. 2021, 14, 568–586. [Google Scholar] [CrossRef]
  149. Krasni, J. How to hijack a discourse? Reflections on the concepts of post-truth and fake news. Humanit. Soc. Sci. Commun. 2020, 7, 32. [Google Scholar] [CrossRef]
  150. van Dyk, S. Post-truth, postmodernism and the public sphere. Theory Cult. Soc. 2022, 39, 37–50. [Google Scholar] [CrossRef]
  151. Ruprechter, T.; Santos, T.; Helic, D. Relating Wikipedia article quality to edit behavior and link structure. Appl. Netw. Sci. 2020, 5, 61. [Google Scholar] [CrossRef]
152. Ren, Y.; Zhang, H.; Kraut, R.E. How did they build the free encyclopedia? A literature review of collaboration and coordination among Wikipedia editors. ACM Trans. Comput. Hum. Interact. 2023, 31, 1–48. [Google Scholar] [CrossRef]
  153. Borkakoty, H.; Espinosa-Anke, L. Hoaxpedia: A Unified Wikipedia Hoax Articles Dataset. arXiv 2024, arXiv:2405.02175. [Google Scholar] [CrossRef]
  154. Shenoy, K.; Ilievski, F.; Garijo, D.; Schwabe, D.; Szekely, P. A study of the quality of Wikidata. J. Web Semant. 2022, 72, 100679. [Google Scholar] [CrossRef]
  155. Amaral, G.; Piscopo, A.; Kaffee, L.-A.; Rodrigues, O.; Simperl, E. Assessing the quality of sources in Wikidata across languages: A hybrid approach. J. Data Inf. Qual. (JDIQ) 2021, 13, 1–35. [Google Scholar] [CrossRef]
156. Rozado, D. The political biases of ChatGPT. Soc. Sci. 2023, 12, 148. [Google Scholar] [CrossRef]
157. Ferrara, E. Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv 2023, arXiv:2304.03738. [Google Scholar] [CrossRef]
  158. Spennemann, D.H.R. What has ChatGPT read? References and referencing of archaeological literature by a generative artificial intelligence application. arXiv 2023, arXiv:2308.03301. [Google Scholar] [CrossRef]
159. Chang, K.K.; Cramer, M.; Soni, S.; Bamman, D. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. arXiv 2023, arXiv:2305.00118. [Google Scholar] [CrossRef]
  160. Spennemann, D.H.R. Non-responsiveness of DALL-E to exclusion prompts suggests underlying bias towards Bitcoin. SSRN 2025. [Google Scholar] [CrossRef]
  161. Park, P.; Schoenegger, P.; Zhu, C. “Correct answers” from the psychology of artificial intelligence. arXiv 2023, arXiv:2302.07267. [Google Scholar]
  162. Rutinowski, J.; Franke, S.; Endendyk, J.; Dormuth, I.; Pauly, M. The Self-Perception and Political Biases of ChatGPT. arXiv 2023, arXiv:2304.07333. [Google Scholar] [CrossRef]
163. Motoki, F.; Pinho Neto, V.; Rodrigues, V. More human than human: Measuring ChatGPT political bias. SSRN 2023. [Google Scholar] [CrossRef]
164. McGee, R.W. Is Chat GPT biased against conservatives? An empirical study (February 15, 2023). SSRN 2023. [Google Scholar] [CrossRef]
  165. Hartmann, J.; Schwenzow, J.; Witte, M. The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv 2023, arXiv:2301.01768. [Google Scholar] [CrossRef]
  166. Rutinowski, J.; Franke, S.; Endendyk, J.; Dormuth, I.; Roidl, M.; Pauly, M. The Self-Perception and Political Biases of ChatGPT. Hum. Behav. Emerg. Technol. 2024, 2024, 7115633. [Google Scholar] [CrossRef]
  167. Motoki, F.; Pinho Neto, V.; Rodrigues, V. More human than human: Measuring ChatGPT political bias. Public Choice 2024, 198, 3–23. [Google Scholar] [CrossRef]
168. Cao, Y.; Zhou, L.; Lee, S.; Cabello, L.; Chen, M.; Hershcovich, D. Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. arXiv 2023, arXiv:2303.17466. [Google Scholar] [CrossRef]
  169. Spennemann, D.H.R. The layered injection model of algorithmic bias as a conceptual framework to understand biases impacting the output of text-to-image models. SSRN 2025. [Google Scholar] [CrossRef]
  170. Moayeri, M.; Basu, S.; Balasubramanian, S.; Kattakinda, P.; Chengini, A.; Brauneis, R.; Feizi, S. Rethinking artistic copyright infringements in the era of text-to-image generative models. arXiv 2024, arXiv:2404.08030. [Google Scholar]
  171. Kaplan, D.M.; Palitsky, R.; Arconada Alvarez, S.J.; Pozzo, N.S.; Greenleaf, M.N.; Atkinson, C.A.; Lam, W.A. What’s in a name? Experimental evidence of gender bias in recommendation letters generated by ChatGPT. J. Med. Internet Res. 2024, 26, e51837. [Google Scholar] [CrossRef]
  172. Duan, W.; McNeese, N.; Li, L. Gender Stereotypes toward Non-gendered Generative AI: The Role of Gendered Expertise and Gendered Linguistic Cues. Proc. ACM Hum. Comput. Interact. 2025, 9, 1–35. [Google Scholar] [CrossRef]
  173. Melero Lázaro, M.; García Ull, F.J. Gender stereotypes in AI-generated images. El Prof. Inf. 2023, 32, e320505. [Google Scholar] [CrossRef]
  174. Hosseini, D.D. Generative AI: A problematic illustration of the intersections of racialized gender, race, ethnicity. OSF Prepr. 2024. [Google Scholar] [CrossRef]
  175. Currie, G.; John, G.; Hewis, J. Gender and ethnicity bias in generative artificial intelligence text-to-image depiction of pharmacists. Int. J. Pharm. Pract. 2024, 32, 524–531. [Google Scholar] [CrossRef] [PubMed]
  176. Gisselbaek, M.; Suppan, M.; Minsart, L.; Köselerli, E.; Nainan Myatra, S.; Matot, I.; Barreto Chang, O.L.; Saxena, S.; Berger-Estilita, J. Representation of intensivists’ race/ethnicity, sex, and age by artificial intelligence: A cross-sectional study of two text-to-image models. Crit. Care 2024, 28, 363. [Google Scholar] [CrossRef]
  177. Rieder, B.; Sire, G. Conflicts of interest and incentives to bias: A microeconomic critique of Google’s tangled position on the Web. New Media Soc. 2014, 16, 195–211. [Google Scholar] [CrossRef]
  178. Ursu, R.M. The power of rankings: Quantifying the effect of rankings on online consumer search and purchase decisions. Mark. Sci. 2018, 37, 530–552. [Google Scholar] [CrossRef]
  179. Zannettou, S.; Caulfield, T.; De Cristofaro, E.; Sirivianos, M.; Stringhini, G.; Blackburn, J. Disinformation warfare: Understanding state-sponsored trolls on Twitter and their influence on the web. In Proceedings of the WWW ‘19: Companion Proceedings of The 2019 World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 218–226. [Google Scholar]
  180. Pavlíková, M.; Šenkýřová, B.; Drmola, J. Propaganda and disinformation go online. In Challenging Online Propaganda and Disinformation in the 21st Century; Gregor, M., Mlejnková, P., Eds.; Springer: Cham, Switzerland, 2021; pp. 43–74. [Google Scholar]
  181. Pan, C.A.; Yakhmi, S.; Iyer, T.P.; Strasnick, E.; Zhang, A.X.; Bernstein, M.S. Comparing the perceived legitimacy of content moderation processes: Contractors, algorithms, expert panels, and digital juries. Proc. ACM Hum. Comput. Interact. 2022, 6, 1–31. [Google Scholar] [CrossRef]
  182. Yaccarino, L. Why X Decided to Bring the Content Police in-House. 6 February 2024. Available online: https://fortune.com/2024/02/06/inside-elon-musk-x-twitter-austin-content-moderation (accessed on 1 August 2024).
  183. Coeckelbergh, M. LLMs, truth, and democracy: An overview of risks. Sci. Eng. Ethics 2025, 31, 4. [Google Scholar] [CrossRef]
  184. Lazar, S.; Manuali, L. Can LLMs advance democratic values? arXiv 2024, arXiv:2410.08418. [Google Scholar] [CrossRef]
185. Spennemann, D.H.R. “Delving into”: The quantification of AI-generated content on the internet (synthetic data). arXiv 2025, arXiv:2504.08755. [Google Scholar] [CrossRef]
  186. Brooks, C.; Eggert, S.; Peskoff, D. The Rise of AI-Generated Content in Wikipedia. arXiv 2024, arXiv:2410.08044. [Google Scholar] [CrossRef]
  187. Wagner, C.; Jiang, L. Death by AI: Will large language models diminish Wikipedia? J. Assoc. Inf. Sci. Technol. 2025, 76, 743–751. [Google Scholar] [CrossRef]
188. McGee, R.W. Ethics committees can be unethical: The ChatGPT response. SSRN 2023. Available online: https://ssrn.com/abstract=4392258 (accessed on 1 August 2024).
  189. McGee, R.W. Can Tax Evasion Ever Be Ethical? A ChatGPT Answer. SSRN 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4413428 (accessed on 1 August 2024).
  190. Hunt, C.; Rouse, S.M. Polarization and Place-Based Representation in US State Legislatures. Legis. Stud. Q. 2024, 49, 411–424. [Google Scholar] [CrossRef]
  191. Forster, C.M.; Dunlop, D.A. Divided We Advertise: A Comparative Analysis of Post-Citizens United Political Advertising in an Increasingly Polarised United States. Bristol Inst. Learn. Teach. (BILT) Stud. Res. J. 2024, 28, 1–12. [Google Scholar]
  192. Draca, M.; Schwarz, C. How polarised are citizens? Measuring ideology from the ground up. Econ. J. 2024, 134, 1950–1984. [Google Scholar] [CrossRef]
  193. Hughes, S.; Spennemann, D.H.R.; Harvey, R. Printing heritage of colonial newspapers in Victoria: The Ararat Advertiser and the Avoca Mail. Bull. Bibliogr. Soc. Aust. N. Z. 2004, 28, 41–61. [Google Scholar]
  194. Gerard, P.; Botzer, N.; Weninger, T. Truth Social Dataset. In Proceedings of the International AAAI Conference on Web and Social Media, Limassol, Cyprus, 5–8 June 2023; pp. 1034–1040. [Google Scholar]
  195. Roberts, J.; Wahl-Jorgensen, K. Strategies of alternative right-wing media: The case of Breitbart News. In The Routledge Companion to Political Journalism; Routledge: London, UK, 2021; pp. 164–173. [Google Scholar]
  196. MohanaSundaram, A.; Sathanantham, S.T.; Ivanov, A.; Mofatteh, M. DeepSeek’s Readiness for Medical Research and Practice: Prospects, Bottlenecks, and Global Regulatory Constraints. Ann. Biomed. Eng. 2025, 53, 1754–1756. [Google Scholar] [CrossRef]
  197. Girich, M.; Magomedova, O.; Levashenko, A.; Ermokhin, I.; Chernovol, K. Restricting DeepSeek operations, obligating platforms to pay tips, restricting the sale of personal data, protecting intellectual property rights in AI training, anti-competitive practices online. SSRN 2025. [Google Scholar] [CrossRef]
  198. Henry, C. They make press barons look good. Br. J. Rev. 2025, 36, 13–18. [Google Scholar] [CrossRef]
  199. Wilson, G.K. Business, Politics, and Trump. In The Changing Character of the American Right, Volume II: Ideology, Politics and Policy in the Era of Trump; Springer: Berlin/Heidelberg, Germany, 2025; pp. 53–74. [Google Scholar]
  200. Neundorf, A.; Nazrullaeva, E.; Northmore-Ball, K.; Tertytchnaya, K.; Kim, W. Varieties of Indoctrination: The Politicization of Education and the Media around the World. Perspect. Politics 2024, 22, 771–798. [Google Scholar] [CrossRef]