1. Introduction
In the past few decades, the internet has evolved into an indispensable tool for individuals seeking information across various domains such as healthcare, education, and daily decision making [1,2,3,4]. The convenience, immediacy, and accessibility offered by digital platforms have created an unprecedented information ecosystem, where users can quickly find answers to almost any question [5,6,7]. However, the quality, accuracy, and trustworthiness of the information found online are not always assured [8,9,10,11].
This paradox of abundance and uncertainty has raised important questions about the challenges users face when attempting to use the internet as a reliable resource. These challenges include the widespread proliferation of misinformation, disparities in digital literacy, inequitable access to technology, and motivational barriers to seeking reliable information [12,13,14,15]. Ethical concerns surrounding content regulation and monitoring further complicate the issue of ensuring the reliability of online information, especially with the advent of artificial intelligence [16,17,18].
Artificial intelligence (AI) has played an increasingly prominent role in recent years in shaping how information is disseminated, moderated, and consumed online [19,20]. AI-driven algorithms now curate content, flag misinformation, and moderate user interactions, raising both opportunities and challenges in the quest for reliable online information [21,22,23,24]. AI systems can improve the efficiency of content moderation and help users navigate the vast information landscape; however, concerns remain about algorithmic biases, lack of transparency, and the ethical implications of AI-driven decision making [25,26].
This entry paper provides an overview of these challenges by summarizing the existing literature on the barriers to effective information use on the internet. It examines key obstacles, such as misinformation, digital literacy disparities, motivational issues, access to technology, and the increasing influence of AI in content regulation and moderation. This review of current research and public discourse highlights the complex factors that shape users’ ability to navigate and critically engage with online information.
2. The Rise of the Internet
2.1. Historical Context
The internet was initially developed for communication between research institutions but has undergone profound transformation since its inception. Its early roots were grounded in the exchange of scientific research and military communication. Beginning in the 1960s, the Advanced Research Projects Agency Network (ARPANET), funded by the U.S. Department of Defense, laid the groundwork for a networked world by allowing a small group of academic institutions to share resources and data [27]. This closed network gradually expanded in the 1980s with the creation of the Transmission Control Protocol/Internet Protocol (TCP/IP), which facilitated broader connectivity.
The democratization of the internet began with the launch of the World Wide Web in 1991. Developed by Tim Berners-Lee, the Web introduced a standardized way of linking documents, making it easier for users to navigate the rapidly growing network of information. This standardization shifted the internet from a tool for specialists to a public resource for communication and information exchange [28]. The explosion of the Web, combined with advances in internet infrastructure and the development of broadband technologies, contributed to a surge in internet use by individuals and businesses alike. By the 2000s, the internet had become a ubiquitous, global platform, with penetration rates growing rapidly in many regions [29].
The advent of smartphones in the late 2000s further accelerated this transformation, allowing for the rapid dissemination of information through mobile networks. By 2024, an estimated 65% or more of the world's population had internet access, a significant leap from just 20% in the early 2000s [30,31]. This global shift has turned the internet into critical infrastructure not only for communication but also for the dissemination of knowledge, entertainment, and commerce. Arguably, it has fundamentally changed the way individuals interact with information and with each other [32].
2.2. The Evolution of Online Information
As internet access became more widespread, so did the ways individuals consume and interact with information. A relatively static platform for retrieving research papers and institutional data has transformed into a dynamic space where users are both content consumers and creators [33]. The rise of websites, blogs, and social media platforms has blurred the lines between professional content and user-generated information [34]. Platforms such as Facebook, YouTube, and X allow everyday users to contribute to the flow of information, creating a decentralized knowledge ecosystem where anyone with internet access can participate [35,36].
This shift has led to an unprecedented diversity of perspectives, which has made it more difficult to verify the accuracy and credibility of information [37,38]. With traditional media, content is filtered through editorial standards and professional gatekeepers. In contrast, the internet allows for unregulated information sharing. Social media algorithms, designed to maximize user engagement, often prioritize sensational content that garners high levels of interaction regardless of its accuracy [15,39,40]. The result has been the spread of misinformation and disinformation, particularly on topics such as public health, politics, and science [15,41,42].
This decentralized and user-driven nature of the internet has introduced what scholars refer to as the “post-truth era”, where the distinction between fact and opinion is increasingly blurred [42]. This confusion is particularly evident in the context of social media, where the most engaging content tends to be shared more widely than carefully vetted information [43]. In fact, studies have shown that false information spreads faster and more broadly on platforms like X, where emotionally charged or sensational content is more likely to be shared [44,45]. This evolution in online information consumption has therefore raised critical questions about how to maintain the integrity and reliability of information in an increasingly crowded and unregulated digital space.
2.3. Challenges of Information Overload
The concept of information overload was first introduced in the 1960s [46] and further developed by Alvin Toffler [47]; it has taken on new meaning in the digital age. Today, the sheer volume of information available online has created an environment where users struggle to process and filter relevant data. Online search engines such as Google provide access to millions of results within seconds, and sifting through this information to find credible sources is a daunting task [48,49]. Social media platforms exacerbate this issue by continuously presenting users with a barrage of content (news, opinions, advertisements, and entertainment—all competing for attention) [50].
The consequences of information overload are profound. Research shows that when individuals are faced with too much information, they often default to heuristic processing, using mental shortcuts to make decisions rather than engaging in deeper, more critical thinking [51]. This strategy can lead to a preference for easily accessible information and the avoidance of careful evaluation of source credibility [39]. Additionally, users may suffer from “confirmation bias”, selectively seeking out information that aligns with their pre-existing beliefs [52].
One of the primary concerns with information overload is its impact on public trust in information. As the volume of available content grows, so does skepticism about which sources are reliable. In this environment, false information and conspiracy theories thrive, often drowning out verified, factual content. Studies suggest that information overload can lead to feelings of anxiety, frustration, and even disengagement from the search for accurate information [48]. As users become overwhelmed, they may be more likely to rely on algorithmically suggested content, which is curated for engagement rather than credibility [19]. This phenomenon, combined with the decentralized nature of content creation, has raised important questions about how to address the cognitive and social impacts of information overload in the digital age.
3. The Impact of Misinformation and Disinformation
3.1. Defining Misinformation and Disinformation
Misinformation and disinformation are two related but distinct phenomena. They have become increasingly prevalent in the digital age and affect how individuals use and interpret information online [37]. Misinformation refers to false or inaccurate information that is shared without the intent to deceive, often as a result of misunderstanding or lack of verification [38]. In many cases, misinformation arises from genuine attempts to share information. Inaccuracies proliferate due to the ease with which content can be shared across digital platforms, frequently without fact-checking or source verification [37]. Misinformation can be inadvertently passed along by users who believe they are sharing valid content, but who are actually contributing to the unintentional spread of falsehoods.
In contrast, disinformation involves the deliberate creation and distribution of false information, with the explicit aim of misleading or manipulating the audience, whether for political, economic, or social purposes [53]. Disinformation is often used as a tool in orchestrated campaigns designed to mislead the public, sow discord, or undermine trust in institutions. Whereas misinformation spreads primarily through ignorance or error, disinformation is crafted with a particular strategic agenda and is far more insidious and challenging to counter. Disinformation campaigns are frequently carried out by political actors, organizations, or state entities, and common tactics exploit the algorithmic tendencies of social media platforms to ensure widespread dissemination [54,55].
For instance, both types of false information ran rampant during the COVID-19 pandemic. Misinformation about the virus’s spread, treatments, and vaccines often emerged from misunderstanding or fear. Disinformation campaigns, in contrast, deliberately sought to undermine public trust in health authorities, promote unverified treatments, or manipulate political narratives [11,56]. These campaigns were designed not only to mislead but also to fracture public consensus and weaken institutional responses. Addressing both misinformation and disinformation therefore requires a nuanced understanding of their distinct causes, methods of dissemination, and impacts on public opinion and behavior.
3.2. The Spread of Misinformation and Disinformation
The digital environment has accelerated the dissemination of both misinformation and disinformation. Social media platforms, search engines, and online forums enable rapid content sharing without rigorous verification processes, leading to the unchecked proliferation of false information [38]. The algorithmic structures of platforms such as Facebook, X, and YouTube are designed to prioritize content that generates high engagement—such as likes, shares, and comments—regardless of its accuracy [15]. These algorithms favor sensational or emotionally charged content, which is more likely to go viral than more measured, factual information. In traditional media, gatekeeping mechanisms, such as editorial review, help filter content for accuracy; however, these checks are largely absent in online communication. As a result, both types of false information can spread rapidly, often going viral before fact-checkers or regulatory measures can intervene [54].
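To make this mechanism concrete, the following minimal Python sketch illustrates an engagement-driven ranking rule of the kind described above. The weights, field names, and example posts are all hypothetical, chosen only to show that a score built solely from likes, shares, and comments is blind to accuracy:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known only to us; never consulted by the ranker

def engagement_score(post: Post) -> float:
    """Hypothetical ranking rule: a weighted sum of interaction counts.
    Accuracy plays no part in the score."""
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

feed = [
    Post("Sensational rumor", likes=900, shares=400, comments=250, is_accurate=False),
    Post("Careful fact-check", likes=120, shares=15, comments=30, is_accurate=True),
]

# Sorting purely by engagement pushes the inaccurate post to the top.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  accurate={post.is_accurate}  {post.text}")
```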
Vosoughi et al. [15] found that false news stories often include both misinformation (unintentional falsehoods) and disinformation (deliberate falsehoods). The authors reported that false news stories are 70% more likely to be retweeted than true stories. The psychological appeal of shocking or surprising content captures attention and makes the information more likely to be shared, even when it is unverified. Users also have a tendency to share content that aligns with their pre-existing beliefs, a phenomenon known as confirmation bias [42]. Such biases regularly result in individuals propagating false information without critically assessing its veracity. Disinformation is particularly problematic, as it is intentionally designed to manipulate and mislead users for specific political, social, or economic purposes [57]. These campaigns often leverage the same algorithmic tendencies that amplify misinformation, but with a strategic objective to influence public opinion, elections, or health behaviors.
The use of bots and automated accounts to amplify disinformation is another growing concern. During critical events, such as elections or global crises like the COVID-19 pandemic, coordinated efforts to spread disinformation can have significant impacts on public discourse and decision making [58]. For instance, disinformation campaigns during the pandemic used bots to spread misleading content about the origins of the virus, vaccine safety, and the effectiveness of government responses [56,59]. These dynamics underscore the need for greater oversight of social media algorithms and more robust content moderation practices to mitigate the spread of both misinformation and disinformation.
Beyond social media, misinformation is frequently encountered in online repositories, blogs, and even news outlets that may not adhere to stringent editorial guidelines. Predatory journals are a notable example: often operating without rigorous peer review, they target professionals with pseudoscientific or inaccurate studies disguised as credible research, misleading both the public and professionals. For example, during the COVID-19 pandemic, certain journals published unverified treatment studies, leading to public confusion and undermining trust in legitimate science [60,61]. Recognizing this broad scope of misinformation sources allows individuals with strong digital information literacy skills to better navigate and critically assess these diverse platforms.
4. Digital Literacy as a Tool Against Misinformation
As misinformation and disinformation continue to proliferate online, digital literacy emerges as an essential skill for empowering individuals to navigate and critically engage with information. This section explores the concept of digital literacy, its key components, and its critical role in helping individuals assess the credibility of information, recognize misinformation, and use digital content responsibly.
4.1. Defining Digital Literacy
Digital literacy involves more than basic technological proficiency; it refers to the skills necessary to critically engage with, evaluate, and create content in the digital landscape [13]. Digital literacy also encompasses the ability to critically assess the credibility of online sources, recognize misinformation, and engage with content in a responsible manner [62]. Both technical skills, such as using search engines or social media platforms, and cognitive skills, such as critical thinking, problem solving, and information evaluation, are essential.
At its core, digital literacy requires individuals to distinguish between reliable and unreliable sources, verify information through multiple channels, and understand the ethical implications of sharing or engaging with content [63]. These skills are becoming increasingly vital as the internet grows more saturated with conflicting and often misleading information. The rise in misinformation and disinformation, as previously discussed, underscores the importance of digital literacy in today’s digital landscape. Without these competencies, individuals are vulnerable to manipulation, often unwittingly spreading false information or making decisions based on inaccurate data.
Moreover, digital literacy extends to understanding the broader social, political, and economic implications of digital technologies. For example, users need to be aware of how algorithms curate the content they see, amplifying certain perspectives while suppressing others [64]. This awareness is a key component of navigating digital space responsibly and avoiding the echo chambers that can distort one’s understanding of the world. Digital literacy is therefore an indispensable tool in ensuring that individuals can use online information responsibly and effectively, both for personal decision making and for contributing meaningfully to the broader digital ecosystem.
4.2. Barriers to Developing Digital Literacy
Despite the increasing need for digital literacy, numerous barriers prevent individuals from developing these critical skills. A primary barrier is inequality in access to education and digital tools, which often correlates with socioeconomic status, geographic location, and age [14]. These inequalities disproportionately affect marginalized populations, including older adults, rural communities, and economically disadvantaged groups, limiting their ability to engage effectively with digital content. Many individuals in these groups lack access to high-speed internet, modern digital devices, or educational programs that would enable them to improve their digital literacy skills [65,66]. Furthermore, lower levels of education are directly associated with gaps in digital literacy [14]. Individuals with limited formal education may not have been exposed to critical thinking or information evaluation skills, both of which are essential for navigating today’s complex digital landscape [32]. In environments where misinformation and disinformation flourish, such individuals are more susceptible to false information and less able to verify the accuracy of the content they encounter.
Another significant barrier is the pace of technological change. Even individuals who are digitally literate may struggle to keep up with rapidly evolving platforms, tools, and trends [67,68]. Users must constantly adapt and update their skills as digital platforms continuously update their interfaces, introduce new features, or adopt new algorithms [69]. Digital literacy is thus a persistent, ongoing process of learning and adaptation, not a one-time achievement.
A number of factors directly affect digital literacy. Age is an obvious one: the knowledge gap is particularly acute between younger users who have grown up with digital technologies and older users who may be less familiar with them [62]. Digital literacy is also affected by language barriers and cultural differences. In multilingual and multicultural societies, access to online information and educational resources in one’s native language may be limited, making it more difficult for non-native speakers to fully engage with digital content [70,71,72]. These barriers contribute to a digital divide that not only limits individuals’ access to online resources but also affects their ability to participate meaningfully in the digital economy.
4.3. Bridging the Digital Literacy Gap
Addressing the digital literacy gap requires targeted interventions aimed at various age groups, social demographics, and geographic regions. Educational institutions play a crucial role in fostering digital literacy, particularly by integrating digital skills and critical thinking into the curriculum from an early age. Hobbs [73] highlighted that digital literacy programs in schools should include access to digital libraries and media resources to help students develop skills to critically evaluate online information, verify sources, and engage responsibly with digital content. These competencies are increasingly important as young people rely on the internet for research, social interaction, and entertainment.
Public institutions, such as libraries and community centers, are also essential resources for improving digital literacy, especially in underserved or rural areas. Bertot et al. [74] argued that libraries are uniquely positioned to address the digital literacy gap by offering accessible educational programs and resources such as workshops on internet safety, online research techniques, and critical evaluation skills. These programs would provide access to digital tools that help individuals of all ages improve their digital skills [75]. Additionally, community organizations play an important role in supporting local digital literacy campaigns, raising awareness about common pitfalls in online information-seeking behaviors, including confirmation bias, echo chambers, and the influence of social media algorithms [76].
Media literacy campaigns are critical for educating the public on how to assess and navigate online information. These campaigns teach users to recognize tactics used in misinformation and disinformation, helping them verify facts, cross-check sources, and engage critically with online content. Hobbs [73] further emphasized that effective media literacy efforts empower individuals to make informed decisions based on credible information in a digital landscape where misinformation proliferates, especially on social media platforms.
At the policy level, governments and non-governmental organizations have a role to play in promoting digital literacy by developing programs and policies that foster inclusivity in the digital world. Public policy initiatives could focus on improving access to technology for underserved populations, reducing the cost of digital tools, and creating national digital literacy programs to ensure that individuals from all backgrounds have the opportunity to improve their digital skills [32]. Partnerships between the public and private sectors could also help bridge the gap, with technology companies providing resources and training to underserved communities.
5. The Digital Divide and Accessibility Issues
5.1. Understanding the Digital Divide
The digital divide refers to the gap between those who have access to modern information and communication technologies and those who do not. These disparities often reflect broader societal inequalities in socioeconomic status, education level, geographic location, and age [77]. This divide has a direct impact on individuals’ ability to find, assess, and use reliable information online, affecting everything from education to healthcare access and civic engagement.
The digital divide manifests in two primary ways. The first-level digital divide concerns disparities in access to the necessary infrastructure, such as broadband internet, mobile networks, and computing devices. Urban areas in more developed countries often enjoy fast, reliable internet access, while rural and low-income regions may still lack basic connectivity [78]. In developing countries, these gaps are particularly pronounced, leaving large segments of the population without the ability to engage in the digital economy or access essential online services [79]. In these areas, barriers such as outdated infrastructure, poor signal coverage, or lack of technological investment prevent equitable access to the internet. The second-level digital divide refers to differences in how individuals use technology once they gain access. This disparity includes differences in digital literacy—the ability to effectively navigate, evaluate, and create information through digital platforms. People with lower education levels, less exposure to technology, or fewer technical skills may find it difficult to use online resources effectively, even when internet access is available [80].
In this way, the digital divide creates a two-fold problem: many individuals are left out of the digital ecosystem entirely due to infrastructure gaps, while those who are connected may nonetheless lack the skills to fully benefit from the information available online. As society becomes increasingly reliant on digital tools, the divide risks perpetuating social inequalities and excluding marginalized populations from the benefits of the digital age.
5.2. Socioeconomic, Geographic, and Demographic Barriers
In addition to the divide created by differences in digital literacy, the affordability of technology remains a significant barrier. This issue is particularly acute for individuals in low-income and underserved communities, especially those residing in developing countries, where digital technology adoption is often limited by economic and infrastructural constraints. For instance, while global smartphone adoption has increased over the past decade, the costs associated with devices, internet subscriptions, and data plans remain prohibitively high for many people in low-income regions. In several developing countries, the cost of one gigabyte of mobile data can represent a substantial portion of an individual’s monthly income, making sustained internet use unaffordable for much of the population [30,81]. This economic barrier prevents entire populations from participating in the digital economy, accessing vital information, or benefiting from digital healthcare and education services. Moreover, individuals who cannot afford high-end devices may face technological limitations: cheaper devices may not support all necessary applications or may offer slower processing speeds, further impeding their ability to participate fully in the digital environment.
Gender and age are additional factors that influence digital accessibility. Studies show that women in developing countries are significantly less likely than men to have access to digital tools [82]. Similarly, older adults, in both high-income and low-income countries, may face higher barriers to digital adoption due to lower digital literacy levels or limited access to affordable technology [83]. These demographic divides exacerbate digital exclusion, leaving women and older populations with fewer opportunities to engage in the digital landscape.
Infrastructure is another critical factor that exacerbates the digital divide. Even in high-income countries, many rural areas still lack reliable high-speed broadband. Residents find it difficult to engage in online activities that require a stable internet connection, such as streaming educational videos, attending virtual meetings, or accessing telemedicine services [84]. These gaps in infrastructure not only limit personal use of the internet but also carry broader social and economic consequences, such as reduced opportunities for business development and remote work and disrupted access to global markets in rural regions.
The issue of infrastructure is particularly evident in healthcare, where telemedicine has become an increasingly important tool for providing services in underserved regions. Poor internet connectivity means that telemedicine is often unavailable to the rural populations that could benefit from it the most, and individuals in these areas are significantly less likely to use telemedicine services. This lack of access is further compounded by gender- and age-related barriers, as studies indicate that women and older adults are less likely to use telemedicine services due to limited digital access and skills [85,86]. This digital divide in healthcare has serious implications for health equity, as individuals in digitally underserved areas may face delays in accessing care, exacerbating existing health disparities.
5.3. Strategies to Close the Digital Divide
Efforts to bridge the digital divide must adopt a multifaceted approach that addresses infrastructure gaps, affordability challenges, and digital literacy together; any narrower approach is likely to be unsuccessful. To close this divide, national governments, international organizations, and the private sector must work collaboratively to implement strategies that ensure equitable access to technology.
Governments play a critical role in developing and improving infrastructure, particularly in underserved and rural areas. Public initiatives such as expanding broadband networks, investing in fiber-optic technologies, and establishing public Wi-Fi programs can significantly boost internet accessibility [79]. These infrastructure development projects should prioritize rural as well as economically disadvantaged areas where digital access lags behind. Fortunately, public–private partnerships have successfully expanded digital infrastructure. For instance, companies like Facebook have launched projects to provide internet access in remote regions, utilizing innovative methods such as drones and satellites [87]. These solutions offer a way to bypass the physical challenges of building traditional infrastructure in hard-to-reach locations.
Addressing affordability is another key component in bridging the digital divide. Government subsidies, affordable data plans, and access to cost-effective devices are essential to making digital access more equitable. Programs that subsidize internet costs or offer discounted rates to low-income households can have a profound impact on increasing accessibility. Moreover, offering affordable devices through government-sponsored initiatives or private-sector programs ensures that more individuals have the necessary hardware to participate fully in the digital environment [88,89].
Beyond infrastructure and affordability, addressing the second-level digital divide requires targeted interventions to improve digital literacy. Schools, libraries, and community organizations are well positioned to offer digital skills training, helping individuals navigate the internet, verify sources, and protect themselves from misinformation [90]. Public libraries have become hubs for digital literacy education, providing workshops on topics ranging from basic internet usage to advanced online research techniques [75]. Similarly, community centers can play a vital role in reaching underserved populations by offering training on how to engage responsibly with online content [91].
6. Motivational and Cognitive Factors in Information Seeking
In addition to digital literacy and access, motivational and cognitive factors play a significant role in shaping how individuals seek and engage with information online. This section examines how intrinsic and extrinsic motivations, along with psychological tendencies and cognitive biases, influence the quality of information-seeking behavior in the digital landscape.
6.1. Role of Motivation in Information Seeking
The pursuit of reliable information in the digital landscape is not solely dependent on technological access or digital literacy. Motivation serves as a critical determinant of the depth and quality of individuals’ engagement with information [92]. The intrinsic and extrinsic motivations that drive users to engage critically with the internet can vary significantly. Theories such as Information Foraging Theory [93] suggest that individuals are likely to seek out information when they perceive that the benefits of acquiring it, such as resolving uncertainty or enhancing their knowledge base, outweigh the cognitive costs. This cost–benefit analysis becomes skewed in digital environments, where information is abundant and readily accessible. The low threshold for information acquisition can lead to an over-reliance on passive consumption, and individuals may default to easily digestible content rather than seeking out more nuanced or authoritative sources [94].
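For readers who want the underlying model, Information Foraging Theory formalizes this cost–benefit logic with a rate-of-gain equation borrowed from optimal foraging theory. The following is a minimal sketch of that patch model in conventional notation, an illustration of the theory's core idea rather than a reproduction of the full treatment in [93]:

```latex
% Rate of gain R of valuable information per unit time:
%   G   = total net amount of valuable information gained,
%   T_B = time spent searching between information "patches"
%         (e.g., moving among sites or result lists),
%   T_W = time spent exploiting information within patches.
R = \frac{G}{T_{B} + T_{W}}
```

Foragers are predicted to behave so as to maximize R. Online, where moving between sources costs almost nothing, shallow sampling of many easily reached patches becomes the rational strategy, which is consistent with the passive, low-effort consumption described above.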
Psychological phenomena such as confirmation bias further complicate the motivation to seek out accurate and diverse information [52]. Individuals, consciously or unconsciously, gravitate toward information that corroborates their pre-existing beliefs, which diminishes the likelihood of engaging with content that challenges their perspectives [52]. In the digital age, confirmation biases are exacerbated by algorithmic curation on social media platforms. Users effectively become trapped in echo chambers, and their exposure to contradictory viewpoints can become almost nonexistent [95]. As a result, a user’s initial motivation feeds a mechanism that reinforces their biases, wherein individuals are consistently validated by familiar and agreeable content.
Additionally, the sheer volume of information available online can create a paradox. Rather than encouraging deeper inquiry, the abundance of data may disincentivize users from critically evaluating their sources. With the illusion of easy access to reliable information, users may feel less compelled to engage in active, effortful information-seeking practices. The result is a landscape in which convenience trumps credibility, leaving users vulnerable to misinformation or shallow understandings of complex issues [96].
6.2. Cognitive Load and Information Processing
Processing information always involves some mental effort. However, the strain on cognitive resources is particularly problematic on the internet, where individuals face an immediate deluge of digital content [97,98]. Sweller [98] argued that cognitive load theory helps explain the user’s dilemma. When individuals are faced with high cognitive load, whether due to the complexity of the information or the need to process multiple conflicting sources, they are more likely to resort to heuristics or cognitive shortcuts. These heuristics streamline decision making but at the cost of thorough analysis. In such scenarios, users tend to prioritize information that is readily accessible or emotionally salient, irrespective of its factual accuracy.
This reliance on heuristics is particularly prevalent in social media ecosystems, where rapid-fire content consumption is normalized [99]. Platforms that inundate users with an unrelenting stream of posts, updates, and notifications amplify cognitive strain, making it increasingly difficult to engage in meaningful, critical evaluation. Instead, users may gravitate toward content that resonates on an emotional level, favoring the immediate gratification of relatable or emotionally charged narratives; the more arduous tasks of verifying facts and considering alternative perspectives are abandoned [100]. This preference for emotionally driven content further exacerbates the spread of misinformation, as such content is often shared and consumed more rapidly than evidence-based information [15].
The consequences of cognitive overload extend beyond the mere consumption of unreliable information [48,101]. High cognitive load can actively diminish motivation to engage with complex or conflicting data and can foster a tendency to avoid or selectively process information that challenges entrenched beliefs [97,102]. Kahan [103] suggested that individuals who experience cognitive fatigue are less inclined to scrutinize information that contradicts their ideological predispositions, thus perpetuating selective exposure and deepening societal polarization. In this context, the motivation to seek reliable information becomes subsumed by the desire to reduce cognitive dissonance. Ultimately, the user is led to passive information engagement rather than the active, critical inquiry required to navigate a multifaceted digital landscape.
6.3. Strategies to Encourage Active Information Seeking
Given the challenges posed by cognitive load and motivational barriers, it is imperative to develop strategies that encourage active information-seeking behaviors. One of the most effective means of doing so is optimizing the presentation of information. However, the “news-find-me perception” phenomenon, as discussed by Gil de Zúñiga and Diehl [104], complicates this effort. This perception leads individuals to believe that essential information will find them automatically through their media consumption, peers, and social networks, reducing their motivation to seek information actively.
Addressing this passive approach to information acquisition is crucial for fostering critical engagement with content. Lurie [105] highlighted the importance of structuring information in a clear, concise, and navigable format, which can reduce the cognitive effort required to process complex material. For example, online platforms can simplify the user interface and offer digestible summaries or visual aids. Such changes can promote critical engagement and encourage users to delve deeper into credible sources without feeling overwhelmed by the volume or complexity of the content.
Educational interventions also play a vital role in cultivating a culture of active information seeking. Interventions that promote digital literacy and critical thinking skills can equip individuals with the cognitive tools necessary to navigate the digital landscape responsibly. Educational campaigns that focus on the tangible risks of misinformation, whether in public health, politics, or social movements, can increase awareness of the importance of seeking reliable information [42]. By illustrating real-world examples of the consequences of misinformation, such campaigns enhance users’ motivation to critically assess the information they encounter.
Furthermore, platform design itself can incentivize more active forms of information seeking. Online platforms that integrate fact-checking features, provide content warnings, or highlight diverse perspectives can foster a more critical approach to information consumption. Additionally, incorporating features that allow users to verify information directly through cross-referencing tools or credible source citations can significantly enhance the quality of their engagement [106,107].
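As an illustration of the cross-referencing features mentioned above, the Python sketch below labels a post against a small store of fact-checked claims. Everything here is hypothetical: the `FACT_CHECKS` store, the URLs, and the exact-match rule are stand-ins, and a real platform would query independent fact-checking services with far more robust claim matching.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical local store of fact-checked claims; a real platform would
# query independent fact-checking organizations instead.
FACT_CHECKS = {
    "miracle cure reverses infection overnight": ("false", "https://example.org/check/123"),
    "handwashing reduces the spread of infection": ("true", "https://example.org/check/456"),
}

@dataclass
class LabeledPost:
    text: str
    label: str                 # "true", "false", or "unverified"
    source_url: Optional[str]  # link to the fact-check, when one exists

def cross_reference(text: str) -> LabeledPost:
    """Attach a verification label if the post matches a known fact-check."""
    key = text.strip().lower()
    if key in FACT_CHECKS:
        verdict, url = FACT_CHECKS[key]
        return LabeledPost(text, verdict, url)
    return LabeledPost(text, "unverified", None)

post = cross_reference("Miracle cure reverses infection overnight")
if post.label == "false":
    print(f"Disputed content. See fact-check: {post.source_url}")
```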
7. Content Moderation and Ethical Challenges
As the internet continues to expand as a platform for knowledge sharing, it also presents complex ethical and logistical challenges in moderating digital content. This section explores the inherent difficulties in monitoring online content, the limitations of automated systems, and the ethical dilemmas surrounding free speech, transparency, and accountability in content regulation.
7.1. Challenges in Monitoring Digital Content
The internet’s role as a vast repository of information, combined with its decentralized nature, has made it both a powerful tool for knowledge dissemination and a breeding ground for misinformation and disinformation. Monitoring and regulating online content effectively pose significant challenges due to the sheer volume of data generated, the diversity of platforms, and the global nature of the internet. One of the most pressing challenges is the lack of comprehensive content regulation and monitoring. Some platforms, such as Facebook, YouTube, and X, have implemented content moderation policies; however, these measures often fall short in managing the vast quantities of information produced every second [16]. The deluge of content makes it difficult to maintain quality control, particularly when harmful or misleading information can spread quickly.
The global nature of the internet further complicates these efforts, as regulatory frameworks in one country may not apply to content hosted in another, creating legal and operational complexities for online platforms that operate across borders. For example, regulations such as the General Data Protection Regulation in the European Union establish specific standards for user privacy; however, these rules may not align with content policies in other regions, such as the United States, where protections for free speech are broader [16,108]. The lack of universal standards for content regulation exacerbates the challenge of monitoring online information effectively.
Content moderation on major platforms relies heavily on automated systems and algorithms to manage the overwhelming amount of data [109]. These algorithms are efficient at flagging certain types of content, such as explicit material or known sources of misinformation, yet they are far from foolproof. Automated moderation systems frequently produce both false positives (incorrectly flagging legitimate content as harmful) and false negatives (failing to identify harmful content), which can have significant consequences [110]. Additionally, reliance on automated systems raises concerns about transparency and accountability. Users often receive little to no explanation for why their content was flagged or removed, which can foster a sense of mistrust in the platform. When legitimate content is mistakenly flagged as harmful, it can have serious implications, such as stifling important political discourse or censoring marginalized voices [110]. These difficulties highlight the need for a more balanced approach to content moderation, one that incorporates both technological solutions and human oversight.
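These two failure modes map directly onto standard classifier metrics. The following minimal Python sketch, using invented daily counts purely for illustration, shows how precision captures over-removal (false positives) and recall captures under-removal (false negatives):

```python
# Hypothetical daily tally from an automated moderation classifier.
true_positives = 850    # harmful posts correctly flagged
false_positives = 120   # legitimate posts wrongly flagged (over-removal)
false_negatives = 300   # harmful posts missed (under-removal)

# Precision: of everything flagged, how much was actually harmful?
precision = true_positives / (true_positives + false_positives)

# Recall: of all harmful posts, how many were caught?
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.2f}")  # low precision -> censorship risk
print(f"recall    = {recall:.2f}")     # low recall    -> harmful content spreads
```

Because raising one metric typically lowers the other, tuning a purely automated system forces exactly the trade-off between suppressing legitimate speech and letting harmful content circulate that the paragraph above describes.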
7.2. Ethical Challenges in Content Moderation
Efforts to regulate online content and reduce the spread of misinformation raise several ethical dilemmas, particularly in balancing the protection of public safety with safeguarding freedom of speech. On one hand, content moderation is essential for curbing the dissemination of harmful or false information, which can lead to real-world consequences, influence serious national events such as elections, or exacerbate public health crises [17]. For example, misinformation about vaccines during the COVID-19 pandemic had a direct impact on public health and contributed to vaccine hesitancy, which undermined global health efforts [11,107]. In such cases, content moderation becomes a public safety necessity. However, overly aggressive content moderation policies run the risk of infringing upon individual rights to free expression, particularly in democratic societies where diverse viewpoints are critical to open discourse. When moderation is too restrictive, it can suppress legitimate speech, including dissenting opinions or minority perspectives, and lead to concerns about censorship [17]. This concern is particularly relevant in the context of political speech, where platforms may be accused of bias or of unevenly applying moderation standards depending on the topic or user group.
The use of AI in content moderation introduces another layer of ethical complexity. AI systems, while efficient, often lack the nuanced understanding required to accurately interpret the context of certain posts [111]. As a consequence, legitimate content is erroneously removed, disproportionately affecting marginalized or minority groups whose perspectives may not be adequately represented in the training data used to develop these systems. The opacity of AI-driven moderation processes also reduces transparency, as users often have little recourse to appeal decisions made by algorithms. This lack of transparency diminishes trust in the system and raises questions about the accountability of online platforms in moderating content fairly and justly [16].
Moreover, AI-driven moderation systems are subject to biases embedded in their design. These biases can perpetuate existing inequalities, such as under-representing voices from marginalized communities or favoring content that aligns with the dominant cultural norms encoded into the system [64]. Thus, while AI can enhance the efficiency of content moderation, it must be used with caution to avoid reinforcing structural biases and exacerbating existing inequities in online spaces.
7.3. Improving Content Moderation with AI and Human Oversight
To address the limitations of current content moderation systems and improve the reliability of online information, platforms must adopt more sophisticated approaches that combine the strengths of AI and human oversight. AI systems are adept at identifying patterns of misinformation and flagging harmful content at scale, while human moderators are essential for providing the contextual judgment that algorithms lack [110]. This hybrid model allows for a more nuanced approach to content moderation, ensuring that legitimate content is not mistakenly removed while harmful material is effectively curtailed.
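One common way to realize such a hybrid model is confidence-based routing: the automated system acts alone only on high-confidence cases and escalates ambiguous ones to human moderators. The Python sketch below illustrates this under assumed thresholds; the threshold values and the classifier scores are hypothetical, not taken from any cited system.

```python
from enum import Enum

class Decision(Enum):
    REMOVE = "remove"        # high-confidence harmful: act automatically
    HUMAN_REVIEW = "review"  # ambiguous: escalate for contextual judgment
    ALLOW = "allow"          # high-confidence benign: leave untouched

# Hypothetical thresholds; real systems would tune these per policy area.
REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def route(harm_probability: float) -> Decision:
    """Route a post based on a classifier's estimated probability of harm."""
    if harm_probability >= REMOVE_THRESHOLD:
        return Decision.REMOVE
    if harm_probability >= REVIEW_THRESHOLD:
        return Decision.HUMAN_REVIEW
    return Decision.ALLOW

# Stand-in scores from a trained model, one per post.
for score in (0.98, 0.72, 0.10):
    print(f"harm_probability={score:.2f} -> {route(score).value}")
```

The design choice here is that scale and judgment are split by confidence: the algorithm handles the unambiguous bulk, while the costly human attention is reserved for exactly the contested middle band where automated errors are most likely.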
A critical component of improving content moderation is increasing transparency in the decision-making process. Platforms should provide clear guidelines for users on how content is evaluated and removed, along with opportunities for users to appeal moderation decisions. Transparency is essential for maintaining user trust and ensuring that content moderation practices are perceived as fair and unbiased [112]. Furthermore, platforms can enhance their content moderation systems by partnering with third-party fact-checking organizations, such as the International Fact-Checking Network. These independent entities can provide an additional layer of verification for contentious or complex content, reducing the spread of misinformation while maintaining a commitment to free speech [38]. Fact-checking partnerships can also help mitigate the limitations of AI by incorporating expert human judgment into the process of evaluating content accuracy. By combining these elements, platforms can create more reliable and equitable systems for managing online content, fostering a safer and more informed digital environment.
8. Conclusions and Future Directions
In navigating today’s complex digital landscape, individuals encounter numerous challenges and barriers, from discerning reliable information to developing the skills necessary for effective digital engagement. This entry highlights the critical need for enhanced digital literacy to address these challenges, with particular emphasis on understanding the nature of misinformation and fostering resilience against it. Key aspects of digital literacy include not only the technical skills to use digital tools but also the evaluative skills necessary for critical engagement with online content. As highlighted throughout this discussion, the rapid spread of misinformation and disinformation through social media and other digital channels, such as predatory journals, creates an environment fraught with risks of misinterpretation and manipulation of information.
Educational institutions, public libraries, and policymakers all play essential roles in advancing digital literacy. Schools and universities should integrate digital literacy training into their curricula, focusing on critical evaluation skills, responsible content creation, and awareness of algorithmic influences on information accessibility. This comprehensive approach can help mitigate the spread of misinformation and equip individuals with tools to verify sources, enhancing their capacity to make informed decisions online. Libraries, as community information hubs, are also pivotal, providing accessible digital literacy resources that cater to a diverse audience and support the broader goal of equitable access to credible information.
In addition to these practical steps, it is essential to address the “digital divide”, which affects access to digital literacy resources. This divide often disproportionately impacts rural areas and marginalized communities. Programs aimed at bridging this gap, such as culturally tailored digital resources and multilingual content, are necessary to foster inclusive digital literacy. Moving forward, digital literacy research must evolve to address emerging technologies and the continuous changes in digital media consumption. Further studies could examine the efficacy of specific digital literacy interventions across different demographics, particularly in underserved communities. Additionally, exploring the impacts of AI-driven content recommendation systems and misinformation on user perceptions can deepen our understanding of digital literacy needs in modern society.
In summary, by adopting a multi-faceted approach to digital literacy, stakeholders across sectors can create a more informed, resilient, and digitally literate society. Even though we acknowledge that this entry is based on a non-systematic review of the existing literature, which may introduce bias, this paper underscores the need for an integrated response to digital literacy challenges, with the ultimate aim of empowering individuals to navigate the digital landscape responsibly and confidently. Future research should consider more systematic approaches, such as scoping reviews, to provide a comprehensive understanding of these challenges and their solutions.