Concept Paper

Toward a Comprehensive Model of Fake News: A New Approach to Examine the Creation and Sharing of False Information

1 University Library, California State University, Northridge, CA 91330, USA
2 Department of Psychology, California State University, Northridge, CA 91330, USA
* Author to whom correspondence should be addressed.
Societies 2021, 11(3), 82; https://doi.org/10.3390/soc11030082
Submission received: 4 May 2021 / Revised: 5 June 2021 / Accepted: 29 June 2021 / Published: 16 July 2021

Abstract
The authors discuss a new conceptual model to examine the phenomenon of fake news. Their model focuses on the relationship between the creator and the consumer of fake news and proposes a mechanism by which to determine how likely users may be to share fake news with others. In particular, it is hypothesized that information users would likely be influenced by seven factors in choosing to share fake news or to verify information, including the user’s: (1) level of online trust; (2) level of self-disclosure online; (3) amount of social comparison; (4) level of FoMO anxiety; (5) level of social media fatigue; (6) concept of self and role identity; and (7) level of educational attainment. The implications reach into many well-established avenues of inquiry in education, Library and Information Science (LIS), sociology, and other disciplines, including communities of practice, information acquiring and sharing, social positioning, social capital theory, self-determination, rational choice (e.g., satisficing and information overload), critical thinking, and information literacy. Understanding the multiple root causes of creating and sharing fake news will help to alleviate its spread. Relying too heavily on any one factor to combat fake news (education level, for example) may have limited impact on mitigating its effects. Establishing thresholds for a certain combination of factors may better predict the tendency of users to share fake news. The authors also speculate on the role information literacy education programs can play in light of a more complex understanding of how fake news operates.

1. Introduction

Despite the recent attention given to fake news by researchers in numerous academic disciplines such as computer science, library and information science, psychology, and sociology, as well as its identification as a growing problem in mass communications, politics, education, and society at large, engagement with fake news continues to proliferate [1,2,3]. Recent discussions of higher education’s inability to teach students how to identify fake news and conspiracy theories have appeared in prominent magazines and newspapers such as The Chronicle of Higher Education, the Los Angeles Times, and The Atlantic [4,5,6]. The criticisms are not unwarranted but tend to focus primarily on pedagogical failures—especially those of librarians (but not professors) and their reliance on information literacy strategies like the CRAAP test—rather than wider systemic causes of fake news. This limited scope unfortunately blunts the largely valid criticisms and concerns of an ongoing “infodemic” [7] and reduces the ability of educators to effect meaningful change. Certainly, if such longstanding traditional educational approaches are insufficient to alleviate the causes of fake news, it is also vital to reconsider the wider root causes of the phenomenon and how to better neutralize its threat.
Numerous questions remain unanswered both in the wider conversations regarding fake news and in the specific research about it. First, what is fake news? Is it an offshoot of misinformation, rumor, lies or propaganda, or is it something altogether new and different? How is something determined to be false and why? Who is largely responsible for the spread of false information? Why exactly is it spreading? What accountability, systemic or societal, exists to curtail it? Is it always created or shared voluntarily and maliciously or are there other circumstances and conditions that also foster its spread? Given the wide reach of the Internet, what common issues and evidence can be drawn out from various cultures and countries undergoing the same problems with online extremist and pathological behaviors?
As an educator, one is also led to consider whether and why information literacy in general, and the CRAAP test in particular, are now considered ineffective tools for curtailing fake news. One wonders how realistic it is that librarians—traditionally the gatekeepers to and selectors of various types of information since the 19th century—could help alleviate such a widespread information problem that largely exists outside a library’s physical and conceptual boundaries. Does the very fact that most information is now offered through and accessed from the open web render educational institutions, including libraries, more irrelevant in the fight against fake news? What can educators do better to counteract the deleterious effects of false information on students as they transition to contributing members of society? Finally, how can higher education overall come to assess more clearly the reasons for the spread of fake news, to examine the main participants’ motivations and drives, and then to develop cooperative strategies to improve the situation?
This paper will touch upon some but not all of these difficult questions, seeking to come to a clearer understanding and direction for the study of fake news in its various manifestations. If it is true that “[o]verhauling a 20th century curriculum for a digital 21st century requires a group effort” [7], then there is surely a need for a more comprehensive model to come to a full reckoning of what fake news is. The authors propose one such model, identifying both the creators as well as the users as fake news’ primary drivers, and devising a clearer sense of what impels information users to share it online. This paper builds upon the authors’ previous research on university faculty attitudes toward fake news [8,9]; furthermore, the model described herein is an elaboration of the definition of fake news used in both of those studies. The authors, in turn, hypothesize that specific factors may predict a user’s tendency to share fake news. Identifying these factors and finding the threshold levels that would spur the sharing of fake news would be useful to help reduce its spread. The model also speculates upon the creator’s role and considers how researchers might examine the intended outcomes for sharing a piece of fake news and the actual results of doing so, providing a method of better quantifying fake news’ efficacy and impact.

Fake News

The term fake news gained widespread notoriety in the United States during the years of the Trump Administration from 2017–2021, often directed intentionally at legitimate news organizations in order to discredit them. The use of the term by the former president served the specific purpose of tarring certain news organizations that reported on his misdeeds or portrayed him in a negative light. Less about the actual content and more about attacking the free press, fake news became the insult of a would-be authoritarian. Yet, fake news is not a new phenomenon and is far more complex than the former president’s narrow, self-serving conception of it. In fact, it has existed in any number of guises and applications for centuries prior to and after the printing press’ invention, including as propaganda, rumors, misinformation, disinformation, lies, libel and blackmail, and political spin [10] (pp. 5–6). Fallis provides a wide discussion of disinformation, encompassing visual, true, side-effect, and adaptive types, that includes instances of fake news within it [11]. Wardle and Derakhshan situate fake news within a larger context of ‘information disorder,’ which has three elements (agent, message, interpreter) and three phases (creation, production, distribution) [12] (p. 22). Rumor studies have a long history, with modern scholarly analysis dating back to the 1940s [13]. McNair examines the news media’s role in the spread of fake news from the late 1800s through the 1920s, providing historical examples of the term’s use and development over time [14]. Researchers have also examined similar phenomena in such areas as psychology, sociology, political science, and information science. Balmas examines how fake news affects users’ feelings of inefficacy, alienation, and cynicism [15]. Allcott and Gentzkow look at the impact of fake news on presidential elections, particularly the 2016 election [16]. McGivney et al. examine how information literacy can be employed to help empower students in combatting fake news’ effects [17].
The critical and theoretical consensus on defining fake news remains somewhat unclear, however. Some researchers take a limited, format-specific view of the idea, considering it primarily the sharing of an online informational news story under false pretenses with the primary aim of fooling readers. Mustafaraj and Metaxas define it as “online falsehoods formatted and circulated in a way as to make them appear authentic and legitimate to the readers” [18] (p. 2). In a similar vein, Golbeck et al. call it “information, presented as a news story that is factually incorrect and designed to deceive” [19] (p. 17). Brennen writes, “Fake news is made-up news, manipulated to look like credible journalistic reports that are designed to deceive us” [20] (p. 1). Yet, the formats are malleable. Gupta et al. demonstrate that fake news is amplified beyond the purely print and online news format when manipulated visuals such as deep fake videos and falsified images are included [21]. Regardless of format, however, Kim and Kim write that although “there are subtle differences in definition between fake news, rumors, and disinformation…they have the common features as fabricated content that is different from the facts as a core concept element in theory building” [13] (p. 2). This focus on the dynamic of the deceived reader falling for false content has multiple advantages for study. First, it allows researchers to focus on shared conceptualizations of what is objectively real and what is false; it allows them to narrow down the motives for creating fake news to trickery, deception, and false narratives. It also allows researchers to focus purely on the medium of the news story within the context of digital online information and mass communications.
However, fake news also appears to expose an epistemological weakness at the heart of online information itself and how we create knowledge from it. As Caplan et al. argue, “the challenge and limitations of defining ‘fake news’ is as much due to our inability to consistently assess real versus fake, as it is due to our inability to simply define news” [22] (p. 14). Gray et al. similarly find fake news to be evidence of a “structural uncanny” in online systems and see the phenomenon as a “social disturbance precipitated by a variety of false, misleading or problematic online content” [23] (p. 335). Tandoc et al. argue that the audience ‘co-constructs’ fake news, but that its effect nevertheless depends upon whether the audience perceives something fake as real [24].
Clearly the term, simple as it may appear on the surface, implies far more than just false news stories. Indeed, as Weiss et al. describe in their study, a simplified description fails to account for the other purposes, applications, causes, and effects of fake news, as well as the reasons it spreads. These variables are wide-ranging: fake news may emerge as a side-effect of information overload; as a result of bad-faith rhetoric in a poisoned public discourse; as the currency of a contextless post-truth society; as new versions of propaganda, disinformation, and misinformation; as the literary fruit of parody and satire; and even as the result of catharsis arising from political theater [9].
This raises the issue, then, of how people are able to detect and contextualize fake news. While traditional media outlets, publishers, and libraries have generally established the rules and guidelines of truthful discourse, the shift to online information has made this more difficult to manage. Shu et al. provide a clear breakdown of knowledge-based and style-based classifications of fake news that allow one to detect it through expert-, crowdsourcing-, computational-, deception-, and objectivity-oriented methods [25]. They also propose data-oriented, feature-oriented, model-oriented, and application-oriented models to help with the detection of fake news. de Beer and Matthee further describe a hybrid approach, which combines human judgement with machine learning to identify fake news [26]. These numerous methods help to frame the context of fake news within not only technological and systemic parameters, but also within psychological and sociological ones. Despite the technological facility of generating fake news, one ultimately cannot ignore the fact that fake news spreads through the deliberate exploitation of well-known human behaviors and ‘individual vulnerabilities’, which have been shown to make people more naturally susceptible to misinformation in general and fake news in particular. Such behaviors include confirmation bias (the heuristic of self-confirmation), the bandwagon effect, the frequency heuristic, echo chambers, and information utility [26].
As part of their ongoing research into fake news, the authors surveyed a random sample of 400 faculty members (18.88% of all faculty) of various ranks and appointments at a master’s-granting state university in California (cf. Weiss et al. for more details of the survey and its results). When asked to define fake news, many of the faculty responses (69 total) revealed widely variable conceptualizations of the phenomenon. Some, for example, describe it straightforwardly as “news that is provably false;” “news made up to sway a certain group of people,” or “news that isn’t accurate;” others place it in the wider realm of epistemology, as “false information that is made to appear as if it’s truthful & publicized in the media;” “misinformation disseminated through social media and news media for purposes of making a point;” or simply “lying.” Still others note the specific examples or behavioral contexts surrounding its dissemination, including “Breitbart Russia bots [and] Unsubstantiated social media;” “Things that are designed to sway opinions;” “Trump calling anything he doesn’t like fake news;” or even “the intended delivery of wrong information to media” (taken from the authors’ dataset). All of these characterizations of fake news ring true, certainly; all of them, taken individually, are also incomplete. Additionally, when asked who among their students might be susceptible to fake news, the same faculty were more consistent in their belief that those who spend too much time on social media are more susceptible to falling for fake news. Only a few faculty hinted at other issues, including student motivation (i.e., “laziness”, “passively consume”, “less engaged”), environments (i.e., “conservative religious households” and “Fox news watchers”), or age and education levels (i.e., “undergraduates” or “younger non-working full time students”). These are certainly part of the issue, but nevertheless present a rather incomplete picture.
Indeed, the narrow model of the purposefully deceived reader of false news misses the complex tapestry of motives and uses of fake news and, as a result, does not account for the full environment that fosters its spread and deepens its impact. To omit satirical content such as fake news stories from The Onion, for example, ignores a huge swath of what could be considered ‘fake’. Satire often exaggerates and spells out mistruths in order to advocate for the truth. Even though the motives for creating satire are different, the results can often end up the same: the reader of a false narrative becomes convinced of an untruth. In notable examples, several prominent politicians over the last few years have fallen for satirical fake news stories, often to comical effect. Attempting to reconcile these widely differing visions of and motivations behind fake news has compelled the authors to develop a more comprehensive definition. To accommodate this complexity and the multiple variables surrounding the use and spread of, and the motivations behind, fake news, Weiss et al. propose that fake news should be defined as “the phenomenon of information exchange between an actor and acted upon that primarily attempts to invalidate generally accepted conceptions of truth for the purpose of altering established power structures” [9] (p. 7).
What is central to fake news is this exchange of information used for the specific purpose of altering shared values of truth—either to bolster truth in the case of satire or to destabilize it in the case of disinformation. This information exchange is not solely confined to a malicious activity based upon the assumed uncalculating ignorance of an unwitting reader. There may be, in actuality, tangible and intangible motives, as well as tradeoffs, that drive both sides of the fake news relationship. The outcomes from peddling and consuming fake news might serve specific purposes and provide desired outcomes—however arbitrary or cynical—to the user. In other words, the sharers of fake news are not in all cases entirely fooled; they may in fact be using fake news toward specific purposes that, while certainly contrary to shared values of trust and cooperation in a civil society, are nevertheless calculated and even rational in some circumstances. Furthermore, the actor, the one who intentionally creates fake news, may in turn become the acted upon. The same could be argued for the information user who becomes a generator of fake news in the form of blog posts, memes, or other formats containing false information.
This conception of fake news is bolstered by the results of other research. In Mustafaraj and Metaxas’ analysis of Facebook group users, they find that fake news had been spread by anonymous accounts infiltrating groups already conversing with each other online for the purpose of inducing them to share misinformation across their networks [27]. Their findings reveal direct and purposeful action, getting at the realpolitik zero-sum mentality of the purveyors of disinformation and their focus on destabilizing the power structures in a society through exploiting areas of user trust within close-knit online groups and subcultures.
Ultimately, any comprehensive model will need to examine the phenomenon of fake news through multiple disciplinary lenses. As a result, the authors’ model incorporates a necessarily eclectic approach and must adopt various theoretical perspectives despite the risk of ‘watering down’ the efficacy of the model. The following sections examine each of these elements.

2. Fake News Model Overview

2.1. A New Comprehensive Model for Fake News

As shown in Figure 1 below, the authors’ model of fake news focuses on the duality of the actor and the acted upon. The assumption is that one party, the actor, creates or willfully develops content with a specific purpose and particular aim in mind. The actor’s activity and its aim or goal then turn something—such as a false piece of information or an untested theory—into fake news to be shared and spread. The acted upon, or the information consumers and users, have the choice of taking the time to verify the information (and thus disproving or confirming its accuracy) or believing it and subsequently sharing it with others. The authors argue that the acted upon would evince specific characteristics that demonstrate their likelihood of sharing the fake news.
The actor and the acted upon can also exchange these roles at different times, depending upon the circumstances, their aims, or their purposes. This means that fake news becomes applicable to all users of information, both willful hands-on proactive agents looking to persuade or fool others as well as the unwitting users reacting to the information they have encountered. The model poses and frames several important questions for future research:
  • Which characteristics and conditions are evident in users most and least likely to share fake news?
  • What do the creators of fake news intend by sharing it?
  • What combination of factors between the ‘actor’ and ‘acted upon’ most align the desired outcomes of creating fake news with the actual results? (i.e., how effective or successful is a fake news story in attaining its intended goal?)
  • What characteristics of fake news make it more or less likely to be shared?
  • What factors contribute to an actor becoming acted upon? Conversely, what factors contribute to the acted upon becoming actors/agents in sharing fake news?

2.2. The Actor and Agency

The actor, the primary co-driver of the fake news phenomenon, holds agency in the form of their willful actions that contain specific goals and desired outcomes. The actor side of the model (see Figure 2 below) shows that the development of fake news is bounded by a person’s willed actions, such as to spread propaganda, report a rumor, promote a theory, or create a parody. By ‘promotion of theory’, the authors mean not only truthful untested ideas but also conspiracies and pseudoscience based on things outside the realm of accepted fact. The aims and goals of these willful actions are directly related to the person’s perception of factual information and whether they intend to distort it or strengthen it. These choices are shown in the red lines, which link to the goal of distorting factual information; and in the blue lines, which link to the goal of supporting or enabling factual information. The context, too, is dependent upon how the person feels about power, for usually the intention of a conspiracy theory is to weaken power by sowing distrust in established facts.
Looking at the relationship between willed actions and goals or aims, the authors attempt to hypothesize the likelihood of an actor’s intention for a certain goal. To spread propaganda, for example, one might assume a negative association with the desire to enable facts to alter power structures. An association might be predicted, conversely, for the fake news creator’s intention to distort facts for the sake of altering power, as in spreading false information to manipulate the outcome of an election. Overall, however, the actor’s desired outcome for creating fake news would likely be the sharing of false information. How might this outcome be predicted among the willed actions of the actor? How does one align the desired outcome with the actual results of the information user (i.e., the fake news was shared, or it was fact-checked and rejected)? It is this unclear zone between desired outcomes and actual real-world outcomes that the authors believe would be worth examining, especially as a method of determining how successful a fake news story was in meeting its aim. Understanding the motives behind the creation of fake news, as well as its perceived success in relation to its intended goals, might help societies figure out how to counteract its effects.

2.3. The ‘Acted Upon’

The other side of the model focuses on the ‘acted upon,’ or the user and consumer of information and potential sharer of fake news. The user is conceived as playing a role in choosing to either consume and then pass along fake news to others or to verify and act accordingly (i.e., abandon the information; confirm and share; etc.). What this model helps to outline is the set of each user’s personal characteristics, their internal workings and motivations, their education levels, and their self-created roles or identities. Each characteristic can help determine whether users are more or less likely to believe and then pass along fake news. To do this, the authors propose the following factors to determine the likelihood of an information user sharing fake news:
(1) Users’ level of trust online;
(2) Users’ level of online self-disclosure;
(3) Users’ amount of social comparison;
(4) Users’ level of ‘Fear of Missing Out’ (FoMO) anxiety;
(5) Users’ level of social media fatigue;
(6) Users’ concepts of self and their role identity;
(7) Users’ educational level/attainment.
The first five of these seven characteristics are found in Talwar et al., who have developed a model to examine the propensity of users to share information [28]. Their research is extremely promising, finding evidence of support in varying degrees for each of their criteria. Given the positive associations found, this could be a very useful approach to examining the mechanism by which fake news spreads. Additionally, each of the seven criteria listed above is already well researched within several disciplines, including social media behavior, information fatigue, social comparison, self-determination, and rational choice theories. The authors believe that these seven factors will help create a predictable ‘profile,’ as it were, of the likely consumer of fake news. Each of these is described in detail below.
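To make the threshold idea concrete, the combination of factors above can be sketched as a simple scoring function. This is purely illustrative: the factor names follow the model, but the equal weighting, the treatment of education as a protective factor, and the 0.6 cutoff are hypothetical placeholders, not values proposed or validated empirically.

```python
# Illustrative sketch of a threshold-based 'sharing propensity' profile.
# Factor names follow the seven-factor model; weights and cutoff are
# hypothetical assumptions for demonstration only.

FACTORS = [
    "online_trust",          # (1) level of trust online
    "self_disclosure",       # (2) level of online self-disclosure
    "social_comparison",     # (3) amount of social comparison
    "fomo",                  # (4) level of FoMO anxiety
    "social_media_fatigue",  # (5) level of social media fatigue
    "self_role_identity",    # (6) concept of self and role identity
    "education",             # (7) educational attainment (assumed protective)
]

def sharing_propensity(scores: dict[str, float]) -> float:
    """Combine normalized factor scores (each 0-1) into one propensity value.

    Education is assumed to reduce propensity; the other six factors are
    assumed to increase it. Equal weights are a placeholder.
    """
    risk = [scores[f] for f in FACTORS if f != "education"]
    protective = scores["education"]
    return (sum(risk) / len(risk)) * (1.0 - 0.5 * protective)

def likely_to_share(scores: dict[str, float], threshold: float = 0.6) -> bool:
    # Hypothetical decision rule: a user 'profile' crossing the threshold
    # is flagged as likely to share rather than verify.
    return sharing_propensity(scores) >= threshold
```

In an empirical study, the weights and threshold would be estimated from survey data (e.g., along the lines of Talwar et al.’s measures) rather than fixed by hand; the sketch only shows how a multi-factor threshold profile could operate in principle.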
The ‘level of online trust’ a user holds has been examined in particular through the lens of Mayer et al.’s ‘integrative model of organizational trust’ [29], later revisited in 2007 by Schoorman, Mayer, and Davis [30]. DuBois et al. [31], Grabner-Kräuter and Bitter [32], Krasnova et al. [33], and Grosser et al. [34] each examine trust as a factor in the consumption and sharing of information online, including in terms of social capital theory, personal privacy risk, rumors, and gossip. In LIS research, trust has been examined by Wenger through the concept of “communities of practice” [35], which rely on mutual engagement “when members…build trust and relationships with one another through regular interactions” [36] (p. 105). The use of relationship building in online social media is of special interest. Researchers might find important insight regarding how trust is formed and how it might be quantified, in order to understand what threshold level of trust would need to be surpassed for a user to become more or less likely to share false information. As Talwar et al. suggest, “the association between online trust and fake news sharing behavior can be anticipated… it can be argued that social media users having high trust in the information and news shared…are likely to share fake news with others and are less likely to authenticate the news before sharing” [28] (p. 75). Although their study focuses on the use of one social media platform, WhatsApp, researchers might try a similar approach with other online social media platforms.
‘Self-disclosure’ refers to sharing personal information with others online. Peer pressure and the desire to gossip and share false information may in fact be related, and “in the light of gaining popularity or the attention of others, social media users may share news that is exciting and sensational, without any concern for its being fake or true” [28] (p. 76). In LIS literature, this has been studied to some degree as far back as the early 2000s. Rioux’s information acquiring and sharing (IA&S) and SIF-FOW (sharing information found for others on the web) models suggest that if users find information “perceived as useful or desirable…[that] would also address the information needs of someone they knew,” they would be likely to share this information [37]. Understanding this behavior in the context of fake news is important.
‘Social comparison’ examines the tendency of people to compare themselves to others. This is a long-standing theory that has existed since the 1950s and has been used extensively in psychology and sociology [38]. However, the online environment has exacerbated this tendency, as social media “platforms have provided new and exciting means for people to practice social comparisons online” [28] (p. 76). Notable research in this area includes Cramer et al. and Wert and Salovey [39,40]. In LIS information behavior research, much has focused on the social status as well as the professional occupation of the information user via social positioning theory [41] (p. 294), though not so much on whether information users are comparing themselves to others. Social capital theory may also be a promising overlap, as it may provide an area to study in terms of how some users might be more likely to engage in social comparison than others [42] (p. 325).
The ‘Fear of missing out’ (FoMO) is a relatively new coinage, but it speaks to the anxieties people harbor when viewing social media and events that are largely happening beyond their control. The study of FoMO has roots in self-determination theory, and FoMO is defined as an ongoing anxiety and deep suspicion among those on social media that their friends and acquaintances may be having more rewarding experiences than they are [43]. More recent studies include Przybylski et al. [44], Alt [45], Beyens et al. [46], and Blackwell et al. [47]. FoMO has been linked with the tendency to share gossip and personal information and may contribute to people acting more recklessly online [48]. Self-determination theory has also been adopted in the LIS field, notably by Ryan and Deci [49]. The theory as applied in LIS finds that people are motivated when they feel competent and act in self-determined ways; however, negative feedback and external pressures reduce their intrinsic motivations to complete a task [50] (p. 244). This might be applied, then, to the concept of FoMO, where people act out of anxiousness, sharing gossipy information rather than verifying truthful information, without regard for others. They are ultimately acting only in regard to their perceived standing among others in terms of personal agency.
‘Social media fatigue’ (SMF) “is defined as a subjective experience that comprises negative emotions such as anger, disappointment, tiredness, exhaustion, and reduced energy, resulting from continuous use of online social media” [51]. Notably, social media fatigue might be examined more clearly and directly with an LIS approach. There is a long history in LIS research of investigation into the very same types of problems information seekers face, but under different names; these include information overload [52,53,54]; satisficing [55]; and the long-established principle of least effort [56].
The principle of least effort in particular posits that people try to expend the smallest amount of energy possible when seeking out new information. The authors believe that SMF, information overload, satisficing, and least effort have significant overlap. Using these theories to posit a hypothesis, it may be predicted that people experiencing information overload or SMF are more likely to share fake news—whether because of tiredness, the principle of least effort, satisficing to meet specific limited ends, and so on. LIS research, in particular its well-established models of information behavior, can be used to further strengthen the understanding of how and why users would be more or less likely to share fake news. The research in these areas may offer promising avenues for further examining a person’s tendency to share fake news rather than authenticate information.

2.4. New Additions: Self-Identification, Role, and Education Attainment

However, in the authors’ opinion, Talwar et al. have not gone far enough to identify all of the major components of user information behavior. In addition to linking Social Media Fatigue theories more clearly to long-standing LIS research, the authors have also amended their model to incorporate two more variables, a user’s self-identification or ‘role’ within a society, and a user’s overall education attainment levels (see Figure 3 below).
Under ‘self-identification’, it is theorized that information users are constantly changing and developing their sense of self and their personal identifications. Prabha et al. posit Role Theory as a clear way to understand the differences between users’ approaches to information seeking [55]. They find that a person’s role (e.g., university student or faculty member) contributes to their propensity to stop looking for more information, even when results are clearly insufficient. Weiss et al. also allude to roles influencing behavior in their previous study, noting that faculty are less willing to admit they would be susceptible to fake news, possibly because of their pride in the role of professor [9].
Taking this a step further, ‘social positioning’ allows for more complexity in the development of user identities. The theory “allows for the exploration of such complexities in the development of individuals’ identities” and assumes that online information users “are active developers of their identities” [57] (p. 335). Self-identification of the online user could therefore play a significant role in how people approach sharing or fact-checking fake news. One could hypothesize whether a person’s development of identity corresponds with their willingness to share false information, rumor, and conspiracy. If a right-wing commentator were to share a false news story, a person belonging to that right-wing cohort would likely be more willing to believe the information and share it; conversely, that person would disbelieve the same information coming from someone outside the group. This effect has been observed not only in English-speaking online communities but also in research on Japanese conspiracy theorists: Ogasawara suggests that Cass Sunstein’s ‘daily me’ creation of personal identities accounts for the false information and conspiracies that right-wing online users (neto-uyo) come to believe and share [58,59].
Finally, ‘education level’ may influence how likely people are to share fake news. Librarians in particular tend to emphasize education’s ability to minimize the influence of fake news, focusing especially on the impact of information literacy via the ACRL Framework for Information Literacy for Higher Education [60,61,62]. Certainly, a number of factors related to educational attainment come into play. As Weiss et al. find, subject mastery may diminish the impact of fake news, especially when comparing the weaker research methods of students to the robust fact-checking habits of scholars [9]. Environmental factors are also sometimes shaped by one’s educational attainment, leading to areas of relative information ‘poverty’ [63]. In more highly educated circles, people may rely on networks and information grounds that serve as information brokers or gatekeepers, helping them to cope with unvetted information and rumors [64].
It could also be argued that a higher level of educational attainment might be associated with less sharing of fake news, as a more informed person is more likely to identify and refute false claims. Along these lines, Allcott, Gentzkow and Yu [65] and Flynn, Nyhan, and Reifler [66] assert that the uneducated may accept misinformation more readily than educated individuals able to discern fact from fiction. As philosopher Adam Smith famously writes, “the more they are instructed, the less liable they are to the delusions of enthusiasm and superstition” [67] (p. 520). However, the link to information literacy must be better explored, especially as education is only one element in the overall picture of who shares fake news and why.
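The seven-factor account above is conceptual: the paper specifies no weights, thresholds, or functional form. Purely as an illustrative sketch of how a “threshold for a certain combination of factors” might one day be operationalized and tested, one could imagine a logistic combination of the seven factors, each normalized to [0, 1]. Every weight, sign, and value below is hypothetical and serves only to show the shape such a predictive model could take.

```python
from math import exp

# Hypothetical weights only; the paper proposes none. The negative weight on
# education encodes the (hedged) expectation that higher attainment reduces
# sharing, per the discussion of Allcott et al. and Flynn et al. above.
WEIGHTS = {
    "online_trust": 1.2,
    "self_disclosure": 0.8,
    "social_comparison": 0.6,
    "fomo": 0.9,
    "social_media_fatigue": 1.1,
    "role_identity_alignment": 1.0,
    "education": -1.5,
}
BIAS = -2.0  # assumed baseline log-odds of sharing rather than verifying

def share_probability(factors: dict) -> float:
    """Sketched probability that a user shares a story rather than verifies it."""
    z = BIAS + sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + exp(-z))

# A user high on trust, FoMO, and fatigue but low on education (all values invented):
user = {
    "online_trust": 0.9, "self_disclosure": 0.7, "social_comparison": 0.5,
    "fomo": 0.8, "social_media_fatigue": 0.9,
    "role_identity_alignment": 0.8, "education": 0.2,
}
p = share_probability(user)
```

Under these invented weights, raising the education factor alone lowers the predicted sharing probability, which mirrors the hedged claim in the text that education is one mitigating factor among several rather than a cure on its own.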

3. Discussion: Implications for Research and Potential Directions

The authors are led to consider what this might imply about future research into fake news. What topics within education, sociology, and LIS research might be better explored or reexamined through adopting this model? The following will outline a few avenues of inquiry in information science research.

3.1. ‘Solving’ Fake News with Critical Thinking and Information Literacy: The Limits to Relying Entirely on Educational Outcomes

One of the long-standing ways that universities and academic libraries approach problems related to false information is through the development of critical thinking skills in students, a major goal of the American higher education system [68]. Advancing these skills fulfills the goal of higher education to develop a responsible citizenry. Yet, in an increasingly complex information-based society, individuals need to base their judgments and decisions on accurate evidence [69]. This becomes more difficult as easily accessible information proliferates and the amount of online data grows exponentially from petabytes to zettabytes and beyond. Critical thinking has also been seen as an essential component in helping students gain purpose and exhibit “self-regulatory judgment” [70]. The specific desired outcomes of critical thinking include better skills in evaluating one’s own arguments and those of others, resolving conflicts, and reaching rational decisions about complex topics and issues [71]. With such lofty aims in mind, it is no wonder that those in higher education seek to use critical thinking to counteract the problems of fake news, conspiracy theory, misinformation, and the like; the solutions to many of the problems of fake news often dovetail with the aims of critical thinking.
Taking things a step further, information literacy has been championed as a helpful subset of critical thinking skills for use in academic institutions and libraries, emphasizing how students can recognize information quality, authenticity, and credibility [72,73]. Cooke (2018) argues that librarians need to be “called upon to use our information literacy skills to help debunk and decipher fake news … to help our communities and constituents become critical and savvy information consumers” [74] (p. 8). The emphasis on savvy, critical consumers of information anticipates online users’ reliance upon information technologies and social media. Part of the problem is that the terms and their implementation are uneven. Zimmerman and Ni suggest that “while the significance of information literacy ability has been made apparent through scholarship, the ways in which information literacy is discussed within the academic canon are not as clear” [75] (p. 2). Obviously, there is more to the story than merely applying critical thinking skills to this problem; even agreeing on what information literacy means and entails will likely not make much of a dent in the issue.
Furthermore, despite the best efforts of educators, librarians, and scholars, problems persist with relying too heavily upon critical thinking and information literacy to combat fake news. As currently applied, they appear to be ineffective tools for dealing with false information and fake news. Dixon finds, for example, that students do not arrive at universities and colleges with similar levels of information literacy [76], resulting in inequalities that can perpetuate academic barriers [77,78]. The recent editorials from various newspapers and magazines [4,5,6,7] cited above show the very public dissatisfaction of educators dealing with the spread of fake news and conspiracy theories such as QAnon. Their arguments about critical thinking and information literacy draw out the ineffectiveness of the current pedagogical approach to solving the crisis, which relies heavily upon one-time lessons given unevenly, and only to those classes that request them. This form of library instruction unfortunately has a weak impact on student achievement [79]. A better alternative would be longer, more intensive credit-bearing courses; however, national studies have established that as few as 19% of academic libraries in the U.S. offer such courses, despite their demonstrated positive impact [80].
It has become clear that overemphasizing information literacy, at least as it is currently implemented, as the main cure for fake news is counterproductive. Certainly, as the model suggests, educational attainment plays some role in one’s tendency to share fake news or to check facts, but it must be cautioned that it is not the only factor at play. For this reason, the authors suggest that universities’ broad emphasis on critical thinking, combined with librarianship’s narrower emphasis on information literacy, may not be sufficient to tackle the problem of who shares fake news and why. For example, the ability of individuals to determine credible sources (i.e., the process an individual takes to find respectable sources) is often undermined by online information itself [81]. Information literacy in general falls short in supporting individuals to properly identify and determine credible sources, especially when authority itself is under assault [82].
There needs to be, instead, a rethinking of how critical thinking in general, and information literacy in particular, may be better employed to mitigate the propensity to share fake news while preserving the larger context. To do this, however, information literacy must first be seen as one of several factors, and therefore as a contribution to a larger effort of pinpointing and encouraging the behaviors and conditions that lead people to verify information and reject fake news. What is ultimately missing is a better understanding of users and the various motives, backgrounds, behaviors, and circumstances that contribute to the creation and sharing of false information. Also missing are the fake news creator’s motives, aims, and desired outcomes, which may have a direct impact on users as well.

3.2. Meeting in the Muddled Middle

This model also attempts to reconcile the two sides of the fake news relationship, examining how producers become consumers and consumers become producers. Similarly, the model proposes to examine how well the creator’s desired outcomes (e.g., false information is shared; a parody/satirical piece is acknowledged and understood) match the actual results (e.g., the fake news story is believed and shared; the reader fails to get the joke and takes it seriously).

3.2.1. When Does the Acted upon Become the Actor (and Vice-Versa)?

One needs to come to an understanding not only of the users of fake news but also of its various creators. What factors may cause a person to act upon the fake news they receive? Once a user comes to believe a falsehood, how likely is that reader not only to share but also to create new false news or fake information on their own? Conversely, what factors might cause a creator of fake news to become a consumer of fake news as well? Despite initially cynical motives, would a creator of fake news subsequently be more or less likely to become a believer and sharer of fake news? Furthermore, what conditions might cause such behavior, and how might it be predicted?
As Fister describes in her recent examination of information literacy’s flawed implementation,
Those who spend their time in the library of the unreal have an abundance of something that is scarce in college classrooms: information agency. One of the powers they feel elites have tried to withhold from them is the ability to define what constitutes knowledge. They don’t simply distrust what the experts say; they distrust the social systems that create expertise. They take pleasure in claiming expertise for themselves, on their own terms.
[5]
In this case, creators and consumers of fake news and conspiracy theories work in overlapping grounds, feeding off each other, taking advantage of the social circles they have created and inhabited from the ground up, simultaneously consuming and producing their own ‘knowledge,’ regardless of (or in spite of) factual basis. Years before the social media takeover of the Internet, Pawley addresses this conflict at the heart of information and information literacy, suggesting that the schism between the consumer of information and the producer of information is a looming problem: “Just how nonelite groups produce and disseminate information is not well understood in LIS. Yet, unless we make an effort to find out, and then to make use of this understanding in shaping what we mean and how we practice information literacy, we will fail to enlist nonelite ‘consumers-as-producers’ of information in processes of production and recontextualization” [83] (p. 446). The modern web has proven her correct, especially as users and creators of content spin false information and fake news through their daily online information behaviors.
One caveat, however: the relationship between actor and acted upon may be difficult to establish definitively. There is an imbalance between the two sides of this new model, especially in how one determines and even identifies the creators of false information. It is easier to identify the receivers and subsequent spreaders of fake news than the creators, who might prefer to remain anonymous or to cover their tracks, given the social taboos and even illegality of fraudulent behavior. Fallis’ examinations of disinformation might provide guidance in this matter, however [84]. His model defines disinformation as “misleading information that is intended to be (or at least foreseen to be) misleading.” Disinformation, in this reading, is essentially akin to lying. As a result, the development of an actor’s profile in the fake news model might be grounded in a game-theoretic model of deceptive lying. As Fallis writes, “According to this model, whether a person will disinform depends on the expected costs and benefits. In particular, it depends on the costs of not being believed (weighted by the probability that this will happen) as compared with the benefits of being believed (weighted by the probability that this will happen).” The creation of fake news might indeed turn on some of these factors and could be examined within this game-theoretic context. Ultimately, even when actors are unidentifiable, indirect, or lurking behind the scenes, one might estimate the conditions that raise levels of fake news and disinformation creation.
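Fallis’s cost-benefit formulation quoted above lends itself to a compact expression. The sketch below, with entirely illustrative numbers, renders the expected payoff of disinforming as the benefit of being believed weighted by its probability, minus the cost of not being believed weighted by its probability. The decision rule (disinform only when the expected payoff is positive) is an assumption consistent with, but not stated in, the quoted passage.

```python
def expected_payoff(p_believed: float, benefit: float, cost: float) -> float:
    """Fallis-style expected payoff of disinforming:
    benefit of being believed, weighted by its probability,
    minus the cost of not being believed, weighted by its probability."""
    return p_believed * benefit - (1 - p_believed) * cost

def will_disinform(p_believed: float, benefit: float, cost: float) -> bool:
    # Assumed decision rule: a rational actor in this sketch disinforms
    # only when the expected payoff is positive.
    return expected_payoff(p_believed, benefit, cost) > 0

# Illustrative values only: a highly credible actor (p = 0.8) with a large
# benefit from being believed faces a positive expected payoff, while a
# low-credibility actor (p = 0.1) does not.
credible = will_disinform(0.8, benefit=10, cost=5)
incredible = will_disinform(0.1, benefit=10, cost=5)
```

On this reading, interventions that raise the cost of not being believed, or lower the probability of being believed, would be expected to suppress disinformation creation, which is one way the model could inform countermeasures.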

3.2.2. How Effective Is Fake News?

Finally, how can researchers know whether fake news has served its purpose? This question focuses on the actual exchange of information and its real-world outcomes, leading one to ask: what combination of factors between the ‘actor’ and the ‘acted upon’ best aligns the desired outcomes with the actual results? How likely, for example, is an actor to perceive that a fake news story was successful? What would the threshold for ‘success’ look like for a piece of fake news? What level of sharing would be essential for a fake news story to be considered a successfully deployed piece of misinformation?
To answer these questions, researchers need to look at what happens in the relationship between the actor and the acted upon when fake news is shared. Yet, it is unclear how one can determine whether fake news has been effective in its intended purpose. Researchers might also want to know what types of fake news, as well as their component parts, would be more likely to instigate the desired outcomes. Knowing the characteristics of a successful fake news story would allow educators and researchers to find better strategies to neutralize its negative effects.
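The paper deliberately leaves the threshold for ‘success’ open. As one hypothetical operationalization only, success could be treated as total shares exceeding a chosen threshold in a simple branching model of spread, in which each reader shares with some probability and each share exposes a fixed number of new readers. The model structure, parameters, and threshold below are all assumptions for illustration, not claims of the paper.

```python
import random

def simulate_reach(p_share: float, fanout: int, seed_readers: int = 10,
                   generations: int = 5, rng: random.Random = None) -> int:
    """Total shares in a toy branching model: each reader shares with
    probability p_share, and each share exposes `fanout` new readers."""
    rng = rng or random.Random(42)  # seeded for reproducibility
    readers, shares = seed_readers, 0
    for _ in range(generations):
        sharers = sum(1 for _ in range(readers) if rng.random() < p_share)
        shares += sharers
        readers = sharers * fanout
        if readers == 0:
            break  # the story dies out
    return shares

def is_successful(shares: int, threshold: int = 100) -> bool:
    # Hypothetical success criterion: the story reaches at least `threshold` shares.
    return shares >= threshold
```

Even this toy model makes the research question concrete: if the seven user-side factors shift `p_share` even slightly, total reach changes multiplicatively across generations, which is one reason a combination of factors may predict spread better than any single factor alone.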

4. Conclusions

The authors believe their model could be employed to better predict the conditions that lead to the sharing of fake news. Rooted in long-established theories from multiple disciplines, including education, sociology, and LIS research in information behavior and information literacy, the model may prove effective in predicting the spread of fake news. Ultimately, the model demonstrates the dual relationship of the actor and the acted-upon working with each other; taken together, the two sides of the model help to address several important questions related to fake news.
For example, in determining which characteristics and conditions are evident in the users most and least likely to share fake news, the model proposes that a combination of seven factors, measured at certain levels in each person, could predict an online user’s tendency to share fake news: (1) level of online trust; (2) level of self-disclosure online; (3) amount of social comparison; (4) level of FoMO anxiety; (5) level of social media fatigue; (6) concept of self and role identity; and (7) level of education attainment. The authors also ask what the creators of fake news, or the ‘actors’ in the model, intend by sharing it. This aspect of the model suggests that the development of fake news is bounded by the actor’s willed actions, such as spreading propaganda, reporting a rumor, promoting a theory, or creating a parody. The ultimate aim, however, is to alter power structures through either the distortion or the enabling of factual information. The model might also help examine what combination of factors between the ‘actor’ and the ‘acted upon’ best aligns the desired outcomes of creating fake news with the actual results; in other words, it might help to show how effective or successful a fake news story is in attaining its intended goal. It might also help determine the characteristics of fake news that make it more or less likely to be shared. Finally, one might want to determine which factors contribute to an actor becoming acted upon or, conversely, which factors contribute to the acted upon becoming actors/agents in sharing fake news.
There are admittedly some limitations to the model as it currently exists. The precise combination of factors that best mitigates or alleviates fake news is unknown at this time. Testing of the model is in its beginning phases, and it is unclear how effective it will be. The authors would also like to point out that the proposed model is based upon a previous research project that was limited in its approach and surveyed only a small sample of faculty on a single campus. The model also needs more development in the areas proposed, especially where the motives to create and share fake news are unclear. It is also unclear at this time how one measures the efficacy of a piece of fake news; finding out how well a piece of false information performs in an information environment is especially important, as it may help to identify and neutralize its effects. Finally, this is a broad, multidisciplinary approach to solving the complex problem of fake news. Any misrepresentations or omissions of theories are due to the authors’ errors and admittedly imperfect knowledge of certain disciplines. Further examination of these areas is surely warranted.
However, by determining the multiple root causes of sharing fake news through this new comprehensive model, the authors expect to find effective methods to alleviate its spread. It is furthermore believed that relying too heavily on just one or two factors, while ignoring or underplaying the remaining factors, would likely result in a limited reduction in fake news. For that reason, a wider multidisciplinary approach to understanding fake news is surely justified.

Author Contributions

Conceptualization, A.P.W., A.A., E.P.G. and A.T.K.; writing—original draft preparation, A.P.W.; writing—review and editing, A.P.W., A.A., E.P.G. and A.T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kornbluh, K.; Goldstein, A.; Weiner, E. New Study by Digital New Deal Finds Engagement with Deceptive Outlets Higher on Facebook Today than Run-Up to 2016 Election. The German Marshall Fund of the United States, 2020. Available online: https://www.gmfus.org/blog/2020/10/12/new-study-digital-new-deal-finds-engagement-deceptive-outlets-higher-facebook-today (accessed on 6 July 2021).
  2. Lee, T. The global rise of “fake news” and the threat to democratic elections in the USA. Public Adm. Policy Asia Pac. J. 2019, 22, 15–24. [Google Scholar] [CrossRef] [Green Version]
  3. McDonald, K. Unreliable News Sites More than Doubled Their Share of Social Media Engagement in 2020. NewsGuard. 2020. Available online: https://www.newsguardtech.com/special-report-2020-engagement-analysis/ (accessed on 6 July 2021).
  4. Fielding, J. Rethinking CRAAP: Getting students thinking like fact-checkers in evaluating web sources. Coll. Res. Libr. News 2019, 80, 620. [Google Scholar] [CrossRef] [Green Version]
  5. Fister, B. The Librarian War against QAnon. The Atlantic. 2021. Available online: https://www.theatlantic.com/education/archive/2021/02/how-librarians-can-fight-qanon/618047/ (accessed on 20 June 2021).
  6. Wineburg, S.; Breakstone, J.; Ziv, N.; Smith, M. Educating for Misunderstanding: How Approaches to Teaching Digital Literacy Make Students Susceptible to Scammers, Rogues, Bad Actors, and Hate Mongers; Working Paper A-21322; Stanford History Education Group/Stanford University: Stanford, CA, USA, 2020; Available online: https://purl.stanford.edu/mf412bt5333 (accessed on 20 June 2021).
  7. Wineburg, S.; Ziv, N. Op-ed: Why can’t a generation that grew up online spot the misinformation in front of them? Los Angeles Times. 6 November 2020. Available online: https://www.latimes.com/opinion/story/2020-11-06/colleges-students-recognize-misinformation (accessed on 8 July 2021).
  8. Alwan, A.; Garcia, E.; Kirakosian, A.; Weiss, A. Fake news and libraries: How teaching faculty in higher education view librarians’ roles in counteracting the spread of false information. Can. J. Libr. Inf. Pract. Res. under review.
  9. Weiss, A.; Alwan, A.; Garcia, E.P.; Garcia, J. Surveying fake news: Assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int. J. Educ. Integr. 2020, 16, 1–30. [Google Scholar] [CrossRef] [Green Version]
  10. Burkhardt, J.M. Combating Fake News in the Digital Age; ALA TechSource: Chicago, IL, USA, 2017. [Google Scholar]
  11. Fallis, D. What is disinformation? Libr. Trends 2015, 63, 401–426. [Google Scholar] [CrossRef] [Green Version]
  12. Derakhshan, H. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making; Council of Europe Report DGI; Consejo de Europa: Bruselas, Belgium, 2017; Available online: https://bit.ly/3gTqUbV (accessed on 25 May 2021).
  13. Kim, S.; Kim, S. The Crisis of Public Health and Infodemic: Analyzing Belief Structure of Fake News about COVID-19 Pandemic. Sustainability 2020, 12, 9904. [Google Scholar] [CrossRef]
  14. McNair, B. Fake News: Falsehood, Fabrication and Fantasy in Journalism; Routledge: New York, NY, USA, 2018. [Google Scholar]
  15. Balmas, M. When fake news becomes real: Combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Commun. Res. 2014, 41, 430–454. [Google Scholar] [CrossRef]
  16. Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 2017, 31, 211–236. Available online: https://pubs.aeaweb.org/doi/pdfplus/10.1257/jep.31.2.211 (accessed on 21 June 2021). [CrossRef] [Green Version]
  17. McGivney, C.; Kasten, K.; Haugh, D.; DeVito, J.A. Fake news & information literacy: Designing information literacy to empower students. Interdiscip. Perspect. Equal. Divers. 2017, 3, 1–18. Available online: http://journals.hw.ac.uk/index.php/IPED/article/viewFile/46/30 (accessed on 6 July 2021).
  18. Mustafaraj, E.; Metaxas, P.T. The Fake News Spreading Plague: Was it Preventable? Cornell University Library. arXiv 2017, arXiv:1703.06988. [Google Scholar]
  19. Golbeck, J.; Mauriello, M.; Auxier, B.; Bhanushali, K.H.; Bonk, C.; Bouzaghrane, M.A.; Buntain, C.; Chanduka, R.; Cheakalos, P.; Everett, J.B.; et al. Fake news vs satire: A dataset and analysis. In Proceedings of the 10th ACM Conference on Web Science, Amsterdam, The Netherlands, 27–30 May 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 17–21. [Google Scholar] [CrossRef]
  20. Brennen, B. Making Sense of Lies, Deceptive Propaganda, and Fake News. J. Media Ethics 2017, 32, 179–181. [Google Scholar] [CrossRef]
  21. Gupta, A.; Lamba, H.; Kumaraguru, P.; Joshi, A. Faking sandy: Characterizing and identifying fake images on twitter during hurricane sandy. In WWW ‘13 Companion: Proceedings of the 22nd International Conference on World Wide Web, Proceedings of the WWW ‘13: 22nd International World Wide Web Conference, Rio de Janeiro, Brazil, 13–17 May 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 719–736. [Google Scholar] [CrossRef]
  22. Caplan, R.; Hanson, L.; Donovan, J. Dead reckoning: Navigating content moderation after “fake news”. Data Soc. 2018. Available online: https://datasociety.net/pubs/oh/DataAndSociety_Dead_Reckoning_2018.pdf (accessed on 23 May 2021).
  23. Gray, J.; Bounegru, L.; Venturini, T. ‘Fake news’ as infrastructural uncanny. New Media Soc. 2020, 22, 317–341. [Google Scholar] [CrossRef]
  24. Tandoc, E.; Lim, Z.; Ling, R. Defining “Fake News”: A typology of scholarly definitions. Digit. J. 2017, 6, 1–17. [Google Scholar] [CrossRef]
  25. Shu, K.; Sliva, A.; Wang, S.; Tang, J.; Liu, H. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explor. Newsl. 2017, 19, 22–36. [Google Scholar] [CrossRef]
  26. De Beer, D.; Matthee, M. Approaches to identify fake news: A systematic literature review. In Integrated Science in Digital Age; Springer: Cham, Switzerland, 2020; pp. 13–22. [Google Scholar] [CrossRef]
  27. Mustafaraj, E.; Metaxas, P.T. Fake News spreading plague. In Proceedings of the 2017 ACM on Web Science Conference, Troy, MI, USA, 25–28 June 2017. [Google Scholar]
  28. Talwar, S.; Dhir, A.; Kaur, P.; Zafar, N.; Alrasheedy, M. Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. J. Retail. Consum. Serv. 2019, 51, 72–82. [Google Scholar] [CrossRef]
  29. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  30. Schoorman, F.D.; Roger, C.; Mayer, R.C.; Davis, J.H. An Integrative Model of Organizational Trust: Past, Present, and Future. Acad. Manag. Rev. 2007, 32, 344–354. [Google Scholar] [CrossRef] [Green Version]
  31. DuBois, T.; Golbeck, J.; Srinivasan, A. Predicting trust and distrust in social networks. In Proceedings of the 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, Boston, MA, USA, 9–11 October 2011; IEEE Computer Society: Boston, MA, USA, 2011; pp. 418–424. [Google Scholar] [CrossRef] [Green Version]
  32. Grabner-Kräuter, S.; Bitter, S. Trust in online social networks: A multifaceted perspective. Forum Soc. Econ. 2013, 44, 48–68. [Google Scholar] [CrossRef] [Green Version]
  33. Krasnova, H.; Spiekermann, S.; Koroleva, K.; Hildebrand, T. Online social networks: Why we disclose. J. Inf. Technol. 2010, 25, 109–125. [Google Scholar] [CrossRef]
  34. Grosser, T.J.; Lopez-Kidwell, V.; Labianca, G. A social network of positive and negative gossip in organizational life. Group Organ. Manag. 2010, 35, 177–212. [Google Scholar] [CrossRef]
  35. Wenger, E. Communities of practice: Learning as a social system. Syst. Think. 1998, 9, 2–3. [Google Scholar] [CrossRef]
  36. Davies, E. Communities of practice. In Theories of Information Behaviors; Fisher, K.E., Erdelez, S., McKechnie, L.E.F., Eds.; Information Today, Inc.: Medford, NJ, USA, 2006; pp. 104–109. [Google Scholar]
  37. Rioux, K. Information acquiring-and-sharing. In Theories of Information Behavior; Fisher, K., Erdelez, S., McKechnie, L., Eds.; Information Today: Medford, NJ, USA, 2005; pp. 169–173. [Google Scholar]
  38. Festinger, L. A theory of social comparison processes. Hum. Relat. 1954, 7, 117–140. [Google Scholar] [CrossRef]
  39. Cramer, E.M.; Song, H.; Drent, A.M. Social comparison on Facebook: Motivation, affective consequences, self-esteem, and Facebook fatigue. Comput. Hum. Behav. 2016, 64, 739–746. [Google Scholar] [CrossRef]
  40. Wert, S.R.; Salovey, P. A social comparison account of gossip. Rev. Gen. Psychol. 2004, 8, 122–137. [Google Scholar] [CrossRef] [Green Version]
  41. Sundin, O.; Hedman, J. Professions and occupational identities. In Theories of Information Behavior; Fisher, K., Erdelez, S., McKechnie, L., Eds.; Information Today: Medford, NJ, USA, 2005; pp. 293–297. [Google Scholar]
  42. Johnson, C.A. Nan Lin’s theory of social capital. In Theories of Information Behavior; Fisher, K., Erdelez, S., McKechnie, L., Eds.; Information Today: Medford, NJ, USA, 2005; pp. 323–327. [Google Scholar]
  43. Deci, E.L.; Ryan, R.M. Intrinsic Motivation and Self-Determination in Human Behavior; Plenum Press: New York, NY, USA, 1985. [Google Scholar]
  44. Przybylski, A.K.; Murayama, K.; DeHaan, C.R.; Gladwell, V. Motivational, emotional, and behavioral correlates of fear of missing out. Comput. Hum. Behav. 2013, 29, 1841–1848. [Google Scholar] [CrossRef]
  45. Alt, D. College students’ academic motivation, media engagement and fear of missing out. Comput. Hum. Behav. 2015, 49, 111–119. [Google Scholar] [CrossRef]
  46. Beyens, I.; Frison, E.; Eggermont, S. I don’t want to miss a thing: Adolescents’ fear of missing out and its relationship to adolescents’ social needs, Facebook use, and Facebook related stress. Comput. Hum. Behav. 2016, 64, 1–8. [Google Scholar] [CrossRef]
  47. Blackwell, D.; Leaman, C.; Tramposch, R.; Osborne, C.; Liss, M. Extraversion, neuroticism, attachment style and fear of missing out as predictors of social media use and addiction. Pers. Indiv. Differ. 2017, 116, 69–72. [Google Scholar] [CrossRef]
  48. Buglass, S.L.; Binder, J.F.; Betts, L.R.; Underwood, J.D.M. Motivators of online vulnerability: The impact of social network site use and FoMO. Comput. Hum. Behav. 2017, 66, 248–255. [Google Scholar] [CrossRef] [Green Version]
  49. Ryan, R.M.; Deci, E.L. Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemp. Educ. Psychol. 2001, 25, 54–56. [Google Scholar] [CrossRef]
  50. Watters, C.; Duffy, J. Motivational factors for interface design. In Theories of Information Behavior; Fisher, K., Erdelez, S., McKechnie, L., Eds.; Information Today: Medford, NJ, USA, 2005; pp. 242–246. [Google Scholar]
  51. Ravindran, T.; Yeow Kuan, A.C.; Hoe Lian, D.G. Antecedents and effects of social network fatigue. J. Assoc. Infor. Sci. Technol. 2014, 65, 2306–2320. [Google Scholar] [CrossRef]
  52. Blair, A. Too Much to Know: Managing Scholarly Information Before the Modern Age; Yale University Press: New Haven, CT, USA, 2010. [Google Scholar]
  53. Eppler, M.; Mengis, J. The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. Inf. Soc. 2004, 20, 325–344. [Google Scholar] [CrossRef]
  54. Good, A. The Rising Tide of Educated Aliteracy. The Walrus. 2017. Available online: https://thewalrus.ca/the-rising-tide-of-educated-aliteracy/ (accessed on 12 December 2020).
  55. Prabha, C.; Silipigni-Connaway, L.; Olszewski, L.; Jenkins, L.R. What is enough? Satisficing information needs. J. Doc. 2007, 63, 74–89. [Google Scholar] [CrossRef] [Green Version]
  56. Zipf, G.K. Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology; Addison-Wesley: Cambridge, MA, USA, 1949. [Google Scholar]
  57. Given, L.M. Social Positioning. In Theories of Information Behavior; Fisher, K., Erdelez, S., McKechnie, L., Eds.; Information Today: Medford, NJ, USA, 2005; pp. 334–338. [Google Scholar]
  58. Ogasawara, M. The Daily Us (vs. Them) from Online to Offline: Japan’s Media Manipulation and Cultural Transcoding of Collective Memories. J. Contemp. East. Asia 2019, 18, 49–67. [Google Scholar]
  59. Sunstein, C. Republic.com; Princeton University Press: Princeton, NJ, USA, 2001. [Google Scholar]
  60. Batchelor, O. Getting out the truth: The role of libraries in the fight against fake news. Ref. Serv. Rev. 2017, 45, 143–148. [Google Scholar] [CrossRef]
  61. De Paor, S.; Heravi, B. Information literacy and fake news: How the field of librarianship can help combat the epidemic of fake news. J. Acad. Librariansh. 2020, 46, 102218. [Google Scholar] [CrossRef]
  62. Gardner, M.; Mazzola, N. Fighting Fake News: Tools and Resources to Combat Disinformation. Knowl. Quest 2018, 47, 6. [Google Scholar]
  63. Chatman, E.A. The impoverished life-world of outsiders. J. Am. Soc. Inf. Sci. 1996, 47, 193–206. [Google Scholar] [CrossRef]
  64. Williamson, K. Discovered by Chance: The role of incidental information acquisition in an ecological model of information use. Libr. Inf. Sci. Res. 1998, 20, 23–40. [Google Scholar] [CrossRef]
  65. Allcott, H.; Gentzkow, M.; Yu, C. Trends in the diffusion of misinformation on social media. Res. Politics 2019, 6. [Google Scholar] [CrossRef] [Green Version]
  66. Flynn, D.J.; Nyhan, B.; Reifler, J. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs about Politics. Adv. Political Psychol. 2017, 38, 127–150. [Google Scholar] [CrossRef]
  67. Smith, A. An Inquiry into the Nature and Causes of the Wealth of Nations. With a Memoir of the Author’s Life; Clark, A.G., Ed.; George Clark and Son: Aberdeen, Scotland, 1848; original work published 1776. [Google Scholar]
  68. Roth, M.S. Beyond critical thinking. Chron. High. Educ. 2010, 56, B4–B5. Available online: https://www.chronicle.com/article/beyond-critical-thinking/ (accessed on 8 July 2021).
  69. Renaud, R.D.; Murray, H.G. A comparison of a subject-specific and a general measure of critical thinking. Think. Skills Creat. 2008, 3, 85–93. [Google Scholar] [CrossRef]
  70. Behar-Horenstein, L.S.; Niu, L. Teaching critical thinking skills in higher education: A review of the literature. J. Coll. Teach. Learn. 2011, 8, 25–42. Available online: https://clutejournals.com/index.php/TLC (accessed on 8 July 2021). [CrossRef]
  71. Allegretti, C.L.; Frederick, J.N. A model for thinking critically about ethical issues. Teach. Psychol. 1995, 22, 46–48. [Google Scholar] [CrossRef]
  72. Hobbs, R. Multiple visions of multimedia literacy: Emerging areas of synthesis. In International Handbook of Literacy and Technology; McKenna, M.C., Labbo, L.D., Kieffer, R.D., Reinking, D., Eds.; Temple University: Philadelphia, PA, USA, 2006; pp. 15–28. [Google Scholar]
  73. Schuster, S.M. Information literacy as a core value. Biochem. Mol. Biol. Educ. 2007, 35, 372–373. [Google Scholar] [CrossRef]
  74. Cooke, N.A. Fake News and Alternative Facts: Information Literacy in a Post-Truth Era; ALA Editions: Chicago, IL, USA, 2018. [Google Scholar]
  75. Zimmerman, M.S.; Ni, C. What we talk about when we talk about information literacy. IFLA J. 2021. [Google Scholar] [CrossRef]
  76. Dixon, J. First impressions: LJ’s first year experience survey. Libr. J. 2017. Available online: https://www.libraryjournal.com/?detailStory=first-impressions-ljs-first-year-experience-survey (accessed on 8 July 2021).
  77. Buzzetto-Hollywood, N.; Wang, H.; Elobeid, M.; Elobaid, M. Addressing information literacy and the digital divide in higher education. Interdiscip. J. e-Skills Lifelong Learn. 2018, 14, 77–93. [Google Scholar] [CrossRef] [Green Version]
  78. Cullen, R. The digital divide: A global and national call to action. Electron. Libr. 2003, 21, 247–257. [Google Scholar] [CrossRef]
  79. Wong, S.H.R.; Cmor, D. Measuring association between library instruction and graduation GPA. Coll. Res. Libr. 2011, 72, 464–473. [Google Scholar] [CrossRef] [Green Version]
  80. Jardine, S.; Shropshire, S.; Koury, R. Credit-bearing information literacy courses in academic libraries: Comparing peers. Coll. Res. Libr. 2018, 79, 768–784. [Google Scholar] [CrossRef] [Green Version]
  81. Cooke, N.A. Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. Libr. Q. 2017, 87, 211–221. [Google Scholar] [CrossRef]
  82. Bluemle, S.R. Post-Facts: Information Literacy and Authority after the 2016 Election. Portal Libr. Acad. 2018, 18, 265–282. [Google Scholar] [CrossRef]
  83. Pawley, C. Information literacy: A contradictory coupling. Libr. Q. Inf. Community Policy 2003, 73, 422–452. Available online: https://www.jstor.org/stable/4309685 (accessed on 19 June 2021).
  84. Fallis, D. A conceptual analysis of disinformation. iConference 2009. Available online: https://www.ideals.illinois.edu/handle/2142/15205 (accessed on 15 June 2021).
Figure 1. A comprehensive model for the spread of fake news, showing dual actor/acted-upon relationship; available at this url: http://scholarworks.csun.edu/handle/10211.3/218953, accessed on 6 July 2021. [License: Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)].
Figure 2. Detail of fake news model showing ‘actor’ side of the duality, including willed actions, goals, desired outcomes of creating fake news. Available at this url: http://scholarworks.csun.edu/handle/10211.3/218953, accessed on 6 July 2021. [License: Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)].
Figure 3. Detail of fake news model showing the ‘acted upon,’ or information consumer side of the duality, including Talwar’s five characteristics and two new ones (self-identification/roles and education level), along with research theories and results. Available at this url: http://scholarworks.csun.edu/handle/10211.3/218953, accessed on 6 July 2021. [License: Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Weiss, A.P.; Alwan, A.; Garcia, E.P.; Kirakosian, A.T. Toward a Comprehensive Model of Fake News: A New Approach to Examine the Creation and Sharing of False Information. Societies 2021, 11, 82. https://doi.org/10.3390/soc11030082


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
