Information
  • Article
  • Open Access

28 October 2022

Analysis of the Impact of Age, Education and Gender on Individuals’ Perception of Label Efficacy for Online Content

1 College of Innovation and Technology, University of Michigan-Flint, Flint, MI 48502, USA
2 Department of Computer Science, North Dakota State University, Fargo, ND 58105, USA
* Authors to whom correspondence should be addressed.
This article belongs to the Special Issue Digital Privacy and Security

Abstract

Online content is consumed by most Americans and is a primary source of their news. It shapes millions of people’s perception of the world around them. Problematically, individuals who seek to deceive or manipulate the public can use targeted online content to do so, and this content is readily consumed and believed by many. Labeling has been proposed as a way to alert consumers to potentially deceptive content. This paper examines factors which impact labeling’s perceived trustworthiness, and thus its potential use by Americans, and analyzes these factors based on age, education level and gender. This analysis shows that, while labeling and all label types enjoy broad support, the level of support and uncertainty about labeling varies by age and education level, with different labels outperforming others at given age and education levels. Gender, alternately, was not shown to have a tremendous impact on respondents’ perspectives regarding labeling; however, females were shown to support labeling more, on average, but also to report more uncertainty.

1. Introduction

The internet has been a powerful force to connect the world. It has provided a voice for those without access to traditional forms of mass communications and a means for dissidents to organize against governments that they consider to be oppressive. It provides everyone connected to it the potential to communicate with the masses. However, the same mechanisms that provide these benefits also can create problems, when used for nefarious means.
A growing number of incidents show the power of online content to manipulate the public—for political and other purposes—with misinformation and disinformation. Deceptive online content has been blamed for interference with the 2016 U.S. presidential election [1], the Brexit vote [2] and elections in other countries around the world [3]. It has driven physical violence, such as an armed standoff in a pizza parlor [4], and has been used by multiple foreign influence campaigns [5].
The threat here is significant. Keys [6] has termed the current era one of “post-truth”, while Lee [7] has described fake news as a “sinister force” that is a threat to democracy. Tong et al. [8] contend that a “weaponization of fake news” has occurred. With 55% of Americans indicating that they get at least some of their news from social media [9] and 75% indicating that they have believed fake headlines [10,11], the scope of the problem is pronounced.
Labeling has been proposed as a possible solution to this issue. Fuhr et al. [12] proposed a nutrition-style label which Lespagnol et al. [13], Vincentius et al. [14], and others have proposed additions to. Prior work has analyzed the need for online content labeling [15] and the perception of labeling data by university community members [16]. A broader study, using a United States population representative sample, has also been analyzed to assess American’s perspectives with regard to online labeling [17]. U.S. population representative data has also been analyzed to assess consumers’ perception of labeling efficacy, based on their income level, party affiliation and level of internet usage [18] and to assess how factors impact content trustworthiness differently, based on age, education and gender [19].
This paper builds on this prior work by looking at how age, education and gender impact the perception of online content labeling efficacy. It continues, in Section 2, with a discussion of prior work that informs the work presented herein. Section 3 presents data regarding the study instrument used to collect the data analyzed herein and respondents’ demographics. Section 4, Section 5 and Section 6 present analysis for three types of labels (informational, warning and supplemental information) and Section 7 analyzes broader trends across the data presented for specific labels. The paper concludes and discusses potential areas of future work, in Section 8.

2. Background

This section provides a review of prior work in three areas which serve as a foundation for the work presented herein. First, a discussion of online deceptive content and the problems it poses is presented. Then, product labeling is discussed, in Section 2.2. Finally, labeling’s potential use for combatting deceptive online content is reviewed, in Section 2.3.

2.1. Online Deceptive Content and Its Impact

At one point, the term ‘fake news’ was used to refer to content that publishers and readers knew was comedically false [20]. While the content might have been presented in a similar format to news content, it was not designed to fool people (though it sometimes did [21]). More recently, the term has been used to refer to deliberately deceptive content which is designed to be manipulative [22].
For many, the term fake news became well known during the 2016 U.S. presidential election. Grinberg et al. [23] estimated that 6% of news content was fake during this time period and Lazer et al. [24] estimated that Americans had, on average, consumed between one and three fake articles. Bovet and Makse [25] determined that, during the election, a quarter of tweets were “fake or extremely biased news”. Fake news was also prevalent in the Brexit movement [2,5] and in at least 20 other countries [3].
The impact of fake news spans across society. College students, for example, indicated that they expected social media news to be inaccurate [26]; however, despite this, individuals in the 18 to 29 year-old age group use social media more frequently than others and indicate trusting it more [26,27]. Fake news can confuse members of the public of all ages [28], has started an armed standoff [29] and has even been used to circulate inaccurate and potentially dangerous health information [30].

2.2. Product Labeling

Warning and information labels are used on numerous products. Information labels, such as the nutrition facts labels placed on food items (shown in Figure 1a) and energy labels (shown in Figure 1b) placed on electronic devices, seek to provide consumers with information in a standardized format to allow them to make decisions and comparisons between products. Warning labels are also placed on products, such as alcohol and tobacco, to promote healthy consumption decisions. However, the goal of warning labels is typically to limit consumption of the product, either in general or by a potentially vulnerable subgroup.
Figure 1. (a) Nutrition Facts label format (modified from [31]), left, and (b) energy guide label format [32], right.
Tobacco warning labels have been shown to be effective at communicating how dangerous the product is and at preventing youth from starting smoking [33]. The current cigarette packaging labels in the United States date back to 1984 [34] and carry a text-based surgeon general’s warning [35]. Labels containing images have been shown to have more impact than text warnings. The FDA proposed “graphic” labels [36] (an example of which is shown in Figure 2); however, these labels were not implemented due to objections from tobacco companies [37], which were upheld by the courts [37,38] on the grounds that the packaging requirements violated the First Amendment of the United States Constitution [39].
Figure 2. Example of the FDA’s proposed cigarette labels in 2011 [40].
The FDA proposed new labels, in 2019 [39] (examples of which are shown in Figure 3), which were planned to launch in June of 2021. These labels build upon the graphical approach shown in Figure 2. Their required use has been delayed several times [41]. Similar efforts have been undertaken by other countries. New Zealand’s Smoke-free Environments Regulations of 1999, for example, require tobacco products to include a graphic health warning [42]. While the law was challenged by the tobacco industry, it was ultimately adopted and had significant support from the public [42].
Figure 3. Examples of the cigarette labels proposed in 2019 [43].
Labeling has also been implemented, in the United States, for movies, television and music. MPAA rating labels are placed on movies and V-Chip ratings [44,45,46] are assigned to television programs. Some music, with explicit lyrics, carries a warning label to that effect [47]. Many movies also carry an anti-piracy warning from the U.S. Federal Bureau of Investigation which warns consumers about the risks of piracy to attempt to deter it [48]. All of these content labeling systems involved government coordination and collaboration with industry, to varying degrees.

2.3. Online Content Labeling

Labeling may be similarly valuable for online content to aid in information consumption decision-making. Lazer et al. [24] suggested that consumers could be aided by both preventing their exposure to deceptive content and helping them evaluate it.
Deceptive content hosting websites, though, may be uninterested in self-regulation and resistant to industry and government labeling. These sites may prefer that consumers consume their misinformation, due to ideological goals [49] or advertising revenue generation [50]. Government-mandated online content labeling, in the United States, may face considerable legal challenges. The decision preventing the FDA from requiring graphic health cigarette warnings was based on free speech concerns [39] regarding speech of a potentially less protected nature (product sales [51]) than online content.
U.S. law is not the only consideration, of course, as online deceptive content is inherently an international challenge. In the United States, government-required content labeling may face constitutional challenges as an infringement upon publishers’ free speech rights [52]. Numerous other countries have their own regulations that must also be considered. The People’s Republic of China, for example, has a law, the Information Network and Internet Security, Protection and Management Regulations of 1997, which proscribes “making falsehoods or distorting the truth, spreading rumors, destroying the order of society” and which may dictate the removal of misinformation. If information is censored by the government, content labeling may be unneeded, as the content will no longer be available for others’ viewing [53].
Ethiopia, Cote d’Ivoire and Malawi also have laws that proscribe publishing false information [54]. Bangladesh created a law “to control the spread of online misinformation” [55] and Indonesian laws threaten jail sentences, of up to a decade, for “spreading false information or news that intentionally causes public disorder” [56]. Alternately, the European Union has created a framework for “digital platforms’ self-regulation” [56]. Other countries’ laws vary. Yadav et al. [57] identified and analyzed over 100 national laws which have different requirements and scopes.
While online content labeling can draw from several sources, it presents numerous challenges. A key challenge is how to determine what label to assign to a given article.
Deceptive content must first be identified before it can be labeled with a warning. Numerous techniques are possible (see [58,59]); approaches can be manual, automatic or a combination of both. Articles’ style, authors and distributors, and even network analysis can be used to identify deceptive content [60]. Wang [61] demonstrated automated machine learning approaches, with and without manual annotations. Other automated techniques include natural language processing [62], deep [63], mixed graph [64] and graph-attention [65] neural networks and neural stacking [66]. Techniques which analyze social networks [67], signal detection [68], and emotion cognizance [69] have also been proposed. Shao et al. [70] suggested that a multi-modal ensemble approach may provide the benefits of both single-mode and multi-modal analysis and outperform other approaches. Rapti et al. [71] have also proposed a model for considering fake news using a “disinformation blueprint”, which may allow deceptive content to be identified more holistically.
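At their core, the supervised machine learning approaches cited above treat detection as text classification: word statistics are learned from articles already labeled as deceptive or legitimate, and new articles are scored against them. The following is a minimal sketch of that idea, not any specific cited system; it uses a naive Bayes bag-of-words classifier, and the toy training documents and labels are invented purely for illustration:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns per-label word
    log-probabilities (with Laplace smoothing) and log priors."""
    word_counts, label_counts = {}, Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    model = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        # +1 smoothing keeps unseen words from zeroing out a class
        model[label] = {w: math.log((counts[w] + 1) / (total + len(vocab)))
                        for w in vocab}
    priors = {l: math.log(c / len(docs)) for l, c in label_counts.items()}
    return model, priors

def classify(model, priors, text):
    """Pick the label with the highest posterior log-score."""
    def score(label):
        return priors[label] + sum(model[label].get(w, 0.0)
                                   for w in text.lower().split())
    return max(model, key=score)

# Hypothetical labeled examples; a real system would train on a large corpus.
docs = [("shocking secret cure doctors hate", "fake"),
        ("miracle cure shocking truth exposed", "fake"),
        ("city council approves school budget", "real"),
        ("council votes on budget measure", "real")]
model, priors = train_nb(docs)
print(classify(model, priors, "shocking miracle cure"))  # → fake
```

Production systems cited in the text replace these word counts with learned embeddings, network features or ensembles, but the labeled-training-data requirement is the same.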
Approaches to identifying deceptive content using influence analysis [72,73] have been proposed, such as Budak, Agrawal and Abbadi’s [74] “competing cascades dissipating in a network” method, and the use of a heuristic based on degree centrality [74]. Suchia et al. [75] proposed an approach to detect rumors that piggyback alongside legitimate news stories but add incorrect information. Fairbanks et al. [76], noting the prevalence of politically charged deceptive content, created a technique that classifies text as containing “liberal words”, “conservative words”, and “fake news words”. The fake news words category, though, was shown to be unreliable.
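The degree-centrality heuristic mentioned above can be sketched simply: in a graph of share or retweet interactions, the accounts with the most connections are candidate high-influence spreaders. This is an illustrative sketch only (the edge data is hypothetical, and the cited work combines centrality with richer influence models):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected share graph.
    edges is a list of (account, account) share/retweet pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

def top_spreaders(edges, k=3):
    """Rank accounts by degree centrality; high-degree nodes are
    candidates to prioritize when reviewing for deceptive content."""
    cent = degree_centrality(edges)
    return sorted(cent, key=cent.get, reverse=True)[:k]

# Hypothetical share graph: one hub account fans content out to others.
edges = [("hub", "a"), ("hub", "b"), ("hub", "c"), ("a", "b")]
print(top_spreaders(edges, 1))  # → ['hub']
```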
Taxonomies for labeling have been proposed by Tandoc, Lim and Ling [10] (who developed a system including “satire”, “parody”, “fabrication”, “manipulation”, “propaganda”, and “advertising”) and Bakir and McStay [77]. Online content publishers have also created their own systems. Twitter introduced Birdwatch, which is based on manual evaluation of Twitter posts by other users [78]. Wikipedia has published a list of news sources that includes reliability information (https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources) (accessed on 26 October 2022).

3. Survey and Respondents

A survey was conducted with a goal of understanding Americans’ news content consumption decision making perceptions. The survey instrument and the data collection process are discussed in Section 3.1 and the labels whose efficacy was evaluated are discussed in Section 3.2. Respondent demographics are discussed in Section 3.3. Finally, Section 3.4 discusses the analysis methodology used herein.

3.1. Survey Instrument and Data Collection

The survey utilized in [16] was modified for use in this study. It was edited to reduce the target response time to 15 min and to combine the three surveys, which were administered independently for [16]. Questions which were redundant between the surveys were removed, and the revised survey was reviewed by the authors and Qualtrics staff. As part of Qualtrics’ standard procedure, a limited pilot was used to validate the instrument. As no issues were detected during the pilot study, the pilot responses were included in the dataset, in keeping with Qualtrics’ standard practices.
For each proposed label type, respondents were presented with the label and a description of how it would appear when browsing social media. For each label, participants were asked the same five questions: whether they found the label helpful, whether they found it annoying, whether they would use it, whether they believed other people would use it, and whether they believed it would be helpful in judging the trustworthiness of news articles. These question categories and the text of the questions from the survey instrument are presented in Table 1.
Table 1. Survey instrument questions for each label instrument. Respondents were presented with each proposed label instrument and were asked the following questions.
By asking “would you find this label helpful”, the survey identified the general positive or negative attitude of the participant towards using the label, without asking specifically where this sentiment comes from. The remaining questions help to establish the source of this perception. For example, a participant may find the label to be useful for judging trustworthiness yet find it annoying and unlikely to be utilized in practice. This could suggest a problem with the design of the label rather than with the type of information being presented in it. Some label styles present a larger amount of information than others, providing more details at the cost of being larger. Responses regarding “usefulness for judging trustworthiness” can be compared to perceptions of “annoyingness” to observe the trade-off between brevity and verbosity. All of this information helps to inform the design of future labeling mechanisms.
The specific topic presented in the labels, “Trouble at High Speed West Middle School”, was chosen to be an apolitical topic which would not influence respondents’ attitude toward the label. While sounding news-like, it avoids addressing a real-world issue and uses a fictitious school name. The headline is meant to avoid distracting from the label design itself and thus biasing responses. Were the headline to focus on a particular news item (for example, about the 2020 US presidential election), respondents’ responses may be confounded by being based on both their opinions regarding the topic and the label design. A key area for future work will involve testing the efficacy of labels in a real-world setting with real instances of legitimate news and misinformation. This study seeks to characterize attitudes towards the label instruments themselves without such confounding concerns.
The data analyzed herein was collected by Qualtrics International Inc., using a quota-based stratified sampling technique and the survey instrument modified from [16]. The recruiting plan was targeted to obtain population-proportionate participation based on gender, age, income level and political affiliation.
The survey was administered in October of 2021 and approximately 550 responses were collected. Of these, 500 are part of the population representative sample. As respondents were offered a completion-based incentive, most responses are complete. In this paper, all responses which answer the relevant demographic and response questions are included in the analysis.
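Quota-based stratified sampling, as described above, allocates respondents to each demographic stratum in proportion to its population share until a target sample size is reached. A small sketch of how such quotas might be computed, using largest-remainder rounding; the strata and shares below are illustrative placeholders, not the study’s actual recruiting targets:

```python
def quotas(population_shares, n):
    """Proportional quotas for a target sample of size n.
    population_shares maps each stratum to its fraction of the population."""
    raw = {s: share * n for s, share in population_shares.items()}
    q = {s: int(v) for s, v in raw.items()}  # round each quota down first
    # hand any leftover seats to the largest fractional remainders
    leftover = n - sum(q.values())
    for s in sorted(raw, key=lambda s: raw[s] - q[s], reverse=True)[:leftover]:
        q[s] += 1
    return q

# Illustrative age strata and population shares (not the study's figures).
print(quotas({"18-29": 0.21, "30-49": 0.33, "50-64": 0.25, "65+": 0.21}, 500))
```

Recruiting then continues within each stratum until its quota is filled, which is why the final sample of 500 is population representative while roughly 550 total responses were collected.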

3.2. News Article Labels

The informational labels in the study, which are discussed in Section 4, utilize the labeling categories (title, author, authority, etc.) originally proposed by Fuhr et al. [12], as discussed in [16]. Informational labels 1 and 2 each provide the label categories and their values without any further explanation. These can be seen as ‘pure’ informational labels, where the user must interpret the information, as no interpretation is provided by the label.
Informational labels 1 and 3 also include the article’s original headline, image, and introductory text. This preserves more of the original article’s elements which are intended to be attractive to the user and draw them into clicking the link and viewing the article. This is similar to how nutrition facts are added to the side of a container while still including the product’s branding information and imagery. Informational label 3 provides additional supporting information for each label category, helping the user to interpret it.
Unlike informational labels 1 and 3, informational label 2 appears as a pop-up, covering some of the original article’s elements. Relevant information, such as the title, is retained, but the article’s image and summary text are not visible. Like the cigarette labeling design shown in Figure 2, this style of label blocks potentially attractive advertising elements for the article, such as the image. The goal of this is to allow the user to make a decision without being emotionally persuaded by factors other than the information about the article.
Warning labels 1, 2, and 3 alert the user that “The information in this article is advertised as fact. However, the information has not been verified by any trustworthy sources”. This goes further than the informational labels, warning the user to be on guard, should they decide to view the article. In each case, the user is still allowed to proceed by clicking the forward button.
Warning label 1 appears as a pop-up, preventing the user from seeing the article’s elements (similar to how the cigarette warnings in Figure 2 block half of the front of the carton). Warning label 2 appears beneath the normal headline elements of the article, making it less intrusive. Warning label 3 is presented as an intermediary webpage which is displayed after clicking on an article but before viewing its contents. This is similar to the intermediary page generated by some web browsers when clicking an unsafe link (e.g., one which may lead to computer viruses).
Finally, a supplemental informational label is presented. This style of label provides specific supporting fact-checked information which is directly related to the claims of the article. Rather than making any statement as to the veracity of the article’s claims, it simply makes it easier for the user to compare those claims to facts from trusted sources. This style of label is similar to those used by Twitter and YouTube during the 2020 US presidential election, where tweets or videos making claims about the election results would sometimes be augmented with links to supplementary information from well-known news sources [15].

3.3. Respondent Demographics

Due to the population representativeness goal, respondents are well distributed across demographic groups. Approximately 51% were female and 49% were male. Only a small number of respondents indicated a non-binary gender (less than 1%). Because of the small sample size, non-binary gender’s impact could not be analyzed further.
Respondents from ten age groups (starting at 18 years of age) were included in the study. The breakdown of respondents amongst these age groups is presented in Table 2.
Table 2. Respondents’ age distribution [17].
Respondents from seven educational levels participated in this study. The distribution of respondents between education levels is presented in Table 3. Respondents who had graduated high school but not completed a college degree comprised just under 50% of the study population. Nearly a quarter of respondents held a bachelor’s degree. Associate’s and master’s degree holders each comprised just over 10% of respondents. Respondents without a high school diploma and doctoral degree holders comprised small parts (less than 5% each) of the survey population.
Table 3. Respondents’ education distribution [17].

3.4. Analysis Methodology

The Qualtrics online system and Microsoft Excel software were used to perform data analysis. Each question was analyzed in terms of three demographic characteristics (age, education and gender) to ascertain the extent to which each demographic characteristic impacted respondents’ perceptions of each label. This data is presented and analyzed in Section 4, Section 5 and Section 6. Section 7 considers trends present across the multiple demographic groups and questions.
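The per-demographic analysis described here amounts to cross-tabulating each question’s responses by a demographic variable and converting counts to percentages, which any tool (Qualtrics, Excel or a script) can perform. A minimal sketch of that computation; the response records are fabricated examples, not study data:

```python
from collections import Counter, defaultdict

def crosstab_pct(responses, demo_key, answer_key):
    """responses: list of dicts, one per respondent. Returns, for each
    demographic group, the percentage breakdown of answers."""
    counts = defaultdict(Counter)
    for r in responses:
        counts[r[demo_key]][r[answer_key]] += 1
    table = {}
    for group, c in counts.items():
        total = sum(c.values())
        table[group] = {a: round(100 * n / total, 1) for a, n in c.items()}
    return table

# Fabricated example records (demographic value, answer to one question).
rs = [{"age": "18-24", "helpful": "yes"},
      {"age": "18-24", "helpful": "no"},
      {"age": "25-29", "helpful": "yes"}]
print(crosstab_pct(rs, "age", "helpful"))
```

Running this for each of the three demographic characteristics against each label question yields the tables analyzed in Sections 4 through 6.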

7. Broader Analysis and Analysis of Implications

This section discusses trends across the different label types, demographics and questions. Notably, respondents were, overall, very positive about the use of labels. In most cases, the majority of respondents indicated answers supportive of the use of labels, such as finding them helpful, not finding them annoying, indicating that they and others would use them and saying that they would be useful for evaluating articles’ trustworthiness.
Of course, some labels were better received than others. Among the informational labels, for example, the third was the best received by the youngest age groups: approximately 70% of those between 18 and 34 found the first informational label helpful (not considering those indicating uncertainty), versus an average of approximately 75% for the second informational label and 85% for the third. Notably, different trends existed between these labels as well, for these groups. The first had relative similarity between the three age groups (18–24, 25–29 and 30–34), while the second exhibited a downward trend with age and the third had an increase between the first two age groups, followed by a decline between the second and third age groups. Most labels exhibited a drop in support at the 35–39 demographic; however, this was notably less pronounced for warning label 3, which has only a small difference between the 30–34 and 35–39 age groups and continues falling from the 35–39 value at the 40–49 age levels. The supplemental information label shows a drop at 35–39; however, it continues dropping at 40–44, while, in many other cases, such as warning label 2, the support rebounds in the next age level up.
Table 4 provides an overview of the trends present, by demographic, for all of the label types and questions. Notably, there is not a consistent theme of declining or increasing by age or education level. In some cases, no clear trend is present. In others, conflicting trends are seen for a given metric at different age or education levels. Differences in trend type are also present across the different labels and questions.
Table 4. Overview of trends in responses and respondents’ demographics.
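The trend categories summarized in Table 4 (increasing, decreasing, flat, or no clear/conflicting pattern) can be characterized mechanically from a sequence of support percentages across ordered demographic groups. A simple sketch of such a classifier; the tolerance handling and example values are illustrative, not the paper’s actual procedure:

```python
def trend(values, tol=1.0):
    """Classify a sequence of support percentages across ordered
    demographic groups as 'increasing', 'decreasing', 'flat' or 'mixed'.
    tol is the percentage-point change treated as noise."""
    diffs = [b - a for a, b in zip(values, values[1:])]
    if all(d > tol for d in diffs):
        return "increasing"
    if all(d < -tol for d in diffs):
        return "decreasing"
    if all(abs(d) <= tol for d in diffs):
        return "flat"
    return "mixed"  # conflicting directions across the range

print(trend([70, 80, 60]))  # → mixed
```

The many “mixed” outcomes discussed below correspond to the conflicting-trend cells in Table 4, where support rises over part of the age or education range and falls over another.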
Overall, the age-correlated responses show the most variability. The education level data (which, of course, has an implicit but imperfect correlation with age) shows a more moderate level of fluctuation. The gender-correlated data, on the other hand, shows a limited amount of difference between genders for most questions, with only a few label-question combinations showing notable differences between males and females.
Uncertainty was also measured and, in many cases, decreases, at least partially across the range, with additional age or education. Males and females exhibit different levels of uncertainty across various label and question combinations; however, there is not a consistent pattern of which gender is more or less uncertain that perfectly correlates with specific labels or question types. In general, though, females indicate greater levels of uncertainty (reporting greater uncertainty in 25 out of 35 label-question combinations). Females also indicate stronger support for labeling, responding with support in 28 out of 35 label-question combinations (indicated by more yes responses for all questions except annoyingness, and more no responses for annoyingness).
For all labeling categories, the annoyingness level is either the same for both males and females or higher for males than females. Conversely, the reverse is observed with regard to helpfulness, across all label styles.
There are also gender differences by label style. More males than females indicated that they would use informational label 2, while females indicated this more with respect to all of the other label styles. Females also indicated being more confident than males that others would use each labeling style (including informational label 2). Finally, except for informational label 1, more females than males indicated that each label style would be useful in judging the trustworthiness of a news article.
While some gender-difference is shown in specific label preference, the trend is broader than being related to any single label. This demonstrates that the higher level of support shown by females is likely unrelated to specific elements of the design of particular labels.
The lack of a clear pattern of responses or the presence of conflicting patterns is present for many of the demographic-analyzed individual label question responses. Of the 105 demographic-question-label combinations, just under a third (33) have no clear pattern or evidence of conflicting trends. Slightly more (36) of the combinations have no clear pattern or conflicting trends related to uncertainty. In approximately two-thirds (22) of these, there is a lack of a clear pattern (or conflicting trends) in both the demographic responses and the uncertainty.
Considering the four categories that are associated with label support (all except annoyingness), 24 demographic-question-label combinations show support decreasing with increased age or education level. Four of the annoyingness demographic-question-label combinations show an increase with age/education, a similar indication of support declining with increasing age or education. Alternately, only six combinations (outside of the annoyingness question, which has three combinations where a decrease indicates increasing support) show a trend of increasing support with greater age or education. Only one demographic-question-label combination (informational label 2’s self-use) has only minimal change amongst levels.
This data suggests that the age and education demographics of an online content labeling system user are very important, when choosing the type of label to use, to maximize the efficacy of the system. However, the limited number of overarching trends, which run the entire spectrum of the age or education range, mean that system designers and administrators will need to make nuanced decisions based on specific users’ demographics. The data presented herein, when multiple label types’ absolute values are compared for particular demographic values, can inform these decisions. Of course, these initial heuristic decisions should also be refined based on the behavior of a given user, learned over time, as any given user’s behaviors may not align perfectly with others in the particular demographic group being assessed.

8. Conclusions and Future Work

This paper has analyzed data from a national study of Americans’ attitudes towards online content labels, in terms of age, education level and gender. It has shown that females are, generally, more supportive of labels than males; however, they also indicate greater uncertainty regarding their efficacy. Additionally, while females show more support, the difference in support levels between the two genders is, for many labels and considerations, relatively limited. The impact of gender on label efficacy appears to be broader than an association with specific label styles and elements, as females evidence stronger support than males across label styles and survey questions, with a very limited number of exceptions.
In terms of education level and age, it has been shown that the perceived efficacy of labels and support for them generally decrease with age; however, a majority of respondents at all ages and education levels indicated support for the labels (when excluding responses indicating uncertainty). Label annoyingness was shown to have a positive correlation with age and education for four labels, perhaps indicating that some respondents found the information to be unneeded given their age and experience. A few labels were shown to have a positive correlation between age/education and support.
As youth have been identified as a key demographic that may benefit from online content labeling, it is beneficial that this study shows that the labels may be particularly useful for this demographic. Furthermore, the study has identified certain labels that may be particularly beneficial for younger users, such as informational label 3. Other age and education levels, though, may be better served with other labels.
It is clear that age and education level have a significant impact on label efficacy; however, the impact is more nuanced than an overarching trend. In some cases, conflicting trends are shown at different points along the age or education level spectrum, which may indicate gaining more (or less) benefit, up until a point, and then having that benefit decline. There may also be generational and lifestyle factors that are responsible for some of the discontinuous changes within the data. There is also a possibility of unknown confounding variables being present. In any case, the data presented and analyzed herein can inform label-selection decision making, based on the demographics of the individual being targeted to use the label.
Building upon this work, needed future work includes observing respondents’ decision making when using a simulated system, to ascertain whether individuals’ predicted behaviors and their actual ones align with regard to the topic of this study. A variety of activities are also needed in the broader context of online content labeling. These include the development of new and enhanced technologies to detect intentionally deceptive content, new label designs whose efficacy can be assessed and policy analysis to consider how content labeling can be most effectively implemented in real-world environments.

Author Contributions

Conceptualization, J.S. and M.S.; methodology, J.S. and M.S.; resources, J.S.; writing—original draft preparation, J.S. and M.S.; writing—review and editing, J.S. and M.S.; project administration, J.S.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

Partial support for this work was provided by the NDSU Challey Institute for Global Innovation and Growth. Funding for the article publication charge was provided by the Hayek Fund for Scholars at the Institute for Human Studies at George Mason University.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of North Dakota State University (protocol IRB0003884, approved 23 September 2021).

Data Availability Statement

A data release, via a data journal publication, is planned once initial analysis of all data is complete.

Acknowledgments

Thanks are given to Jade Kanemitsu from Qualtrics International Inc. for the management of the data collection process. Thanks are also given to Ryan Suttle, Scott Hogan and Rachel Aumaugher who developed many of the questions that were used in this study during their earlier work (which was presented in [16]). Thanks are given to Bob Fedor who generated an earlier set of figures using this data (which were not used in this paper).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Allcott, H.; Gentzkow, M. Social Media and Fake News in the 2016 Election. J. Econ. Perspect. 2017, 31, 211–236. [Google Scholar] [CrossRef]
  2. Bastos, M.T.; Mercea, D. The Brexit Botnet and User-Generated Hyperpartisan News. Soc. Sci. Comput. Rev. 2017, 37, 38–54. [Google Scholar] [CrossRef]
  3. Cunha, E.; Magno, G.; Caetano, J.; Teixeira, D.; Almeida, V. Fake News as We Feel It: Perception and Conceptualization of the Term “Fake News” in the Media. Lect. Notes Comput. Sci. Incl. Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinform. 2018, 11185, 151–166. [Google Scholar] [CrossRef]
  4. Aisch, G.; Huang, J.; Kang, C. Dissecting the #PizzaGate Conspiracy Theories. New York Times, 10 December 2016. Available online: https://www.nytimes.com/interactive/2016/12/10/business/media/pizzagate.html (accessed on 1 September 2022).
  5. McGaughey, E. Could Brexit be Void? King’s Law J. 2018, 29, 331–343. [Google Scholar] [CrossRef]
  6. Keyes, R. The Post-Truth Era: Dishonesty and Deception in Contemporary Life; St. Martin’s Press: New York, NY, USA, 2004. [Google Scholar]
  7. Lee, T. The global rise of “fake news” and the threat to democratic elections in the USA. Public Adm. Policy 2019, 22, 15–24. [Google Scholar] [CrossRef]
  8. Tong, C.; Gill, H.; Li, J.; Valenzuela, S.; Rojas, H. “Fake News Is Anything They Say!”—Conceptualization and Weaponization of Fake News among the American Public. Mass Commun. Soc. 2020, 23, 755–778. [Google Scholar] [CrossRef]
  9. More Americans Are Getting Their News From Social Media. Available online: https://www.forbes.com/sites/petersuciu/2019/10/11/more-americans-are-getting-their-news-from-social-media/#589ec4d43e17 (accessed on 1 February 2020).
  10. Tandoc, E.C.; Lim, W.; Ling, R. Defining “Fake News” A typology of scholarly definitions. Digit. J. 2018, 6, 137–153. [Google Scholar] [CrossRef]
  11. Silverman, C.; Singer-Vine, J. Most Americans Who See Fake News Believe It, New Survey Says. BuzzFeed News. 2016. Available online: https://www.buzzfeednews.com/article/craigsilverman/fake-news-survey (accessed on 1 September 2022).
  12. Fuhr, N.; Giachanou, A.; Grefenstette, G.; Gurevych, I.; Hanselowski, A.; Jarvelin, K.; Jones, R.; Liu, Y.; Mothe, J.; Nejdl, W.; et al. An Information Nutritional Label for Online Documents. ACM SIGIR Forum 2018, 51, 46–66. [Google Scholar] [CrossRef]
  13. Lespagnol, C.; Mothe, J.; Ullah, M.Z. Information Nutritional Label and Word Embedding to Estimate Information Check-Worthiness. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, Paris, France, 21–25 July 2019; pp. 941–944. [Google Scholar]
  14. Vincentius, K.; Aggarwal, P.; Sahan, A.; Högden, B.; Madan, N.; Bangaru, A.; Schwenger, C.; Muradov, F.; Aker, A. Information Nutrition Labels: A Plugin for Online News Evaluation. In First Workshop on Fact Extraction and VERification; Association for Computational Linguistics: Brussels, Belgium, 2018; pp. 28–33. [Google Scholar]
  15. Spradling, M.; Straub, J.; Strong, J. Protection from ‘Fake News’: The Need for Descriptive Factual Labeling for Online Content. Future Internet 2021, 13, 142. [Google Scholar] [CrossRef]
  16. Suttle, R.; Hogan, S.; Aumaugher, R.; Spradling, M.; Merrigan, Z.; Straub, J. University Community Members’ Perceptions of Labels for Online Media. Future Internet 2021, 13, 281. [Google Scholar] [CrossRef]
  17. Straub, J.; Spradling, M. Americans’ Perspectives on Online Media Warning Labels. Behav. Sci. 2022, 12, 59. [Google Scholar] [CrossRef] [PubMed]
  18. Straub, J.; Spradling, M.; Fedor, B. Assessment of Consumer Perception of Online Content Label Efficacy by Income Level, Party Affiliation and Online Use Levels. Information 2022, 13, 252. [Google Scholar] [CrossRef]
  19. Straub, J.; Spradling, M.; Fedor, B. Assessment of Factors Impacting the Perception of Online Content Trustworthiness by Age, Education and Gender. Societies 2022, 12, 61. [Google Scholar] [CrossRef]
  20. Ott, B. Some Good News about the News: 5 Reasons Why ‘Fake’ News is Better than Fox ‘News’. Flow A Crit. Forum Telev. Media Cult. 2005, 2, 316–317. [Google Scholar]
  21. Kim, S. All the Times People Were Fooled by the Onion. Available online: https://abcnews.go.com/International/times-people-fooled-onion/story?id=31444478 (accessed on 4 February 2022).
  22. Saez-Trumper, D. Fake Tweet Buster: A Webtool to Identify Users Promoting Fake News on Twitter. In Proceedings of the 25th ACM Conference on Hypertext and Social Media, ACM, Santiago, Chile, 1–4 September 2014. [Google Scholar]
  23. Grinberg, N.; Joseph, K.; Friedland, L.; Swire-Thompson, B.; Lazer, D. Fake news on Twitter during the 2016 U.S. presidential election. Science 2019, 363, 374–378. [Google Scholar] [CrossRef] [PubMed]
  24. Lazer, D.M.J.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Metzger, M.J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; et al. The science of fake news. Science 2018, 3, 1094–1096. [Google Scholar] [CrossRef]
  25. Bovet, A.; Makse, H.A. Influence of fake news in Twitter during the 2016 US presidential election. Nat. Commun. 2019, 10, 1657. [Google Scholar] [CrossRef]
  26. Shearer, E.; Matsa, K.E. News Use across Social Media Platforms 2018. Available online: https://www.pewresearch.org/journalism/2018/09/10/news-use-across-social-media-platforms-2018/ (accessed on 21 September 2021).
  27. Fatilua, J. Who trusts social media? Comput. Human Behav. 2018, 81, 303–315. [Google Scholar] [CrossRef]
  28. Balmas, M. When Fake News Becomes Real: Combined Exposure to Multiple News Sources and Political Attitudes of Inefficacy, Alienation, and Cynicism. Commun. Res. 2014, 41, 430–454. [Google Scholar] [CrossRef]
  29. Kang, C.; Goldman, A. Washington Pizzeria Attack, Fake News Brought Real Guns; New York Times: New York, NY, USA, 2016. [Google Scholar]
  30. Haithcox-Dennis, M. Reject, Correct, Redirect: Using Web Annotation to Combat Fake Health Information—A Commentary. Am. J. Health Educ. 2018, 49, 206–209. [Google Scholar] [CrossRef]
  31. U.S. Food and Drug Administration Changes to the Nutrition Facts Label. Available online: https://www.fda.gov/food/food-labeling-nutrition/changes-nutrition-facts-label (accessed on 14 December 2020).
  32. U.S. Department of Energy Estimating Appliance and Home Electronic Energy Use. Available online: https://www.energy.gov/energysaver/estimating-appliance-and-home-electronic-energy-use (accessed on 10 January 2022).
  33. Hammond, D. Health warning messages on tobacco products: A review. Tob. Control. 2011, 20, 327–337. [Google Scholar] [CrossRef] [PubMed]
  34. Lomeli, N.; Funke, D. Fact Check: Cigarette Warning Labels in US Haven’t Changed Since 1984; USA Today: Tysons Corner, VA, USA, 2022. [Google Scholar]
  35. Hiilamo, H.; Crosbie, E.; Glantz, S.A. The evolution of health warning labels on cigarette packs: The role of precedents, and tobacco industry strategies to block diffusion. Tob. Control 2014, 23, e2. [Google Scholar] [CrossRef] [PubMed]
  36. Hensley, S. Be Warned: FDA Unveils Graphic Cigarette Labels. NPR Website. 2011. Available online: https://www.npr.org/sections/health-shots/2011/06/21/137316580/be-warned-fda-unveils-graphic-cigarette-labels (accessed on 1 February 2020).
  37. CBS News Judge Blocks FDA Requirement for Graphic Tobacco Warning Labels. Available online: https://www.cbsnews.com/news/judge-blocks-fda-requirement-for-graphic-tobacco-warning-labels/ (accessed on 1 March 2022).
  38. Ingram, D.; Yukhananov, A.U.S. Court Strikes down Graphic Warnings on Cigarettes. Available online: https://www.reuters.com/article/us-usa-cigarettes-labels/u-s-court-strikes-down-graphic-warnings-on-cigarettes-idUSBRE87N0NL20120824 (accessed on 1 March 2022).
  39. U.S. Food & Drug Administration FDA Proposes New Required Health Warnings with Color Images for Cigarette Packages and Advertisements to Promote Greater Public Understanding of Negative Health Consequences of Smoking. Available online: https://www.fda.gov/news-events/press-announcements/fda-proposes-new-required-health-warnings-color-images-cigarette-packages-and-advertisements-promote (accessed on 1 March 2022).
  40. FDA Label Images. Available online: https://web.archive.org/web/20120302084657/http://www.fda.gov/downloads/TobaccoProducts/Labeling/CigaretteWarningLabels/UCM259974.zip (accessed on 1 March 2022).
  41. Craver, R. Tobacco Manufacturers Gain Three More Months before Graphic-Warning Labels Required on Cigarette Packs|Local|Journalnow.com. Available online: https://journalnow.com/business/local/tobacco-manufacturers-gain-three-more-months-before-graphic-warning-labels-required-on-cigarette-packs/article_fd8915b6-8f43-11ec-aad6-2f790b9bdb5a.html (accessed on 1 March 2022).
  42. Hoek, J.; Wilson, N.; Allen, M.; Edwards, R.; Thomson, G.; Li, J. Lessons from New Zealand’s introduction of pictorial health warnings on tobacco packaging. Bull. World Health Organ 2010, 88, 861–866. [Google Scholar] [CrossRef] [PubMed]
  43. U.S. Food & Drug Administration Cigarette Labeling and Health Warning Requirements|FDA. Available online: https://www.fda.gov/tobacco-products/labeling-and-warning-statements-tobacco-products/cigarette-labeling-and-health-warning-requirements (accessed on 1 March 2022).
  44. Motion Picture Association Inc.; National Association of Theatre Owners Inc. Classification and Rating Rules; Sherman Oaks, CA, USA, 2020. Available online: https://www.filmratings.com/Content/Downloads/rating_rules.pdf (accessed on 27 October 2022).
  45. WELCOME TO FilmRatings.com. Available online: https://www.filmratings.com/ (accessed on 1 February 2020).
  46. The V-Chip: Options to Restrict What Your Children Watch on TV|Federal Communications Commission. Available online: https://www.fcc.gov/consumers/guides/v-chip-putting-restrictions-what-your-children-watch (accessed on 1 February 2020).
  47. Harrington, R. Record Industry Unveils Lyrics Warning Label. Available online: https://www.washingtonpost.com/archive/lifestyle/1990/05/10/record-industry-unveils-lyrics-warning-label/6fc30515-ac8a-4e5d-9abd-a06a34cb54f2/ (accessed on 28 February 2022).
  48. U.S. Federal Bureau of Investigation FBI Anti-Piracy Warning Seal. Available online: https://www.fbi.gov/investigate/white-collar-crime/piracy-ip-theft/fbi-anti-piracy-warning-seal (accessed on 1 March 2022).
  49. Baptista, J.P.; Gradim, A. Understanding Fake News Consumption: A Review. Soc. Sci. 2020, 9, 185. [Google Scholar] [CrossRef]
  50. Braun, J.A.; Eklund, J.L. Fake News, Real Money: Ad Tech Platforms, Profit-Driven Hoaxes, and the Business of Journalism. Digit. J. 2019, 7, 1–21. [Google Scholar] [CrossRef]
  51. Rostron, A. Pragmatism, Paternalism, and the Constitutional Protection of Commercial Speech. Vt. Law Rev. 2012, 37, 527–589. [Google Scholar]
  52. United States Constitution, First Amendment.
  53. U.S. Embassy Beijing New PRC Internet Regulation. Available online: https://irp.fas.org/world/china/netreg.htm (accessed on 28 February 2022).
  54. Diagne, A.; Finlay, A.; Gaye, S.; Gichunge, W.; Pretorius, C.; Schiffrin, A.; Cunliffe-Jones, P.; Onumah, C. Misinformation Policy in Sub-Saharan Africa; University of Westminster Press: London, UK, 2021; p. 224. [Google Scholar] [CrossRef]
  55. Haque, M.M.; Yousuf, M.; Alam, A.S.; Saha, P.; Ahmed, S.I.; Hassan, N. Combating Misinformation in Bangladesh. Proc. ACM Hum. Comput. Interact 2020, 4, 130. [Google Scholar] [CrossRef]
  56. Carson, A.; Fallon, L. Fighting Fake News: A Study of Online Misinformation Regulation in the Asia Pacific; La Trobe University: Melbourne, Australia, 2021. [Google Scholar] [CrossRef]
  57. Yadav, K.; Erdoğdu, U.; Siwakoti, S.; Shapiro, J.N.; Wanless, A. Countries have more than 100 laws on the books to combat misinformation. How well do they work? Bull. At. Sci. 2021, 77, 124–128. [Google Scholar] [CrossRef]
  58. Kumar, P.J.S.; Devi, P.R.; Sai, N.R.; Kumar, S.S.; Benarji, T. Battling Fake News: A Survey on Mitigation Techniques and Identification. In Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 3–5 June 2021; pp. 829–835. [Google Scholar] [CrossRef]
  59. Sharma, K.; Qian, F.; Jiang, H.; Ruchansky, N.; Zhang, M.; Liu, Y. Combating fake news: A survey on identification and mitigation techniques. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–42. [Google Scholar] [CrossRef]
  60. Zhou, X.; Zafarani, R. A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities. ACM Comput. Surv. 2020, 53, 1–40. [Google Scholar] [CrossRef]
  61. Wang, W.Y. “Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection. arXiv 2017, arXiv:1705.00648. [Google Scholar]
  62. De Oliveira, N.R.; Pisa, P.S.; Lopez, M.A.; de Medeiros, D.S.V.; Mattos, D.M.F. Identifying Fake News on Social Networks Based on Natural Language Processing: Trends and Challenges. Information 2021, 12, 38. [Google Scholar] [CrossRef]
  63. Deepak, S.; Chitturi, B. Deep neural approach to Fake-News identification. Procedia Comput. Sci. 2020, 167, 2236–2243. [Google Scholar] [CrossRef]
  64. Guo, Z.; Yu, K.; Jolfaei, A.; Li, G.; Ding, F.; Beheshti, A. Mixed Graph Neural Network-Based Fake News Detection for Sustainable Vehicular Social Networks. IEEE Trans. Intell. Transp. Syst. 2022, 1–13. [Google Scholar] [CrossRef]
  65. Yuan, H.; Zheng, J.; Ye, Q.; Qian, Y.; Zhang, Y. Improving fake news detection with domain-adversarial and graph-attention neural network. Decis. Support Syst. 2021, 151, 113633. [Google Scholar] [CrossRef]
  66. Koloski, B.; Stepišnik-Perdih, T.; Pollak, S.; Škrlj, B. Identification of COVID-19 Related Fake News via Neural Stacking. Commun. Comput. Inf. Sci. 2021, 1402, 177–188. [Google Scholar] [CrossRef]
  67. Hebroune, O.; Benhiba, L. User-Enriched Embedding for Fake News Detection on Social Media; Springer: Cham, Switzerland, 2022; pp. 581–599. [Google Scholar] [CrossRef]
  68. Batailler, C.; Brannon, S.M.; Teas, P.E.; Gawronski, B. A Signal Detection Approach to Understanding the Identification of Fake News. Perspect. Psychol. Sci. 2022, 17, 78–98. [Google Scholar] [CrossRef]
  69. Anoop, K.; Deepak, P.; Lajish, L.V. Emotion cognizance improves health fake news identification. In Proceedings of the 24th International Database Engineering & Applications Symposium (IDEAS 2020), Incheon, Korea, 12–18 August 2020. [Google Scholar]
  70. Shao, Y.; Sun, J.; Zhang, T.; Jiang, Y.; Ma, J.; Li, J. Fake News Detection Based on Multi-Modal Classifier Ensemble. In Proceedings of the 1st International Workshop on Multimedia AI against Disinformation, Newark, NJ, USA, 27–30 June 2022. [Google Scholar] [CrossRef]
  71. Rapti, M.; Tsakalidis, G.; Petridou, S.; Vergidis, K. Fake News Incidents through the Lens of the DCAM Disinformation Blueprint. Information 2022, 13, 306. [Google Scholar] [CrossRef]
  72. Chen, W.; Wang, Y.; Yang, S. Efficient influence maximization in social networks. In Proceedings of the 2009 IEEE International Conference on Data Mining, Miami, FL, USA, 6–9 December 2009. [Google Scholar] [CrossRef]
  73. Chen, W.; Yuan, Y.; Zhang, L. Scalable influence maximization in social networks under the linear threshold model. In Proceedings of the 2010 IEEE International Conference on Data Mining, Sydney, Australia, 13–17 December 2010. [Google Scholar] [CrossRef]
  74. Budak, C.; Agrawal, D.; El Abbadi, A. Limiting the spread of misinformation in social networks. In Proceedings of the 20th International Conference on World Wide Web, Hyderabad, India, 28 March–1 April 2011; pp. 665–674. [Google Scholar] [CrossRef]
  75. Jain, S.; Sharma, V.; Kaushal, R. Towards automated real-time detection of misinformation on Twitter. In Proceedings of the 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 21–24 September 2016; pp. 2015–2020. [Google Scholar] [CrossRef]
  76. Fairbanks, J.; Fitch, N.; Knauf, N.; Briscoe, E. Credibility Assessment in the News: Do we need to read? In Proceedings of the MIS2 Workshop held in conjunction with the 11th International Conference on Web Search and Data Mining, ACM, Marina Del Rey, CA, USA, 5–9 February 2018. [Google Scholar]
  77. Bakir, V.; McStay, A. Fake News and The Economy of Emotions. Digit. J. 2018, 6, 154–175. [Google Scholar] [CrossRef]
  78. Pröllochs, N. Community-Based Fact-Checking on Twitter’s Birdwatch Platform. arXiv 2021, arXiv:2104.07175. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
