Public Evaluations of Misinformation and Motives for Sharing It

Abstract: Concerns about how misinformation is defined hamper efforts to address the problems attributed to it, as does the fact that public understanding of the concept is often ignored. To this end, the present pilot survey study examines three broad issues: (1) the contexts to which the concept most applies (i.e., face-to-face interactions, social media, news media, or all three), (2) the criteria people use to identify misinformation, and (3) motivations for sharing it. A total of 1897 participants (approximately 300 per country) from six countries (Chile, Germany, Greece, Mexico, the UK, the USA) were asked questions on all three issues, with an option to provide free text responses for two of them. The quantitative and qualitative findings reveal a nuanced understanding of the concept, the common defining characteristics being claims presented as fact when they are opinion (71%), claims challenged by experts (66%), and claims that are unqualified by evidence (64%). Moreover, of the 28% (n = 538) of participants providing free text responses that further qualified criteria for misinformation, 31% mentioned critical details missing from communication (e.g., concealing relevant details or lacking evidence to support claims), and 41% mentioned additions to communication that reveal distortions (e.g., sensationalist language, exaggerated claims). Rather than being exclusive to social media, misinformation was seen by the full sample (n = 1897) as present in all communication contexts (59%) and as shared for amusement (50%) or inadvertently (56%).


Introduction
Many institutions have sounded the alarm on the threats posed by misinformation and its darker cousin, disinformation (e.g., the World Economic Forum (WEF), the Organisation for Economic Co-Operation and Development (OECD), the World Health Organisation (WHO), the United Nations (UN), and the European Commission (EC)). In these institutions, misinformation features at the top of global risk registers (World Economic Forum 2024), is the basis for a new scientific discipline called "Infodemiology" (World Health Organization 2020, 2024), is targeted through new codes of practice (European Commission 2022), and now has international expert steering groups to address it (Organisation for Economic Co-Operation and Development (OECD) 2021, 2023).

So, What Is Misinformation?
Misinformation is an example of a term whose definition is in contention; the research community has yet to settle on what it is (e.g., Adams et al. 2023; Grieve and Woodfield 2023; Karlova and Lee 2011; Nan et al. 2023; Roozenbeek and Van der Linden 2024; Scheufele and Krause 2019; Southwell et al. 2022; Tandoc et al. 2018; Vraga and Bode 2020; Van der Linden 2023; Wang et al. 2022; Yee 2023; Zeng and Brennen 2023; Zhou and Zhang 2007). Problems with definitions are a mainstay of academic research, in part because it takes time for a research community to agree on essential criteria. Misinformation, while not new, is developing into its own dedicated field of study, with multiple communities designing research approaches to investigate it. As an emerging research endeavour, it continues to adapt to new findings, which add new facets to the phenomenon. Because misinformation carries considerable weight beyond academia, especially when claimed to be an existential threat, a coherent, agreed definition is necessary for basic and applied research. In turn, the classification of real phenomena into those that are and are not candidates for misinformation cannot succeed without an agreed definition (Adams et al. 2023; Freiling et al. 2023; Swire-Thompson and Lazer 2020). What is more, the topic has profound implications for journalism and the standards needed to ensure that accurate news coverage is conveyed in real time, or corrected appropriately when facts about world events change (e.g., Pickard 2019). In addition, there is considerable interest in examining how media literacy can help prepare people to scrutinise news that comes to them in different forms, which could include wild distortions of facts (e.g., Hameleers 2022).
For all of the aforementioned reasons, there are continuing efforts to determine the key properties of misinformation (e.g., Tandoc et al. 2018; Vraga and Bode 2020; Wang et al. 2022; Roozenbeek and Van der Linden 2024). To support this, the aim here is to begin by undertaking a comprehensive examination of definitions of misinformation in academic research and in public policy organisations across a range of years; the latter are included because the term has policy functions and so has consequences beyond academia. Therefore, in the next section, the definitions proposed in the published academic literature and by public policy in white papers are examined in detail (Table 1). While not every public policy organisation's definition of misinformation is included, key examples are referred to.

Table 1. Definitions of misinformation by type.

Properties of content:
"Since information may be false, we see that misinformation is a species of information, just as misinforming is a species of informing. . . . informing does not require truth and information need not be true; but misinforming requires falsehood, and misinformation must be false" (p. 153). Fox (1983).
"When information has been lost in producing a particular output characteristic, the value taken on by the characteristic is determined, in part, by a random or error component. When there exists a non-null error component in determining a characteristic or variable's value, the 'information' contained in the variable may be referred to as 'misinformation'" (p. 252). Losee (1997).
"information presented as truthful initially but that turns out to be false later on" (p. 488). Lewandowsky et al. (2013).
"incorrect or misleading information about the state of the world" (p. 2). Lazer et al. (2018).
"Misinformation takes many different forms: fake news, propaganda, conspiracy theories, strongly partisan reporting, clickbait, 'alternative' science, etc. What they all have in common is their non-veracity: misinformation is, by definition, false or misleading information." (p. 2). De Ridder (2021).
"Objectively incorrect information, as determined by the best available evidence and expertise on the subject" (p. 227). Bode et al. (2021).

Properties of content with reference to intent and source:
"information that is initially presented as true but later shown to be false" (p. 207). Cook (2017).
"'misinformation' as claims - well-intentioned or not - that are at odds with the best available empirical evidence" (p. 143). Freiling et al. (2023).
"an umbrella term encompassing all forms of false or misleading information regardless of the intent behind it" (p. 2). Altay et al. (2023).
"misinformation as information that is false or misleading, irrespective of intention or source." (p. 20). Roozenbeek and Van der Linden (2024).

Properties of content and mental states of the receiver:
". . . information that is not justified. If someone believes something for the wrong reasons, one may be said to be 'misinformed'." (p. 252). Losee (1997).
"any piece of information that is initially processed as valid but is subsequently retracted or corrected." (p. 124). Lewandowsky et al. (2012).
"Misinformation, or factual misperception, refers to the presence of or belief in objectively incorrect information" (p. 621). Bode and Vraga (2015).
". . . if they [people] firmly hold beliefs that happen to be wrong, they are misinformed - not just in the dark, but wrongheaded" (p. 793). Kuklinski et al. (2000).

Properties of content and deliberate intentions of the sender:
"Misinformation is false or inaccurate information, especially that which is deliberately intended to deceive" (p. 3). Kumar and Geethakumari (2014).
"MISINFORMATION often refers to information that does not directly reflect the 'true' state of the world (e.g., distorted information or falsehoods). We extend the definitions of misinformation by including information that does not reflect the 'true' state of mind of an information sender, such as a lie or something deviating from the true belief of the sender." (p. 804). Zhou and Zhang (2007).
"There are essentially two alternative criteria in assessing misinformation: false intent and false fact. The former indicates that information senders have the intent to create misinformation, and the latter implies that the information content does not match a fact or the true state of the world" (p. 805). Zhou and Zhang (2007).
"Misinformation is the transmission of distortions or falsehoods to an audience. . . . 1) misinformation should be about one or more objects; 2) misinformation should depend on the true belief of the sender, which may not be justified; 3) misinformation being transmitted is moderated by a variety of contextual factors (e.g., the communication modality and motivation) [5], [6], [8], [14]-[16] and familiarity between the sender and the receiver [17]; and 4) both the sender's true belief and the familiarity between the sender and the receiver may change over time" (p. 805). Zhou and Zhang (2007).
"misinformation (that is, false or inaccurate information deliberately intended to deceive)" (p. 544).

Properties of content and ambiguous intentions of the sender:
"Misleading information is not necessarily false (although it can be), but instead can be factually accurate information that is presented in such a way that the meaning of the information is distorted. The information must mislead the intended audience or recipient in some way, as to cause them to act in a way towards the provider that would otherwise differ had the information been published or provided in a non-misleading way" (p. 7). Department of Health and Social Care, UK Government (2015).
"While many messaging errors might have little to no impact on people affected by a disaster, some rumors and misinformation can be very destructive. Misleading communication might promote harmful behaviors that increase personal and public health risks. Inconsistent guidance can also undermine the credibility of your organization". Centers for Disease Control and Prevention (2017).
"misinformation that . . . does not qualify as disinformation: people can inadvertently communicate falsehoods when they intend to share accurate information, and this should not be confused with lying" (p. 12). Grieve and Woodfield (2023).
"misinformation refers to false information shared without intent to harm (a person, social group, organization or country); to express accurate information taken out of context with the intent to harm, knowingly false information shared with the intent to harm" (p. 1274). Aven and Thekdi (2022).

Properties of content and benign intentions of the sender:
"False information is that which can be demonstrably proved to be incorrect. For the purposes of the False or Misleading Information [FOMI] offence, there need not be any intent on the part of an organisation to supply or publish false information, only that the information is false or misleading in a material respect."

Furthermore, the value of this exercise is to complement the main focus of this present pilot study. This pilot survey investigates the criteria that the public views as most closely aligned to their understanding of the term misinformation, which, to date, is still underexplored (Lu et al. 2020; Osman et al. 2022). Along with this, because misinformation is assumed to be rampant on social media (e.g., Muhammed and Mathew 2022), it is also worth investigating whether the public aligns with this assumption and, in turn, their reasons for sharing it on social media platforms. Finally, beyond extending the limited evidence base, the applied value of this study is that it can show where the public's construal of misinformation departs from official definitions. If there is misalignment between the two, then knowing this can inform future research into why the misalignment exists, in particular with respect to how people appraise misinformation when they encounter it in media offline, as well as online.

Definitions of Misinformation: Inconsistencies vs. Multidimensional Properties
On the surface, the official characterisation of misinformation seems intuitive. It broadly concerns content that is inaccurate or false (see Table 1 for examples). Unfortunately, this unravels fairly quickly because, by necessity, it requires a framework for what truth is (e.g., Dretske 1983; Stahl 2006) in order to know what the departures from it are (for discussion, see Adams et al. 2023). For some, a simple dichotomy can be applied (i.e., truth vs. falsehood) (e.g., Levi 2018; Qazvinian et al. 2011; Van der Meer and Jin 2019). Others prefer a continuum between completely true and completely false (Hameleers et al. 2021) or the use of frameworks with several dimensions on which truth, falsity, and intentions are mapped (Tandoc et al. 2018; Vraga and Bode 2020). Complications also arise because misinformation is defined relative to newer terms that refer to intentions to distort the truth (i.e., disinformation) or malicious use of the truth (i.e., malinformation). Therefore, to make the analysis here manageable, the focus is exclusively on definitions of misinformation, and, as is apparent from Table 1, there is considerable variation.
Beyond definitions that solely focus on the properties of the content itself (i.e., that it is false or inaccurate, includes some form of distortion of details, or contains information presented out of context), some definitions also make reference to the intentions of the sender of misinformation (see Table 1). For some definitions, the outcome of misleading the receiver of the content is accidental because the sender is unaware they are conveying inaccurate or false information (see Table 1). To complement this, some definitions also make reference to the state of mind of the receiver of misinformation (see Table 1). This acknowledges that we may, as receivers, misinterpret or misperceive some details and commit them to memory in their distorted state. In turn, we communicate our misapprehensions to others; thus, we are both the recipient and sender of misinformation.
Other definitions refer to the sender as knowingly communicating content to mislead the receiver (see Table 1). The problem here is that this conflates misinformation with definitions of disinformation. Currently, there are serious ramifications for individuals or organisations found to have intentionally disseminated inaccurate or false content deemed harmful (e.g., democratically, economically, socially). In fact, this is the case in several countries where laws are being devised or are already in effect (e.g., China, France, Germany, Ireland, Kenya, Russia, Singapore, Thailand, the USA) (e.g., Aaronson 2024; Colomina et al. 2021; Saurwein and Spencer-Smith 2020; Schiffrin and Cunliffe-Jones 2022; Tan 2023). If the defining characteristics of misinformation and disinformation are indistinguishable, then either one of the terms is not needed, or else some other property should separate them.
Recent definitions make reference to the medium in which false or inaccurate details are disseminated, along with the act of sharing (Table 1). These latest additions reflect growing concerns about access to misinformation, particularly on social media (e.g., Freelon and Wells 2020; Kumar and Geethakumari 2014; Lewandowsky et al. 2012; Malhotra and Pearce 2022; Pennycook and Rand 2021, 2022; Rossini et al. 2021; Wittenberg and Berinsky 2020). In particular, because more people consume news media online, the concern, whether warranted or not (Altay et al. 2023; Marin 2021), is that online news has been polluted by misinformation that can be shared more widely and quickly (e.g., Del Vicario et al. 2016; Hunt et al. 2022). As a consequence, some definitions specifically refer to social media because this access point for misleading people is potent (see Table 1).

Transmission Heuristic
Over the span of 50 years, the definition of misinformation has shifted considerably to include properties that are more psychologically oriented because they refer to the states of mind of the sender and receiver, as well as their behaviours (e.g., sharing). As seen in Table 1, there are definitions that (1) focus exclusively on characterising properties of the content itself, (2) refer to the state of mind of the receiver, (3) refer to the intentions of the sender, (4) refer to sharing behaviours, and (5) focus on the medium through which content is communicated. Underlying this expansion of the criteria of misinformation is a heuristic that succinctly captures these changes: it makes a simple value judgement regarding the psychological features of the transmission of content, referred to here as the transmission heuristic.
The transmission heuristic equates "good" (accurate/true) information with "good" mental states and "good" behaviour, and "bad" (inaccurate/false) information with "bad" mental states and "bad" behaviour. In this way, the transmission heuristic neatly synthesises the various definitions into those that only focus on the properties of the content of transmission, differentiating them from those that ascribe psychological properties to mental states and from those that refer to the behavioural consequences of what is transmitted. In so doing, this analysis shows that there is a presumed causal relationship between content, mental states, and behaviour. However, as has been shown in recent work (Adams et al. 2023), there are currently no reliable evidential grounds for demonstrating that misinformation is directly and exclusively related to aberrant behaviour via substantive changes in beliefs. The same holds for accurate information and its causal consequences for changing mental states and behaviour in predictable ways. Moreover, this kind of simple causal model reflects a limited interpretation of the nature of our cognition and of the corresponding value judgements that can be reliably applied. What is more, it ignores the vast psychological literature on the complex relationship between beliefs and behaviour.

Purposes of This Pilot Study
As mentioned earlier, many have acknowledged that there is considerable variation in the definition of misinformation. However, while this is appreciated, it remains independent of empirical research examining people's ability to accurately distinguish examples of misinformation from truthful content (e.g., Pennycook et al. 2019), their sharing of misinformation (e.g., Chen et al. 2015; Metzger et al. 2021; Pennycook et al. 2019; Perach et al. 2023), and metrics for identifying those most susceptible to it (e.g., Maertens et al. 2023). To better understand the application of the term misinformation in academic research and beyond, this present pilot study aims to explore it from the public's own perspective.
There is limited work examining the public's understanding of the term misinformation (Osman et al. 2022). In that study, a representative sample of participants (N = 4407) from four different countries (Russia, Turkey, the UK, the USA) were asked what they took misinformation to mean. The majority (~60%) agreed or strongly agreed with the definition "Information that is intentionally designed to mislead". Compare this to the smaller proportion (~30%) that agreed or strongly agreed with the definition of misinformation as "Information that is unintentionally designed to mislead". This reflects a similar ambiguity in the way misinformation is understood in the academic literature. In addition, when probed further regarding the specific properties of the content, Osman et al. (2022) found that the most common property was claims that exaggerated conclusions from facts (43%); next was content that did not provide a complete picture (42%), or else content that was presented as fact but was opinion or rumour (38%). Of further note, this pattern of responses did not vary by education level, and there are two reasons why this is important. First, a popular claim is that less-educated individuals are more susceptible to misinformation and conspiracy theories than their university-educated counterparts (e.g., Nan et al. 2022; Scherer et al. 2021; Van Prooijen 2017). However, there are studies showing that there is no reliable relationship between level of education and susceptibility to misinformation (e.g., Pennycook and Rand 2019; Wang and Yu 2022), or even the opposite, regarding the perceived ability to detect misinformation (e.g., Khan and Idris 2019). A second assumption is that level of education is correlated with media literacy (e.g., Kleemans and Eggink 2016), so that the more educated one is, the better able one is to analyse media content. Though here also, there does not appear to be a stable relationship between the two (e.g., Kahne et al. 2012); other factors matter more (e.g., interest in current affairs, civic engagement) (e.g., Ashley et al. 2017; Martens and Hobbs 2015). Thus, this present study aims to explore whether interpretations of the term misinformation indeed differ by educational level. The main focus of this present pilot study is to consider the following three critical factors: (1) the contexts to which misinformation most commonly applies (social media, face-to-face interactions, news media); (2) the key criteria used to determine whether a transmission constitutes misinformation; and (3) the reasons for sharing what might be suspected to be misinformation.

Participants and Design
A total of 1897 participants took part in the pilot survey. The inclusion criteria were that participants were born in and were current residents of one of the countries included in this study: Chile, Germany, Greece, Mexico, the UK, and the USA (see Table 2). The selection of the six countries was opportunistic and based on access to large samples. In addition, participants had to be a minimum of 18 years of age to take part (age restrictions were 18 to 75 for all samples). Ethics approval for this pilot study was granted by the College Ethics board, Queen Mary University of London, UK (QMREC1948). Participants were required to give informed consent at the beginning of the web survey before participating. The full raw data file of demographics and responses of all participants is available at https://osf.io/uxm7c.
Participants were asked to indicate their gender (Male = 53.4%; Female = 45.1%; Other = 1.2%; Prefer not to say = 0.3%). They also indicated their age (18-24 = 38.3%; 25-34 = 37.4%; 35-44 = 14.3%; 45-54 = 6%; 55+ = 4%). They provided details of their education level, which was adapted for each country (Level 1 (up to 16 years) = 5.5%; Level 2 (up to 18 years) = 35.1%; Level 3 (Undergraduate degree) = 42.5%; Level 4 (Postgraduate degree) = 14.6%; Level 5 (Doctoral degree) = 1.6%). Additionally, they provided details of their political affiliation, also adapted for each country and presented on an 8-point scale from 1 (extremely liberal) to 7 (extremely conservative), with 0 indicating no political affiliation (0 = 8.6%; 1 = 14.5%; 2 = 24.4%; 3 = 25.9%; 4 = 15%; 5 = 7.8%; 6 = 2.6%; 7 = 1.2%). Note that because the pilot survey did not use a representative sampling method, the findings presented in the results section were not broken down by demographic factors or by country. This simple pilot survey was run via the online survey platform Qualtrics (https://www.qualtrics.com/uk/), and participants were recruited via Prolific (https://www.prolific.co/), an online crowd-sourcing platform; recruitment via Prolific was by volunteer sampling. Overall, the pilot survey contained three main dependent variables (context in which misinformation most applies, characteristics of misinformation, and sharing reasons) and five demographic details: country, age, education level, gender, and political affiliation. All participants were presented with the same questions in the same order, because the latter two questions were informed by responses to the first question. The survey took approximately 5 min to complete, and participants were paid 0.88 USD for their time (equivalent to a rate of 10.56 USD per hour at the time the study was run).

Procedure
Once participants had given their consent and provided their demographic details, they were presented with the first main dependent variable, which referred to the context in which the concept of misinformation most commonly applies (see Table 3). They were then presented with the second dependent variable, which referred to a range of criteria used to identify a transmission as misinformation (see Table 3), and then the final dependent variable, which referred to reasons given for sharing misinformation (see Table 3).

Select one of the statements.

Key criteria
For whichever context you have selected for where you think misinformation is present, please provide answers [yes/no] as to the critical factors that you think make any information count as misinformation in your mind.
(1) Have to be intended to deliberately mislead
(2) Have to be disproven by a large body of scientific evidence
(3) Have to be challenged by academic opinion (or other expert groups)
(4) Have to be presented as fact rather than opinion
(5) Have to be disproven rather than shown to be inaccurate.

Yes/No to each statement
Open-ended Question 1
For whichever context you have selected for where you think misinformation is present, if the options presented in the previous question [on key criteria] do not include critical factors that you think make any information count as misinformation in your mind, please type them in the section below.

Free text response
Once they had completed these questions and the two open-ended free text questions, the survey was complete. The data file with the coded responses for open-ended Question 1 can be found here: https://osf.io/64qfp, and the coded responses for Question 2 can be found here: https://osf.io/hpfrt.

Finding 1: Common Contexts in Which the Term Misinformation Most Applies
The findings show that the context in which people perceive the concept of misinformation most applies is all possible communication contexts (59.6%), which in this case includes the following: news media (e.g., online, radio, TV, newspapers, magazines), social media (e.g., Twitter, Facebook, Instagram, WhatsApp, TikTok), and face-to-face interactions (e.g., social gatherings, workplace settings). Fewer specifically identified social media (30.2%) as the communication context to which the concept of misinformation most exclusively applies (see Figure 1).


Finding 2: Common Defining Characteristics of the Term Misinformation
The findings show (see Figure 2) that the most frequently agreed-upon criterion for defining misinformation was presenting information as fact rather than opinion (71.2%), followed by information that has been challenged by experts (66.4%), or else later disproven by evidence (64.8%). The latter two suggest that some value is placed on validating communication by an authority, or with reference to empirical work; this is further explored in the free text analysis.

Finding 3: Reasons for Sharing Misinformation
The last question pertained to reasons for sharing misinformation. People mostly selected approximately two (M = 1.8; SD = 1) of the six possible reasons for sharing misinformation. The two most common reasons were that the information was shared unwittingly (56.7%) or deliberately for amusement purposes (50.3%) (see Figure 3).

Finding 4: Open Question 1 on Criteria for Determining Misinformation
Of the 1897 participants that took part, a total of 618 (33%) volunteered details in the free text box to further qualify their responses to Question 2. While 6 responded that they were unsure of what the open-ended question was asking them to do, 74 participants explicitly stated that they were happy with the options provided in Question 2. This is, in and of itself, useful information, as it suggests that they considered the options a reasonable representation of their own considerations. The remaining 538 participants further qualified factors that they considered relevant to understanding the term misinformation.
Demographics. Of the 538 that provided details in the free text box, approximately 76% were between the ages of 18 and 34 and 24% were between 35 and 54; 53% were male and 45% female; political affiliation was 61% left-leaning and 13% right-leaning; and 35% were non-university graduates and 64% were university graduates. These patterns generally reflected the demographic distribution of the overall sample. Regarding the demographic details of the 74 that endorsed the options already presented, 32% were non-university graduates and 68% were university graduates. Note also that this, too, reflects the general sample distribution of university graduates (~59%) to non-university graduates (~41%).
Coding. After examining all of the free text in detail, the coding frame first classified the responses into "qualifying the criteria of misinformation", "provision of an illustration", "identifying a target/victim of misinformation", "identifying a source of misinformation", "identifying motivations for misinformation", and "identifying a context in which misinformation is found" (see Table 4). As presented in Table 4, there was high agreement between the two coders for four of the six criteria on which the free text was classified. The two criteria with the most disagreement were the examples of misinformation and the context in which misinformation occurred; the main reason was that one of the coders conflated examples with context. After both coders reviewed the free text again, the detailed analysis of the classified free text was based on the maximal agreement between both coders. The only exception was for the example and context classifications, where the solution was to collapse across both categories and examine general patterns regarding where misinformation was reported.
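The coder agreement described above can be quantified in standard ways. As a hypothetical illustration only (the labels and counts below are made up and are not the study's data), raw percent agreement and chance-corrected agreement (Cohen's kappa) for two coders on a binary category could be computed as follows:

```python
# Hypothetical sketch: quantifying inter-coder agreement for a binary
# free-text category (e.g., 1 = "context mentioned", 0 = "not mentioned").
# All data below are invented for illustration; stdlib only.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of items that both coders labelled identically."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Expected chance agreement from each coder's marginal label frequencies.
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Made-up codes for ten free text responses.
coder_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_b = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]

print(percent_agreement(coder_a, coder_b))  # 0.8
print(cohens_kappa(coder_a, coder_b))
```

Kappa is the more conservative measure: with the invented codes above, the coders agree on 8 of 10 items, but kappa (~0.58) discounts the agreement expected by chance given how often each coder used each label.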
When it came to the category "qualifying the criteria of misinformation", at a broad level the responses were further classified into three groups (see Table 5). Some (31%) qualified misinformation according to what was absent, such as concealing relevant details or lacking evidence to support claims; these were details that would be necessary to ensure valid or accurate communication. Others (41%) qualified misinformation according to details that were present in communication, such as distortions, sensationalist language, or exaggerated claims. Another way that participants qualified their understanding of misinformation was to refer to the motivations they ascribed to the communication (34%). Of those volunteering motivations, some participants explicitly mentioned that misleading the receiver was unintended (6%) or that it could be both intentional and unintentional (10%). However, more referred to reasons indicating that the generator/communicator of misinformation had intentions to mislead, expressed simply through terms associated with intent (15%). Alternatively, some suggested that there were nefarious motives (17%), such as sowing discord or being deceitful (5%), or financial incentives to communicate distorted or wrong claims (13%). Taken together, this suggests that the motivations behind constructing and communicating misinformation are viewed as wilful (50%) rather than unintentional (6%), with some accepting that both are possible.
Many of the responses (32%) in the free text referred to contexts where misinformation was expected to be present (see Table 4). Given the level of disagreement between the two coders in the classification of examples, the example and context classifications were collapsed to examine general areas where misinformation was said to occur (n = 223); where examples were given, they often indicated a context (e.g., Facebook posts indicated the context was social media). To this end, of the 32% that identified a specific context, four common contexts were referred to: social media (16%), news media (19%), political domains/politics (14%), and situations where statistics/data/evidence/scientific claims were communicated (18%). Finally, some responses (15%) made reference to the victim, or target, of misinformation. For those referring to a target, there were five types: anyone (including self) (36%), the public (public, masses, population) (12%), less informed/trusting individuals (naïve, trusting, less educated) (24%), targeted groups (12%), and social media users (14%).

Finding 5: Open Question 2 on Reasons for Sharing
Of the 1897 participants, a total of 375 (20%) volunteered a response to the second open-ended question regarding sharing behaviours. While 6 responded that they were unsure of what the open-ended question was asking them to do, 82 participants responded that they were happy with the options they were provided in Question 3. The remaining 287 participants provided responses that further qualified factors they considered regarding motivations behind sharing. Of the 287 free text responses, 21 were uninterpretable, leaving 266 for coding (see Table 6). Once all free text was reviewed to determine themes, the text was classified into three broad categories. Note that each response was classified into only one category; where participants gave multiple responses, the first response was used as the basis for classification. The three main categories were as follows: those that explicitly indicated that they do not knowingly share misinformation (24%); those that share it unwittingly, for which there were two different factors (simply not knowing (34%), or not knowing at the time because of difficulty in verification/the content was considered valid at the time); and those that share deliberately (for amusement or for illustrative purposes) (35%) (see Table 6). The demographic details of respondents were split into three groups: "Avoid sharing", "Unwittingly shared", and "Knowingly shared". It is worth noting that the demographics (see Table 6) seem to reflect the general patterns in the overall sample. In particular, it is not the case that education level reflects large differences between those that avoid sharing, share unwittingly, or share knowingly.

General Discussion
In summary, this pilot study provides insights that align with those of Osman et al. (2022). Consistent with those previous findings, the present pilot study also found that the majority (59.6%) expect to find misinformation in all communicative contexts and not exclusively on social media (30.2%). This suggests that the public takes a broader line than some published definitions that are specifically concerned with the presence of misinformation on social media (see Table 1). One reason for this may be that the public is more pragmatic and, at the same time, less alarmist in its views of misinformation because it expects that any communication between people will involve inaccuracies and distortions. This speculation requires further empirical investigation.
When it comes to reasons for sharing, the present findings again replicate those of Osman et al. (2022). The present sample indicated that if they did share misinformation, the majority did so unwittingly (56.7%). In the qualitative analysis, this is further qualified by explanations that, at the time of sharing, the content was not officially considered misinformation. Where people knowingly shared misinformation, this was done ironically (50.3%). Looking at the free text responses, people explained that some of the content was satirical, which could, in some cases, qualify under the definition of misinformation. Of those providing free text responses (n = 266), 21% explained that they used the content as a means of correcting misapprehensions amongst their social networks, both on social media and offline. This finding contrasts with recent work suggesting that when people are aware that misinformation is being shared, they may fail to point it out on their social network for fear of offending (Malhotra and Pearce 2022). Overall, this pilot study supports claims that the range of motivations behind communication apply to misinformation as they do to information: we are motivated to entertain, inform, and persuade, as well as to manipulate (e.g., Altay et al. 2023; Osman et al. 2022; Pennycook et al. 2019).
The main focus of this pilot study was to examine the public's construal of misinformation, and here too the present findings replicate those of Osman et al. (2022). The most common responses to the options regarding key criteria for misinformation were as follows: content that was presented as fact but was opinion (71.2%), claims that had later been challenged by experts (66.4%), and claims that were later disproven by evidence (64.8%). The free text responses (n = 538) further qualified these patterns. When it came to content described as consequential (e.g., political content, economic details, world affairs, health recommendations), respondents valued supporting evidence, and so its absence, or the obscuring of it, was used as a strong signal of misinformation (31%).
Unlike any of the official definitions of misinformation, the free text (n = 538) revealed that the public considers the underlying motivations of the source (34%), which go beyond intentions to mislead. The qualitative data revealed that some communication is harmful because it is designed to be divisive, often because the content is presented in such a way as to prohibit discussion or raise challenges. Some responses indicated suspicion that content could be misinformation because of the censorious nature of state communication, which aligns with recent findings (Lu et al. 2020). Some highlighted that there are likely financial incentives to communicate content that is salacious or sensational because it is clickbait, often referred to in association with news media and journalism more broadly.
Overall, the present findings suggest that the public takes a pragmatic approach to what constitutes misinformation, as indicated in the responses to the closed and open questions on the topic. One reason for this interpretation is that people reported times where content that was viewed as accurate when encountered was later invalidated in light of new evidence. This suggests some flexibility in what could count as misinformation at any one time. This latter point is worth considering further because it aligns with work discussing the dynamic nature of communication and the temporal status of claims according to what could be known at the time (Adams et al. 2023; Lewandowsky et al. 2012; Osman et al. 2022). The free text responses referred to evidence and scientific claims as ways to illustrate the criteria considered critical for characterizing misinformation. Indeed, when it comes to scientific claims communicated to the public, either directly by an academic or translated by journalists or popular science writers, concerns have been raised about exaggerations (Bratton et al. 2019; Bott et al. 2019) or the downplaying of uncertainties (Gustafson and Rice 2020). The latter means that scientific claims, whether made by scientists or by journalists on their behalf, tend to be packaged with certitude. Journalistic flair also means that distortions of the original scientific claims are inevitable for the sake of accessibility (Bratton et al. 2019; Bott et al. 2019), but this, according to some official definitions (see Table 1), would qualify as misinformation. What is clear from the present findings is that the public is sensitive to this and knows that license is taken when scientific claims are presented in ways that do not provide a complete and accurate picture (Osman et al. 2022).
Given this, and widening the discussion further, it is worth considering the following: What improvements can be achieved in the dissemination of content through media based on insights from the public understanding of misinformation? An appropriate response will depend on what the function of dissemination is and who is executing it. One possible function of dissemination is audience engagement (e.g., through the number of views, likes, and shares) (e.g., López and Santamaría 2019), which is best achieved online, not least because it can be quantified easily. If the content is opinion-based, then it is clear that the public is sensitive to when it is disseminated as fact, which they treat as misinformation. To make improvements, should corrective measures apply equally to citizen journalists, activists, the general public, and traditional news journalists? The answer involves taking media ethics into account (e.g., Stroud 2019; Ward and Wasserman 2010).
Professional media ethics for traditional journalism (i.e., the pre-digital age) meant upholding principles of truth seeking, objectivity, accountability, and a responsibility to serve the public good (Ward 2014). Digital journalism follows new principles concerned with the active pursuit of public engagement and with tailoring news to the presumed needs of a target audience (Guess et al. 2021; Nelson 2021; Ward 2014). Traditional print journalism was understood to have political tilts, but now, with new principles in place, news media is explicitly partisan (e.g., Guess et al. 2021). An example of an agreed fact, regardless of partisan news media, would be reporting the date, time, and location of a political event that occurred. But, depending on the news media outlet and audience (e.g., Kuklinski et al. 1998; Savolainen 2023), the surrounding descriptions, with omissions or commissions, may end up also being treated as fact, or as opinion masquerading as fact. If corrective mechanisms are applied to limit the dissemination of content that the public perceives as misinformation (e.g., opinion presented as fact), then for journalism this means reintroducing and enforcing pre-digital media ethics, with less partisanship. Corrective measures simply mean publishing corrections with explanations (Appelman and Hettinga 2021; Osman et al. 2023). If the motivation is to engage audiences to serve the public good, then activists and citizen journalists should also follow suit, with the aim of raising the bar regarding the quality and sourcing of support for claims and facts. If the motivations are clear and singular, and dissemination of information is for some public good, then it is fair to expect that those doing so are held to some basic standard of reporting.

Limitations
There were three main limitations of the pilot survey. First, the options presented to people were not fully randomised, so there could have been some biased responding given the order of presentation of the options for both questions. Second, the sampling was not designed to ensure representation based on gender, age, and other key demographics. Third, the question regarding the criteria people use to judge a piece of information as misinformation presented options that conflated general descriptions of what misinformation could be, the sources that could be used to judge it, and the features of the information that would lead people to suspect that it is misinformation. For all these reasons, the presentation of the data in the Results section was based on simple descriptives rather than inferential analyses. Acknowledging this, the general patterns still stand as informative, not least because they replicate and extend previous work (Osman et al. 2022) using a wider sample of participants from countries not often included in this field of research.

Conclusions
Given how profoundly inconsistent published definitions of misinformation are, the present study was designed to investigate how the public construes the concept. The findings from the present pilot survey show that the majority treat misinformation as a concept that applies to all communication contexts, not just communication online via social media. The majority see opinions presented as facts as an indicator of misinformation, along with claims unsubstantiated by experts or later disproven by evidence. The latter is important because it indicates sensitivity to the temporal nature of claims and their status with respect to evidence, which frequently need updating in precision and accuracy as new findings are uncovered. To the extent that people are aware of it, sharing misinformation is not motivated by efforts to deliberately mislead; rather, it is shared to stimulate discussion or for entertainment. What the public views as misinformation, and the criteria they apply to their everyday experiences, does not reflect naivety. The public makes pragmatic appraisals based on a number of factors, including motivations and context.

Informed Consent Statement: Informed consent was obtained from all participants that took part in the experiment.

Figure 1 .
Figure 1.Percentage of responses to each of the four possible communication contexts where the concept of misinformation most applies.


Figure 2 .
Figure 2. Percentage of Yes responses to each of the five stated definitions of misinformation.


Figure 3 .
Figure 3. Percentage of responses to the six possible reasons for sharing misinformation.


Funding:
This research received no external funding.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the College Ethics Board, Queen Mary University of London. Approval code: QMREC1948; approval date: March 2022.

Table 1 .
Illustrations of definitions of misinformation presented in academic publications and public institutions.

Table 2 .
The total sample includes N = 1897 adult participants from six countries.

Table 3 .
Questions presented to participants in the pilot survey.

Table 4 .
Superordinate coding of free text of the first open question: misinformation criteria.

Table 5 .
Detailed coding of free text for the first open question: misinformation criteria.

Table 6 .
Volunteered responses to open-ended question on sharing.