A study on information disorders on social networks during the Chilean social outbreak and COVID-19 pandemic

Information disorders on social media can significantly affect citizens' participation in democratic processes. To better understand the spread of false and inaccurate information online, this research analyzed data from Twitter, Facebook, and Instagram. The data was collected and verified by professional fact-checkers in Chile between October 2019 and October 2021, a period marked by political and health crises. The study found that false information spreads faster and reaches more users than true information on Twitter and Facebook. Instagram, in contrast, seemed less affected by this phenomenon. False information was also more likely to be shared by users with lower reading comprehension skills, while true information tended to be less verbose and to generate less interest among audiences. This research provides valuable insights into the characteristics of misinformation and how it spreads online. By recognizing the patterns of how false information diffuses and how users interact with it, we can identify the circumstances in which false and inaccurate messages are prone to become widespread. This knowledge can help develop strategies to counter the spread of misinformation and protect the integrity of democratic processes.


Introduction
Misinformation is an old story fueled by new technologies [1]. As a multifaceted problem that includes constructs such as disinformation, conspiracy theories, false rumors, and misleading content, misinformation is considered an information disorder [2] that harms the information ecosystem and negatively impacts people's well-being. For example, during the COVID-19 pandemic, anti-vaccine groups used online misinformation campaigns to spread the idea that vaccines are unsafe [3]. As a result, medical and scientific groups trying to combat the pandemic also had to deal with an infodemic [4]. It is important, then, to understand the characteristics and effects of misinformation.
Research in the United States has shown that false information spreads quickly within certain groups [5], especially when it confirms users' existing beliefs. Accordingly, researchers have examined how false information can affect important events like presidential elections [6] and advertising campaigns [7]. The recent COVID-19 pandemic has shown that misinformation can impact regions like Europe [8], India [9], and China [10], revealing that this phenomenon has a global reach. Different studies show that misinformation can affect how people view vaccination campaigns [11] and make it harder for governments to control the spread of the pandemic [12].
Numerous organizations are studying how information disorders impact society [13]. Their objective is to develop public policies that can prevent the harmful effects of misinformation [14]. These efforts demonstrate that misinformation is a multifaceted issue that creates an ecosystem of information disorders [15]. The factors contributing to this ecosystem include producing false content, employing bots to disseminate false information widely [16], and increasing incivility on social media platforms, often fueled by troll accounts [17]. The complexity of the problem means that many questions still need to be answered.
This study aims to identify the characteristics of misinformation on social media platforms in Chile, a Latin American country where social media usage is widespread and that has experienced several social and political crises that have facilitated the spread of misinformation [18,19,20,21,22]. With a population of 19 million, Chile has 13 million Facebook accounts, 9.7 million Instagram accounts, and 2.25 million Twitter accounts [23]. In 2019, a wave of protests against transportation fares led to a fully-fledged social uprising against inequality and the political establishment. With the onset of COVID-19 in 2020, the social outbreak led to a process of drafting a new constitution, which failed in 2021 but was resumed in 2022.
More specifically, the study has two objectives:
-Interpret the dynamics of misinformation propagation on social platforms in Chile. We define the trajectories and traceability of messages with misinformation spread on Twitter, Facebook, and Instagram. This information reveals the speed and scope of propagation, identifying whether there are non-evident diffusion patterns.
-Explain and analyze the characteristics of the messages that misinform on social platforms in Chile. Within the messages we studied, we conducted a content analysis, determining which linguistic strategies are used to misinform.
Importantly, we compare the diffusion of stories verified as false, inaccurate, and true by professional fact-checkers. That is, we study the differences and similarities of the dynamics of misinformation across platforms and between types of content. Our study examines the similarities and differences of misinformation across various topics of our recent history. Specifically, we will focus on the Chilean social outbreak, the rise of the COVID-19 pandemic, the beginning of the constitutional process, and the 2021 presidential elections. By exploring these topics, we hope to gain insight into the characteristics of disinformation during different events. We will pay close attention to the structural propagation of false information. Additionally, we will consider specific properties of the content, including the readability of the texts and accessibility to content from various social media platforms.
This article is organized as follows. We discuss related work in Section 2. The materials and methods used to develop this study are presented in Section 3. The content analysis of Twitter is presented in Section 4. Propagation dynamics on Twitter are discussed in Section 5. The analysis of volumes of reactions on Facebook and Instagram is introduced in Section 6. Finally, we discuss the results and findings of this study in Section 7, and Section 8 provides concluding remarks and outlines future work.

Related Work
Information disorders are a growing concern in the age of social media and digital disruption. Frau-Meigs et al. [14] define information disorders as a result of the "social turn" and the emergence of social media, which has created an ecosystem for disinformation and radicalization. Christopoulou [24] studies the role of social media in the information disorder ecosystem, the initiatives that have been developed to counter it, and the taxonomies of false information types. Wardle [25] argues for smarter definitions, practical and timely empirical research on information disorder, and closer relationships between journalism academics, news organizations, and technology companies. A recent study [26] discusses how disinformation has reshaped the relationship between journalism and Media and Information Literacy (MIL), and the challenges faced by MIL and digital journalism in the fight against disinformation. Overall, these studies show that information disorders are a complex and multifaceted issue that requires interdisciplinary collaboration and research.
There are several studies focused on the characterization of misinformation [27] and fake news. These studies aim to identify differences between false or misleading content and true content to assist the work of fact-checkers and help develop automatic methods for misinformation detection [28,29]. Most of these studies focus on the distinction between false and true news [30], disregarding the analysis of imprecise content (i.e., half-truths). Among the characterizations used to distinguish between false and true information, some studies focus on linguistic features. For example, Silverman [31] determined that fake news shows higher levels of lexical inconsistency between the headline and the body. Horne and Adali [32] studied various news sources, finding that fake news has linguistic similarities with satirical-style texts. In addition, their study shows that fake news headlines are longer and generally more verbose, while the body of this type of news includes repetitive text and is typically easier to read than that of real news. Pérez et al. [33] analyzed the text of fake news, focusing on gossip websites. The study shows that fake news uses more verbs than true news. Rashkin et al. [34] introduced a case study based on stories qualified by https://www.politifact.com/. They found that fake news uses subjective words to dramatize or sensationalize news stories. They also found that these stories use more intensifiers, comparatives, superlatives, action adverbs, manner adverbs, and modal adverbs than true content.
Analyzing conversations on Twitter, Mitra et al. [35] show that discussions around false content generate disbelief and skepticism and include optimistic words related to satire, such as joking or grins. Kumar et al. [36] found that conversations triggered by false news are longer but include fewer external sources than conversations initiated by actual content. Bessi and Ferrara [37] found a significant presence of bots in the spread of fabricated claims on Twitter related to the 2016 US presidential election and found that many of these accounts were newly created.
In relation to the motivations that users of social networks may have to share misinformation, some studies argue that a relevant factor is confirmation bias [38]. Confirmation bias refers to the tendency to seek, interpret, and remember information in a way that confirms pre-existing beliefs [39]. Essentially, individuals tend to look for information that supports their existing views and overlook information that contradicts them. Confirmation bias can result in the persistence of false beliefs and the reinforcement of stereotypes. According to Pennycook et al. [40], the commonly held belief that people share misinformation due to confirmation bias may be misguided. Instead, their findings point to a lack of careful reasoning and of relevant knowledge, linked to poor truth discernment. Moreover, there is a significant disparity between what people believe and what they share on social media, primarily attributable to inattention rather than the intentional dissemination of false information. Since many users unintentionally share fake news, effective interventions can encourage individuals to consider verified news, emphasizing the importance of fact-checking initiatives. Unfortunately, recent studies highlight evidence that questions the effectiveness of these initiatives. For example, Ecker et al. [41] surveyed work illustrating resistance from social network users to corrections, showing that it is challenging to change users' minds once they have adopted a stance. In a recent study, van der Linden [29] shows that individuals are prone to misinformation through the illusory truth effect: statements that are frequently repeated are more likely to be perceived as true than statements that are novel or infrequently repeated.
Because false information can spread rapidly while fact-checking interventions often arrive too late, these corrections may not be seen by a significant number of people, failing to counteract the illusory truth effect.
Regarding propagation dynamics, several studies show that stories labeled as false spread deeper than news labeled as true. For example, Friggeri et al. [42] show that false content is more likely to be reshared on Twitter, reaching more people than actual news. Zeng et al. [43] show that so-called fake news spreads faster than true content. Zubiaga et al. [44] analyzed the conversations triggered by various false rumors, finding that at the beginning most users engage with the content without questioning it, by sharing it or commenting on it. The study shows that questioning this type of information occurs at a later stage and that the level of certainty of the comments remains the same before and after debunking. Finally, Vosoughi et al. [5] showed that the spread of stories fact-checked as false is significantly faster, reaches deeper into the network, and, in general, produces wider information cascades than content verified as true, especially during the early stages of content propagation.

Materials and Methods
We worked on a collection of content tagged by a media outlet in Chile certified by the International Fact-Checking Network (IFCN)1, Fast Check CL2. This Chilean initiative systematically analyzes the content circulating in national media and social networks. To achieve greater content coverage, we also worked with content verified by two other fact-checking initiatives: Decodificador3, an independent journalistic initiative, and Fact Checking UC4, a fact-checking outlet housed in the School of Communications at the Pontificia Universidad Católica de Chile. The three initiatives follow cross-checking verification, board discussion, and source-checking methodologies.
We collected content verified by the three fact-checking agencies mentioned above from October 2019 to October 2021. The content was classified into four thematic dimensions: • Social outbreak: the Chilean social outbreak refers to the Chilean socio-political crisis that started on October 19th, 2019, and after several months of protests and riots, led to a new political constitution process. This event highlighted the role of media in shaping public opinion and influencing social movements. The use of social media by protesters and the circulation of user-generated content challenged traditional news media's monopoly over information dissemination. The event also demonstrated the need for journalists to report on diverse perspectives and provide critical analysis of events unfolding on the ground.
• COVID-19: COVID-19 reached Chile on March 3rd, 2020, and produced 61,050 deaths and more than 3,000,000 of infections by 2022. The COVID-19 pandemic has been a pivotal event for communication sciences and journalism, revealing both the strengths and weaknesses of the media in disseminating information during a global crisis. It has emphasized the importance of reliable and accurate reporting in a time of uncertainty, as well as the role of journalists as gatekeepers of information. The pandemic has also highlighted the role of social media in disseminating misinformation, leading to the need for increased media literacy and critical thinking skills.
• 2021 Elections: In 2021, Chile held political elections for the congress and the presidency, with the latter featuring a highly polarized contest between right-wing candidate José Antonio Kast and left-wing candidate Gabriel Boric. Despite the intense political climate, Boric emerged victorious by a wide margin in the presidential ballotage. This event is very relevant for the study because the extensive coverage of the candidates by traditional media and social networks has played a critical role in informing the public. The elections have also raised questions about the use of digital media for political campaigning and the impact of micro-targeting on electoral outcomes. Finally, the elections have brought to light issues of political polarization and the role of the media in contributing to or mitigating these divisions.
• Constitution: On November 15, 2019, the National Congress decided to start a constitutional process in response to the social unrest that arose during the social outbreak. This process involved a plebiscite to ratify the decision, which was approved by 79% of voters. Subsequently, on May 15 and 16, 2021, conventional constituents were elected and began the constitutional process on July 4 of that same year. The constitutional process has been characterized by public participation, with citizens engaging in debates and discussions about the future of the country. Communication has played a crucial role in facilitating this process, with media organizations providing platforms for dialogue and debate, and social networks enabling citizens to exchange ideas. The constitutional process has brought to the forefront issues of social inequality, human rights, and democratic participation, which have been the subject of intense public debate and discussion. Finally, the constitutional process has emphasized the need for innovative and effective communication strategies to engage citizens and foster public participation in the democratic process.
In addition, other topics were grouped in the category Others. In total, we collected 1000 verified stories, with 130 units of content for Social outbreak, 353 on COVID-19, 79 on 2021 Elections, and 132 on Constitution. Table 1 shows these units of content by fact-checking agency. Each fact-checker works with its own verification categories, several of which are equivalent. To carry out the homogenization of veracity categories, we reviewed two studies. First, we considered the study by Amazeen [45] to measure the level of agreement between fact-checking projects. This study considers in a single category all the verifications cataloged with some component of falsehood or imprecision. Following Lim [46], we generated a manual equivalence between the veracity categories. To establish the equivalence relationships, we used the descriptions of the qualifications provided by the editors of each fact-checking agency. Accordingly, the research team grouped these categories into three main categories: 'True', 'False', and 'Imprecise'. Imprecise content groups hybrid content, which contains half-truths, and outdated content (out of temporal context). Unverifiable units of content were removed from the dataset. Table 2 shows this information per fact-checking agency:

    Veracity       Fast Check CL   Decodificador   Fact Checking UC   Total
    True           284             38              53                 375
    False          250             86              37                 373
    Imprecise      122             42              85                 249
    Unverifiable   0               3               0                  3
    Total          656             169             175                1000

Table 3 shows the units of content verified by thematic dimension. The topic with the most fact-checks is 'COVID-19'. We also observe that the two topics with the highest number of false labels are COVID-19 and Constitution. There is also a significant amount of imprecise content in these two topics. In both COVID-19 and Constitution, false and imprecise content doubles true content, encompassing 67% in COVID-19 and 71% in Constitution.
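As a sketch, the homogenization step can be implemented as a lookup table from each outlet's raw labels to the three shared categories. The per-outlet labels below are hypothetical examples for illustration; the actual equivalences were built manually from each outlet's editorial descriptions.

```python
# Hypothetical raw labels mapped to the three shared veracity categories.
LABEL_MAP = {
    "falso": "False",
    "fake": "False",
    "verdadero": "True",
    "real": "True",
    "engañoso": "Imprecise",
    "impreciso": "Imprecise",
    "desactualizado": "Imprecise",  # outdated, out of temporal context
}

def homogenize(items):
    """Map raw labels to shared categories; drop unverifiable content."""
    result = []
    for item in items:
        category = LABEL_MAP.get(item["label"].lower())
        if category is not None:  # unknown/unverifiable labels are removed
            result.append({**item, "veracity": category})
    return result

sample = [
    {"id": 1, "label": "Falso"},
    {"id": 2, "label": "Engañoso"},
    {"id": 3, "label": "no verificable"},  # removed from the dataset
]
print([x["veracity"] for x in homogenize(sample)])  # ['False', 'Imprecise']
```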
Each project gave a different emphasis to the coverage of the topics, but COVID-19 was the most fact-checked topic across all three. To characterize the content, we used the articles from the fact-checking outlets to locate the original content on the social platforms where it spread. Accordingly, we retrieved full texts and structural information. In the case of Twitter, we retrieved full text and structural information considering retweets, replies, and comments. For Instagram and Facebook, we only have access to aggregated data provided by the CrowdTangle API5, allowing us to report the total number of interactions per verified content. CrowdTangle is an analytical platform owned by Meta that shares data on users' interactions (i.e., shares, views, comments, likes, and so forth) with public pages, groups, and verified profiles on Facebook and Instagram.
For Twitter, the data collection process was conducted as follows:
-Text data: We searched for related messages on Twitter6 for each verified original post. We collected replies derived from the original content, including replies to the original post or its retweets and the full list of interactions derived from them (replies of replies). We implemented this query mechanism using recursion, which allowed us to retrieve a large number of interactions for each post. Then, for each original post, we consolidated the set of messages that constitutes the conversational thread of each verified content.
-Structural data: For a given post, we structured its set of related replies using parent-child edges. The original post was represented as the root of the structure. Each message corresponds to a child of the root node.
Interactions between users are direct, so each reply connects to the tweet it mentions. For each message, we retrieved the author's identifier and the post's timestamp. The structure that emerges from this process is a propagation tree, with timestamped edges indicating reactions triggered by the original message.
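The recursive collection and the resulting parent-child structure can be sketched as follows. The `REPLIES` dictionary stands in for the reply lookup (hypothetical data); the real process issued recursive queries against the Twitter Academic API.

```python
# Hypothetical reply map: tweet id -> ids of its direct replies.
REPLIES = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "a1": ["a1x"],
    "b": [], "a1": ["a1x"], "a2": [], "a1x": [],
}

def collect_thread(post_id, edges=None):
    """Recursively gather the parent-child edges of the conversational
    thread rooted at `post_id` (replies, replies of replies, ...)."""
    if edges is None:
        edges = []
    for child in REPLIES.get(post_id, []):
        edges.append((post_id, child))
        collect_thread(child, edges)
    return edges

tree = collect_thread("root")
print(len(tree) + 1)  # nodes in the propagation tree (edges + root) -> 6
```

Each edge can then be annotated with the child's author identifier and timestamp to obtain the timestamped propagation tree described above.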

Content analysis in Twitter
We start the content analysis by working on the verified content on Twitter, which corresponds to a subset of the total content included in our dataset. Of the 341 units of content whose source is traceable to Twitter, 307 could be accessed from the Twitter Academic API at the time of data collection (January to March 2022). An essential aspect of the dataset is that the number of tweets associated with each content is quite high. The dataset comprises 397,253 tweets, with 94,469 replies (23.78%) and 302,784 retweets (76.22%). Table 4 shows the units of content verified on Twitter by topic.
We calculate a set of linguistic metrics for each content verified on Twitter, together with the messages that comment on it (conversational threads). We report each feature averaging over the conversational threads retrieved from Twitter. First, we computed the threads' length measured in characters and words. We also counted the number of special symbols/words used in each thread (e.g., emoticons, verbs, nouns, proper nouns, named persons, and named locations, among others). In addition, we computed three polarity-based scores commonly used in sentiment analysis: valence, arousal, and dominance. Finally, we computed the Shannon entropy of each sequence of words. For features that count symbol/word occurrences, we report the feature value relative to the average length of the thread, measured in number of words, so that values are comparable between subjects.
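As an illustration, the Shannon entropy of a word sequence and the length-relative feature values can be computed as follows. This is a minimal sketch; the actual pipeline used NLP tooling for tagging and sentiment scoring.

```python
import math
from collections import Counter

def shannon_entropy(words):
    """Shannon entropy (in bits) of a sequence of words."""
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def relative_count(count, n_words):
    """Report a raw occurrence count relative to the thread length in
    words, so values are comparable across threads of different sizes."""
    return count / n_words

thread = "the vaccine is safe the vaccine works".split()
print(round(shannon_entropy(thread), 3))  # 2.236
print(relative_count(2, len(thread)))     # e.g. 2 proper nouns in 7 words
```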
We report these metrics per veracity category in Table 5 and per topic dimension in Table 6. The results in Table 5 show that the units of content related to imprecise claims are longer than the rest. In addition, true content tends to be less verbose and more precise in terms of the use of proper nouns, adpositions, and locations.
The results in Table 6 show that the units of content associated with the social outbreak are significantly shorter than the rest. This finding correlates with crisis literacy [47], in which it is reported that conversational threads during crises are characterized by being shorter and triggered by urgency.
The table also shows some linguistic differences. The most notable (indicated in red) shows that the units of content verified during the '2021 Elections' contain, on average, more proper nouns and mention more persons than the rest of the topics. This result makes sense, since the units of content of political campaigns are expected to mention candidates, which increases the relative presence of these words in the threads.
We computed a set of readability metrics7. These metrics measure the linguistic complexity of each conversational thread. Linguistic complexity can be measured in different ways, such as the level of education necessary to handle the vocabulary that makes up the thread (e.g., GFOG, NDCRS, ARI, FKI, DCRS, and CLI) or the reading ease of a text (e.g., LWRI and FRE) [48]. Because these metrics are based on English lexical resources, we translated the conversational threads from Spanish to English using the Google Translate API before applying them. In addition, we computed two native Spanish readability metrics (IFSZ and FHRI)8. We computed the following readability metrics: • GFOG: Gunning's Fog Index outputs a grade level in 0-20. It estimates the education level required to understand the text. The ideal readability score is 7-8. A value ≥ 12 is too hard for most people to read.
• NDCRS: The new Dale-Chall readability formula outputs values in 0-20 (grade levels). It compares a text against several words considered familiar to fourth-graders. The more unfamiliar words are used, the higher the reading level. A value ≤ 4.9 means grade ≤ 4, while a value ≥ 10 means grades ≥ 16 (college graduate).
• ARI: Automated readability index that outputs values in 5-22 (ages related to grade levels). This index relies on a factor of characters per word. A value of 15-16 corresponds to a high school student level (tenth grade).
• FKI: Flesch-Kincaid index (grade levels) that outputs values in 0-18. The index assesses the approximate reading grade level of a text. A level 8 means the reader needs a grade level ≥ 8 to understand (13-14 years old).
• DCRS: Dale-Chall readability formula that outputs values in 0-20 (grade levels). It compares a text against 3000 familiar words for at least 80% of fifth-graders. The more unfamiliar words are used, the higher the reading level.
• CLI: Coleman-Liau index that outputs values in 0-20. The index assesses the approximate reading grade level of a text.
• LWRI: Linsear-Write Reading Ease Index that scores monosyllabic words and strong verbs. It outputs values in 0-100, and higher is easier.
• FRE: Flesch Reading Ease index. It gives a score between 1-100, with 100 being the highest readability score. Scoring between 70-80 is equivalent to school grade level 8. Higher is easier.
• FHRI: Fernández-Huerta readability index for Spanish texts [50] that outputs values in 1-100. Higher scores indicate easier texts.
We show the results of this analysis per veracity category in Table 7. The first six metrics in Table 7 correspond to grade levels. These metrics indicate that the texts range from basic to medium complexity. We found significant differences between content types when calculating GFOG, ARI, FKI, and CLI. The four metrics consistently indicate that false and imprecise content requires fewer grade levels than true content to be understood. This educational gap between true content and false/imprecise content varies depending on the metric used. For example, the difference between true and imprecise content according to the ARI metric is around 2 points, which in the US grade-level system is equivalent to the difference between an eleventh-grade student and a ninth-grade student. According to the FKI index (Flesch-Kincaid grade level), the difference between true and imprecise content is also around 2 points. On this scale, the gap moves from 10 to 12, distinguishing a text of basic complexity from one of average complexity. The difference in the other metrics is smaller. The last four indices are ease-of-reading metrics. For all these indices, imprecise and false content is easier to read than true content. Consistently, these metrics show a gap in readability, both when evaluating texts translated from Spanish to English (LWRI and FRE) and when using the metrics defined with lexical resources in Spanish (IFSZ and FHRI).
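As an example of how the grade-level metrics work, the following is a naive sketch of the ARI formula (characters per word and words per sentence); production readability libraries tokenize sentences and words far more carefully than this.

```python
def ari(text):
    """Automated Readability Index (simplified): 4.71 * (characters per
    word) + 0.5 * (words per sentence) - 21.43, with naive tokenization."""
    sentences = [s for s in text.split(".") if s.strip()]
    words = text.split()
    chars = sum(ch.isalnum() for ch in text)  # letters and digits only
    return (4.71 * (chars / len(words))
            + 0.5 * (len(words) / len(sentences))
            - 21.43)

# Short, simple words yield a lower grade level than long, complex ones.
print(ari("See the dog run.")
      < ari("Extraordinarily sophisticated terminology proliferates."))  # True
```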
We also show the readability scores according to the different topics covered in our study. These results are shown in Table 8, which shows that the verified units of content for COVID-19 are less complex than the rest of the topics. These differences are significant for the GFOG, ARI, FKI, and CLI indices, in which the readability gap between this topic and the rest is at least one point. Regarding the ease-of-reading metrics, the four indices show that the topics with the most complex texts are related to the 2021 elections and the constitutional process. On the other hand, the more straightforward texts are associated with COVID-19 and the Chilean social outbreak.

Propagation dynamics in Twitter
To study the propagation dynamics of this content on Twitter, we reconstructed the information cascades for each verified content considering replies, representing 20% of the total interactions triggered by content on Twitter. We release the structural data to favor reproducible research 9 . We used the format suggested by Ma et al. [51] to examine cascades, in order to create useful data for professionals and researchers in this field. We focus on three characteristics of the cascades: 1) Depth, 2) Size, and 3) Breadth.
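Under the parent-child edge representation described in Section 3, these three characteristics can be computed with a breadth-first traversal. A minimal sketch over a toy cascade:

```python
from collections import defaultdict

def cascade_stats(edges):
    """Depth, size, and breadth of a propagation tree given as
    (parent, child) edges, with the original post as root."""
    children = defaultdict(list)
    nodes, has_parent = set(), set()
    for p, c in edges:
        children[p].append(c)
        nodes.update((p, c))
        has_parent.add(c)
    root = (nodes - has_parent).pop()  # the only node without a parent
    # Breadth-first walk, recording the number of nodes at each level.
    levels = defaultdict(int)
    frontier, depth = [root], 0
    while frontier:
        levels[depth] = len(frontier)
        frontier = [c for n in frontier for c in children[n]]
        depth += 1
    return {"depth": depth - 1, "size": len(nodes),
            "breadth": max(levels.values())}

edges = [("root", "a"), ("root", "b"), ("a", "a1"), ("a", "a2")]
print(cascade_stats(edges))  # {'depth': 2, 'size': 5, 'breadth': 2}
```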
A first analysis calculated these characteristics over all the threads, aggregated according to veracity. The averages of these characteristics and their deviations are shown in Tables 9, 10, and 11. Table 9 shows that cascades of true content are shallower than the rest, while false content produces deep trees. False and imprecise propagation trees are quite similar. Table 10 shows that false content reaches more people than the rest. This finding corroborates prior results [5] showing that false content reaches more people than other kinds of content. Differences in tree size are significant between the groups. Regarding breadth, Table 11 shows that imprecise propagation trees are wider than the rest.
For a second analysis, this time of a dynamic nature, we calculated the information cascades' characteristics as a function of time. In this way, it was possible to assess whether there were differences in the propagation dynamics. Based on this analysis, we focused on the growth patterns of the cascades, considering the depth, size, and breadth of the propagation trees. We show these results in Figures 1, 2, and 3, respectively. Figure 1 shows that, for a fixed depth, the time required to reach that depth depends on the type of content. While imprecise content takes an average of 2 hours to reach that depth, true content takes 15 hours. This finding indicates that the contents' speed (depth axis) is conditioned on veracity. Furthermore, the figure shows that imprecise content acquires greater depth in less time than false or true content. Figure 2 shows that for a fixed size (the figure marks what happens for the first 3000 engaged users), the time required to reach that size depends on veracity. While false content takes 2 hours to reach that size, true content needs 6 hours. This finding indicates that the content's growth (users involved in the thread) depends on veracity. Furthermore, the figure shows that true content infects fewer users than the rest for a fixed observation window. For example, two hours after the original post (see the slice marked as fast), false content is close to 3,000 engaged users on average, while true content is under a thousand. In this dynamic, imprecise content lies between true and false content. Finally, the figure shows some true content with many engaged users and a longer permanence than the rest. These viral units of content are few in quantity but significantly impact the network. Our dataset indicates that these units of content correspond to the beginning of the health alert for COVID-19 in Chile, which generated long-lasting threads on the network with many engaged users.
In general, this differs from false and imprecise content, which show explosive growth on the network but are more volatile than true content. Figure 3 shows that the dynamics of breadth for false and imprecise content are very similar. On the other hand, cascades of true content grow slower in breadth than the rest. Due to the virality of the true content related to COVID-19, the curve of true content grows more than the rest, but at a much slower speed than false and imprecise content.
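The time-to-threshold comparison underlying Figures 1 and 2 can be sketched as follows, assuming each cascade is reduced to the hours-since-root timestamps at which users joined. The numbers below are illustrative, not values from our dataset.

```python
def hours_to_size(timestamps, target_size):
    """Time (hours since the root post) at which a cascade first reached
    `target_size` engaged users; the root itself counts as size 1 at t=0.
    Returns None if the cascade never reached that size."""
    if target_size <= 1:
        return 0.0
    joined = sorted(timestamps)
    if 1 + len(joined) < target_size:
        return None
    return joined[target_size - 2]  # (target_size - 1)-th joiner after root

# Illustrative cascades: a false one growing quickly, a true one slowly.
false_cascade = [0.1, 0.3, 0.5, 0.8]
true_cascade = [1.0, 3.0, 6.0, 15.0]
print(hours_to_size(false_cascade, 3))  # 0.3
print(hours_to_size(true_cascade, 3))   # 3.0
```

Averaging these thresholds per veracity category yields the kind of speed comparison reported above (e.g., hours needed to reach 3000 engaged users).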

Reactions in Facebook and Instagram
We accessed volumes of reactions from verified content on Facebook and Instagram using the CrowdTangle platform. The platform provides indicators related to volumes, but without access to the text of the messages or structural information. For this reason, the analysis of these two networks differs from the one we carried out on Twitter.
Of the 73 units of content whose source is traceable to Facebook and the 131 traceable to Instagram, 38 and 75, respectively, could be accessed from Crowdtangle at the time of data collection (January–March 2022). As Tables 12 and 13 show, Facebook has more false than true content, while Instagram has more true than false content. The content formats verified on each network are shown in Tables 14 and 15. Table 14 shows that most of the verified content on Facebook corresponds to photos, while only some content corresponds to external links or videos. Table 15 shows that on Instagram most of the content is also photos; however, the volume of verified content corresponding to albums or videos is more relevant on Instagram than on Facebook.
The indicators provided by Crowdtangle for verified Facebook pages and groups show volumes of likes, comments, shares, and emotional reactions (e.g., love, wow, among others). We show the total amounts of reactions per veracity type for Facebook in Table 16. Table 16 shows that false content attracts more likes than other content. The percentages indicate the volume of likes over the total number of reactions: in false content, likes account for 37.4% of reactions, while in true content they reach 20.5%. For true content, we record more shares than for false and imprecise content. Table 16 also shows that the total number of reactions for false content is much higher than for the rest. The last row shows the average number of reactions per verified unit of content: close to 20,000 for false content, but only 5,401 for true content. It is noteworthy that imprecise content produces very few reactions on Facebook.
Table 17 shows the reactions to content on Instagram. Crowdtangle provides four reactions: likes, comments, views (only for videos), and shares. Table 17 shows that true content produces more reactions on this network than other types of content.
These reactions are split almost evenly among likes, comments, and shares (roughly one-third each). This finding draws attention since, for false content, many reactions correspond to views, while for imprecise content most reactions correspond to likes. Notably, the proportion of comments on Instagram is very low. The last row shows the volume of reactions in proportion to the number of verified units of content: on average, true content generates almost twice as many reactions as false and imprecise content.
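The per-veracity reaction shares reported above can be derived from the raw Crowdtangle counts with a simple proportion. The sketch below is a minimal illustration; the counts are invented for the example (chosen only so the false-content likes share lands at 37.4%, as in Table 16) and do not reproduce the study's tables.

```python
# Hypothetical reaction counts per veracity label; numbers are illustrative
# only, loosely in the spirit of the Crowdtangle aggregates discussed above.
reactions = {
    "false":     {"likes": 7480, "comments": 5200, "shares": 4100, "other": 3220},
    "true":      {"likes": 1110, "comments": 1500, "shares": 2000, "other": 791},
    "imprecise": {"likes": 300,  "comments": 150,  "shares": 100,  "other": 50},
}

def reaction_shares(counts):
    """Share of each reaction type over the total, as percentages."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

for label, counts in reactions.items():
    print(label, reaction_shares(counts), "total:", sum(counts.values()))
```

Dividing each total by the number of verified units of content per class gives the averages reported in the last rows of Tables 16 and 17.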

Discussion
This study highlights the characteristics and diffusion process of misinformation in Chile as measured on Twitter, Facebook, and Instagram. The analyses carried out on Twitter consider the content of the messages and the propagation dynamics; on Instagram and Facebook, we had access to volumes of reactions. Several findings emerge from these analyses, which we discuss next.
- On Twitter, true content is less verbose than false or imprecise content (Section 3). This finding corroborates the results of Horne and Adali [32], Pérez et al. [33], and Rashkin et al. [34], which show a more significant presence of verbs and their variants in news and comments related to false content. We connect this finding to the study of Mitra et al. [35], which shows that discussions around false content generate skepticism and include satire, jokes, and grins, producing more verbs and verbal expressions than factual content. We also relate this finding to the study of Steinert [17], which connects false content and emotional contagion, a mood that produces verbose texts. This connection has been explored in several studies illustrating linguistic properties that link the encoding of emotions to verbosity [52,53]. However, we found that some conversations triggered by true content are long, with the longest permanence on Twitter according to our study. This result differs from what was reported by Kumar et al. [36], who showed that true content tended to be shorter. One reason for this discrepancy is that the analyzed content spread during COVID-19, which has its own dynamic not previously observed in similar studies.
- On Twitter, false and imprecise content shows lower reading comprehension barriers than true content (Section 3). We computed various ease-of-reading and grade-level measures, which consistently show lower reading comprehension barriers for false or imprecise content.
This finding corroborates the study by Horne and Adali [32], who measured ease-of-reading indices to distinguish between true and false content. In addition, we show that the access barriers for imprecise content are similar to those for false content. We also connect this finding to the concept of illusory truth [29], which refers to a false statement that is repeated so frequently that it becomes widely accepted as true. This phenomenon is often seen in disinformation campaigns, where false information is deliberately spread to manipulate public opinion. False statements are often written simply so that they are easily remembered and repeated. This can lower reading comprehension standards, as audiences become accustomed to simplistic texts and reductionist arguments.
- On Twitter, imprecise content travels faster than true content (Section 4). Our study shows that in size, depth, and breadth, true content spreads more slowly than the rest. This finding coincides with the study by Vosoughi et al. [5], who had already shown this pattern in 2018, and corroborates that this dynamic is maintained on Twitter. It can be linked to the connection between emotional contagion and false information [40]. When false information is shared, it often triggers emotional responses and prompts people to take a stance on the issue. This emotional language then triggers interactions between users. Recent studies suggest that these interactions can also lead to higher polarization [54], as the emotional nature of the content may reinforce preexisting beliefs and opinions.
- On Twitter, false content is reproduced faster than true content (Section 4). Our study also shows that the information cascades of false and inaccurate content grow faster than those of true content. This finding corroborates the results of Friggeri et al. [42], Zeng et al. [43], and Vosoughi et al.
[5], showing that this dynamic is maintained on Twitter. This finding also connects to other studies. For example, King [55] notes that pre-print servers helped to rapidly publish important information during the COVID-19 pandemic, but with a risk of spreading false information or fake news. Zhao et al. [56] found that fake news spreads differently from real news even at early stages of propagation on social media platforms like Weibo and Twitter, showing the importance of bots and coordinated actions in disinformation campaigns.
- On Facebook, false content concentrates more reactions than the rest (Section 5). We connect this finding to two studies. First, Zollo and Quattrociocchi [57] found that Facebook users tend to join polarized communities sharing a common narrative, acquire information confirming their beliefs even if it contains false claims, and ignore dissenting information. We believe that the volumes of reactions to false content we observe can be triggered by these highly polarized communities. Second, Barfar [58] found that political disinformation on Facebook received significantly fewer analytic responses from followers, and that responses to it were filled with greater anger and incivility. The study also found similar levels of cognitive thinking in responses to extreme conservative and extreme liberal disinformation. These findings suggest that false content on Facebook can trigger emotional and ideological responses, producing more interactions between users than factual information.
- On Instagram, true content concentrates more reactions than the rest (Section 5). To the best of our knowledge, this is the first result for Instagram that correlates volumes of reactions with the content's veracity. However, there are studies that offer insights into Instagram users' engagement with content and how brands can use the platform to target their audience. Argyris et al.
[59] proposed that establishing visual consistency between influencers and followers can create strong connections and increase brand engagement. This suggests that user images on Instagram, as well as text, contribute to engagement, producing higher verification barriers. Ric and Benazic [60] found that interactivity on Instagram only influences responses when it is driven by an individual's motivation to use the application, whether for hedonistic or utilitarian reasons. We believe that, since these are the primary motivations on Instagram, informative content receives less attention than on other networks and, therefore, untrue content is less widely shared.
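The ease-of-reading indices discussed in the Twitter findings can be illustrated with the classic Flesch Reading Ease formula, which combines average sentence length and average syllables per word. The sketch below is a simplified, self-contained version (the syllable counter is a rough heuristic, and the two sample sentences are invented for the example); it is not the study's measurement code.

```python
import re

def count_syllables(word):
    """Rough English syllable count: vowel groups, minus a silent final 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean lower comprehension barriers."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / len(sentences)  # average sentence length
    asw = syllables / len(words)       # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

simple = "The cat sat on the mat. It was warm."
dense = "Epidemiological misinformation proliferates disproportionately among communities."
print(flesch_reading_ease(simple), flesch_reading_ease(dense))
```

Under this index, simplistic phrasing scores higher (easier) than dense academic prose, which is the direction of the gap we observe between false or imprecise content and true content.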
Study limitations. This study has some limitations. Firstly, there was a discrepancy in the availability of data across the three platforms examined. While Twitter provided access to the messages of users who shared false content, Instagram and Facebook only allowed access to statistics on the reactions to the content. Consequently, this study provides a more comprehensive analysis of Twitter and a less detailed analysis of Facebook and Instagram. Due to this disparity, we could not track accounts on Instagram and Facebook, because Crowdtangle only provides access to aggregated data while restricting access to individual profiles and other user-specific information. This limitation prevents the analysis of cross-platform spreading and other interesting aspects of the phenomenon. Secondly, the study was hindered by the fact that fact-checking journalism in Chile is still a budding industry, with little verified content and few verification agencies available. It is crucial to promote initiatives that combat information disorder, as the findings of this study indicate. Lastly, this study only investigated the reactions to verified content on social media, without exploring the source of the content or its correlation with traditional media outlets. An investigation that examines the impact of mainstream media narratives on radio and television, and their interplay with social networks, would complement the focus of this study and provide further insights into the causes and effects of information disorder.

Conclusions
By studying information disorders on social media, this study provides an updated vision of the phenomenon in Chile.
Using data collected and verified by professional fact-checkers from October 2019 to October 2021, we find that imprecise content travels faster than true content on Twitter. We also show that false content reaches more users than true content and generates more interactions on both Twitter and Facebook. Instagram is less affected by this phenomenon, producing more reactions to true content than to false content. In addition, this study shows that access barriers, measured by ease of reading or grade level, are lower for false content than for the rest of the content. We also show that true content is less verbose and, at the same time, generates conversation threads that are shorter, less deep, and less wide than those of false or imprecise content. These findings suggest that the dynamics of the spread of misinformation vary across platforms and confirm that different language characteristics are associated with false, imprecise, and true content. Thus, future efforts to combat misinformation, and methods for its automatic detection, need to take these unique attributes into account. This poses a challenge for media literacy initiatives. The results also invite us to be aware of the context in which misinformation is produced: comparing a social outbreak, the COVID-19 pandemic, elections, and the drafting of a new Constitution, each event shows specific features that frame the false information.