Disinformation and Fact-Checking in the Face of Natural Disasters: A Case Study on Turkey–Syria Earthquakes

Abstract: Natural disasters linked to contexts of unpredictability and surprise generate a climate of uncertainty in the population, resulting in an exponential increase in disinformation. These are crisis situations that cause the management of public and governmental institutions to be questioned, diminish citizens’ trust in the media


Introduction
Information is always essential, but even more so in times of uncertainty such as disasters or natural catastrophes. Faced with an exceptional event, people need to obtain information to make decisions [1] (p. 144). The role of the media in cases of catastrophes and humanitarian crises is to fulfill the social function of informing while respecting the protagonists and searching for and contextualizing contrasting sources [2].
The Turkey–Syria earthquake occurred on 6 February 2023 with a magnitude of 7.8 on the Richter scale and more than 1000 aftershocks. It was the deadliest and strongest earthquake in Turkey since the 1939 Erzincan earthquake, and the deadliest in Syria since the 1822 Aleppo earthquake. At least 13.5 million people were affected in Turkey. The Reuters agency estimates that more than 50,000 people died, 125,000 were injured, and more than 300,000 buildings were damaged, some of which are considered World Heritage Sites [3]. In Syria, the catastrophe affected 8.8 million people and resulted in 6000 deaths and 12,000 injuries [4].
The Turkish government of Tayyip Erdoğan declared a level-four alert to request international aid and mounted a huge human rescue deployment, but its management was not exempt from criticism. The opposition questioned the collapse of newly constructed buildings, the slow reaction to the disaster, the late deployment of military resources, and the inadequate aid during the first days. For their part, social network users complained about the government's decision to block access to Twitter for 12 hours, from the afternoon of 8 February to the early morning of 9 February, to combat disinformation, while the population was urgently trying to find relatives and waiting for help or for the location of those trapped in the rubble.
Risk communication is concerned with the study of "the actors, processes, messages and effects of communication in natural and technological disasters and humanitarian, health, food crises, etc. in the contexts of crises, disasters and emergencies, regardless of the nature of their cause" [5]. Given the complexity of an information scenario in which the underlying intentions of messages are not clear, it is possible to refer in general to disinformation, although several types of information disorders can be distinguished [6] (p. 9): mis-information, false information shared without intent to harm; dis-information, false information shared with intent to harm; and mal-information, truthful private information shared to cause harm [7][8][9].
People's vulnerability to misinformation in risk and disaster contexts brings with it the need for collective fact-checking [10].The journalistic profession has adopted falsehood detection among its work routines and, in journalism, fact-checking is a new business model [11] that makes more sense during major disasters due to the enormous interest it arouses among audiences.In an analysis of the Twitter account of Emergencias 112 Comunidad de Madrid, the publications with more impact were referencing international news related to the Turkey and Syria earthquake events ahead of local emergencies [12].
Our starting point in this research is the management of the crisis by the official channels of the Spanish and Turkish governments. Our premise was that journalists working in institutional communication do not always verify information in order to reassure citizens, and that verification agencies have become a necessary complement to the routine of information production in the face of natural disasters. Accordingly, the objective of this study is to explore the response mechanisms that news agencies and fact-checking agencies deployed against false or erroneous information circulating around the earthquakes in Turkey and Syria that took place from 6 to 20 February 2023. We analyzed the typology, category, and disinformation strategies of the fake news that circulated around this natural disaster, mainly through social networks and online messaging platforms. As a complement, we quantified the digital impact that these verifications achieved on X (formerly Twitter), the main online tool used by journalists to verify fake news and improve the quality of the information circulating on virtual networks.
According to our main objectives, the following research questions are posed: (RQ1) What is the degree of commitment of news agencies to the verification and rectification of fake news circulating on the Internet? (RQ2) What is the value of verification agencies in achieving higher-quality information circulating on virtual networks? (RQ3) What typology of fake news is created and circulated following a major natural disaster? (RQ4) Are the efforts made to counteract disinformation sufficient, or is the power of propagation greater than that of verification?

Social Networks and Misinformation in the Face of Natural Disasters
The fact that the scenes of events are located at great distances from media editorial offices leads to a certain relaxation of ethical and deontological criteria [13], and journalistic work is often criticized [14][15][16] for producing distorted, tendentious, and biased information [17][18][19][20][21][22][23], with an abundance of polemical, dramatic, scandalous, and sensationalist ingredients that cause real or potential fear among citizens. At the same time, there is a shortage of scientific information broadcast on the causes and consequences of disasters, which leads to negative reactions and myths [24] (p. 44).
The journalistic coverage of disasters has been extensively studied, as has the predominance of spectacular over-information and the lack of rigorous reporting [25]. It is necessary to dispense with lurid data, to act with empathy, and to provide reliable and contrasted data [26] so that the consumption of informative sensation does not prevail over the interest in knowing [27] (p. 580). This is perceptible in news cycles on catastrophes, which repeat cyclically with a common trend: information saturation causes audience disinterest and, consequently, the media stop offering content even though the problem continues [28]. Likewise, media discourse conditions the involvement of audiences in catastrophes according to the level of information received and their own experience [26] (p. 17).
The trust attributed to an information source in a crisis is a key issue [29]. People mobilize in an emergency for two reasons: trust in the source transmitting the message, and having experienced a similar dangerous situation [30] (p. 13). Accordingly, self-protection information must be reliable and up to date [5]. Sometimes, the sources perceived as most reliable are not the ones that provide the most up-to-date and accurate data [31] (p. 469), but the amount of information is directly proportional to the credibility of a source [32]. Ultimately, trust in a source depends on the perception of expert knowledge, transparency and honesty, and concern and care [32] (p. 43).
Journalists must not only transmit information; they are also spokespersons for institutions during alerts and natural disasters [33] (p. 289). They have traditionally accessed information sources considered reliable, with control and validation, to respond to the needs of an audience [34] (p. 1347). In the case of disasters, journalists share the stage with the organizations involved in the response to the crisis (police, municipalities, ministries, autonomous communities, firefighters, the UME, civil protection, etc.) [24] (p. 44) and are confused by the urgency and haste of the moment [35] (p. 174), which increases misinformation. In this sense, there is an excessive dependence on official sources in risk communication [36] (p. 257). Moreover, the coexistence of networks of official and unofficial sources allows the dissemination of erroneous information, and it is not always easy for information professionals to judge the relevance and credibility of the information produced and to take appropriate protective measures [10].
The monopoly of information was fractured with the eruption of the Internet [37] and, especially, with social networks, which have become a basic source of information for citizens and cooperation (alerts, requests for help, rescues, location of missing persons, dangers, available shelters, prevention, promotion of donations, solidarity channels, etc.) [38][39][40].Furthermore, social networks function as emotional support and a primary element of protection [41] in reinforcing traditional channels of communication and encouraging public feedback.They prove to be a powerful tool for building trust among the population and play a cardinal role in the crisis communication [1] (p. 143) of governments and institutions, for example, as occurred during Hurricane Katrina (2005) [42], the Southern California wildfires (2007) [43], and Hurricane Sandy (2012) [44].
Various applications for online social network content analysis, developed especially for crisis management across the three phases proposed by Coombs [45] (p. 10) (pre-crisis, crisis, and post-crisis) and offering different types of analysis (tracking, event detection, and sentiment and content analysis) [40] (p. 193), allow constant monitoring to track the evolution of a disaster, as well as the effectiveness of the response as perceived by the public [46] (p. 143).
Technologies had been used previously during various events (for example, the Kobe earthquake in Japan (1995), the tsunami in the Indian Ocean (2004), and Hurricane Katrina (2005)), but social networks began to be deployed in 2010, around the time of the earthquake in Haiti, when Twitter became one of the networks most used by victims and a source of information. Due to their characteristics (immediacy, horizontality, and simplicity), social networks enable the instant communication of risk situations [47]. The specific case of Twitter in the management of social emergencies (humanitarian crises, disasters, earthquakes, cyclones, forest fires) has been studied prolifically [48]. Twitter has proven to be of great help in the face of natural disasters [40,49] such as the Japan earthquake (2011), Typhoon Bopha in the Philippines (2012), Hurricane Sandy (2012), Typhoon Haiyan in the Philippines (2013), the Nepal earthquake (2015), the Ecuador earthquake (2016), and the Mexico earthquake (2017).
Although in food crises, social networks tend to amplify panic and hysteria in the public due to their reticular nature [50] (p.88), they also help strengthen citizens' trust in messages coming from official local sources because of the perception of shared responsibility [51].The integration of social networks allows for multidirectional information dissemination from official and unofficial sources, which increases the resilience and responsiveness of all those involved [52].
For their part, journalists use social networks in the face of catastrophes especially to verify data and obtain contacts [53], search for material, and document themselves [54]. Media professionals often find it difficult to expand their number of correspondents or special envoys to obtain direct information, and in the coverage of breaking news such as catastrophes, speed is essential to verify information. The first testimonies, accounts, videos, and images of hurricanes, terrorist attacks, accidents, etc., are often disseminated by eyewitnesses via cell phones [55]. It should be taken into account that citizens in general react faster if the news comes from their inner circle of trust and is directly related to messages obtained from television or radio [56] (p. 35), and that people between 18 and 34 years old use social networks as their first source of information.
However, information saturation and the nature of social networks lead to disinformation, supported by the use of bots that develop information bubbles [57]. Social networks' deep knowledge of their audiences allows the design of tailored disinformation content, which enhances its impact and propagation and contributes to the creation of an intoxicated information ecosystem [6] (p. 9).
In the 2011 Japan earthquake, publications were found to have become outdated or inaccurate [58], and a study concluded that Twitter was unreliable as a source of information in the case of the Ebola crisis [59]. Likewise, in the April 2016 earthquake in Ecuador, information chaos added further problems: information collapsed, and citizens did not receive news from official sources or media outlets for a long time. The avalanche of improvised versions of events contributed to the general panic. It was the international media that highlighted the information errors of the Ecuadorian media [60] (p. 1000).
But, fundamentally, the phenomenon of disinformation begins in the context of the rise of populism and nationalism and the discrediting of the media [61] from 2016 onwards. Outbreaks of fake news, or false and distorted news [62,63] (images, memes, parodies, etc.) [64], arise from political, economic, or strategic interests, with the effect of manipulating public opinion. The limits of verisimilitude are blurred, and the messages are sensationalist and more emotional [9] than rational, following the idea that, since "emotions and feelings are real, it's concluded that the objectives are also real and, therefore, shared emotions are important. That is, emotion and feelings are equated with truth and legality" [9].
In this sense, algorithms are the instruments of deeply human psychological drives, and social networks act as a sounding board for the psychological motivations that underlie catastrophes: (1) relational motivations (social identity, solidarity, shared reality); (2) epistemic motivations (reduction in uncertainty, complexity, or ambiguity, and seeking cognitive support in certainties, structure, and order); and (3) existential motivations (management of adverse circumstances based on security, self-esteem, and the search for meaning in one's own life) [57]. In short, in the face of a rumor, the feeling of surprise, indignation, or emotional contagion is more powerful than the factuality of an event [65]. All this has led journalists to distrust social networks as a source of information due to the fragmentation and atomization of the informative narrative [66], and has resulted in the need to study the impact of social networks on risk communication [67,68].

Verifiers and Fact-Checking as a Professional Tool
Social networks fulfill their objective of communicating, but information needs to be validated in the face of worrying levels of dissemination of false news and the loss of credibility of the media.
Verification of information is one of the basic characteristics of journalistic production and an indispensable task in the exercise of good journalism [69]. Fact-checking carried out in media editorial offices before publishing a news item differs from the verification of information that has already been published [70]. Fact-checking journalism, as a professional practice, arises to deal with fake news by checking, a posteriori, the information published by the media [71]. The fact-checker is the journalist responsible for checking all the data in the published information. Their job is to resort to primary, quality sources that can confirm or deny the claims made to the public so that the public can form its opinion with a critical sense [72].
Fact-checking has experienced substantial growth since the first initiative, Snopes.com, launched in 1995. The number of fact-checking outlets has skyrocketed over the years, from only 11 sites in 2008 to 424 in 2022; however, although misinformation continues to grow, fact-checking has levelled off in the last five years [73]. Throughout this time, verification processes have become a journalistic bet [74], with specific tools [75,76], multiple platforms (FactCheck.org, Politifact.com, First Draft, Crosscheck, Verificado2018, Maldito Bulo, Maldita Hemeroteca, Miniver, Newtral, etc.) [77,78], and specialized agencies to combat disinformation [11,78].
Digital verification and fact-checking platforms employ a methodology based on data and investigative journalism in order to dismantle hoaxes and expose the contradictions of public discourse [77]. To this end, they focus on searching media and social networks for statements and for the original facts, contrasting sources in order to present evidence along with the correct information [71].
The rapid spread of fact-checking is explained by its links to the fundamental values of professional journalism, although the quality of the process is sometimes questioned [79] (p. 3). In some cases, users themselves have corrected the data [53]. This is "open verification", a public, collaborative, real-time corroboration process that emerged in 2011 during the Arab Spring. It was not without controversy: a journalist would share unverified information in a public forum to contrast it with social network users, at the risk of the data going viral or being manipulated before being verified [6] (p. 8). In any case, correcting misinformation through fact-checking is not always safe and sometimes causes the opposite effect, amplifying misinformation and enhancing lies [80]. Corrections are more effective when they come from friends and acquaintances [81] (p. 14).
The digital verification of visual content and sources requires communication professionals to have a specific set of skills [82]. The phases of manual fact-checking include extracting statements, constructing suitable questions, obtaining answers from relevant sources, and agreeing on a verdict [83]. The verification of information on the web encompasses four tasks: (1) monitoring media and capturing content; (2) detecting claims; (3) verifying claims; and (4) publishing content [84].
The resources, rules, and routines of media newsrooms, as well as the working methods of journalists, define the techniques for verifying content, without forgetting the so-called journalistic sense of smell that makes it possible to assess the content and reliability of sources. In the UNESCO handbook on journalism and disinformation, Trewinnard and Bell [70] identify the most common types of false or misleading visual content: (1) wrong time/wrong place (old images from previous events shared as new images of a recent event); (2) manipulated content (content that has been digitally manipulated using photo or video editing software); and (3) staged content (original content created or shared with the intent to mislead). The dramatic or irritating factor in an image or news item is directly proportional to the risk of it being a hoax [6] (p. 8).
The widespread dissemination of false content brings with it the need to control misinformation in the early days of any crisis to prevent its multiplication [85]. The huge amount of data available in real time allows many rescue agencies to monitor it regularly to identify disasters, reduce risks, and save lives, but it is impossible for humans to manually verify such a large amount of data and identify disasters in real time [86]. Technological solutions to avoid information clutter and audience manipulation involve collaboration between humans and machines [77,87,88].
The main research in this area focuses on the use of machine learning on word representations to identify the sentiment of a text [86] and on text classification with NLP (Natural Language Processing) for tweets related to natural disasters [89]. Some software, such as AI-Social Disaster, supports decision making by identifying and analyzing natural disasters such as earthquakes, floods, and forest fires using social network sources, from which data for sentiment analysis can also be obtained [90].
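As a purely illustrative sketch of the kind of text classification cited above (this is not the method of [86,89,90]; the example tweets and labels below are invented for demonstration), a minimal bag-of-words Naive Bayes classifier can separate disaster-related tweets from others:

```python
# Minimal bag-of-words Naive Bayes tweet classifier (illustrative only).
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label). Returns per-label word counts and label counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        # Log prior plus log likelihood with add-one (Laplace) smoothing.
        score = math.log(n_docs / total_docs)
        total_words = sum(word_counts[label].values())
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data, not from the studies cited in the text.
train_data = [
    ("earthquake collapsed buildings rescue teams", "disaster"),
    ("aftershock felt survivors trapped rubble", "disaster"),
    ("new phone released great camera", "other"),
    ("match tonight big game fans", "other"),
]
wc, lc = train(train_data)
label = classify("rescue teams search rubble after earthquake", wc, lc)  # → "disaster"
```

Real systems replace the toy bag-of-words features with learned word representations, but the pipeline (tokenize, train per-class statistics, score new messages) is the same.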

Materials and Methods
We applied a mixed methodology based on comparative content analysis [91]. The period studied spanned 59 days, from 6 February to 6 April 2023, covering the two earthquakes plus an additional month and a half, in order to capture the misinformation derived not only from the rescue but also from the reconstruction of both countries.
The sample was composed of messages posted on the social network X due to the impact that this social platform has in terms of the reproduction and viralization of its messages [72,92].
First, we turned to the verified accounts of the official bodies of Turkey and Spain: the Spanish Ministry of Foreign Affairs (@MAECgob), the Presidency of the Republic of Turkey (@trpresidency), and President Erdoğan himself (@RTErdogan). In the case of Syria, we could not find any verified accounts, nor could we obtain a response from the Spanish Embassy in Syria, which we contacted in this regard.
Secondly, we used the profiles of six news agencies: four that control world news (Reuters, Associated Press, Agence France-Presse, Deutsche Presse-Agentur), the main news agency in Spain, EFE, and Anadolu, the main press agency of the Turkish government, as well as the Spanish version of AFP.
Since all of these agencies have specific news verification profiles, we included those in our study. In the case of DPA, which does not have its own verifier, we opted to include the German-Austrian Digital Media Observatory, an association of fact-checking organizations and research teams of which DPA is a member, as well as the @correctif_fakt profile that it also uses for its denials (see Table 1). Since 2015, and in order to counteract the growing flow of misinformation circulating through the media, the International Fact-Checking Network (IFCN), a growing community of fact-checkers from around the world, has been operating to help ensure that truth and transparency prevail in information.
The IFCN consists of 74 organizations, of which we selected 20 (see Table 2), as follows: the 10 with the largest number of followers on X; 5 Syrian and Turkish fact-checkers belonging to the international network (Teyit, Doğruluk Payı, Doğrula, Malumatfuruş, and Verify Syria); 2 news verification agencies, Newtral and Maldita Hemeroteca, which are reference projects in the Spanish context and members of the European Fact-Checking Standards Network (EFCSN); and, finally, Snopes, the first site dedicated to fact-checking [93,94]; FactCheck.org, the first digital outlet with professional journalists dedicated to political fact-checking in the USA; and PolitiFact, specialized in combating disinformation since 2007. The general sample of tweets (n = 46,747) corresponds to the total number of messages published by the selected accounts in relation to the earthquakes in Turkey and Syria during the aforementioned period (6 February to 6 April 2023), from which all the tweets verifying false news, and those certifying the authenticity of news broadcast, were extracted for subsequent analysis, forming the specific sample (n = 564).
Based on these variables, and to answer RQ1, RQ2, and RQ3, a table was drawn up in which the numbers of comments, retweets, likes, and reproductions of each published tweet were recorded at a quantitative level. To calculate the viralization and influence capacity of the tweets, we opted for the formula applied by Carrasco-Polaino, Villar-Cirujano, and Tejedor-Fuentes [95], which assigns retweets double the value of likes, since retweets increase the dissemination capacity of messages by displaying them on the timeline of the person who retweeted them.
In order to measure the capacity to generate debate, a second formula was established that divides the number of responses by the number of tweets, while the average number of reproductions gives us an idea of the reach obtained by each message.
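Under one plausible reading of these formulas (the exact weighting in [95] may differ, and the example counts below are invented, not figures from this study), the three indicators can be sketched as:

```python
# Sketch of the three metrics described above; all input counts are invented.

def impact(retweets: int, likes: int) -> int:
    # Viralization/influence: retweets count double relative to likes,
    # since a retweet redisplays the message on another user's timeline.
    return 2 * retweets + likes

def debate_index(replies: int, tweets: int) -> float:
    # Capacity to generate debate: average replies per tweet issued.
    return replies / tweets

def average_reach(reproductions: int, tweets: int) -> float:
    # Mean number of reproductions (views) per message.
    return reproductions / tweets

print(impact(74, 234))             # 382
print(debate_index(40, 20))        # 2.0
print(average_reach(274882, 100))  # 2748.82
```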
At a qualitative level, and in order to respond to RQ3, we wanted to know which typologies of informative disorders had been most widely disseminated in relation to the earthquake in Turkey. For this purpose, we resorted to the classification established by Wardle and Derakhshan [96] (p. 46) and collated by Sánchez Duarte and Magallón-Rosa [97] (see Table 3), who established the five-fold typology reproduced in the following table, to which we added three additional categories: authentic news, satire/parody, and manufactured content.

Typology of Informative Disorders
Typology of informative disorders [96,97]:
Misleading content (1): Misleading use of information to frame an issue.
Satire/parody (2): Content with no intention to cause harm but with the potential to fool.
Manufactured content (3): News content that is 100% fake, designed to deceive and harm.
False connection (4): Headlines, visuals, or captions that do not support or relate to the content.
False context (5): Authentic content shared with false contextual information.
Manipulated content (6): Authentic information or images manipulated to deceive.
Real news (7): Information that is verified and found to be true.

Disinformation strategies [98]:
False attribution (1): Relating images from other contexts, places, and/or times to current events.
Exaggeration of facts (2): Information that is not false but is exaggerated to reinforce an argument.
Image manipulation (3): Photographs to which non-existent elements are added to reinforce a message.
Invention of facts (4): False and manufactured content using guerrilla marketing 2.0 tactics, such as automated bots and impersonation.
Counterfeit (5): A specific subcategory of the previous one, consisting of creating fake pages or profiles on social networks that mimic the image of corporate brands or real people.

Disinformation categories:
False attribution (1): Relating images from other contexts, places, and/or times to current events.
Conspiracy theories (2): Attributing the event to an act of conspiracy.
Stereotyped story (3): A story that has been reused in a recurrent way.
Sensationalist-emotional (4): News that appeals to feelings and emotions.
Image manipulation (5): Images that have been modified to support the information provided.
Damage to reputation (6): News created with the clear objective of undermining the credibility of a person, institution, or company.
Fraud (7): Information created with the aim of deceiving and stealing the recipient's private data, access credentials, or bank details.
Other (8): Disinformation that does not fall into any of the above categories.
In order to identify the disinformation strategy present in such information, we followed the categorizations of Aparici, García Marín, and Rincón Manzano [98]: false attribution, exaggeration of facts, manipulation of images, invention of facts, and impersonation.
Regarding the category of disinformation, we established eight assumptions: false tracing, conspiracy-mongering, stereotyped stories, sensational-emotional, image manipulation, reputational damage, fraud, and others.
Other variables analyzed included the type of bias present in the verification, used to check whether the information was ultimately true (1), false (2), or neutral (3), the latter in case there was not enough evidence to declare it false, as well as the supporting elements that prevailed in the tweets issued, i.e., photograph (1), video (2), text/link (3), GIF (4), or audio (5).
As can be seen, all the above variables have a numerical coding that allowed us to quantify them. For data processing, we used IBM SPSS Statistics version 29, whose descriptive statistics allowed us to show the central tendency of the results [99] as well as to elaborate the contingency and frequency tables. Intercoder reliability was calculated using Scott's pi formula, obtaining an agreement level of 0.98.
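For reference, Scott's pi for two coders over the same items can be computed as in the following sketch, where pi = (Po − Pe)/(1 − Pe), Po is the observed agreement, and Pe is the expected agreement based on the pooled category proportions of both coders (the codings shown are invented, not this study's data):

```python
# Minimal sketch of Scott's pi for two coders; example codings are invented.
from collections import Counter

def scotts_pi(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of items coded identically by both coders.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: sum of squared pooled category proportions.
    pooled = Counter(coder_a) + Counter(coder_b)
    pe = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (po - pe) / (1 - pe)

a = [1, 2, 1, 3, 2, 1, 1, 2]
b = [1, 2, 1, 3, 1, 1, 1, 2]
print(round(scotts_pi(a, b), 3))  # 0.781
```

Unlike simple percent agreement, this formula discounts the agreement that would occur by chance given how often each category is used overall.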

Results
Once the sample was analyzed, we found that the official profiles of the Spanish Ministry of Foreign Affairs, of President Erdoğan, and of the government of the Republic of Turkey, despite their high volume of tweets about the earthquakes, with percentages exceeding 35% at the end of March and April and reaching 98% in February, did not issue a single message disproving any of the false news that circulated through social networks and messaging applications.
The same circumstance can be seen in the profiles of the main news agencies. During the selected period, the agencies published a total of 15,575 tweets, of which 13.75% (2142) corresponded to information related to the earthquakes in Turkey and Syria, and none related to the denial of false information. At the news level, the Turkish agency Anadolu stands out above all, with up to 1847 tweets on the earthquakes, representing 57.3% of its information (see Table 4). It is followed by DPA (3.3%) and AFP (3.2%), with minimal news volumes, especially considering that the earthquake in Turkey was the world's deadliest since the 2010 Haiti earthquake, causing structural damage not only in Turkey and Syria but also in Lebanon, Israel, Cyprus, and the Turkish Black Sea coast. Where these agencies did verify information, on a total of 1477 tweets, the percentage of total denials is close to 8.8%. Germany's DPA and AFP, through its two profiles, verified the most erroneous information, with 7.2% and 6.4%, respectively.
In terms of verifiers, the Turkish and Syrian ones stand out the most, especially Malumatfuruş (31.8%) and Teyit (21.4%). From outside the territory, we find, at a great distance, the Spanish Newtral, with 4% of tweets aimed at disproving false information, as well as India Today and Liputan6, each with 1.9%.
As for the bias of the information verified, 91.4% was false, 8.1% was verified as true, and 0.5% was neutral, i.e., it contained true elements but could not be 100% verified. By date, the bulk of the information and denials was concentrated in February (87%), with a residual 12% in March.
The hoaxes circulated mostly through X (57%) and Facebook (13%), with a growing prominence of audiovisual networks, with 15% of the total (11% TikTok and 4.9% YouTube), and the online messaging application WhatsApp being responsible for replicating 6.4% of the hoaxes.
As can be seen in Figure 1, the hoaxes mostly had audiovisual support (47% photographs and 38% videos); those accompanied only by audio (1%), all of which circulated via WhatsApp, were almost non-existent, as were GIFs (0.2%).

Reach of Verifier Tweets
The messages issued by the news agencies obtained better results than those issued by the verifiers; their ability to go viral made them reach a higher percentage of users and obtain higher engagement, receiving 8 more comments, 74 more retweets, and up to 234 more likes than the verifiers' messages. In terms of reproductions, the difference is overwhelming: almost 300,000 in the case of the agencies (274,882), compared to the 6993 registered by the verifiers.
Of the agencies, EFE obtained the highest engagement, followed by AP and AFP. These three agencies also lead in terms of discussion, although the level is very low: fewer than two responses per tweet issued in all cases. The tweets issued by AFP and AP do, however, register a large reach, with 10,442 and 6277 reproductions, respectively.
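As a rough illustration of how per-account figures like these can be derived from raw tweet statistics, the following sketch aggregates reach, engagement, and a discussion index for one account. The field names and the metric definitions (engagement as likes + retweets + replies; discussion index as replies per tweet) are assumptions for the example, not the formulas published in this study:

```python
# Hypothetical sketch: aggregating per-account reach and engagement from
# raw tweet metrics. The definitions below (engagement = likes + retweets
# + replies; discussion index = replies per tweet) are illustrative
# assumptions, not the study's published formulas.

def aggregate(tweets):
    """Summarize a list of tweet-metric dicts for one account."""
    n = len(tweets)
    likes = sum(t["likes"] for t in tweets)
    retweets = sum(t["retweets"] for t in tweets)
    replies = sum(t["replies"] for t in tweets)
    views = sum(t["views"] for t in tweets)
    return {
        "tweets": n,
        "reach": views,                                  # total reproductions
        "engagement": likes + retweets + replies,        # assumed definition
        "discussion_index": replies / n if n else 0.0,   # replies per tweet
    }

# Example: two debunking tweets from one fictional verifier account
sample = [
    {"likes": 880, "retweets": 481, "replies": 20, "views": 829_724},
    {"likes": 176, "retweets": 100, "replies": 15, "views": 70_000},
]
print(aggregate(sample))
```

Under these assumed definitions, ranking accounts by the resulting dictionaries reproduces the kind of comparison made above between agencies and verifiers.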

In the case of the verifiers, the local ones stand out the most, perhaps due to geographical proximity or to the law against disinformation passed in October 2022 by the Turkish government, which imposes restrictions on online news sites and social media platforms (see Figure 2). Thus, the tweets of Teyit, Doğruluk Payi, and Doğrula have the highest numbers of retweets (481, 130, and 100, respectively), as well as high numbers of likes (880, 235, and 176) and a remarkable discussion index, with 20 points in the case of Teyit and 15 for Doğrula. Their reach is also huge, with Teyit's tweets obtaining 829,724 views and Doğruluk Payi's 701,600.
Outside Turkey, Full Fact and Politifact are the verifiers with the highest engagement. Full Fact also stands out at the debate level, with more than 40 comments received by the British verifier (see Figure 3). They are followed by Liputan6 and Snopes, while the Spanish sites Newtral and Maldita register very low levels of engagement; in fact, the latter does not register any likes. In terms of reach, Politifact again stands out (30,062 views), followed by 20 Minutes and Aos Fatos with 29,705 each.
If we look at the category of disinformation, conspiratorial tweets are the ones that achieve the highest engagement and debate, followed at a great distance by those that damage reputation and those that clearly manipulate images (see Figure 4). Conspiracy-type tweets also achieved the greatest reach, registering 1,196,754 reproductions. At the other extreme, stereotypical stories and false locations are the disproved categories that receive the least follow-up and response (see Figure 5).
In terms of disinformation strategy, messages that exaggerate or invent facts stand out, while false attribution and impersonation achieve the lowest levels of engagement (see Figure 6). The invention and exaggeration of facts are also the strategies with the highest rate of debate, reaching up to 704,091 reproductions in the case of invention (see Figure 7).
As for the type of information disorder, tweets pointing to manipulated and impostor content achieve higher engagement. Both types also polarize the debate, while manipulated content, with a total of 741,967 reproductions, has the highest reach (see Figures 8 and 9). It is interesting to note that authentic news items barely elicit reproductions, likes, or retweets.
When examining the supporting elements in relation to the reach of the tweets, we observed that messages including photography or video achieve greater engagement (see Figure 10); however, it is the messages with audio, despite their small number (under 1%), that have the greatest reach, with more than 13 million reproductions, putting the instant messaging network WhatsApp in the spotlight for its power to viralize misinformation (see Figure 11).

Types of Disorder and Categories of Misinformation
In relation to the type of fake news created and circulated in the wake of a major natural disaster, we can state that, in the case of the earthquakes in Turkey and Syria, false context stands out (51%). For example, images of the collapse of the Champlain Towers South condo in Surfside, Miami, were passed off as buildings in Turkey.
Next in importance is content fabricated with the aim of misleading and causing harm (23%). This is the case of news stories claiming that foreign countries withdrew their ambassadors from Turkey 24 h before the catastrophe, implying that they knew what was going to happen. There were also fake fund-raising appeals for the victims of the earthquakes that concealed phishing schemes.
It is also noteworthy that 8% of the news items turned out to be authentic and that 11% of the information was manipulated with a clear intention to deceive (see Figure 12).

The results for the agencies and the verifiers are unequal with respect to the type of disinformation verified. First, it is observed that the agencies do not verify the news reports that turn out to be authentic, something that mainly the Turkish verifiers do. While it is true that both mostly disprove news with a false context, the agencies focus secondly on impostor content (13.1%), while the verifiers detect a higher percentage of fabricated (27.6%) and manipulated (11.8%) content (see Figure 13).
Of the agencies studied, EFE is the one that identifies the most impostor content, while AFP and Correctif Fakt particularly detect manipulated content. Fabricated content and false connections are located to a greater extent by AP and AFP (see Figure 14). In the case of the verifiers, Snopes, Politifact, and FactCheck are the only ones to detect false connections, impostor content, and misleading content, respectively. Liputan6 is the only verifier outside Turkey that verifies the authenticity of news. As can be seen in Figure 15, except for the Turkish verifiers, most identify only one or two types of information disorder.
Clearly in line with the studied typologies, the predominant category is false location (34%), followed by disinformation of an emotional nature (27%), such as the tweet of a dog finding its owners among the rubble (see Image 1), taken from an image bank and viralized in other languages such as English, French, and German. In smaller proportions, we found disinformation with a clear intention to damage reputation (10%) and disinformation of conspiratorial origin (9%), such as reports of pre-earthquake lights seen in the sky or premonitory flocks of birds (see Image 2). Eight percent of the verified images were found to be manipulated, and 8.7% of the messages were ultimately intended to commit fraud.
False location hoaxes and sensationalist hoaxes are the categories of disinformation most frequently identified by the agencies and verifiers, as can be seen in Figure 16. To a lesser extent, the agencies identify hoaxes that manipulate images (10.8%) and conspiracies (7.2%), while the verifiers check almost twenty percent of the latter category. It is important to note that none of the agencies disprove disinformation in the fraud category, while the verifiers do so 10.8% of the time.
Conspiratorial hoaxes are most detected by EFE (45%), followed by DPA, GadmoEu (100%), and Correctif Fakt (57.1%), and these hoaxes are mainly related to false location. Emotional content hoaxes account for over 25% in most agencies (see Figure 17), reaching 41.7% in the case of Reuters. AFP is the only one that refutes hoaxes that cause reputational damage (7.4%), as well as those that point to stereotyped stories (3.7%).
As with the types of disinformation, except for the Turkish verifiers, most verifiers identify only one or two categories, the most recurrent being false location, sensationalism, and conspiracy theories (see Figure 18). Fact Check is the one that most often identifies the fraud category, while the local verifiers detect and disprove the largest number of disinformation categories.

Disinformation Strategies
False attribution (48%) and the invention of facts (27%) are the most detected disinformation strategies (see Figure 19). The objective behind both is to achieve rapid viralization: shocking images of natural disasters are passed off as, or linked to, current events, and facts are invented in order to reach more people, further and faster. In third place, we find that 17% of the misinformation was based on image manipulation, such as a tweet of a fake staircase that remained standing, or clouds in the shape of praying hands that were nothing more than a manipulated frame posted by an Argentinean fan on TikTok during the World Cup in Qatar (see Image 3a,b).

The disinformation strategy most frequently identified by the agencies is false attribution (50%), followed by image manipulation (26.2%). This is the general trend, except for EFE, which denied 65% of hoaxes centered on the manipulation of images. On the other hand, the German DPA, through Correctif Fakt, detected to a greater extent information that promoted impersonation (14.3%) and the fabrication of facts (28.6%) (see Figures 20 and 21).
The verifiers, on the other hand, as a general rule focus on the exaggeration of facts (47.3%) and impersonation (30%). In fact, quite a few tweets alluded to claims that Qatar was going to donate the money obtained from the World Cup, or that Turkish universities were on vacation when, in fact, after the disaster they continued teaching in an online format.
As on previous occasions, the Turkish fact-checkers are usually the ones who disprove the most disinformation strategies, while the rest focus on one or two. Animal Político only disproves falsely attributed news, and 20 Minutes focuses only on tweets that invent facts. PolitiFact is the verifier that most disproves image manipulation, while Fact Check and the Turkish verifier Teyit are the ones that most often identify phishing hoaxes (see Figure 22).

Discussion and Conclusions
Truth has an added handicap that makes it vulnerable to misinformation. While the truth must prove that it is true, a lie does not have to do so; it just spreads like an oil slick on water.
It is precisely in the situations of uncertainty and vulnerability provoked by catastrophes that disinformation finds fertile ground to grow exponentially. Amid the confusion and drama, disinformation thrives with a wide range of intentions, from conspiracy narratives to fraud. The rapid spread of fake news at critical moments underlines the need for transparent and accurate communication; to this end, authorities, journalists, and digital platforms must collaborate to disseminate verified data and counteract disinformation [1,100].
Crisis communication in disaster situations has become outdated with the irruption of the Internet, and it is time to reinforce traditional communication channels by incorporating new actors capable of informing, producing content, and providing rapid information in real time [1,101]. Social networks have proven to be a basic source of information and cooperation for citizens: they not only bridge physical distances, but their immediacy, speed, and bidirectional capacity allow them in many cases to provide an effective response, and even to counteract rumors [102-104].
Social networks function as an emotional support and allow the reinforcement of traditional communication channels; therefore, they play a key role in the crisis communication of governments and institutions [32] since they allow a constant monitoring of the evolution of the disaster [24], as has been proven in previous natural disasters [38].
However, they can also aggravate crises by disseminating erroneous data, generating panic, and hindering decision-making and, consequently, an effective response [67]. It is precisely in the face of such events that citizens need greater information transparency; information saturation itself generates disinformation, as bots develop information bubbles [43] and create an intoxicated information ecosystem [44].
In this sense, previous studies have shown that network publications become obsolete and inaccurate [45] and even lead to informative errors in the media, which is why the work of verification takes on unusual importance in this crusade against disinformation. Along these lines, our study aims to explore both the type of disinformation that circulated around the earthquakes in Turkey and Syria in February 2023 and the response mechanisms offered by institutions, news agencies, and verifiers.
The results of this research provide new knowledge and interesting findings on the misinformation that circulated on social media and instant messaging platforms about the natural disaster in our study.
The main contribution is to note the low level of commitment of official bodies and news agencies to the verification and rectification of false news circulating on the Internet. In response to RQ1, we found that even though the quality of democracy and the full exercise of civil rights depend on the veracity of information [105], neither the official accounts of the Turkish government nor the main international news agencies took responsibility for refuting the existing disinformation. Although it is true that the agencies have specific verification profiles, the percentage of verified news is low, a trend that is also present in the verifications they carried out.
The media continue to play a dominant role in crisis and disaster communication [106], and to this end, the journalistic profession has adopted the detection of falsehood as one of its work routines. In this sense, a new type of journalism focusing on data, research, and verification emerges: fact-checking. This is a new business model [11] (p. 10) that responds to a growing need for the digital verification of news, while contributing to creating an ecosystem that ensures the accuracy of data and facts circulating in the public sphere [105].
On the value of fact-checkers in improving the quality of information circulating on social networks (RQ2), we found that it is precisely these verifiers who detect the greatest number of disinformation types, categories, and strategies, which reflects both the effort invested and the professionalism of their staff. Among the fact-checkers studied, the Turkish ones, especially Malumatfuruş and Teyit, verified the highest volume of disinformation; outside Turkey, the Spanish company Newtral carried out the most verification work.
Their firm commitment to strengthening journalists' skills against disinformation [107], together with the incorporation of artificial intelligence algorithms that help detect fake news, has made fact-checking agencies an instrument for restoring credibility to journalism and revitalizing democracy and public discourse [108], even though the main difficulty in the fight against disinformation is that the term itself is very broad.
Our third contribution concerns the typology of fake news created and circulated in the aftermath of a major natural disaster (RQ3). We confirmed that the verified content corresponds mostly to disinformation, since it is designed and circulated intentionally, whether to appeal to human emotions, magnify the event, damage reputations, or simply commit fraud. Only 0.5% of cases were misinformation, derived from the incorrect interpretation of the economic and material aid announced by various international organizations (see the case of the donation of the revenues of the World Cup in Qatar).
In relation to the predominant bias, it is mostly negative, and although more than half of the items circulated through X, TikTok and the WhatsApp application are gaining prominence. It is also worth noting that most of the hoaxes had audiovisual support, and although only 1% were accompanied by audio, their penetration rate was enormous, which points to the viralizing power of WhatsApp messages in spreading misinformation.
Regarding the most frequent type of information disorder, false context, i.e., real images previously used in other natural disasters and passed off as current, was the most prominent, with content aimed at deceiving or defrauding also notable. Within the typologies studied, false localization and emotional disinformation were the predominant categories. These results coincide with those of previous studies [8,109] (p. 10), which point out that hoaxes following the false-context structure suggest that "potential issuers prefer to start from what they consider visual evidence, which, accompanied by a text, would serve as a way of justifying the veracity of the claims".
However, it is striking that almost 30% of the analyzed messages aimed to damage reputations, commit fraud, or spread conspiracy theories. Despite their lower share, these messages obtain by far the greatest reach, as well as higher engagement and a high level of debate, especially those that invent facts.
As for the strategies employed, false attribution, the invention of facts, and the manipulation of images for sensationalist purposes stand out as drivers of the rapid dissemination and viralization of messages.
Another relevant aspect highlighted by our research is the significant role and prominence of Turkish fact-checkers. Their work is essential to prevent the spread of false information in contexts of alarm and uncertainty, and it also means that less content goes unverified than in other countries. Also noteworthy is their practice of assigning a neutral bias to information that shows signs of verisimilitude but that they have not been able to prove true.
In crisis situations, disseminating accurate information is crucial to prevent the spread of rumors and harmful misinformation. Verifying accurate news gives the population the certainty that they are dealing with reliable data on the magnitude of the disaster, the affected areas, and the recommended safety measures, helping them make decisions critical to their well-being and allowing them to trust the authorities and the response measures implemented. Verification by Turkish fact-checkers also helps prevent the dissemination of false information in contexts of alarm and uncertainty, so it is striking that neither the agencies nor official bodies used their channels to reassure the population and ensure public safety.
Finally, in relation to whether the efforts made to counteract disinformation are sufficient or whether the power of propagation exceeds that of verification (RQ4), the metrics show that, despite their efforts, the verifiers' tweets do not achieve the desired reach, debate, or engagement; only the local verifiers, especially Teyit, Doğruluk Payı, and Doğrula, perhaps owing to their geographical proximity and credibility among the population, obtain acceptable levels of viralization when disproving disinformation. These results confirm that the detection and exposure of a hoax does not always have the same impact as the original news story [110].
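The reach, engagement, and debate figures discussed throughout these results can be operationalized in several ways. Purely as an illustration (the definitions below are common conventions we assume, not the formulas used in this study, and the numbers are hypothetical), such per-tweet metrics might be computed as:

```python
def engagement_rate(likes: int, replies: int, retweets: int, impressions: int) -> float:
    """Interactions per impression: a common (assumed) engagement definition."""
    if impressions == 0:
        return 0.0
    return (likes + replies + retweets) / impressions

def discussion_rate(replies: int, likes: int, retweets: int) -> float:
    """Share of interactions that are replies: a simple proxy for debate."""
    total = likes + replies + retweets
    return replies / total if total else 0.0

# Hypothetical numbers: a hoax tweet versus the tweet debunking it.
hoax_engagement = engagement_rate(likes=5000, replies=1200, retweets=3000, impressions=100_000)
debunk_engagement = engagement_rate(likes=300, replies=40, retweets=150, impressions=20_000)
```

Comparing such ratios for a hoax and its debunk is one way to quantify the finding that the exposure of a hoax rarely matches the impact of the original story [110].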
In conclusion, these empirical results demonstrate the high level of disinformation circulating on social networks and instant messaging platforms after a natural catastrophe such as the earthquakes in Turkey and Syria in February 2023. They also put the focus back on how social networks take no responsibility for what is posted on them, and on how the "media citizenry" [111], which should be "the backbone of pluralistic societies through contrasted information, reasonable opinions and plausible interpretations", lacks the skills to resist the incessant flow of disinformation generated in this type of situation.
The study also highlights the role of fact-checking in journalism: based on the discourse of objectivity, journalists are responsible for verifying and disproving circulating information on the basis of sources, verified facts, and contrasting voices. This task should also fall to the institutions responsible for increasing the supply of public goods, by providing truthful and intelligible information that allows citizens to understand their immediate environment [112].
Based on the results obtained, we propose a recommendation for combating the disinformation generated in disaster situations: creating standardized dissemination records and metrics so that information already disproved by fact-checking agencies can be distinguished from new disinformation, thus saving verification effort, reducing workload, and more effectively curbing both the flow of disinformation and the re-viralization of already debunked hoaxes [109].
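One possible realization of this recommendation (a sketch under our own assumptions; every name below is hypothetical, and nothing of the kind is implemented in the study) is a shared registry of already-debunked claims that flags resurfacing hoaxes by fuzzy-matching normalized claim text, so that repeats are routed to the existing verification instead of being re-verified:

```python
import difflib
import unicodedata

def normalize(text: str) -> str:
    """Case-fold, strip accents, and collapse whitespace so near-duplicates align."""
    text = unicodedata.normalize("NFKD", text.casefold())
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return " ".join(text.split())

class DebunkRegistry:
    """Hypothetical shared index of claims already disproved by fact-checkers."""

    def __init__(self, threshold: float = 0.8):
        self.known: list[str] = []
        self.threshold = threshold  # similarity above which a claim counts as a repeat

    def add(self, claim: str) -> None:
        self.known.append(normalize(claim))

    def already_debunked(self, claim: str) -> bool:
        candidate = normalize(claim)
        return any(
            difflib.SequenceMatcher(None, candidate, seen).ratio() >= self.threshold
            for seen in self.known
        )

registry = DebunkRegistry()
registry.add("Video shows a tsunami hitting the Turkish coast after the quake")
```

With this sketch, a lightly rephrased repeat such as "video shows a Tsunami hitting the turkish coast after the quake!" would match the stored claim, which is precisely the saving of verification effort the recommendation targets; a production system would need multilingual and multimedia matching well beyond this string comparison.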
Among the limitations of this study, some are inherent to the sample, such as the impossibility of analyzing fake news that circulated on a network but was never examined by the fact-checking agencies. In addition, in the case of the Syrian earthquake, no official verified accounts, nor any response from the Spanish Embassy in Syria, were found that might have overcome this limitation.
Future research on verification could be extended with an in-depth analysis of the new promoters of fake news on social networks, their thematic and strategic agendas, the implementation of new tools related to the dissemination of disinformation through artificial intelligence, and the impact on audiences.

Figure 1. Comparison of the reach obtained by agency tweets.

Figure 2. Comparison of the reach achieved by Turkish verifiers.

Figure 3. Reach achieved by the rest of the verifiers.

Figure 4. Engagement and discussion achieved by tweets based on their category.

Figure 5. Reach of tweets according to their category.

Figure 6. Engagement and discussion achieved by tweets based on their strategy.

Figure 7. Reach of tweets according to their strategy.

Figure 8. Engagement and discussion achieved by tweets based on the type of information disorder.

Figure 9. Reach of tweets based on the type of information disorder.

Figure 10. Engagement and discussion achieved by tweets based on their supporting elements.

Figure 11. Reach of tweets based on their supporting elements.

Figure 13. Comparison of information disorders between agencies and verifiers.

Figure 14. Information disorders debunked by the agencies.

Figure 15. Type of information disorder debunked by the verifiers.

Figure 16. Comparison of disinformation categories detected by agencies and verifiers.

Figure 17. Categories of disinformation debunked by the agencies.

Figure 18. Categories of disinformation debunked by the verifiers.

Image 3. (a,b) Image manipulation: a snapshot of Aleppo in 2013 and clouds representing praying hands. Both tweets are from Malumatfuruş.

Figure 21. Disinformation strategies detected by the agencies.

Table 1. Organizations, news agencies, and verifiers included in the sample.

Table 2. IFCN media selected for the sample.