Article

Fact-Checking Platforms in the Middle East: A Comparative Study in the Age of Artificial Intelligence

by Hala Alshwayyat and Jorge Vázquez-Herrero *
Faculty of Communication Sciences, Universidade de Santiago de Compostela, Avenida de Castelao s/n, 15782 Santiago de Compostela, A Coruña, Spain
* Author to whom correspondence should be addressed.
Soc. Sci. 2026, 15(3), 185; https://doi.org/10.3390/socsci15030185
Submission received: 29 November 2025 / Revised: 10 March 2026 / Accepted: 11 March 2026 / Published: 13 March 2026
(This article belongs to the Special Issue Disinformation in the Age of Artificial Intelligence)

Abstract

Information disorders are a significant global issue but are particularly relevant and underexplored in the Middle East, where political instability contributes to their spread. Despite the critical role fact-checking platforms play in combating information disorders, little is known about how these platforms operate in such a complicated regional context. This study analyzes three fact-checking platforms, Akeed (Jordan), Teyit (Turkey), and Factnameh (Iran), to better understand the differences in how they approach fact-checking, the strategies they use, and the obstacles they face, including social and political conditions as well as the impact of AI. Using a multimethod qualitative approach based on document analysis and interviews, the study highlights recurring issues such as censorship, limited access to data, and audience engagement. The findings reveal how these platforms address these challenges and provide valuable insights into effective methodologies for fighting mis-/disinformation. The results offer broader implications for enhancing media literacy, strengthening the role of fact-checking platforms in the Middle East, and informing best practices that can be applied regionally.

1. Introduction

In recent years, the rise in information disorders has become one of the most pressing global challenges, affecting journalism, politics, and public trust in media. While research has largely focused on Western contexts, comparatively little is known about how misinformation is addressed in the Middle East, where political constraints and censorship shape the circulation of information. Wardle and Derakhshan (2017) offered one of the most influential frameworks for understanding this phenomenon, distinguishing between misinformation (false information shared without intent to harm), disinformation (false information shared deliberately to harm), and malinformation (genuine information shared to cause harm). Expanding on this conceptualization, Kandel (2020) introduced the notion of Information Disorder Syndrome, which classifies the spread of false information into different grades of severity and calls for strategies ranging from rumor surveillance to regulatory interventions. More recent research by Revez and Corujo (2024) highlights the persistent research gaps in understanding how different actors, including scientists and media professionals, cope with the creation, acceptance, and dissemination of false information, emphasizing the need for multidisciplinary approaches to address this evolving problem.
While much of the existing literature focuses on Western contexts, there remains limited understanding of how information disorders manifest in the Middle East, where political instability, censorship, and limited digital access shape the circulation of information. This study seeks to address this gap by examining how three fact-checking platforms, Akeed in Jordan, Teyit in Turkey, and Factnameh in Iran, operate within this complex landscape, exploring their methodologies, challenges, and the growing influence of artificial intelligence in their practices. Investigating these platforms is important because it expands knowledge of fact-checking practices beyond Western contexts, provides insights into credibility-building under constrained environments, and highlights how emerging technologies like AI are cautiously integrated into verification processes. By focusing on these Middle Eastern cases, the study offers empirical insight into the operational realities, strategies, and challenges of fact-checking in regions that remain underrepresented in academic research.

1.1. Media Environments in the Middle East

The media environments of the Middle East are characterized by a persistent tension between modernization and control, where political structures, economic fragility, and social dynamics shape information circulation and credibility. As Sreberny (2008) argues, studying media in this region requires moving beyond Western-centric models toward frameworks that account for overlapping political and cultural forces. While digitalization has expanded the public sphere, it has also amplified information disorders: false or misleading content that thrives in contexts marked by censorship, low transparency, and polarized politics.
Jordan presents a hybrid media landscape that swings between openness and constraint. Despite periodic reforms, traditional media remain heavily influenced by political authority and economic dependency. Studies by Alzubi (2022) and Pies and Madanat (2011) show that digital transformation has offered new opportunities for expression and accountability, yet challenges persist, including limited access to official data, restrictive media laws, and insufficient funding for independent outlets. Tweissi (2019) notes that while the Jordanian media system has partially liberalized since the early 2000s, self-censorship and regulatory barriers continue to limit critical reporting. Recent initiatives studied by Dayyeh and Al-Zaghlawan (2024), such as the government’s integration of media and information literacy (MIL) into the education system, represent an effort to build resilience against misinformation. However, as Habes et al. (2023) pointed out, misinformation continues to spread rapidly online, particularly during times of crisis, such as the COVID-19 pandemic. These conditions create a complex environment in which fact-checking initiatives like Akeed operate as both mediators of truth and navigators of political sensitivity.
In Turkey, the Justice and Development Party (AKP) has gradually consolidated control over the media landscape, reshaping it into what scholars like Akser and Baybars-Hawks (2012) and Coşkun (2020) describe as a “captured” system. Demir (2020) emphasizes that this control extends to economic ownership, judicial pressure, and intimidation tactics that discourage dissent. Although, as Çetinkaya et al. (2014) note, social media initially offered alternative avenues for public debate and mobilization, the state has increasingly exerted control through digital surveillance, content removal, and the strategic use of pro-government “troll armies”—organizations or groups that coordinate the spread of disinformation (Ayeb and Bonini 2024)—to dominate discourse, as Saka (2018) confirmed in his research. Corke et al. (2014) and Kurban and Sözeri (2013) both document how legislative and institutional constraints have undermined media pluralism and accountability, resulting in one of the most repressive communication environments in the region. Within this setting, Teyit—Turkey’s leading fact-checking site—must balance its verification mission with navigating political hostility and regulatory threats.
Iran represents a more tightly centralized media system dominated by state ideology and censorship. Rahimi (2015) explains how Iran’s media regulation is designed not only to suppress dissent but also to construct a state-sanctioned narrative of legitimacy. Blout (2017) adds that this control operates through a discourse of “soft war,” portraying foreign media influence as a cultural invasion. Sohrabi-Haghighat (2011) highlights that despite pervasive restrictions, online communication has occasionally served as a site of resistance, as seen during the 2009 Green Movement—protests in Iran against alleged electoral fraud, marked by mass demonstrations, social media activism, and demands for political reform and civil rights, which faced severe government repression—when digital networks challenged state narratives. Yet, Zanconato and Sabahi (2019) argued that subsequent crackdowns have reinforced strict ideological boundaries, and independent journalism remains heavily constrained. These dynamics help explain why Factnameh—Iran’s principal fact-checking site—operates from Toronto, Canada, where it can conduct verification work without exposure to censorship, surveillance, or political retaliation.
Across Jordan, Turkey, and Iran, the media environments share several structural features, including state influence over information flows, restrictions on press freedom, and limited data transparency, all of which facilitate the persistence of information disorders. Understanding these environments is essential for analyzing how fact-checking initiatives such as Akeed, Teyit, and Factnameh function, the challenges they face, and the strategies they employ to uphold credibility within constrained media systems.

1.2. Fact-Checking and Its Challenges in Times of AI

Fact-checking has become an essential journalistic practice in the fight against misinformation and disinformation, both of which undermine public trust and distort democratic discourse. Traditionally, fact-checking referred to internal verification routines within news organizations, aimed at ensuring accuracy before publication. As Graves and Amazeen (2019) explain, this “internal fact-checking” was once a defining feature of professional journalism, especially in magazines that adhered to strict verification norms. However, with the decline of newsroom resources and the rise in digital misinformation, a newer form, external fact-checking, has gained prominence. This involves publicly evaluating the accuracy of statements made by politicians, public figures, or viral online content.
Fact-checking emerged as a distinct journalistic practice in the United States during the 1990s, initially as a response to misleading political advertising on television. Since then, it has expanded into a global movement involving practitioners, academics, and civil society (Graves 2018). Vázquez-Herrero et al. (2019) analyzed 135 active fact-checking sites, most of them digital natives. Graves and Cherubini (2016) categorize these organizations into two main operational models: newsroom-based fact-checking units, which function within established media outlets, and NGO-style initiatives, which operate independently with external funding. They further distinguish between fact-checkers as reporters, reformers, or experts, depending on whether they focus on informing the public, improving journalistic standards, or influencing policy.
Beyond organizational structures, Amazeen (2020) argues that the rise in fact-checking reflects a broader democratic reform movement, emerging most strongly in countries facing political polarization or weak institutional trust. Empirical evidence supports this: countries with higher levels of internet access and greater democratic engagement tend to see more fact-checking initiatives. Nevertheless, the expansion of fact-checking does not automatically ensure public trust or behavioral change. Studies such as Nyhan et al. (2020) and Margolin et al. (2018) reveal that while factual corrections can improve citizens’ knowledge accuracy, they often fail to alter political preferences. This limitation is partly explained by motivated reasoning (individuals’ tendency to accept information aligned with their beliefs) and by the importance of social trust in the messenger. Margolin et al. (2018) found that corrections are more effective when delivered through socially connected individuals rather than anonymous or institutional actors, emphasizing that successful fact-checking depends not only on accurate information but also on relational credibility.
However, the exponential increase in online misinformation, coupled with the speed and volume of digital communication, has made manual fact-checking increasingly unsustainable. Researchers have therefore turned to automated and semi-automated systems to support verification processes. Vlachos and Riedel (2014) were among the first to conceptualize automated fact-checking as a computational task, defining it as the assessment of a claim’s truthfulness through algorithmic evidence retrieval and analysis. Their work inspired subsequent research into automation tools such as ClaimBuster (Hassan et al. 2015), which identifies “check-worthy” political statements in real time. Guo et al. (2022) refined this body of work by outlining a four-stage model of automated fact-checking: claim detection, evidence retrieval, verdict prediction, and justification generation. They highlight both progress and persistent obstacles, such as limited datasets and the difficulty of reasoning across multiple sources. Nakov et al. (2021) further emphasize that while automation offers scalability, it cannot replace human judgment. They propose a human-in-the-loop model, where computational tools assist but do not substitute professional fact-checkers. This hybrid approach allows algorithms to flag potentially false claims, retrieve multilingual evidence, or summarize complex content, while journalists evaluate context and nuance. Similarly, Alhindi et al. (2018) argue that integrating human justifications into machine learning models through datasets such as LIAR-PLUS enhances system accuracy and better reflects how journalists reason when verifying claims.
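To make the four-stage model concrete, the following is a deliberately minimal sketch of how such a pipeline is typically structured. All function names and the naive keyword-matching logic are illustrative assumptions for exposition only; real systems described in this literature replace each toy rule with trained models.

```python
# Toy sketch of the four-stage automated fact-checking pipeline:
# claim detection -> evidence retrieval -> verdict prediction -> justification.
# Every rule below is a hypothetical placeholder, not a production method.

def detect_claims(text: str) -> list[str]:
    # Stage 1 (claim detection): flag "check-worthy" sentences.
    # Placeholder heuristic: sentences containing a digit.
    return [s.strip() for s in text.split(".") if any(c.isdigit() for c in s)]

def retrieve_evidence(claim: str, corpus: list[str]) -> list[str]:
    # Stage 2 (evidence retrieval): naive keyword overlap stands in
    # for a real retrieval model over a document collection.
    words = set(claim.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def predict_verdict(claim: str, evidence: list[str]) -> str:
    # Stage 3 (verdict prediction): toy rule -- "supported" only if
    # some retrieved document literally restates the claim.
    if any(claim.lower() in doc.lower() for doc in evidence):
        return "supported"
    return "unverified"

def generate_justification(claim: str, evidence: list[str], verdict: str) -> str:
    # Stage 4 (justification generation): template-based explanation.
    return f"Claim '{claim}' judged {verdict} against {len(evidence)} document(s)."
```

In the human-in-the-loop arrangement described above, only the first two stages would typically run unattended (flagging claims and gathering candidate evidence), with verdicts and justifications reviewed or authored by professional fact-checkers.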
Recent scholarship further emphasizes the importance of contextualizing fact-checking within diverse political and cultural settings. Vinhas and Bastos (2024) argue that fact-checking practices in non-WEIRD countries (countries outside the Western, Educated, Industrialized, Rich, and Democratic world) are shaped less by individual misinformation behaviors and more by broader social, historical, and community dynamics. Their findings show that misinformation in these contexts often circulates through collective and community-driven channels, where distrust in institutions and mainstream media is widespread. As a result, fact-checkers in such environments frequently rely on educational initiatives and horizontal, peer-to-peer correction strategies rather than traditional top-down verification models. This reinforces the need for context-sensitive fact-checking frameworks, particularly relevant in regions like the Middle East, where local tensions, social cohesion, and institutional constraints strongly influence how misinformation spreads and how fact-checking interventions are received.
Together, these studies underline that the future of fact-checking lies in collaboration between human expertise and artificial intelligence, rather than complete automation. Emerging technologies, however, bring new challenges and risks. Gutiérrez-Caneda and Vázquez-Herrero (2024) acknowledge that AI can be a double-edged sword after analyzing its use in fact-checking. Automated systems risk amplifying existing biases if trained on unrepresentative datasets, and AI-generated content, such as deepfakes or synthetic text, complicates the verification process itself. Moreover, in politically sensitive regions like the Middle East, where data transparency is limited and censorship is widespread, deploying AI tools for verification raises ethical concerns about surveillance, privacy, and accountability. Addressing these challenges requires not only technological innovation but also renewed commitment to journalistic principles, transparency, and audience education.

1.2.1. Strengthening Journalism and Returning to Basics

Despite technological advancements, scholars continue to stress that effective fact-checking ultimately depends on strong journalistic institutions and media-literate publics. Graves et al. (2015) demonstrate that institutionalizing fact-checking through dedicated operations and professional training is more effective than relying on peer imitation or audience demand. This finding underscores the need to reintegrate verification as a core journalistic value (Kovach and Rosenstiel 2001), aligning it with journalism’s traditional watchdog and truth-seeking roles.
In the Middle East, where political pressures and economic dependency often undermine journalistic independence, media and information literacy (MIL) initiatives offer a complementary path toward resilience. Jordan’s experience, as examined by Dayyeh and Al-Zaghlawan (2024), illustrates how structured MIL programs, such as those led by the Jordan Media Institute and supported by UNESCO, can strengthen citizens’ capacity to identify misinformation and demand accuracy. These initiatives highlight the potential of combining top-down policy efforts with grassroots media education to create an informed and critical public.
Fact-checking, then, is not merely a technological or procedural task but part of a broader effort to rebuild trust in journalism. As Amazeen (2020) notes, fact-checking often emerges where democratic institutions are strained, serving as a corrective mechanism for accountability. In this sense, returning to the fundamentals of journalistic ethics (accuracy, transparency, and responsibility) remains as crucial as adopting new technologies. While artificial intelligence can enhance verification speed and scope, the human dimensions of credibility, trust, and ethics remain at the heart of effective fact-checking.

1.2.2. Applications and Challenges of AI in Fact-Checking

The rapid spread of misinformation has spurred growing interest in the use of AI to support the fact-checking process. Graves (2018) provides a foundational overview of Automated Fact-Checking (AFC) systems, describing their goal as assisting human fact-checkers by automating three main stages: identification, verification, and correction. Current applications affect every part of the fact-checking process (Gutiérrez-Caneda and Vázquez-Herrero 2024), but focus primarily on analyzing images or video, verifying simple factual claims against structured data, or matching statements to existing fact-checks. However, as Graves notes, full automation remains a distant goal due to the complexity of human judgment, contextual interpretation, and the nuanced understanding of intent and meaning required in more intricate claims.
Recent studies highlight that technological innovation alone is insufficient to ensure reliable fact-checking. Juneja and Mitra (2022) emphasize the need to understand the human and organizational infrastructure surrounding fact-checking work. Through interviews with professionals worldwide, they reveal widespread skepticism toward AI tools, citing “tool overload” and the risk of over-reliance on opaque systems. Practitioners prefer explainable and participatory models in which humans retain oversight, reflecting a broader consensus that automated systems should augment rather than replace journalistic expertise.
Brandtzaeg et al. (2018) further explore how journalists and social media users perceive online verification tools, finding ambivalence toward automation. While users acknowledge these tools’ usefulness in investigative work, they remain reluctant to rely on them exclusively, especially when transparency about methods and funding is lacking. The authors argue that successful AI-supported fact-checking must prioritize transparency, accountability, and collaboration, integrating both professional and public contributions in identifying and evaluating claims.
The effectiveness of fact-checking is not solely determined by procedural rigor or technological tools; audience perception plays a critical role. Liu et al. (2025) show that the perceived credibility of the fact-checking source strongly mediates the impact of fact checks on belief correction, with high-credibility sources overcoming motivated reasoning even when individuals’ initial beliefs conflict with the verdict. This highlights that platforms’ transparency, professionalism, and public trust are central to successful misinformation correction, complementing the collaborative and workflow-focused insights from Juneja and Mitra (2022).
Despite these advances, AI introduces persistent challenges (Gutiérrez-Caneda and Vázquez-Herrero 2024). Automated systems often struggle with unstructured or multilingual data (Guo et al. 2022), a critical issue in the Middle East, where regional dialects and local references complicate machine understanding. Moreover, algorithmic verification tools rely heavily on open and structured data resources that are often scarce in politically constrained environments (Vinhas and Bastos 2024). The ethical implications of automation also remain significant, particularly in countries with limited press freedom (Wardle and Derakhshan 2017).
Studying fact-checking platforms in this region requires looking not only at technological tools like AI but also at the organizational workflows, transparency practices, and audience engagement strategies that shape their effectiveness. Although AI is increasingly discussed in global fact-checking, its adoption in these platforms is limited, making it necessary to consider both technological and human-centered practices to understand how these organizations operate.

1.2.3. Akeed, Teyit & Factnameh

Fact-checking organizations in the Middle East have emerged as critical tools for improving media credibility and curbing misinformation in contexts marked by political constraints, censorship, and low public trust. While these platforms share the common goal of verifying information, their methods, scope, and challenges differ significantly due to varying media and political environments. This section examines three leading initiatives: Akeed in Jordan, Teyit in Turkey, and Factnameh in Iran, highlighting their roles, strategies, and limitations.
Fact-checking organizations in restrictive or authoritarian environments often contend with severe operational pressures, including online harassment, government threats, and financial constraints, which shape both their focus and methods (Badji et al. 2024). As observed in Ethiopia and Mali, such organizations may self-censor on politically sensitive issues, prioritize viral social media content over direct political accountability, and adapt their dissemination strategies using local languages or alternative media formats. These findings underscore how contextual pressures shape verification workflows, audience engagement, and organizational strategies, helping to explain the adaptive practices of platforms such as Akeed, Teyit, and Factnameh.
In Jordan, Akeed, the Jordanian Media Credibility Monitor, has become a cornerstone for enhancing journalistic accountability and transparency. Akeed’s team consists of 10 members: one director, who also serves as Editor-in-Chief, and 9 collaborators working as fact-checkers and editors. Two team members supervise the website and social media content production, while a specialized “production and design” unit at the Jordan Media Institute handles visual design and technical production of videos, podcasts, and photo albums. Akeed is funded by the King Abdullah II Fund for Development (KAFD), which supported the first year of the project under the Democratic Empowerment Program, with the Jordan Media Institute holding intellectual property rights and legal responsibility for all activities and programs.
Alshwayyat and Vázquez-Herrero (2025) argue that Akeed plays a vital role in countering misinformation and strengthening the ethical standards of Jordanian journalism. Through a quantitative analysis of its reports and statistical data, they find that Akeed has effectively reduced the spread of false or misleading information, particularly in sensitive areas such as economic and security coverage. However, they also note that the platform faces persistent challenges in addressing misinformation originating from social media and other non-institutional digital sources. Complementing these findings, Al-Jalabneh et al. (2023) demonstrate how Jordan experienced the global “infodemic” and post-truth dynamics during COVID-19 by drawing on Akeed’s monitoring data. Their study shows that nearly half of the misleading stories circulating in Jordan were inaccurate rather than intentionally fabricated, with social media functioning as the primary vehicle for dissemination. Similarly, Aljalabneh (2024) analyzes Akeed’s documentation of 481 rumors involving images and videos and identifies key patterns in the spread of visual mis/disinformation, most of which originated internally and circulated through social media platforms. Together, these studies underscore Akeed’s crucial role in mapping misinformation trends and providing empirical insights into how misleading content, especially visual material, shapes public understanding. They also highlight persistent challenges: while Akeed significantly contributes to improving professional reporting practices, it must continue adapting to the rapidly evolving digital environment that fuels social-media-driven misinformation.
In Turkey, Teyit stands as one of the most established fact-checking organizations in the region and a member of the International Fact-Checking Network (IFCN). Teyit’s team currently consists of 7 full-time employees and a freelancer who supports funding and project management. Teyit finances its operations through service production, collaborations, national and international grants, and other partnerships, reinvesting all revenue to enhance social impact rather than distributing it to partners.
Ünal and Çiçeklioğlu (2019) show that Teyit plays a crucial role in maintaining information integrity, particularly in politically polarized settings where false narratives spread quickly through visual content such as manipulated images and videos. Their content analysis and interviews reveal that most of the misinformation in Turkey is politically motivated, underscoring how partisan dynamics and filter bubbles fuel disinformation. In later work, Ünal and Çiçeklioğlu (2022) demonstrated that during the COVID-19 pandemic, misinformation shifted from politics to health, with 91% of verified claims about the virus found to be false, mainly circulating on social media platforms. They conclude that while Teyit mitigates the effects of misinformation, broader improvements in public media literacy are necessary to make these efforts more sustainable.
Complementing this, Karadağ and Ayten (2020) compare Teyit and Doğruluk Payı, noting that both operate under social responsibility frameworks but differ in scope; Teyit engages with a wider range of misinformation types, whereas Doğruluk Payı focuses mainly on political discourse. Their analysis reveals structural and political challenges, such as polarization and perceptions of bias, that threaten the perceived neutrality of fact-checking organizations. More recently, Budak and Baloğlu (2025) explored Teyit’s approach to climate change misinformation, finding that although its verification methods align with international practices, it performs fewer checks in scientific domains compared to European counterparts. They recommend leveraging AI tools to enhance detection efficiency and expanding the thematic scope of fact-checks to include underreported issues such as environmental communication.
In Iran, Factnameh operates under particularly restrictive conditions, confronting censorship, political repression, and limited access to open data. The team currently consists of 7 fact-checkers (full-time staff and contractors) and an audience engagement coordinator. Due to these constraints, Factnameh operates from Toronto, Canada. According to Van Damme (2021), despite being included in the Duke census of fact-checkers (https://reporterslab.org/fact-checking; accessed on 21 November 2025), the organization publishes its findings under pseudonyms to protect its contributors, since revealing their identities could endanger them and their families. Operating from abroad allows Factnameh to verify information and promote accountability without risking surveillance, legal repercussions, or threats to staff. Funding comes from multiple sources: internal revenue from technology services by ASL19, grants from international collaborations including the Facebook–IFCN partnership for COVID-19 fact-checking, and other public funds. While specific donors are not always disclosed for safety reasons, the platform adheres to IFCN principles of transparency, nonpartisanship, and fairness. Although its internal operations are less documented and studied than those of Akeed or Teyit, Factnameh focuses on claims related to politics, religion, and social issues, carving a space for transparency within a tightly controlled information environment.
Together, these three platforms illustrate both the promise and the precarity of fact-checking in the Middle East. Each faces overlapping challenges (limited access to reliable data, political pressures, and audience mistrust) but also contributes uniquely to regional media accountability. Their comparative study provides crucial insights into how fact-checking can adapt to diverse political realities and how technology, policy, and civic engagement intersect in the fight against misinformation.
Building on this comparative perspective, the study is guided by the following research question: How do three fact-checking platforms in the Middle East organize their verification processes, navigate socio-political challenges, and integrate technological tools, including AI, into their work?

2. Materials and Methods

This study employs a qualitative comparative research design to examine how three fact-checking platforms, Akeed (Jordan), Teyit (Turkey), and Factnameh (Iran), operate within distinctive socio-political contexts in the Middle East. This approach allows for an in-depth understanding of each platform’s verification strategies, operational challenges, and use of technological tools, including artificial intelligence, as well as the influence of political and social environments on their work. To address these objectives, the study combines two main phases: document analysis and semi-structured interviews.
Methodological triangulation was applied by combining document analysis and semi-structured interviews to enhance the credibility and robustness of the findings. Triangulation refers to the use of multiple data sources or methods to examine the same phenomenon in order to reduce bias and strengthen interpretive validity (Flick 2019). In this study, publicly available organizational documents were analyzed to map formal verification procedures, transparency practices, and technological integration, while interviews provided contextualized insights into how these practices are implemented in everyday work and shaped by socio-political constraints. Findings from both data sources were systematically compared to identify convergences and discrepancies, allowing for cross-validation of themes and strengthening the explanatory power of the analysis across the three cases.
The study further investigates the following sub-questions:
  • RQ1. How do Akeed, Teyit, and Factnameh structure their fact-checking processes and verification methodologies?
  • RQ2. What operational challenges do these platforms face in their respective socio-political contexts?
  • RQ3. How do these platforms integrate technological tools, including AI, into their verification workflows?
Jordan, Turkey, and Iran were chosen to capture variation in political and media environments in the region, ranging from a relatively open but constrained media landscape (Jordan), to a highly polarized environment with shrinking media freedom (Turkey), to a highly restrictive and authoritarian context (Iran). This selection enables comparison of how fact-checking practices adapt under different socio-political constraints.
Jordan has multiple fact-checking initiatives (e.g., Misbar, Fatabyyano), but Akeed was selected as it is the most established and visible fact-checking initiative; it is widely referenced in academic literature as a central actor in monitoring media credibility and is linked with a formal institute, the Jordan Media Institute (Al-Jalabneh et al. 2023; Alshwayyat and Vázquez-Herrero 2025). In Turkey, various smaller fact-checkers exist, yet Teyit is the most prominent and internationally recognized fact-checking organization; it is affiliated with the International Fact-Checking Network (IFCN), making it a key reference point for professional verification practices in a highly polarized media system (Karadağ and Ayten 2020; Ünal and Çiçeklioğlu 2019). In Iran, while sites like Gomaneh have existed, Factnameh remains one of the few independent fact-checking initiatives accessible to Iranian audiences, working from exile due to domestic censorship and political repression (Van Damme 2021).
The first phase involves a detailed review of publicly available documentation from each platform, including mission statements, methodological notes, available annual reports, and website documents. The analysis focused on verification strategies, procedural frameworks, use of technology or AI tools in fact-checking, methods of audience engagement, and identified operational challenges. Document analysis is defined by Bowen (2009) as a systematic procedure for reviewing and evaluating printed or electronic documents to extract meaning, deepen understanding, and generate empirical knowledge. Within media and communication research, documents serve not only as sources of information but also as texts that actively construct meaning. Karppinen and Moe (2012) emphasize this dual role, noting that documents reflect institutional contexts while also shaping how policy issues and organizational practices are framed. They therefore call for clarity in explaining how documents are selected, analyzed, and interpreted. In later work, Karppinen and Moe (2019) argue that policy and industry documents offer efficient access to institutional processes, but researchers must remain aware of their limitations, such as bias, selective reporting, or restricted accessibility. These challenges can be mitigated through critical source evaluation and transparent documentation of gaps in available materials. Similarly, Morgan (2022) highlights the value of qualitative document analysis when access to field sites or participants is limited and stresses the importance of selecting documents that are credible, authentic, and meaningful. Reflexive thematic analysis is recommended to ensure rich qualitative insights. Document analysis in this study served two purposes: (1) to build a foundational understanding of each platform’s operations; and (2) to inform the development of the interview guide.
All analyzed materials (Appendix A, Table A1) were publicly available, eliminating the need for additional consent. Document analysis was manually coded using Excel (Microsoft Office LTSC Professional Plus 2021) to organize themes and maintain consistency across platforms (Table A2 summarizes the main coding categories used for the document analysis).
Guided by insights from the document analysis, semi-structured interviews were conducted with practitioners from each platform. Participants were selected using purposive sampling to recruit senior staff members with an overarching view of organizational practices and to ensure representation of different operational roles, including verification, technology integration, and strategy. Due to the small size of the organizations, one senior representative per platform was interviewed. According to Campbell et al. (2020), purposive sampling is a qualitative sampling strategy in which participants are deliberately selected because they can provide information that directly supports the aims of the study. Its purpose is to ensure that the sample aligns closely with the research questions, which strengthens the study’s rigor and overall trustworthiness, evaluated through credibility, transferability, dependability, and confirmability.
Documents were analyzed using qualitative thematic analysis (Morgan 2022). Initial concepts were derived from the literature on fact-checking practices, verification workflows, transparency, audience engagement, and the use of technological tools, including AI. These concepts guided a first round of open coding, during which documents were read and relevant segments were labeled without imposing fixed categories.
Documents were selected based on their relevance to verification practices, organizational transparency, workflow organization, audience engagement, and references to technological tools or AI. Only publicly available materials published on the platforms’ official websites and affiliated channels during the study period were included. Interviewees were asked whether any additional internal or unpublished materials were relevant to the study and they all confirmed that the key documents needed to understand their verification practices and organizational workflows are publicly available online.
Through comparison across the three cases, preliminary codes were refined into broader analytical themes. These themes were continuously revised as new patterns emerged and as differences between platforms became visible. The resulting thematic framework was then used to structure the interview guide and to enable systematic comparison between document materials and interview data.
In-depth interviews are particularly suited to studies seeking rich, detailed accounts of experiences and operational practices. As Boyce and Neale (2006) note, such interviews allow researchers to explore participants’ perspectives, motivations, and interpretations in ways that reveal contextually grounded insights. Semi-structured interviews were chosen because they provide a balance between structure and flexibility: while the interviewer follows a predetermined set of questions, participants retain space to introduce new themes or elaborate on their experiences. This approach aligns with Birmingham and Wilkinson (2003), who emphasize the value of semi-structured formats for studies requiring both comparability and depth.
The interview guide (Appendix B, Table A3) was developed using the interview checklist proposed by Birmingham and Wilkinson (2003), combined with the principles for drafting questions and conducting semi-structured interviews outlined by Adams (2015). Interview topics included verification processes, workflow organization, technological integration, including AI, audience-engagement strategies, and socio-political constraints. All interviews were conducted with informed consent, in English with Teyit and Factnameh, and in Arabic with Akeed. They were carried out through video and phone calls, based on participant preferences, language, and availability. A total of three participants were interviewed (Table 1). All interviews were initially scheduled for 30 min; however, in cases where participants had additional insights to contribute, the conversations were extended accordingly. As a result, interview durations ranged from 30 min to more than 1 h. All interviews were recorded with consent and subsequently transcribed, and all interviewees agreed to the use of their real (or work) names in the research.
The interviews were conducted with senior representatives responsible for editorial and strategic decisions in each organization. Given the small size of the teams, ranging from 7 to 10 members, these respondents are directly involved in most organizational processes discussed in the article.
Excel was used to code the interview data manually and to align themes across both interview transcripts and document analysis materials using the shared thematic framework developed during the document analysis phase. Documents and interview transcripts were analyzed thematically based on categories such as organizational structure, verification workflow, use of technology (including AI), transparency practices, audience engagement, and socio-political constraints. This enabled comparison across fact-checking sites, highlighting similarities, differences, and context-specific adaptations in their practices.
Finally, this study has several limitations that should be acknowledged. First, the sample of interview participants was relatively small and purposively selected, which limits the generalizability of the findings. Second, while the document analysis covered all publicly available reports from Akeed, Teyit, and Factnameh, it relied on the accuracy and completeness of the platforms’ published archives, which may not capture unpublished or removed content. Third, the study focuses on a specific regional context; therefore, the practices identified may not fully reflect fact-checking approaches in other regions or media environments.

3. Results

Previous studies conducted by Alshwayyat and Vázquez-Herrero (2025), Ünal and Çiçeklioğlu (2019, 2022), Karadağ and Ayten (2020), Budak and Baloğlu (2025), and Van Damme (2021) demonstrate that while Akeed, Teyit, and Factnameh share the overarching goal of countering misinformation, their strategies, verification methods, and challenges are shaped by distinct political, social, and institutional environments. Existing literature highlights that Akeed has strengthened journalistic standards in Jordan by monitoring media credibility, Teyit has developed into a technologically informed fact-checking hub operating within Turkey’s polarized media landscape, and Factnameh has emerged as one of the few independent verification initiatives capable of operating under Iran’s highly censored context. These insights provide the foundation for understanding how each platform functions today and situate the results from the document analysis and the three interviews presented below. Table 2 provides a comparative overview of Akeed, Teyit, and Factnameh, summarizing their focus, verification methods, audience engagement, challenges, and use of technology based on the document analysis. The table does not imply that characteristics listed for one platform are absent in the others. To avoid misinterpretation, each feature is presented as a variable and reported for all three platforms based on publicly available documentation and interview data.
The findings are presented by platform (Akeed, Teyit, and Factnameh) to preserve contextual depth and reflect the highly divergent political and organizational environments in which these initiatives operate. Given the strong influence of national media systems and political constraints on verification practices, a case-based structure allows for a more nuanced understanding of each platform’s operational realities. Cross-case similarities and differences are synthesized comparatively in Section 3.4 and summarized in Table 3.

3.1. Akeed

According to the document analysis, Akeed, the Jordanian Media Credibility Monitor, operates as a digital observatory that tracks and verifies information from primary sources to enhance media accountability and professional standards. The platform emphasizes accuracy, balance, and transparency in news reporting, alerting both media outlets and the public to breaches in journalistic standards. Its approach is primarily human-driven, relying on professional journalistic norms rather than technological tools such as artificial intelligence.
Akeed functions in a context of evolving social and political values, facing challenges such as misinformation proliferation and the need to promote professional standards in a transitional democratic environment. The interview with the head editor of Akeed, Mr. Hussein Abu Rumman, reveals that the platform operates through a fully human-driven verification system grounded in journalistic and academic standards. Verification decisions are made during daily editorial meetings, where the team prioritizes topics based on public relevance, emerging misinformation, and media reports. Akeed relies heavily on primary sources, official documents, and direct communication with institutions to confirm the accuracy of claims. No AI tools are used in the verification process, reflecting Akeed’s emphasis on methodological accuracy.
According to Mr. Hussein, the socio-political context significantly shapes Akeed’s operations. Jordan’s transitional political environment and declining newsroom capacities, driven by economic pressure, weakened verification culture, and the competition for speed, create persistent challenges for maintaining media accuracy. The scarcity of official data, along with occasional political constraints, further complicates verification. Akeed positions itself as a corrective mechanism that strengthens media accountability and enhances journalistic professionalism in a landscape where misinformation is widespread among influencers, politicians, and public discourse.
Akeed Monitor faced a major operational challenge when Meta forced it to create a new Facebook page in January 2025 after its previous page was shut down. Meta cited violations of its advertising standards but did not clarify the specific issue, and the team suspected that external complaints may have influenced the platform’s decision. In response, Akeed became more cautious in selecting images and language, particularly avoiding traditional crime-reporting visuals such as guns, knives, or blood, which are flagged by Meta’s algorithms and reduce reach.
“We never changed our editorial approach, but recently we had to adapt to restrictions imposed by Meta. After our old page’s promotion abilities were blocked for allegedly not meeting their advertising standards, without any explanation, we created a new page at the beginning of this year. Despite the pressures and complaints against us, our content remained unaffected, though we became more cautious with our choice of images and even language.”
(Hussein Abu Rumman, 19 November 2025)
Despite these restrictions, Akeed maintains the integrity of its core content and continues to publish reports across its website and social media platforms, including Facebook, Instagram, and X. This incident highlights how platform-specific restrictions can create operational challenges even beyond governmental pressures, requiring adaptive strategies to maintain audience reach and content visibility.
Audience engagement emerges as a central goal of Akeed. The platform publishes daily fact-checks, weekly reports on media violations, and annual analytical assessments that evaluate broader trends in credibility. It actively collaborates with university students, contributing to the development of verification skills among future journalists. The platform’s educational orientation is evident in its efforts to guide the public on how to assess the credibility of news, making Akeed not only a monitoring body but also a tool for improving media literacy. Akeed also prioritizes transparency in its operations: all verification reports remain permanently available on the website as part of an open archive, allowing the public to track the platform’s work over time. In addition, Akeed provides a dedicated “right to response” section where individuals or institutions can offer clarifications or challenge the platform’s findings. These measures, he explained, help ensure accountability and maintain public trust while keeping the verification process firmly rooted in human-led evaluation rather than technological automation.
The interview makes clear that Akeed does not currently use AI or automated technological tools in its verification workflow. The head editor explained that the platform’s work remains fully human-driven due to both methodological preferences and contextual realities within Jordan. He emphasized that, in his view, effective verification requires professional judgment, contextual interpretation, and critical analysis, elements that cannot yet be reliably automated. Moreover, the volume and complexity of misinformation in Jordan remain manageable for a human-centered workflow, reducing the perceived need for AI tools. More specifically, he discussed both claim detection and visual analysis in relation to Akeed Monitor’s fact-checking work, highlighting the balance between human effort and technological support. Akeed’s methodology relies heavily on manual monitoring, with the team using laptops to track Jordanian media and social media content. For traditional media, claim detection focuses on professional, ethical, and legal violations, while for rumors, the team documents claims either denied by official bodies or, since early 2024, widely circulating claims that remain unaddressed.
Regarding visual analysis, Akeed established a dedicated fact-checking section in mid-2023, specifically targeting misleading, false, or fabricated photos and videos. The team identifies deceptive visuals, including AI-generated content, and explains the mechanisms behind them to educate the public, highlighting details such as fingers, eyes, and colors. Ethically, Akeed refuses to generate AI content itself, stating that it would be inconsistent to critique AI-generated images while using the technology for its own reports. Despite these efforts, Mr. Hussein acknowledges ongoing technical challenges: although the team aspires to move from direct manual work to more advanced methods, they have yet to implement effective AI or algorithmic tools for faster and smoother monitoring, constrained by available expertise and resources. While acknowledging the global trend toward AI-assisted fact-checking, he expressed caution regarding its adoption, noting concerns about accuracy, transparency, and the risk of over-reliance on automated systems in a media environment already struggling with credibility: “Another challenge is perhaps the technical one… To what extent can we have the means to move from direct manual work to more advanced methods?” As a result, Akeed prioritizes human expertise and the application of journalistic and academic standards, positioning itself deliberately as a credibility-focused observatory rather than a technologically driven fact-checking operation.

3.2. Teyit

The document analysis shows that Teyit operates under Turkey’s polarized media landscape, combining rigorous fact-checking with educational and empowerment initiatives. The platform adheres to international standards, including the International Fact-Checking Network (IFCN) and the European Fact-Checking Standards Network (EFCSN), ensuring methodological rigor, impartiality, and transparency. Unlike Akeed, Teyit has a broader mandate that integrates public education, social responsibility, and stakeholder engagement alongside verification activities.
“Following IFCN and EFCSN standards affects our methods mostly and it means that we need to have some accountability, we need to have some transparency and also we need to rely mostly on open sources. That’s the most important three things that we are following.”
(Dilge Temiz, 20 November 2025)
Teyit faces challenges such as balancing impartiality with educational outreach, maintaining credibility in a politically charged environment, and sustaining operations as a nonprofit social enterprise. The interview with Dilge Temiz, editor and fact-checker, highlights a verification process that is entirely human-driven at the initial stage. The team manually scans the daily agenda, monitors trending topics, and identifies claims circulating across social media platforms. Claims are then shared in an internal Slack channel, where a small team collectively assesses whether they merit verification. Priority is given to claims that are viral, have significant interaction, or pose potential harm. Teyit adheres to the core accountability, transparency, and open-source verification standards required by both IFCN and EFCSN, which shape the team’s methodology and ethical principles.
Although AI tools are sometimes used during verification, for instance, to search for online traces of an image, video, or claim, the team emphasizes that these tools are supportive rather than determinative. AI may help locate information or check facts, but it is not relied upon as the final source, and Teyit staff write and edit all fact-checks themselves. The interviewee noted that AI outputs can vary in accuracy: while sometimes producing useful results, they are often inconsistent and not fully “trustable [or] reliable.” The team is aware that AI-generated content is spreading rapidly online, and while AI could assist in monitoring and claim detection, challenges remain in verifying original sources and assessing reliability.
During the interview, two primary challenges emerged. First, the lack of cooperation from official institutions in Turkey significantly limits the team’s ability to verify politically sensitive claims. While Teyit has a dedicated channel for submitting inquiries to government bodies, responses are often not provided, forcing the team to abandon certain analyses because they cannot obtain authoritative confirmation. Second, the increasing prevalence of AI-generated content online introduces verification difficulties, especially when the original source of manipulated images or videos cannot be traced. This makes it harder for the team to fulfill transparency standards, as the source of a claim is a critical component of Teyit’s methodology.
“The other one (challenge) is based on the AI generated content now. It’s super spreading, like super super super easily online and we want to fact check them but sometimes we just cannot reach the source of it […] We are missing some source information with the AI generated content.”
(Dilge Temiz, 20 November 2025)
Teyit’s work, according to Dilge, is shaped by Turkey’s broader digital regulatory environment, including restrictions on access to information and unequal application of digital security laws. The respondent explained that certain fact-checks, especially those involving officials, may experience limited algorithmic visibility or be subject to warnings requesting removal. Conversely, certain misleading narratives gain exceptional visibility due to algorithmic amplification, forcing the team to prioritize verification of trending falsehoods. While Teyit has not yet faced direct legal risks or staff safety issues, they remain aware that other media actors in Turkey have been targeted, and the platform adapts its content priorities accordingly.
Maintaining trust amid public skepticism is a central concern for Teyit. The platform’s editor highlighted a growing crisis of credibility among audiences, who increasingly distrust information sources in general. To address this, Teyit has invested in offline engagement and community-building, including hosting in-person media literacy clubs, conducting university visits, and offering hands-on workshops demonstrating fact-checking methods. These interactions aim to “go to the root of the problem” by improving users’ digital literacy rather than relying solely on published analyses. In terms of communication, Teyit publishes fact-checks across its website and social media platforms, while engagement and impact are assessed through interaction metrics and qualitative feedback from users and professionals in the sector. However, the respondent acknowledged that Teyit does not yet produce annual reports summarizing trends or impact, largely due to limited resources following staff reductions linked to Meta’s restructuring of its fact-checking program.
The interview revealed that Teyit is undergoing an internal shift toward simplifying its fact-checking outputs. The team is working to shorten analyses, introduce bullet-point fact summaries, and refine its categorization system beyond standard misinformation/disinformation labels. This includes adopting and expanding the First Draft model of content types. The goal is both to increase productivity and adapt to changing audience behavior, as users increasingly seek concise and highly accessible verification outputs.
Dilge expressed skepticism regarding AI’s usefulness in the team’s workflow, emphasizing concerns about accuracy, inconsistency, and time efficiency. While AI may occasionally produce helpful results, it often replicates the same verification steps that human fact-checkers perform, such as reverse image searching, and sometimes delivers unreliable or contradictory outputs. Despite this, the interviewee recognized that fact-checking organizations globally are moving toward integrating AI tools more systematically, including developing automated verification bots and AI-based platforms. Teyit itself is involved in exploratory research on AI-assisted fact-checking, although its current use remains supplementary rather than foundational to its methodology.

3.3. Factnameh

Factnameh operates under particularly restrictive conditions in Iran, confronting censorship, political repression, and limited access to reliable data. To circumvent these challenges, the platform is registered and operates from Toronto, Canada. Despite this external base, Factnameh publishes findings under pseudonyms to protect its contributors from potential threats or legal repercussions (Van Damme 2021). Its work focuses on verifying claims related to politics, religion, and social issues, producing multimedia content, including videos, podcasts, and documentaries, to engage both domestic and diaspora audiences.
The platform’s approach prioritizes audience engagement and transparency in a highly censored context, relying on innovative digital strategies rather than AI. The interview with the head editor, Farhad Souzanchi, highlights that Factnameh’s verification process is largely human-driven. The team relies exclusively on open and publicly accessible data, contacts claimants when possible, and provides sources to ensure transparency: “The methodology is very similar to many other fact checkers which we only use open data, every and anything that is only accessible to everyone” (Farhad Souzanchi, 20 November 2025).
Claims are prioritized based on newsworthiness, the prominence of the source, and public importance, with social media content often taking precedence due to its widespread dissemination. Technological tools are sometimes used to support research, but human verification remains central. Factnameh occasionally uses several technological tools: virtual private networks (VPNs) help the team access Iranian government datasets and websites that are restricted to local IP addresses, including the parliament, police, military platforms, and statistical centers: “Access to data is another challenge. Many of government data sets and data centers are shut down to foreign IPs” (Farhad Souzanchi, 20 November 2025).
Open-Source Intelligence (OSINT) techniques enable investigative work from outside the country, reducing the need for physical presence. The team also utilizes social media platforms, including X, Instagram, and Telegram, for monitoring content, while Facebook is less relevant due to limited local use for public outreach. Additionally, the platform participates in the Meta fact-checking program, which helps monitor and verify content circulating on social media. While Factnameh’s verification remains human-driven, the team is exploring ways to incorporate AI tools in the future to enhance efficiency and coverage.
Factnameh faces several operational challenges. Security concerns are paramount, Farhad explained, as many team members use aliases to protect themselves and their families, and anonymity complicates full IFCN signatory status. Access to government data is restricted, requiring VPNs and collaboration with trusted contacts inside Iran to obtain essential information. Political polarization, internet restrictions, and social conditions within Iran influence public perception and the reception of fact checks, although censorship itself does not prevent the team from producing content. Operating from outside Iran provides some advantages, such as the ability to fact-check high-ranking officials without legal or physical threats, but limits the team’s capacity for on-the-ground reporting during events such as protests or conflicts.
“The fact that we are more or less anonymous… We had to deal with this challenge that who are you, who is behind this? You know, why should we trust you?”
(Farhad Souzanchi, 20 November 2025)
The head editor emphasized that maintaining credibility and transparency is central to Factnameh’s work. Despite the use of aliases and partial anonymity, the platform emphasizes neutral tone, clear sourcing, and professional standards. Engagement and impact are assessed via social media platforms, including Telegram, Instagram, X, YouTube, and a weekly podcast, which allow the team to reach audiences and measure reception. Factnameh also conducts surveys to understand audience demographics, media habits, and content priorities, and produces annual reports summarizing fact-checking activity, trends, and notable claims, although these are primarily in Persian.
The interview with the head editor highlights that Factnameh participates in Meta’s fact-checking program, which provides an additional layer of focus on verifying claims circulating on social media platforms. While the program does not impose restrictions on the platform, it helps the team identify trending misinformation and prioritize verification efforts accordingly. This participation complements Factnameh’s primarily human-driven methodology, allowing the team to monitor viral claims, social media interactions, and emerging narratives more effectively.
Farhad emphasized the platform’s evolving approach to technology and AI. While verification is currently human-driven, Factnameh is exploring AI-assisted tools to expedite labor-intensive tasks, including fact-gathering and data organization, with human verification remaining central. The team is cautious about AI-generated outputs due to concerns over accuracy, bias, and source reliability but acknowledges that AI will inevitably play a role in global fact-checking practices. Factnameh also monitors the reliability of AI outputs in Persian-language contexts, evaluating how platforms respond to politically sensitive questions and ensuring that AI-generated results do not propagate misinformation.

3.4. Interpretation of Results

This research, due to its methodological design, presents exploratory results that describe the landscape of fact-checking across three platforms based in the Middle East region. All three platforms rely primarily on human verification, consistent with Nakov et al.’s (2021) human-in-the-loop model. Akeed emphasizes methodological rigor and aligns with Kandel’s (2020) guidance on prioritizing high-impact misinformation. Factnameh and Teyit cautiously explore AI for efficiency while maintaining human oversight, reflecting broader ethical and accuracy concerns highlighted by Wardle and Derakhshan (2017) and Graves and Cherubini (2016).
All three platforms fit the NGO-style model (Graves and Cherubini 2016), operating independently of traditional newsrooms, relying on external support, and emphasizing transparency, civic engagement, and educational outreach. Akeed operates in a transitional democracy with limited data access and newsroom capacity. Teyit faces political polarization, limited institutional cooperation, and algorithmic visibility challenges. Factnameh navigates censorship, repression, and security constraints via remote operations. These findings align with Amazeen (2020) and Revez and Corujo (2024), emphasizing fact-checking as an accountability mechanism in constrained environments.
Educational outreach and media literacy programs are essential to influencing public perception, consistent with Wardle and Derakhshan (2017) and Nakov et al. (2021). Fact-checking alone is insufficient: all platforms combine verification with proactive public education to enhance relational credibility. However, relational credibility can be influenced by the visibility of fact-checkers themselves. For instance, Factnameh relies on pseudonyms to protect contributors in Iran’s restrictive environment, which, while necessary for safety, illustrates the tension identified by Margolin et al. (2018), who found that corrections are more effective when delivered through socially connected individuals rather than anonymous or institutional actors.
The cautious adoption of AI-supported verification across the three fact-checking sites highlights the importance of transparency and accountability. Brandtzaeg et al. (2018) noted that users value verification tools but hesitate to rely on them when methods or funding are unclear. Consistent with this, all three initiatives maintain transparency through open archives, accessible reports, and mechanisms such as Akeed’s “right to response,” reinforcing trust and credibility in their fact-checking practices.
The comparative analysis highlights opportunities for cross-platform learning. Akeed and Factnameh produce regular numerical and analytical reports, strengthening transparency and enabling trend tracking. Teyit could benefit from adopting a similar approach, developing annual reports that summarize verification activity, audience engagement, and emerging misinformation trends. Akeed could draw on Teyit’s experience with international fact-checking standards to enhance credibility and benchmarking. Teyit could also integrate Akeed’s open archives and “right to response” system, as well as Factnameh’s multimedia strategies for wider engagement. Factnameh could benefit from Akeed’s systematic reporting and Teyit’s educational outreach to strengthen domestic visibility and trust. All three platforms could share lessons on cautious AI adoption, ethical considerations, and strategies for operating under platform-specific restrictions, such as those imposed by Meta, to improve operational resilience.

4. Conclusions

This study examined three fact-checking sites, Akeed (Jordan), Teyit (Turkey), and Factnameh (Iran), to understand how they operate, verify information, engage audiences, and navigate political and technological constraints. These exploratory findings reveal that while all three initiatives share the overarching goal of countering misinformation, their strategies and operational approaches are deeply shaped by their respective socio-political contexts and are presented in this paper as case-specific insights rather than general conclusions.
Table 3 presents a cross-case comparison of the three platforms across standardized analytical variables to highlight similarities and differences in verification practices, transparency, audience engagement, challenges, and AI use.
Akeed emphasizes methodological rigor, human-driven verification, and transparency within a transitional democracy. The platform currently avoids AI, reflecting its concern that automated tools may introduce inaccuracies, though AI or other tools may be explored in the future to enhance efficiency. Teyit balances verification with public education and stakeholder engagement in a politically polarized environment, cautiously exploring AI tools while addressing growing audience skepticism toward information sources; it invests in offline engagement and media literacy initiatives, including workshops, university visits, and community programs. The platform also benefits from Meta’s fact-checking program, which helps monitor trending social media claims and prioritize verification efforts, although staff reductions following Meta’s program restructuring have posed operational difficulties, highlighting the risks of relying on external initiatives for operational continuity. Factnameh operates under strict censorship, relying on remote operations, multimedia content, and human-centered verification to maintain credibility and reach both domestic and diaspora audiences. The platform likewise participates in Meta’s fact-checking program, which helps the team monitor trending claims on social media and prioritize verification efforts efficiently, complementing its primarily human-driven methodology. Unlike Teyit, which experienced operational challenges following changes to Meta’s program, Factnameh frames its collaboration with Meta as supportive rather than obstructive, enhancing its capacity to respond to misinformation in a timely and targeted manner within the cases examined.
Across the three cases studied, AI use is empirically limited. Verification is predominantly human-driven, with AI-related tools used only cautiously to support efficiency without compromising accuracy or ethical standards; AI remains supplementary rather than foundational to verification practices. This observed practice contrasts with broader discussions in the literature, where fact-checking organizations globally are increasingly experimenting with AI-based tools such as automated claim detection and verification systems. Teyit is nonetheless conducting exploratory research on AI-assisted fact-checking, although its current use remains supplementary rather than foundational to its methodology. Importantly, the Factnameh team also monitors the reliability of AI outputs in Persian-language contexts, evaluating how platforms respond to politically sensitive questions and ensuring that AI-generated results do not propagate misinformation, even as active AI integration into its workflow remains limited. For Akeed, claim detection and visual analysis are fully manual, prioritizing professional judgment and contextual interpretation.
Audience engagement, educational outreach, and transparency emerge as central components of credibility-building in the three platforms examined, demonstrating that fact-checking alone is insufficient to counter misinformation; relational and contextual trust with audiences is equally important. The study also highlights how operational constraints, from political repression to platform-specific restrictions like Meta’s content policies, require adaptive strategies to maintain workflow, reach, and public trust within the specific organizational and political contexts studied. The Factnameh team faces significant operational challenges, including security concerns that require many members to use aliases to protect themselves and their families, a factor that complicates achieving full IFCN signatory status. By contrast, Akeed is not a member of the IFCN, likely because it operates independently without formal affiliation, focusing on local monitoring and verification standards.
The findings, although exploratory, suggest important contextual differences. Unlike many Western-based fact-checkers that operate in relatively stable legal environments with greater access to open data, institutional transparency, and funding opportunities, the three platforms examined here work under varying degrees of political pressure, censorship, and resource constraints. These conditions shape verification priorities, organizational structures, and cautious adoption of AI tools. This highlights the limits of transferring best practices from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) contexts without accounting for political risk and safety concerns.
We can thus answer the research question: How do three fact-checking platforms in the Middle East organize their verification processes, navigate socio-political challenges, and integrate technological tools, including AI, into their work? All platforms rely primarily on human-driven verification, using structured procedures to ensure accuracy and transparency. Teyit follows IFCN principles, whereas Akeed and Factnameh operate independently, adapting methods to local and political constraints. Multimedia verification, cross-checking of sources, and contextual interpretation are key features. The platforms face distinct pressures: Akeed operates in a constrained democracy and focuses on methodological rigor without AI; Teyit works in a politically polarized environment, balancing verification with public education; and Factnameh operates under strict censorship, relying on remote operations to protect staff. All contend with social media misinformation, political pressures, and platform-specific restrictions. AI is currently supplementary rather than foundational: Factnameh uses technology tools for monitoring, Teyit is experimenting with AI-assisted verification, and Akeed relies fully on manual processes. Across platforms, AI adoption is cautious, prioritizing accuracy and ethical standards.
This study contributes to the literature in several ways, representing one of the few studies to examine fact-checking and the use of AI in the Middle East and thereby addressing a notable gap in research. First, it addresses the limited body of research on Akeed, providing empirical insight into a Jordanian media credibility initiative that had previously received scarce academic attention, and fills the absence of research on Factnameh. Second, it confirms and extends the findings of Revez and Corujo (2024) regarding the challenges faced by fact-checking platforms in politically constrained environments. Third, by studying platforms outside Western contexts, this research advances Sreberny’s (2008) call to move beyond Western-centered models in misinformation research. Finally, the alignment of Akeed’s practices with Graves et al. (2015) and Amazeen (2020) further illustrates how fact-checking functions as an accountability mechanism under constrained democratic institutions.
Beyond its theoretical contributions, this study also offers practical insights for fact-checking organizations operating in constrained or polarized environments. The findings highlight the continued centrality of human-driven verification, suggesting that emerging platforms should prioritize training, editorial judgment, and transparent methodologies before investing heavily in automated tools. The cases further show the operational value of combining online verification with offline audience engagement and media literacy activities to strengthen public trust. In addition, the experiences of Akeed, Teyit, and Factnameh illustrate that fact-checkers should avoid over-reliance on external fact-checking programs (e.g., Meta) and develop internal monitoring and prioritization routines to maintain operational continuity under changing platform policies. These insights can inform the organizational design, strategic planning, and technological adoption of similar fact-checking initiatives globally, especially in regions facing political constraints, censorship, or limited institutional support.
Methodologically, this study demonstrates the utility of qualitative document analysis and semi-structured interviews in contexts where numerical reports are incomplete or unavailable, consistent with Morgan (2022). Document analysis enabled systematic exploration of platform strategies, challenges, and audience engagement, while interviews provided rich contextual insights, especially regarding human-driven verification, AI adoption, and operational constraints such as platform-specific restrictions (e.g., Meta).
The conclusions drawn here are limited to the three cases examined and should be interpreted as analytically suggestive rather than representative of fact-checking organizations more broadly. Future research should build on these findings by:
  • Conducting in-depth studies on the Iranian and Jordanian media landscapes, where less is known compared to Turkey.
  • Comparing these three platforms with international best-practice fact-checking sites to evaluate performance, transparency, and methodological standards, including systematic comparisons with platforms in WEIRD countries.
  • Undertaking analytical research comparing Akeed and Factnameh, which maintain structured numerical reports of their work.
  • Including multiple staff members per platform. Interviews here were conducted with one senior representative per organization, reflecting the small team size of each, so the findings may reflect institutional perspectives; broader sampling would capture internal diversity of views.
  • Complementing this qualitative design with quantitative approaches (e.g., content analysis of fact checks or audience metrics) to further enhance methodological triangulation and comparative analysis.
Finally, the comparative analysis underscores opportunities for cross-platform learning. Sharing best practices, such as structured reporting, multimedia engagement, educational outreach, and cautious AI adoption, can enhance transparency, operational resilience, and public trust across platforms operating in similar contexts. These findings provide context-specific practical guidance for emerging fact-checking initiatives and a foundation for future research on media credibility in diverse global contexts.

Author Contributions

Conceptualization, J.V.-H. and H.A.; Methodology, J.V.-H. and H.A.; Software, H.A.; Validation, H.A.; Formal Analysis, H.A.; Investigation, H.A.; Resources, J.V.-H.; Data Curation, H.A.; Writing—Original Draft Preparation, H.A.; Writing—Review & Editing, J.V.-H.; Supervision, J.V.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This article is part of the R&D project Artificial Intelligence in Digital Media in Spain: Effects and Roles (PID2024-156034OB-C22), funded by MICIU/AEI/10.13039/501100011033 and by “ERDF/EU”.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Universidade de Santiago de Compostela (protocol code 80/2025, 22 October 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

No new datasets were generated for the document analysis, which is fully reported in the article. Interview data were collected for this study but cannot be publicly shared due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IFCN: International Fact-Checking Network
EFCSN: European Fact-Checking Standards Network
AI: Artificial Intelligence
Meta: Referring to the platform (Facebook/Meta)
MIL: Media and Information Literacy
AFC: Automated Fact-Checking Systems

Appendix A

Table A1. Documents Analyzed for Akeed, Teyit and Factnameh.
Fact-Checking Site | Document Title | URL
Akeed | About the Observatory | https://akeed.jo/page/aan-almrsd (accessed on 3 November 2025)
Akeed | Organizational Structure | https://www.akeed.jo/en/page/alhykl-altnthymy (accessed on 4 November 2025)
Akeed | Transparency Policy | https://akeed.jo/en/page/syas-alshfafy (accessed on 4 November 2025)
Akeed | Credibility Coalition Profile | https://credibilitycoalition.org/credcatalog/project/jordanian-media-credibility-monitor-akeed/ (accessed on 4 November 2025)
Akeed | Mission Statement | https://akeed.jo/en/page/taref (accessed on 4 November 2025)
Akeed | Decision-Making Process | https://www.akeed.jo/en/page/alhakmy-kyf-ytkhth-alkrar (accessed on 4 November 2025)
Akeed | Committee of Experts | https://akeed.jo/en/page/committee%20of%20experts (accessed on 4 November 2025)
Teyit | Who Are We | https://en.teyit.org/who-are-we (accessed on 4 November 2025)
Teyit | Financial Resources | https://en.teyit.org/financial-resources (accessed on 5 November 2025)
Teyit | Publishing Principles | https://en.teyit.org/publishing-principles (accessed on 5 November 2025)
Teyit | Methodology | https://en.teyit.org/methodology (accessed on 5 November 2025)
Teyit | Our Values | https://en.teyit.org/our-values (accessed on 5 November 2025)
Teyit | IFCN Profile | https://ifcncodeofprinciples.poynter.org/profile/teyit (accessed on 5 November 2025)
Teyit | IFCN Application | https://ifcncodeofprinciples.poynter.org/application/public/teyit/66249bab2a2dd97eb513f804 (accessed on 6 November 2025)
Teyit | YouTube Channel Posts | https://www.youtube.com/@teyitorg/posts (accessed on 6 November 2025)
Teyit | COVID-19 Fact-Checking Adaptation | https://www.poynter.org/reporting-editing/2020/how-covid-19-made-teyit-rethink-their-fact-checking-for-the-small-screen/ (accessed on 6 November 2025)
Factnameh | Principles | https://factnameh.com/en/principles (accessed on 6 November 2025)
Factnameh | About | https://factnameh.com/en/about (accessed on 6 November 2025)
Factnameh | IFCN Profile | https://ifcncodeofprinciples.poynter.org/profile/factnameh (accessed on 6 November 2025)
Factnameh | IFCN Application | https://ifcncodeofprinciples.poynter.org/application/public/factnameh/66d776335d7ff118061609f9 (accessed on 6 November 2025)
Factnameh | Annual Report | https://factnameh.com/fa/blog/2025-04-04-factnameh-1403-annual-report (accessed on 6 November 2025)
Factnameh | GIJN Article on Fact-Checking in Restricted Environments | https://gijn.org/stories/how-to-fact-check-politics-in-countries-with-no-press-freedom/ (accessed on 6 November 2025)
Factnameh | Factnameh Podcast | https://podcasts.apple.com/pl/podcast/%D9%81%DA%A9%D8%AA-%D9%86%D8%A7%D9%85%D9%87-factnameh/id1553940495 (accessed on 6 November 2025)
Table A2. Coding Categories for Document Analysis.
Category | Description
Verification Methods | Procedures and practices for checking and validating information
Ethics and Transparency | Policies regarding disclosure, decision-making, roles, accountability, and adherence to professional standards
Use of Technology and AI | Adoption of automated or semi-automated tools, including AI, in fact-checking workflows
Audience | Methods for interacting with the public, media literacy, and educational initiatives
Challenges and Limitations | Barriers such as political pressure, censorship and limited data access

Appendix B

Table A3. Interview Guide (List of Questions).
Theme | Questions | Probes/Follow-Up
Background & Role | Can you describe your role at the platform? | Years of experience, responsibilities, team structure
Verification Methods | How does your team verify information? | Steps, criteria for verification, human-driven vs. technological tools
Verification Methods | How do you decide which information to verify? | Prioritization: social media vs. traditional media, urgent vs. routine
Operational Challenges | What are the main challenges in verifying information? | Political pressures, censorship, lack of data, audience trust
Operational Challenges | How do social and political conditions shape your work? | Censorship, government pressure, internet shutdowns, political polarization
Operational Challenges | Have you adjusted methods due to local restrictions or risks? | Legal risks, staff safety, content moderation
Operational Challenges | How do you maintain credibility and transparency amid these challenges? | Internal processes, review systems, public communication
Technological Tools/AI | Do you use technological tools or AI in verification? | Specific tools, databases, monitoring systems
Technological Tools/AI | How do these tools affect your workflow? | Benefits, limitations, accuracy, efficiency
Audience Engagement | How do you communicate fact-check results to your audiences? | Social media, multimedia, newsletters, educational programs
Audience Engagement | How do you measure impact or engagement? | Feedback, reach, behavior change, media uptake
Future & AI Integration | How do you see fact-checking evolving in your country or globally? | Plans for technology, AI adoption, scaling operations
Future & AI Integration | Are there innovations or practices you hope to adopt in verification? | Collaboration, multimedia, cross-border work

References

  1. Adams, William C. 2015. Conducting semi-structured interviews. In Handbook of Practical Program Evaluation. Edited by Kathryn E. Newcomer, Harry P. Hatry and Joseph S. Wholey. Hoboken: Wiley, pp. 492–505. [Google Scholar] [CrossRef]
  2. Akser, Murat, and Banu Baybars-Hawks. 2012. Media and democracy in Turkey: Toward a model of neoliberal media autocracy. Middle East Journal of Culture and Communication 5: 302–21. [Google Scholar] [CrossRef]
  3. Alhindi, Tariq, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: Improving fact-checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER). Brussels: Association for Computational Linguistics, pp. 85–90. [Google Scholar] [CrossRef]
  4. Aljalabneh, Abd A. 2024. Visual media literacy: Educational strategies to combat image and video disinformation on social media. Frontiers in Communication 9: 1490798. [Google Scholar] [CrossRef]
  5. Al-Jalabneh, Abd A., Amjad O. Safori, and Hatem Shlool. 2023. COVID-19 and Misinformation Prevalence: A Content Analysis of Fake News Stories Spread in Jordan. In The Implementation of Smart Technologies for Business Success and Sustainability. Studies in Systems, Decision and Control. Edited by Allam Hamdan, Haneen M. Shoaib, Bahaaeddin Alareeni and Reem Hamdan. Berlin/Heidelberg: Springer, pp. 535–45. [Google Scholar] [CrossRef]
  6. Alshwayyat, Hala, and Jorge Vázquez-Herrero. 2025. Effectiveness analysis of akeed: A media credibility monitor against information disorders in Jordan. In Proceedings of the XVI International Conference on Online Journalism and Digital Communication. Leioa: Universidad del País Vasco/Euskal Herriko Unibertsitatea, pp. 9–24. [Google Scholar]
  7. Alzubi, Ahmad M. 2022. Impact of new digital media on conventional media and visual communication in Jordan. Journal of Engineering, Technology, and Applied Science 4: 105–13. [Google Scholar] [CrossRef]
  8. Amazeen, Michelle A. 2020. Journalistic interventions: The structural factors affecting the global emergence of fact-checking. Journalism 21: 95–111. [Google Scholar] [CrossRef]
  9. Ayeb, Marina, and Tiziano Bonini. 2024. “It was very hard for me to keep doing that job”: Understanding troll farm’s working in the Arab world. Social Media + Society 2024: 1–10. [Google Scholar] [CrossRef]
  10. Badji, Samba D., Kristin S. Orgeret, and Bruce Mutsvairo. 2024. An Exploratory Study of Fact-Checking Practices in Conflict and Authoritarian Contexts. Media and Communication 12: 8698. [Google Scholar] [CrossRef]
  11. Birmingham, Peter, and David Wilkinson. 2003. Using Research Instruments: A Guide for Researchers. London: Routledge. [Google Scholar]
  12. Blout, E. L. 2017. Soft war: Myth, nationalism, and media in Iran. The Communication Review 20: 212–24. [Google Scholar] [CrossRef]
  13. Bowen, Glenn A. 2009. Document analysis as a qualitative research method. Qualitative Research Journal 9: 27–40. [Google Scholar] [CrossRef]
  14. Boyce, Carolyn, and Palena Neale. 2006. Conducting In-Depth Interviews: A Guide for Designing and Conducting In-Depth Interviews for Evaluation Input. Washington, DC: Pathfinder International. [Google Scholar]
  15. Brandtzaeg, Petter B., Asbjørn Følstad, and María Ángeles Chaparro Domínguez. 2018. How journalists and social media users perceive online fact-checking and verification services. Journalism Practice 12: 1109–29. [Google Scholar] [CrossRef]
  16. Budak, Emrah, and Enes Baloğlu. 2025. Accurate Information Sharing and Combating Disinformation on Climate Change: A Comparison of Climatefeedback.org and Teyit.org. Türk Kütüphaneciliği 39: 265–87. [Google Scholar] [CrossRef]
  17. Campbell, Steve, Melanie Greenwood, Sarah Prior, Toniele Shearer, Kerrie Walkem, Sarah Young, Danielle Bywaters, and Kim Walker. 2020. Purposive sampling: Complex or simple? Research case examples. Journal of Research in Nursing 25: 652–61. [Google Scholar] [CrossRef]
  18. Corke, Susan, Andrew Finkel, David J. Kramer, Carla A. Robbins, and Nate Schenkkan. 2014. Democracy in Crisis: Corruption, Media, and Power in Turkey. Washington, DC: Freedom House. [Google Scholar]
  19. Coşkun, Gülçin B. 2020. Media capture strategies in new authoritarian states: The case of Turkey. Publizistik 65: 637–54. [Google Scholar] [CrossRef]
  20. Çetinkaya, Ahmet, Özgür Şahin, and Ali M. Kırık. 2014. A research on social and political use of social media in Turkey. International Journal of Sport Culture and Science 2: 49–60. [Google Scholar] [CrossRef]
  21. Dayyeh, Abeer M. A., and Khawla Y. Al-Zaghlawan. 2024. The Jordanian experience in media and information literacy: A case study on the Jordanian Media Institute project. Dirasat: Human and Social Sciences 51: 174–87. [Google Scholar]
  22. Demir, Vedat. 2020. Freedom of the media in Turkey under the AKP government. In Human Rights in Turkey: Assaults on Human Dignity. Cham: Springer International Publishing, pp. 51–88. [Google Scholar]
  23. Flick, Uwe. 2019. The concepts of qualitative data: Challenges in neoliberal times for qualitative inquiry. Qualitative Inquiry 25: 713–20. [Google Scholar] [CrossRef]
  24. Graves, Lucas. 2018. Understanding the Promise and Limits of Automated Fact-Checking. Oxford: Reuters Institute for the Study of Journalism. [Google Scholar]
  25. Graves, Lucas, and Federica Cherubini. 2016. The Rise of Fact-Checking Sites in Europe. Oxford: Reuters Institute for the Study of Journalism. [Google Scholar]
  26. Graves, Lucas, and Michelle A. Amazeen. 2019. Fact-checking as idea and practice in journalism. In Oxford Research Encyclopedia of Communication. Oxford: Oxford University Press. [Google Scholar] [CrossRef]
  27. Graves, Lucas, Brendan Nyhan, and Jason Reifler. 2015. The Diffusion of Fact-Checking. Arlington: American Press Institute. [Google Scholar]
  28. Guo, Zhijiang, Michael Schlichtkrull, and Andreas Vlachos. 2022. A survey on automated fact-checking. Transactions of the Association for Computational Linguistics 10: 178–206. [Google Scholar] [CrossRef]
  29. Gutiérrez-Caneda, Beatriz, and Jorge Vázquez-Herrero. 2024. Redrawing the lines against disinformation: How AI is shaping the present and future of fact-checking. Tripodos 55: 55–74. [Google Scholar] [CrossRef]
  30. Habes, Mohammed, Mokhtar Elareshi, Ahmed Mansoori, Saadia Pasha, Said A. Salloum, and Waleed M. Al-Rahmi. 2023. Factors indicating media dependency and online misinformation sharing in Jordan. Sustainability 15: 1474. [Google Scholar] [CrossRef]
  31. Hassan, Naeemul, Bill Adair, James T. Hamilton, Chengkai Li, Mark Tremayne, Jun Yang, and Cong Yu. 2015. The quest to automate fact-checking. In Proceedings of the 2015 Computation+Journalism Symposium. New York: The Computation+Journalism Symposium. [Google Scholar]
  32. Juneja, Prerna, and Tanushree Mitra. 2022. Human and technological infrastructures of fact-checking. Proceedings of the ACM on Human-Computer Interaction 6: 1–36. [Google Scholar] [CrossRef]
  33. Kandel, Nirmal. 2020. Information disorder syndrome and its management. JNMA: Journal of the Nepal Medical Association 58: 280. [Google Scholar] [CrossRef]
  34. Karadağ, Gökmen H., and Adem Ayten. 2020. A comparative study of verification/fact-checking organizations in Turkey: Dogrulukpayi.com and teyit.org. Motif Akademi Halkbilimi Dergisi 13: 483–501. [Google Scholar] [CrossRef]
  35. Karppinen, Kari, and Hallvard Moe. 2012. What we talk about when we talk about document analysis. In Trends in Communication Policy Research: New Theories, Methods and Subjects. Bristol: Intellect, pp. 177–93. [Google Scholar]
  36. Karppinen, Kari, and Hallvard Moe. 2019. Texts as data I: Document analysis. In The Palgrave Handbook of Methods for Media Policy Research. Cham: Springer International Publishing, pp. 249–62. [Google Scholar]
  37. Kovach, Bill, and Tom Rosenstiel. 2001. The Elements of Journalism: What Newspeople Should Know and the Public Should Expect. New York: Crown Publishers. [Google Scholar]
  38. Kurban, Dilek, and Ceren Sözeri. 2013. Policy Suggestions for Free and Independent Media in Turkey. Beşiktaş: Turkish Economic and Social Studies Foundation. [Google Scholar]
  39. Liu, Xingyu, Li Qi, Laurent Wang, and Miriam J. Metzger. 2025. Checking the fact-checkers: The role of source type, perceived credibility, and individual differences in fact-checking effectiveness. Communication Research 52: 719–46. [Google Scholar] [CrossRef]
  40. Margolin, Drew B., Aniko Hannak, and Ingmar Weber. 2018. Political fact-checking on Twitter: When do corrections have an effect? Political Communication 35: 196–219. [Google Scholar] [CrossRef]
  41. Morgan, Hani. 2022. Conducting a qualitative document analysis. The Qualitative Report 27: 64–77. [Google Scholar] [CrossRef]
  42. Nakov, Preslav, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated fact-checking for assisting human fact-checkers. arXiv arXiv:2103.07769. [Google Scholar] [CrossRef]
  43. Nyhan, Brendan, Ethan Porter, Jason Reifler, and Thomas J. Wood. 2020. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior 42: 939–60. [Google Scholar] [CrossRef]
  44. Pies, Judith, and Philip Madanat. 2011. Beyond State Regulation: How Online Practices Contribute to Holding the Media Accountable in Jordan. Tampere: MediaAcT. [Google Scholar] [CrossRef]
  45. Rahimi, Babak. 2015. Censorship and the Islamic Republic: Two modes of regulatory measures for media in Iran. The Middle East Journal 69: 358–78. [Google Scholar] [CrossRef]
  46. Revez, Jorge, and Luís Corujo. 2024. Scientists’ behaviour towards information disorder: A systematic review. Journal of Information Science. Online First. [Google Scholar] [CrossRef]
  47. Saka, Erkan. 2018. Social media in Turkey as a space for political battles: AKTrolls and other politically motivated trolling. Middle East Critique 27: 161–77. [Google Scholar] [CrossRef]
  48. Sohrabi-Haghighat, Mohammad H. 2011. New Media and Social-political Change in Iran. CyberOrient 5: 90–109. [Google Scholar] [CrossRef]
  49. Sreberny, Annabelle. 2008. The analytic challenges of studying the Middle East and its evolving media environment. Middle East Journal of Culture and Communication 1: 8–23. [Google Scholar] [CrossRef]
  50. Tweissi, Basim. 2019. Media reform in Jordan: Severe transformations. Confluences Méditerranée 110: 113–26. [Google Scholar] [CrossRef]
  51. Ünal, Recep, and Alp Ş. Çiçeklioğlu. 2019. The function and importance of fact-checking organizations in the era of fake news: Teyit.org, an example from Turkey. Media Studies 10: 140–60. [Google Scholar] [CrossRef]
  52. Ünal, Recep, and Alp Ş. Çiçeklioğlu. 2022. Fake news pandemic: Fake news and false information about COVID-19 and an analysis on fact-checking from Turkey in sample teyit.org. Erciyes İletişim Dergisi 9: 117–43. [Google Scholar] [CrossRef]
  53. Van Damme, Thomas. 2021. Global Trends in Fact-Checking. A Data-Driven Analysis of ClaimReview. Antwerp: Faculty of Social Sciences, University of Antwerp. [Google Scholar]
  54. Vázquez-Herrero, Jorge, Ángel Vizoso, and Xosé López-García. 2019. Innovación tecnológica y comunicativa para combatir la desinformación: 135 experiencias para un cambio de rumbo. Profesional de la Información 28: e280301. [Google Scholar] [CrossRef]
  55. Vinhas, Otávio, and Marco Bastos. 2024. When Fact-Checking Is Not WEIRD: Negotiating Consensus Outside Western, Educated, Industrialized, Rich, and Democratic Countries. The International Journal of Press/Politics 30: 256–76. [Google Scholar] [CrossRef]
  56. Vlachos, Andreas, and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science. Baltimore: Association for Computational Linguistics, pp. 18–22. [Google Scholar]
  57. Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking. Strasbourg: Council of Europe. [Google Scholar]
  58. Zanconato, Alberto, and Farian Sabahi. 2019. Iran-Media Landscape. Maastricht: European Journalism Centre. [Google Scholar]
Table 1. Interview participants.

| Name | Organization | Role | Date of Interview | Duration |
|---|---|---|---|---|
| Hussein Abu Rumman | Akeed | Head Editor | 19 November 2025 | 1 h 23 min |
| Dilge Temiz | Teyit | Fact-checker and Editor | 20 November 2025 | 28 min |
| Farhad Souzanchi | Factnameh | Head Editor | 20 November 2025 | 42 min |
Table 2. Comparative overview of Akeed, Teyit, and Factnameh based on document analysis.

| Category | Akeed (Jordan) | Teyit (Turkey) | Factnameh (Iran) |
|---|---|---|---|
| Focus Areas | Media accountability; journalistic standards; accuracy, balance and transparency; indirect focus on politics and social issues | Broad misinformation; critical thinking; digital literacy; education and empowerment | Misinformation/disinformation campaigns; public opinion; civil society and government accountability; politics and social issues |
| Verification Methods | Primary-source verification; human-driven; professional standards-based; structured verification | Adheres to IFCN and EFCSN principles; transparent methodology; human-driven | Uses technology tools and collaboration with in-country and diaspora audiences; multimedia verification; human-driven |
| Nonpartisan Fact-Checking Team | Yes (publicly frames itself as neutral and professional) | Yes (aligned with IFCN principles of nonpartisanship) | Yes (self-described nonpartisan fact-checking initiative) |
| Transparency of Methodology | Yes (public documentation of methods) | Yes (IFCN principles) | Yes (publicly communicates verification approach) |
| Audience Engagement | Alerts media and public to breaches; promotes media literacy; online observatory | Educational programs; stakeholder cooperation; public feedback and complaint mechanisms; social enterprise model | Interactive multimedia platforms (videos, podcasts, and short documentaries); engages in-country and diaspora audiences; civic empowerment focus |
| Challenges | Misinformation in a transitional democratic context; promoting standards under social/political change | Maintaining impartiality and transparency; balancing fact-checking with education and social enterprise goals | Heavy censorship; limited freedom of expression; state-backed disinformation campaigns; internet shutdowns |
| AI Usage | Not mentioned | Not mentioned | Technology tools for monitoring, but no explicit AI |
Table 3. Cross-case comparison of fact-checking practices across Akeed, Teyit, and Factnameh.

| Variable | Akeed (Jordan) | Teyit (Turkey) | Factnameh (Iran) |
|---|---|---|---|
| Human-driven verification | Yes | Yes | Yes |
| Adheres to IFCN principles | No | Yes | Limited |
| Uses technological tools for monitoring | No | Limited/Exploratory | Yes |
| Multimedia verification | Limited | Yes | Yes |
| Nonpartisan positioning | Yes | Yes | Yes |
| Transparency of methodology | Yes | Yes | Yes |
| Media literacy initiatives | Yes | Yes | Yes |
| Public feedback mechanisms | Yes | Yes | Yes |
| AI used in verification workflow | No | Exploratory | No |
| Tech tools used for monitoring claims | No | Limited | Yes |
