Search Results (59)

Search Parameters:
Keywords = disinformation detection

15 pages, 1171 KB  
Article
Journalists’ Perceptions of Artificial Intelligence and Disinformation Risks
by Urko Peña-Alonso, Simón Peña-Fernández and Koldobika Meso-Ayerdi
Journal. Media 2025, 6(3), 133; https://doi.org/10.3390/journalmedia6030133 - 30 Aug 2025
Viewed by 315
Abstract
This study examines journalists’ perceptions of the impact of artificial intelligence (AI) on disinformation, a growing concern in journalism due to the rapid expansion of generative AI and its influence on news production and media organizations. Using a quantitative approach, a structured survey was administered to 504 journalists in the Basque Country, identified through official media directories and with the support of the Basque Association of Journalists. This survey, conducted online and via telephone between May and June 2024, included questions on sociodemographic and professional variables, as well as attitudes toward AI’s impact on journalism. The results indicate that a large majority of journalists (89.88%) believe AI will considerably or significantly increase the risks of disinformation, and this perception is consistent across genders and media types, but more pronounced among those with greater professional experience. Statistical analyses reveal a significant association between years of experience and perceived risk, and between AI use and risk perception. The main risks identified are the difficulty in detecting false content and deepfakes, and the risk of obtaining inaccurate or erroneous data. Co-occurrence analysis shows that these risks are often perceived as interconnected. These findings highlight the complex and multifaceted concerns of journalists regarding AI’s role in the information ecosystem.
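
The reported association between years of experience and perceived risk is the kind of result typically checked with a contingency-table test. A minimal Python sketch using SciPy's chi-square test; the cell counts below are hypothetical placeholders, not the study's data:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical counts: rows = professional experience (<10 yrs, >=10 yrs),
    # columns = perceived AI disinformation risk (low/moderate, considerable/significant)
    table = np.array([[45, 180],
                      [16, 263]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")  # p < 0.05 suggests an association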

25 pages, 1183 KB  
Article
Decoding Disinformation: A Feature-Driven Explainable AI Approach to Multi-Domain Fake News Detection
by Steve Nwaiwu, Nipat Jongsawat and Anucha Tungkasthan
Appl. Sci. 2025, 15(17), 9498; https://doi.org/10.3390/app15179498 - 29 Aug 2025
Viewed by 287
Abstract
Digital misinformation presents a dual challenge: achieving high detection accuracy while ensuring interpretability. This paper introduces X-FRAME (Explainable FRAMing Engine), a hybrid framework that combines semantic representations from XLM-RoBERTa with theory-informed features related to psycholinguistic framing, source credibility, and social context. Unlike fact-checking systems that verify claims directly, X-FRAME detects linguistic, contextual, and stylistic indicators statistically associated with misinformation. Evaluated across eight publicly available datasets totaling 286,260 samples, X-FRAME achieves 86% accuracy and 81% recall on the minority Fake class, significantly outperforming text-only and features-only baselines. The model demonstrates cross-domain adaptability potential, attaining 97% accuracy on formal news articles and 72% on social media content. Importantly, X-FRAME provides transparent, human-understandable rationales via Local Interpretable Model-agnostic Explanations (LIME) and Permutation Importance, anchoring predictions in interpretable features such as sensationalism and speaker credibility. This work advances misinformation detection by unifying high performance with explainability and cross-domain adaptability.
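
To illustrate the kind of word-level rationale LIME produces, here is a minimal sketch around a stand-in TF-IDF classifier; X-FRAME itself fuses XLM-RoBERTa embeddings with framing, credibility, and context features, which this toy pipeline does not reproduce:

    from lime.lime_text import LimeTextExplainer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny stand-in training set (hypothetical examples), label 1 = fake
    texts = ["SHOCKING cure doctors don't want you to see",
             "Parliament passed the budget bill on Tuesday",
             "Miracle pill MELTS fat overnight, experts stunned",
             "The central bank kept interest rates unchanged"]
    labels = [1, 0, 1, 0]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=["real", "fake"])
    exp = explainer.explain_instance("SHOCKING new miracle cure revealed",
                                     clf.predict_proba, num_features=5)
    print(exp.as_list())  # per-word contributions toward the 'fake' class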

20 pages, 2833 KB  
Article
A Multi-Level Annotation Model for Fake News Detection: Implementing Kazakh-Russian Corpus via Label Studio
by Madina Sambetbayeva, Anargul Nekessova, Aigerim Yerimbetova, Abdygalym Bayangali, Mira Kaldarova, Duman Telman and Nurzhigit Smailov
Big Data Cogn. Comput. 2025, 9(8), 215; https://doi.org/10.3390/bdcc9080215 - 20 Aug 2025
Viewed by 493
Abstract
This paper presents a multi-level annotation model for detecting fake news in the Kazakh and Russian languages, aiming to enhance understanding of disinformation strategies in multilingual digital media environments. Unlike traditional binary models, our approach captures the complexity of disinformation by accounting for both linguistic and cultural factors. To support this, a corpus of over 5000 news texts was manually annotated using the Label Studio platform. The annotation scheme consists of seven interrelated categories: CLAIM, SOURCE, EVIDENCE, DISINFORMATION_TECHNIQUE, AUTHOR_INTENT, TARGET_AUDIENCE, and TIMESTAMP. Inter-annotator agreement, evaluated using Cohen’s Kappa, ranged from 0.72 to 0.81, indicating substantial consistency. The annotated data reveal recurring patterns of disinformation, such as emotional manipulation, targeting of vulnerable individuals, and the strategic concealment of intent. Semantic relations between entities, such as CLAIM → EVIDENCE and CLAIM → AUTHOR_INTENT, were formalized to represent disinformation narratives as knowledge graphs. This study contributes the first linguistically and culturally adapted annotation model for the Kazakh and Russian languages, providing a robust empirical resource for building interpretable and context-aware fake news detection systems. The resulting annotated corpus and its semantic structure offer valuable empirical material for further research in natural language processing, computational linguistics, and media studies in low-resource language environments.
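
The agreement figures (0.72 to 0.81) are Cohen's Kappa values. A minimal sketch of how such a score is computed with scikit-learn, using hypothetical labels for one annotation layer:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical annotations of five texts for the DISINFORMATION_TECHNIQUE layer
    annotator_a = ["emotional_manipulation", "none", "concealed_intent",
                   "none", "emotional_manipulation"]
    annotator_b = ["emotional_manipulation", "none", "none",
                   "none", "emotional_manipulation"]
    kappa = cohen_kappa_score(annotator_a, annotator_b)
    print(f"Cohen's Kappa: {kappa:.2f}")  # 0.61-0.80 is conventionally read as substantial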

23 pages, 6919 KB  
Article
Addressing the Information Asymmetry of Fake News Detection Using Large Language Models and Emotion Embeddings
by Kirishnni Prabagar, Kogul Srikandabala, Nilaan Loganathan, Shalinka Jayatilleke, Gihan Gamage and Daswin De Silva
Symmetry 2025, 17(8), 1290; https://doi.org/10.3390/sym17081290 - 11 Aug 2025
Viewed by 470
Abstract
Fake news generation and propagation occur in large volumes, at high speed, and in diverse formats, while also being short-lived to evade detection and counteraction. Despite its role as an enabler, Artificial Intelligence (AI) has been effective at fake news detection and prediction through diverse techniques of both supervised and unsupervised machine learning. In this article, we propose a novel AI approach that addresses the underexplored role of information asymmetry in fake news detection. This approach demonstrates how fine-tuned language models and emotion embeddings can be used to detect information asymmetry in intent, emotional framing, and linguistic complexity between content creators and content consumers. The intensity and temperature of emotion, the selection of words, and the structure and relationships between words all contribute to detecting this asymmetry. An empirical evaluation conducted on five benchmark datasets demonstrates the generalizability and real-time detection capabilities of the proposed AI approach.
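
As a rough illustration of fusing semantic and emotion signals into one feature vector, a sketch built on Hugging Face pipelines; the checkpoints named here are common public models chosen for the example, not the ones the authors fine-tuned:

    import numpy as np
    from transformers import pipeline

    # Assumed public checkpoints; the paper does not name its exact models
    embedder = pipeline("feature-extraction", model="distilbert-base-uncased")
    emotion = pipeline("text-classification",
                       model="j-hartmann/emotion-english-distilroberta-base",
                       top_k=None)

    def asymmetry_features(text: str) -> np.ndarray:
        sem = np.mean(embedder(text)[0], axis=0)        # mean-pooled token embeddings
        scores = sorted(emotion([text])[0], key=lambda s: s["label"])
        emo = np.array([s["score"] for s in scores])    # emotion "intensity" distribution
        return np.concatenate([sem, emo])               # joint feature vector for a classifier

    print(asymmetry_features("BREAKING: you won't BELIEVE this outrage!").shape)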

13 pages, 248 KB  
Article
Fake News: Offensive or Defensive Weapon in Information Warfare
by Iuliu Moldovan, Norbert Dezso, Daniela Edith Ceană and Toader Septimiu Voidăzan
Soc. Sci. 2025, 14(8), 476; https://doi.org/10.3390/socsci14080476 - 30 Jul 2025
Viewed by 627
Abstract
Background and Objectives: Rumors, disinformation, and fake news are problems of contemporary society. We live in a world where the truth no longer holds much importance, and the line dividing truth from lies, and real news from disinformation, becomes increasingly blurred and difficult to identify. The purpose of this study is to describe this concept, to draw attention to one of the “pandemics” of the 21st-century world, and to find methods by which we can defend ourselves against it. Materials and Methods: A cross-sectional study was conducted based on a sample of 442 respondents. Results: For 77.8% of the people surveyed, the concept of “fake news” is important in Romania. Regarding trust in the mass media, a clear dominance (72.4%) was observed among participants who have little trust in the mass media. Although 98.2% of participants report detecting false information found on the internet, 78.5% are occasionally deceived by the information provided. Of the participants, 47.3% acknowledged their vulnerability to disinformation. The main source of disinformation is the internet, according to 59% of the interviewed subjects. As for the best measure against disinformation, the study group was divided almost equally among the three possible answers, all considered similarly important: imposing legal restrictions and blocking the posting of certain news (35.4%), imposing stricter measures for authors (33.9%), and increasing vigilance among people (30.5%). Conclusions: According to the participants’ responses, the main purposes of disinformation are propaganda, manipulation, distracting attention from the truth, making money, and misleading the population. In the perception of the study participants, the main intention of disinformation is manipulation.
(This article belongs to the Special Issue Disinformation and Misinformation in the New Media Landscape)
1 page, 161 KB
Correction
Correction: Sandu et al. Machine Learning and Deep Learning Applications in Disinformation Detection: A Bibliometric Assessment. Electronics 2024, 13, 4352
by Andra Sandu, Liviu-Adrian Cotfas, Camelia Delcea, Corina Ioanăș, Margareta-Stela Florescu and Mihai Orzan
Electronics 2025, 14(15), 3017; https://doi.org/10.3390/electronics14153017 - 29 Jul 2025
Viewed by 164
Abstract
In the original publication [...]
18 pages, 627 KB  
Review
Mapping the Impact of Generative AI on Disinformation: Insights from a Scoping Review
by Alexandre López-Borrull and Carlos Lopezosa
Publications 2025, 13(3), 33; https://doi.org/10.3390/publications13030033 - 21 Jul 2025
Viewed by 1962
Abstract
This article presents a scoping review of the academic literature published between 2021 and 2024 on the intersection of generative artificial intelligence (AI) and disinformation. Drawing from 64 peer-reviewed studies, the review examines the current research landscape and identifies six key thematic areas: political disinformation and propaganda; scientific disinformation; fact-checking; journalism and the media; media literacy and education; and deepfakes. The findings reveal that generative AI plays a dual role: it enables the rapid creation and targeted dissemination of synthetic content but also offers new opportunities for detection, verification, and public education. Beyond summarizing research trends, this review highlights the broader societal and practical implications of generative AI in the context of information disorder. It outlines how AI tools are already reshaping journalism, challenging scientific communication, and transforming strategies for media literacy and fact-checking. The analysis also identifies key policy and governance challenges, particularly the need for coordinated responses from governments, platforms, educators, and civil society actors. By offering a structured overview of the field, the article enhances our understanding of how generative AI can both exacerbate and help mitigate disinformation, and proposes directions for research, regulation, and public engagement.

44 pages, 7066 KB  
Article
A Biologically Inspired Trust Model for Open Multi-Agent Systems That Is Resilient to Rapid Performance Fluctuations
by Zoi Lygizou and Dimitris Kalles
Appl. Sci. 2025, 15(11), 6125; https://doi.org/10.3390/app15116125 - 29 May 2025
Viewed by 490
Abstract
Trust management provides an alternative solution for securing open, dynamic, and distributed multi-agent systems, where conventional cryptographic methods prove to be impractical. However, existing trust models face challenges such as agent mobility, which causes agents to lose accumulated trust when moving across networks; changing behaviors, where previously reliable agents may degrade over time; and the cold start problem, which hinders the evaluation of newly introduced agents due to a lack of prior data. To address these issues, we introduced a biologically inspired trust model in which trustees assess their own capabilities and store trust data locally. This design improves mobility support, reduces communication overhead, resists disinformation, and preserves privacy. Despite these advantages, prior evaluations revealed the limitations of our model in adapting to provider population changes and continuous performance fluctuations. This study proposes a novel algorithm incorporating a self-classification mechanism for providers to detect performance drops that are potentially harmful to service consumers. The simulation results demonstrate that the new algorithm outperforms its original version and FIRE, a well-known trust and reputation model, particularly in handling dynamic trustee behavior. While FIRE remains competitive under extreme environmental changes, the proposed algorithm demonstrates greater adaptability across various conditions. In contrast to existing trust modeling research, this study conducts a comprehensive evaluation of our model using widely recognized trust model criteria, assessing its resilience against common trust-related attacks while identifying strengths, weaknesses, and potential countermeasures. Finally, several key directions for future research are proposed.
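
The abstract does not spell out the self-classification algorithm, but the idea of a provider flagging its own performance drops can be sketched as a sliding-window check; the window size and threshold below are illustrative assumptions:

    from collections import deque

    class ProviderSelfClassifier:
        # Sketch only: a trustee compares its recent average performance
        # against its lifetime average and flags a potentially harmful drop.
        def __init__(self, window: int = 20, drop_ratio: float = 0.8):
            self.recent = deque(maxlen=window)  # sliding window of recent scores
            self.total, self.count = 0.0, 0
            self.drop_ratio = drop_ratio

        def record(self, score: float) -> None:
            self.recent.append(score)
            self.total += score
            self.count += 1

        def performance_dropped(self) -> bool:
            if len(self.recent) < self.recent.maxlen:
                return False  # not enough history yet (cf. the cold start problem)
            recent_avg = sum(self.recent) / len(self.recent)
            return recent_avg < self.drop_ratio * (self.total / self.count)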

32 pages, 4415 KB  
Review
Disinformation in the Digital Age: Climate Change, Media Dynamics, and Strategies for Resilience
by Andrea Tomassi, Andrea Falegnami and Elpidio Romano
Publications 2025, 13(2), 24; https://doi.org/10.3390/publications13020024 - 6 May 2025
Cited by 2 | Viewed by 4190
Abstract
Scientific disinformation has emerged as a critical challenge at the interface of science and society. This paper examines how false or misleading scientific content proliferates across both social media and traditional media and evaluates strategies to counteract its spread. We conducted a comprehensive literature review of research on scientific misinformation across disciplines and regions, with particular focus on climate change and public health as exemplars. Our findings indicate that social media algorithms and user dynamics can amplify false scientific claims, as seen in case studies of viral misinformation campaigns on vaccines and climate change. Traditional media, meanwhile, are not immune to spreading inaccuracies—journalistic practices such as sensationalism or “false balance” in reporting have at times distorted scientific facts, impacting public understanding. We review efforts to fight disinformation, including technological tools for detection, the application of inoculation theory and prebunking techniques, and collaborative approaches that bridge scientists and journalists. To empower individuals, we propose practical guidelines for critically evaluating scientific information sources and emphasize the importance of digital and scientific literacy. Finally, we discuss methods to quantify the prevalence and impact of scientific disinformation—ranging from social network analysis to surveys of public belief—and compare trends across regions and scientific domains. Our results underscore that combating scientific disinformation requires an interdisciplinary, multi-pronged approach, combining improvements in science communication, education, and policy. We conducted a scoping review of 85 open-access studies focused on climate-related misinformation and disinformation, selected through a systematic screening process based on PRISMA criteria. This approach was chosen to address the lack of comprehensive mappings that synthesize key themes and identify research gaps in this fast-growing field. The analysis classified the literature into 17 thematic clusters, highlighting key trends, gaps, and emerging challenges in the field. Our results reveal a strong dominance of studies centered on social media amplification, political denialism, and cognitive inoculation strategies, while underlining a lack of research on fact-checking mechanisms and non-Western contexts. We conclude with recommendations for strengthening the resilience of both the public and information ecosystems against the spread of false scientific claims.

27 pages, 3907 KB  
Article
Detecting Disinformation in Croatian Social Media Comments
by Igor Ljubi, Zdravko Grgić, Marin Vuković and Gordan Gledec
Future Internet 2025, 17(4), 178; https://doi.org/10.3390/fi17040178 - 17 Apr 2025
Viewed by 773
Abstract
The frequency with which fake news or misinformation is published on social networks is constantly increasing. Users of social networks are confronted with many different posts every day, often with sensationalist titles and content of dubious veracity. The problem is particularly common in times of sensitive social or political situations, such as epidemics of contagious diseases or elections. As such messages can influence democratic processes or cause panic among the population, many countries and the European Commission itself have recently stepped up their activities to combat disinformation campaigns on social networks. Since previous research has shown that no tools are available to combat disinformation in the Croatian language, we propose a framework to detect potentially misinforming content in social media comments. The case study was conducted on real public comments published on Croatian Facebook pages. The initial results of this framework are encouraging, as it can successfully classify and detect disinformation content.
(This article belongs to the Collection Information Systems Security)

21 pages, 1965 KB  
Article
Integrating Message Content and Propagation Path for Enhanced False Information Detection Using Bidirectional Graph Convolutional Neural Networks
by Jie Hu, Mei Yang, Bingbing Tang and Jianjun Hu
Appl. Sci. 2025, 15(7), 3457; https://doi.org/10.3390/app15073457 - 21 Mar 2025
Viewed by 826
Abstract
We investigate the impact of textual content and its structural characteristics on the detection of false information. We propose a Bidirectional Graph Convolutional Neural Network (ICP-BGCN) that integrates message content with its propagation paths for enhanced detection performance. Our approach leverages web propagation topology by transforming disconnected user posts into a bidirectional propagation graph, which integrates top-down and bottom-up pathways derived from post forwarding and commenting relationships. Using BERT embeddings, we extract contextual semantic features from both source texts and their propagated counterparts, which are embedded as node attributes within the propagation graph. The bidirectional graph convolutional neural network subsequently learns the feature representations of the event propagation network during information dissemination, merging these representations with the original text content features to achieve comprehensive disinformation detection. Experimental results demonstrate significant improvements over existing methods. On benchmark datasets Twitter15 and Twitter16, our model achieves accuracy rates of 89.7% and 91.7%, respectively, outperforming state-of-the-art baselines by 1.1% and 3.7%. The proposed ICP-BGCN exhibits strong cross-domain generalization, attaining 84.4% accuracy on the Pheme dataset and achieving improvements of 1.8% in accuracy and 3.8% in Macro-F1 score on SemEval-2017 Task 8.
(This article belongs to the Collection Innovation in Information Security)
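
The bidirectional design can be sketched by running one graph convolution over the top-down forwarding edges and another over their reverse, then concatenating the two representations. A minimal PyTorch Geometric sketch; the layer sizes and mean-pool readout are assumptions, not the paper's exact ICP-BGCN:

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv

    class BiDirectionalGCN(torch.nn.Module):
        def __init__(self, in_dim: int = 768, hidden: int = 64, classes: int = 2):
            super().__init__()
            self.top_down = GCNConv(in_dim, hidden)   # propagation direction
            self.bottom_up = GCNConv(in_dim, hidden)  # reversed (dispersion) direction
            self.out = torch.nn.Linear(2 * hidden, classes)

        def forward(self, x, edge_index):
            # x: [num_posts, in_dim] BERT embeddings of source and reply posts
            h = torch.cat([F.relu(self.top_down(x, edge_index)),
                           F.relu(self.bottom_up(x, edge_index.flip(0)))], dim=-1)
            return self.out(h.mean(dim=0))  # mean-pool nodes into one event prediction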

30 pages, 1605 KB  
Article
From Misinformation to Insight: Machine Learning Strategies for Fake News Detection
by Despoina Mouratidis, Andreas Kanavos and Katia Kermanidis
Information 2025, 16(3), 189; https://doi.org/10.3390/info16030189 - 28 Feb 2025
Cited by 2 | Viewed by 7591
Abstract
In the digital age, the rapid proliferation of misinformation and disinformation poses a critical challenge to societal trust and the integrity of public discourse. This study presents a comprehensive machine learning framework for fake news detection, integrating advanced natural language processing techniques and deep learning architectures. We rigorously evaluate a diverse set of detection models across multiple content types, including social media posts, news articles, and user-generated comments. Our approach systematically compares traditional machine learning classifiers (Naïve Bayes, SVMs, Random Forest) with state-of-the-art deep learning models, such as CNNs, LSTMs, and BERT, while incorporating optimized vectorization techniques, including TF-IDF, Word2Vec, and contextual embeddings. Through extensive experimentation across multiple datasets, our results demonstrate that BERT-based models consistently achieve superior performance, significantly improving detection accuracy in complex misinformation scenarios. Furthermore, we extend the evaluation beyond conventional accuracy metrics by incorporating the Matthews Correlation Coefficient (MCC) and Receiver Operating Characteristic–Area Under the Curve (ROC–AUC), ensuring a robust and interpretable assessment of model efficacy. Beyond technical advancements, we explore the ethical implications of automated misinformation detection, addressing concerns related to censorship, algorithmic bias, and the trade-off between content moderation and freedom of expression. This research not only advances the methodological landscape of fake news detection but also contributes to the broader discourse on safeguarding democratic values, media integrity, and responsible AI deployment in digital environments.
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)
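
Both metrics the study adds beyond accuracy are available in scikit-learn; a compact sketch of the classical-baseline side of such a comparison on a toy corpus (the real study uses multiple large datasets):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import matthews_corrcoef, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC

    texts = ["aliens endorse the candidate", "council approves road repairs"] * 50
    labels = [1, 0] * 50  # toy stand-in data, 1 = fake
    X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, random_state=0)

    vec = TfidfVectorizer()
    Xtr, Xte = vec.fit_transform(X_tr), vec.transform(X_te)
    for model in (MultinomialNB(), LinearSVC(), RandomForestClassifier()):
        model.fit(Xtr, y_tr)
        print(type(model).__name__, "MCC:",
              round(matthews_corrcoef(y_te, model.predict(Xte)), 3))
        if hasattr(model, "predict_proba"):  # ROC-AUC needs probability scores
            print("  ROC-AUC:", round(roc_auc_score(y_te, model.predict_proba(Xte)[:, 1]), 3))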

19 pages, 2333 KB  
Review
Detection of Manipulations in Digital Images: A Review of Passive and Active Methods Utilizing Deep Learning
by Paweł Duszejko, Tomasz Walczyna and Zbigniew Piotrowski
Appl. Sci. 2025, 15(2), 881; https://doi.org/10.3390/app15020881 - 17 Jan 2025
Cited by 3 | Viewed by 4185
Abstract
Modern society generates vast amounts of digital content, whose credibility plays a pivotal role in shaping public opinion and decision-making processes. The rapid development of social networks and generative technologies, such as deepfakes, significantly increases the risk of disinformation through image manipulation. This article reviews methods for verifying images’ integrity, particularly deep learning techniques, addressing both passive and active approaches. Their effectiveness in various scenarios is analyzed, highlighting their advantages and limitations. This study reviews the scientific literature and research findings, focusing on techniques that detect image manipulations and localize areas of tampering, utilizing both statistical properties of images and embedded hidden watermarks. Passive methods, based on analyzing the image itself, are versatile and can be applied across a broad range of cases; however, their effectiveness depends on the complexity of the modifications and the characteristics of the image. Active methods, which involve embedding additional information into the image, offer precise detection and localization of changes but require complete control over the creation and distribution of visual materials. Both approaches have their applications depending on the context and available resources. Looking ahead, a key challenge remains the development of methods resistant to advanced manipulations generated by diffusion models, and the further leveraging of innovations in deep learning to protect the integrity of visual content.
(This article belongs to the Special Issue Integration of AI in Signal and Image Processing)
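
Among passive techniques that exploit an image's statistical properties, error level analysis is a classic pre-deep-learning baseline and gives a feel for the category. A minimal sketch; the input file name is hypothetical, and the deep-learning methods the review covers go well beyond this:

    import numpy as np
    from PIL import Image, ImageChops

    def error_level_analysis(path: str, quality: int = 90) -> np.ndarray:
        # Recompress and subtract: regions edited after the original JPEG save
        # often recompress differently from the untouched background.
        original = Image.open(path).convert("RGB")
        original.save("ela_recompressed.jpg", "JPEG", quality=quality)
        residual = ImageChops.difference(original, Image.open("ela_recompressed.jpg"))
        return np.asarray(residual)

    residual = error_level_analysis("suspect.jpg")  # hypothetical input image
    print("max per-channel error level:", int(residual.max()))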

29 pages, 863 KB  
Article
Fake News Detection and Classification: A Comparative Study of Convolutional Neural Networks, Large Language Models, and Natural Language Processing Models
by Konstantinos I. Roumeliotis, Nikolaos D. Tselikas and Dimitrios K. Nasiopoulos
Future Internet 2025, 17(1), 28; https://doi.org/10.3390/fi17010028 - 9 Jan 2025
Cited by 7 | Viewed by 12098
Abstract
In an era where fake news detection has become a pressing issue due to its profound impacts on public opinion, democracy, and social trust, accurately identifying and classifying false information is a critical challenge. In this study, we investigate the effectiveness of advanced machine learning models—convolutional neural networks (CNNs), bidirectional encoder representations from transformers (BERT), and generative pre-trained transformers (GPTs)—for robust fake news classification. Each model brings unique strengths to the task, from CNNs’ pattern recognition capabilities to BERT’s and GPTs’ contextual understanding in the embedding space. Our results demonstrate that the fine-tuned GPT-4 Omni models achieve 98.6% accuracy, significantly outperforming traditional models like CNNs, which achieved only 58.6%. Notably, the smaller GPT-4o mini model performed comparably to its larger counterpart, highlighting the cost-effectiveness of smaller models for specialized tasks. These findings emphasize the importance of fine-tuning large language models (LLMs) to optimize performance on complex tasks such as fake news classification, where capturing subtle contextual relationships in text is crucial. However, challenges such as computational costs and suboptimal outcomes in zero-shot classification persist, particularly when distinguishing fake content from legitimate information. By highlighting the practical application of fine-tuned LLMs and exploring the potential of few-shot learning for fake news detection, this research provides valuable insights for news organizations seeking to implement scalable and accurate solutions. Ultimately, this work contributes to fostering transparency and integrity in journalism through innovative AI-driven methods for fake news classification and automated fake news classifier systems.
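
For contrast with fine-tuning, the zero-shot setup the study found weaker can be sketched in a few lines against the OpenAI API; the prompt wording below is an assumption, not the paper's protocol:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def classify_headline(headline: str) -> str:
        # Zero-shot: no examples and no fine-tuning, just an instruction
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "system",
                       "content": "Label the news item as FAKE or REAL. Reply with one word."},
                      {"role": "user", "content": headline}],
        )
        return resp.choices[0].message.content.strip()

    print(classify_headline("Scientists confirm the moon is made of cheese"))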

18 pages, 1233 KB  
Article
The Factuality of News on Twitter According to Digital Qualified Audiences: Expectations, Perceptions, and Divergences with Journalism Considerations
by José Luis Rojas Torrijos and Álvaro Garrote Fuentes
Journal. Media 2025, 6(1), 3; https://doi.org/10.3390/journalmedia6010003 - 1 Jan 2025
Cited by 2 | Viewed by 3167
Abstract
This research analyzes to what extent qualified digital audiences perceive, understand, and value the factuality of news published by news media within a communicative ecosystem where unverified information proliferates on social media. Additionally, it examines which factors may influence what highly educated and critically capable audiences expect to find when consuming journalism. A qualitative, comparative study was conducted on a sample of the ten most relevant statements on socio-political topics with the highest number of interactions published on the Twitter (X) accounts of six European digital and legacy media outlets (Médiapart and Le Monde, France; Tortoise and The Guardian, United Kingdom; El Diario.es and El País, Spain), along with their reflection and development on the respective websites. With the analytical scope expanded to 300 tweet-news items (n = 300), two in-person focus groups were held at the College of Europe in Natolin (Poland) with postgraduate students from nine countries to assess their perception of the degree of truthfulness, bias, quality, and credibility of the displayed information. The results indicate that young, qualified digital audiences feel secure and capable of detecting any disinformation disorder. They value the variety of cited and verifiable sources, the presence of expert voices, and data-based claims as key elements in constructing credible media narratives.
