Search Results (765)

Search Parameters:
Keywords = media intelligence

33 pages, 3591 KB  
Review
Ethics in Artificial Intelligence: A Cross-Sectoral Review of 2019–2025
by Charalampos M. Liapis, Nikos Fazakis, Sotiris Kotsiantis and Yannis Dimakopoulos
Informatics 2026, 13(4), 51; https://doi.org/10.3390/informatics13040051 (registering DOI) - 27 Mar 2026
Viewed by 134
Abstract
Artificial Intelligence (AI) has transitioned from a specialized research area to a ubiquitous socio-technical infrastructure influencing sectors from healthcare and law to manufacturing and defense. In tandem with its transformative promise, AI has created an exponentially expanding ethics literature interrogating fairness, transparency, accountability, and justice. This review synthesizes publications and key policy developments between 2019 and 2025, bringing sectoral discourses together with cross-cutting frameworks. Grounded in a systematic scoping review methodology, we frame the field along four meta-dimensions: trust and transparency, bias and fairness, governance and regulation, and justice, while we investigate their expression across diverse sectors. Special attention is dedicated to healthcare (patient trust and algorithmic bias), education (integrity and authorship), media (misinformation), law (accountability), and the industrial sector (data integrity, intellectual property protection, and environmental safety). We ground abstract principles in concrete case studies to illustrate real-world harms and mitigation strategies. Furthermore, we incorporate pluralistic ethics (e.g., Ubuntu, Islamic perspectives), environmental ethics, and emerging challenges posed by Generative AI and neuro-AI interfaces. To bridge theory and practice, we propose an operational governance framework for organizations. We contend that success involves transitioning from principles toward ethics-by-design, pluralistic governance, sustainability, and adaptive oversight. This review is intended for scholars, practitioners, and policymakers who need a comprehensive and actionable framework for navigating the complex landscape of AI ethics. Full article
26 pages, 623 KB  
Article
AI-Assisted Learning Systems for Enhancing English as a Foreign Language Outcomes in Lebanese High Schools
by Amal EL Arid, Obada Al-Khatib, Rayan Osman, Ghalia Nassreddine and Abdallah EL Chakik
Educ. Sci. 2026, 16(4), 517; https://doi.org/10.3390/educsci16040517 - 26 Mar 2026
Viewed by 248
Abstract
The pedagogical efficacy of artificial intelligence (AI) technologies in education heavily depends on cultural, technological, and cognitive contexts. Prior studies examined AI-driven learning outcomes without accounting for cultural variability or sufficiently anchoring their analyses in robust theoretical frameworks. The current study discusses the interconnection between AI technologies, learner competencies, and educational outcomes, in addition to the significance of digital and media literacy in secondary foreign language teaching. It employs Hofstede’s cultural dimensions theory, the technology acceptance model, and sociocultural learning theory to examine how AI technologies affect learning outcomes of English as a foreign language among Lebanese high school students. One hundred and eighty high school students in Mount Lebanon were given a 20-item survey using a quantitative research design. The results were analyzed using statistical tests and analyses in SPSS version 27.0.1. The findings indicate that AI technologies significantly enhance student learning outcomes: affective and motivational outcomes (45%), social and collaborative competencies (35%), and English language proficiency (accounting for 43% of variance). Furthermore, these relationships are strongly moderated by digital and media literacy, which increases the beneficial effects of AI on learning outcomes. The findings also show that students’ opinions, engagement, and acceptance of AI-supported language learning are influenced by cultural traits. Full article
(This article belongs to the Special Issue The Use of AI in ESL/EFL Education: Challenges and Opportunities)
27 pages, 18731 KB  
Article
Intelligent Analysis of Data Flows for Real-Time Classification of Traffic Incidents
by Gary Reyes, Roberto Tolozano-Benites, Cristhina Ortega-Jaramillo, Christian Albia-Bazurto, Laura Lanzarini, Waldo Hasperué, Dayron Rumbaut and Julio Barzola-Monteses
Information 2026, 17(3), 310; https://doi.org/10.3390/info17030310 - 23 Mar 2026
Viewed by 210
Abstract
Social media platforms have been established as relevant sources of real-time information for urban traffic analysis. This study proposes an intelligent framework for the classification and spatiotemporal analysis of traffic incidents based on semi-synthetic data streams constructed from historical geolocated seeds for controlled validation, utilizing real reports from platforms such as X and Telegram. The approach integrates adaptive machine learning and incremental density-based clustering. An Adaptive Random Forest (ARF) incremental classifier is used to identify the type of incident, allowing for continuous updating of the model in response to changes in traffic flow and concept drift. The classified events are then processed using DenStream, a clustering algorithm that incorporates a temporal decay mechanism designed to identify dynamic spatial patterns and discard older information. The evaluation is performed in a controlled streaming simulation environment that replicates the dynamics of cities such as Panama and Guayaquil. The proposed framework demonstrated robust quantitative performance, achieving a prequential accuracy of up to 86.4% and a weighted F1-score of 0.864 in the Panama scenario, maintaining high stability against semantic noise. The results suggest that this hybrid architecture is a highly viable approach for urban traffic monitoring, providing useful information for Intelligent Transportation Systems (ITS) by processing authentic social signals. Full article
(This article belongs to the Section Artificial Intelligence)
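The "prequential accuracy" reported for the streaming classifier above comes from a test-then-train protocol: each arriving item is first predicted, then used to update the model. A minimal stdlib sketch of that loop, with a trivial majority-class learner standing in for the Adaptive Random Forest (the learner, texts, and labels here are invented for illustration, not the authors' pipeline):

```python
from collections import Counter

class MajorityClassLearner:
    """Trivial incremental learner standing in for an Adaptive Random Forest:
    predicts the most frequent label seen so far, updates one item at a time."""
    def __init__(self):
        self.counts = Counter()

    def predict(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None

    def learn(self, x, y):
        self.counts[y] += 1

def prequential_accuracy(stream, model):
    """Test-then-train: predict each incoming item before learning from it."""
    correct = total = 0
    for x, y in stream:
        if model.predict(x) == y:
            correct += 1
        total += 1
        model.learn(x, y)
    return correct / total if total else 0.0

# Toy incident stream (text features are ignored by the stand-in learner).
stream = [
    ({"text": "crash on 5th ave"}, "accident"),
    ({"text": "pileup near bridge"}, "accident"),
    ({"text": "heavy jam downtown"}, "congestion"),
    ({"text": "collision reported"}, "accident"),
    ({"text": "slow traffic at exit 3"}, "congestion"),
]
acc = prequential_accuracy(stream, MajorityClassLearner())  # 2 of 5 correct
```

In a real pipeline the stand-in would be replaced by an incremental classifier robust to concept drift, but the evaluation loop itself is unchanged.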
21 pages, 848 KB  
Article
Mapping European Countries’ Resilience to Cognitive Warfare
by Costel Marian Dalban, Ecaterina Coman, Vlad Bătrânu-Pințea, Mihail Anton, Iulia Para and Luminița Ioana Mazuru
Adm. Sci. 2026, 16(3), 160; https://doi.org/10.3390/admsci16030160 - 23 Mar 2026
Viewed by 306
Abstract
This study maps European countries’ resilience to cognitive warfare by developing a cross-national composite measure. The framework integrates three pillars: information ecology, institutional-digital capacity, and socioeconomic context—drawing on a systemic perspective linking social structures to societal functions. Publicly available secondary indicators are compiled from online sources for EU (European Union) and EEA (European Economic Area) states. The dataset is examined through descriptive analysis, association testing, multivariate modelling, dimensionality reduction to derive a composite resilience score, and unsupervised clustering to produce a country typology. Indicators capture governance effectiveness, e-government maturity, public-sector AI (Artificial Intelligence) readiness, digital connectivity and infrastructure, media freedom and broader media-ecosystem quality, academic freedom, and socioeconomic vulnerabilities such as youth labour market exclusion. Results show that resilience aligns most strongly with institutional capacity and governance performance; a healthy information ecology acts as a reinforcing layer. Digital infrastructure appears necessary but insufficient without capable, credible institutions and coherent public policy. Socioeconomic vulnerabilities tend to erode resilience and heighten susceptibility to hostile cognitive influence. The study concludes that policy efforts should prioritise governance integrity and effectiveness, end-to-end digital government, responsible public-sector AI capability, and safeguards for media and academic autonomy, alongside measures that improve youth inclusion. Full article
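The composite-score step can be illustrated with a simpler stand-in: the paper derives its score via dimensionality reduction, but standardizing each indicator to z-scores and averaging them per country shows the basic mechanics. All indicator names and values below are invented for illustration:

```python
from statistics import mean, stdev

def zscores(values):
    """Standardize a column: subtract the mean, divide by sample stdev."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical indicator columns for four countries (rows align across lists).
governance = [0.8, 0.6, 0.4, 0.9]
connectivity = [0.7, 0.5, 0.6, 0.8]
media_freedom = [0.9, 0.4, 0.5, 0.7]

# Standardize each indicator, then average per country into a composite score.
columns = [zscores(col) for col in (governance, connectivity, media_freedom)]
composite = [mean(country) for country in zip(*columns)]
best = composite.index(max(composite))  # index of the most resilient country
```

Because each z-scored column sums to zero across countries, the composite scores also sum to zero; a PCA-based score, as used in the paper, would instead weight the indicators by their loadings.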
14 pages, 1535 KB  
Article
Artificial Intelligence, Algorithmic Ethics, and Digital Inequality: A Bibliometric Mapping in the Digital Media Era
by Soledad Zabala, José Javier Galán Hernández, Jesús Cáceres-Tello, Eloy López-Meneses and María Belén Morales Cevallos
Appl. Sci. 2026, 16(6), 3056; https://doi.org/10.3390/app16063056 - 22 Mar 2026
Viewed by 297
Abstract
The accelerated expansion of advanced technologies—particularly artificial intelligence, intelligent systems, and interactive digital environments—is influencing contemporary media ecosystems and contributing to changes in educational practices. This study provides a systematic and descriptive bibliometric mapping of recent scientific production on artificial intelligence in education, algorithmic ethics, and digital inequality. A total of 229 Scopus-indexed documents published between 2021 and 2026 were analyzed using Biblioshiny and VOSviewer to examine publication patterns, influential authors and sources, and the conceptual structure of the field. Results indicate a marked increase in research output since 2024, with an annual growth rate of 47.58%, an average of 8.68 citations per document, and an international co-authorship rate of 24.45%. These indicators reflect an expanding and increasingly collaborative research landscape, accompanied by a diversification of thematic priorities within the field. The analysis identifies five thematic clusters: (1) the technical foundations of AI and digital transformation; (2) intelligent and immersive learning environments; (3) personalized and adaptive learning systems; (4) AI literacy and pedagogical integration; and (5) ethical considerations, including algorithmic bias and educational robotics. The findings highlight the need for explicit justification of database selection, strengthened critical AI literacy, and context-sensitive strategies that address disparities in access, skills, and institutional capacity. Overall, this study offers a coherent overview of a research area that is currently expanding and undergoing conceptual reorganization, providing evidence-informed insights for future research, policy development, and the design of equitable AI-driven educational technologies. Full article
(This article belongs to the Special Issue Advanced Technologies Applied in Digital Media Era)
21 pages, 2237 KB  
Article
Analyzing the Accuracy and Determinants of Generative AI Responses on Nearest Metro Station Information for Tourist Attractions: A Case Study of Busan, Korea
by Jaehyoung Yang and Seong-Yun Hong
Sustainability 2026, 18(6), 3082; https://doi.org/10.3390/su18063082 - 20 Mar 2026
Viewed by 244
Abstract
The emergence of Generative Artificial Intelligence (GenAI), capable of interpreting and reasoning with human language, has catalyzed a paradigm shift across various societal sectors. Within the tourism industry, GenAI is increasingly utilized to facilitate personalized itinerary planning, destination recommendations, and the provision of optimal route information. This study evaluates the reliability of GenAI in identifying the nearest metro station within walking distance of tourist attractions in Busan, South Korea. Furthermore, it aims to empirically verify the determinants influencing the correctness of AI-generated responses compared to network-based shortest-path analyses. The empirical results demonstrate that Google’s Gemini 3 Pro model achieved superior performance, recording an accuracy rate of 65.0%. Regression analysis revealed that for both Gemini and GPT models, the volume of news articles associated with an attraction—representing media visibility—significantly increased the likelihood of accurate information provision. Notably, the Gemini model exhibited distinct sensitivity to geographic factors and text similarity metrics, suggesting a difference in how it processes spatial context compared to other models. Consequently, this study underscores the importance of high-quality AI-generated tourism data and offers significant contributions to the advancement of sophisticated personalized travel planning systems and GeoAI research focused on spatial problem-solving. Full article
39 pages, 1614 KB  
Article
LLM-Powered Proactive Cyber-Defense Framework Using Cyber-Threat Indicators Collected from X Platform
by Nawal Almutairi
Electronics 2026, 15(6), 1305; https://doi.org/10.3390/electronics15061305 - 20 Mar 2026
Viewed by 218
Abstract
Security organizations increasingly rely on cyber threat intelligence (CTI) sharing to enhance their resilience against cyberattacks. Indicators of Compromise (IoCs) play a critical operational role in CTI by providing malicious artifacts that support threat detection and incident response and facilitate proactive defense. However, the rapid growth of social media as a CTI source, characterized by short-text content, poses significant challenges to automated IoC extraction, contextual interpretation, operational integration, and reliable verification. To address these challenges, this study proposes a comprehensive framework that integrates Large Language Models (LLMs) across multiple stages of the CTI pipeline. The framework leverages LLM-driven data augmentation, a hybrid classification model, and contextual summarization to enhance short-text understanding while supporting expert-in-the-loop validation for operational reliability. Extensive experimental evaluations demonstrate that LLM-driven data augmentation substantially improves model robustness and generalization while reducing false-positive alerts, achieving a precision of 98.87%. Quantitative diversity analysis and expert-based human evaluation further confirm the linguistic quality and correctness of the generated augmented samples. In addition, IoC reports are validated using both reference-based and reference-free evaluation metrics that show strong alignment and high semantic adequacy. Moreover, a technology acceptance model was integrated with cybersecurity domain constructs to assess the acceptance factors of the proposed framework. Regression analysis showed that perceived usefulness, behavioral intention, trust in automation, and risk were the strongest predictors of actual use. These predictors are commonly interpreted as indicators of technology acceptance. Full article
(This article belongs to the Special Issue AI-Enhanced Security: Advancing Threat Detection and Defense)
25 pages, 2031 KB  
Article
A Hybrid Machine Learning Approach for Classifying Indonesian Cybercrime Discourse Using a Localized Threat Taxonomy
by Firman Arifman, Teddy Mantoro and Dini Oktarina Dwi Handayani
Information 2026, 17(3), 301; https://doi.org/10.3390/info17030301 - 20 Mar 2026
Viewed by 188
Abstract
Indonesia’s rapid digital growth has been accompanied by escalating cyber threats, with public discourse on social media emerging as a critical but underutilized source of threat intelligence. This discourse is characterized by informal language and local nuances that render existing international cybercrime taxonomies ineffective, creating a gap in scalable, locally relevant threat analytics. This study introduces the Indonesian Cybercrime Threat Taxonomy (ICTT), a novel five-dimensional framework tailored to Indonesian online environments. An end-to-end OSINT pipeline was developed to collect 2344 samples from X (formerly Twitter) and YouTube, employing weak supervision with 12 high-precision regex patterns to generate training labels. A state-of-the-art IndoBERT model was fine-tuned on this data, and its performance was compared against rule-based and hybrid classification models. On a manually annotated gold-standard dataset of 600 samples, both the IndoBERT and hybrid models achieved 96.8% accuracy, significantly outperforming the rule-based baseline (66.7%). The models demonstrated strong generalization across both social media platforms, and the hybrid approach provided an effective balance of high performance and interpretability. This research demonstrates that informal public discourse can be systematically transformed into structured threat intelligence. The ICTT and the accompanying hybrid classification system provide a scalable, interpretable, and locally relevant foundation for cyber threat analytics in Indonesia, establishing a methodological blueprint for other low-resource language contexts. Full article
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)
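The weak-supervision step described above, where high-precision regex rules generate provisional training labels and abstain when no rule fires, can be sketched in plain Python. The patterns and category names below are invented stand-ins, not the paper's 12 ICTT patterns:

```python
import re

# Illustrative weak-supervision rules: each (compiled pattern, label) pair
# assigns a provisional class; a production system would use vetted
# high-precision patterns per taxonomy category.
RULES = [
    (re.compile(r"\bphish(ing)?\b|\bfake\s+login\b", re.I), "phishing"),
    (re.compile(r"\bransomware\b|\bfiles?\s+encrypted\b", re.I), "ransomware"),
    (re.compile(r"\bddos\b|\bflood(ed|ing)?\s+the\s+server\b", re.I), "ddos"),
]

def weak_label(text):
    """Return the first matching label, or None (abstain) if no rule fires."""
    for pattern, label in RULES:
        if pattern.search(text):
            return label
    return None

labels = [weak_label(t) for t in [
    "Warning: fake login page mimicking the bank portal",
    "Our files encrypted overnight, attacker demands payment",
    "Nothing suspicious here",
]]
```

Abstained samples (label `None`) would be excluded from the weakly labeled training set; the resulting labels then fine-tune a transformer classifier, with the rules retained for the interpretable half of a hybrid model.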
24 pages, 1098 KB  
Article
AI Chatbot Showdown in News Fact Checking: Exploring Automated Verification in the Greek Media Landscape
by Evangelos Lamprou and Aikaterini Marmouta
Journal. Media 2026, 7(1), 66; https://doi.org/10.3390/journalmedia7010066 - 19 Mar 2026
Viewed by 649
Abstract
The circulation of non-true stories in digital media environments presents ongoing challenges for journalism and fact-checking. The development of artificial intelligence has led to the use of AI chatbots in content verification processes. This study evaluates the performance of AI chatbot systems in identifying non-true stories within the Greek media context. A quantitative comparative research design was applied, using claims previously assessed by professional fact-checking organizations. Chatbot responses were compared with established verification verdicts to examine detection accuracy, variation across categories of non-true stories, and differences related to source characteristics. The results indicate that AI chatbots demonstrate measurable capability in identifying non-true stories, while also exhibiting limitations in specific content categories, particularly those involving complex or AI-generated material. Performance differences between chatbot systems suggest that design characteristics and task orientation influence verification outcomes. The findings support the view that AI-based tools function most effectively as components of broader verification processes in which human judgment remains essential. Full article
15 pages, 1923 KB  
Article
Journalistic Values and GenAI: A Transnational Study of Editorial Policies
by Rubén Rivas-de-Roca, Tania Forja-Pena, Artai Bringas-Gómez and Berta García-Orosa
Soc. Sci. 2026, 15(3), 198; https://doi.org/10.3390/socsci15030198 - 18 Mar 2026
Viewed by 335
Abstract
The consolidation of artificial intelligence (AI) is transforming the journalistic sector, to the point that its ethical dimension is being altered. However, the mission and values of the media in the face of the current emergence of generative artificial intelligence (GenAI) have barely been explored. Bearing this in mind, it is important not only to understand how journalists perceive AI, but also to examine the role that the media assign to themselves and to audience participation in this context. This research explores the roles defined by a sample of leading media outlets (n = 21) in seven countries in Western Europe and North America: France, Germany, Italy, Spain, the United Kingdom, Canada, and the United States. To this end, a discursive content analysis is applied to three newspapers (printed or digital) per country. The findings reflect differences between countries and media outlets, within a common trend of prioritizing responsibility as the primary editorial value, followed by truthfulness. We also found scant direct references to AI regulation, alongside the development of participatory interactivity with readers established by the media outlets. Furthermore, greater participation of audiences was observed in publicly funded publications, granting audiences a deliberative role. Full article
(This article belongs to the Special Issue Big Data and Political Communication)
29 pages, 3711 KB  
Article
Artificial Intelligence Chatbots as Assistants for Media Users: The Cases of El País and El Espectador
by Gema Sánchez-Muñoz, Isabel García Casado and David Varona Aramburu
Journal. Media 2026, 7(1), 59; https://doi.org/10.3390/journalmedia7010059 - 18 Mar 2026
Viewed by 341
Abstract
In recent months, some media outlets have been launching artificial intelligence-based chatbots that serve as assistants to users in their search, selection and consumption of content. This research analyses two such examples: Vera, a conversational assistant launched by the Spanish newspaper El País, and the model used by the Colombian newspaper El Espectador, which operates on the WhatsApp platform. Both chatbots share the same approach: they are tools designed for users to interact with newspaper content. This interaction takes place through natural language conversations: the technology understands users’ questions or requests and provides answers based on the content hosted in the newspapers. This changes the way media content is explored. We are moving from a paradigm centred on search engines and keywords to one in which conversation determines the discovery of content. The research analyses the results of these two pioneering experiences in the Spanish-language media. The aim is to understand the extent to which they are changing the relationship with content and how they are affecting the media. Full article
(This article belongs to the Special Issue Reimagining Journalism in the Era of Digital Innovation)
31 pages, 580 KB  
Article
Seeing the Message but Not the Machine: Digital Skepticism and AI Discernment in Online Information Environments
by Lersak Phothong, Anupong Sukprasert, Nattakarn Shutimarrungson and Mehtabhorn Obthong
Information 2026, 17(3), 295; https://doi.org/10.3390/info17030295 - 18 Mar 2026
Viewed by 235
Abstract
Artificial intelligence (AI) increasingly mediates how information is generated, ranked, and circulated in digital environments. However, it remains unclear under what conditions users explicitly articulate recognition of AI involvement in routine news-related discourse. This study examines how digital skepticism and AI-related discernment are expressed in naturally occurring social media discussions. Using an exploratory observational design, 6065 user-generated comments from 305 news-related Reddit threads were analyzed through a rule-based framework distinguishing general skepticism, structural suspicion, and explicit AI-related discernment. Within the sampled corpus, generalized digital skepticism is proportionally more visible than explicit attribution to AI-generated or synthetically produced content. Explicit AI-related attribution is unevenly distributed across discourse contexts, appearing more frequently in technology-oriented communities and remaining limited in mainstream news-related discussions. Differences across score-based visibility contexts do not correspond to a consistently higher representation of explicit AI attribution. These findings indicate a distributional difference between generalized skepticism and publicly articulated recognition of AI mediation. Rather than measuring levels of awareness, the results illuminate the contextual and linguistic conditions under which AI involvement becomes explicitly named in public interaction. By focusing on observable discourse rather than self-reported attitudes, the study provides a corpus-bound account of when AI mediation becomes discursively articulated in algorithmically mediated environments. Full article
31 pages, 1934 KB  
Review
Artificial Intelligence for Detecting Electoral Disinformation on Social Media: Models, Datasets, and Evaluation
by Félix Díaz, Nhell Cerna, Rafael Liza and Bryan Motta
Information 2026, 17(3), 292; https://doi.org/10.3390/info17030292 - 17 Mar 2026
Viewed by 286
Abstract
During elections, information manipulation on social media has accelerated the use of artificial intelligence, yet the evidence is difficult to interpret without an integrated view of methods, data, and evaluation. We mapped 557 English-language journal articles from Scopus and Web of Science, combining performance indicators, science mapping, and a focused full-text synthesis of highly cited papers. The literature grows sharply after 2019, peaks in 2025, and shows geographically uneven production, with collaboration structured around a small set of hubs. The thematic structure suggests that, during the pandemic era, infodemic-related research served as a catalyst, intensifying scientific attention to fake news and disinformation and expanding the associated detection and monitoring agendas. In addition, socio-political harm constructs such as hate speech, extremism, and polarization appear as recurrent and structurally central targets, highlighting that election-relevant work often extends beyond veracity assessment toward monitoring discourse risks. Blockchain also emerges as a novel and adjacent integrity theme, aligned with authenticity and provenance-oriented mitigation rather than mainstream detection pipelines. AI for electoral disinformation is not reducible to veracity classification, as influential studies also target automation and coordinated behavior, verification support, diffusion analysis, and estimation frameworks that focus on exposure and impact. Evaluation remains heterogeneous and is often shaped by benchmark settings, making high accuracy values hard to compare and potentially misleading when labeling quality, topic leakage, or context shift are not characterized. Overall, the findings motivate evaluation protocols that align operational objectives with modeling roles and explicitly address robustness to temporal and platform changes, asymmetric error costs during election windows, and representativeness across electoral contexts and languages, while also guiding future work on emerging integrity challenges and governance-relevant deployment settings. Full article
(This article belongs to the Section Artificial Intelligence)
23 pages, 1690 KB  
Article
“Virality Alert”: The Construction, Imagination, and Algorithmic Falsification of a Local Disaster
by Giacomo Buoncompagni
Journal. Media 2026, 7(1), 58; https://doi.org/10.3390/journalmedia7010058 - 17 Mar 2026
Viewed by 232
Abstract
This paper investigates the strategies employed by local journalists to verify AI-generated and manipulated imagery during the 2026 Romagna earthquake. Drawing on a qualitative methodology, this study identifies a multi-layered process of “situated verification.” The findings reveal that verification efficacy is predicated on territorial familiarity, professional networks, and direct institutional triangulation, which collectively compensate for technological and resource constraints. Local journalists emerge as epistemic mediators who stabilize the information ecosystem, mitigate public anxiety, and curb the spread of disinformation. Furthermore, institutional interventions, such as police-led fact-checking, function as both pragmatic verification tools and symbolic signals that promote responsible information sharing. By highlighting how verification is deeply rooted in temporality, social embeddedness, and local expertise, this research underscores the critical role of proximity journalism in crisis communication. The study contributes to the fields of visual epistemology and media literacy, demonstrating that relational and context-aware practices are essential for maintaining information integrity in an era of AI-driven visual disinformation. Full article
12 pages, 3058 KB  
Proceeding Paper
AI Facial Acupuncture Point Interactive Voice Health Care Teaching System
by Wen-Cheng Chen, Yu-Hsuan Chen, Yu-Hsing Chen, Jiu-Wen Wang, Hung-Jen Chen and Jr-Wei Tsai
Eng. Proc. 2026, 128(1), 37; https://doi.org/10.3390/engproc2026128037 - 16 Mar 2026
Viewed by 175
Abstract
We developed an AI-based system for facial acupoint recognition and healthcare support, integrating MediaPipe facial and hand tracking technologies to address the problems of inaccurate and non-standardized acupoint identification in traditional Chinese medicine (TCM). By leveraging facial landmark detection and fingertip tracking, the system enables accurate localization of facial acupoints to ensure precise stimulation. The system contributes to the standardization of acupoint recognition, intelligent health consultation, and the digital transformation of TCM practices. Further enhancements are needed, such as expanding acupoint recognition to other body parts (e.g., ears, hands, feet, and back) and integrating with wearable devices, to further promote personalized and precise TCM healthcare. Full article