The Impact of Algorithms on Public Opinion: Disinformation, Social Media Use and Generative Artificial Intelligence

A special issue of Societies (ISSN 2075-4698).

Deadline for manuscript submissions: 31 December 2025

Special Issue Editor


Prof. Dr. David García-Marín
Guest Editor
Department of Journalism and Corporate Communication, Faculty of Communication Sciences, Rey Juan Carlos University, 28933 Madrid, Spain
Interests: disinformation; fact-checking; generative AI; podcast; radio; journalism

Special Issue Information

Dear Colleagues,

Although not a phenomenon exclusive to our time, disinformation has been a focal point of communication research since 2016. Scholars and analysts such as Graves (2016), D'Ancona (2017), McIntyre (2018), and McNair (2018), among others, have concentrated their efforts on understanding the causes, motivations, and dissemination processes of fake news, as well as its political, economic, cultural, and social implications. Disinformation represents one of the greatest challenges that the media system and society at large must face today, especially during crises, emergencies, and electoral periods (García-Marín et al., 2023); this Special Issue addresses that challenge.

Simultaneously, social media has radically transformed the way information is distributed and consumed in contemporary society. The ease with which unverified content can be shared on these platforms has led to the spread of erroneous information, which can have serious consequences for public opinion and decision-making.

Moreover, social media algorithms tend to create filter bubbles, where users are primarily exposed to content that reinforces their preexisting beliefs. This can limit access to balanced information and foster social polarization.

This situation has been exacerbated by the popularization of generative AI, a technology capable of significantly impacting disinformation campaigns across four distinct dimensions: (1) the generation of false content, (2) its amplification and viralization, (3) the personalization of disinformative narratives, and (4) the detection and verification of such false information.

This Special Issue aims to gather theoretical or empirical works on the convergence between social media, disinformation, and generative AI in order to reflect on the impact algorithms have on social representations of reality and the formation of public opinion.

Suggested topics include, but are not limited to, the following:

  • Social media, algorithms, and public opinion.
  • Use of generative artificial intelligence for the production of false content.
  • Potential of artificial intelligence to combat disinformation.
  • Automated fact-checking.
  • Algorithms, disinformation, and polarization.

References

D'Ancona, M. (2017). Post-truth: The new war on truth and how to fight back. Ebury Press.

García-Marín, D., Rubio-Jordán, A. V., & Salvat-Martinrey, G. (2023). Chequeando al fact-checker: Prácticas de verificación política y sesgos partidistas en Newtral (España) [Checking the fact-checker: Political verification practices and partisan biases in Newtral (Spain)]. Revista de Comunicación, 22(2), 207–223. https://doi.org/10.26441/RC22.2-2023-3184

Graves, L. (2016). Deciding what's true: The rise of political fact-checking in American journalism. Columbia University Press.

McIntyre, L. (2018). Post-truth. MIT Press.

McNair, B. (2018). Fake news: Falsehood, fabrication and fantasy in journalism. Routledge.

Prof. Dr. David García-Marín
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and conceptual papers are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Societies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • algorithms
  • disinformation
  • public opinion
  • social media
  • generative artificial intelligence
  • polarization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (1 paper)


Research

Article
Shooting the Messenger? Harassment and Hate Speech Directed at Journalists on Social Media
by Simón Peña-Fernández, Urko Peña-Alonso, Ainara Larrondo-Ureta and Jordi Morales-i-Gras
Societies 2025, 15(5), 130; https://doi.org/10.3390/soc15050130 - 10 May 2025
Abstract
Journalists have incorporated social networks into their work as a standard tool, enhancing their ability to produce and disseminate information and making it easier for them to connect more directly with their audiences. However, this greater presence in the digital public sphere has also increased their exposure to harassment and hate speech, particularly in the case of women journalists. This study analyzes the presence of harassment and hate speech in responses (n = 60,684) to messages that 200 journalists and media outlets posted on X (formerly Twitter) accounts during the days immediately preceding and following the July 23 (23-J) general elections held in Spain in 2023. The results indicate that the most common forms of harassment were insults and political hate, which were more frequently aimed at personal accounts than institutional ones, highlighting the significant role of political polarization—particularly during election periods—in shaping the hostility that journalists face. Moreover, although, generally speaking, the total number of harassing messages was similar for men and women, it was found that a greater number of sexist messages were aimed at women journalists, and an ideological dimension was identified in the hate speech that extremists or right-wing populists directed at them. This study corroborates that this is a minor but systemic issue, particularly from a political and gender perspective. To counteract this, the media must develop proactive policies and protective actions extending even to the individual level, where this issue usually applies.
