Combating Hate Speech on Social Media: Applying Targeted Regulation, Developing Civil-Communicative Skills and Utilising Local Evidence-Based Anti-Hate Speech Interventions
Abstract
1. Introduction: Social Media as a Communicative Battleground
2. Hate Speech on Social Media as a Means of Group Oppression
2.1. Defining Hate Speech on Social Media
2.2. Group Oppression
3. A Peacebuilding Combinatorial Approach to Combating Hate Speech on Social Media
3.1. Aspect 1: Targeted Regulation of the Mechanics of Social Media Platforms
X's (2023) Hateful Conduct Policy states:

"We recognize that if people experience abuse on X, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature, and more harmful."

The statement further adds that X is firmly committed "to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized" (X 2023). In a similar vein, Facebook (n.d.) defines hate speech as

"a direct attack against people (…) on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation"

and makes clear that Facebook removes hate speech because it "creates an environment of intimidation and exclusion, and in some cases may promote offline violence" (Facebook n.d.).
3.2. Aspect 2: Developing Civil-Communicative Skills for Responsible and Informed Social Media Use
1. Civil digital media literacy, covering:
   (i) How tech companies are run;
   (ii) How search engines work, emphasising that the top result is not by any means the most reliable;
   (iii) How the online and the offline world might or might not be the same, i.e., how online noise distorts reality or reflects society;
   (iv) What the role of bots and algorithms is in the amplification of posts (a simplified, hypothetical illustration follows this list);
   (v) What happens when users comment, post, share or like;
   (vi) How the affordances on social media websites amplify messages/posts and comments and can make them go viral;
   (vii) The sophisticated ways in which pictures and videos can be audio-visually manipulated, including deep fakes, which are nearly impossible to detect with the naked eye (Schick 2020; Woolley 2020);
   (viii) The meaning and significance of coded bias in terms of programmed injustices, inequality and discrimination against certain groups;
   (ix) How any unintentional participation in the spread of hate speech can be avoided.
2. The skill to identify different types of “hate speech” and to understand their potential consequences for “the other”:
   (a) Stereotyping, Prejudice and Scapegoating;
   (b) Dehumanising Speech.
3. Counter-speech skills.
4. Discursive civility.
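To make the role of engagement-driven amplification in point (iv) more concrete, the sketch below shows a deliberately simplified, hypothetical feed-ranking rule in which the most-reacted-to posts rise to the top regardless of whether the reactions express approval or outrage. The Post structure, the weights and the example data are invented for illustration only; they do not describe the actual ranking system of any platform.

```python
# A minimal, hypothetical sketch of engagement-weighted feed ranking.
# None of this reflects a real platform's algorithm; fields, weights and
# example posts are invented purely to illustrate point (iv) above.
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int


def engagement_score(post: Post) -> float:
    # Shares and comments are weighted more heavily than likes because they
    # push a post into new feeds and conversations; the weights are arbitrary.
    return post.likes + 3 * post.comments + 5 * post.shares


def rank_feed(posts: List[Post]) -> List[Post]:
    # Rank purely by engagement: the rule is indifferent to whether the
    # engagement is supportive, outraged or hateful.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("Local library extends opening hours", likes=80, shares=3, comments=6),
        Post("Inflammatory rumour targeting a minority group", likes=25, shares=40, comments=70),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):6.0f}  {post.text}")
```

In this toy example the inflammatory post outranks the benign one (435 vs. 113 points) simply because it provokes more shares and comments, which is precisely the dynamic that civil digital media literacy asks users to recognise.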
3.3. Aspect 3: Utilising Local Evidence-Based Anti-Hate Speech Interventions on Social Media
3.4. Digital Media Arts for an Inclusive Public Sphere (DMAPS)
4. Conclusion: Responsibility and Capacity to Prevent Group Oppression
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
1. Our focus here is on major platforms such as Facebook, Twitter/X, TikTok and Reddit rather than community boards or chatrooms that have little influence on and power over societies.
5. Jones (2015, p. 684) argues that “Waldron characterises free expression as itself a demand of dignity, so that the trade-off takes place within the confines of dignity, with the implication that dignity will not lose out in the trade”. On dignity vs. freedom of expression see also Heyman (2008), Simpson (2013) and Bousquet (2022).
7. These can be any civil or political actor.
8. We refer to the US because two of the largest social media platforms are based there and operate under US law. It is a matter of fact that the US is where many such large tech companies are based, where the great digital revolution of Silicon Valley took place at the turn of the century, and that it is within this legal context that social media platforms with such global reach operate. This does not render our discussion US-centric; we cover the MENA region later in this paper.
9. This is not to suggest that circulation is not also facilitated by the sheer heterogeneity of social media platforms and the emergence of a new alt-tech landscape that is used to circumvent the moderation and restrictions of conventional social media platforms (see Ebner and Guhler 2024).
10. Change the Terms (n.d.) runs a communicative campaign entitled ‘Fix the Feed’ which includes three main steps: to fix the algorithms to stop ‘promoting the most incendiary, hateful content’, to ‘protect people equally’ and to ‘disclose business models and moderation practices’.
11. Other more proactive suggestions include encouraging social media platforms to formulate a purpose, such as bridging partisan divides, and to reward users who help contribute to the achievement of that purpose (see e.g., Bail 2021).
12. See also Carlson and Rousselle (2020). Our argument is not solely about content moderation and, as such, we do not address this issue in detail. We simply advocate holding social media companies to account for the enforcement of their community guidelines; it is a point about responsibility. For more information on the difficulty of content moderation see Gerrard (2018), Gillespie (2018), Wilson and Land (2020) and Díaz and Hecht-Felella (2021).
13. There are those who believe in absolute freedom of expression, and then there is the generally accepted idea that freedom of expression needs to be limited to prevent harm to others (see e.g., Gorenc 2022). In this paper, we are in favour of freedom of expression within the confines of dignity (in agreement with Waldron) and advocate for tech companies to deprive users of the algorithmic megaphone.
14. The imperative to teach children such media literacy skills from an early stage has been acknowledged and to some extent pioneered by Common Sense, a nonprofit organisation that has since 2003 designed curricula for school children focusing on a variety of aspects, including the importance of language and the harm that can derive from language, as well as norm education about community membership.
15. A clear example of Islamophobic stereotyping, rumour and disinformation was the set of conspiracies widely circulated on social media by the Leave.EU campaign in the United Kingdom prior to the 2016 Brexit referendum. Leave.EU propaganda directly linked the topical issue of immigration to Islamic extremism and terrorism in the crude trope ‘immigration without assimilation equals invasion’ (Bakir 2020, p. 11). The spread of this Islamophobic conspiracy and its success in shaping popular opinion relied directly on the algorithmic curation of social media users’ feeds. See Bakir (2020) and Caeser (2019).
16. Such training has been developed in various ways. One example is the simulation engine Bad News (Van der Linden 2023); another is Common Cause (n.d.). For lists of training resources see RAND (n.d.) and IREX (2023).
17. For examples of what governments across the world are doing to combat mis-/disinformation see Poynter’s guide (Funke and Flamini 2024).
18. Our approach focuses on ordinary civil and political actors rather than professional hate speech producers and disseminators, such as those working for troll farms who are paid to use hate speech for political gain.
19. A 2023 study by Zheng, Ross and Magdy interestingly tests whether counter-speech generated by ChatGPT can be effective in countering hate speech, but this approach comes with challenges and ethical questions. They argue that ‘Looking to the future, our analysis shows that the automatic generation of counterspeech remains a challenging task, even for current large language models’ and that the ‘prospect of using automatically generated counterspeech to counter hate speech on social media raises important ethical questions’ (Zheng et al. 2023, p. 70).
20. This need is also recognised by PEN America (2024) in their Guidelines for Safely Practicing Counterspeech, in which they provide specific recommendations on how to avoid escalation and achieve de-escalation.
21. For reasons of participant security, confidentiality and research ethics and integrity, we cannot provide any further information on these partners.
References
- Allport, Gordon. 1958. The Nature of Prejudice. London: Basic Books. [Google Scholar]
- Aral, Sinan. 2021. The Hype Machine: How Social Media Disrupts Our Elections, Our Economy and Our Health—And How We Must Adapt. London: HarperCollins Publishing. [Google Scholar]
- Arthur, Catherine, and Stefanie Pukallus. 2022. Theoretical Foundations of the DMAPS Approach. Position Paper 1. Edgecliff: British Council. [Google Scholar]
- Arthur, Charles. 2021. Social Warming: The Dangerous and Polarising Effects of Social Media. Edinburgh: One World. [Google Scholar]
- Bail, Chris. 2021. Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing. Princeton: Princeton University Press. [Google Scholar]
- Bakir, Vian. 2020. Psychological operations in digital political campaigns: Assessing Cambridge Analytica’s psychographic profiling and targeting. Frontiers in Communication 5: 67. [Google Scholar] [CrossRef]
- Bar-Ilan, Judit. 2007. Manipulating search engine algorithms: The case of Google. Journal of Information, Communication and Ethics in Society 5: 155–66. [Google Scholar] [CrossRef]
- Benesch, Susan, Derek Ruths, Kelly P. Dillon, Haji Mohammad Saleem, and Lucas Wright. 2016. Counterspeech on Twitter: A Field Study. Report, Public Safety Canada. Available online: https://dangerousspeech.org/counterspeech-on-twitter-a-field-study/ (accessed on 11 July 2023).
- Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity Press. [Google Scholar]
- Bousquet, Chris. 2022. Words That Harm: Defending the Dignity Approach to Hate Speech Regulation. The Canadian Journal of Law & Jurisprudence XXXV: 31–57. [Google Scholar]
- British Council. n.d. Call for Applications—Digital Media Arts for an Inclusive Public Sphere. Available online: https://iraq.britishcouncil.org/en/about/jobs/call-applications-–-digital-media-arts-inclusive-public-sphere (accessed on 18 April 2023).
- Build Up. 2022a. Digital Media Arts for an Inclusive Public Sphere. Final Report, April 2022, unpublished. [Google Scholar]
- Build Up. 2022b. Participatory Action Research, Polarisation, and Social Media: Ongoing Lessons from the Digital Maps Program. Edgecliff: British Council. [Google Scholar]
- Caeser, Ed. 2019. The Chaotic Triumph of Aaron Banks, the “Bad Boy of Brexit”. The New Yorker. March 18. Available online: https://www.newyorker.com/magazine/2019/03/25/the-chaotic-triumph-of-arron-banks-the-bad-boy-of-brexit (accessed on 3 April 2024).
- Carlson, Caitlin. 2021. Hate Speech. Massachusetts: MIT Press. [Google Scholar]
- Carlson, Caitlin, and Hayley Rousselle. 2020. Report and repeat: Investigating Facebook’s hate speech removal process. First Monday 25. [Google Scholar] [CrossRef]
- Castaño-Pulgarín, Sergio Andrés, Natalia Suárez-Betancur, Luz Magnolia Telano Vega, and Harvey Mauricio Herrera López. 2021. Internet, social media and online hate speech. Systematic review. Aggression and Violent Behaviour 58: 101608. [Google Scholar] [CrossRef]
- Center for Countering Digital Hate. 2023. X Content Moderation Failure. Report. September. Available online: https://counterhate.com/wp-content/uploads/2023/09/230907-X-Content-Moderation-Report_final_CCDH.pdf (accessed on 23 January 2024).
- Change the Terms. n.d. Fix the Feed. Available online: https://www.changetheterms.org (accessed on 23 January 2024).
- Chua, Amy. 2018. Political Tribes. Group Instinct and the Fate of Nations. London: Bloomsbury. [Google Scholar]
- Citron, Danielle, and Helen Norton. 2011. Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review 91: 1435–84. Available online: https://scholarship.law.bu.edu/faculty_scholarship/614 (accessed on 26 March 2024).
- Cohen-Almagor, Rafael. 2011. Fighting Hate and Bigotry on the Internet. Policy & Internet 3: 6. [Google Scholar]
- Coleman, Peter. 2021. The Way Out. How to Overcome Toxic Polarization. New York: Columbia University Press. [Google Scholar]
- Common Cause. n.d. Stop Disinformation Training. Available online: https://www.commoncause.org/stopdisinformationtraining (accessed on 22 January 2024).
- Davidson, Thomas, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. Paper Presented at the International AAAI Conference on Web and Social Media, Montreal, QC, Canada, May 15–18; vol. 11. [Google Scholar]
- Dewey, John. 2011. Democracy and Education. An Introduction to the Philosophy of Education. New York: Simon and Brown. First published 1916. [Google Scholar]
- Díaz, Ángel, and Laura Hecht-Felella. 2021. Double Standards in Social Media Content Moderation. Brennan Center for Justice. August. Available online: https://www.skeyesmedia.org/documents/bo_filemanager/Double_Standards_Content_Moderation.pdf (accessed on 23 January 2024).
- Ebner, Jakob, and Julia Guhler. 2024. Extremism, the extreme right and conspiracy myths on social media. In Handbook of Conflict and Peace Communication. Edited by Stacey Connaughton and Stefanie Pukallus. New York: Routledge, Forthcoming. [Google Scholar]
- Emejulu, Akwugo, and Callum McGregor. 2019. Towards a radical digital citizenship in digital education. Critical Studies in Education 60: 131–47. [Google Scholar] [CrossRef]
- European Parliament. 2023a. EU AI Act: First Regulation on Artificial Intelligence. European Parliament News. December 19. Available online: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (accessed on 23 January 2024).
- European Parliament. 2023b. Parliament’s Negotiating Position on the Artificial Intelligence Act. Plenary: At a Glance. June. Available online: https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/747926/EPRS_ATA(2023)747926_EN.pdf (accessed on 23 January 2024).
- Facebook. n.d. Hate Speech. Available online: https://transparency.fb.com/en-gb/policies/community-standards/hate-speech/ (accessed on 18 April 2023).
- Fox, Chris. 2020. Social Media: How Might it Be Regulated? BBC News. November 12. Available online: https://www.bbc.co.uk/news/technology-54901083 (accessed on 18 April 2023).
- Foxman, Abraham, and Christopher Wolf. 2012. Viral Hate. Containing Its Spread on the Internet. Basingstoke: Palgrave Macmillan. [Google Scholar]
- Frenkel, Sheera, and Kate Conger. 2022. Hate Speech’s Rise on Twitter is Unprecedented, Researchers Find. New York Times. December 2. Available online: https://www.nytimes.com/2022/12/02/technology/twitter-hate-speech.html (accessed on 14 November 2023).
- Funke, Daniel, and Daniela Flamini. 2024. A Guide to Anti-misinformation Actions around the World. Poynter. Available online: https://www.poynter.org/ifcn/anti-misinformation-actions (accessed on 22 January 2024).
- Gagliardone, Iginio, Danit Gal, Thiago Alves, and Gabriela Martinez. 2015. Countering Online Hate Speech. Paris: UNESCO. [Google Scholar]
- Garland, Joshua, Keyan Ghazi-Zahedi, Jean-Gabriel Young, Laurent Hébert-Dufresne, and Mirta Galesic. 2022. Impact and dynamics of hate and counter speech online. EPJ Data Science 11: 3. [Google Scholar] [CrossRef]
- Gerrard, Ysabel. 2018. Beyond the hashtag: Circumventing content moderation on social media. New Media & Society 20: 4492–511. [Google Scholar]
- Gillespie, Tarleton. 2018. Custodians of the Internet. New Haven: Yale University Press. [Google Scholar]
- Gorenc, Nina. 2022. Hate speech or free speech: An ethical dilemma? International Review of Sociology 32: 413–25. [Google Scholar] [CrossRef]
- Green, Penny, Thomas MacManus, and Alicia de la Cour Venning. 2015. Countdown to annihilation: Genocide in Myanmar. International State Crime Initiative. Available online: http://statecrime.org/state-crime-research/isci-report-countdown-to-annihilation-genocide-in-myanmar/ (accessed on 15 November 2023).
- Hagan, John, and Wenona Rymond-Richmond. 2008. The Collective Dynamics of Racial Dehumanization and Genocidal Victimization in Darfur. American Sociological Review 73: 875–902. [Google Scholar] [CrossRef]
- Halperin, Eran. 2011. Emotional Barriers to Peace: Emotions and Public Opinion of Jewish Israelis about the Peace Process in the Middle East. Peace and Conflict 17: 22–45. [Google Scholar] [CrossRef]
- Hangartner, Dominik, Gloria Gennaro, Sary Alasiri, Nicholas Bahrich, Alexandra Bornhoft, Joseph Boucher, Buket Buse Demirci, Laurenz Derksen, Aldo Hall, Matthias Jochum, and et al. 2021. Empathy-based counterspeech can reduce racist hate speech in a social media field experiment, Brief Report. Proceedings of the National Academy of Sciences of the United States of America 118: e2116310118. [Google Scholar] [CrossRef] [PubMed]
- Harrison, Jackie, and Stefanie Pukallus. 2018. The Politics of Impunity: A Study of Journalists’ Experiential Accounts of Impunity in Bulgaria, Democratic Republic of Congo, India, Mexico and Pakistan. Journalism 22: 303–19. [Google Scholar] [CrossRef]
- Hasen, Richard. 2022. Cheap Speech: How Disinformation Poisons Our Politics―And How to Cure It. New Haven: Yale University Press. [Google Scholar]
- Heyman, Steven. 2008. Free Speech and Human Dignity. New Haven: Yale University Press. [Google Scholar]
- Hintz, Arne, Lena Dencik, and Karin Wahl-Jorgensen. 2018. Digital Citizenship in a Datafied Society. Cambridge: Polity Press. [Google Scholar]
- Howard, Philip. 2020. Lie Machines. New Haven: Yale University Press. [Google Scholar]
- IREX. 2023. Supporting Information Integrity and Resilience: Tools and Resources. Available online: https://www.irex.org/supporting-information-integrity-and-resilience-tools-and-resources (accessed on 22 January 2024).
- Jardina, Ashley, and Spencer Piston. 2021. Hiding in plain sight: Dehumanization as a foundation of white racial prejudice. Sociology Compass 15. [Google Scholar] [CrossRef]
- Jones, Peter. 2015. Dignity, Hate and Harm. Political Theory 43: 678–86. [Google Scholar] [CrossRef]
- Klein, Ezra. 2020. Why We Are Polarized. London: Profile Books. [Google Scholar]
- Littman, Rebecca, and Elizabeth Paluck. 2015. The cycle of violence: Understanding individual participation in collective violence. Political Psychology 36: 79–99. [Google Scholar] [CrossRef]
- Livingston Smith, David. 2011. Less than Human. Why We Demean, Enslave, and Exterminate Others. New York: St. Martin’s Griffin. [Google Scholar]
- Livingston Smith, David. 2020. Making Monsters. The Uncanny Power of Dehumanization. Cambridge: Harvard University Press. [Google Scholar]
- Locally Driven Peacebuilding. 2015. Signed Letter. March 27. Available online: https://www.cla.purdue.edu/ppp/documents/publications/Locally.pdf (accessed on 15 November 2023).
- Mason, Liliana. 2018. Uncivil Agreement: How Politics Became Our Identity. Chicago: Chicago University Press. [Google Scholar]
- Meta. n.d. Our Principles. Available online: https://about.meta.com/uk/company-info/ (accessed on 18 April 2023).
- Mihailidis, Paul. 2018. Civic media literacies: Re-Imagining engagement for civic intentionality. Learning, Media and Technology 43: 152–64. [Google Scholar] [CrossRef]
- Mouffe, Chantal. 2005. On the Political. London: Routledge. [Google Scholar]
- Neilsen, Rhiannon. 2015. ‘Toxification’ as a more precise early warning sign for genocide than dehumanization? An emerging research agenda. Genocide Studies and Prevention: An International Journal 9: 83–95. [Google Scholar] [CrossRef]
- Noble, Safia. 2018. Algorithms of Oppression. How Search Engines Reinforce Racism. New York: New York University Press. [Google Scholar]
- Opotow, Susan. 1990. Moral Exclusion and Injustice: An Introduction. Journal of Social Issues 46: 1–20. [Google Scholar] [CrossRef]
- Oppenheimer, Louis. 2006. The Development of Enemy Images: A Theoretical Contribution. Peace and Conflict: Journal of Peace Psychology 12: 269–92. [Google Scholar] [CrossRef]
- Ozalp, Sefa, Matthew Williams, Pete Burnap, Han Liu, and Mohamed Mostafa. 2020. Antisemitism on Twitter: Collective Efficacy and the Role of Community Organisations in Challenging Online Hate Speech. Social Media + Society 6: 1–20. [Google Scholar] [CrossRef]
- Papcunová, Jana, Marcel Martončik, Denisa Fedáková, Michal Kentoš, Miroslava Bozogáňová, Ivan Srba, Robert Moro, Matúš Pikuliak, Marián Šimko, and Matúš Adamkovič. 2023. Hate speech operationalization: A preliminary examination of hate speech indicators and their structure. Complex & Intelligent Systems 9: 2827–42. [Google Scholar] [CrossRef]
- PEN America. 2024. Guidelines for Safely Practicing Counterspeech. Available online: https://onlineharassmentfieldmanual.pen.org/guidelines-for-safely-practicing-counterspeech (accessed on 22 January 2024).
- Pukallus, Stefanie. 2022. Communication in Peacebuilding. Civil Wars, Civility and Safe Spaces. Basingstoke: Palgrave Macmillan. [Google Scholar]
- Pukallus, Stefanie. 2024a. Discursive civility. Theory and practice. In Handbook of Conflict and Peace Communication. Edited by Stacey Connaughton and Stefanie Pukallus. New York: Routledge, Forthcoming. [Google Scholar]
- Pukallus, Stefanie. 2024b. The three communicative dimension of hate speech. In Handbook of Conflict and Peace Communication. Edited by Stacey Connaughton and Stefanie Pukallus. New York: Routledge. [Google Scholar]
- RAND. n.d. Tools that Fight Disinformation Online. Available online: https://www.rand.org/research/projects/truth-decay/fighting-disinformation/search.html (accessed on 23 January 2024).
- Reich, Rob, Mehran Sahami, and Jeremy Weinstein. 2023. System Error: Where Big Tech Went Wrong and How We Can Reboot. London: Hodder Paperbacks. [Google Scholar]
- Ressa, Maria. 2023. How to Stand Up to a Dictator. London: Penguin. [Google Scholar]
- Royzman, Edward, Clark McCauley, and Paul Rozin. 2005. From Plato to Putnam: Four ways to think about hate. In The Psychology of Hate. Edited by R. J. Sternberg. Washington, DC: American Psychological Association, pp. 3–35. [Google Scholar]
- Said, Edward. 2001. Orientalism. London: Penguin Classics. [Google Scholar]
- Savage, Rowan. 2012. ‘With scorn and bias’: Genocidal dehumanisation in bureaucratic discourse. In Genocide Perspectives IV. Essays on Holocaust and Genocide. Edited by Colin Tatz. Sydney: The Australian Institute for Holocaust & Genocide Studies, pp. 21–64. [Google Scholar]
- Savage, Rowan. 2013. Modern genocidal dehumanization: A new model. Patterns of Prejudice 47: 139–61. [Google Scholar] [CrossRef]
- Schick, Nina. 2020. Deep Fakes and the Infocalypse. What You Urgently Need to Know. London: Monoray. [Google Scholar]
- Schmitt, Carl. 2007. The Concept of the Political. Chicago: Chicago University Press. First published 1932. [Google Scholar]
- Sellars, Andy. 2016. Defining Hate Speech. The Berkman Klein Center for Internet & Society. December. Available online: https://cyber.harvard.edu/publications/2016/DefiningHateSpeech (accessed on 23 January 2024).
- Sherry, Mark. 2010. Disability Hate Crimes: Does Anyone Really Hate Disabled People? London: Routledge. [Google Scholar]
- Siapera, Eugenia, and Paloma Viejo-Otero. 2021. Governing Hate: Facebook and Digital Racism. Television & New Media 22: 112–30. [Google Scholar] [CrossRef]
- Simpson, Robert. 2013. Dignity, Harm, and Hate Speech. Law and Philosophy 32: 701–28. [Google Scholar] [CrossRef]
- Singer, Peter, and Emerson Brooking. 2018. LikeWar: The Weaponization of Social Media. New York: Mariner Books. [Google Scholar]
- Stanton, Gregory. 2004. Could the Rwandan genocide have been prevented? Journal of Genocide Research 6: 211–28. [Google Scholar] [CrossRef]
- Strossen, Nadine. 2018. Hate: Why We Should Resist It with Free Speech, Not Censorship. Oxford: Oxford University Press. [Google Scholar]
- Susskind, Jamie. 2022. The Digital Republic: On Freedom and Democracy in the 21st Century. London: Bloomsbury. [Google Scholar]
- Tsesis, Alexander. 2009. Dignity and Speech: The Regulation of Hate Speech in a Democracy. Law Review 44: 497–532. Available online: http://lawecommons.luc.edu/facpubs (accessed on 28 March 2024).
- Ullmann, Stefanie, and Marcus Tomalin. 2020. Quarantining online hate speech: Technical and ethical perspectives. Ethics and Information Technology 22: 69–80. [Google Scholar] [CrossRef]
- United Nations. n.d. Understanding Hate Speech: What Is Hate Speech. Available online: https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech (accessed on 7 March 2024).
- Van der Linden, Sander. 2023. Foolproof: Why We Fall for Misinformation and How to Build Immunity. London: Fourth Estate. [Google Scholar]
- Waldron, Jeremy. 2012. The Harm in Hate Speech. Cambridge: Harvard University Press. [Google Scholar]
- Waller, James. 2007. Becoming Evil: How Ordinary People Commit Genocide and Mass Killing. Oxford: Oxford University Press. [Google Scholar]
- Wan, Sai, and Ki Joon Kim. 2023. Content Moderation on Social Media: Does It Matter Who and Why Moderates Hate Speech? Cyberpsychology, Behavior, and Social Networking 26: 527–34. [Google Scholar] [CrossRef]
- Weitz, Eric. 2005. A Century of Genocide. Utopias of Race and Nation. Princeton: Princeton University Press. [Google Scholar]
- Williams, Amanda, Clio Oliver, Katherine Aumer, and Chanel Meyers. 2016. Racial microaggressions and perceptions of Internet memes. Computers in Human Behavior 63: 424–32. [Google Scholar] [CrossRef]
- Williams, Matthew. 2021. The Science of Hate. How Prejudice Becomes Hate and What We Can Do to Stop It. London: Faber & Faber Limited. [Google Scholar]
- Wilson, Carolyn. 2019. Media and Information Literacy: Challenges and Opportunities for the World of Education. Ontario: The Canadian Commission for UNESCO’s IdeaLab, November. [Google Scholar]
- Wilson, Richard Ashby, and Molly Land. 2020. Hate Speech on Social Media: Content Moderation in Context. Connecticut Law Review 52: 1029–76. Available online: https://ssrn.com/abstract=3690616 (accessed on 15 November 2023).
- Woolley, Samuel. 2020. The Reality Game. How the Next Wave of Technology Will Break the Truth and What We Can Do about It. London: Endeavour. [Google Scholar]
- Woolley, Samuel. 2023. Manufacturing Consensus. Understanding Propaganda in the Era of Automation and Anonymity. New Haven: Yale University Press. [Google Scholar]
- Wylie, Christopher. 2019. Mindf*ck: Inside Cambridge Analytica’s Plot to Break the World. London: Profile Books. [Google Scholar]
- X. 2023. Hateful Conduct Policy. Available online: https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy (accessed on 10 November 2023).
- Young, Iris Marion. 1990. Justice and the Politics of Difference. Princeton: Princeton University Press. [Google Scholar]
- Zhang, Ziqi, and Lei Luo. 2019. Hate speech detection: A solved problem? the challenging case of long tail on twitter. Semantic Web 10: 925–45. [Google Scholar] [CrossRef]
- Zheng, Yi, Björn Ross, and Walid Magdy. 2023. What Makes Good Counterspeech? A Comparison of Generation Approaches and Evaluation Metrics. Paper Presented at the 1st Workshop on Counter Speech for Online Abuse (CS4OA), Prague, Czech Republic, September 11–12; Prague: Association for Computational Linguistics, pp. 62–71. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).