
Search Results (298)

Search Parameters:
Keywords = hate

19 pages, 9603 KB  
Article
Understanding Modality-Specific Vulnerabilities in Vision–Language Models Under Adversarial Attacks
by Maisha Binte Rashid and Pablo Rivas
AI 2026, 7(4), 135; https://doi.org/10.3390/ai7040135 - 9 Apr 2026
Abstract
Vision–language models (VLMs), such as Contrastive Language–Image Pretraining (CLIP), are increasingly deployed in real-world applications, including content moderation, misinformation detection, and fraud analysis, making their robustness to adversarial attacks a critical concern. While adversarial robustness has been widely studied in unimodal models, modality-specific vulnerabilities in multimodal models remain underexplored. In this work, we analyze CLIP by applying gradient-based adversarial attacks to its vision and language modalities, both independently and jointly, and evaluating performance on two multimodal classification benchmarks: the Facebook Hateful Memes dataset and a large-scale Suspicious Car Parts dataset. Using Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks along with multiple adversarial retraining strategies, we show that adversarial perturbations on the image modality consistently cause the most severe and unstable performance degradation. These results demonstrate that the vision modality is the primary vulnerability in CLIP, highlighting the need for modality-specific defense strategies that focus more on the weaker modality in multimodal systems. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
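The FGSM attack the authors apply to the image modality follows the standard one-step signed-gradient rule. A minimal, self-contained sketch of that rule on a toy model (the tiny logistic "classifier" and all values here are illustrative stand-ins, not the paper's CLIP pipeline):

```python
import numpy as np

# Toy differentiable "image classifier": logistic regression on a
# flattened input. The weights and input are illustrative stand-ins for
# a VLM's image branch; the FGSM update rule itself is standard.
rng = np.random.default_rng(0)
w = rng.normal(size=16)          # fixed model weights
x = rng.normal(size=16)          # clean "image" (flattened)
y = 1.0                          # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_x(x, w, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(w @ x)
    return (p - y) * w           # dL/dx for logistic loss

# FGSM: one signed-gradient step of size epsilon applied to the input.
epsilon = 0.1
x_adv = x + epsilon * np.sign(loss_grad_wrt_x(x, w, y))

# The adversarial input lowers the model's confidence in the true label.
print(sigmoid(w @ x), sigmoid(w @ x_adv))
```

For the true label y = 1 the gradient points opposite to w, so the perturbed input is guaranteed to reduce the predicted probability; PGD iterates this step with a projection back into an ε-ball.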

5 pages, 159 KB  
Proceeding Paper
Migrant Adults and Instagram Reels: A Narrative Review on Visual Micro-Formats for the Informal Learning of L2
by Francesco Pio Dilillo, Caterina Sapone, Stefano Triberti and Laura Sara Agrati
Proceedings 2026, 139(1), 3; https://doi.org/10.3390/proceedings2026139003 - 2 Apr 2026
Viewed by 287
Abstract
Digital platforms, and social media in particular, play a central role in daily life. At the same time, they may also amplify hate speech, stereotypes, and polarization. In this context, Instagram appears as a hybrid space where self-representation, social connection, functional access to information and content creation coexist. This paper offers a narrative review of the literature on the use of Instagram Reels as tools for the informal learning of an L2 language among migrants. The review of seven studies shows that Instagram’s short visual formats can support language acquisition in non-formal settings and help users navigate cultural negotiation in everyday communication. Full article
31 pages, 1934 KB  
Review
Artificial Intelligence for Detecting Electoral Disinformation on Social Media: Models, Datasets, and Evaluation
by Félix Díaz, Nhell Cerna, Rafael Liza and Bryan Motta
Information 2026, 17(3), 292; https://doi.org/10.3390/info17030292 - 17 Mar 2026
Viewed by 423
Abstract
During elections, information manipulation on social media has accelerated the use of artificial intelligence, yet the evidence is difficult to interpret without an integrated view of methods, data, and evaluation. We mapped 557 English-language journal articles from Scopus and Web of Science, combining performance indicators, science mapping, and a focused full-text synthesis of highly cited papers. The literature grows sharply after 2019, peaks in 2025, and shows geographically uneven production, with collaboration structured around a small set of hubs. The thematic structure suggests that, during the pandemic era, infodemic-related research served as a catalyst, intensifying scientific attention to fake news and disinformation and expanding the associated detection and monitoring agendas. In addition, socio-political harm constructs such as hate speech, extremism, and polarization appear as recurrent and structurally central targets, highlighting that election-relevant work often extends beyond veracity assessment toward monitoring discourse risks. Blockchain also emerges as a novel and adjacent integrity theme, aligned with authenticity and provenance-oriented mitigation rather than mainstream detection pipelines. AI for electoral disinformation is not reducible to veracity classification, as influential studies also target automation and coordinated behavior, verification support, diffusion analysis, and estimation frameworks that focus on exposure and impact. Evaluation remains heterogeneous and is often shaped by benchmark settings, making high accuracy values hard to compare and potentially misleading when labeling quality, topic leakage, or context shift are not characterized. 
Overall, the findings motivate evaluation protocols that align operational objectives with modeling roles and explicitly address robustness to temporal and platform changes, asymmetric error costs during election windows, and representativeness across electoral contexts and languages, while also guiding future work on emerging integrity challenges and governance-relevant deployment settings. Full article
(This article belongs to the Section Artificial Intelligence)

21 pages, 1134 KB  
Article
Gen Alpha in the Arena: The Parental Paradox in Mitigating Cyber-Trauma and Mental Health Risks in Online Gaming
by Mostafa Aboulnour Salem
Soc. Sci. 2026, 15(3), 181; https://doi.org/10.3390/socsci15030181 - 12 Mar 2026
Viewed by 353
Abstract
Cyber-trauma has emerged as an important concern within online gaming environments, with growing implications for children’s mental health and well-being. Multiplayer games increasingly function as routine spaces for interaction, competition, and informal learning, which may expose young players to hostile behaviours such as harassment, hate speech, exclusion, and repeated targeting. Understanding the psychological consequences of these experiences and the protective role of family support is therefore essential. This study investigates the relationship between cyber-trauma victimisation (CV) and four mental health outcomes—depressive symptoms (DS), anxiety symptoms (AS), perceived stress (PS), and emotional distress (ED)—among Generation Alpha student gamers, while examining parental support as a moderating factor. Survey data were collected from 1223 students of diverse Arab nationalities enrolled in schools in Saudi Arabia, with Saudi nationals representing approximately 15% of the sample. The results indicate that CV is a strong and consistent predictor of all examined mental health outcomes. Higher levels of CV are significantly associated with increased depressive symptoms (β = 0.58), anxiety symptoms (β = 0.55), perceived stress (β = 0.52), and emotional distress (β = 0.60) (all p < 0.001). Parental support significantly moderates these relationships, weakening the association between cyber-trauma exposure and adverse psychological outcomes. These findings contribute to the growing literature on children’s digital well-being by demonstrating that online gaming environments can serve as meaningful psychosocial stressors for young players. The results further highlight the importance of family-centred protective mechanisms, suggesting that parental emotional support, guidance, and communication can play a critical role in buffering the mental health risks associated with hostile online interactions. Full article
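The moderation analysis the abstract describes (parental support weakening the link between victimisation and mental health outcomes) is conventionally tested with an interaction term in a regression. A toy sketch on simulated data, where all variables and coefficients are invented for illustration and do not reproduce the study's estimates:

```python
import numpy as np

# Simulated moderation model: parental support (s) buffers the effect
# of cyber-trauma victimisation (cv) on a depressive-symptoms score.
rng = np.random.default_rng(1)
n = 1000
cv = rng.normal(size=n)
s = rng.normal(size=n)
# True model: positive main effect of cv, negative cv x s interaction.
y = 0.6 * cv - 0.2 * cv * s + rng.normal(scale=0.5, size=n)

# OLS with an interaction term: y ~ 1 + cv + s + cv*s
X = np.column_stack([np.ones(n), cv, s, cv * s])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_cv, b_s, b_int = beta
print(f"CV effect: {b_cv:.2f}, interaction: {b_int:.2f}")
```

A negative interaction coefficient means the slope of cv on the outcome shrinks as support rises, which is the "buffering" pattern the study reports.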

34 pages, 7889 KB  
Article
Examining Topics and Trends in Cyber Aggression and Abuse: A Latent Dirichlet Allocation Analysis
by Amir Alipour Yengejeh and Larry Tang
Mathematics 2026, 14(6), 932; https://doi.org/10.3390/math14060932 - 10 Mar 2026
Viewed by 368
Abstract
Cyber aggression and abuse (CAA) has become a major interdisciplinary research area spanning psychology, communication, public health, and computer science. Existing reviews have largely focused on detection methods and model performance, offering limited insight into how CAA research themes have evolved over time at the field level. This study addresses this gap by, to the best of our knowledge, applying Latent Dirichlet Allocation (LDA) to 2309 Web of Science–indexed publications with English-language abstracts published between 2000 and 2024, providing a large-scale, longitudinal, and multi-level analysis of the literature. The model identifies 29 latent topics, which are organized using the User–Activity–Content (UAC) framework to link psychosocial research, platform-mediated behaviors, and computational detection approaches. Temporal analysis reveals a clear methodological transition: early dominance of survey-based and psychosocial themes gradually declines in relative prominence, while computational topics related to machine learning, deep learning, and pre-trained language models exhibit sustained growth, particularly after 2010. A Hot–Cold topic classification further distinguishes emerging, stable, and declining research directions. Journal-level, disciplinary, and geographic analyses reveal systematic differentiation across venues and regions, with complementary emphases on psychosocial and computational approaches. These findings provide a structured, field-level perspective on the evolution of CAA research and offer practical value for researchers, funding agencies, journal editors, and publishers by identifying dominant, emerging, and declining themes that can inform research prioritization, editorial planning, and strategic investment. Full article
(This article belongs to the Special Issue Statistics and Data Science)
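Fitting an LDA topic model to a corpus of abstracts can be sketched as follows; the four invented snippets and two topics merely stand in for the paper's 2309 abstracts and 29 topics:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny abstract-like snippets: two psychosocial, two computational.
docs = [
    "survey adolescents cyberbullying victimization school wellbeing",
    "deep learning transformer model detects hate speech text",
    "survey school students report online harassment wellbeing",
    "machine learning classifier detects abusive text posts",
]
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each document gets a distribution over the latent topics; temporal
# trends come from aggregating these weights by publication year.
doc_topics = lda.transform(counts)
print(doc_topics.round(2))
```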

19 pages, 3114 KB  
Article
Nano-Biocatalysis for Enhanced Lignocellulosic Bioethanol Fermentation: Synergistic Effects of Nanomaterials on Substrate-Induced Enzyme Activity
by Chinmay Hate, Sejal Shirke and Mamata S. Singhvi
Catalysts 2026, 16(3), 237; https://doi.org/10.3390/catal16030237 - 3 Mar 2026
Viewed by 717
Abstract
The conversion of lignocellulosic biomass (LCB) into biofuels is hindered by its inherent resistance and the drawbacks of conventional pretreatment, which include high cost, intensive energy use, and inhibitor formation. Here, we present a novel, one-pot bioconversion process that bypasses pretreatment by integrating cerium-doped iron oxide nanoparticles (CeFe3O4NPs) with a specialized enzyme system. The system utilizes enzyme supernatant from Penicillium janthinellum mutant EU-30, a strain developed via chemical–physical mutagenesis, which exhibits stable hemicellulase activity and a 25–30% increase in cellulase activity. The integrated approach effectively saccharified raw sugarcane bagasse (SB) within 24 h, generating the highest yields of 12.8 ± 0.5 g/L glucose and 11.54 ± 0.5 g/L xylose compared to other substrates tested. Subsequent fermentation with Saccharomyces cerevisiae yielded 13.47 g/L ethanol (1.21 g/L/h productivity) and demonstrated concurrent consumption of both hexose and pentose sugars. We propose that residual CeFe3O4NPs in the hydrolysate upregulate xylose-metabolism-related genes in S. cerevisiae, alleviating carbon catabolite inhibition and thereby increasing xylose utilization. This method offers a streamlined, economical, and sustainable platform for producing carbon-neutral bioethanol from agricultural waste, eliminating costly pretreatment and simplifying downstream processing. Full article
(This article belongs to the Section Biocatalysis)

13 pages, 510 KB  
Article
Authoritarian Aggression: A Unique Predictor of Attitudes to Sex- and Gender-Based Crime
by Blake A. Kozlowski, Ashlyn S. Olson, Alizay R. Naqvi, Alexis S. Amos and Andrew S. Franks
Sexes 2026, 7(1), 12; https://doi.org/10.3390/sexes7010012 - 24 Feb 2026
Viewed by 537
Abstract
A recently developed nonpartisan authoritarian aggression scale (NAAS) has a robust nomological network that includes attitudes toward women and LGBTQ+ individuals. The current research was meant to further validate the scale by demonstrating its ability to predict unique variance in attitudes relating to sex crimes (i.e., rape myth acceptance) and anti-transgender hate crimes when controlling for potentially relevant cognitive (i.e., need for cognition, intolerance of uncertainty) and cultural (i.e., Christian nationalism) variables. A sample of 100 U.S. participants was recruited from Prolific and completed an online survey via Qualtrics. A series of correlation analyses showed that the NAAS was significantly related to all of the other predictor variables as well as both the sex and hate crime outcomes at the bivariate level, adding to the nomological network of the NAAS. Multiple regression analyses showed that the combination of predictors explained significant variance in both outcomes and that the NAAS was the only predictor to explain unique variance in both sex crime and anti-transgender hate crime attitudes. The results imply that authoritarian aggression poses a danger for women, transgender individuals, and victims of sex crimes and hate crimes more broadly. Future research should examine ways of attenuating authoritarian aggression in individuals and communities to protect those who are vulnerable due to their sex, sexual orientation, or gender identity. Full article
(This article belongs to the Section Sexual Behavior and Attitudes)

40 pages, 1792 KB  
Article
Why So Meme? A Comparative and Explainable Analysis of Multimodal Hateful Meme Detection
by Nor Saiful Azam Bin Nor Azmi, Michal Ptaszynski, Fumito Masui and Abu Nowhash Chowdhury
Mach. Learn. Knowl. Extr. 2026, 8(2), 50; https://doi.org/10.3390/make8020050 - 21 Feb 2026
Viewed by 731
Abstract
The rise of toxic content, particularly in the form of hateful memes, poses a significant challenge to social media platforms. This paper presents an empirical comparative study of unimodal and multimodal architectures for toxic content detection. Rather than proposing a novel architecture, the study evaluates the efficacy of a modular Late Fusion framework (RoBERViT) against specialized unimodal baselines (RoBERTa and ViT) and a generalist Large Multimodal Model (LLaVA). Both unimodal and multimodal configurations across two distinct benchmarks—the imbalanced Innopolis Hateful Memes dataset and the confounder-driven Facebook Hateful Meme dataset—were explored. Beyond quantitative metrics, this study conducts a qualitative analysis using Explainable AI (LIME) and a Large Multimodal Model (LLaVA) to investigate model reasoning. Results demonstrate that the multimodal fusion model consistently outperformed its unimodal counterparts on the Innopolis Hateful Meme dataset, achieving a toxic class F1-score of 0.6439 compared to the text-only score of 0.5794. However, on the Facebook Hateful Meme dataset, text-only models remain competitive, highlighting the “benign confounder” challenge. The qualitative analysis reveals that text remains the dominant modality, with models often relying on surface-level keywords. Notably, the Vision Transformer frequently uses text overlays as a visual proxy for hate, while the LLaVA model struggles with hallucinated toxicity in benign confounder contexts. These findings underscore the persistent challenge of achieving true multimodal understanding in hate speech detection. Full article
(This article belongs to the Special Issue Language Acquisition and Understanding)
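The Late Fusion design the paper evaluates combines independently encoded modalities before a shared classifier head. A hedged sketch with random placeholder embeddings (the 768-dimensional vectors and the linear head are assumptions chosen to match typical RoBERTa/ViT output sizes, not taken from the paper):

```python
import numpy as np

# Late fusion: text and image embeddings are produced independently,
# concatenated, and passed to a classifier head. The random vectors
# below stand in for RoBERTa / ViT [CLS] outputs for 8 memes.
rng = np.random.default_rng(0)
text_emb = rng.normal(size=(8, 768))    # placeholder text encoder output
image_emb = rng.normal(size=(8, 768))   # placeholder image encoder output

fused = np.concatenate([text_emb, image_emb], axis=1)   # shape (8, 1536)

# Hypothetical linear head producing a toxic-vs-benign decision.
w = rng.normal(size=1536) * 0.01
logits = fused @ w
preds = (logits > 0).astype(int)
print(fused.shape, preds)
```

Because fusion happens after each encoder has finished, either branch can be swapped or fine-tuned independently, which is the modularity the paper trades against joint multimodal reasoning.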

22 pages, 840 KB  
Article
Patterned Radicalization: User Behavior Analysis and Antisemitic Language in QAnon Subreddits
by Noah D. Cohen, Peter Antonaros, Dana B. Weinberg, Meyer Levy and Jeffrey S. Kopstein
Information 2026, 17(2), 179; https://doi.org/10.3390/info17020179 - 10 Feb 2026
Viewed by 908
Abstract
This study investigates how people in certain online communities engage with and adopt hateful rhetoric, specifically examining the escalation of antisemitic language in two deplatformed QAnon-related subreddits. Utilizing a lexicon of implicit and explicit antisemitic terms, the research analyzes over 1.26 million Reddit posts and comments. This study’s objective is to describe the process through which users become more deeply engaged with antisemitic content within hate-filled subcultures found online. Through the application of survival models, chi-square tests, and logistic regressions, the findings reveal that most users do not begin their engagement using antisemitic language and instead engage with antisemitic content before posting it on their own. Moreover, users who transition to posting antisemitic language typically commence with posts containing implicit terms, progressively transitioning from implicit to explicit hate speech. This escalation is particularly pronounced among users demonstrating higher levels of engagement and those interacting with influential community members, often referred to as hyper-posters. The results indicate that increased involvement, exposure, and interaction with antisemitic language, especially with hyper-posters, significantly predict increased and more extreme antisemitic language. These findings illuminate the dynamic and socially contingent process through which users engage with and internalize antisemitic language within subcultural digital spaces. The study posits that language radicalization is a structured process shaped by both individual user behavior and the broader community’s social architecture. Full article
(This article belongs to the Special Issue Semantic Networks for Social Media and Policy Insights)
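A logistic regression of the kind the study applies, predicting escalation from engagement-style features, can be sketched on simulated data (the features, effect sizes, and data below are invented for illustration and are not the study's):

```python
import numpy as np

# Simulated logistic regression: predict whether a user ever posts
# explicit antisemitic terms from overall engagement and interactions
# with hyper-posters. All numbers here are invented.
rng = np.random.default_rng(2)
n = 2000
engagement = rng.normal(size=n)
hyper_interact = rng.normal(size=n)
true_logit = -1.0 + 0.8 * engagement + 0.6 * hyper_interact
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit by plain gradient descent on the log-loss.
X = np.column_stack([np.ones(n), engagement, hyper_interact])
beta = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta -= 0.1 * X.T @ (p - y) / n
print(beta.round(2))  # approximately [-1.0, 0.8, 0.6]
```

Positive recovered coefficients on both predictors mirror the paper's qualitative finding that engagement and hyper-poster contact raise the odds of escalation.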

21 pages, 1470 KB  
Article
Hate Speech on Social Media: Unpacking How Toxic Language Fuels Anti-Immigrant Hostility
by Juan-José Igartua and Carlos A. Ballesteros-Herencia
Soc. Sci. 2026, 15(2), 91; https://doi.org/10.3390/socsci15020091 - 3 Feb 2026
Cited by 1 | Viewed by 1275
Abstract
This study investigates the influence of toxic language in hate speech targeting immigrants, particularly through narrative formats like first-person X (Twitter) threads. Hate speech, defined as promotion of hatred based on personal or group characteristics, increasingly escalates on social media, impacting public attitudes and behaviors. While previous research has primarily focused on measuring the scope of hate speech through content analysis and computational methods, there has been limited attention to its effects on audiences. This study presents the results of an online experiment (N = 339) with a 2 × 2 between-subjects design that manipulates the presence of toxic language and message popularity. Results indicate that hate messages lacking toxic language promote greater identity fusion with the author of the message, which in turn increases the intention to share the message, reinforces negative attitudes toward immigrants, and increases support for harsh policies against irregular immigration. Moreover, non-toxic hate messages significantly enhance narrative transportation exclusively for individuals with conservative political views, thereby further increasing their intention to share the message. These findings highlight that subtler forms of hate speech can create strong audience connections with hostile perspectives, emphasizing the need for anti-hate campaigns to address both overt and subtle hate content. Full article

18 pages, 3705 KB  
Article
Cross-Platform Multi-Modal Transfer Learning Framework for Cyberbullying Detection
by Weiqi Zhang, Chengzu Dong, Aiting Yao, Asef Nazari and Anuroop Gaddam
Electronics 2026, 15(2), 442; https://doi.org/10.3390/electronics15020442 - 20 Jan 2026
Viewed by 515
Abstract
Cyberbullying and hate speech increasingly appear in multi-modal social media posts, where images and text are combined in diverse and fast-changing ways across platforms. These posts differ in style, vocabulary, and layout, and labeled data are sparse and noisy, which makes it difficult to train detectors that are both reliable and deployable under tight computational budgets. Many high-performing systems rely on large vision–language backbones, full-parameter fine-tuning, online retrieval, or model ensembles, which raises training and inference costs. We present a parameter-efficient cross-platform multi-modal transfer learning framework for cyberbullying and hateful content detection. Our framework has three components. First, we perform domain-adaptive pretraining of a compact ViLT backbone on in-domain image–text corpora. Second, we apply parameter-efficient fine-tuning that updates only bias terms, a small subset of LayerNorm parameters, and the classification head, leaving the inference computation graph unchanged. Third, we use noise-aware knowledge distillation from a stronger teacher built from pretrained text and CLIP-based image–text encoders, where only high-confidence, temperature-scaled predictions are used as soft labels during training, and teacher models and any retrieval components are used only offline. We evaluate primarily on Hateful Memes and use IMDB as an auxiliary text-only benchmark to show that the deployment-aware PEFT + offline-KD recipe can still be applied when other modalities are unavailable. On Hateful Memes, our student updates only 0.11% of parameters and retains about 96% of the AUROC of full fine-tuning. Full article
(This article belongs to the Special Issue Data Privacy and Protection in IoT Systems)
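Updating only bias terms, a LayerNorm subset, and the classification head (a BitFit-style scheme) amounts to filtering a model's named parameters. A sketch with hypothetical parameter names (they mimic, but are not, real ViLT parameter names):

```python
# BitFit-style selection sketch: from a backbone's named parameters,
# keep only bias terms, LayerNorm parameters, and the classifier head
# trainable; everything else stays frozen, so the inference graph is
# unchanged. The names below are hypothetical examples.
def trainable_param_names(all_names):
    keep = ("bias", "LayerNorm", "classifier")
    return [n for n in all_names if any(k in n for k in keep)]

names = [
    "encoder.layer.0.attention.query.weight",
    "encoder.layer.0.attention.query.bias",
    "encoder.layer.0.LayerNorm.weight",
    "encoder.layer.0.LayerNorm.bias",
    "classifier.weight",
    "classifier.bias",
]
print(trainable_param_names(names))
```

In a framework like PyTorch the same filter would set `requires_grad = False` on every excluded parameter, which is how the paper reaches a sub-1% trainable-parameter budget.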

24 pages, 4461 KB  
Article
SD-CVD Corpus: Towards Robust Detection of Fine-Grained Cyber-Violence Across Saudi Dialects in Online Platforms
by Abrar Alsayed, Salma Elhag and Sahar Badri
Information 2026, 17(1), 76; https://doi.org/10.3390/info17010076 - 12 Jan 2026
Viewed by 528
Abstract
This paper introduces Saudi Dialects Cyber Violence Detection (SD-CVD) corpus, a large-scale, class-balanced Saudi-dialect corpus for fine-grained cyber violence detection on online platforms. The dataset contains 88,687 Saudi Arabic tweets annotated using a three-level hierarchical scheme that assigns each tweet to one of 11 mutually exclusive classes, covering benign sentiment (positive, neutral, negative), cyberbullying, and seven hate-speech subtypes (incitement to violence, gender, national, social class, tribal, religious, and regional discrimination). To mitigate the class imbalance common in Arabic cyber violence datasets, data augmentation was applied to achieve a near-uniform class distribution. Annotation quality was ensured through multi-stage review, yielding excellent inter-annotator agreement (Fleiss’ κ > 0.89). We evaluate three modeling paradigms: traditional machine learning with TF–IDF and n-gram features (SVM, logistic regression, random forest), deep learning models trained on fixed sentence embeddings (LSTM, RNN, MLP, CNN), and fine-tuned transformer models (AraBERTv02-Twitter, CAMeLBERT-MSA). Experimental results show that transformers perform best, with AraBERTv02-Twitter achieving the highest weighted F1-score (0.882) followed by CAMeLBERT-MSA (0.869). Among non-transformer baselines, SVM is most competitive (0.853), while CNN performs worst (0.561). Overall, SD-CVD provides a high-quality benchmark and strong baselines to support future research on robust and interpretable Arabic cyber-violence detection. Full article
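The SVM baseline reported in the abstract pairs TF-IDF n-gram features with a linear classifier. A minimal sketch using toy English stand-ins for the annotated Saudi-dialect tweets (all texts and labels invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# TF-IDF + linear SVM, the strongest non-transformer baseline in the
# paper. These four toy examples only illustrate the pipeline shape.
train_texts = [
    "you are wonderful and kind",
    "what a lovely helpful person",
    "I will hurt you and your tribe",
    "those people deserve violence",
]
train_labels = ["benign", "violent"][0:1] * 2 + ["violent"] * 2

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(train_texts, train_labels)
print(clf.predict(["such a kind helpful person"]))
```

The real corpus distinguishes 11 classes; `LinearSVC` handles that case with the same code via one-vs-rest decision functions.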

20 pages, 953 KB  
Article
Digital Resilience and the “Awareness Gap”: An Empirical Study of Youth Perceptions of Hate Speech Governance on Meta Platforms in Hungary
by Roland Kelemen, Dorina Bosits and Zsófia Réti
J. Cybersecur. Priv. 2026, 6(1), 3; https://doi.org/10.3390/jcp6010003 - 24 Dec 2025
Viewed by 1277
Abstract
Online hate speech poses a growing socio-technological threat that undermines democratic resilience and obstructs progress toward Sustainable Development Goal 16 (SDG 16). This study examines the regulatory and behavioral dimensions of this phenomenon through a combined legal analysis of platform governance and an empirical survey conducted on Meta platforms, based on a sample of young Hungarians (N = 301, aged 14–34). This study focuses on Hungary as a relevant case study of a Central and Eastern European (CEE) state. Countries in this region, due to their shared historical development, face similar societal challenges that are also reflected in the online sphere. The combination of high social media penetration, a highly polarized political discourse, and the tensions between platform governance and EU law (the DSA) makes the Hungarian context particularly suitable for examining digital resilience and the legal awareness of young users. The results reveal a significant “awareness gap”: While a majority of young users can intuitively identify overt hate speech, their formal understanding of platform rules is minimal. Furthermore, their sanctioning preferences often diverge from Meta’s actual policies, indicating a lack of clarity and predictability in platform governance. This gap signals a structural weakness that erodes user trust. The legal analysis highlights the limited enforceability and opacity of content moderation mechanisms, even under the Digital Services Act (DSA) framework. The empirical findings show that current self-regulation models fail to empower users with the necessary knowledge. The contribution of this study is to empirically identify and critically reframe this ‘awareness gap’. Moving beyond a simple knowledge deficit, we argue that the gap is a symptom of a deeper legitimacy crisis in platform governance. 
It reflects a rational user response—manifesting as digital resignation—to opaque, commercially driven, and unaccountable moderation systems. By integrating legal and behavioral insights with critical platform studies, this paper argues that achieving SDG 16 requires a dual strategy: (1) fundamentally increasing transparency and accountability in content governance to rebuild user trust, and (2) enhancing user-centered digital and legal literacy through a shared responsibility model. Such a strategy must involve both public and private actors in a coordinated, rights-based approach. Ultimately, this study calls for policy frameworks that strengthen democratic resilience not only through better regulation, but by empowering citizens to become active participants—rather than passive subjects—in the governance of online spaces. Full article
(This article belongs to the Special Issue Multimedia Security and Privacy)

17 pages, 989 KB  
Article
Sustainable Hatred: Tesla as a Political Product and the Environmental Impact of Hate Crimes Committed on E-Vehicles
by Judit Glavanits, Gergely G. Karácsony and Gábor Kecskés
Future Transp. 2025, 5(4), 200; https://doi.org/10.3390/futuretransp5040200 - 15 Dec 2025
Viewed by 1130
Abstract
The production and sales figures for electric vehicles are showing a steady upward trend, clearly indicating the growing importance of sustainability goals. A unique historical situation has developed in the US: the owner of the leading electric car manufacturer (Tesla), Elon Musk, has taken an active role in political life. Amid a rising trend in electric vehicle (EV) adoption aligned with global sustainability goals, the political activism of Musk has provoked public backlash, including acts of vandalism and aggression toward Tesla vehicles. Using a multidisciplinary approach, the study explores (1) the psychological underpinnings of object-directed violence, (2) the legal classification of politically motivated vandalism, and (3) the broader market implications of corporate politicization. Our findings confirm that object-directed aggression stems from displaced frustration, especially when individuals feel politically powerless or morally outraged. Our analysis revealed that most Tesla-related vandalism will likely be prosecuted as property crimes. Although U.S. officials have labeled some acts as domestic terrorism or hate crimes, legal thresholds are generally not met. Our interdisciplinary model suggests that the politicization of Tesla has broader implications. Tesla’s symbolic status in the electric vehicle market means that attacks on it risk triggering a decline in public trust toward electric mobility. Full article
(This article belongs to the Special Issue Future of Vehicles (FoV2025))

18 pages, 299 KB  
Article
The Public Perception of Hate Speech Regulation in Unconventional Media
by Ismael Crespo Martínez, Inmaculada Melero López and María Isabel López Palazón
Soc. Sci. 2025, 14(12), 705; https://doi.org/10.3390/socsci14120705 - 10 Dec 2025
Viewed by 594
Abstract
This study provides one of the first quantitative analyses regarding citizens’ perception of hate speech regulation in Spain, based on the influential, empirical study of the Torre Pacheco case. The research at hand statistically validates the correlation between the consumption of content through unconventional media and a reduced tendency to accept regulatory measures, a significant finding given the current climate of growing disinformation and digital polarization. The results indicate that women are more likely to support regulation, while individuals who are politically more conservative tend to reject such intervention. The conclusions highlight a potential association between political affiliation, trust in state institutions, and resistance to content regulation in the digital environment, which provide key insights into the current challenges facing democratic governance. Full article
(This article belongs to the Special Issue Understanding the Influence of Alternative Political Media)