Article

A Novel Hybrid Attention-Based RoBERTa-BiLSTM Model for Cyberbullying Detection

by Mohammed A. Mahdi 1, Suliman Mohamed Fati 2,*, Mohammed Gamal Ragab 3, Mohamed A. G. Hazber 2, Shahanawaj Ahamad 4, Sawsan A. Saad 5 and Mohammed Al-Shalabi 1
1 Information and Computer Science Department, College of Computer Science and Engineering, University of Ha’il, Ha’il 55476, Saudi Arabia
2 Information Systems Department, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
3 Department of Computer and Information Sciences, Universiti Teknologi Petronas, Seri Iskandar 32610, Malaysia
4 Software Engineering Department, College of Computer Science and Engineering, University of Ha’il, Ha’il 55476, Saudi Arabia
5 Computer Engineering Department, College of Computer Science and Engineering, University of Ha’il, Ha’il 55476, Saudi Arabia
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2025, 30(4), 91; https://doi.org/10.3390/mca30040091
Submission received: 1 August 2025 / Revised: 16 August 2025 / Accepted: 19 August 2025 / Published: 21 August 2025

Abstract

The escalating scale and psychological harm of cyberbullying across digital platforms present a critical social challenge, demanding the urgent development of highly accurate and reliable automated detection systems. Standard fine-tuned transformer models, while powerful, often fall short in capturing the nuanced, context-dependent nature of online harassment. This paper introduces a novel hybrid deep learning model, the Robustly Optimized Bidirectional Encoder Representations from Transformers with a Bidirectional Long Short-Term Memory-based Attention model (RoBERTa-BiLSTM), specifically designed to address this challenge. To maximize its effectiveness, the model was systematically optimized using the Optuna framework and rigorously benchmarked against eight state-of-the-art transformer baseline models on a large cyberbullying dataset. Our proposed model achieves state-of-the-art performance, outperforming BERT-base, RoBERTa-base, RoBERTa-large, DistilBERT, ALBERT-xxlarge, XLNet-large, ELECTRA-base, and DeBERTa-v3-small with an accuracy of 94.8%, precision of 96.4%, recall of 95.3%, F1-score of 95.8%, and an AUC of 98.5%. Significantly, it demonstrates a substantial improvement in F1-score over the strongest baseline and reduces critical false negative errors by 43%, and our efficiency analysis shows that this superior performance is achieved at moderate computational cost. The results validate our hypothesis that a specialized hybrid architecture, which synergizes contextual embeddings with sequential processing and an attention mechanism, offers a more robust and practical solution for real-world social media applications.

1. Introduction

The proliferation of social media (SM) has fundamentally reshaped human interaction, creating unprecedented opportunities for connection and communication [1,2]. However, this digital landscape has also cultivated a pervasive dark side: cyberbullying. Defined as intentional and repeated harm inflicted through electronic text and media, cyberbullying has emerged as a significant societal problem with severe psychological consequences for its victims, including depression, anxiety, and social isolation [3,4]. The sheer volume and velocity of online content make manual moderation untenable, creating a critical need for automated systems that can accurately and reliably detect harmful language in real time [5,6].
Early attempts at automated detection relied on keyword matching and traditional machine learning models, but these methods were often brittle and failed to grasp the contextual nuances of human language [7,8,9]. The advent of deep learning, particularly the development of large pre-trained language models (PLMs) based on the Transformer architecture, marked a paradigm shift [10,11]. Models like BERT (Bidirectional Encoder Representations from Transformers) [12] and its successor, RoBERTa [13], established new standards in natural language understanding by learning deep contextual relationships between words, becoming the de facto foundation for a wide array of NLP tasks, including cyberbullying detection [14,15,16,17].
Despite their success, the standard fine-tuning approach for these powerful models—typically involving the addition of a simple linear classification layer—presents its own limitations [18]. This approach may not fully leverage the rich, sequential information encoded in the transformer’s outputs or effectively focus on the most toxic parts of a sentence, especially in cases of sarcasm [19], indirect aggression [20], or long-form harassment [21]. A significant research gap therefore exists in developing more sophisticated architectures that can better interpret the contextual and sequential patterns specific to harmful online language [1].
To address these shortcomings, this study introduces a novel hybrid deep learning architecture, the Hybrid-RoBERTa-BiLSTM with the Attention model. Our aim is to develop and validate a more powerful and nuanced detection system by synergizing the strengths of multiple neural components. The primary contributions of this work are threefold:
  • We propose a novel hybrid architecture, the Hybrid-RoBERTa-BiLSTM-Attention model, that synergizes a pre-trained RoBERTa encoder with a BiLSTM to capture long-range sequential dependencies and an attention mechanism to focus on the most discriminative features for classification.
  • We introduce a methodologically rigorous optimization process using the Optuna framework to systematically tune the model’s hyperparameters, ensuring the robustness and reproducibility of our findings.
  • We conduct a comprehensive benchmark against eight state-of-the-art models, demonstrating that our proposed architecture achieves new state-of-the-art performance and offers an optimal balance between predictive accuracy and computational efficiency.
The remainder of this paper is organized as follows. Section 2 provides a review of related work in automated cyberbullying detection and relevant deep learning architectures. Section 3 details our proposed methodology, including the data preprocessing techniques, the architecture of the Hybrid-RoBERTa-BiLSTM-Attention model, and the systematic hyperparameter optimization process. Section 4 presents the experimental results, offering a comprehensive performance analysis of our model and a comparative benchmark against state-of-the-art baselines. Finally, Section 5 concludes the paper by summarizing our findings, discussing their implications, and suggesting directions for future research.

2. Related Works

The rise of SM and online communication platforms has led to an alarming increase in cyberbullying incidents, causing significant psychological and emotional harm to individuals [1,5]. This pervasive issue has spurred researchers to explore automated methods for detecting cyberbullying to enhance online safety. However, detecting cyberbullying remains challenging due to the nuanced and context-dependent nature of online language, which often involves sarcasm, slang, and cultural references that complicate automated detection [14].
Early efforts in cyberbullying detection focused on keyword-based filtering and rule-based approaches. These traditional methods, while straightforward, struggled to capture the complexities of online communication. For instance, a basic keyword-based system might detect offensive words but fail to consider context, leading to high false positives or negatives. Studies such as those by Dinakar et al. [22] and Nahar et al. [23] explored these initial approaches but found them insufficient in handling the nuanced language of SM, especially for cyberbullying, where context is critical. The limitations of keyword-based detection led to the adoption of machine learning (ML) models, which provided more flexibility by learning from data rather than relying solely on predefined rules. Traditional ML algorithms such as Support Vector Machines (SVMs) [24] and logistic regression were employed to improve cyberbullying detection by considering various textual features. Although these models showed improvement over keyword filtering, they required hand-crafted features, which limited their adaptability and performance in dynamic online environments. Additionally, these models were prone to generalization issues as they struggled to capture the contextual and sequential nature of language [25].
With the advent of deep learning, researchers shifted towards neural networks, particularly Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), to enhance cyberbullying detection. CNNs, known for their ability to detect local patterns, were initially applied to sentiment analysis [26] and extended to cyberbullying detection tasks [27]. RNNs, especially Long Short-Term Memory (LSTM) networks, proved effective in capturing sequential dependencies, as seen in works by Agrawal and Awekar [28]. These models provided a significant leap forward by allowing automatic feature extraction, but they still struggled with capturing long-range dependencies, a critical factor in understanding context-dependent bullying language. Suliman et al. [29] introduced a stacking ensemble learning approach that combines multiple deep neural network methods, along with an enhanced BERT model (BERT-M), to detect cyberbullying on SM platforms like X (formerly known as Twitter) and Facebook. The dataset was preprocessed to remove irrelevant information, and word2vec with Continuous Bag of Words (CBOW) was utilized for feature extraction. The stacked model achieved an F1-score of 0.964, precision of 0.950, and recall of 0.92, with a detection time of 3 min. The authors in [30] utilized word embeddings combined with a CNN for cyberbullying detection in SM text, achieving an accuracy of 94.2%; their experiments were conducted on Twitter data. Al-Ajlan and Ykhlef [31] employed a CNN enhanced by a metaheuristic optimization algorithm for cyberbullying classification, using a dataset of 20,000 randomly selected tweets. Zhang et al. [32] introduced a pronunciation-based CNN to detect cyberbullying, drawing on datasets from Twitter and Formspring. Zhao and Mao [33] developed a text-based detection method that extracted cyberbullying features using a variant of stacked denoising autoencoders called marginalized stacked denoising autoencoders, conducting experiments on Twitter and MySpace data. Lu et al. [34] implemented a character-level CNN with shortcut connections to detect cyberbullying in both Chinese and English datasets, performing their experiments on the Chinese Weibo and English Tweet datasets. Lastly, Kumari and Singh [35] focused on analyzing multimodal data, integrating both text and images to detect cyberbullying content, using a dataset of 2100 posts, where each post included an image accompanied by a comment.
Despite these advancements, both CNNs and RNNs faced challenges in handling long-term dependencies [36] and complex language constructs [26]. Sarcasm, cultural references, and implicit offensive language often went undetected, as these models were limited by their inability to fully understand context over long sequences. This gap in detecting nuanced language led to the adoption of Transformer-based architectures, which could overcome these limitations by utilizing self-attention mechanisms to model relationships across entire sentences or even larger text bodies.
Transformer models, introduced by Vaswani et al. [10], revolutionized NLP by implementing a self-attention mechanism that allowed for efficient handling of dependencies between words. The BERT model, introduced by Devlin et al. [12], further advanced this approach by incorporating bidirectional context, allowing it to understand each word based on both preceding and succeeding words in a sentence. BERT’s success in various NLP tasks has made it a popular choice for cyberbullying detection. Paul and Saha [37] employed a pre-trained BERT model for cyberbullying identification using three corpora: Formspring (12k posts), Twitter (16k posts), and Wikipedia (100k posts). Their findings show that BERT outperforms slot-gated or attention-based deep learning models. Similarly, studies by Mishra et al. [38] and Muneer et al. [1] demonstrated that BERT outperformed traditional and deep learning models in detecting harmful language due to its robust contextual comprehension. However, the standard fine-tuning paradigm, which typically adds a simple linear layer on top of the transformer’s output, may not be fully optimized for the intricate task of cyberbullying detection. This approach can underutilize the rich sequential information encoded by the transformer and may fail to dynamically focus on the most salient, toxic cues within a text.
As the summary in Table 1 indicates, while the field has progressed to powerful transformer models, a significant research gap persists in the architectural design of the classification head. The standard fine-tuning approach for models like BERT and RoBERTa often employs a simple linear layer that underutilizes the rich sequential information from the encoder and lacks a dynamic mechanism to focus on the most salient toxic cues. To address these shortcomings and advance the state of the art, this study makes the following primary contributions. First, we propose a hybrid architecture that combines the powerful contextual embeddings of RoBERTa with a BiLSTM network to capture sequential dependencies and an attention mechanism to focus on the most salient features. Second, we employ a systematic hyperparameter optimization strategy using the Optuna framework to ensure a methodologically rigorous and reproducible result.

3. Materials and Methods

This section details the comprehensive methodology for cyberbullying classification, centered on a novel hybrid deep learning model, RoBERTa-BiLSTM with Attention. Our approach integrates sophisticated model architecture with a systematic hyperparameter optimization framework to ensure peak performance. The workflow encompasses data preprocessing and augmentation, the proposed model architecture, automated hyperparameter tuning with Optuna [39], and a final evaluation against state-of-the-art baseline models. The experiment was implemented using Python 3.11 with PyTorch, Hugging Face Transformers, NLTK, and Optuna libraries. For reproducibility, all random seeds were fixed to 42.

3.1. Dataset and Preprocessing

The study utilizes a dataset of 39,880 text samples focused on cyberbullying, characterized by a notable class imbalance with 25,074 instances labeled as cyberbullying (label 1) and 14,806 as non-cyberbullying (label 0). A multi-step preprocessing pipeline was applied to standardize and clean the raw text. The process began with converting all text to lowercase, followed by the anonymization of user mentions and URLs with generic <user> and <url> tokens, respectively. Hashtag symbols were removed while retaining the associated text, and emojis were converted into their textual descriptions to preserve their semantic meaning. The text was then tokenized using NLTK’s TweetTokenizer [40], which is optimized for SM content. Finally, non-alphanumeric characters were removed, and each token was lemmatized to its base form while common English stop words were eliminated to reduce noise. Table 2 shows some examples before and after the preprocessing steps that were randomly selected from our dataset.
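For illustration, the pipeline described above can be sketched with NLTK and the emoji package as follows; the function and variable names are illustrative choices, and the snippet is a simplified sketch rather than the exact code used in our experiments.

```python
# Illustrative sketch of the preprocessing pipeline (lowercasing, anonymization,
# hashtag stripping, emoji conversion, tokenization, lemmatization, stop-word removal).
import re
import emoji
import nltk
from nltk.tokenize import TweetTokenizer
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

nltk.download("wordnet", quiet=True)
nltk.download("stopwords", quiet=True)

tokenizer = TweetTokenizer()
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(text: str) -> str:
    text = text.lower()
    text = re.sub(r"@\w+", "<user>", text)          # anonymize user mentions
    text = re.sub(r"https?://\S+", "<url>", text)   # anonymize URLs
    text = text.replace("#", "")                    # drop hashtag symbol, keep the text
    text = emoji.demojize(text)                     # emojis -> textual descriptions
    tokens = tokenizer.tokenize(text)
    tokens = [t for t in tokens if t.isalnum() or t in ("<user>", "<url>")]
    tokens = [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]
    return " ".join(tokens)

print(preprocess("@user123 check out this new article it's CRAZY! http://some.link/abcde"))
# -> "<user> check new article crazy <url>"
```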
To address the class imbalance, we employed data augmentation techniques specifically on the minority class. Samples from the minority class were augmented using a combination of synonym replacement [41] and back-translation [42]. This process creates new, semantically similar training instances, helping to create a more balanced and robust dataset. Following this pipeline, the dataset was partitioned into training (28,713 samples; ~70%), validation (7179 samples; ~20%), and testing (3988 samples; ~10%) sets using a stratified sampling strategy to maintain a consistent class distribution across all splits. Figure 1 illustrates the dataset word cloud before and after preprocessing.
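A minimal sketch of the stratified 70/20/10 partitioning with scikit-learn is given below; texts and labels are placeholders for the augmented dataset, and the fixed random seed mirrors the one reported in Section 3.

```python
# Stratified 70/20/10 split: carve off the test set first, then the validation set.
from sklearn.model_selection import train_test_split

texts = ["example post"] * 100        # placeholder for the preprocessed texts
labels = [1] * 60 + [0] * 40          # placeholder for the binary labels

X_rest, X_test, y_rest, y_test = train_test_split(
    texts, labels, test_size=0.10, stratify=labels, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.20 / 0.90, stratify=y_rest, random_state=42)
```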

3.2. Proposed Model: RoBERTa-BiLSTM with Attention

To enhance cyberbullying detection, we propose a novel hybrid deep learning architecture, the Hybrid-RoBERTa-BiLSTM with Attention model, which was deliberately designed to leverage the distinct and complementary strengths of both Transformer and Recurrent Neural Network paradigms. We selected RoBERTa as the foundational layer for its unparalleled ability to generate deep, bidirectional contextual embeddings. However, to more explicitly model the sequential flow and long-range dependencies inherent in conversational text, we introduced a BiLSTM layer. While transformers capture context globally, the BiLSTM provides a dedicated mechanism for processing the sequence of rich embeddings step-by-step. This combination allows our model to first understand the deep meaning of each word (via RoBERTa) and then reason about the ordered sequence of those meanings (via BiLSTM), before the final attention mechanism identifies the most critical features for classification. An overview of our complete methodology is presented in Figure 2. The RoBERTa architecture was specifically selected as the foundational encoder for our hybrid model because our initial benchmark of standard transformers (detailed in Section 4.3) identified it as one of the strongest performers. Our objective was to build upon a state-of-the-art foundation to rigorously test the hypothesis that our proposed hybrid method could further enhance predictive accuracy. The following subsections detail the specific architecture of our proposed model.

3.2.1. RoBERTa Embedding Layer

To prepare the combined dataset for fine-tuning our cyberbullying detection model, we implemented a robust data preprocessing pipeline tailored to enhance data quality and align with Transformer-based models. The input to our model is the preprocessed text, which is first tokenized using the RoBERTa-base tokenizer. These tokens are then fed into a pre-trained RoBERTa model. We extract the last hidden state from the RoBERTa output, which provides a rich, context-aware embedding for each token in the input sequence. Let the output from RoBERTa be a sequence of vectors $H = (h_1, h_2, \dots, h_T)$, where $T$ is the sequence length and $h_t \in \mathbb{R}^{d_{\text{RoBERTa}}}$.

3.2.2. Bidirectional LSTM (BiLSTM) Layer

The sequence of embeddings $H$ is then passed to a two-layer BiLSTM. The BiLSTM processes the sequence in both forward and backward directions [36], capturing long-range dependencies and contextual information from both past and future tokens. The hidden state at each time step $t$ is the concatenation of the forward and backward states, $o_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$, where $o_t \in \mathbb{R}^{2 d_{\text{LSTM}}}$.

3.2.3. Attention Mechanism

To enable the model to focus on the most indicative parts of the text, we apply an attention mechanism to the BiLSTM outputs $O = (o_1, o_2, \dots, o_T)$. The attention mechanism computes a context vector $c$ as a weighted sum of the BiLSTM hidden states. An alignment score $e_t$ for each hidden state is calculated, and attention weights $\alpha_t$ are computed by normalizing these scores using a softmax function in Equation (1):

$$\alpha_t = \frac{\exp(e_t)}{\sum_{j=1}^{T} \exp(e_j)} \qquad (1)$$

where $\alpha_t$ is the attention weight at time step $t$; $e_t$ is the alignment score at time $t$; $T$ denotes the total number of hidden states; and $\exp$ denotes the exponential function used in softmax normalization. Finally, the context vector $c$ is calculated as the weighted sum of the BiLSTM hidden states, as in Equation (2):

$$c = \sum_{t=1}^{T} \alpha_t o_t \qquad (2)$$
This context vector c represents a summary of the entire sequence, with a focus on the most relevant features for the classification task.

3.2.4. Classification Layer

The context vector c is passed through a dropout layer for regularization and then fed into a final fully connected linear layer that maps it to the output dimension (2 classes). A softmax function is implicitly applied via the loss function to produce the final probability distribution over the classes.
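The following minimal PyTorch sketch summarizes the full architecture described in Sections 3.2.1, 3.2.2, 3.2.3 and 3.2.4; the class and variable names are illustrative, the layer sizes follow Table 3, and the snippet is a simplified sketch rather than the exact training code.

```python
# Illustrative sketch: RoBERTa embeddings -> two-layer BiLSTM -> attention -> classifier.
import torch
import torch.nn as nn
from transformers import RobertaModel

class RobertaBiLSTMAttention(nn.Module):
    def __init__(self, lstm_hidden=256, lstm_layers=2, dropout=0.3, num_classes=2):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.bilstm = nn.LSTM(
            input_size=self.roberta.config.hidden_size,   # 768 for roberta-base
            hidden_size=lstm_hidden,
            num_layers=lstm_layers,
            batch_first=True,
            bidirectional=True,
        )
        self.attn_score = nn.Linear(2 * lstm_hidden, 1)   # alignment scores e_t
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings H = (h_1, ..., h_T) from RoBERTa (Section 3.2.1).
        h = self.roberta(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        # BiLSTM states o_t = [forward; backward] (Section 3.2.2).
        o, _ = self.bilstm(h)
        # Attention weights alpha_t via softmax over alignment scores, Equation (1).
        e = self.attn_score(o).squeeze(-1)
        e = e.masked_fill(attention_mask == 0, float("-inf"))  # ignore padded positions
        alpha = torch.softmax(e, dim=-1)
        # Context vector c as the weighted sum of BiLSTM states, Equation (2).
        c = torch.bmm(alpha.unsqueeze(1), o).squeeze(1)
        return self.classifier(self.dropout(c))
```

During fine-tuning, the returned logits are passed to a cross-entropy loss, which applies the softmax implicitly, as noted above.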

3.3. Model Fine-Tuning and Evaluation

3.3.1. Hyperparameter Optimization with Optuna

To optimize our model performance, we employed Optuna [39]—a state-of-the-art hyperparameter optimization framework—which allowed for an efficient and systematic exploration of the hyperparameter space across all our models (Figure 3). For each architecture, including transformer-based models (e.g., BERT-base, RoBERTa variants, DistilBERT, ALBERT-xxlarge, XLNet-base, ELECTRA-base, and DeBERTa-v3) and our hybrid RoBERTa–BiLSTM model, we defined tailored search spaces encompassing common parameters such as learning rate, batch size, and number of epochs, as well as model-specific settings like weight decay, warmup steps, and dropout rates. Additionally, for the hybrid model, parameters governing the BiLSTM and attention mechanisms, such as the number of LSTM layers, hidden dimensions, and the number of attention heads, were also optimized. This dynamic configuration allowed us to conditionally sample relevant hyperparameters depending on the chosen model and optimizer, ensuring a comprehensive and flexible search.
During the tuning process, each trial involved training the model for a predetermined number of epochs while monitoring intermediate performance metrics (e.g., validation loss and F1-score). Optuna’s integrated pruning strategies, such as the MedianPruner, were utilized to terminate underperforming trials early, thereby reallocating computational resources to more promising configurations [39]. Trials were executed in parallel across multiple GPUs, enabling a thorough exploration of the parameter space within a feasible timeframe. The best-performing hyperparameter configurations were then selected and used to fine-tune the final models on the combined training and validation sets, ensuring that our reported results reflect optimally tuned models. This systematic approach not only enhanced the performance of individual models but also underscored the critical role of fine-grained hyperparameter tuning in achieving state-of-the-art results in text classification tasks.
To determine the optimal fine-tuning configuration, we conducted automated hyperparameter optimization using the Optuna framework, which systematically searches for the best-performing set of hyperparameters by maximizing a predefined objective function. For this study, roberta-base served as the target model for the search, with the objective of maximizing the F1-score on the validation set over the course of 20 trials. The comprehensive search space for the optimization included a learning rate sampled from a log-uniform distribution between 1 × 10−6 and 1 × 10−4, a batch size chosen from the categorical set (4, 8, 16), a number of fine-tuning epochs selected from integers between 3 and 6, and a warmup ratio sampled from a uniform distribution between 0.0 and 0.2. The best combination of hyperparameters discovered during this search was then adopted for the final fine-tuning of all models in our study.
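The search described above can be expressed with Optuna roughly as follows; train_and_evaluate is a hypothetical placeholder for the routine that fine-tunes roberta-base with the sampled settings and yields the validation F1-score after each epoch.

```python
# Illustrative Optuna study over the search space listed above (20 trials, MedianPruner).
import optuna

def train_and_evaluate(lr, batch_size, epochs, warmup_ratio):
    # Hypothetical placeholder: fine-tune roberta-base with the sampled settings
    # and yield the validation F1-score after each epoch.
    for _ in range(epochs):
        yield 0.0

def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True)
    batch_size = trial.suggest_categorical("batch_size", [4, 8, 16])
    epochs = trial.suggest_int("num_epochs", 3, 6)
    warmup_ratio = trial.suggest_float("warmup_ratio", 0.0, 0.2)

    best_f1 = 0.0
    for step, f1 in enumerate(train_and_evaluate(lr, batch_size, epochs, warmup_ratio)):
        best_f1 = max(best_f1, f1)
        trial.report(f1, step=step)          # intermediate metric for pruning
        if trial.should_prune():             # MedianPruner stops weak trials early
            raise optuna.TrialPruned()
    return best_f1

study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
print(study.best_params)
```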

3.3.2. Baseline Models

To rigorously evaluate the performance of our proposed Hybrid-RoBERTa-BiLSTM-Attention model, we benchmarked it against a comprehensive suite of eight state-of-the-art transformer architectures. These baselines were selected to represent a diverse range of pre-training strategies and architectural innovations, ensuring a thorough comparison. The suite includes foundational models such as BERT (bert-base-uncased), the pioneering bidirectional architecture [12], and its lightweight variant DistilBERT, which is optimized for efficiency [43]. We also compare against the robustly optimized RoBERTa model in both its base and large configurations, known for their enhanced pre-training regimen [13]. To test against more recent architectural innovations, we included XLNet [44], which utilizes a permutation language modeling objective; ALBERT [45], notable for its parameter-reduction techniques; ELECTRA [46], which employs a highly efficient pre-training task; and DeBERTa [47], featuring an advanced disentangled attention mechanism. All baseline models were fine-tuned on the same augmented training dataset and utilized the same core hyperparameters identified during our optimization process to ensure a fair and direct comparison.
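For illustration, the baselines can be instantiated as binary sequence classifiers from the Hugging Face hub as sketched below; the checkpoint identifiers are common public variants and are assumptions about the exact checkpoints used.

```python
# Illustrative loading of the eight baseline transformers as binary classifiers.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

BASELINE_CHECKPOINTS = [
    "bert-base-uncased",
    "distilbert-base-uncased",
    "roberta-base",
    "roberta-large",
    "xlnet-large-cased",
    "albert-xxlarge-v2",
    "google/electra-base-discriminator",
    "microsoft/deberta-v3-small",
]

baselines = {}
for name in BASELINE_CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    baselines[name] = (tokenizer, model)
```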

3.3.3. Evaluation Metrics

The performance of our proposed model and all baseline models was assessed on the held-out test set using a suite of standard classification metrics: accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). The metrics are defined as follows:
  • Accuracy is the ratio of correctly classified instances to the total number of instances. While it provides a basic measure of overall performance, it can be misleading in cases of class imbalance, which is common in cyberbullying detection datasets.
$$\text{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN}$$
  • Precision, also known as the positive predictive value, indicates the proportion of correctly predicted cyberbullying instances out of all instances predicted as cyberbullying. A high precision score is crucial in this context, as it means fewer non-offensive instances are misclassified as cyberbullying.
$$\text{Precision} = \frac{TP}{TP + FP}$$
  • Recall, also known as sensitivity or true positive rate, measures the ability of the model to identify all actual cyberbullying instances in the dataset. High recall ensures that most offensive content is detected, reducing the chance of harmful content going unnoticed.
$$\text{Recall} = \frac{TP}{TP + FN}$$
  • The F1-score is the harmonic mean of precision and recall, providing a balanced measure that is particularly useful when the dataset is imbalanced. A high F1-score indicates that the model performs well in both detecting cyberbullying and avoiding false positives.
$$F1\text{-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
  • The AUC-ROC (Area Under the Receiver Operating Characteristic Curve) measures the model’s ability to distinguish between classes. It plots the true positive rate (sensitivity) against the false positive rate (1-specificity) at various threshold levels. A higher AUC-ROC score indicates better performance in distinguishing cyberbullying from non-offensive content.
$$\text{AUC-ROC} = \int_{0}^{1} TPR(f)\, d\bigl(FPR(f)\bigr)$$
where TPR is the true positive rate and FPR is the false positive rate as the decision threshold f varies.
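These metrics can be computed directly with scikit-learn, as in the short sketch below; y_true, y_pred, and y_score are placeholders for the test labels, the predicted labels, and the predicted probability of the cyberbullying class.

```python
# Computing the five reported metrics on placeholder predictions.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
y_score = [0.92, 0.10, 0.85, 0.40, 0.20, 0.98]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_score))
```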

4. Experimental Results and Discussion

In this section, we present the empirical results of our study. We first detail the experimental setup, followed by a comprehensive evaluation of our proposed model’s performance. We then conduct a comparative analysis against established baseline models and conclude with a discussion of the key findings.

4.1. Experimental Setup

All experiments were conducted on a high-performance computing (HPC) node equipped with two NVIDIA H100 GPUs, each providing 80 GB of HBM3 memory. This powerful hardware configuration allowed for the implementation of complex architectures and the use of large batch sizes. The software stack was built on a Linux environment with CUDA 12.2, PyTorch 2.3, and the Hugging Face Transformers library (version 4.42.0). Our proposed RoBERTa-BiLSTM-Attention model and all baseline models were implemented using this framework. The fine-tuning process was accelerated using mixed-precision (FP16) computation to improve throughput. To ensure training stability and prevent overfitting, we employed an early stopping mechanism with a patience of one epoch, monitoring the validation set’s F1-score. Key hyperparameters for the proposed model, such as learning rate, LSTM dimensions, and dropout rate, were systematically determined using the Optuna framework, as detailed in Section 3.3.1. The final parameters used for fine-tuning are summarized in Table 3 for full reproducibility.
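The early stopping rule can be summarized by the minimal sketch below; validation_f1_per_epoch holds dummy values standing in for the F1-scores actually monitored on the validation set.

```python
# Early stopping with a patience of one epoch on the validation F1-score.
validation_f1_per_epoch = [0.941, 0.952, 0.951, 0.950]   # dummy values

best_f1, patience, bad_epochs = 0.0, 1, 0
for epoch, f1 in enumerate(validation_f1_per_epoch, start=1):
    if f1 > best_f1:
        best_f1, bad_epochs = f1, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"Early stopping after epoch {epoch}; best F1 = {best_f1:.3f}")
            break
```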

4.2. Performance of the Proposed Model

To validate the training stability and generalization capability of our proposed model, we present the learning curves in Figure 4. The plots for training and validation loss show a steady decrease before converging, while the accuracy curves show a corresponding increase, plateauing as the model reaches optimal performance. The minimal and stable gap between the training and validation curves indicates that our use of dropout and an early stopping strategy was effective in preventing overfitting, confirming that the model is robust and generalizes well to unseen data.
Our proposed Hybrid-RoBERTa-BiLSTM-Attention model achieved exceptional performance on the held-out test set, demonstrating its effectiveness for the complex task of cyberbullying classification. The integration of the BiLSTM and attention mechanism on top of RoBERTa’s contextual embeddings proved highly successful in identifying the nuanced patterns characteristic of cyberbullying classification. As detailed in Table 4, the model achieved an overall accuracy of 94.8%, a robust F1-score of 95.9%, and a remarkable AUC of 98.5%, indicating superior discriminative ability between classes. The model’s effectiveness is further illustrated by its confusion matrix, shown in Figure 5b. The high count of true positives (2390) and true negatives (1391) compared to the low number of false negatives (118) and false positives (89) confirms the model’s high degree of reliability. The well-balanced precision (96.4%) and recall (95.3%) signify that the architecture not only correctly identifies the majority of cyberbullying instances but also maintains a very low rate of misclassifying benign content.
Furthermore, the hybrid architecture’s integration of bidirectional LSTM layers with multi-head attention mechanisms enabled robust capture of both lexical patterns (via RoBERTa embeddings) and contextual escalation cues (via sequential modeling), reflected in its class-leading AUC of 98.5%, as shown in Figure 6.
To provide a more detailed breakdown of the model’s performance on a per-class basis, Table 5 presents the full classification report. The report shows that the model achieves a high F1-score for both the minority “non-cyberbullying” class (0.931) and the majority “cyberbullying” class (0.958). This demonstrates that our model is not biased towards one class and performs robustly across the entire dataset, effectively distinguishing between the two categories. The support values also confirm that the evaluation was conducted on a sufficient number of samples for each class.

4.3. Comparative Analysis and Benchmarking with State-of-the-Art Models

To establish the performance of our proposed model relative to the state-of-the-art (SOTA), we conducted a rigorous benchmark against eight leading transformer architectures: BERT-base, RoBERTa-base, RoBERTa-large, DistilBERT, ALBERT-xxlarge, XLNet-large, ELECTRA-base, and DeBERTa-v3-small. We opted to re-implement and evaluate these models on the same specific dataset to ensure a fair, direct comparison. The results of this comprehensive benchmark, illustrated in the performance heatmap (Figure 6), demonstrate the clear superiority of our proposed Hybrid-RoBERTa-BiLSTM model, achieving the highest scores across all five evaluated metrics. This outcome validates our hypothesis that a specialized hybrid architecture can outperform standard fine-tuning approaches for complex classification tasks.
Our proposed model achieved an F1-score of 95.8% and an AUC of 98.5%. Notably, it surpassed the strongest baseline, DeBERTa-v3-small, by a significant margin of 2.0 percentage points on the F1-score (95.8% vs. 93.8%). The improvement is even more pronounced in recall, where our model achieved 95.3%—a 3.6-point gain over DeBERTa-v3—indicating a substantially better ability to identify true cyberbullying cases. Furthermore, when compared to its own backbone, RoBERTa-base, our hybrid model shows a marked improvement of 1.6 points on the F1-score and 0.9 points on the AUC, confirming the significant contribution of the BiLSTM and attention layers.
The superior performance can be attributed to the architectural synergy of the model’s components. While all models leverage powerful contextual embeddings from a transformer base, our architecture enhances the classification process. The BiLSTM layer captures long-range sequential dependencies and word-order nuances critical for understanding context, sarcasm, and indirect aggression. Subsequently, the attention mechanism allows the model to dynamically weigh the importance of different words, focusing on the most discriminative tokens (e.g., insults, threats) before making a final prediction. This sophisticated classification head provides a more nuanced understanding of the text than the standard linear layer used in the baseline models.
To provide a more granular view of model behavior, Figure 7 presents the confusion matrices for the proposed model and all eight baselines. A visual inspection of these matrices reveals not just that our model performs better, but how. The matrix for our Hybrid-RoBERTa-BiLSTM model (Figure 7i) shows the most desirable error profile among all contenders. When compared directly to the strongest baseline, DeBERTa-v3 (Figure 7g), our model demonstrates a substantial 43% reduction in False Negatives (118 vs. 209). This is a critical improvement, as it signifies an enhanced capability to correctly identify instances of actual cyberbullying that other models miss, a key requirement for any effective content moderation system. Simultaneously, our model maintains a marginally lower False Positive count (89 vs. 96), indicating it achieves this higher sensitivity without sacrificing precision. This superior trade-off between minimizing missed cyberbullying cases and avoiding the incorrect flagging of benign content underscores the practical advantages of our proposed architecture.
The F1-score, which represents the harmonic mean of precision and recall, is arguably the most critical metric for evaluating performance on imbalanced classification tasks like cyberbullying detection. The comparative ranking of models by F1-score, presented in Figure 8, offers the clearest evidence of our proposed model’s architectural advantages. Our Hybrid-RoBERTa-BiLSTM model achieved a top F1-score of 0.9585, establishing a new state-of-the-art performance on this dataset. Crucially, it outperformed the next-best model, RoBERTa-large, by a substantial margin of 1.44 percentage points (0.9585 vs. 0.9441). This pronounced gap visually separates our model from the cluster of other high-performing baselines, whose scores are tightly grouped in the 0.93–0.94 range. This advantage directly reflects the architecture’s enhanced ability to balance the identification of true cyberbullying cases (Recall) while maintaining high precision, a capability that sets it apart from even the most powerful standard transformer models for this task.

4.4. Complexity and Efficiency Analysis

Beyond predictive accuracy, the practical utility of a model is determined by its computational efficiency. We evaluated each model based on its size (number of parameters) and speed (inference time per sample) to analyze the performance-to-cost trade-off. This analysis, summarized in Table 6, reveals the distinct advantage of our proposed architecture.
Our Hybrid-RoBERTa-BiLSTM model achieves its state-of-the-art F1-score of 0.9585 with a moderate size of only 130 M parameters. This makes it significantly more efficient than the largest models, such as RoBERTa-large (355 M) and XLNet-large (340 M), while still outperforming them. Figure 9 positions our hybrid model in a highly desirable quadrant of high performance and low latency. While models like DistilBERT offer the fastest inference (10 ms), they do so with considerable sacrifice in accuracy. Conversely, the largest models provide diminishing returns, as their significant computational overhead does not translate to superior performance. This analysis demonstrates that our proposed architecture represents an optimal balance, providing state-of-the-art accuracy without demanding prohibitive computational resources, making it a practical and effective solution for real-world deployment.
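For reference, the two quantities reported in Table 6 can be estimated as sketched below; absolute latencies depend on hardware, sequence length, and batching, so the snippet is illustrative rather than a reproduction of our measurement protocol.

```python
# Estimating parameter count and per-sample inference latency for one model.
import time
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2).eval()

num_params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"{name}: {num_params_m:.0f}M parameters")

inputs = tokenizer("example post", return_tensors="pt")
with torch.no_grad():
    model(**inputs)                          # warm-up pass
    start = time.perf_counter()
    for _ in range(100):
        model(**inputs)
latency_ms = (time.perf_counter() - start) / 100 * 1000
print(f"{name}: {latency_ms:.1f} ms per sample")
```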

4.5. Theoretical and Practical Implications

The implications of this work are twofold. From a scientific perspective, it validates the hypothesis that investing in specialized, hybrid architectures for the classification head can yield substantial performance gains over even larger, more complex baseline transformers. From a practical standpoint, our model offers a more effective and reliable solution for real-world content moderation systems. Its notable reduction in false negatives means fewer instances of cyberbullying are missed, while its high precision ensures that benign content is not incorrectly flagged. Furthermore, the efficiency analysis revealed that this state-of-the-art performance is achieved with moderate computational resources, making it a viable solution for deployment at scale.

5. Conclusions

This study introduced a novel hybrid deep learning architecture, the Hybrid-RoBERTa-BiLSTM-Attention model, for the critical task of cyberbullying detection. Through a rigorous process of systematic hyperparameter optimization and a comprehensive benchmark against eight state-of-the-art transformer models, we demonstrated the unequivocal superiority of our proposed approach. The results confirm that our model achieves a new state-of-the-art performance, leading across all key metrics, including a significantly higher F1-score (0.9585) and the AUC (0.9850). The success is attributed to the architectural synergy where the BiLSTM captures long-range sequential context, and the attention mechanism dynamically focuses on the most discriminative linguistic features—capabilities that go beyond standard fine-tuning.
Despite the promising results, we acknowledge several limitations that provide clear directions for future research. First, our model’s performance was validated on a specific English-language dataset. Its generalizability to other languages, dialects, or online platforms with distinct communication norms remains an open question. Second, our work is exclusively focused on textual data. Modern online harassment is increasingly multi-modal, involving images, memes, and videos, which our current architecture is not designed to analyze. Third, the model classifies text in isolation and does not consider broader conversational or social graph context, such as the history of interaction between users, which can be crucial for interpreting ambiguous messages. Finally, like many complex deep learning models, the “black-box” nature of our architecture presents challenges for interpretability, a key consideration for real-world moderation systems where transparent reasoning is often required. Future work should aim to address these challenges to build more comprehensive, context-aware, and transparent online safety systems.

Author Contributions

Conceptualization, S.M.F., M.A.M., S.A.S. and M.G.R.; data curation, S.A. and S.A.S.; methodology, S.M.F., M.A.M. and M.G.R.; project administration, M.A.M. and S.M.F.; resources, S.A.S. and M.G.R.; software, M.G.R. and S.M.F.; validation, S.M.F. and M.A.M.; visualization, M.A.G.H.; writing—original draft, M.A.M., S.A., S.A.S. and M.G.R.; writing—review and editing, M.A.G.H., M.A.-S. and S.M.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been funded by the Deanship of Scientific Research at the University of Ha’il, Saudi Arabia, under Project Number RG-23 092.

Data Availability Statement

The data and code used in this study will be available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BERT-base: Bidirectional Encoder Representations from Transformers
RoBERTa-base: Robustly Optimized BERT Approach (Base)
RoBERTa-large: Robustly Optimized BERT Approach (Large)
DistilBERT: Distilled BERT
ALBERT-xxlarge: A Lite BERT (Extra Extra Large)
XLNet-large: eXtended Language Net (Large)
ELECTRA-base: Efficiently Learning an Encoder that Classifies Token Replacements Accurately
DeBERTa-v3-small: Decoding-enhanced BERT with Disentangled Attention (v3 Small)
BiLSTM: Bidirectional Long Short-Term Memory
CNN: Convolutional Neural Network
DL: Deep Learning
GPU: Graphics Processing Unit
LSTM: Long Short-Term Memory
ML: Machine Learning
NLTK: Natural Language Toolkit
NLP: Natural Language Processing
PLM: Pre-trained Language Model
RNN: Recurrent Neural Network
SOTA: State-of-the-Art
AUC: Area Under the Curve
RoBERTa-BiLSTM: Robustly Optimized Bidirectional Encoder Representations from Transformers with Bidirectional Long Short-Term Memory

References

  1. Muneer, A.; Alwadain, A.; Ragab, M.G.; Alqushaibi, A. Cyberbullying detection on social media using stacking ensemble learning and enhanced BERT. Information 2023, 14, 467. [Google Scholar] [CrossRef]
  2. Slanbekova, G.; Turgumbayeva, A.; Umurkulova, M.; Mukhamedkarimova, D.; Chung, M. The Phenomenon of Cyberbullying: A Comprehensive Literature Review. J. Psychol. Sociol. 2024, 89, 25–37. [Google Scholar] [CrossRef]
  3. Mong, E. Cyberbullying and Its Effects on the Mental Well-Being of Adolescents. Ph.D. Thesis, North-West University, Potchefstroom, South Africa, 2020. [Google Scholar]
  4. Li, C.; Wang, P.; Martin-Moratinos, M.; Bella-Fernandez, M.; Blasco-Fontecilla, H. Traditional bullying and cyberbullying in the digital age and its associated mental health problems in children and adolescents: A meta-analysis. Eur. Child Adolesc. Psychiatry 2024, 33, 2895–2909. [Google Scholar] [CrossRef] [PubMed]
  5. Muneer, A.; Fati, S.M. A comparative analysis of machine learning techniques for cyberbullying detection on twitter. Future Internet 2020, 12, 187. [Google Scholar] [CrossRef]
  6. Hasan, M.T.; Hossain, M.A.E.; Mukta, M.S.H.; Akter, A.; Ahmed, M.; Islam, S. A review on deep-learning-based cyberbullying detection. Future Internet 2023, 15, 179. [Google Scholar] [CrossRef]
  7. Emmery, C.; Verhoeven, B.; De Pauw, G.; Jacobs, G.; Van Hee, C.; Lefever, E.; Desmet, B.; Hoste, V.; Daelemans, W. Current limitations in cyberbullying detection: On evaluation criteria, reproducibility, and data scarcity. Lang. Resour. Eval. 2021, 55, 597–633. [Google Scholar] [CrossRef]
  8. Raj, C.; Agarwal, A.; Bharathy, G.; Narayan, B.; Prasad, M. Cyberbullying detection: Hybrid models based on machine learning and natural language processing techniques. Electronics 2021, 10, 2810. [Google Scholar] [CrossRef]
  9. Beshay, N. A Deep Learning Based Multilingual Hate Speech Detection for Resource Scarce Languages. Master’s Thesis, Concordia University of Edmonton, Edmonton, AB, Canada, 2022. [Google Scholar]
  10. Vaswani, A. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  11. Ansar, W.; Goswami, S.; Chakrabarti, A. A Survey on Transformers in NLP with Focus on Efficiency. arXiv 2024, arXiv:2406.16893. [Google Scholar]
  12. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers); Association for Computational Linguistics: Minneapolis, MN, USA, 2019; pp. 4171–4186. [Google Scholar]
  13. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv 2019, arXiv:1907.11692. [Google Scholar]
  14. Mahdi, M.A.; Fati, S.M.; Hazber, M.A.; Ahamad, S.; Saad, S.A. Enhancing Arabic Cyberbullying Detection with End-to-End Transformer Model. CMES-Comput. Model. Eng. Sci. 2024, 141, 1651–1671. [Google Scholar] [CrossRef]
  15. Khan, W.; Daud, A.; Khan, K.; Muhammad, S.; Haq, R. Exploring the frontiers of deep learning and natural language processing: A comprehensive overview of key challenges and emerging trends. Nat. Lang. Process. J. 2023, 4, 100026. [Google Scholar] [CrossRef]
  16. Islam, S.; Elmekki, H.; Elsebai, A.; Bentahar, J.; Drawel, N.; Rjoub, G.; Pedrycz, W. A comprehensive survey on applications of transformers for deep learning tasks. Expert Syst. Appl. 2023, 241, 122666. [Google Scholar] [CrossRef]
  17. Torfi, A.; Shirvani, R.A.; Keneshloo, Y.; Tavaf, N.; Fox, E.A. Natural language processing advancements by deep learning: A survey. arXiv 2020, arXiv:2003.01200. [Google Scholar]
  18. Hadi, M.U.; Qureshi, R.; Shah, A.; Irfan, M.; Zafar, A.; Shaikh, M.B.; Akhtar, N.; Wu, J.; Mirjalili, S. Large language models: A comprehensive survey of its applications, challenges, limitations, and future prospects. Authorea Prepr. 2023, 1, 1–26. [Google Scholar]
  19. Bhargava, N.; Radaideh, M.I.; Kwon, O.H.; Verma, A.; Radaideh, M.I. On the Impact of Language Nuances on Sentiment Analysis with Large Language Models: Paraphrasing, Sarcasm, and Emojis. arXiv 2025, arXiv:2504.05603. [Google Scholar] [CrossRef]
  20. Wang, C.-Y.; Bi, K. Exploring the influence of the dark triad on indirect cyber aggression: A longitudinal study of a Taiwanese sample. Cyberpsychol. Behav. Soc. Netw. 2025, 28, 105–111. [Google Scholar] [CrossRef]
  21. Verma, K.; Milosevic, T.; Davis, B. Can attention-based transformers explain or interpret cyberbullying detection? In Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022), Gyeongju, Republic of Korea, 12–17 October 2022; pp. 16–29. [Google Scholar]
  22. Dinakar, K.; Reichart, R.; Lieberman, H. Modeling the detection of textual cyberbullying. In Proceedings of the International AAAI Conference on Web and Social Media, Barcelona, Spain, 21 July 2011; Volume 5, pp. 11–17. [Google Scholar]
  23. Nahar, V.; Li, X.; Pang, C. An effective approach for cyberbullying detection. Commun. Inf. Sci. Manag. Eng. 2013, 3, 238. [Google Scholar]
  24. Xu, J.-M.; Jun, K.-S.; Zhu, X.; Bellmore, A. Learning from bullying traces in social media. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; Association for Computational Linguistics: Montréal, QC, Canada, 2012; pp. 656–666. [Google Scholar]
  25. Barbierato, E.; Gatti, A. The challenges of machine learning: A critical review. Electronics 2024, 13, 416. [Google Scholar] [CrossRef]
  26. Otter, D.W.; Medina, J.R.; Kalita, J.K. A survey of the usages of deep learning for natural language processing. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 604–624. [Google Scholar] [CrossRef]
  27. Iwendi, C.; Srivastava, G.; Khan, S.; Maddikunta, P.K.R. Cyberbullying detection solutions based on deep learning architectures. Multimed. Syst. 2023, 29, 1839–1852. [Google Scholar] [CrossRef]
  28. Agrawal, S.; Awekar, A. Deep learning for detecting cyberbullying across multiple social media platforms. In European Conference on Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2018; pp. 141–153. [Google Scholar]
  29. Fati, S.M.; Muneer, A.; Alwadain, A.; Balogun, A.O. Cyberbullying detection on twitter using deep learning-based attention mechanisms and continuous Bag of words feature extraction. Mathematics 2023, 11, 3567. [Google Scholar] [CrossRef]
  30. Banerjee, V.; Telavane, J.; Gaikwad, P.; Vartak, P. Detection of cyberbullying using deep neural network. In Proceedings of the 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 604–607. [Google Scholar]
  31. Al-Ajlan, M.A.; Ykhlef, M. Optimized twitter cyberbullying detection based on deep learning. In Proceedings of the 2018 21st Saudi Computer Society National Computer Conference (NCC), Riyadh, Saudi Arabia, 25–26 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar]
  32. Zhang, X.; Tong, J.; Vishwamitra, N.; Whittaker, E.; Mazer, J.P.; Kowalski, R.; Hu, H.; Luo, F.; Macbeth, J.; Dillon, E. Cyberbullying detection with a pronunciation based convolutional neural network. In Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA, USA, 18–20 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 740–745. [Google Scholar]
  33. Zhao, R.; Mao, K. Cyberbullying Detection based on Semantic-Enhanced Marginalized Denoising Auto-Encoder. IEEE Trans. Affect. Comput. 2017, 8, 328–339. [Google Scholar] [CrossRef]
  34. Lu, N.; Wu, G.; Zhang, Z.; Zheng, Y.; Ren, Y.; Choo, K.K.R. Cyberbullying detection in social media text based on character-level convolutional neural network with shortcuts. Concurr. Comput. Pract. Exp. 2020, 32, e5627. [Google Scholar] [CrossRef]
  35. Kumari, K.; Singh, J.P. Identification of cyberbullying on multi-modal social media posts using genetic algorithm. Trans. Emerg. Telecommun. Technol. 2021, 32, e3907. [Google Scholar] [CrossRef]
  36. Al-Selwi, S.M.; Hassan, M.F.; Abdulkadir, S.J.; Muneer, A.; Sumiea, E.H.; Alqushaibi, A.; Ragab, M.G. RNN-LSTM: From applications to modeling techniques and beyond—Systematic review. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 102068. [Google Scholar] [CrossRef]
  37. Paul, S.; Saha, S. CyberBERT: BERT for cyberbullying identification: BERT for cyberbullying identification. Multimed. Syst. 2022, 28, 1897–1904. [Google Scholar] [CrossRef]
  38. Islam, M.M.; Uddin, M.A.; Islam, L.; Akter, A.; Sharmin, S.; Acharjee, U.K. Cyberbullying detection on social networks using machine learning approaches. In Proceedings of the 2020 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Gold Coast, Australia, 16–18 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
  39. Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2623–2631. [Google Scholar]
  40. Owoputi, O.; O’Connor, B.; Dyer, C.; Gimpel, K.; Schneider, N.; Smith, N.A. Improved part-of-speech tagging for online conversational text with word clusters. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 9–14 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA; Association for Computational Linguistics: Stroudsburg, PA, USA, 2013; pp. 380–390. [Google Scholar]
  41. Wei, J.; Zou, K. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv 2019, arXiv:1901.11196. [Google Scholar] [CrossRef]
  42. Sugiyama, A.; Yoshinaga, N. Data augmentation using back-translation for context-aware neural machine translation. In Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019); Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 35–44. [Google Scholar]
  43. Sanh, V.; Debut, L.; Chaumond, J.; Wolf, T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv 2019, arXiv:1910.01108. [Google Scholar]
  44. Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.R.; Le, Q.V. Xlnet: Generalized autoregressive pretraining for language understanding. Adv. Neural Inf. Process. Syst. 2019, 32, 5754–5764. [Google Scholar]
  45. Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; Soricut, R. Albert: A lite bert for self-supervised learning of language representations. arXiv 2019, arXiv:1909.11942. [Google Scholar]
  46. Clark, K.; Luong, M.-T.; Le, Q.V.; Manning, C.D. Electra: Pre-training text encoders as discriminators rather than generators. arXiv 2020, arXiv:2003.10555. [Google Scholar]
  47. He, P.; Gao, J.; Chen, W. Debertav3: Improving deberta using electra-style pre-training with gradient-disentangled embedding sharing. arXiv 2021, arXiv:2111.09543. [Google Scholar]
Figure 1. Comparison of dataset word clouds. (a) The word cloud generated from the raw, unprocessed text. (b) The word cloud after our preprocessing pipeline has been applied.
Figure 2. Proposed Hybrid Attention-Based RoBERTa-BiLSTM methodology.
Figure 3. Optuna hyperparameter optimization framework.
Figure 4. Training and validation learning curves for the proposed Hybrid-RoBERTa-BiLSTM-Attention model. (a) The model’s loss and (b) accuracy over 6 epochs.
Figure 5. Confusion matrices for cyberbullying detection on the test set, comparing (a) the baseline RoBERTa-base model and (b) the proposed Hybrid-RoBERTa-BiLSTM-Attention model.
Figure 6. Performance heatmap of all models based on 5-fold cross-validation. Each cell contains the mean score and standard deviation (±). The proposed model, highlighted in yellow, demonstrates superior performance on all metrics.
Figure 7. Confusion matrices of all evaluated models on the test set. Panels (a–h) show the performance of the baseline models, while panel (i) displays the results for our proposed Hybrid-RoBERTa-BiLSTM model.
Figure 8. Comparative analysis of model performance based on F1-score. The models are ranked in ascending order of their F1-score on the test set.
Figure 9. Model performance trade-off analysis, plotting the number of parameters against inference time. Models are grouped by category (Efficient, Base, Large, and Hybrid), with their distributions shown in the marginal histograms at the top and right, colored accordingly. The red area indicates models with higher computational complexity and longer inference time, while the green area highlights models with lower complexity and faster inference time.
Table 1. Summary of related works in cyberbullying detection, highlighting their limitations.

Study(s) | Methodology | Key Limitations
Dinakar et al. [22], Nahar et al. [23] | Keyword-based and Rule-based Systems | Lacks contextual understanding; prone to high false positive and negative rates.
Xu et al. [24] | Traditional Machine Learning (SVM) | Relies on manual feature engineering; struggles to capture sequential context.
Agrawal & Awekar [28] | Recurrent Neural Networks (RNNs/LSTMs) | Better for sequences than CNNs but still has difficulty with very long-range dependencies.
Banerjee et al. [30], Al-Ajlan & Ykhlef [31], Zhang et al. [32] | Convolutional Neural Networks (CNNs) | Effective for local patterns but struggles to model the overall sentence context and word order.
Suliman et al. [29], Kumari & Singh [35] | Advanced Approaches (Ensemble, Multimodal) | Often highly complex and may not generalize as effectively as large pre-trained language models.
Muneer et al. [1], Mishra et al. [38] | Standard Fine-tuning of Transformers (BERT) | The simple classification head underutilizes the rich sequential data from the encoder and lacks a focused mechanism for the final prediction.
This Study | Hybrid RoBERTa-BiLSTM-Attention | Addresses the limitations of standard fine-tuning by explicitly modeling sequences (BiLSTM) and focusing on salient features (Attention).
Table 2. Examples of text samples before and after preprocessing.

No | Original Text | Pre-Processed Text
1 | @user123 check out this new article it’s CRAZY! http://some.link/abcde | <user> check new article crazy <url>
2 | LMAO you’re a total loser 😂 #getalife | lmao total loser: face_with_tears_of_joy: getalife
3 | He was saying things that were unbelievably mean. | say thing unbelievably mean
4 | This is just getting ridiculous… I’m so done with the drama. | getting ridiculous ‘m done drama
5 | @Skawtnyc You are such a disgusting waste of space. Nobody likes you, just go away already!! #pathetic | <user> disgusting waste space like go away pathetic
6 | OMG I totally destroyed him in that last match! What an insane kill shot. 🤯 #gaming | omg totally destroy match insane kill shot: exploding_head: gaming
Table 3. Final hyperparameters for model fine-tuning.

Hyperparameter | Value
Optimizer | AdamW
Learning Rate | 2.5 × 10−5
Batch Size | 16
LSTM Hidden Size | 256
LSTM Layers | 2
Dropout Rate | 0.3
Weight Decay | 0.01
Table 4. Performance of the proposed Hybrid-RoBERTa-BiLSTM-Attention model compared to the RoBERTa-base (Baseline) on the test set.

Model | Accuracy | Precision | Recall | F1-score | AUC
RoBERTa-base (Baseline) | 92.8% | 94.2% | 94.3% | 94.3% | 97.6%
Hybrid-RoBERTa-BiLSTM | 94.8% | 96.4% | 95.3% | 95.9% | 98.5%
Table 5. Detailed classification report for the proposed Hybrid-RoBERTa-BiLSTM-Attention model on the test set.

Class | Precision | Recall | F1-score | Support
Non-Cyberbullying | 0.92 | 0.94 | 0.93 | 1480
Cyberbullying | 0.96 | 0.95 | 0.96 | 2508
Accuracy | | | 0.95 | 3988
Macro Avg | 0.94 | 0.95 | 0.94 | 3988
Weighted Avg | 0.95 | 0.95 | 0.95 | 3988
Table 6. Comparison of model complexity and inference time.

No | Model | Parameters (M) | Inference Time (ms)
1 | DistilBERT | 66 | 10
2 | BERT-base | 110 | 22
3 | XLNet-large | 340 | 48
4 | ELECTRA-base | 110 | 18
5 | ALBERT-xxlarge | 223 | 62
6 | DeBERTa-v3-small | 141 | 30
7 | RoBERTa-base | 125 | 25
8 | RoBERTa-large | 355 | 55
9 | Hybrid-RoBERTa-BiLSTM | 130 | 28
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
